Science.gov

Sample records for 1-db compression point

  1. Design Point for a Spheromak Compression Experiment

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Romero-Talamas, Carlos A.; O'Bryan, John; Stuber, James; Darpa Spheromak Team

    2015-11-01

    Two principal issues for the spheromak concept remain to be addressed experimentally: formation efficiency and confinement scaling. We are therefore developing a design point for a spheromak experiment that will be heated by adiabatic compression, utilizing the CORSICA and NIMROD codes as well as analytic modeling with target parameters R_initial = 0.3 m, R_final = 0.1 m, T_initial = 0.2 keV, T_final = 1.8 keV, n_initial = 10^19 m^-3 and n_final = 10^21 m^-3, with a radial convergence of C = 3. This low convergence differentiates the concept from MTF with C = 10 or more, since the plasma will be held in equilibrium throughout compression. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and design of the capacitor bank needed to both form the target plasma and compress it. We specify target parameters for the compression in terms of plasma beta, formation efficiency and energy confinement. Work performed under DARPA grant N66001-14-1-4044.

  2. Ischemic Compression After Trigger Point Injection Affect the Treatment of Myofascial Trigger Points

    PubMed Central

    Kim, Soo A; Oh, Ki Young; Choi, Won Hyuck

    2013-01-01

    Objective To investigate the effects of trigger point injection with or without ischemic compression in the treatment of myofascial trigger points in the upper trapezius muscle. Methods Sixty patients with active myofascial trigger points in the upper trapezius muscle were randomly divided into three groups: group 1 (n=20) received only trigger point injections, group 2 (n=20) received trigger point injections with 30 seconds of ischemic compression, and group 3 (n=20) received trigger point injections with 60 seconds of ischemic compression. The visual analogue scale, pressure pain threshold, and range of motion of the neck were assessed before treatment, immediately after treatment, and 1 week after treatment. Korean Neck Disability Indexes were assessed before treatment and 1 week after treatment. Results We found a significant improvement in all assessment parameters (p<0.05) in all groups. However, the groups that received trigger point injections with ischemic compression showed significantly greater improvement than the group that received trigger point injections alone, and there were no significant differences between the 30-second and 60-second ischemic compression groups. Conclusion This study demonstrated the effectiveness of ischemic compression for myofascial trigger points. Trigger point injection combined with ischemic compression treats myofascial trigger points in the upper trapezius muscle more effectively than trigger point injection alone, but the duration of ischemic compression did not affect the treatment outcome. PMID:24020035

  3. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
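
    The fixed-rate idea can be sketched with a deliberately simplified example (not ZFP's actual transform or bit-stream format): each 4x4 block is decorrelated with an orthonormal transform and its coefficients are quantized to a fixed, user-chosen bit budget, so every block occupies the same amount of storage. The Hadamard transform and the scaling rule below are illustrative assumptions only.

    ```python
    import numpy as np

    def hadamard4():
        # 4x4 orthonormal Hadamard matrix (stand-in for ZFP's lifted block transform).
        h2 = np.array([[1, 1], [1, -1]], dtype=float)
        return np.kron(h2, h2) / 2.0          # rows are orthonormal

    def compress_block(block, bits_per_value):
        """Toy fixed-rate coder: transform a 4x4 block and quantize every
        coefficient to `bits_per_value` bits relative to the block maximum."""
        H = hadamard4()
        coeff = H @ block @ H.T               # separable 2-D transform
        scale = float(np.max(np.abs(coeff)))
        if scale == 0.0:
            scale = 1.0
        levels = 2 ** (bits_per_value - 1) - 1
        q = np.round(coeff / scale * levels).astype(np.int32)   # fits in bits_per_value bits
        return q, scale                       # fixed-size payload per block

    def decompress_block(q, scale, bits_per_value):
        H = hadamard4()
        levels = 2 ** (bits_per_value - 1) - 1
        coeff = q.astype(float) / levels * scale
        return H.T @ coeff @ H                # inverse of the orthonormal transform

    rng = np.random.default_rng(0)
    block = np.cumsum(rng.normal(size=(4, 4)), axis=1)          # smooth-ish test data
    q, s = compress_block(block, bits_per_value=8)
    print("max error:", np.max(np.abs(block - decompress_block(q, s, 8))))
    ```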

  4. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation. PMID:26356981

  5. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data. PMID:17080858
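
    A minimal sketch of the predictive-coding idea behind such schemes: each double is predicted from its predecessor and only the XOR of the IEEE-754 bit patterns is kept, which for smooth data has many leading zero bits that an entropy coder (omitted here) can store compactly. The previous-value predictor is an illustrative stand-in for the paper's plug-in, data-dependent predictors.

    ```python
    import numpy as np

    def xor_residuals(values):
        """Predict each double by the previous one and XOR the 64-bit patterns."""
        bits = np.ascontiguousarray(values, dtype=np.float64).view(np.uint64)
        prev = np.concatenate(([np.uint64(0)], bits[:-1]))
        return bits ^ prev

    def leading_zeros(residuals):
        # Leading zero bits per residual: a rough proxy for compressibility.
        out = np.empty(len(residuals), dtype=np.int32)
        for i, r in enumerate(residuals):
            out[i] = 64 if r == 0 else 64 - int(r).bit_length()
        return out

    t = np.linspace(0.0, 1.0, 1000)
    data = np.sin(2 * np.pi * t)                       # smooth, highly predictable signal
    res = xor_residuals(data)
    print("mean leading zeros:", leading_zeros(res).mean())

    recovered = np.bitwise_xor.accumulate(res).view(np.float64)
    assert np.array_equal(recovered, data)             # the scheme is exactly lossless
    ```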

  6. Fixed-rate compressed floating-point arrays

    SciTech Connect

    Lindstrom, P.

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.
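
    The compressed-array interface can be pictured with a toy Python analogue (not ZFP's API): values live in fixed-size blocks held at a fixed reduced precision (float16 here, standing in for a real fixed-rate codec), and element access goes through a one-block write-back cache of uncompressed data.

    ```python
    import numpy as np

    class ToyCompressedArray3D:
        """Illustrative stand-in for a fixed-rate compressed array: values are kept
        in 4x4x4 blocks stored at a fixed 16 bits per value (float16), with a
        one-block write-back cache of uncompressed data."""

        def __init__(self, shape):
            self.shape = shape
            self.blocks = {}          # block index -> float16 block
            self.cache_key = None     # currently decompressed block
            self.cache = None
            self.dirty = False

        def _load(self, key):
            if key != self.cache_key:
                self._flush()
                stored = self.blocks.get(key)
                self.cache = (np.zeros((4, 4, 4), np.float32) if stored is None
                              else stored.astype(np.float32))
                self.cache_key = key

        def _flush(self):
            if self.dirty and self.cache_key is not None:
                self.blocks[self.cache_key] = self.cache.astype(np.float16)
                self.dirty = False

        def __getitem__(self, idx):
            i, j, k = idx
            self._load((i // 4, j // 4, k // 4))
            return float(self.cache[i % 4, j % 4, k % 4])

        def __setitem__(self, idx, value):
            i, j, k = idx
            self._load((i // 4, j // 4, k // 4))
            self.cache[i % 4, j % 4, k % 4] = value
            self.dirty = True

    a = ToyCompressedArray3D((8, 8, 8))
    a[1, 2, 3] = 3.14159
    _ = a[5, 5, 5]        # touching another block evicts and "compresses" the first
    print(a[1, 2, 3])     # ~3.141, read back at reduced (16-bit) precision
    ```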

  7. Fixed-rate compressed floating-point arrays

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.

  8. Parametric temporal compression of infrared imagery sequences containing a slow-moving point target.

    PubMed

    Huber-Shalem, Revital; Hadar, Ofer; Rotman, Stanley R; Huber-Lerner, Merav

    2016-02-10

    Infrared (IR) imagery sequences are commonly used for detecting moving targets in the presence of evolving cloud clutter or background noise. This research focuses on slow-moving point targets that are less than one pixel in size, such as aircraft at long range from a sensor. Since transmitting IR imagery sequences to a base unit or storing them consumes considerable time and resources, a compression method that maintains the point target detection capabilities is highly desirable. In this work, we introduce a new parametric temporal compression that incorporates Gaussian and polynomial fits. We then proceed to spatial compression by applying the lowest possible number of bits to represent each of the parameters extracted by the temporal compression, which is followed by bit encoding to achieve an end-to-end compression process of the sequence for data storage and transmission. We evaluate the proposed compression method using the variance estimation ratio score (VERS), which is a signal-to-noise ratio (SNR)-based measure for point target detection that scores each pixel and yields an SNR scores image. A high pixel score indicates that a target is suspected to traverse the pixel. From this score image we calculate the movie scores, which are found to be close to those of the original sequences. Furthermore, we present a new algorithm for automatic detection of the target tracks. This algorithm extracts the target location from the SNR scores image, which is acquired during the evaluation process, using the Hough transform. This algorithm yields similar detection probabilities (PD) and false alarm probabilities (PFA) for the compressed and the original sequences. The parameters of the new parametric temporal compression successfully differentiate the targets from the background, yielding high PDs (above 83%) with low PFAs (below 0.043%) without the need to calculate pixel scores or to apply automatic detection of the target tracks.
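
    The temporal-compression step amounts to replacing each pixel's temporal profile by a handful of fitted parameters. The sketch below uses a Gaussian pulse on a linear background and scipy's curve_fit as an illustrative stand-in; the paper's exact model and fitting procedure are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pixel_model(t, a, t0, sigma, c0, c1):
        # Gaussian pulse (a possible target transit) on a linear background.
        return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + c0 + c1 * t

    def compress_pixel(t, intensity):
        """Replace a pixel's full temporal profile by 5 fitted parameters."""
        p0 = [float(np.ptp(intensity)), t[np.argmax(intensity)], 3.0,
              float(intensity.min()), 0.0]
        params, _ = curve_fit(pixel_model, t, intensity, p0=p0, maxfev=5000)
        return params                      # 5 numbers instead of len(t) samples

    t = np.arange(100, dtype=float)
    clean = pixel_model(t, 4.0, 42.0, 5.0, 10.0, 0.02)
    noisy = clean + np.random.default_rng(1).normal(0, 0.3, t.size)
    params = compress_pixel(t, noisy)
    print("compression factor:", t.size / params.size, "fitted centre:", params[1])
    ```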

  9. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
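
    The quantization step can be sketched as follows: pixel values are scaled, a subtractive dither is added before rounding, and the same dither is subtracted on restoration. The scale choice (a quarter of the noise sigma) and the dither stream below are generic illustrations rather than the exact fpack implementation.

    ```python
    import numpy as np

    def quantize_with_dither(image, scale, seed=0):
        """Map float pixels to integers: q = round(v/scale + r), r ~ U(-0.5, 0.5).
        The same dither stream must be regenerated (same seed) to dequantize."""
        r = np.random.default_rng(seed).uniform(-0.5, 0.5, image.shape)
        return np.round(image / scale + r).astype(np.int32)

    def dequantize(q, scale, seed=0):
        r = np.random.default_rng(seed).uniform(-0.5, 0.5, q.shape)
        return (q - r) * scale

    rng = np.random.default_rng(42)
    image = 1000.0 + rng.normal(0.0, 5.0, (256, 256))   # background + Gaussian noise
    scale = 5.0 / 4.0                                   # quantize at noise_sigma / 4
    q = quantize_with_dither(image, scale)              # integers ready for Rice coding
    restored = dequantize(q, scale)
    print("rms quantization error:", np.sqrt(np.mean((restored - image) ** 2)))
    ```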

  10. Compression After Impact Testing of Sandwich Structures Using the Four Point Bend Test

    NASA Technical Reports Server (NTRS)

    Nettles, Alan T.; Gregory, Elizabeth; Jackson, Justin; Kenworthy, Devon

    2008-01-01

    For many composite laminated structures, the design is driven by data obtained from Compression after Impact (CAI) testing. There currently is no standard for CAI testing of sandwich structures although there is one for solid laminates of a certain thickness and lay-up configuration. Most sandwich CAI testing has followed the basic technique of this standard where the loaded ends are precision machined and placed between two platens and compressed until failure. If little or no damage is present during the compression tests, the loaded ends may need to be potted to prevent end brooming. By putting a sandwich beam in a four point bend configuration, the region between the inner supports is put under a compressive load and a sandwich laminate with damage can be tested in this manner without the need for precision machining. Also, specimens with no damage can be taken to failure so direct comparisons between damaged and undamaged strength can be made. Data is presented that demonstrates the four point bend CAI test and is compared with end loaded compression tests of the same sandwich structure.

  11. CSR Interaction at the Cross-Over of the Full Compression Point

    SciTech Connect

    Rui Li

    2005-05-01

    In recent commissioning of the 10 kW FEL at Jefferson Lab, as one varies the energy chirp of the electron bunches at the entrance of the chicane to make the bunch more and more compressed at the exit of the chicane, a sudden increase in the energy spread is observed [1] at the crossover of the full compression point. This phenomenon is accompanied by a simultaneous, significant increase of the THz radiation from the electron beam. A similar observation was made earlier in the CTF II CSR experiment at CERN [2]. For example, for 5 nC bunch charge, ''the mean momentum spread increased by a factor of 4 at full compression with respect to the initial spread, and decreased to a factor of 3 larger than the initial spread at overcompression''. There is also a sudden drop of mean momentum at the full compression, along with a sudden increase in the horizontal emittance (see Fig. 5 of [2]). As a first step to understand this phenomenon, in this paper, we analyze the effective longitudinal CSR force using our recent formulation of CSR dynamics [3], and show there is a sudden increase in the magnitude of the effective longitudinal CSR force at the cross-over of the full compression point. A numerical example is given for an LCLS type chicane. The physical picture of this sudden increase is also discussed.

  12. Graph-Based Compression of Dynamic 3D Point Cloud Sequences.

    PubMed

    Thanou, Dorina; Chou, Philip A; Frossard, Pascal

    2016-04-01

    This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way.
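
    A greatly simplified sketch of the two ingredients described above: a k-nearest-neighbour graph over the points of a frame, and per-vertex motion vectors obtained by nearest-neighbour matching between frames. The matching here is a crude stand-in for the paper's spectral graph wavelet descriptors and graph-regularized motion field.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def knn_graph(points, k=6):
        """Return (neighbour indices, distances) defining a kNN graph on the frame."""
        tree = cKDTree(points)
        dist, idx = tree.query(points, k=k + 1)   # first neighbour is the point itself
        return idx[:, 1:], dist[:, 1:]

    def crude_motion(frame_a, frame_b):
        """Nearest-neighbour motion vectors from frame_a to frame_b (illustrative only)."""
        tree = cKDTree(frame_b)
        _, idx = tree.query(frame_a, k=1)
        return frame_b[idx] - frame_a             # one 3-D motion vector per point

    rng = np.random.default_rng(0)
    frame_a = rng.uniform(0, 1, (500, 3))
    frame_b = frame_a + np.array([0.01, 0.0, 0.005])   # rigid shift between frames
    nbrs, _ = knn_graph(frame_a)
    motion = crude_motion(frame_a, frame_b)
    print("estimated mean motion:", motion.mean(axis=0))
    ```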

  13. High-Density Fixed Point for Radially Compressed Single-Component Plasmas

    SciTech Connect

    Danielson, J. R.; Surko, C. M.; O'Neil, T. M.

    2007-09-28

    Rotating electric fields are used to compress electron plasmas confined in a Penning-Malmberg trap. Bifurcation and hysteresis are observed between low-density and high-density steady states as a function of the applied electric field amplitude and frequency. These observations are explained in terms of torque-balanced fixed points using a simple model of the torques on the plasma. Perturbation experiments near the high-density fixed point are used to determine the magnitude, frequency, and voltage dependence of the drive torque. The broader implications of these results are discussed.

  14. Exact relation with two-point correlation functions and phenomenological approach for compressible magnetohydrodynamic turbulence.

    PubMed

    Banerjee, Supratik; Galtier, Sébastien

    2013-01-01

    Compressible isothermal magnetohydrodynamic turbulence is analyzed under the assumption of statistical homogeneity and in the asymptotic limit of large kinetic and magnetic Reynolds numbers. Following Kolmogorov we derive an exact relation for some two-point correlation functions which generalizes the expression recently found for hydrodynamics. We show that the magnetic field brings new source and flux terms into the dynamics which may act on the inertial range similarly as a source or a sink for the mean energy transfer rate. The introduction of a uniform magnetic field simplifies significantly the exact relation for which a simple phenomenology may be given. A prediction for axisymmetric energy spectra is eventually proposed.

  15. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.
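
    The point-implicit idea can be illustrated on a much simpler problem than the paper's three-dimensional Navier-Stokes equations: implicit first-order upwind advection of a scalar, solved by local relaxation sweeps instead of a global tridiagonal solve. The sketch below only demonstrates why such relaxation remains stable at Courant numbers far beyond the explicit limit.

    ```python
    import numpy as np

    def point_implicit_step(u, cfl, sweeps=2):
        """One time step of implicit first-order upwind advection solved by
        point-implicit (Gauss-Seidel-like) relaxation sweeps. One ascending sweep
        already solves this lower-triangular system exactly; the extra sweep just
        illustrates the relaxation viewpoint. Stable for arbitrarily large CFL."""
        u_old = u.copy()
        for _ in range(sweeps):
            for i in range(1, len(u)):             # sweep in the upwind direction
                u[i] = (u_old[i] + cfl * u[i - 1]) / (1.0 + cfl)
        return u

    x = np.linspace(0.0, 1.0, 201)
    u = np.exp(-200.0 * (x - 0.2) ** 2)            # initial Gaussian pulse
    for _ in range(10):
        u = point_implicit_step(u, cfl=5.0)        # far beyond the explicit limit
    print("peak location after advection:", x[np.argmax(u)])
    ```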

  16. Comparison of ring compression testing to three point bend testing for unirradiated ZIRLO cladding

    SciTech Connect

    None, None

    2015-04-01

    Safe shipment and storage of nuclear reactor discharged fuel requires an understanding of how the fuel may perform under the various conditions that can be encountered. One specific focus of concern is performance during a shipment drop accident. Tests at Savannah River National Laboratory (SRNL) are being performed to characterize the properties of fuel clad relative to a mechanical accident condition such as a container drop. Unirradiated ZIRLO tubing samples have been charged with a range of hydride levels to simulate actual fuel rod levels. Samples of the hydrogen charged tubes were exposed to a radial hydride growth treatment (RHGT) consisting of heating to 400°C, applying initial hoop stresses of 90 to 170 MPa with controlled cooling and producing hydride precipitates. Initial samples have been tested using both a) the ring compression test (RCT), which is shown to be sensitive to radial hydrides, and b) three-point bend tests, which are less sensitive to radial hydride effects. Hydrides are generated in zirconium-based fuel cladding as a result of coolant (water) oxidation of the clad, hydrogen release, and a portion of the released (nascent) hydrogen being absorbed into the clad and eventually exceeding the hydrogen solubility limit. The orientation of the hydrides relative to the subsequent normal and accident strains has a significant impact on the failure susceptibility. In this study the impacts of stress, temperature and hydrogen levels are evaluated in reference to the propensity for hydride reorientation from the circumferential to the radial orientation. In addition the effects of radial hydrides on the Quasi Ductile Brittle Transition Temperature (DBTT) were measured. The results suggest that a) the severity of the radial hydride impact is related to the hydrogen level-peak temperature combination (for example at a peak drying temperature of 400°C; 800 PPM hydrogen has less of an impact / less radial hydride fraction than 200 PPM hydrogen for the same thermal

  17. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    SciTech Connect

    York, A.R. II

    1997-07-01

    The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
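
    The basic MPM cycle described above (particle-to-grid transfer, grid momentum update, grid-to-particle transfer) can be sketched in one dimension for a vibrating elastic bar; the membrane and compressible-fluid extensions developed in the thesis are not included, and all material parameters below are illustrative.

    ```python
    import numpy as np

    # Minimal 1-D material point method: an elastic bar, linear grid shape functions.
    E, rho = 100.0, 1.0                 # Young's modulus, density (illustrative)
    L, ncell = 1.0, 20                  # bar length, grid cells
    dx = L / ncell
    nodes = np.arange(ncell + 1) * dx
    ppc = 2                             # particles per cell
    xp = (np.arange(ncell * ppc) + 0.5) * dx / ppc
    vp = 0.1 * np.sin(np.pi * xp / L)   # initial velocity field (first vibration mode)
    Vp = np.full(xp.shape, dx / ppc)    # particle volumes
    mp = rho * Vp
    stress = np.zeros_like(xp)
    dt = 0.2 * dx / np.sqrt(E / rho)    # fraction of the CFL-limited time step

    def shape(xpart):
        """Linear hat-function weights and gradients for the cell containing xpart."""
        i = np.clip((xpart / dx).astype(int), 0, ncell - 1)
        xi = (xpart - nodes[i]) / dx
        w = np.stack([1.0 - xi, xi])                  # weights for nodes i and i+1
        dw = np.stack([-np.ones_like(xi), np.ones_like(xi)]) / dx
        return i, w, dw

    for step in range(200):
        i, w, dw = shape(xp)
        mg = np.zeros(ncell + 1); pg = np.zeros(ncell + 1); fg = np.zeros(ncell + 1)
        for a in range(2):                            # particle-to-grid transfers
            np.add.at(mg, i + a, w[a] * mp)
            np.add.at(pg, i + a, w[a] * mp * vp)
            np.add.at(fg, i + a, -Vp * stress * dw[a])
        pg += dt * fg                                 # grid momentum update
        pg[0] = pg[-1] = 0.0                          # fixed ends of the bar
        vg = np.divide(pg, mg, out=np.zeros_like(pg), where=mg > 1e-12)
        ag = np.divide(dt * fg, mg, out=np.zeros_like(fg), where=mg > 1e-12)
        ag[0] = ag[-1] = 0.0
        # Grid-to-particle: update particle velocity, position, strain and stress.
        vp += w[0] * ag[i] + w[1] * ag[i + 1]
        xp += dt * (w[0] * vg[i] + w[1] * vg[i + 1])
        dvdx = dw[0] * vg[i] + dw[1] * vg[i + 1]
        stress += dt * E * dvdx                       # linear elastic constitutive update

    print("max particle speed after 200 steps:", np.abs(vp).max())
    ```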

  18. Evolution of Skin Temperature after the Application of Compressive Forces on Tendon, Muscle and Myofascial Trigger Point.

    PubMed

    Magalhães, Marina Figueiredo; Dibai-Filho, Almir Vieira; de Oliveira Guirro, Elaine Caldeira; Girasol, Carlos Eduardo; de Oliveira, Alessandra Kelly; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus

    2015-01-01

    Because some assessment and diagnosis methods require palpation or the application of certain forces on the skin, which affects the structures beneath, it is important to define the possible influences of this physical contact on skin temperature. Thus, the aim of the present study is to determine the ideal time for performing thermographic examination after palpation based on the assessment of skin temperature evolution. This randomized crossover study was carried out with 15 computer-user volunteers of both genders, between 18 and 45 years of age, who were submitted to compressive forces of 0, 1, 2 and 3 kg/cm^2 for 30 seconds with a washout period of 48 hours using a portable digital dynamometer. Compressive forces were applied on the following spots on the dominant upper limb: myofascial trigger point in the levator scapulae, biceps brachii muscle and palmaris longus tendon. Volunteers were examined by means of infrared thermography before and after the application of compressive forces (15, 30, 45 and 60 minutes). In most comparisons made over time, a significant decrease was observed 30, 45 and 60 minutes after the application of compressive forces (p < 0.05) on the palmaris longus tendon and biceps brachii muscle. However, no difference was observed when comparing the different compressive forces (p > 0.05). In conclusion, infrared thermography can be used after assessment or diagnosis methods focused on the application of forces on tendons and muscles, provided the procedure is performed 15 minutes after contact with the skin. Regarding the myofascial trigger point, the thermographic examination can be performed within 60 minutes after contact with the skin.

  19. Evolution of Skin Temperature after the Application of Compressive Forces on Tendon, Muscle and Myofascial Trigger Point

    PubMed Central

    Magalhães, Marina Figueiredo; Dibai-Filho, Almir Vieira; de Oliveira Guirro, Elaine Caldeira; Girasol, Carlos Eduardo; de Oliveira, Alessandra Kelly; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus

    2015-01-01

    Because some assessment and diagnosis methods require palpation or the application of certain forces on the skin, which affects the structures beneath, it is important to define the possible influences of this physical contact on skin temperature. Thus, the aim of the present study is to determine the ideal time for performing thermographic examination after palpation based on the assessment of skin temperature evolution. This randomized crossover study was carried out with 15 computer-user volunteers of both genders, between 18 and 45 years of age, who were submitted to compressive forces of 0, 1, 2 and 3 kg/cm^2 for 30 seconds with a washout period of 48 hours using a portable digital dynamometer. Compressive forces were applied on the following spots on the dominant upper limb: myofascial trigger point in the levator scapulae, biceps brachii muscle and palmaris longus tendon. Volunteers were examined by means of infrared thermography before and after the application of compressive forces (15, 30, 45 and 60 minutes). In most comparisons made over time, a significant decrease was observed 30, 45 and 60 minutes after the application of compressive forces (p < 0.05) on the palmaris longus tendon and biceps brachii muscle. However, no difference was observed when comparing the different compressive forces (p > 0.05). In conclusion, infrared thermography can be used after assessment or diagnosis methods focused on the application of forces on tendons and muscles, provided the procedure is performed 15 minutes after contact with the skin. Regarding the myofascial trigger point, the thermographic examination can be performed within 60 minutes after contact with the skin. PMID:26070073

  20. Wear Properties of UHMWPE Orientedunder Uniaxial Compression during the Molten State and at Lower Temperatures than the Melting Point

    NASA Astrophysics Data System (ADS)

    Ohta, Makoto; Hyon, Suong-Hyu; Kang, Yu-Bong; Oka, Masanori; Tsutsumi, Sadami; Murakami, Syozo; Kohjiya, Shinzo

    Ultra high molecular weight polyethylene (UHMWPE) has been used as a bearing material for artificial joints since the 1960's, and experience has shown that its wear is one of the limiting factors for long term use in such prosthetic implants. For improving wear resistance, we studied the influence of uniaxial compression on molecule orientation obtained by processing UHMWPE above (Sample A) and below (Sample B) its melting point, respectively. We then compared the wear properties of both UHMWPE samples. Using a slightly cross-linked UHMWPE, sample A was compressed during the molten state. Sample B UHMWPE was compressed at a temperature below the melting point. X-ray refraction tests revealed the (200) crystalline plane of Sample A and B to be oriented parallel to the compression surface. Further tests showed the heat of fusion and the density of Sample A to be higher than Sample B. The storage modulus of Sample A was always higher than in the original untreated UHMWPE (Sample C), while in Sample B it rapidly collapsed with increasing temperature. The αc-peak of Sample A was shifted to about 5°C higher, while the αc-peak of Sample B was shifted to the lower temperature side and the β-peak disappeared, compared with Sample C. Reciprocating wear tests carried out over 2×10^6 cycles, showed that the wear resistance of the sample A was enhanced by a factor of 10 when compared to Sample C. UHMWPE compressed during the molten state exhibits superior wear characteristics and has the potential to improve implant technology for artificial joints, potentially providing a longer lifetime.

  1. 1DB, a one-dimensional diffusion code for nuclear reactor analysis

    SciTech Connect

    Little, W.W. Jr.

    1991-09-01

    1DB is a multipurpose, one-dimensional (plane, cylinder, sphere) diffusion theory code for use in reactor analysis. The code is designed to do the following: To compute k_eff and perform criticality searches on time absorption, reactor composition, reactor dimensions, and buckling by means of either a flux or an adjoint model; to compute collapsed microscopic and macroscopic cross sections averaged over the spectrum in any specified zone; to compute resonance-shielded cross sections using data in the shielding factor format; and to compute isotopic burnup using decay chains specified by the user. All programming is in FORTRAN. Because variable dimensioning is employed, no simple restrictions on problem complexity can be stated. The number of spatial mesh points, energy groups, upscattering terms, etc. is limited only by the available memory. The source file contains about 3000 cards. 4 refs.
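
    A toy analogue of the k_eff calculation such a code performs: a one-group, slab-geometry diffusion eigenvalue problem discretized by finite differences and solved by power iteration. The cross sections are invented for illustration, and none of 1DB's multigroup, search, or burnup machinery is represented.

    ```python
    import numpy as np

    # 1-D, one-group diffusion eigenvalue solve: -D phi'' + sig_a phi = (1/k) nu_sig_f phi,
    # zero-flux boundaries, finite differences, power iteration for k_eff.
    D, sig_a, nu_sig_f = 1.0, 0.07, 0.08   # cm, 1/cm, 1/cm (illustrative values)
    L, n = 100.0, 200                      # slab width (cm), interior mesh points
    h = L / (n + 1)

    main = np.full(n, 2.0 * D / h**2 + sig_a)
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # loss operator

    phi = np.ones(n)
    k = 1.0
    for _ in range(200):                   # power iteration on the fission source
        src = nu_sig_f * phi / k
        phi_new = np.linalg.solve(A, src)
        k = k * (nu_sig_f * phi_new).sum() / (nu_sig_f * phi).sum()
        phi = phi_new
    print("k_eff =", round(k, 5))
    ```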

  2. A Genuine Jahn-Teller System with Compressed Geometry and Quantum Effects Originating from Zero-Point Motion.

    PubMed

    Aramburu, José Antonio; García-Fernández, Pablo; García-Lastra, Juan María; Moreno, Miguel

    2016-07-18

    First-principles calculations together with analysis of the experimental data found for 3d^9 and 3d^7 ions in cubic oxides proved that the center found in irradiated CaO:Ni^2+ corresponds to Ni^+ under a static Jahn-Teller effect displaying a compressed equilibrium geometry. It was also shown that the anomalous positive g∥ shift (g∥ - g_0 = 0.065) measured at T = 20 K obeys the superposition of the |3z^2 - r^2⟩ and |x^2 - y^2⟩ states driven by quantum effects associated with the zero-point motion, a mechanism first put forward by O'Brien for static Jahn-Teller systems and later extended by Ham to the dynamic Jahn-Teller case. To our knowledge, this is the first genuine Jahn-Teller system (i.e., one in which exact degeneracy exists at the high-symmetry configuration) exhibiting a compressed equilibrium geometry for which large quantum effects allow experimental observation of the effect predicted by O'Brien. Analysis of the calculated energy barriers for different Jahn-Teller systems allowed us to explain the origin of the compressed geometry observed for CaO:Ni^+. PMID:27028895

  3. A Genuine Jahn-Teller System with Compressed Geometry and Quantum Effects Originating from Zero-Point Motion.

    PubMed

    Aramburu, José Antonio; García-Fernández, Pablo; García-Lastra, Juan María; Moreno, Miguel

    2016-07-18

    First-principles calculations together with analysis of the experimental data found for 3d^9 and 3d^7 ions in cubic oxides proved that the center found in irradiated CaO:Ni^2+ corresponds to Ni^+ under a static Jahn-Teller effect displaying a compressed equilibrium geometry. It was also shown that the anomalous positive g∥ shift (g∥ - g_0 = 0.065) measured at T = 20 K obeys the superposition of the |3z^2 - r^2⟩ and |x^2 - y^2⟩ states driven by quantum effects associated with the zero-point motion, a mechanism first put forward by O'Brien for static Jahn-Teller systems and later extended by Ham to the dynamic Jahn-Teller case. To our knowledge, this is the first genuine Jahn-Teller system (i.e., one in which exact degeneracy exists at the high-symmetry configuration) exhibiting a compressed equilibrium geometry for which large quantum effects allow experimental observation of the effect predicted by O'Brien. Analysis of the calculated energy barriers for different Jahn-Teller systems allowed us to explain the origin of the compressed geometry observed for CaO:Ni^+.

  4. Changes in blood flow and cellular metabolism at a myofascial trigger point with trigger point release (ischemic compression): a proof-of-principle pilot study

    PubMed Central

    Moraska, Albert F.; Hickner, Robert C.; Kohrt, Wendy M.; Brewer, Alan

    2012-01-01

    Objective To demonstrate proof-of-principle measurement of physiological change within an active myofascial trigger point (MTrP) undergoing trigger point release (ischemic compression). Design Interstitial fluid was sampled continuously at a trigger point before and after intervention. Setting A biomedical research clinic at a university hospital. Participants Two subjects from a pain clinic presenting with chronic headache pain. Interventions A single microdialysis catheter was inserted into an active MTrP of the upper trapezius to allow for continuous sampling of interstitial fluid before and after application of trigger point therapy by a massage therapist. Main Outcome Measures Procedural success, pain tolerance, feasibility of intervention during sample collection, determination of physiologically relevant values for local blood flow, as well as glucose and lactate concentrations. Results Both patients tolerated the microdialysis probe insertion into the MTrP and treatment intervention without complication. Glucose and lactate concentrations were measured in the physiological range. Following intervention, a sustained increase in lactate was noted for both subjects. Conclusions Identifying physiological constituents of MTrPs following intervention is an important step toward understanding pathophysiology and resolution of myofascial pain. The present study forwards that aim by showing proof-of-concept that collection of interstitial fluid from an MTrP before and after intervention can be accomplished using microdialysis, thus providing methodological insight into treatment mechanism and pain resolution. Of the biomarkers measured in this study, lactate may be the most relevant for detection and treatment of abnormalities in the MTrP. PMID:22975226

  5. Evidence for the Use of Ischemic Compression and Dry Needling in the Management of Trigger Points of the Upper Trapezius in Patients with Neck Pain: A Systematic Review.

    PubMed

    Cagnie, Barbara; Castelein, Birgit; Pollie, Flore; Steelant, Lieselotte; Verhoeyen, Hanne; Cools, Ann

    2015-07-01

    The aim of this review was to describe the effects of ischemic compression and dry needling on trigger points in the upper trapezius muscle in patients with neck pain and compare these two interventions with other therapeutic interventions aiming to inactivate trigger points. Both PubMed and Web of Science were searched for randomized controlled trials using different key word combinations related to myofascial neck pain and therapeutic interventions. Four main outcome parameters were evaluated on short and medium term: pain, range of motion, functionality, and quality-of-life, including depression. Fifteen randomized controlled trials were included in this systematic review. There is moderate evidence for ischemic compression and strong evidence for dry needling to have a positive effect on pain intensity. This pain decrease is greater compared with active range of motion exercises (ischemic compression) and no or placebo intervention (ischemic compression and dry needling) but similar to other therapeutic approaches. There is moderate evidence that both ischemic compression and dry needling increase side-bending range of motion, with similar effects compared with lidocaine injection. There is weak evidence regarding its effects on functionality and quality-of-life. On the basis of this systematic review, ischemic compression and dry needling can both be recommended in the treatment of neck pain patients with trigger points in the upper trapezius muscle. Additional research with high-quality study designs is needed to develop more conclusive evidence.

  6. Evidence for the Use of Ischemic Compression and Dry Needling in the Management of Trigger Points of the Upper Trapezius in Patients with Neck Pain: A Systematic Review.

    PubMed

    Cagnie, Barbara; Castelein, Birgit; Pollie, Flore; Steelant, Lieselotte; Verhoeyen, Hanne; Cools, Ann

    2015-07-01

    The aim of this review was to describe the effects of ischemic compression and dry needling on trigger points in the upper trapezius muscle in patients with neck pain and compare these two interventions with other therapeutic interventions aiming to inactivate trigger points. Both PubMed and Web of Science were searched for randomized controlled trials using different key word combinations related to myofascial neck pain and therapeutic interventions. Four main outcome parameters were evaluated on short and medium term: pain, range of motion, functionality, and quality-of-life, including depression. Fifteen randomized controlled trials were included in this systematic review. There is moderate evidence for ischemic compression and strong evidence for dry needling to have a positive effect on pain intensity. This pain decrease is greater compared with active range of motion exercises (ischemic compression) and no or placebo intervention (ischemic compression and dry needling) but similar to other therapeutic approaches. There is moderate evidence that both ischemic compression and dry needling increase side-bending range of motion, with similar effects compared with lidocaine injection. There is weak evidence regarding its effects on functionality and quality-of-life. On the basis of this systematic review, ischemic compression and dry needling can both be recommended in the treatment of neck pain patients with trigger points in the upper trapezius muscle. Additional research with high-quality study designs is needed to develop more conclusive evidence. PMID:25768071

  7. euL1db: the European database of L1HS retrotransposon insertions in humans.

    PubMed

    Mir, Ashfaq A; Philippe, Claude; Cristofari, Gaël

    2015-01-01

    Retrotransposons account for almost half of our genome. They are mobile genetic elements, also known as jumping genes, but only the L1HS subfamily of Long Interspersed Nuclear Elements (LINEs) has retained the ability to jump autonomously in modern humans. Their mobilization in the germline, but also in some somatic tissues, contributes to human genetic diversity and to diseases such as cancer. Here, we present euL1db, the European database of L1HS retrotransposon insertions in humans (available at http://euL1db.unice.fr). euL1db provides a curated and comprehensive summary of L1HS insertion polymorphisms identified in healthy or pathological human samples and published in peer-reviewed journals. A key feature of euL1db is its sample-wise organization: L1HS insertion polymorphisms are connected to samples, individuals, families and clinical conditions. The current version of euL1db centralizes results obtained in 32 studies. It contains >900 samples, >140,000 sample-wise insertions and almost 9000 distinct merged insertions. euL1db will help in understanding the link between L1 retrotransposon insertion polymorphisms and phenotype or disease.

  8. Effect of cervical mobilization and ischemic compression therapy on contralateral cervical side flexion and pressure pain threshold in latent upper trapezius trigger points.

    PubMed

    Ganesh, G Shankar; Singh, Harshita; Mushtaq, Shagoofa; Mohanty, Patitapaban; Pattnaik, Monalisa

    2016-07-01

    Studies have shown a clinical relationship between trigger points and joint impairments. However, a cause-and-effect relationship between muscle and joint dysfunctions in trigger points has not been established. The purpose of this study was to investigate the effects of mobilization and ischemic compression therapy on cervical range of motion and pressure pain sensitivity in participants with a latent trigger point in the upper trapezius muscle. Ninety asymptomatic participants with an upper trapezius latent trigger point were randomized into 3 groups: mobilization, ischemic compression and a control. The outcomes were measured over a 2-week period. Repeated-measures ANOVA showed statistically and clinically significant pre-to-post improvement in both interventional groups compared to control (p < 0.05). However, the effect sizes between the intervention groups were small (<0.3), revealing a minimal clinically detectable difference. PMID:27634068

  9. An evaluation of the sandwich beam in four-point bending as a compressive test method for composites

    NASA Technical Reports Server (NTRS)

    Shuart, M. J.; Herakovich, C. T.

    1978-01-01

    The experimental phase of the study included compressive tests on HTS/PMR-15 graphite/polyimide, 2024-T3 aluminum alloy, and 5052 aluminum honeycomb at room temperature, and tensile tests on graphite/polyimide at room temperature, -157 C, and 316 C. Elastic properties and strength data are presented for three laminates. The room temperature elastic properties were generally found to differ in tension and compression with Young's modulus values differing by as much as twenty-six percent. The effect of temperature on modulus and strength was shown to be laminate dependent. A three-dimensional finite element analysis predicted an essentially uniform, uniaxial compressive stress state in the top flange test section of the sandwich beam. In conclusion, the sandwich beam can be used to obtain accurate, reliable Young's modulus and Poisson's ratio data for advanced composites; however, the ultimate compressive stress for some laminates may be influenced by the specimen geometry.

  10. Improving smooth muscle cell exposure to drugs from drug-eluting stents at early time points: a variable compression approach.

    PubMed

    O'Connell, Barry M; Cunnane, Eoghan M; Denny, William J; Carroll, Grainne T; Walsh, Michael T

    2014-08-01

    The emergence of drug-eluting stents (DES) as a viable replacement for bare metal stenting has led to a significant decrease in the incidence of clinical restenosis. This is due to the transport of anti-restenotic drugs from within the polymer coating of a DES into the artery wall, which arrests the cell cycle before restenosis can occur. The efficacy of DES is still under close scrutiny in the medical field as many issues regarding the effectiveness of DES drug transport in vivo still exist. One such issue that has received less attention is the limiting effect that stent strut compression has on the transport of drug species in the artery wall. Once the artery wall is compressed, the stent's ability to transfer drug species into the arterial wall can be reduced. This leads to a reduction in the spatial therapeutic transfer of drug species to binding sites within the arterial wall. This paper investigates the concept of idealised variable compression as a means of demonstrating how such a stent design approach could improve the spatial delivery of drug species in the arterial wall. The study focused on assessing how the trends in concentration levels changed as a result of artery wall compression. Five idealised stent designs were created with a combination of thick struts that provide the necessary compression to restore luminal patency and thin uncompressive struts that improve the transport of drugs therein. By conducting numerical simulations of diffusive mass transport, this study found that the use of uncompressive struts results in a more uniform spatial distribution of drug species in the arterial wall.
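
    The kind of diffusive mass-transport calculation referred to above can be sketched in one dimension: drug diffuses from the strut face into the wall, with a reduced diffusivity in the compressed region. Geometry, diffusivities, and time scales below are invented for illustration and do not reproduce the paper's stent models.

    ```python
    import numpy as np

    # Illustrative 1-D drug diffusion into the arterial wall with a spatially
    # varying diffusivity (reduced where the tissue is compressed under a strut).
    nx, wall = 100, 1.0e-3                     # grid points, wall thickness (m)
    dx = wall / nx
    D = np.full(nx, 1.0e-11)                   # baseline diffusivity (m^2/s)
    D[:20] = 2.5e-12                           # compressed region near the strut
    dt = 0.4 * dx**2 / D.max()                 # within the explicit stability limit

    c = np.zeros(nx)
    c[0] = 1.0                                 # constant drug concentration at the strut face
    for _ in range(20000):
        flux = -0.5 * (D[1:] + D[:-1]) * np.diff(c) / dx   # interface fluxes
        c[1:-1] -= dt * np.diff(flux) / dx                 # conservative update
        c[0], c[-1] = 1.0, c[-2]               # Dirichlet inlet, zero-gradient outer wall
    print("concentration mid-wall:", round(c[nx // 2], 4))
    ```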

  11. Operational procedure for computer program for design point characteristics of a compressed-air generator with through-flow combustor for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1971-01-01

    The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.

  12. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  13. Behavior of the layer compression elastic modulus near, above, and below a smectic C-hexatic I critical point in binary mixtures.

    PubMed

    Rogez, D; Benguigui, L G; Martinoty, P

    2005-02-01

    We present the first study of the layer compression modulus B carried out near, above and below the Smectic C-Hexatic I critical point in racemic mixtures of methylbutyl phenyl octylbiphenyl-carboxylate (8SI) and the octyloxy biphenyl analog (8OSI), at frequencies ranging from 0.2 Hz to 2 × 10^3 Hz. The behavior of B as a function of temperature shows a progressive evolution from a first order transition in 8SI to a continuous supercritical behavior in 8OSI. The latter is characterized by an increase in B, which appears above the transition, and which is followed by a leveling off when the temperature is decreased towards the transition. It is proposed that this behavior stems from the relaxation of the hexatic domains which are frozen in the frequency range studied. For the supercritical and near-critical compounds, B exhibits a small dip near the transition temperature, which is visible in the low frequency range only, indicating that the dynamics associated with the critical point is very slow. We also report measurements in the Crystal-J phase of the pure compounds, and show that 8SI behaves mechanically as a hexatic phase and 8OSI as a soft crystal phase.

  14. The use of the percentile method for searching empirical relationships between compression strength (UCS), Point Load (Is50) and Schmidt Hammer (RL) Indices

    NASA Astrophysics Data System (ADS)

    Bruno, Giovanni; Bobbo, Luigi; Vessia, Giovanna

    2014-05-01

    Is50 and RL indices are commonly used to indirectly estimate the compression strength of a rock deposit with in situ and laboratory devices. The widespread use of point load and Schmidt hammer tests is due to the simplicity and speed of these tests. Their indices can be related to the UCS by means of ordinary least squares regression analyses. Several researchers suggest taking the lithology into account to build highly correlated empirical expressions (R^2 > 0.8) for deriving UCS from Is50 or RL values. Nevertheless, the lower and upper bounds of the UCS range that can be estimated by means of the two indirect indices are not clearly defined yet. Aydin (2009) stated that the Schmidt hammer test should be used to assess the compression resistance of rocks characterized by UCS > 12-20 MPa. On the other hand, point load measurements can be performed on weak rocks, but upper bound values for UCS are not suggested. In this paper, the empirical relationships between UCS, RL and Is50 are sought by means of the percentile method (Bruno et al. 2013). This method searches for the best regression function between measured UCS data and one of the indirect indices, using a subset of the measurement pairs corresponding to percentile values. These values are taken from the original dataset of both measures by calculating the cumulative distribution function. No hypothesis on the probability distribution of the sample is needed, and the procedure proves robust with respect to odd values or outliers. In this study, carbonate sedimentary rocks are investigated. According to the rock mass classification of Dobereiner and De Freitas (1986), the UCS values for the studied rocks range from 'extremely weak' to 'strong'. For the analyzed data, UCS varies between 1.18 and 270.70 MPa. Thus, through the percentile method the best empirical relationships UCS-Is50 and UCS-RL are plotted. Relationships between Is50 and RL are drawn, too.
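
    One plausible reading of the percentile method, sketched on synthetic data: matched percentiles of the two empirical distributions are computed from their cumulative functions and an ordinary least squares line is fitted through the percentile pairs. The numbers below are synthetic and stand in for the carbonate-rock dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    is50 = rng.lognormal(mean=1.0, sigma=0.5, size=80)        # point-load index (MPa)
    ucs = 22.0 * is50 * rng.normal(1.0, 0.15, size=80)        # synthetic UCS (MPa)

    p = np.arange(5, 100, 5)                                  # 5th...95th percentiles
    x = np.percentile(is50, p)                                # matched percentile values
    y = np.percentile(ucs, p)
    slope, intercept = np.polyfit(x, y, 1)                    # ordinary least squares fit
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"UCS ~ {slope:.1f} * Is50 + {intercept:.1f}  (R^2 = {r2:.3f})")
    ```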

  15. Development of structural and material clavicle response corridors under axial compression and three point bending loading for clavicle finite element model validation.

    PubMed

    Zhang, Qi; Kindig, Matthew; Li, Zuoping; Crandall, Jeff R; Kerrigan, Jason R

    2014-08-22

    Clavicle injuries were frequently observed in automotive side and frontal crashes. Finite element (FE) models have been developed to understand the injury mechanism, although no clavicle loading response corridors yet exist in the literature to ensure the biofidelity of the model response. Moreover, the typically developed structural-level (e.g., force-deflection) response corridors have been shown to be insufficient for verifying the injury prediction capacity of an FE model, which is usually based on strain-related injury criteria. Therefore, the purpose of this study is to develop both structural (force vs deflection) and material-level (strain vs force) clavicle response corridors for validating FE models for injury risk modeling. Twenty clavicles were loaded to failure under loading conditions representative of side and frontal crashes, half in axial compression and the other half in three-point bending. Both structural and material response corridors were developed for each loading condition. An FE model that can accurately predict both structural response and strain level provides a more useful tool for injury risk modeling and prediction. The corridor development method in this study could also be extended to develop corridors for other components of the human body. PMID:24975696
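
    A common way to construct such corridors, shown here on synthetic data as a generic illustration (not necessarily the authors' exact procedure): each specimen's force-deflection curve is evaluated on a shared deflection grid, and the corridor is taken as the mean plus or minus one standard deviation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    deflection = np.linspace(0.0, 10.0, 50)                  # mm, shared abscissa
    curves = []
    for _ in range(10):                                      # 10 synthetic specimens
        stiffness = rng.normal(120.0, 15.0)                  # N/mm
        curves.append(stiffness * deflection * (1 - 0.03 * deflection))
    curves = np.array(curves)

    mean = curves.mean(axis=0)
    sd = curves.std(axis=0, ddof=1)
    upper, lower = mean + sd, mean - sd                      # corridor bounds
    print("corridor width at mid-deflection:", round(upper[25] - lower[25], 1), "N")
    ```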

  16. 1DB-2DB-3DB: One-, Two-, Three-Dimensional Diffusion Code System for Nuclear Reactor Analysis

    SciTech Connect

    2007-07-01

    1DB-2DB-3DB contains multipurpose, one-, two-, and three-dimensional diffusion theory codes for use in reactor analysis. 1DB is a one-dimensional (plane, cylinder, sphere), multigroup diffusion (and Sn) code. 2DB is a two-dimensional (X-Y, R-Z, R-theta, triangular), multigroup diffusion code. 3DB is a three-dimensional (X-Y-Z, R-theta-Z, Hex-Z), multigroup diffusion code. The codes can be used to (1) compute k_eff using either a flux or an adjoint flux model, (2) compute isotope burnup, and (3) compute flux distributions for an extraneous source. The codes read cross-section libraries in DTF format. Note that cross sections are not included in this package. This release replaces earlier versions previously distributed by RSICC as CCC-614/1DB, CCC-134/2DBS, and CCC-328/3DB (RSICC IDs: C614ALLCP00, C134U110800, and C328C000000).

  17. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to exact-repeat and reverse-repeat fragments of a DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio of less than 1.72 bits/base.
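
    The baseline step of such a scheme is packing each base into 2 bits; the shorter codes that DNABIT Compress assigns to exact and reverse repeats are what push the ratio below 2 bits/base and are not reproduced in this toy sketch.

    ```python
    # Toy 2-bit packing of DNA bases (baseline only; no repeat codes).
    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        """Pack a DNA string into bytes at 2 bits per base (plus its length)."""
        out = bytearray((len(seq) + 3) // 4)
        for i, b in enumerate(seq):
            out[i // 4] |= CODE[b] << (2 * (i % 4))
        return bytes(out), len(seq)

    def unpack(data, n):
        return "".join(BASE[(data[i // 4] >> (2 * (i % 4))) & 0b11] for i in range(n))

    seq = "ACGTACGTTTGACCA"
    packed, n = pack(seq)
    assert unpack(packed, n) == seq
    print(f"{len(seq)} bases -> {len(packed)} bytes "
          f"({8 * len(packed) / len(seq):.2f} bits/base)")
    ```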

  18. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of this lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
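
    The core idea, sketched generically (an illustration of the principle, not the patented method itself): quantized indices whose value is uncertain by one unit can each be nudged by at most one unit so that their parity carries one bit of auxiliary data, which a receiver recovers by reading the parities.

    ```python
    import numpy as np

    def embed(indices, bits):
        """Nudge each quantized index by at most one unit so its parity carries
        one auxiliary bit (generic illustration of index manipulation)."""
        out = indices.copy()
        for i, bit in enumerate(bits):
            if out[i] % 2 != bit:
                out[i] += 1 if out[i] <= 0 else -1   # move by one unit, toward zero where possible
        return out

    def extract(indices, n_bits):
        return [int(v % 2) for v in indices[:n_bits]]

    rng = np.random.default_rng(0)
    indices = rng.integers(-50, 50, size=32)          # e.g. quantized transform coefficients
    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed(indices, payload)
    assert extract(stego, len(payload)) == payload
    print("max index change:", int(np.max(np.abs(stego - indices))))
    ```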

  19. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  20. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  1. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  2. Multiphase, Multicomponent Compressibility in Geothermal Reservoir Engineering

    SciTech Connect

    Macias-Chapa, L.; Ramey, H.J. Jr.

    1987-01-20

    Coefficients of compressibility below the bubble point were computed with a thermodynamic model for single and multicomponent systems. Results showed coefficients of compressibility below the bubble point larger than the gas coefficient of compressibility at the same conditions. Two-phase compressibilities computed in the conventional way are underestimated and may lead to errors in reserve estimation and well test analysis. 10 refs., 9 figs.
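
    The abstract does not reproduce the expressions it compares. For reference, the standard textbook reservoir-engineering definitions (my notation, not taken from the paper) of the oil-phase compressibility below the bubble point, where gas coming out of solution contributes the R_s term, and of the total system compressibility are:

      % Standard reservoir-engineering expressions (assumed, not from the paper).
      c_o = -\frac{1}{B_o}\frac{\partial B_o}{\partial p}
            + \frac{B_g}{B_o}\frac{\partial R_s}{\partial p},
      \qquad
      c_t = S_o c_o + S_w c_w + S_g c_g + c_f

    Here B_o and B_g are formation volume factors, R_s is the solution gas-oil ratio, S_o, S_w, and S_g are phase saturations, and c_f is the pore-volume compressibility.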

  3. Recent progress in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Chen, Shiyi; Xia, Zhenhua; Wang, Jianchun; Yang, Yantao

    2015-06-01

    In this paper, we review some recent studies on compressible turbulence conducted by the authors' group, which include fundamental studies on compressible isotropic turbulence (CIT) and applied studies on developing a constrained large eddy simulation (CLES) for wall-bounded turbulence. In the first part, we begin with a newly proposed hybrid compact-weighted essentially nonoscillatory (WENO) scheme for a CIT simulation that has been used to construct a systematic database of CIT. Using this database various fundamental properties of compressible turbulence have been examined, including the statistics and scaling of compressible modes, the shocklet-turbulence interaction, the effect of local compressibility on small scales, the kinetic energy cascade, and some preliminary results from a Lagrangian point of view. In the second part, the idea and formulas of the CLES are reviewed, followed by the validations of CLES and some applications in compressible engineering problems.

  4. libpolycomp: Compression/decompression library

    NASA Astrophysics Data System (ADS)

    Tomasi, Maurizio

    2016-04-01

    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
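
    A minimal sketch of the "polynomial compression" idea described above: chunk-wise least-squares fits, keeping only the coefficients when the fit is within tolerance and falling back to raw samples otherwise. This illustrates the concept only; the function names, parameters, and in-memory format are mine, not libpolycomp's actual API or file format.

      # Chunk-wise polynomial compression sketch (concept only; the real scheme
      # also filters Fourier residuals and defines its own storage format).
      import numpy as np

      def compress(samples, chunk=64, deg=3, tol=1e-3):
          out = []
          for start in range(0, len(samples), chunk):
              block = np.asarray(samples[start:start + chunk], dtype=float)
              x = np.arange(len(block))
              coeffs = np.polyfit(x, block, deg)
              if np.max(np.abs(np.polyval(coeffs, x) - block)) <= tol:
                  out.append(("poly", len(block), coeffs))   # deg+1 floats per chunk
              else:
                  out.append(("raw", len(block), block))     # incompressible: keep raw
          return out

      def decompress(items):
          parts = [np.polyval(p, np.arange(n)) if kind == "poly" else p
                   for kind, n, p in items]
          return np.concatenate(parts)

      t = np.linspace(0, 1, 1024)
      smooth = 3.0 + 0.5 * t - 0.2 * t**2              # smooth, noise-free timeline
      packed = compress(smooth)
      assert np.allclose(decompress(packed), smooth, atol=1e-3)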

  5. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  6. Treatment of vertebral body compression fractures using percutaneous kyphoplasty guided by a combination of computed tomography and C-arm fluoroscopy with finger-touch guidance to determine the needle entry point.

    PubMed

    Wang, G Y; Zhang, C C; Ren, K; Zhang, P P; Liu, C H; Zheng, Z A; Chen, Y; Fang, R

    2015-01-01

    This study aimed to evaluate the results and complications of image-guided percutaneous kyphoplasty (PKP) using computed tomography (CT) and C-arm fluoroscopy, with finger-touch guidance to determine the needle entry point. Of the 86 patients (106 PKP) examined, 56 were treated for osteoporotic vertebral compression fractures and 30 for vertebral tumors. All patients underwent image-guided treatment using CT and conventional fluoroscopy, with finger-touch identification of a puncture point within a small incision (1.5 to 2 cm). Partial or complete pain relief was achieved in 98% of patients within 24 h of treatment. Moreover, a significant improvement in functional mobility and reduction in analgesic use was observed. CT allowed the detection of cement leakage in 20.7% of the interventions. No bone cement leakages with neurologic symptoms were noted. All work channels were made only once, and bone cement was distributed near the center of the vertebral body. Our study confirms the efficacy of PKP treatment in osteoporotic and oncological patients. The combination of CT and C-arm fluoroscopy with finger-touch guidance reduces the risk of complications compared with conventional fluoroscopy alone, facilitates the detection of minor cement leakage, improves the operative procedure, and results in a favorable bone cement distribution.

  7. Data compression in digitized lines

    NASA Technical Reports Server (NTRS)

    Thapa, Khagendra

    1990-01-01

    The problem of data compression is very important in digital photogrammetry, computer-assisted cartography, and GIS/LIS. In addition, it is also applicable in many other fields such as computer vision, image processing, pattern recognition, and artificial intelligence. Consequently, there are many algorithms available to solve this problem, but none of them are considered to be satisfactory. In this paper, a new method of finding critical points in a digitized curve is explained. This technique, based on the normalized symmetric scattered matrix, is good for both critical point detection and data compression. In addition, the critical points detected by this algorithm are compared with those detected by zero-crossings.
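
    The scatter-matrix technique itself is not given in the record. For comparison, the sketch below implements the classic Douglas-Peucker simplification, a different but widely used way of keeping only the critical points of a digitized line; the tolerance and test data are arbitrary.

      # Douglas-Peucker line simplification (a classic critical-point/data-compression
      # method for digitized lines; not the scatter-matrix technique of the paper).
      import math

      def point_line_distance(p, a, b):
          """Perpendicular distance from p to the line through a and b."""
          (px, py), (ax, ay), (bx, by) = p, a, b
          dx, dy = bx - ax, by - ay
          if dx == dy == 0:
              return math.hypot(px - ax, py - ay)
          return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

      def douglas_peucker(points, tol):
          if len(points) < 3:
              return list(points)
          # Find the point farthest from the chord joining the endpoints.
          dists = [point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
          idx, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
          if dmax <= tol:
              return [points[0], points[-1]]          # everything in between is discarded
          left = douglas_peucker(points[:idx + 1], tol)
          right = douglas_peucker(points[idx:], tol)
          return left[:-1] + right                    # avoid duplicating the split point

      line = [(x, 0.01 * x * x) for x in range(100)]
      print(len(douglas_peucker(line, tol=0.5)), "critical points kept out of", len(line))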

  8. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
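
    As a toy illustration of the pooling idea only (the paper's actual design is adapted to DNA-sequencing barcodes and biological constraints, and is not reproduced here), the sketch below runs a random group-testing experiment and decodes carriers with the standard definite-non-defective rule; with few pools, a handful of non-carriers may remain as candidates.

      # Toy group-testing sketch: random pools, Boolean OR outcomes, and decoding
      # by clearing anyone who appears in a negative pool (COMP rule).
      import random

      random.seed(1)
      n_samples, n_pools, pool_size = 200, 40, 20
      carriers = set(random.sample(range(n_samples), 2))     # rare carriers (sparse signal)

      pools = [set(random.sample(range(n_samples), pool_size)) for _ in range(n_pools)]
      # A pool tests positive iff it contains at least one carrier.
      results = [bool(p & carriers) for p in pools]

      # Decode: anyone in a negative pool is cleared; the rest are candidate carriers.
      cleared = set().union(*(p for p, pos in zip(pools, results) if not pos))
      candidates = set(range(n_samples)) - cleared
      print("true carriers:", sorted(carriers), "| decoded candidates:", sorted(candidates))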

  9. Selfsimilar Spherical Compression Waves in Gas Dynamics

    NASA Astrophysics Data System (ADS)

    Meyer-ter-Vehn, J.; Schalk, C.

    1982-08-01

    A synopsis of different selfsimilar spherical compression waves is given pointing out their fundamental importance for the gas dynamics of inertial confinement fusion. Strong blast waves, various forms of isentropic compression waves, imploding shock waves and the solution for non-isentropic collapsing hollow spheres are included. A classification is given in terms of six singular points which characterise the different solutions and the relations between them. The presentation closely follows Guderley's original work on imploding shock waves.

  10. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  11. Myofascial trigger points.

    PubMed

    Lavelle, Elizabeth Demers; Lavelle, William; Smith, Howard S

    2007-03-01

    Painful conditions of the musculoskeletal system, including myofascial pain syndrome, constitute some of the most important chronic problems encountered in a clinical practice. A myofascial trigger point is a hyperirritable spot, usually within a taut band of skeletal muscle, which is painful on compression and can give rise to characteristic referred pain, motor dysfunction, and autonomic phenomena. Trigger points may be relieved through noninvasive measures, such as spray and stretch, transcutaneous electrical stimulation, physical therapy, and massage. Invasive treatments for myofascial trigger points include injections with local anesthetics, corticosteroids, or botulinum toxin, as well as dry needling. The etiology, pathophysiology, and treatment of myofascial trigger points are addressed in this article.

  12. Compressively sensed complex networks.

    SciTech Connect

    Dunlavy, Daniel M.; Ray, Jaideep; Pinar, Ali

    2010-07-01

    The aim of this project is to develop low-dimensional parametric (deterministic) models of complex networks, to use compressive sensing (CS) and multiscale analysis to do so, and to exploit the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general (and so not very efficient or customized). Other model-based CS techniques exist, but they have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.

  13. Compressed gas manifold

    SciTech Connect

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  14. Compressible turbulent mixing: Effects of compressibility

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2016-04-01

    We studied by numerical simulation the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The driven forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and suffered negligible influence from the compressibility. The growth of the Mach number showed (1) first a reduction and then an enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) first a stronger and then a weaker intermittency of the scalar relative to that of the velocity; and (4) an increase in the intermittency parameter, which measures the intermittency of the scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidally forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressively forced flow, the field exhibited regions dominated by large-scale motions of rarefaction and compression.

  15. Stability of compressible Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Chow, Chuen-Yen

    1991-01-01

    Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.

  16. Fracture in compression of brittle solids

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fracture of brittle solids in monotonic compression is reviewed from both the mechanistic and phenomenological points of view. The fundamental theoretical developments based on the extension of pre-existing cracks in general multiaxial stress fields are recognized as explaining extrinsic behavior where a single crack is responsible for the final failure. In contrast, shear faulting in compression is recognized to be the result of an evolutionary localization process involving en echelon action of cracks and is termed intrinsic.

  17. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it relies on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support for medical procedures in diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317

  18. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it relies on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support for medical procedures in diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  19. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  20. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
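
    A minimal sketch of the principle at work: a predictive model's probabilities translate into code length through the entropy-coding bound of about -log2 p bits per symbol. Here an adaptive order-1 (previous-character) frequency model stands in for the paper's predictive neural network, so the numbers only illustrate the idea; an arithmetic coder would be needed to turn the probabilities into an actual bit stream.

      # Estimate compressed size from a predictive model via the entropy-coding
      # bound (-log2 p bits per symbol). A simple adaptive order-1 frequency model
      # stands in for the predictive neural net of the paper.
      import math
      from collections import defaultdict

      def predictive_code_length(text):
          counts = defaultdict(lambda: defaultdict(int))
          totals = defaultdict(int)
          bits, prev = 0.0, ""
          for ch in text:
              # Laplace-smoothed probability of ch given the previous character.
              p = (counts[prev][ch] + 1) / (totals[prev] + 256)
              bits += -math.log2(p)
              counts[prev][ch] += 1          # update the model after coding (adaptive)
              totals[prev] += 1
              prev = ch
          return bits

      text = "the quick brown fox jumps over the lazy dog " * 50
      bits = predictive_code_length(text)
      print(f"{len(text)} chars, ~{bits / 8:.0f} bytes predicted vs {len(text)} bytes raw")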

  1. Multishock Compression Properties of Warm Dense Argon

    NASA Astrophysics Data System (ADS)

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-10-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked state under the conditions of different experiments, and ηi' increases with pressure in the lower density regime and conversely decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increase the compression, and by the interaction effects between particles that reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime.

  2. Multishock Compression Properties of Warm Dense Argon.

    PubMed

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm(3) from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked state under the conditions of different experiments, and ηi' increases with pressure in the lower density regime and conversely decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increase the compression, and by the interaction effects between particles that reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505

  3. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
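
    A rough sketch of the quantization step described above: estimate the background noise, scale the floating-point pixels so the noise spans a chosen number of quantization levels, round to integers, and hand the integers to a lossless coder. zlib is used here purely for illustration; fpack itself uses Rice coding, tiling, and a FITS binary-table container, and the function name and parameters below are mine.

      # Sketch of float-image compression by scaled-integer quantization
      # (concept only; fpack uses Rice coding, tiling, and dithering).
      import zlib
      import numpy as np

      def quantize_and_compress(image, q=4.0):
          """Keep roughly q quantization levels per sigma of background noise."""
          # Robust noise estimate from differences of adjacent pixels (MAD-based).
          diff = np.diff(image, axis=1).ravel()
          sigma = 1.4826 * np.median(np.abs(diff - np.median(diff))) / np.sqrt(2.0)
          scale = sigma / q if sigma > 0 else 1.0
          ints = np.round(image / scale).astype(np.int32)
          return zlib.compress(ints.tobytes()), scale

      rng = np.random.default_rng(0)
      img = 100.0 + rng.normal(0.0, 5.0, size=(256, 256)).astype(np.float32)
      blob, scale = quantize_and_compress(img)
      print(f"raw {img.nbytes} bytes -> compressed {len(blob)} bytes (scale={scale:.3f})")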

  4. Compression Ratio Adjuster

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1982-01-01

    New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  5. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  6. Texture Studies and Compression Behaviour of Apple Flesh

    NASA Astrophysics Data System (ADS)

    James, Bryony; Fonseca, Celia

    Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types, hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.

  7. 46 CFR 151.50-30 - Compressed gases.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Compressed gases. 151.50-30 Section 151.50-30 Shipping... BULK LIQUID HAZARDOUS MATERIAL CARGOES Special Requirements § 151.50-30 Compressed gases. (a) All tank... of gas will be directed vertically upward to a point at least 10 feet above the weatherdeck or...

  8. 46 CFR 151.50-30 - Compressed gases.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Compressed gases. 151.50-30 Section 151.50-30 Shipping... BULK LIQUID HAZARDOUS MATERIAL CARGOES Special Requirements § 151.50-30 Compressed gases. (a) All tank... of gas will be directed vertically upward to a point at least 10 feet above the weatherdeck or...

  9. 46 CFR 151.50-30 - Compressed gases.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Compressed gases. 151.50-30 Section 151.50-30 Shipping... BULK LIQUID HAZARDOUS MATERIAL CARGOES Special Requirements § 151.50-30 Compressed gases. (a) All tank... of gas will be directed vertically upward to a point at least 10 feet above the weatherdeck or...

  10. 46 CFR 151.50-30 - Compressed gases.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Compressed gases. 151.50-30 Section 151.50-30 Shipping... BULK LIQUID HAZARDOUS MATERIAL CARGOES Special Requirements § 151.50-30 Compressed gases. (a) All tank... of gas will be directed vertically upward to a point at least 10 feet above the weatherdeck or...

  11. 46 CFR 151.50-30 - Compressed gases.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Compressed gases. 151.50-30 Section 151.50-30 Shipping... BULK LIQUID HAZARDOUS MATERIAL CARGOES Special Requirements § 151.50-30 Compressed gases. (a) All tank... of gas will be directed vertically upward to a point at least 10 feet above the weatherdeck or...

  12. Selfsimilar spherical compression waves in gas dynamics

    NASA Astrophysics Data System (ADS)

    Meyer-Ter-Vehn, J.; Schalk, C.

    1982-05-01

    A synopsis of different selfsimilar spherical compression waves is given pointing out their fundamental importance for the gas dynamics of inertial confinement fusion. Strong blast waves, various forms of isentropic compression waves, imploding shock waves and the solution for non-isentropic collapsing hollow spheres are included. A classification is given in terms of six singular points which characterize the different solutions and the relations between them. The presentation closely follows Guderley's original work on imploding shock waves.

  13. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon, et al., re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  14. Local compressibilities in crystals

    NASA Astrophysics Data System (ADS)

    Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor

    2000-12-01

    An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.
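
    In symbols (the notation is assumed here, not quoted from the paper), the statement that inverse bulk moduli are weighted averages of atomic compressibilities reads, with V_i the volume of atomic basin i and V the cell volume:

      % Cell compressibility as a volume-fraction-weighted average of atomic
      % (basin) compressibilities; notation assumed, not taken from the paper.
      \kappa \;=\; \frac{1}{B} \;=\; \sum_i \frac{V_i}{V}\,\kappa_i ,
      \qquad
      \kappa_i \;=\; -\frac{1}{V_i}\,\frac{\partial V_i}{\partial p}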

  15. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  16. Focus on Compression Stockings

    MedlinePlus

    ... sion apparel is used to prevent or control edema The post-thrombotic syndrome (PTS) is a complication ( ... complication. abdomen. This swelling is referred to as edema. If you have edema, compression therapy may be ...

  17. Muon cooling: longitudinal compression.

    PubMed

    Bao, Yu; Antognini, Aldo; Bertl, Wilhelm; Hildebrandt, Malte; Khaw, Kim Siang; Kirch, Klaus; Papa, Angela; Petitjean, Claude; Piegsa, Florian M; Ritt, Stefan; Sedlak, Kamil; Stoykov, Alexey; Taqqu, David

    2014-06-01

    A 10 MeV/c positive muon beam was stopped in helium gas of a few mbar in a magnetic field of 5 T. The muon "swarm" has been efficiently compressed from a length of 16 cm down to a few mm along the magnetic field axis (longitudinal compression) using electrostatic fields. The simulation reproduces the low energy interactions of slow muons in helium gas. Phase space compression occurs on the order of microseconds, compatible with the muon lifetime of 2 μs. This paves the way for the preparation of a high-quality low-energy muon beam, with an increase in phase space density relative to a standard surface muon beam of 10^7. The achievable phase space compression by using only the longitudinal stage presented here is of the order of 10^4.

  18. Compressive Optical Image Encryption

    NASA Astrophysics Data System (ADS)

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-05-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume.

  19. Muon Cooling: Longitudinal Compression

    NASA Astrophysics Data System (ADS)

    Bao, Yu; Antognini, Aldo; Bertl, Wilhelm; Hildebrandt, Malte; Khaw, Kim Siang; Kirch, Klaus; Papa, Angela; Petitjean, Claude; Piegsa, Florian M.; Ritt, Stefan; Sedlak, Kamil; Stoykov, Alexey; Taqqu, David

    2014-06-01

    A 10 MeV/c positive muon beam was stopped in helium gas of a few mbar in a magnetic field of 5 T. The muon "swarm" has been efficiently compressed from a length of 16 cm down to a few mm along the magnetic field axis (longitudinal compression) using electrostatic fields. The simulation reproduces the low energy interactions of slow muons in helium gas. Phase space compression occurs on the order of microseconds, compatible with the muon lifetime of 2 μs. This paves the way for the preparation of a high-quality low-energy muon beam, with an increase in phase space density relative to a standard surface muon beam of 10^7. The achievable phase space compression by using only the longitudinal stage presented here is of the order of 10^4.

  20. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  1. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
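
    As a simplified stand-in for the filling step (the patent solves Laplace's equation with a multi-grid method; plain Jacobi relaxation is used below only to keep the sketch short), the following fills non-edge pixels from fixed edge-pixel values; the function name and test data are hypothetical.

      # Fill non-edge pixels by relaxing Laplace's equation with the edge pixels
      # held fixed (Jacobi iteration; the patent uses a faster multi-grid solver).
      import numpy as np

      def laplace_fill(values, edge_mask, iterations=5000):
          """values: 2-D array with meaningful data only where edge_mask is True."""
          filled = np.where(edge_mask, values, values[edge_mask].mean())
          for _ in range(iterations):
              avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                            np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
              filled = np.where(edge_mask, values, avg)   # keep edge pixels clamped
          return filled

      img = np.zeros((64, 64))
      mask = np.zeros_like(img, dtype=bool)
      mask[:, 0], mask[:, -1] = True, True                # two "edges" with known values
      img[:, 0], img[:, -1] = 0.0, 1.0
      print(laplace_fill(img, mask)[32, ::16])            # approximately a linear ramp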

  2. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  3. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  4. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
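
    The article's specific ICT matrices are not reproduced in the record. As an illustration of the same idea, the sketch below applies the well-known 4x4 integer approximation of the DCT used as the H.264 core transform, which likewise needs only small-integer multiplies and adds in the forward pass; it is not the ICT family discussed in the article.

      # Integer approximation of the DCT (the 4x4 forward core transform of H.264),
      # shown only to illustrate the integer-cosine-transform idea.
      import numpy as np

      C = np.array([[1,  1,  1,  1],
                    [2,  1, -1, -2],
                    [1, -1, -1,  1],
                    [1, -2,  2, -1]], dtype=np.int64)

      def forward_int_transform(block4x4):
          """2-D separable transform using only integer multiplies and adds."""
          x = np.asarray(block4x4, dtype=np.int64)
          return C @ x @ C.T      # normalization is folded into the quantizer in a codec

      block = np.arange(16).reshape(4, 4)
      print(forward_int_transform(block))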

  5. Simulation and modeling of homogeneous, compressed turbulence

    NASA Technical Reports Server (NTRS)

    Wu, C. T.; Ferziger, J. H.; Chapman, D. R.

    1985-01-01

    Low Reynolds number homogeneous turbulence undergoing low Mach number isotropic and one-dimensional compression was simulated by numerically solving the Navier-Stokes equations. The numerical simulations were performed on a CYBER 205 computer using a 64 x 64 x 64 mesh. A spectral method was used for spatial differencing and the second-order Runge-Kutta method for time advancement. A variety of statistical information was extracted from the computed flow fields. These include three-dimensional energy and dissipation spectra, two-point velocity correlations, one-dimensional energy spectra, turbulent kinetic energy and its dissipation rate, integral length scales, Taylor microscales, and Kolmogorov length scale. Results from the simulated flow fields were used to test one-point closure, two-equation models. A new one-point-closure, three-equation turbulence model which accounts for the effect of compression is proposed. The new model accurately calculates four types of flows (isotropic decay, isotropic compression, one-dimensional compression, and axisymmetric expansion flows) for a wide range of strain rates.

  6. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    PubMed Central

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-01-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to that of pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former compared with the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica. PMID:26469314

  7. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    PubMed

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to determine the difference in chest compression quality between a modified chest compression method guided by a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants were classified into a smartphone group using the modified chest compression method (33 people) and a traditional group using the standardized method (31 people). Both groups used the same practice and evaluation manikins, and the smartphone group used applications running on the Android and iOS operating systems (OS) on two smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012, and the data were analyzed with the SPSS WIN 12.0 program. Compression depth was closer to the proper value (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of proper chest compressions was higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). Awareness of chest compression accuracy was also higher (p < 0.001) in the traditional group (3.83 points) than in the smartphone group (2.32 points). In an additional question asked only of the smartphone group, the main reasons given against the modified chest compression method were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%). PMID:24704648

  8. Transverse Compression of Tendons.

    PubMed

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon. PMID:26833218

  9. Transverse Compression of Tendons.

    PubMed

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon.

  10. Shock compression of condensed nonideal plasmas

    NASA Astrophysics Data System (ADS)

    Fortov, Vladimir

    2001-06-01

    The physical properties of hot dense plasmas at megabar pressures are of great interest for astro- and planetary physics, inertial confinement fusion, energetics, technology and many other applications. The lecture presents the modern results of experimental investigations of equations of state, compositions, thermodynamical and transport properties, electrical conductivity and opacity of strongly coupled plasmas generated by intense shock and rarefaction waves. The experimental methods for generation of high energy densities in matter, drivers for shock waves and fast diagnostic methods are discussed. The application of intense shock waves to solid and porous targets allows us to generate degenerate Fermi-like plasmas with maximum pressures up to 4 Gbar and temperatures up to 10^7 K. Compression of plasma by a series of incident and reflected shock waves allows us to decrease irreversible heating effects. As a result, such a multiple compression process becomes close to the isentropic one, which permits us to reach much higher densities and lower temperatures compared to single shock compression. On the other hand, to increase the irreversibility effects and to generate high temperature plasma states, experiments on shock compression of porous samples (fine metal powders, aerogels) were performed. The shock compression of saturated metal vapors and previously compressed noble gases by incident and reflected shocks allows us to reach nonideal plasmas on the Hugoniot. The adiabatic expansion of matter initially compressed by intense shocks up to megabars gives us the chance to investigate the intermediate region between the solid and vapor phase of nonideal plasmas, including the metal-insulator transition phase and the high temperature saturation curve with critical points of metals.

  11. Multishock Compression Properties of Warm Dense Argon

    PubMed Central

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked state under the conditions of different experiments, and ηi' increases with pressure in the lower density regime and conversely decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increase the compression, and by the interaction effects between particles that reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505

  12. The compressible mixing layer

    NASA Technical Reports Server (NTRS)

    Vandromme, Dany; Haminh, Hieu

    1991-01-01

    The capability of turbulence modeling to correctly handle the natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing these motions into an 'organized' part for which equations of motion are solved and a remaining 'incoherent' part represented by a turbulence model. Two-equation turbulence models and second-order turbulence models can yield reasonable results. For a specific compressible unsteady turbulent flow, graphical presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in rapid flow parts but shocklets do not yet occur.

  13. Isentropic Compression of Argon

    SciTech Connect

    H. Oona; J.C. Solem; L.R. Veeser, C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley

    1997-08-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  14. New scaling for compressible wall turbulence

    NASA Astrophysics Data System (ADS)

    Pei, Jie; Chen, Jun; Fazle, Hussain; She, ZhenSu

    2013-09-01

    Classical Mach-number (M) scaling in compressible wall turbulence was suggested by van Driest (Van Driest E R. Turbulent boundary layers in compressible fluids. J Aerodynamics Science, 1951, 18(3): 145-160) and Huang et al. (Huang P G, Coleman G N, Bradshaw P. Compressible turbulent channel flows: DNS results and modeling. J Fluid Mech, 1995, 305: 185-218). Using a concept of velocity-vorticity correlation structure (VVCS), defined by high correlation regions in a field of two-point cross-correlation coefficient between a velocity and a vorticity component, we have discovered a limiting VVCS as the closest streamwise vortex structure to the wall, which provides a concrete Morkovin scaling summarizing all compressibility effects. Specifically, when the height and mean velocity of the limiting VVCS are used as the units for the length scale and the velocity, all geometrical measures in the spanwise and normal directions, as well as the mean velocity and fluctuation (r.m.s) profiles become M-independent. The results are validated by direct numerical simulations (DNS) of compressible channel flows with M up to 3. Furthermore, a quantitative model is found for the M-scaling in terms of the wall density, which is also validated by the DNS data. These findings yield a geometrical interpretation of the semi-local transformation (Huang et al., 1995), and a conclusion that the location and the thermodynamic properties associated with the limiting VVCS determine the M-effects on supersonic wall-bounded flows.

  15. Compressive Shift Retrieval

    NASA Astrophysics Data System (ADS)

    Ohlsson, Henrik; Eldar, Yonina C.; Yang, Allen Y.; Sastry, S. Shankar

    2014-08-01

    The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.

  16. Isentropic compression of argon

    SciTech Connect

    Veeser, L.R.; Ekdahl, C.A.; Oona, H.

    1997-06-01

    The compression was done in an MC-1 flux compression (explosive) generator in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe was removed), both the Teflon and the argon are becoming conductive. The conductivity could not be determined (the Teflon insulation properties are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω cm)^-1, because when the Teflon breaks down, the dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe or to measure the conductivity without a probe are being sought.

  17. Orbiting dynamic compression laboratory

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.

    1984-01-01

    In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kb, respectively. In the case of molybdenum, when the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, tensile tests of the recovered samples demonstrated beneficial ultimate tensile strengths.

  18. Vacancy behavior in a compressed fcc Lennard-Jones crystal

    SciTech Connect

    Beeler, J.R. Jr.

    1981-12-01

    This computer experiment study concerns the determination of the stable vacancy configuration in a compressed fcc Lennard-Jones crystal and the migration of this defect in a compressed crystal. Isotropic and uniaxial compression stress conditions were studied. The isotropic and uniaxial compression magnitudes employed were 0.94 ≤ η ≤ 1.5 and 1.0 ≤ η ≤ 1.5, respectively. The site-centered vacancy (SCV) was the stable vacancy configuration whenever cubic symmetry was present. This includes all of the isotropic compression cases and the particular uniaxial compression case (η = √2) that gives a bcc structure. In addition, the SCV was the stable configuration for uniaxial compression η < 1.29. The out-of-plane split vacancy (SV-OP) was the stable vacancy configuration for uniaxial compression 1.29 < η ≤ 1.5 and was the saddle-point configuration for SCV migration when the SCV was the stable form. For η > 1.20, the SV-OP is an extended defect and, therefore, a saddle point for SV-OP migration could not be determined. The mechanism for the transformation from the SCV to the SV-OP as the stable form at η = 1.29 appears to be an alternating-sign (101) and/or (011) shear process.

  19. Argon Excluder Foam Compression Data

    SciTech Connect

    Clark, D.; /Fermilab

    1991-07-25

    The argon excluder is designed to reduce the media density of the dead space between the internal modules of the end calorimeters and the concave side of the convex head to less than that of argon. The design of the excluder includes a thin circular stainless steel plate welded to the inner side of the convex pressure vessel head at a radius of 26 and 15/16 inches. It is estimated that this plate will experience a pressure differential of approximately 40 pounds per square inch. An inner foam core is incorporated into the design of the excluder as structural support. This engineering note outlines the compression data for the foam used in the north end calorimeter argon excluder. Four test samples of approximately the same dimensions were cut and machined from large blocks of the poured foam. Two of these test samples were then subjected to varying compression magnitudes until failure. For this test, failure was taken to mean plastic yielding, or the point at which deformation increases without a corresponding increase in loading. The third sample was subjected to a constant compressive stress for an extended period of time to identify any 'creeping' effects. Finally, the fourth sample was cooled to cryogenic temperatures in order to determine the coefficient of thermal expansion. The compression test apparatus consisted of a state-of-the-art Instron testing machine coupled with a PC workstation. The tests were run at a constant strain rate with discrete data taken at 500 millisecond intervals. The sample data are plotted as stress-strain diagrams in the results. The first test was run on sample number one at a compression rate of 0.833 mils or, equivalently, a strain rate of 3.245 x 10^-4 mil/mils. The corresponding stress was then calculated from the measured force divided by the given initial area. The test was run for thirty minutes until the mode of failure, plastic yielding, was reached. The second test was run as a check of the first using sample number two, and likewise was

  20. Frictional work in double-sided tablet compression.

    PubMed

    Muñoz-Ruiz, A; Wihervaara, M; Hakkinen, M; Juslin, M; Paronen, P

    1997-04-01

    The aim of this study was to evaluate the friction during double-sided tablet compression. Dicalcium phosphate dihydrate and lactose were tabletted with a compaction simulator using symmetrical and asymmetrical double-sided sawtooth punch displacement profiles. The estimation of force transmission in a powder column was based on an exponential equation, including a material parameter consisting of both the friction coefficient and Poisson's ratio. This parameter was predetermined from single-sided compression. A novel equation was derived from a previously presented equation for friction work in single-sided tablet compression. The basic assumption was drawn from the linearly decreasing movement of infinitely thin particle layers, which are produced as the compressing punch surface approaches the other punch. This calculation was also based on the assumption that the equilibrium point, where the particles do not move, is halfway between the punches for the symmetrical profile and at a distance proportional to the amplitudes of the asymmetrical upper and lower sawtooth profiles. The tensile strength of tablets compressed with single- and double-sided profiles was identical, and thus the behavior of the materials studied under compression was independent of the compression profiles. The friction work values calculated with the proposed expression for double-sided profiles were close to the theoretical values, as estimated by calculations based on compressions with single-sided profiles. In conclusion, the novel mathematical expression opens new possibilities for the evaluation of friction in double-sided compression, for example in rotary press tabletting. PMID:9109053

  1. Energy Transfer and Triadic Interactions in Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, F.; Zhou, Ye; Bertoglio, Jean-Pierre

    1997-01-01

    Using a two-point closure theory, the Eddy-Damped-Quasi-Normal-Markovian (EDQNM) approximation, we have investigated the energy transfer process and triadic interactions of compressible turbulence. In order to analyze the compressible mode directly, the Helmholtz decomposition is used. The following issues were addressed: (1) What is the mechanism of energy exchange between the solenoidal and compressible modes, and (2) Is there an energy cascade in the compressible energy transfer process? It is concluded that the compressible energy is transferred locally from the solenoidal part to the compressible part. It is also found that there is an energy cascade of the compressible mode for high turbulent Mach number (M_t ≥ 0.5). Since we assume that the compressibility is weak, the magnitude of the compressible (radiative or cascade) transfer is much smaller than that of the solenoidal cascade. These results are further confirmed by studying the triadic energy transfer function, the most fundamental building block of the energy transfer.

  2. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  3. Nonlinear Frequency Compression

    PubMed Central

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261

  4. Compress Your Files

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability to pack many related files into one archive (for example, all files…

  5. Compression: Rent or own

    SciTech Connect

    Cahill, C.

    1997-07-01

    Historically, the decision to purchase or rent compression has been set as a corporate philosophy. As companies decentralize, there seems to be a shift away from corporate philosophy toward individual profit centers. This has led the decision to rent versus purchase to be looked at on a regional or project-by-project basis.

  6. The Compressed Video Experience.

    ERIC Educational Resources Information Center

    Weber, John

    In the fall semester 1995, Southern Arkansas University- Magnolia (SAU-M) began a two semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…

  7. Tipping Points

    NASA Astrophysics Data System (ADS)

    Hansen, J.

    2007-12-01

    A climate tipping point, at least as I have used the phrase, refers to a situation in which a changing climate forcing has reached a point such that little additional forcing (or global temperature change) is needed to cause large, relatively rapid, climate change. Present examples include potential loss of all Arctic sea ice and instability of the West Antarctic and Greenland ice sheets. Tipping points are characterized by ready feedbacks that amplify the effect of forcings. The notion that these may be runaway feedbacks is a misconception. However, present "unrealized" global warming, due to the climate system's thermal inertia, exacerbates the difficulty of avoiding global warming tipping points. I argue that prompt efforts to slow CO2 emissions and absolutely reduce non-CO2 forcings are both essential if we are to avoid tipping points that would be disastrous for humanity and creation, the planet as civilization knows it.

  8. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
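
    The measurement model sketched above (several sub-frames, each multiplied by a binary mask and integrated into one camera frame) takes only a few lines to write down. The snippet below is a minimal illustration of that forward model, with random data and random masks standing in for the TEM video and the coded aperture; the statistical compressive-sensing inversion used for reconstruction is not shown.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W, T = 64, 64, 8                   # frame size and number of coded sub-frames

        video = rng.random((T, H, W))         # placeholder for the dynamic process
        masks = rng.integers(0, 2, size=(T, H, W)).astype(float)  # per-sub-frame binary codes

        snapshot = np.sum(masks * video, axis=0)  # what the detector actually records
        print(snapshot.shape)                     # a single (H, W) frame encodes T sub-frames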

  9. Coded aperture compressive temporal imaging.

    PubMed

    Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J

    2013-05-01

    We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.

  10. Space-time compressive imaging.

    PubMed

    Treeaporn, Vicha; Ashok, Amit; Neifeld, Mark A

    2012-02-01

    Compressive imaging systems typically exploit the spatial correlation of the scene to facilitate a lower dimensional measurement relative to a conventional imaging system. In natural time-varying scenes there is a high degree of temporal correlation that may also be exploited to further reduce the number of measurements. In this work we analyze space-time compressive imaging using Karhunen-Loève (KL) projections for the read-noise-limited measurement case. Based on a comprehensive simulation study, we show that a KL-based space-time compressive imager offers higher compression relative to space-only compressive imaging. For a relative noise strength of 10% and reconstruction error of 10%, we find that space-time compressive imaging with 8×8×16 spatiotemporal blocks yields about 292× compression compared to a conventional imager, while space-only compressive imaging provides only 32× compression. Additionally, under high read-noise conditions, a space-time compressive imaging system yields lower reconstruction error than a conventional imaging system due to the multiplexing advantage. We also discuss three electro-optic space-time compressive imaging architecture classes, including charge-domain processing by a smart focal plane array (FPA). Space-time compressive imaging using a smart FPA provides an alternative method to capture the nonredundant portions of time-varying scenes.
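
    A back-of-the-envelope reading of the numbers quoted above is given below; it assumes (our reading, not necessarily the authors' definition) that compression is the number of pixel values in a block divided by the number of compressive measurements taken for that block.

        block_spacetime = 8 * 8 * 16            # values per spatiotemporal block
        block_space = 8 * 8                     # values per spatial-only block

        meas_spacetime = block_spacetime / 292  # about 3.5 measurements per block
        meas_space = block_space / 32           # 2 measurements per block

        print(f"space-time: {meas_spacetime:.1f} measurements per {block_spacetime}-value block")
        print(f"space-only: {meas_space:.1f} measurements per {block_space}-value block")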

  11. Tipping Point

    MedlinePlus

    Tipping Point, by CPSC Blogger (September 22). Almost weekly, we see ...

  12. Compressible turbulence transport equations for generalized second order closure

    SciTech Connect

    Cloutman, L D

    1999-05-01

    Progress on the theory of second order closure in turbulence models of various types requires knowledge of the transport equations for various turbulence correlations. This report documents a procedure that provides such equations for a wide variety of turbulence averages for compressible flows of a multicomponent fluid. Generalizing some work by Germano for incompressible flows, we introduce an appropriate extension of his generalized second order correlations and use a generalized mass-weighted averaging procedure to derive transport equations for the correlations. The averaging procedure includes all of the commonly used averages as special cases. The resulting equations provide an internally consistent starting point for future work in developing single-point statistical turbulence transport models for fluid flows. The form invariance of the incompressible equations also holds for the compressible case, and we discuss some of the closure issues and frequently ignored complications of statistical turbulence models of compressible flows.

  13. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive-sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of needed measurements, is unknown. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples, allowing the reconstruction of megapixel images. We present good-quality reconstructions from data compressed at ratios of 1:20.

  14. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  15. Compressibility of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Rose, J. H.; Smith, J. R.

    1987-01-01

    A universal form is proposed for the equation of state (EOS) of solids. Good agreement is found for a variety of test data. The form of the EOS is used to suggest a method of data analysis, which is applied to materials of geophysical interest. The isothermal bulk modulus is discussed as a function of the volume and of the pressure. The isothermal compression curves for materials of geophysical interest are examined.

  16. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  17. Compression of Cake

    NASA Astrophysics Data System (ADS)

    Nason, Sarah; Houghton, Brittany; Renfro, Timothy

    2012-03-01

    The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment that involved stretching wire. A question was raised: what would happen if we compressed something else? We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to the cake. We worked to derive the compression modulus by applying weight to a cake. In the end, we had our experimental cake and ate it too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1
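
    The calculation behind the experiment is a one-liner: the compression modulus is estimated as stress over strain, with stress = weight / contact area and strain = change in height / original height. The sketch below uses made-up numbers, not the class's measurements.

        mass = 2.0              # kg, weight placed on the cake (assumed)
        g = 9.81                # m/s^2
        area = 0.03             # m^2, top surface of the cake (assumed)
        h0, h1 = 0.080, 0.072   # m, cake height before and after loading (assumed)

        stress = mass * g / area        # Pa
        strain = (h0 - h1) / h0         # dimensionless
        modulus = stress / strain
        print(f"compression modulus ≈ {modulus:.0f} Pa")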

  18. Scale adaptive compressive tracking.

    PubMed

    Zhao, Pengpeng; Cui, Shaohui; Gao, Min; Fang, Dan

    2016-01-01

    Recently, the compressive tracking (CT) method (Zhang et al. in Proceedings of European conference on computer vision, pp 864-877, 2012) has attracted much attention due to its high efficiency, but it cannot deal well with scale-changing objects because of its constant tracking box. To address this issue, in this paper we propose a scale adaptive CT approach, which adaptively adjusts the scale of the tracking box with the size variation of the objects. Our method significantly improves CT in three aspects. Firstly, the scale of the tracking box is adaptively adjusted according to the size of the objects. Secondly, in the CT method, all the compressive features are assumed to be independent and to contribute equally to the classifier. Actually, different compressive features have different confidence coefficients. In our proposed method, the confidence coefficients of the features are computed and used to give them different contributions to the classifier. Finally, in the CT method, the learning parameter λ is constant, which can result in large tracking drift in the case of object occlusion or large-scale appearance variation. In our proposed method, a variable learning parameter λ is adopted, which can be adjusted according to the rate of variation of the object's appearance. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of the proposed method compared to state-of-the-art tracking algorithms. PMID:27386298

  19. Compression of multiwall microbubbles

    NASA Astrophysics Data System (ADS)

    Lebedeva, Natalia; Moore, Sam; Dobrynin, Andrey; Rubinstein, Michael; Sheiko, Sergei

    2012-02-01

    Optical monitoring of structural transformations and transport processes is precluded if the objects to be studied are bulky and/or non-transparent. This paper is focused on the development of a microbubble platform for acoustic imaging of heterogeneous media under harsh environmental conditions including high pressure (<500 atm), temperature (<100 C), and salinity (<10 wt%). We have studied the compression behavior of gas-filled microbubbles composed of multiple layers of surfactants and stabilizers. Upon hydrostatic compression, these bubbles undergo significant (up to 100x) changes in volume, which are completely reversible. Under repeated compression/expansion cycles, the pressure-volume P(V) characteristics of these microbubbles deviate from ideal-gas-law predictions. A theoretical model was developed to explain the observed deviations through contributions of shell elasticity and gas effusion. In addition, some of the microbubbles undergo peculiar buckling/smoothing transitions exhibiting intermittent formation of facetted structures, which suggests a solid-like nature of the pressurized shell. Preliminary studies illustrate that these pressure-resistant microbubbles maintain their mechanical stability and acoustic response at pressures greater than 1000 psi.

  20. Compression of Visibility Data for Murchison Widefield Array

    NASA Astrophysics Data System (ADS)

    Kitaeff, V. V.

    2015-09-01

    The Murchison Widefield Array (MWA) is a new low frequency radio telescope operating on the Square Kilometre Array site in Western Australia. The MWA is generating tens of terabytes of data daily. The size of the required data storage has become a significant operational limitation and cost. We present a simple binary compression technique and a system for the floating point visibility data developed for the MWA. We present statistics on the impact of such compression on the data, with typical compression ratios of up to 1:3.1.

  1. Knee joint passive stiffness and moment in sagittal and frontal planes markedly increase with compression.

    PubMed

    Marouane, H; Shirazi-Adl, A; Adouni, M

    2015-01-01

    Knee joints are subject to large compression forces in daily activities. Due to artefact moments and instability under large compression loads, biomechanical studies impose additional constraints to circumvent the compression position-dependency in response. To quantify the effect of compression on passive knee moment resistance and stiffness, two validated finite element models of the tibiofemoral (TF) joint, one refined with depth-dependent fibril-reinforced cartilage and the other less refined with homogeneous isotropic cartilage, are used. The unconstrained TF joint response in sagittal and frontal planes is investigated at different flexion angles (0°, 15°, 30° and 45°) up to 1800 N compression preloads. The compression is applied at a novel joint mechanical balance point (MBP) identified as a point at which the compression does not cause any coupled rotations in sagittal and frontal planes. The MBP of the unconstrained joint is located at the lateral plateau in small compressions and shifts medially towards the inter-compartmental area at larger compression forces. The compression force substantially increases the joint moment-bearing capacities and instantaneous angular rigidities in both frontal and sagittal planes. The varus-valgus laxities diminish with compression preloads despite concomitant substantial reductions in collateral ligament forces. While the angular rigidity would enhance the joint stability, the augmented passive moment resistance under compression preloads plays a role in supporting external moments and should as such be considered in the knee joint musculoskeletal models.

  2. Compressive Sequential Learning for Action Similarity Labeling.

    PubMed

    Qin, Jie; Liu, Li; Zhang, Zhaoxiang; Wang, Yunhong; Shao, Ling

    2016-02-01

    Human action recognition in videos has been extensively studied in recent years due to its wide range of applications. Instead of classifying video sequences into a number of action categories, in this paper, we focus on a particular problem of action similarity labeling (ASLAN), which aims at verifying whether a pair of videos contain the same type of action or not. To address this challenge, a novel approach called compressive sequential learning (CSL) is proposed by leveraging the compressive sensing theory and sequential learning. We first project data points to a low-dimensional space by effectively exploring an important property in compressive sensing: the restricted isometry property. In particular, a very sparse measurement matrix is adopted to reduce the dimensionality efficiently. We then learn an ensemble classifier for measuring similarities between pairwise videos by iteratively minimizing its empirical risk with the AdaBoost strategy on the training set. Unlike conventional AdaBoost, the weak learner for each iteration is not explicitly defined and its parameters are learned through greedy optimization. Furthermore, an alternative of CSL named compressive sequential encoding is developed as an encoding technique and followed by a linear classifier to address the similarity-labeling problem. Our method has been systematically evaluated on four action data sets: ASLAN, KTH, HMDB51, and Hollywood2, and the results show the effectiveness and superiority of our method for ASLAN.
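
    The projection step described above, a very sparse random measurement matrix chosen for its restricted-isometry behavior, is easy to sketch. The snippet below shows only that step, with illustrative dimensions and a sparse ±√s / 0 matrix in the spirit of common sparse random projections; the AdaBoost-based similarity classifier from the paper is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        d, k, s = 10_000, 128, 3            # input dim, projected dim, sparsity factor

        # Entries are +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s).
        probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
        R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(k, d), p=probs)

        x = rng.random(d)                   # a high-dimensional feature vector
        y = (R @ x) / np.sqrt(k)            # its low-dimensional projection
        print(y.shape)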

  3. Piston reciprocating compressed air engine

    SciTech Connect

    Cestero, L.G.

    1987-03-24

    A compressed air engine is described comprising: (a). a reservoir of compressed air, (b). two power cylinders each containing a reciprocating piston connected to a crankshaft and flywheel, (c). a transfer cylinder which communicates with each power cylinder and the reservoir, and contains a reciprocating piston connected to the crankshaft, (d). valve means controlled by rotation of the crankshaft for supplying compressed air from the reservoir to each power cylinder and for exhausting compressed air from each power cylinder to the transfer cylinder, (e). valve means controlled by rotation of the crankshaft for supplying from the transfer cylinder to the reservoir compressed air supplied to the transfer cylinder on the exhaust strokes of the pistons of the power cylinders, and (f). an externally powered fan for assisting the exhaust of compressed air from each power cylinder to the transfer cylinder and from there to the compressed air reservoir.

  4. A compressed primal-dual method for generating bivariate cubic L1 splines

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Fang, Shu-Cherng; Lavery, John E.

    2007-04-01

    In this paper, we develop a compressed version of the primal-dual interior point method for generating bivariate cubic L1 splines. Discretization of the underlying optimization model, which is a nonsmooth convex programming problem, leads to an overdetermined linear system that can be handled by interior point methods. Taking advantage of the special matrix structure of the cubic L1 spline problem, we design a compressed primal-dual interior point algorithm. Computational experiments indicate that this compressed primal-dual method is robust and is much faster than the ordinary (uncompressed) primal-dual interior point algorithm.

  5. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings that it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the size of the data, the better the transmission speed and the greater the time savings. In communication, we always want to transmit data efficiently and noise-free. This paper provides several techniques for lossless compression of text-type data and comparative results for multiple and single compression, which will help to find better compression output and to develop compression algorithms.
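
    As a concrete illustration of the single-versus-multiple comparison described above, the snippet below compresses the same text once and then a second time with three Python standard-library codecs (zlib, bz2, lzma). These stand in for the techniques surveyed in the paper rather than reproducing them; recompressing already-compressed output typically yields little or no further gain.

        import bz2, lzma, zlib

        text = b"Data compression is very necessary in business data processing. " * 200

        for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
            once = compress(text)
            twice = compress(once)
            print(f"{name}: original={len(text)}  single={len(once)}  double={len(twice)}")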

  6. Compression retaining piston

    SciTech Connect

    Quaglino, A.V. Jr.

    1987-06-16

    A piston apparatus is described for maintaining compression between the piston wall and the cylinder wall, that comprises the following: a generally cylindrical piston body, including: a head portion defining the forward end of the body; and a continuous side wall portion extending rearward from the head portion; a means for lubricating and preventing compression loss between the side wall portion and the cylinder wall, including an annular recessed area in the continuous side wall portion for receiving a quantity of fluid lubricant in fluid engagement between the wall of the recessed area and the wall of the cylinder; first and second resilient, elastomeric, heat-resistant rings positioned in grooves along the wall of the continuous side wall portion, above and below the annular recessed area, each ring engaging the cylinder wall to reduce loss of lubricant from the recessed area during operation of the piston; a first pump means for providing fluid lubricant to engine components other than the pistons; and a second pump means for providing fluid lubricant to the recessed area in the continuous side wall portion of the piston. The first and second pump means obtain lubricant from a common source, and the second pump means includes a flow line that supplies oil from a predetermined level above the level of oil provided to the first pump means. This is so that, should the oil level to the second pump means fall below the predetermined level, the loss of oil to the recessed area in the continuous side wall portion of the piston would result in loss of compression and shutdown of the engine.

  7. International magnetic pulse compression

    SciTech Connect

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12--14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card -- its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  8. International magnetic pulse compression

    NASA Astrophysics Data System (ADS)

    Kirbie, H. C.; Newton, M. A.; Siemens, P. D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12-14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card - its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  9. Avalanches in Wood Compression

    NASA Astrophysics Data System (ADS)

    Mäkinen, T.; Miksic, A.; Ovaska, M.; Alava, Mikko J.

    2015-07-01

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free.

  10. Avalanches in Wood Compression.

    PubMed

    Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J

    2015-07-31

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free.

  11. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.

  12. Avalanches in Wood Compression.

    PubMed

    Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J

    2015-07-31

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free. PMID:26274428

  13. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
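
    A rough sketch of the colormap-sorting idea follows. It is not the authors' algorithm, just a synthetic demonstration that sorting the palette (here, a scrambled grayscale map sorted by intensity) and remapping the index array restores pixel-to-pixel correlation, which a generic lossless coder such as zlib can then exploit.

        import numpy as np, zlib

        # Smooth 256x256 "scene" with intensities 0..255.
        x = np.linspace(0, 4 * np.pi, 256)
        scene = ((np.sin(x)[:, None] + np.cos(x)[None, :] + 2) / 4 * 255).astype(np.uint8)

        rng = np.random.default_rng(2)
        perm = rng.permutation(256)      # scrambled colormap: index i stores intensity perm[i]
        inv = np.argsort(perm)           # intensity v -> colormap index that stores it
        idx_unsorted = inv[scene]        # index image under the scrambled colormap

        rank = np.argsort(inv)           # old index -> its position once the map is sorted by intensity
        idx_sorted = rank[idx_unsorted]  # remapped index image

        print("unsorted:", len(zlib.compress(idx_unsorted.astype(np.uint8).tobytes())))
        print("sorted:  ", len(zlib.compress(idx_sorted.astype(np.uint8).tobytes())))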

  14. Free compression tube. Applications

    NASA Astrophysics Data System (ADS)

    Rusu, Ioan

    2012-11-01

    During flight, a vehicle's propulsion energy must overcome gravity, displace the air masses along the vehicle trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy of the air masses reflected by the impact with the flying vehicle. Optimizing flight by increasing speed and reducing fuel consumption has directed research in the field of aerodynamics. The flight-vehicle shapes obtained through wind tunnel studies optimize the impact with the air masses and the airflow along the vehicle. Through energy balance studies for vehicles in flight, the author, Ioan Rusu, directed his research toward reducing the energy lost at the vehicle's impact with air masses. In this respect, as compared to classical solutions for building flight vehicle aerodynamic surfaces that reduce the impact and friction with air masses, Ioan Rusu has invented a device he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania, OSIM, deposit f 2011 0352. Mounted in front of a flight vehicle, it largely eliminates the impact and friction of the air masses with the vehicle body. The air masses come into contact with the air inside the free compression tube, and the air-solid friction is eliminated and replaced by air-to-air friction.

  15. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
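
    A worked example of the spatial-frequency relation quoted above, f = r·2^(-L), is given below for an assumed display resolution; the full threshold model (a function of level, orientation, and resolution) is not reproduced here.

        r = 32.0                           # display visual resolution, pixels/degree (assumed)
        for level in range(1, 6):          # DWT levels 1..5
            f = r * 2 ** (-level)          # spatial frequency of that band, cycles/degree
            print(f"level {level}: {f:.2f} cy/deg")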

  16. Compressive sensing in medical imaging.

    PubMed

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  17. Energy transfer in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre

    1995-01-01

    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped-Quasi-Normal-Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well-known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy-containing ranges.

  18. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  19. ECG data compression by modeling.

    PubMed Central

    Madhukar, B.; Murthy, I. S.

    1992-01-01

    This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940
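
    A heavily simplified sketch in the same spirit is shown below: plain truncation of the DCT of a synthetic signal. The paper's separate parametric modeling of the low- and high-frequency regions and the DPCM stage are not reproduced, so this only illustrates why a smooth, quasi-periodic signal compresses well in the DCT domain.

        import numpy as np
        from scipy.fft import dct, idct

        t = np.linspace(0, 2, 1000)
        ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)  # toy signal, not real ECG

        coeffs = dct(ecg, norm="ortho")
        keep = 50                               # retain the 50 lowest-frequency coefficients
        compressed = coeffs[:keep]              # 1:20 reduction in stored values

        restored = idct(np.pad(compressed, (0, len(ecg) - keep)), norm="ortho")
        rms_err = np.sqrt(np.mean((ecg - restored) ** 2))
        print(f"compression 1:{len(ecg) // keep}, rms error {rms_err:.4f}")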

  20. Shock compression of precompressed deuterium

    SciTech Connect

    Armstrong, M R; Crowhurst, J C; Zaug, J M; Bastea, S; Goncharov, A F; Militzer, B

    2011-07-31

    Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 μm). We further report a fast transition in shock wave compressed solid deuterium that is consistent with the ramp to shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.

  1. A PDF closure model for compressible turbulent chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1992-01-01

    The objective of the proposed research project was the analysis of single-point closures based on probability density functions (pdf) and characteristic functions, and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows, with and without chemical reactions, were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project was concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.

  2. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  3. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  4. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160

  5. Compressive sensing of sparse tensors.

    PubMed

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan

    2014-10-01

    Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than that offered by KCS.

  6. POLYCOMP: Efficient and configurable compression of astronomical timelines

    NASA Astrophysics Data System (ADS)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
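    A minimal sketch of the chunkwise polynomial-fitting idea with a user-set error bound is shown below; it is not the polycomp implementation itself, which also uses Chebyshev transforms and its own on-disk format.

```python
import numpy as np

# Each chunk of a smooth series is replaced by polynomial coefficients if the
# reconstruction error stays below a user-chosen bound; otherwise it is kept raw.
def compress_chunks(x, chunk=64, deg=5, max_err=1e-3):
    out = []
    t = np.linspace(0.0, 1.0, chunk)
    for i in range(0, len(x) - chunk + 1, chunk):
        y = x[i:i + chunk]
        coeffs = np.polyfit(t, y, deg)
        err = np.max(np.abs(np.polyval(coeffs, t) - y))
        out.append(("poly", coeffs) if err <= max_err else ("raw", y))
    return out

t = np.linspace(0, 1, 4096)
pointing = np.sin(2 * np.pi * t) + 0.1 * t**2          # a smooth, noiseless stream
enc = compress_chunks(pointing)
n_poly = sum(kind == "poly" for kind, _ in enc)
print(f"{n_poly}/{len(enc)} chunks stored as 6 coefficients instead of 64 samples")
```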

  7. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
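    The sketch below reproduces the spirit of the experiment on a synthetic grid using PyWavelets, assuming that 'db3' (a 6-tap Daubechies filter) corresponds to the DAUB6 filter named in the abstract.

```python
import numpy as np
import pywt

# Transform a (synthetic) elevation grid, zero the smallest 95% of wavelet
# coefficients, and reconstruct; residuals mimic the paper's experiment.
yy, xx = np.mgrid[0:256, 0:256]
dem = 100 + 20 * np.sin(xx / 40.0) + 15 * np.cos(yy / 25.0)      # fake terrain

coeffs = pywt.wavedec2(dem, 'db3', level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
thresh = np.percentile(np.abs(arr), 95)                          # keep the largest 5%
arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)

rec = pywt.waverec2(pywt.array_to_coeffs(arr_sparse, slices, output_format='wavedec2'),
                    'db3')[:256, :256]
print("max elevation residual:", np.max(np.abs(rec - dem)))
```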

  8. Compressibility of Nanocrystalline Forsterite

    SciTech Connect

    Couvy, H.; Chen, J; Drozd, V

    2010-01-01

    We established an equation of state for nanocrystalline forsterite using multi-anvil press and diamond anvil cell. Comparative high-pressure and high-temperature experiments have been performed up to 9.6 GPa and 1,300 °C. We found that nanocrystalline forsterite is more compressible than macro-powder forsterite. The bulk modulus of nanocrystalline forsterite is equal to 123.3 (±3.4) GPa whereas the bulk modulus of macro-powder forsterite is equal to 129.6 (±3.2) GPa. This difference is attributed to a weakening of the elastic properties of the grain boundaries and triple junctions and their significant contribution in the nanocrystalline sample compared to the bulk counterpart. The bulk modulus at zero pressure of the forsterite grain boundary was determined to be 83.5 GPa.
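    For reference, compression data of this kind are commonly fitted with the third-order Birch-Murnaghan equation of state; the abstract does not state which EOS form was actually used here.

```latex
% Third-order Birch-Murnaghan equation of state commonly fitted to such P-V data
% (assumption: the form used is not named in the abstract). K_0 is the zero-pressure
% bulk modulus, K_0' its pressure derivative, and V_0 the zero-pressure volume.
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
       \left\{1 + \frac{3}{4}\left(K_0' - 4\right)\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\}
```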

  9. Vapor compression distillation module

    NASA Technical Reports Server (NTRS)

    Nuccio, P. P.

    1975-01-01

    A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.

  10. Compressed quantum simulation

    SciTech Connect

    Kraus, B.

    2014-12-04

    Here, I summarize the results presented in B. Kraus, Phys. Rev. Lett. 107, 250503 (2011). Recently, it has been shown that certain circuits, the so-called match gate circuits, can be compressed to an exponentially smaller universal quantum computation. We use this result to demonstrate that the simulation of a 1-D Ising chain consisting of n qubits can be performed on a universal quantum computer running on only log(n) qubits. We show how the adiabatic evolution can be simulated on this exponentially smaller system and how the magnetization can be measured. Since the Ising model displays a quantum phase transition, this result implies that a quantum phase transition of a very large system can be observed with current technology.

  11. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
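    The sketch below illustrates only the final assignment step described above, mapping each 24-bit pixel to an 8-bit pointer into a 256-entry LUT; the patent's recursive subdivision of color space, which builds the LUT and limits the nearest-neighbor search to a few candidates, is not reproduced.

```python
import numpy as np

# Replace each 24-bit pixel colour by an 8-bit pointer to its nearest LUT entry.
rng = np.random.default_rng(1)
lut = rng.integers(0, 256, size=(256, 3), dtype=np.uint8)       # 256 LUT colours (hypothetical)
pixels = rng.integers(0, 256, size=(10000, 3), dtype=np.uint8)  # image pixels as RGB triples

# Brute-force nearest neighbour; the patent prunes this search via colour-space subdivision.
d2 = ((pixels[:, None, :].astype(np.int64) - lut[None, :, :].astype(np.int64)) ** 2).sum(axis=2)
pointers = d2.argmin(axis=1).astype(np.uint8)                   # one 8-bit index per pixel

decoded = lut[pointers]                                         # 24-bit colours recovered from LUT
print(pointers.nbytes, "bytes of pointers vs", pixels.nbytes, "bytes of raw RGB")
```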

  12. Compressed Wavefront Sensing

    PubMed Central

    Polans, James; McNabb, Ryan P.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    We report on an algorithm for fast wavefront sensing that incorporates sparse representation for the first time in practice. The partial derivatives of optical wavefronts were sampled sparsely with a Shack-Hartmann wavefront sensor (SHWFS) by randomly subsampling the original SHWFS data to as little as 5%. Reconstruction was performed by a sparse representation algorithm that utilized the Zernike basis. We name this method SPARZER. Experiments on real and simulated data attest to the accuracy of the proposed techniques as compared to traditional sampling and reconstruction methods. We have made the corresponding data set and software freely available online. Compressed wavefront sensing offers the potential to increase the speed of wavefront acquisition and to defray the cost of SHWFS devices. PMID:24690703

  13. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  14. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error-rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves
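    As a toy illustration of the delta-style header compression these schemes build on (not the RFC 1144 wire format, and with hypothetical field names), the sketch below sends only header fields that changed since the previous packet, which also shows why a lost packet desynchronizes the shared context.

```python
# Toy illustration of delta-style header compression: the sender transmits only
# the fields that changed since the last packet, so a lost packet desynchronizes
# the shared context ("loss propagation").  Field names are hypothetical.
def compress_header(header: dict, context: dict) -> dict:
    deltas = {k: v for k, v in header.items() if context.get(k) != v}
    context.update(header)              # both ends must track the same context
    return deltas

def decompress_header(deltas: dict, context: dict) -> dict:
    context.update(deltas)
    return dict(context)

ctx_tx, ctx_rx = {}, {}
pkt1 = {"src_port": 4040, "dst_port": 80, "seq": 1000, "window": 8192}
pkt2 = {"src_port": 4040, "dst_port": 80, "seq": 1460, "window": 8192}

for pkt in (pkt1, pkt2):
    wire = compress_header(pkt, ctx_tx)
    print("fields sent:", wire)          # all 4 fields first, then only {"seq": 1460}
    assert decompress_header(wire, ctx_rx) == pkt
```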

  15. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charge comes from the Einstein photoelectric conversion effect. Following the manufacturing design principle, we allow each working component to be altered by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The savings in data storage are immense, and the order of magnitude of the saving is inversely proportional to the target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as a dual Photon Detector (PD) analog circuit for change detection that predicts whether to skip a frame or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level by biasing the charge-transport voltage toward neighboring buckets or, if not, toward the ground drainage. Since the snapshot image is not a video, the usual MPEG video compression and Huffman entropy codecs, as well as the powerful WaveNet Wrapper, cannot be applied at the sensor level. We compare (i) pre-processing by FFT, thresholding of the significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery performed selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, when selecting new frames, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data a la [Φ]M,N: M(t) = K(t) log N(t).

  16. Trigger point therapy.

    PubMed

    Janssens, L A

    1992-03-01

    Trigger points (TP) are objectively demonstrable foci in muscles. They are painful on compression and trigger pain in a referred area. This area may be the only locus of complaint in humans. In dogs we cannot prove the existence of referred zones of pain. Therefore, we can only diagnose a TP-induced claudication if we cannot find bone, joint, or neurologic abnormalities, and we do find TPs that disappear after treatment together with the original lameness. Several methods have been developed to demonstrate TP existence objectively. These are pressure algometry, pressure threshold measurements, magnetic resonance thermography, and histology. In humans, 71% of the TPs described are acupuncture points. TP treatment consists of TP stimulation with non-invasive or invasive methods such as dry needling or injections. In the dog, ten TPs are described in two categories of clinical patients: first, those with one or a few TPs that respond favorably to treatment (approximately 80% success in approximately 2-3 weeks); second, those with many TPs that respond poorly to treatment. Most probably the latter group are fibromyalgia patients.

  17. Compression failure of composite laminates

    NASA Technical Reports Server (NTRS)

    Pipes, R. B.

    1983-01-01

    This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.

  18. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
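    A minimal sketch of the reported pipeline, wavelet transform, zeroing of small coefficients, then lossless coding, is shown below; PyWavelets and zlib stand in for whatever transform and lossless coder the project actually used.

```python
import zlib
import numpy as np
import pywt

# Wavelet-transform the signal, zero the small (noise-dominated) coefficients,
# then losslessly encode the result; compare against lossless coding alone.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 4096)
signal = np.exp(-((t - 0.5) ** 2) / 0.002) + 0.01 * rng.normal(size=t.size)  # target + noise

coeffs = pywt.wavedec(signal, 'db4', level=6)
arr, slices = pywt.coeffs_to_array(coeffs)
arr[np.abs(arr) < np.percentile(np.abs(arr), 80)] = 0.0      # zero 80% of the coefficients

raw = zlib.compress(signal.astype(np.float32).tobytes(), 9)
sparse = zlib.compress(arr.astype(np.float32).tobytes(), 9)
print("lossless only:", len(raw), "bytes; wavelet+zeroing then lossless:", len(sparse), "bytes")
```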

  19. Streaming Compression of Hexahedral Meshes

    SciTech Connect

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes, even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  20. Pressure Oscillations in Adiabatic Compression

    ERIC Educational Resources Information Center

    Stout, Roland

    2011-01-01

    After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…

  1. Compression and expansion at the Bevalac

    SciTech Connect

    Poskanzer, A.M.; Doss, K.G.R.; Gustafsson, H.A.; Gutbrod, H.H.; Kolb, B.; Loehner, H.; Ludewigt, B.; Renner, T.; Riedesel, H.; Ritter, H.G.

    1984-09-01

    Recent experimental results from 4π detectors at the Bevalac are presented, with emphasis on the Plastic Ball. Heavy nuclei, in central collisions at Bevalac energies, stop in their center of mass producing a region of equilibrated, hot matter. Some of the energy is stored as potential energy of compression. At the same time the pressure builds up, producing a sidewise collective flow of nuclear matter. The system then expands until the density is reduced so that the chemical equilibria producing the light composite nuclei freeze out, and then even further until the thermal two-body interactions freeze out. Each of these points is discussed. 9 references.

  2. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  3. Hyperspectral fluorescence microscopy based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed; Candes, Emmanuel; Dahan, Maxime

    2012-03-01

    In fluorescence microscopy, one can distinguish two kinds of imaging approaches, wide field and raster scan microscopy, differing by their excitation and detection scheme. In both imaging modalities the acquisition is independent of the information content of the image. Rather, the number of acquisitions N is imposed by the Nyquist-Shannon theorem. However, in practice, many biological images are compressible (or, equivalently here, sparse), meaning that they depend on a number of degrees of freedom K that is smaller than their size N. Recently, the mathematical theory of compressed sensing (CS) has shown how the sensing modality could take advantage of the image sparsity to reconstruct images with no loss of information while largely reducing the number M of acquisitions. Here we present a novel fluorescence microscope designed along the principles of CS. It uses a spatial light modulator (DMD) to create structured wide field excitation patterns and a sensitive point detector to measure the emitted fluorescence. On sparse fluorescent samples, we could achieve a compression ratio N/M of up to 64, meaning that an image can be reconstructed with a number of measurements of only 1.5% of its pixel number. Furthermore, we extend our CS acquisition scheme to a hyperspectral imaging system.
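    The sketch below simulates a single-pixel-style CS acquisition with random ±1 patterns and reconstructs a sparse scene with ISTA, a basic l1 solver; it is illustrative only and is not the instrument or the reconstruction code used by the authors.

```python
import numpy as np

# Single-pixel-style CS simulation: random +/-1 patterns (a DMD can realize these
# via differential 0/1 measurements) and ISTA reconstruction of a sparse scene.
rng = np.random.default_rng(3)
n, m, k = 256, 64, 6                                        # pixels, measurements, bright emitters
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)   # sparse fluorescent scene

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)     # structured excitation patterns
y = Phi @ x                                                 # point-detector readings

def ista(Phi, y, lam=1e-3, iters=3000):
    L = np.linalg.norm(Phi, 2) ** 2                         # Lipschitz constant of the gradient
    xhat = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = xhat - Phi.T @ (Phi @ xhat - y) / L             # gradient step
        xhat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return xhat

xhat = ista(Phi, y)
print("relative reconstruction error:", np.linalg.norm(xhat - x) / np.linalg.norm(x))
```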

  4. Analytical model for ramp compression

    NASA Astrophysics Data System (ADS)

    Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun

    2016-08-01

    An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic region width, material properties, profile of the pressure pulse, and the pressure pulse duration can be reasonably allocated or chosen. To demonstrate and study this model, laser-direct-driven ramp compression experiments and code simulation are performed successively, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can be used as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.

  5. Compressive strength of carbon fibers

    SciTech Connect

    Prandy, J.M. ); Hahn, H.T. )

    1991-01-01

    Most composites are weaker in compression than in tension, which is due to the poor compressive strength of the load bearing fibers. The present paper discusses the compressive strengths and failure modes of 11 different carbon fibers: PAN-AS1, AS4, IM6, IM7, T700, T300, GY-30, pitch-75, ultra high modulus (UHM), high modulus (HM), and high strength (HS). The compressive strength was determined by embedding a fiber bundle in a transparent epoxy matrix and testing in compression. The resin allows for the containment and observation of failure during and after testing while also providing lateral support to the fibers. Scanning electron microscopy (SEM) was used to determine the global failure modes of the fibers.

  6. 14. Detail, upper chord connection point on upstream side of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. Detail, upper chord connection point on upstream side of truss, showing connection of upper chord, laced vertical compression member, strut, counters, and laterals. - Dry Creek Bridge, Spanning Dry Creek at Cook Road, Ione, Amador County, CA

  7. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
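    For reference, a standard CoSaMP implementation is sketched below; the paper's modification, which restricts the support to connected subtrees of the wavelet coefficient tree, is not reproduced.

```python
import numpy as np

# Reference implementation of standard CoSaMP (Needell & Tropp); the tree-aware
# variant described in the abstract is not reproduced here.
def cosamp(Phi, y, s, iters=30):
    m, n = Phi.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = Phi.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]                 # 2s largest proxy entries
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-s:]                          # prune to the s largest
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - Phi @ x
        if np.linalg.norm(r) < 1e-10:
            break
    return x

rng = np.random.default_rng(4)
n, m, s = 512, 128, 10
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
x_rec = cosamp(Phi, Phi @ x_true, s)
print("max reconstruction error:", np.max(np.abs(x_rec - x_true)))
```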

  8. Increasing FTIR spectromicroscopy speed and resolution through compressive imaging

    SciTech Connect

    Gallet, Julien; Riley, Michael; Hao, Zhao; Martin, Michael C

    2007-10-15

    At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise; however, spectral images must be obtained one point at a time and the spatial resolution is limited by the effects of diffraction. One technique to assist in speeding up spectral image acquisition is described here and uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually, thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with ALS Beamline 1.4.3's Nic-Plan infrared microscope, Beamline 1.4.4's Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.

  9. Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Fichtner, Andreas; de la Puente, Josep; Hanzich, Mauricio

    2015-04-01

    We present compression techniques tailored to iterative nonlinear minimization methods that significantly reduce the memory requirements to store the forward wavefield for the computation of sensitivity kernels. Full-waveform inversion on 3d data sets requires massive computing and memory capabilities. Adjoint techniques offer a powerful tool to compute the first and second derivatives. However, due to the asynchronous nature of forward and adjoint simulations, a severe bottleneck is introduced by the necessity to access both wavefields simultaneously when computing sensitivity kernels. There exist two opposing strategies to deal with this challenge. On the one hand, conventional approaches save the whole forward wavefield to the disk, which yields a significant I/O overhead and might require several terabytes of storage capacity per seismic event. On the other hand, checkpointing techniques allow one to trade almost arbitrary reductions in memory requirements for a potentially large number of additional forward simulations. We propose an alternative approach that strikes a balance between memory requirements and the need for additional computations. Here, we aim at compressing the forward wavefield in such a way that (1) the I/O overhead is reduced substantially without the need for additional simulations, (2) the costs for compressing/decompressing the wavefield are negligible, and (3) the approximate derivatives resulting from the compressed forward wavefield do not affect the rate of convergence of a Newton-type minimization method. To this end, we apply an adaptive re-quantization of the displacement field that uses dynamically adjusted floating-point accuracies, i.e., a locally varying number of bits, to store the data. Furthermore, the spectral element functions are adaptively downsampled to a lower polynomial degree. In addition, a sliding-window cubic spline re-interpolates the temporal snapshots to recover a smooth signal. Moreover, a preprocessing step
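    A minimal sketch of re-quantizing a wavefield snapshot to a fixed, reduced number of bits is shown below; the locally adaptive accuracy, polynomial-degree downsampling, and spline re-interpolation described in the abstract are not reproduced.

```python
import numpy as np

# Re-quantize a forward-wavefield snapshot to k bits per sample.  The paper adapts
# k locally; here a single global k is used for illustration.
def quantize(field, k_bits):
    lo, hi = field.min(), field.max()
    levels = 2 ** k_bits - 1
    q = np.round((field - lo) / (hi - lo) * levels).astype(np.uint16)
    return q, lo, hi

def dequantize(q, lo, hi, k_bits):
    return lo + q.astype(np.float64) / (2 ** k_bits - 1) * (hi - lo)

rng = np.random.default_rng(5)
snapshot = rng.normal(size=(128, 128))               # stand-in displacement field
q, lo, hi = quantize(snapshot, k_bits=10)            # 10 bits instead of 64
rec = dequantize(q, lo, hi, k_bits=10)
print("max quantization error:", np.max(np.abs(rec - snapshot)))
print("compression factor (before entropy coding): %.1f" % (64 / 10))
```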

  10. Compressive Sensing for Quantum Imaging

    NASA Astrophysics Data System (ADS)

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging: simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. The technique gives a theoretical speedup N²/log N for N-dimensional entanglement over the standard raster scanning technique

  11. Variable compression ratio control

    SciTech Connect

    Johnson, K.A.

    1988-04-19

    In a four cycle engine that includes a crankshaft having a plural number of main shaft sections defining the crankshaft rotational axis and a plural number of crank arms defining orbital shaft sections, a plural number of combustion cylinders, a movable piston within each cylinder, each cylinder and its associated piston defining a combustion chamber, a connecting rod connecting each piston to an orbital shaft section of the crankshaft, and a plural number of stationary support walls spaced along the crankshaft axis for absorbing crankshaft forces: the improvement is described comprising means for adjustably supporting the crankshaft on the stationary walls such that the crankshaft rotational axis is adjustable along the piston-cylinder axis for the purpose of varying a resulting engine compression ratio; the adjustable support means comprising a circular cavity in each stationary wall. A circular disk swivably is seated in each cavity, each circular disk having a circular opening therethrough eccentric to the disk center. The crankshaft is arranged so that respective ones of its main shaft sections are located within respective ones of the circular openings; means for rotating each circular disk around its center so that the main shaft sections of the crankshaft are adjusted toward and away from the combustion chamber; a pinion gear on an output end of the crankshaft in axial alignment with and positioned beyond the respective ones of the main shaft sections, and a rotary output gear located about and engaged with teeth extending from the pinion gear.

  12. Compression relief engine brake

    SciTech Connect

    Meneely, V.A.

    1987-10-06

    A compression relief brake is described for four cycle internal-combustion engines, comprising: a pressurized oil supply; means for selectively pressurizing a hydraulic circuit with oil from the oil supply; a master piston and cylinder communicating with a slave piston and cylinder via the hydraulic circuit; an engine exhaust valve mechanically coupled to the engine and timed to open during the exhaust cycle of the engine, the exhaust valve coupled to the slave piston. The exhaust valve is spring-biased to a closed state to contact a valve seat; a sleeve is frictionally and slidably disposed within a cavity defined by the slave piston, which cavity communicates with the hydraulic circuit. When the hydraulic circuit is selectively pressurized and the engine is operating, the sleeve entraps an incompressible volume of oil within the cavity to generate a displacement of the slave piston within the slave cylinder, whereby a first gap is maintained between the exhaust valve and its associated seat; and means for reciprocally activating the master piston for increasing the pressure within the previously pressurized hydraulic circuit during at least a portion of the expansion cycle of the engine, whereby a second gap is reciprocally maintained between the exhaust valve and its associated seat.

  13. Advances in compressible turbulent mixing

    SciTech Connect

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  14. Best compression: Reciprocating or rotary?

    SciTech Connect

    Cahill, C.

    1997-07-01

    A compressor is a device used to increase the pressure of a compressible fluid. The inlet pressure can vary from a deep vacuum to a high positive pressure. The discharge pressure can range from subatmospheric levels to tens of thousands of pounds per square inch. Compressors come in numerous forms, but for oilfield applications there are two primary types, reciprocating and rotary. Both reciprocating and rotary compressors are grouped in the intermittent mode of compression. Intermittent compression is cyclic in nature, in that a specific quantity of gas is ingested by the compressor, acted upon, and discharged before the cycle is repeated. Reciprocating compression is the most common form of compression used for oilfield applications. Rotary screw compressors have a long history but are relative newcomers to oilfield applications. The rotary screw compressor (technically a helical rotor compressor) dates back to 1878, when the first rotary screw was manufactured for the purpose of compressing air. Today thousands of rotary screw compression packages are being used throughout the world to compress natural gas.

  15. Compression Pylon Reduces Interference Drag

    NASA Technical Reports Server (NTRS)

    Patterson, James C., Jr.; Carlson, John R.

    1989-01-01

    New design reduces total drag by 4 percent. Pylon reduces fuselage/wing/pylon/nacelle-channel compressibility losses without creating additional drag associated with other areas of pylon. Minimum cross-sectional area of channel occurs at trailing edge of wing. Velocity of flow in channel always nearly subsonic, reducing compressibility losses associated with supersonic flow. Flow goes past trailing edge before returning to ambient conditions, resulting in no additional drag to aircraft. Designed to compress flow beneath wing by reducing velocity in this channel, thereby reducing shockwave losses and providing increase in wing lift.

  16. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  17. Partial transparency of compressed wood

    NASA Astrophysics Data System (ADS)

    Sugimoto, Hiroyuki; Sugimori, Masatoshi

    2016-05-01

    We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path in the sample, because the refractive indices of the cell constituents and of the air in the lumina differ. In this study, wood compressed so as to close the lumina showed optical transparency. Because compressing the wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  18. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  19. Compressibility effects on dynamic stall

    NASA Astrophysics Data System (ADS)

    Carr, Lawrence W.; Chandrasekhara, M. S.

    1996-12-01

    Dynamic stall delay of flow over airfoils rapidly pitching past the static stall angle has been studied by many scientists. However, the effect of compressibility on this dynamic stall behavior has been less comprehensively studied. This review presents a detailed assessment of research performed on this subject, including a historical review of work performed on both aircraft and helicopters, and offers insight into the impact of compressibility on the complex aerodynamic phenomenon known as dynamic stall. It also documents the major effect that compressibility can have on dynamic stall events, and the complete change of physics of the stall process that can occur as free-stream Mach number is increased.

  20. A Quadratic Closure for Compressible Turbulence

    SciTech Connect

    Futterman, J A

    2008-09-16

    We have investigated a one-point closure model for compressible turbulence based on third- and higher order cumulant discard for systems undergoing rapid deformation, such as might occur downstream of a shock or other discontinuity. In so doing, we find the lowest order contributions of turbulence to the mean flow, which lead to criteria for Adaptive Mesh Refinement. Rapid distortion theory (RDT) as originally applied by Herring closes the turbulence hierarchy of moment equations by discarding third order and higher cumulants. This is similar to the fourth-order cumulant discard hypothesis of Millionshchikov, except that the Millionshchikov hypothesis was taken to apply to incompressible homogeneous isotropic turbulence generally, whereas RDT is applied only to fluids undergoing a distortion that is 'rapid' in the sense that the interaction of the mean flow with the turbulence overwhelms the interaction of the turbulence with itself. It is also similar to Gaussian closure, in which both second and fourth-order cumulants are retained. Motivated by RDT, we develop a quadratic one-point closure for rapidly distorting compressible turbulence, without regard to homogeneity or isotropy, and make contact with two-equation turbulence models, especially the K-ε and K-L models, and with linear instability growth. In the end, we arrive at criteria for Adaptive Mesh Refinement in Finite Volume simulations.

  1. The second modern condition? Compressed modernity as internalized reflexive cosmopolitization.

    PubMed

    Kyung-Sup, Chang

    2010-09-01

    Compressed modernity is a civilizational condition in which economic, political, social and/or cultural changes occur in an extremely condensed manner in respect to both time and space, and in which the dynamic coexistence of mutually disparate historical and social elements leads to the construction and reconstruction of a highly complex and fluid social system. During what Beck considers the second modern stage of humanity, every society reflexively internalizes cosmopolitanized risks. Societies (or their civilizational conditions) are thereby being internalized into each other, making compressed modernity a universal feature of contemporary societies. This paper theoretically discusses compressed modernity as nationally ramified from reflexive cosmopolitization, and, then, comparatively illustrates varying instances of compressed modernity in advanced capitalist societies, un(der)developed capitalist societies, and system transition societies. In lieu of a conclusion, I point out the declining status of national societies as the dominant unit of (compressed) modernity and the interactive acceleration of compressed modernity among different levels of human life ranging from individuals to the global community.

  2. Speed of Compression of Magnetosphere by CME Clouds

    NASA Astrophysics Data System (ADS)

    Nanan, B.; Alleyne, H.; Walker, S.; Lucek, E.; Reme, H.; Fazakerley, A.

    2007-12-01

    The multi-point Cluster observations provide the opportunity to study the speed of compression of the magnetosphere at the impact of extreme solar events such as CMEs. The four-point Cluster FGM (high resolution), CIS and PEACE data during the passage of 17 CME clouds during 2001-2005, together with models of the magnetosphere and magnetopause, are used to obtain the speed of compression of the dayside magnetosphere. The study shows that the speed of compression (within three seconds of impact) increases with the dynamic pressure of the CMEs, and that this speed exceeds the speed of the CMEs in some (five) cases (suggesting impulsive response) when the dynamic pressure of the CMEs exceeds about 20 nPa. The magnetosphere is also found to undergo damped oscillations for about two minutes after the impact of some extreme CMEs (24 October 2003 and 29 October 2003) until the magnetic pressure outside and inside the magnetopause balances. The speed of compression is also found to increase with the negative IMF Bz of the CME, suggesting that part of the compression is due to CME pressure and another part is due to magnetic reconnection. The plasma data (PEACE and CIS), though of low resolution (4 seconds), are being analysed to check whether the magnetic field and plasma move together or undergo differential motion (important for magnetic field-plasma interactions at short time scales).

  3. Internal roll compression system

    DOEpatents

    Anderson, Graydon E.

    1985-01-01

    This invention is a machine for squeezing water out of peat or other material of low tensile strength; the machine including an inner roll eccentrically positioned inside a tubular outer roll, so as to form a gradually increasing pinch area at one point therebetween, so that, as the rolls rotate, the material is placed between the rolls, and gets wrung out when passing through the pinch area.

  4. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    PubMed

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-12-05

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.

  5. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4∼2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  6. [New aspects of compression therapy].

    PubMed

    Partsch, Bernhard; Partsch, Hugo

    2016-06-01

    In this review article the mechanisms of action of compression therapy are summarized and a survey of materials is presented together with some practical advice on how and when these different devices should be applied. Some new experimental findings regarding the optimal dosage (= compression pressure) concerning an improvement of venous hemodynamics and a reduction of oedema are discussed. It is shown that stiff, non-yielding material applied with adequate pressure provides hemodynamically superior effects compared to elastic material and that relatively low pressures reduce oedema. Compression over the calf is more important for increasing calf pump function than graduated compression. In patients with mixed, arterial-venous ulcers and an ABPI over 0.6, inelastic bandages not exceeding a sub-bandage pressure of 40 mmHg may increase the arterial flow and improve venous pumping function. PMID:27259340

  7. Efficient Decoding of Compressed Data.

    ERIC Educational Resources Information Center

    Bassiouni, Mostafa A.; Mukherjee, Amar

    1995-01-01

    Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
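    As a baseline for the decoding-speed discussion, the sketch below shows plain bit-by-bit prefix decoding with a hypothetical code table; multibit (table-driven) decoding speeds this up by consuming several bits per lookup.

```python
# Baseline bit-by-bit prefix (Huffman) decoding.  The code table below is
# hypothetical, not one derived from real symbol frequencies; multibit methods
# replace the per-bit loop with lookups indexed by several bits at once.
CODES = {"0": "e", "10": "t", "110": "a", "111": "o"}   # prefix-free code table

def decode(bitstring: str) -> str:
    out, buf = [], ""
    for bit in bitstring:
        buf += bit
        if buf in CODES:            # a complete codeword has been accumulated
            out.append(CODES[buf])
            buf = ""
    return "".join(out)

print(decode("0101100111"))         # -> "etaeo"
```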

  8. Compressed gas fuel storage system

    DOEpatents

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  9. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.
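    For reference, the artificial compressibility formulation (Chorin's) that such methods share replaces the incompressibility constraint with a pseudo-time pressure equation; the specific variants compared in the abstract differ in how this coupled system is marched and preconditioned.

```latex
% Artificial compressibility formulation (Chorin): a pseudo-time pressure term with
% parameter \beta is added to the continuity equation, so the coupled system can be
% marched in pseudo-time \tau until \nabla\cdot u \to 0 (density absorbed into p).
\frac{\partial p}{\partial \tau} + \beta\,\frac{\partial u_j}{\partial x_j} = 0, \qquad
\frac{\partial u_i}{\partial \tau} + \frac{\partial (u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i} + \nu\,\frac{\partial^2 u_i}{\partial x_j \partial x_j}
```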

  10. Dynamics of Strongly Compressible Turbulence

    NASA Astrophysics Data System (ADS)

    Towery, Colin; Poludnenko, Alexei; Hamlington, Peter

    2015-11-01

    Strongly compressible turbulence, wherein the turbulent velocity fluctuations directly generate compression effects, plays a critical role in many important scientific and engineering problems of interest today, for instance in the processes of stellar formation and also hypersonic vehicle design. This turbulence is very unusual in comparison to "normal," weakly compressible and incompressible turbulence, which is relatively well understood. Strongly compressible turbulence is characterized by large variations in the thermodynamic state of the fluid in space and time, including excited acoustic modes, strong, localized shock and rarefaction structures, and rapid heating due to viscous dissipation. The exact nature of these thermo-fluid dynamics has yet to be discerned, which greatly limits the ability of current computational engineering models to successfully treat these problems. New direct numerical simulation (DNS) results of strongly compressible isotropic turbulence will be presented along with a framework for characterizing and evaluating compressible turbulence dynamics and a connection will be made between the present diagnostic analysis and the validation of engineering turbulence models.

  11. Stress Relaxation for Granular Materials near Jamming under Cyclic Compression

    NASA Astrophysics Data System (ADS)

    Farhadi, Somayeh; Zhu, Alex Z.; Behringer, Robert P.

    2015-10-01

    We have explored isotropically jammed states of semi-2D granular materials through cyclic compression. In each compression cycle, systems of either identical ellipses or bidisperse disks transition between jammed and unjammed states. We determine the evolution of the average pressure P and structure through consecutive jammed states. We observe a transition point ϕm above which P persists over many cycles; below ϕm, P relaxes slowly. The relaxation time scale associated with P increases with packing fraction, while the relaxation time scale for collective particle motion remains constant. The collective motion of the ellipses is hindered compared to disks because of the rotational constraints on elliptical particles.

  12. Stress Relaxation for Granular Materials near Jamming under Cyclic Compression.

    PubMed

    Farhadi, Somayeh; Zhu, Alex Z; Behringer, Robert P

    2015-10-30

    We have explored isotropically jammed states of semi-2D granular materials through cyclic compression. In each compression cycle, systems of either identical ellipses or bidisperse disks transition between jammed and unjammed states. We determine the evolution of the average pressure P and structure through consecutive jammed states. We observe a transition point ϕm above which P persists over many cycles; below ϕm, P relaxes slowly. The relaxation time scale associated with P increases with packing fraction, while the relaxation time scale for collective particle motion remains constant. The collective motion of the ellipses is hindered compared to disks because of the rotational constraints on elliptical particles. PMID:26565498

  13. Optical frequency comb interference profilometry using compressive sensing.

    PubMed

    Pham, Quang Duc; Hayasaki, Yoshio

    2013-08-12

    We describe a new optical system using an ultra-stable mode-locked frequency comb femtosecond laser and compressive sensing to measure an object's surface profile. The ultra-stable frequency comb laser was used to precisely measure an object with a large depth, over a wide dynamic range. The compressive sensing technique was able to obtain the spatial information of the object with two single-pixel fast photo-receivers, with no mechanical scanning and fewer measurements than the number of sampling points. An optical experiment was performed to verify the advantages of the proposed method.

  14. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  15. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  16. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  17. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  18. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  19. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  20. A Test Data Compression Scheme Based on Irrational Numbers Stored Coding

    PubMed Central

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    Testing has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL. PMID:25258744

  1. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together, these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  2. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    length and the code parameter. When this difference falls outside a fixed range, the code parameter is updated (increased or decreased). The Golomb code parameter is selected based on the average magnitude of recently encoded nonzero samples. The coding method requires no floating-point operations, and more readily adapts to local statistics than other methods. The method can also accommodate arbitrarily large input values and arbitrarily long runs of zeros. In practice, this means that changes in the dynamic range or size of the input data set would not require a change to the compressor. The algorithm has been tested in computational experiments on test images. A comparison with a previously developed algorithm that uses large code tables (generated via Huffman coding on training data) suggests that the data-compression effectiveness of the present algorithm is comparable to the best performance achievable by the previously developed algorithm.
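
    The sketch below is a minimal Rice (Golomb power-of-two) encoder whose parameter is chosen from the mean magnitude of recently encoded nonzero samples, loosely following the adaptation rule described above. The specific window size, parameter heuristic, and sample values are assumptions for illustration only; this is not the NASA codec.

        def rice_encode(values, window=16):
            """Rice-code nonnegative integers; k adapts to the mean of recent nonzero samples.
            A decoder can mirror the adaptation because k depends only on already-decoded values."""
            bits, recent = [], []
            for v in values:
                mean = (sum(recent) / len(recent)) if recent else 1.0
                k = max(int(mean), 1).bit_length() - 1          # Rice parameter ~ log2(mean)
                q, r = v >> k, v & ((1 << k) - 1)               # unary quotient, k-bit remainder
                bits.append("1" * q + "0" + (format(r, "0{}b".format(k)) if k else ""))
                if v:                                           # track magnitudes of nonzero samples
                    recent = (recent + [v])[-window:]
            return "".join(bits)

        samples = [0, 3, 1, 0, 7, 2, 15, 4, 0, 1]
        code = rice_encode(samples)
        print(len(code), "bits for", len(samples), "samples")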

  3. Compression and Progressive Retrieval of Multi-Dimensional Sensor Data

    NASA Astrophysics Data System (ADS)

    Lorkowski, P.; Brinkhoff, T.

    2016-06-01

    Since the emergence of sensor data streams, increasing amounts of observations have to be transmitted, stored and retrieved. Performing these tasks at the granularity of single points would mean an inappropriate waste of resources. Thus, we propose a concept that performs a partitioning of observations by spatial, temporal or other criteria (or a combination of them) into data segments. We exploit the resulting proximity (according to the partitioning dimension(s)) within each data segment for compression and efficient data retrieval. While in principle allowing lossless compression, it can also be used for progressive transmission with increasing accuracy wherever incremental data transfer is reasonable. In a first feasibility study, we apply the proposed method to a dataset of ARGO drifting buoys covering large spatio-temporal regions of the world's oceans and compare the achieved compression ratio to other formats.

  4. Sulcus formation in a compressed elastic half space

    NASA Astrophysics Data System (ADS)

    Biggins, John; Mahadevan, L.

    2012-02-01

    When a block of rubber, biological tissue or other soft material is subject to substantial compression, its surfaces undergo a folding instability. Rather than having a smooth profile, these folds contain cusps and hence have been called creases or sulci rather than wrinkles. The stability of a compressed surface was first investigated by Biot (1965), assuming the strains associated with the instability were small. However, the compression threshold predicted with this approach is substantially too high. I will introduce a family of analytic area-preserving maps that contain cusps (and hence points of infinite strain) and that save energy before the linear stability threshold is reached, even at vanishing amplitude. This establishes that there is a region before the linear stability threshold is reached where the system is unstable to infinitesimal perturbations, but that this instability is quintessentially non-linear and cannot be found with linear strain elasticity.

  5. Plunger lift with wellhead compression boosts gas well production

    SciTech Connect

    Phillips, D.; Listiak, S.

    1996-10-01

    As gas wells are produced and reservoir pressures decline, it is often necessary to install wellhead compression to maintain production. As well decline continues, gas rate and velocity in the tubing will decrease to the point at which liquids cannot be lifted out of the wellbore. Even on compression, liquid loading will become a problem and production impairments will result. One remedy to the liquid loading problem is to install a plunger lift system coupled with compression. With new smart controllers, the plunger/compressor combination has been successfully installed on a number of wells. Following is a description of this type of system, and case histories of active installations in the San Juan basin.

  6. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  7. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  8. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  9. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  10. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  11. Compression of spectral meteorological imagery

    NASA Technical Reports Server (NTRS)

    Miettinen, Kristo

    1993-01-01

    Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems for compression ratios from as little as three to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2D-DCT engine. Spectral relations in the data are exploited through block mean extraction followed by orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.

  12. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
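
    To illustrate the underlying idea, here is a minimal sketch that approximates one block of a time series by a low-degree Chebyshev fit and keeps only the coefficients; the signal, block size, and degree are illustrative assumptions, not parameters from the patent.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        # Toy time series: 128 samples of a smooth signal.
        t = np.linspace(-1.0, 1.0, 128)
        signal = np.sin(3 * t) + 0.2 * np.cos(9 * t)

        # "Compress" one block by keeping a degree-8 Chebyshev fit (9 coefficients
        # instead of 128 samples); "decompress" by evaluating the polynomial.
        degree = 8
        coeffs = C.chebfit(t, signal, degree)
        reconstructed = C.chebval(t, coeffs)

        ratio = signal.size / coeffs.size
        print(f"compression ratio ~{ratio:.1f}:1, max error {np.max(np.abs(reconstructed - signal)):.2e}")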

  13. Compressive behavior of fine sand.

    SciTech Connect

    Martin, Bradley E.; Kabir, Md. E.; Song, Bo; Chen, Wayne

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain-rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but is significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  14. Efficient access of compressed data

    SciTech Connect

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, the repeated use of the technique permits the suppression of a multiple number of different constants. Of particular interest is the application of the constant suppression technique to databases whose composite key is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme.

  15. Point Cloud Server (pcs) : Point Clouds In-Base Management and Processing

    NASA Astrophysics Data System (ADS)

    Cura, R.; Perret, J.; Paparoditis, N.

    2015-08-01

    In addition to traditional Geographic Information System (GIS) data such as images and vectors, point cloud data has become more available. It is appreciated for its precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a complete and efficient point cloud management system based on a database server that works on groups of points rather than individual points. This system is specifically designed to meet the needs of point cloud users: fast loading, compressed storage, powerful filtering, easy data access and exporting, and integrated processing. Moreover, the system fully integrates metadata (like sensor position) and can jointly use point clouds with images, vectors, and other point clouds. The system also offers in-base processing for easy prototyping and parallel processing, and can scale well. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the system with several billion points of point cloud data from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate a loading speed of roughly 400 million pts/h, a user-transparent compression ratio of greater than 2:1 and up to 4:1, filtering in the approximately 50 ms range, and output of about a million pts/s, along with classical processing such as object detection.
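
    One simple way to see how working on groups of points rather than individual points enables compressed storage is sketched below: points are grouped into grid cells and each cell stores an origin plus small quantized offsets. The cell size, quantization step, and data are assumptions for illustration; this is not the PCS implementation.

        import numpy as np

        def group_points(points, cell=1.0, step=0.001):
            """Group an (N, 3) point cloud into grid cells; store quantized offsets per cell."""
            keys = np.floor(points / cell).astype(np.int64)           # cell index per point
            patches = {}
            for key in np.unique(keys, axis=0):
                mask = np.all(keys == key, axis=1)
                origin = key * cell
                offsets = np.round((points[mask] - origin) / step).astype(np.uint16)
                patches[tuple(key)] = (origin, offsets)               # compact per-cell record
            return patches

        pts = np.random.default_rng(1).uniform(0, 10, size=(100_000, 3))
        patches = group_points(pts)
        print(len(patches), "cells for", len(pts), "points")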

  16. Flux Compression Magnetic Nozzle

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Schafer, Charles (Technical Monitor)

    2001-01-01

    In pulsed fusion propulsion schemes in which the fusion energy creates a radially expanding plasma, a magnetic nozzle is required to redirect the radially diverging flow of the expanding fusion plasma into a rearward axial flow, thereby producing a forward axial impulse to the vehicle. In a highly electrically conducting plasma, the presence of a magnetic field B in the plasma creates a pressure B²/(2μ) in the plasma, the magnetic pressure. A gradient in the magnetic pressure can be used to decelerate the plasma traveling in the direction of increasing magnetic field, or to accelerate a plasma from rest in the direction of decreasing magnetic pressure. In principle, ignoring dissipative processes, it is possible to design magnetic configurations to produce an 'elastic' deflection of a plasma beam. In particular, it is conceivable that, by an appropriate arrangement of a set of coils, a good approximation to a parabolic 'magnetic mirror' may be formed, such that a beam of charged particles emanating from the focal point of the parabolic mirror would be reflected by the mirror to travel axially away from the mirror. The degree to which this may be accomplished depends on the degree of control one has over the flux surface of the magnetic field, which changes as a result of its interaction with a moving plasma.

  17. Stress relaxation in vanadium under shock and shockless dynamic compression

    SciTech Connect

    Kanel, G. I.; Razorenov, S. V.; Garkushin, G. V.; Savinykh, A. S.; Zaretsky, E. B.

    2015-07-28

    Evolutions of elastic-plastic waves have been recorded in three series of plate impact experiments with annealed vanadium samples under conditions of shockless and combined ramp and shock dynamic compression. The shaping of incident wave profiles was realized using intermediate base plates made of different silicate glasses through which the compression waves were entered into the samples. Measurements of the free surface velocity histories revealed an apparent growth of the Hugoniot elastic limit with decreasing average rate of compression. The growth was explained by “freezing” of the elastic precursor decay in the area of interaction of the incident and reflected waves. The data obtained show that the current value of the Hugoniot elastic limit and the plastic strain rate are associated with the rate of elastic precursor decay rather than with the local rate of compression. The study has revealed the contributions of dislocation multiplication in elastic waves. It has been shown that, independently of the compression history, the material arrives at the minimum point between the elastic and plastic waves with the same density of mobile dislocations.

  18. Compressive residual strength of graphite/epoxy laminates after impact

    NASA Technical Reports Server (NTRS)

    Guy, Teresa A.; Lagace, Paul A.

    1992-01-01

    The issue of damage tolerance after impact, in terms of the compressive residual strength, was experimentally examined in graphite/epoxy laminates using Hercules AS4/3501-6 in a (+ or - 45/0)(sub 2S) configuration. Three different impactor masses were used at various velocities and the resultant damage measured via a number of nondestructive and destructive techniques. Specimens were then tested to failure under uniaxial compression. The results clearly show that a minimum compressive residual strength exists which is below the open hole strength for a hole of the same diameter as the impactor. Increases in velocity beyond the point of minimum strength cause a difference in the damage produced and cause a resultant increase in the compressive residual strength which asymptotes to the open hole strength value. Furthermore, the results show that this minimum compressive residual strength value is independent of the impactor mass used and is only dependent upon the damage present in the impacted specimen which is the same for the three impactor mass cases. A full 3-D representation of the damage is obtained through the various techniques. Only this 3-D representation can properly characterize the damage state that causes the resultant residual strength. Assessment of the state-of-the-art in predictive analysis capabilities shows a need to further develop techniques based on the 3-D damage state that exists. In addition, the need for damage 'metrics' is clearly indicated.

  19. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
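
    As a minimal illustration of why differencing adjacent picture elements helps, the sketch below compares the empirical entropy of a raw scan line with its first differences (delta) and second differences (double delta). The synthetic scan line and the zeroth-order entropy estimate are assumptions standing in for the paper's source coders.

        import numpy as np

        def entropy_bits(values):
            """Empirical zeroth-order entropy in bits/symbol."""
            _, counts = np.unique(values, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        line = (128 + 40 * np.sin(np.linspace(0, 6, 512))).astype(np.int32)  # smooth scan line
        delta = np.diff(line, prepend=line[0])                # first differences
        double_delta = np.diff(delta, prepend=delta[0])       # second differences ("double delta")

        for name, v in [("raw", line), ("delta", delta), ("double delta", double_delta)]:
            print(f"{name:13s}: {entropy_bits(v):.2f} bits/pixel")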

  20. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and thus is suitable for the fluorescence readout mode. A 2-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  1. Extended testing of compression distillation.

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1972-01-01

    During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering useable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit, developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes, to assure the recovery of sterile potable water from urine and treated urinal flush water.

  2. Compressed sensing for phase retrieval.

    PubMed

    Newton, Marcus C

    2012-05-01

    To date there are several iterative techniques that enjoy moderate success when reconstructing phase information, where only intensity measurements are made. There remains, however, a number of cases in which conventional approaches are unsuccessful. In the last decade, the theory of compressed sensing has emerged and provides a route to solving convex optimisation problems exactly via ℓ1-norm minimization. Here the application of compressed sensing to phase retrieval in a nonconvex setting is reported. An algorithm is presented that applies reweighted ℓ1-norm minimization to yield accurate reconstruction where conventional methods fail.
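
    The nonconvex phase-retrieval algorithm itself is not reproduced here; instead, the sketch below illustrates the reweighted ℓ1 idea on an ordinary linear compressed-sensing problem, using a weighted iterative soft-thresholding inner solver. The measurement model, penalty weight, and iteration counts are assumptions for illustration only.

        import numpy as np

        def weighted_ista(Phi, y, weights, lam=0.05, iters=300):
            """Inner solver: weighted l1 minimization via iterative soft thresholding."""
            L = np.linalg.norm(Phi, 2) ** 2                       # Lipschitz constant of the gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(iters):
                g = x + Phi.T @ (y - Phi @ x) / L                 # gradient step on the data term
                t = lam * weights / L                             # per-coordinate thresholds
                x = np.sign(g) * np.maximum(np.abs(g) - t, 0.0)   # soft thresholding
            return x

        rng = np.random.default_rng(3)
        N, M, k = 200, 60, 6
        x_true = np.zeros(N)
        x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
        Phi = rng.normal(size=(M, N)) / np.sqrt(M)
        y = Phi @ x_true

        # Reweighting loop: small coefficients get large weights, which sharpens sparsity.
        w = np.ones(N)
        for _ in range(4):
            x = weighted_ista(Phi, y, w)
            w = 1.0 / (np.abs(x) + 1e-3)

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))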

  3. Compressing the inert doublet model

    NASA Astrophysics Data System (ADS)

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-01

    The inert doublet model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. This stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. We derive new limits on the compressed inert doublet model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  4. Compressing the Inert Doublet Model

    DOE PAGES

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-16

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. In conclusion, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  5. Compression of hyperspectral data for automated analysis

    NASA Astrophysics Data System (ADS)

    Linderhed, Anna; Wadströmer, Niclas; Stenborg, K.-G.; Nautsch, Harald

    2009-09-01

    State-of-the-art and upcoming hyperspectral optical sensors generate large amounts of data, and automatic analysis is necessary. One example is Automatic Target Recognition (ATR), frequently used in military applications and an emerging technique for civilian surveillance applications. When sensors communicate in networks, the capacity of the communication channel defines the limit on data transferred without compression. Automated analysis may have different demands on data quality than a human observer, and thus standard compression methods may not be optimal. This paper presents results from testing how the performance of detection methods is affected by compressing input data with COTS coders. A standard video coder has been used to compress hyperspectral data. A video is a sequence of still images; a hybrid video coder uses the correlation in time by performing block-based motion-compensated prediction between images. In principle, only the differences are transmitted. This method of coding can be used on hyperspectral data if we consider one of the three dimensions as the time axis. Spectral anomaly detection is used as the detection method on mine data. This method finds every pixel in the image that is abnormal, an anomaly compared to its surroundings. The purpose of anomaly detection is to identify objects (samples, pixels) that differ significantly from the background, without any a priori explicit knowledge about the signature of the sought-after targets. Thus the role of the anomaly detector is to identify "hot spots" on which subsequent analysis can be performed. We have used data from Imspec, a hyperspectral sensor. The hyperspectral image, or spectral cube, consists of consecutive frames of spatial-spectral images. Each pixel contains a spectrum with 240 measurement points. Hyperspectral sensor data was coded with hybrid coding using a variant of MPEG2. Only I- and P-frames were used. Every 10th frame was coded as an I frame. 14 hyperspectral images were coded in 3
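
    A common baseline for the kind of spectral anomaly detection described above is the RX detector, which scores each pixel by its Mahalanobis distance from the global background statistics. The sketch below uses synthetic data and is a stand-in for, not necessarily identical to, the detector used in the study.

        import numpy as np

        def rx_anomaly_scores(cube):
            """RX-style anomaly detection: Mahalanobis distance of each pixel spectrum from
            the global background mean/covariance. cube has shape (rows, cols, bands)."""
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands).astype(np.float64)
            mean = pixels.mean(axis=0)
            cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(bands)   # regularized covariance
            diff = pixels - mean
            scores = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
            return scores.reshape(rows, cols)

        cube = np.random.default_rng(7).normal(size=(64, 64, 240))      # 240-band toy cube
        cube[10, 20] += 5.0                                             # implant one anomalous pixel
        scores = rx_anomaly_scores(cube)
        print("most anomalous pixel:", np.unravel_index(np.argmax(scores), scores.shape))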

  6. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching that exploits a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No scaling or other transformation parameters need to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity
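
    To make the notion of a single affine transformation between point sets concrete, here is a minimal least-squares fit of a 2D affine transform from point correspondences. This is standard linear algebra shown for context, not the AIPS framework itself; the point data are synthetic.

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares 2D affine transform mapping src (N,2) onto dst (N,2).
            Returns a 2x3 matrix M such that dst ~ [src, 1] @ M.T"""
            ones = np.ones((src.shape[0], 1))
            X = np.hstack([src, ones])                      # homogeneous source coordinates
            A, *_ = np.linalg.lstsq(X, dst, rcond=None)     # solves X @ A = dst
            return A.T

        rng = np.random.default_rng(2)
        src = rng.uniform(0, 100, size=(20, 2))
        true_A = np.array([[1.2, -0.3, 5.0],
                           [0.4,  0.9, -2.0]])
        dst = np.hstack([src, np.ones((20, 1))]) @ true_A.T

        print(np.allclose(fit_affine(src, dst), true_A))    # True: exact for noise-free points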

  7. Trajectory NG: portable, compressed, general molecular dynamics trajectories.

    PubMed

    Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David

    2011-10-01

    We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied. PMID:21267752
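
    The core observation above (differences in time or space are smaller than absolute coordinates and compress better at fixed precision) can be illustrated with the minimal sketch below. The synthetic trajectory, the 0.001 quantization step, and the use of zlib as a stand-in for the paper's block-sorting/Huffman back end are all assumptions.

        import numpy as np
        import zlib

        rng = np.random.default_rng(0)
        n_atoms, n_frames, precision = 1000, 50, 1e-3        # store coordinates to 0.001 units

        # Synthetic trajectory: atoms drift slightly between frames.
        frames = np.cumsum(rng.normal(scale=0.01, size=(n_frames, n_atoms, 3)), axis=0) + 5.0

        ints = np.round(frames / precision).astype(np.int32)  # fixed-precision integers
        # Time differences; the first entry is the first frame itself (difference from zero),
        # so the delta stream is fully decodable.
        deltas = np.diff(ints, axis=0, prepend=np.zeros((1, n_atoms, 3), np.int32))

        raw = zlib.compress(frames.astype(np.float32).tobytes(), 9)
        diff = zlib.compress(deltas.astype(np.int32).tobytes(), 9)
        print(f"float32 + zlib: {len(raw)} bytes, delta int32 + zlib: {len(diff)} bytes")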

  8. Finite scale equations for compressible fluid flow

    SciTech Connect

    Margolin, Len G

    2008-01-01

    Finite-scale equations (FSE) describe the evolution of finite volumes of fluid over time. We discuss the FSE for a one-dimensional compressible fluid, whose every point is governed by the Navier-Stokes equations. The FSE contain new momentum and internal energy transport terms. These are similar to terms added in numerical simulation for high-speed flows (e.g. artificial viscosity) and for turbulent flows (e.g. subgrid scale models). These similarities suggest that the FSE may provide new insight as a basis for computational fluid dynamics. Our analysis of the FS continuity equation leads to a physical interpretation of the new transport terms, and indicates the need to carefully distinguish between volume-averaged and mass-averaged velocities in numerical simulation. We make preliminary connections to the other recent work reformulating Navier-Stokes equations.

  9. Compression of digital chest radiographs with a mixture of principal components neural network: evaluation of performance.

    PubMed

    Dony, R D; Coblentz, C L; Nabmias, C; Haykin, S

    1996-11-01

    The performance of a new, neural network-based image compression method was evaluated on digital radiographs for use in an educational environment. The network uses a mixture of principal components (MPC) representation to effect optimally adaptive transform coding of an image and has significant computational advantages over other techniques. Nine representative digital chest radiographs were compressed 10:1, 20:1, 30:1, and 40:1 with the MPC method. The five versions of each image, including the original, were shown simultaneously, in random order, to each of seven radiologists, who rated each one on a five-point scale for image quality and visibility of pathologic conditions. One radiologist also ranked four versions of each of the nine images in terms of the severity of distortion: The four versions represented 30:1 and 40:1 compression with the MPC method and with the classic Karhunen-Loève transform (KLT). Only for the images compressed 40:1 with the MPC method were there any unacceptable ratings. Nevertheless, the images compressed 40:1 received a top score in 26%-33% of the evaluations. Images compressed with the MPC method were rated better than or as good as images compressed with the KLT technique 17 of 18 times. Four of nine times, images compressed 40:1 with the MPC method were rated as good as or better than images compressed 30:1 with the KLT technique.

  10. Compression fractures of the back

    MedlinePlus

    ... Meirhaeghe J, et al. Efficacy and safety of balloon kyphoplasty compared with non-surgical care for vertebral compression fracture (FREE): a randomised controlled trial. Lancet . 2009;373(9668):1016-24. PMID: 19246088 www.ncbi.nlm.nih.gov/pubmed/19246088 .

  11. Culture: Copying, Compression, and Conventionality

    ERIC Educational Resources Information Center

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, 2008; Smith, Tamariz, & Kirby, 2013). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning…

  12. Compressive passive millimeter wave imager

    SciTech Connect

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C

    2015-01-27

    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
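
    The sketch below gives a small numpy illustration of the measurement model: a subset of Hadamard masks applied to a scene, with one detector reading per mask, followed by a least-squares (minimum-norm) reconstruction. A real compressive imager would typically add a sparsity prior; the scene, mask subset, and solver are assumptions, not the patented system.

        import numpy as np

        def hadamard(n):
            """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
            H = np.array([[1.0]])
            while H.shape[0] < n:
                H = np.block([[H, H], [H, -H]])
            return H

        n = 64                                    # scene with 64 pixels (e.g. an 8x8 patch)
        scene = np.zeros(n)
        scene[20:28] = 1.0                        # simple test object

        H = hadamard(n)
        rng = np.random.default_rng(5)
        rows = rng.choice(n, size=n // 2, replace=False)   # sample half of the Hadamard masks
        measurements = H[rows] @ scene                     # one detector reading per mask

        # Reconstruct from the partial mask set (least-norm solution of an underdetermined system).
        recon, *_ = np.linalg.lstsq(H[rows], measurements, rcond=None)
        print("reconstruction error:", np.linalg.norm(recon - scene))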

  13. Teaching Time-Space Compression

    ERIC Educational Resources Information Center

    Warf, Barney

    2011-01-01

    Time-space compression shows students that geographies are plastic, mutable and forever changing. This paper justifies the need to teach this topic, which is rarely found in undergraduate course syllabi. It addresses the impacts of transportation and communications technologies to explicate its dynamics. In summarizing various conceptual…

  14. Compression testing of flammable liquids

    NASA Technical Reports Server (NTRS)

    Briles, O. M.; Hollenbaugh, R. P.

    1979-01-01

    Small cylindrical test chamber determines catalytic effect of given container material on fuel that might contribute to accidental deflagration or detonation below expected temperature under adiabatic compression. Device is useful to producers and users of flammable liquids and to safety specialists.

  15. Perceptually lossy compression of documents

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.; Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.

    1997-06-01

    The main cost of owning a facsimile machine consists of the telephone charges for the communications, thus short transmission times are a key feature for facsimile machines. Similarly, on a packet-routed service such as the Internet, a low number of packets is essential to avoid operator wait times. Concomitantly, the user expectations have increased considerably. In facsimile, the switch from binary to full color increases the data size by a factor of 24. On the Internet, the switch from plain text American Standard Code for Information Interchange (ASCII) encoded files to files marked up in the Hypertext Markup Language (HTML) with ample embedded graphics has increased the size of transactions by several orders of magnitude. A common compressing method for raster files in these applications in the Joint Photographic Experts Group (JPEG) method, because efficient implementations are readily available. In this method the implementors design the discrete quantization tables (DQT) and the Huffman tables (HT) to maximize the compression factor while maintaining the introduced artifacts at the threshold of perceptual detectability. Unfortunately the achieved compression rates are unsatisfactory for applications such as color facsimile and World Wide Web (W3) browsing. We present a design methodology for image-independent DQTs that while producing perceptually lossy data, does not impair the reading performance of users. Combined with a text sharpening algorithm that compensates for scanning device limitations, the methodology presented in this paper allows us to achieve compression ratios near 1:100.

  16. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TITMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  17. Device Assists Cardiac Chest Compression

    NASA Technical Reports Server (NTRS)

    Eichstadt, Frank T.

    1995-01-01

    Portable device facilitates effective and prolonged cardiac resuscitation by chest compression. Developed originally for use in absence of gravitation, also useful in terrestrial environments and situations (confined spaces, water rescue, medical transport) not conducive to standard manual cardiopulmonary resuscitation (CPR) techniques.

  18. COMPRESSIBLE FLOW, ENTRAINMENT, AND MEGAPLUME

    EPA Science Inventory

    It is generally believed that low Mach number, i.e., low-velocity, flow may be assumed to be incompressible flow. Under steady-state conditions, an exact equation of continuity may then be used to show that such flow is non-divergent. However, a rigorous, compressible fluid-dynam...

  19. Hyperspectral image compressive projection algorithm

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Allen, David W.

    2009-05-01

    We describe a compressive projection algorithm and experimentally assess its performance when used with a Hyperspectral Image Projector (HIP). The HIP is being developed by NIST for system-level performance testing of hyperspectral and multispectral imagers. It projects a two-dimensional image into the unit under test (UUT), whereby each pixel can have an independently programmable arbitrary spectrum. To efficiently project a single frame of dynamic realistic hyperspectral imagery through the collimator into the UUT, a compression algorithm has been developed whereby the series of abundance images and corresponding endmember spectra that comprise the image cube of that frame are first computed using an automated endmember-finding algorithm such as the Sequential Maximum Angle Convex Cone (SMACC) endmember model. Then these endmember spectra are projected sequentially on the HIP spectral engine in sync with the projection of the abundance images on the HIP spatial engine, during the single-frame exposure time of the UUT. The integrated spatial image captured by the UUT is the endmember-weighted sum of the abundance images, which results in the formation of a datacube for that frame. Compressive projection enables a much smaller set of broadband spectra to be projected than monochromatic projection, and thus utilizes the inherent multiplex advantage of the HIP spectral engine. As a result, radiometric brightness and projection frame rate are enhanced. In this paper, we use a visible breadboard HIP to experimentally assess the compressive projection algorithm performance.
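
    The endmember-weighted sum described above is just a matrix product per pixel; the toy sketch below forms a datacube from a handful of abundance images and endmember spectra. The sizes and random data are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        rows, cols, n_end, n_bands = 32, 32, 4, 64

        abundances = rng.random((rows, cols, n_end))
        abundances /= abundances.sum(axis=2, keepdims=True)    # abundance fractions sum to 1 per pixel
        endmembers = rng.random((n_end, n_bands))              # one spectrum per endmember

        # Each pixel's spectrum is the abundance-weighted sum of the endmember spectra,
        # i.e. how the sequentially projected frames integrate on the unit under test.
        cube = np.einsum("xye,eb->xyb", abundances, endmembers)
        print("datacube shape:", cube.shape)                   # (32, 32, 64)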

  20. Advection by polytropic compressible turbulence

    NASA Astrophysics Data System (ADS)

    Ladeinde, F.; O'Brien, E. E.; Cai, X.; Liu, W.

    1995-11-01

    Direct numerical simulation (DNS) is used to examine scalar correlation in low Mach number, polytropic, homogeneous, two-dimensional turbulence (Ms≤0.7) for which the initial conditions, Reynolds, and Mach numbers have been chosen to produce three types of flow suggested by theory: (a) nearly incompressible flow dominated by vorticity, (b) nearly pure acoustic turbulence dominated by compression, and (c) nearly statistical equipartition of vorticity and compressions. Turbulent flows typical of each of these cases have been generated and a passive scalar field imbedded in them. The results show that a finite-difference based computer program is capable of producing results that are in reasonable agreement with pseudospectral calculations. Scalar correlations have been calculated from the DNS results and the relative magnitudes of terms in low-order scalar moment equations determined. It is shown that the scalar equation terms with explicit compressibility are negligible on a long time-averaged basis. A physical-space EDQNM model has been adapted to provide another estimate of scalar correlation evolution in these same two-dimensional, compressible cases. The use of the solenoidal component of turbulence energy, rather than total turbulence energy, in the EDQNM model gives results closer to those from DNS in all cases.

  1. A spectral collocation solution to the compressible stability eigenvalue problem

    NASA Technical Reports Server (NTRS)

    Macaraeg, Michele G.; Streett, Craig L.; Hussaini, M. Yousuff

    1988-01-01

    A newly developed spectral compressible linear stability code (SPECLS), using a staggered pressure mesh, is presented for the analysis of shear flow stability, and applied to high speed boundary layers and free shear flows. The formulation is the first application of a staggered mesh to compressible flow stability analysis by a spectral technique. An order of magnitude fewer points are needed for equivalent accuracy of growth rates compared to those calculated by a finite difference formulation. Supersonic disturbances, which are found to have oscillatory structures, were resolved by a spectral multi-domain discretization, which requires a factor of three fewer points than the single domain spectral stability code. It is indicated, as expected, that stability of mixing layers is enhanced by viscosity and increasing Mach number. The mean flow involves a jet being injected into a quiescent gas. Higher temperatures of the injected gas are also found to enhance the stability characteristics of the free shear layer.

  2. Software documentation for compression-machine cavity control

    SciTech Connect

    Floersch, R.H.

    1981-04-01

    A new system design using closed loop control on the hydraulic system of compression transfer presses used to make filled elastomer parts will result in improved accuracy and repeatability of speed and pressure control during critical forming stages before part cure. The new design uses a microprocessor to supply set points and timing functions to the control system. Presented are the hardware and software architecture and objectives for the microprocessor portion of the control system.

  3. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    SciTech Connect

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are: limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which are causing reduced reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers are running down to 50% with the mean at about 80%. The major cause for this large disparity is due to installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are: cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby, causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high speed and low speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity

  4. Physical examination of upper extremity compressive neuropathies.

    PubMed

    Popinchalk, Samuel P; Schaffer, Alyssa A

    2012-10-01

    A thorough history and physical examination are vital to the assessment of upper extremity compressive neuropathies. This article summarizes relevant anatomy and physical examination findings associated with upper extremity compressive neuropathies.

  5. Sensorineural deafness due to compression chamber noise.

    PubMed

    Hughes, K B

    1976-05-01

    A case of unilateral sensorineural deafness following exposure to compression chamber noise is described. A review of the current literature concerning the otological hazards of compression chambers is made. The possible pathological basis is discussed.

  6. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.
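
    In the spirit of the CCA description above, the sketch below clusters pixel spectra with a plain k-means, then stores the cluster means (the extracted features) plus a per-pixel index "feature map". The toy image, cluster count, and hand-rolled k-means are assumptions; this is not the flight algorithm.

        import numpy as np

        def kmeans(data, k, iters=20, seed=0):
            """Plain k-means: returns (centroids, labels)."""
            rng = np.random.default_rng(seed)
            centroids = data[rng.choice(len(data), k, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centroids[j] = data[labels == j].mean(axis=0)
            return centroids, labels

        # Toy multispectral image: 64x64 pixels, 4 bands.
        rng = np.random.default_rng(9)
        image = rng.random((64, 64, 4)).astype(np.float32)
        pixels = image.reshape(-1, 4)

        centroids, labels = kmeans(pixels, k=16)
        feature_map = labels.reshape(64, 64).astype(np.uint8)   # one byte per pixel
        # Stored data: 16 cluster spectra plus the feature map, instead of 4 floats per pixel.
        print("bytes:", centroids.nbytes + feature_map.nbytes, "vs raw", image.nbytes)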

  7. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
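
    The core idea of referential compression (store only the differences against a reference) can be sketched with Python's standard-library difflib, as below. FRESCO's actual match-finding and encoding are far more elaborate; the reference and target sequences here are synthetic.

        import difflib

        def referential_compress(reference, target):
            """Encode target as (copy/literal) operations against a reference sequence."""
            ops = []
            matcher = difflib.SequenceMatcher(None, reference, target, autojunk=False)
            for tag, i1, i2, j1, j2 in matcher.get_opcodes():
                if tag == "equal":
                    ops.append(("copy", i1, i2 - i1))          # (offset in reference, length)
                else:
                    ops.append(("literal", target[j1:j2]))     # raw bases not taken from the reference
            return ops

        def referential_decompress(reference, ops):
            out = []
            for op in ops:
                out.append(reference[op[1]:op[1] + op[2]] if op[0] == "copy" else op[1])
            return "".join(out)

        reference = "ACGT" * 1000
        target = reference[:1500] + "TTTTT" + reference[1500:3990]   # small insertion and truncation
        ops = referential_compress(reference, target)
        assert referential_decompress(reference, ops) == target
        print(len(ops), "operations instead of", len(target), "bases")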

  8. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG 2000, BPG, or TIFF) selected according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
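
    A minimal sketch of the first step is shown below: sweep JPEG quality with Pillow (assumed installed), record the compression ratio and PSNR, and pick the lowest quality that meets a target PSNR. The synthetic image, quality grid, and 40 dB target are assumptions; the regression models of the paper are omitted.

        import io
        import numpy as np
        from PIL import Image

        def psnr(a, b):
            mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

        # Toy smooth grayscale image standing in for a long-wave infrared frame.
        x, y = np.meshgrid(np.arange(256), np.arange(256))
        gray = (96 + 64 * np.sin(x / 25.0) * np.cos(y / 25.0)).astype(np.uint8)
        target_psnr = 40.0

        best = None
        for quality in range(10, 96, 5):
            buf = io.BytesIO()
            Image.fromarray(gray).save(buf, format="JPEG", quality=quality)
            buf.seek(0)
            decoded = np.asarray(Image.open(buf))
            ratio = gray.nbytes / len(buf.getvalue())
            if psnr(gray, decoded) >= target_psnr:
                best = (quality, ratio)          # lowest quality meeting the IQ target
                break

        print("selected (quality, compression ratio):", best)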

  10. General-Purpose Compression for Efficient Retrieval.

    ERIC Educational Resources Information Center

    Cannane, Adam; Williams, Hugh E.

    2001-01-01

    Discusses compression of databases that reduces space requirements and retrieval times; considers compression of documents in text databases based on semistatic modeling with words; and proposes a scheme for general purpose compression that can be applied to all types of data stored in large collections. (Author/LRW)

  11. Compressibility of liquid-metallic hydrogen

    NASA Astrophysics Data System (ADS)

    MacDonald, A. H.

    1983-05-01

    An expression for the compressibility κ of liquid-metallic hydrogen, derived within adiabatic and linear screening approximations, is presented. Terms in the expression for κ have been associated with Landau parameters of the two-component Fermi liquid. The compressibility found for the liquid state is much larger than the compressibility which would be expected in the solid state.

  12. Subpicosecond compression experiments at Los Alamos National Laboratory

    SciTech Connect

    Carlsten, B.E.; Russell, S.J.; Kinross-Wright, J.M.

    1995-09-01

    The authors report on recent experiments using a magnetic chicane compressor at 8 MeV. Electron bunches at both low (0.1 nC) and high (1 nC) charges were compressed from 20 ps to less than 1 ps (FWHM). A transverse deflecting rf cavity was used to measure the bunch length at low charge; the bunch length at high charge was inferred from an induced energy spread of the beam. The longitudinal centrifugal-space charge force is calculated using a point-to-point numerical simulation and is shown not to influence the energy-spread measurement.

  13. Growing concern following compression mammography.

    PubMed

    van Netten, Johannes Pieter; Hoption Cann, Stephen; Thornton, Ian; Finegan, Rory

    2016-01-01

    A patient without clinical symptoms had a mammogram in October 2008. The procedure caused intense persistent pain, swelling and development of a haematoma following mediolateral left breast compression. Three months later, a 9×11 cm mass developed within the same region. Core biopsies showed a necrotizing high-grade ductal carcinoma, with a high mitotic index. Owing to the extensive size of the mass, the patient began chemotherapy followed by trastuzumab and later radiotherapy to obtain clear margins for a subsequent mastectomy. The mastectomy in October 2009 revealed an inflammatory carcinoma, with 2 of 3 nodes infiltrated by the tumour. The stage IIIC tumour, oestrogen and progesterone receptor negative, was highly HER2 positive. A recurrence led to further chemotherapy in February 2011. In July 2011, another recurrence was removed from the mastectomy scar. She died of progressive disease in 2012. In this article, we discuss the potential influence of compression on the natural history of the tumour. PMID:27581236

  14. Using autoencoders for mammogram compression.

    PubMed

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

    This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually very large, training autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that the autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and the structural similarity index. It is found from the experimental results that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method.
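
    A minimal PyTorch sketch of the patch-based idea follows: train an autoencoder on small image patches rather than on the whole mammogram. The patch size, bottleneck width, and training details are illustrative assumptions, not the paper's architecture.

      import torch
      import torch.nn as nn

      PATCH = 16                      # assumed patch edge length (pixels)
      CODE_DIM = 32                   # assumed bottleneck width

      # Encoder-decoder over flattened patches; the bottleneck is the compressed code.
      model = nn.Sequential(
          nn.Linear(PATCH * PATCH, 128), nn.ReLU(),
          nn.Linear(128, CODE_DIM),
          nn.Linear(CODE_DIM, 128), nn.ReLU(),
          nn.Linear(128, PATCH * PATCH), nn.Sigmoid(),
      )

      def train(patches: torch.Tensor, epochs: int = 100, lr: float = 1e-3) -> float:
          """patches: (N, PATCH*PATCH) tensor of intensities scaled to [0, 1]."""
          opt = torch.optim.Adam(model.parameters(), lr=lr)
          loss_fn = nn.MSELoss()
          for _ in range(epochs):
              opt.zero_grad()
              loss = loss_fn(model(patches), patches)   # reconstruction error
              loss.backward()
              opt.step()
          return loss.item()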

  15. Frost heave in compressible soils

    NASA Astrophysics Data System (ADS)

    Peppin, Stephen; Majumdar, Apala; Sander, Graham

    2010-05-01

    Recent frost heave experiments on compressible soils find no pore ice in the soil near the ice lenses (no frozen fringe). These results confirm early observations of Beskow that in clays the soil between ice lenses is ``soft and unfrozen'' but have yet to be explained theoretically. Recently it has been suggested that periodic ice lens formation in the absence of a frozen fringe may be due to a morphological instability of the ice--soil interface. Here we use this concept to develop a mathematical model of frost heave in compressible soils. The theory accounts for heave, overburden effects and soil consolidation. In the limit of a rigid porous medium a relation is obtained between the critical morphological number and the empirical segregation potential. Analytical and numerical solutions are found, and compared with the results of unidirectional solidification experiments.

  16. The Critical Point Facility (CPF)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Critical Point Facility (CPF) is an ESA multiuser facility designed for microgravity research onboard Spacelab. It has been conceived and built to offer investigators opportunities to conduct research on critical point phenomena in microgravity. This facility provides the high precision and stability temperature standards required in this field of research. It has been primarily designed for the purpose of optical investigations of transparent fluids. During a Spacelab mission, the CPF automatically processes several thermostats sequentially, each thermostat corresponding to an experiment. The CPF is now integrated in Spacelab at Kennedy Space Center, in preparation for the International Microgravity Lab. mission. The CPF was designed to submit transparent fluids to an adequate, user defined thermal scenario, and to monitor their behavior by using thermal and optical means. Because they are strongly affected by gravity, a good understanding of critical phenomena in fluids can only be gained in low gravity conditions. Fluids at the critical point become compressed under their own weight. The role played by gravity in the formation of interfaces between distinct phases is not clearly understood.

  17. Lithological Uncertainty Expressed by Normalized Compression Distance

    NASA Astrophysics Data System (ADS)

    Jatnieks, J.; Saks, T.; Delina, A.; Popovs, K.

    2012-04-01

    prediction by partial matching (PPM), used for computing the NCD metric, is highly dependent on context. We assign unique symbols for aggregate lithology types and serialize the borehole logs into text strings, where the string length represents a normalized borehole depth. This encoding ensures that both the lithology types and the depth and sequence of strata are comparable in a form most native to the universal data compression software that calculates the pairwise NCD dissimilarity matrix. The NCD results can be used for generalization of the Quaternary structure using spatial clustering followed by a Voronoi tessellation using boreholes as generator points. After dissolving cluster membership identifiers of the borehole Voronoi polygons in a GIS environment, regions representing similar lithological structure can be visualized. The exact number of regions and their homogeneity depend on the parameters of the clustering solution. This study is supported by the European Social Fund project No. 2009/0212/1DP/1.1.1.2.0/09/APIA/VIAA/060. Keywords: geological uncertainty, lithological uncertainty, generalization, information distance, normalized compression distance, data compression
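
    The NCD itself is the standard Cilibrasi-Vitanyi measure, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed size. The sketch below uses bz2 as a stand-in for the PPM compressor mentioned above; the example strings are hypothetical serialized logs.

      import bz2

      def c(data: bytes) -> int:
          """Compressed size in bytes (bz2 stands in for the PPM compressor)."""
          return len(bz2.compress(data))

      def ncd(x: bytes, y: bytes) -> float:
          cx, cy, cxy = c(x), c(y), c(x + y)
          return (cxy - min(cx, cy)) / max(cx, cy)

      # Example: two serialized borehole logs, one symbol per lithology type.
      print(ncd(b"SSSCCCGGS", b"SSSCCGGGS"))   # small value -> similar sequences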

  18. Antiproton compression and radial measurements

    SciTech Connect

    Andresen, G. B.; Bowe, P. D.; Hangst, J. S.; Bertsche, W.; Butler, E.; Charlton, M.; Humphries, A. J.; Jenkins, M. J.; Joergensen, L. V.; Madsen, N.; Werf, D. P. van der; Bray, C. C.; Chapman, S.; Fajans, J.; Povilus, A.; Wurtele, J. S.; Cesar, C. L.; Lambo, R.; Silveira, D. M.; Fujiwara, M. C.

    2008-08-08

    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  19. Vascular compression of the duodenum.

    PubMed Central

    Moskovich, R; Cheong-Leen, P

    1986-01-01

    Compression of the third or fourth part of the duodenum by the superior mesenteric artery or one of its branches is the anatomic basis for some cases of duodenal obstruction. Two cases of vascular obstruction of the duodenum after surgical correction of scoliosis are presented. The embryologic and pathoanatomic bases for this condition, and the rationale for treatment, are described. Images Figure 1. Figure 2. Figure 3. PMID:3761291

  20. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.

  1. SNLL materials testing compression facility

    SciTech Connect

    Kawahara, W.A.; Brandon, S.L.; Korellis, J.S.

    1986-04-01

    This report explains software enhancements and fixture modifications which expand the capabilities of a servo-hydraulic test system to include static computer-controlled ''constant true strain rate'' compression testing on cylindrical specimens. True strains in excess of -1.0 are accessible. Special software features include schemes to correct for system compliance and the ability to perform strain-rate changes; all software for test control and data acquisition/reduction is documented.

  2. Compressed air energy storage system

    DOEpatents

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  3. Compressed air energy storage system

    DOEpatents

    Ahrens, F.W.; Kartsounes, G.T.

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  4. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2003-01-01

    Various artificial compressibility methods for calculating three-dimensional, steady and unsteady, laminar and turbulent, incompressible Navier-Stokes equations are compared in this work. Each method is described in detail along with appropriate physical and numerical boundary conditions. Analysis of well-posedness and numerical solutions to test problems for each method are provided. A comparison based on convergence behavior, accuracy, stability and robustness is used to establish the relative positive and negative characteristics of each method.
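
    For reference, the pseudo-compressible system that such methods march in pseudo-time tau typically takes the following form (after Chorin), with artificial compressibility parameter beta; the specific variants compared in the paper differ in discretization and boundary treatment:

      % incompressibility is recovered as the pseudo-time derivative vanishes
      \frac{1}{\beta}\frac{\partial p}{\partial \tau} + \nabla\cdot\mathbf{u} = 0, \qquad
      \frac{\partial \mathbf{u}}{\partial \tau} + (\mathbf{u}\cdot\nabla)\mathbf{u}
        = -\nabla p + \nu\,\nabla^{2}\mathbf{u}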

  5. Informationally complete measurements from compressed sensing methodology

    NASA Astrophysics Data System (ADS)

    Kalev, Amir; Riofrio, Carlos; Kosut, Robert; Deutsch, Ivan

    2015-03-01

    Compressed sensing (CS) is a technique to faithfully estimate an unknown signal from relatively few data points when the measurement samples satisfy a restricted isometry property (RIP). Recently this technique has been ported to quantum information science to perform tomography with a substantially reduced number of measurement settings. In this work we show that the constraint that a physical density matrix is positive semidefinite provides a rigorous connection between the RIP and the informational completeness (IC) of a POVM used for state tomography. This enables us to construct IC measurements that are robust to noise using tools provided by the CS methodology. The exact recovery no longer hinges on a particular convex optimization program; solving any optimization, constrained on the cone of positive matrices, effectively results in a CS estimation of the state. From a practical point of view, we can therefore employ fast algorithms developed to handle large dimensional matrices for efficient tomography of quantum states of a large dimensional Hilbert space. Supported by the National Science Foundation.
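
    A generic convex program of the kind alluded to above (an illustrative form, not necessarily the authors' exact estimator) fits the measurement record b under the positivity and trace constraints alone:

      \hat{\rho} = \arg\min_{\rho}\ \lVert \mathcal{A}(\rho) - b \rVert_{2}
      \quad \text{subject to} \quad \rho \succeq 0,\ \operatorname{Tr}\rho = 1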

  6. Melting point, boiling point, and symmetry.

    PubMed

    Abramowitz, R; Yalkowsky, S H

    1990-09-01

    The relationship between the melting point of a compound and its chemical structure remains poorly understood. The melting point of a compound can be related to certain of its other physical chemical properties. The boiling point of a compound can be determined from additive constitutive properties, but the melting point can be estimated only with the aid of nonadditive constitutive parameters. The melting point of some non-hydrogen-bonding, rigid compounds can be estimated by the equation MP = 0.772 * BP + 110.8 * SIGMAL + 11.56 * ORTHO + 31.9 * EXPAN - 240.7 where MP is the melting point of the compound in Kelvin, BP is the boiling point, SIGMAL is the logarithm of the symmetry number, EXPAN is the cube of the eccentricity of the compound, and ORTHO indicates the number of groups that are ortho to another group.
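
    A small worked example of the quoted regression follows, with hypothetical inputs; the logarithm base for SIGMAL is assumed to be 10, since the abstract does not state it.

      import math

      def melting_point(bp, sigma, ortho, expan):
          """MP and BP in Kelvin; sigma is the molecular symmetry number."""
          sigmal = math.log10(sigma)          # assumed base-10 logarithm
          return 0.772 * bp + 110.8 * sigmal + 11.56 * ortho + 31.9 * expan - 240.7

      # Hypothetical rigid, non-hydrogen-bonding compound: BP = 450 K,
      # symmetry number 2, no ortho groups, eccentricity cubed = 0.1.
      print(melting_point(bp=450.0, sigma=2.0, ortho=0, expan=0.1))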

  7. Compressibility Effects in Aeronautical Engineering

    NASA Technical Reports Server (NTRS)

    Stack, John

    1941-01-01

    Compressible-flow research, while a relatively new field in aeronautics, is very old, dating back almost to the development of the first firearm. Over the last hundred years, researches have been conducted in the ballistics field, but these results have been of practically no use in aeronautical engineering because the phenomena that have been studied have been the more or less steady supersonic condition of flow. Some work that has been done in connection with steam turbines, particularly nozzle studies, has been of value. In general, however, understanding of compressible-flow phenomena has been very incomplete and permitted no real basis for the solution of aeronautical engineering problems in which the flow is likely to be unsteady because regions of both subsonic and supersonic speeds may occur. In the early phases of the development of the airplane, speeds were so low that the effects of compressibility could be justifiably ignored. During the last war and immediately after, however, propellers exhibited losses in efficiency as the tip speeds approached the speed of sound, and the first experiments of an aeronautical nature were therefore conducted with propellers. Results of these experiments indicated serious losses of efficiency, but aeronautical engineers were not seriously concerned at the time because it was generally possible to design propellers with quite low tip speeds. With the development of new engines having increased power and rotational speeds, however, the problems became of increasing importance.

  8. Modeling of compressible cake filtration

    SciTech Connect

    Abbound, N.M. (Dept. of Civil Engineering); Corapcioglu, M.Y. (Dept. of Civil Engineering)

    1993-10-15

    The transport of suspended solid particles in a liquid through porous media has importance from the viewpoint of engineering practice and industrial applications. Deposition of solid particles on a filter cloth or on a pervious porous medium forms the filter cakes. Following a literature survey, a governing equation for the cake thickness is obtained by considering an instantaneous material balance. In addition to the conservation of mass equations for the liquid, and for suspended and captured solid particles, functional relations among porosity, permeability, and pressure are obtained from literature and solved simultaneously. Later, numerical solutions for cake porosity, pore pressure, cake permeability, velocity of solid particles, concentration of suspended solid particles, and net rate of deposition are obtained. At each instant of time, the porosity decreases throughout the cake from the surface to the filter septum where it has the smallest value. As the cake thickness increases, the trends in pressure variation are similar to data obtained by other researchers. This comparison shows the validity of the theory and the associated solution presented. A sensitivity analysis shows higher pressure values at the filter septum for a less pervious membrane. Finally, a reduction in compressibility parameter provides a thicker cake, causes more particles to be captured inside the cake, and reduces the volumetric filtrate rate. The increase of solid velocity with the reduction in compressibility parameter shows that more rigid cakes compress less.

  9. COMPRESSION WAVES AND PHASE PLOTS: SIMULATIONS

    SciTech Connect

    Orlikowski, D; Minich, R

    2011-08-01

    Compression wave analysis started nearly 50 years ago with Fowles. Coperthwaite and Williams gave a method that helps identify simple and steady waves. We have been developing a method that describes the non-isentropic character of compression waves in general. One result of that work is a simple analysis tool. Our method helps clearly identify when a compression wave is a simple wave, a steady wave (shock), and when the compression wave is in transition. This affects the analysis of compression wave experiments and the resulting extraction of the high-pressure equation of state.

  10. Video compressive sensing using Gaussian mixture models.

    PubMed

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2014-11-01

    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.

  11. Efficacy of compression of different capacitance beds in the amelioration of orthostatic hypotension

    NASA Technical Reports Server (NTRS)

    Denq, J. C.; Opfer-Gehrking, T. L.; Giuliani, M.; Felten, J.; Convertino, V. A.; Low, P. A.

    1997-01-01

    Orthostatic hypotension (OH) is the most disabling and serious manifestation of adrenergic failure, occurring in the autonomic neuropathies, pure autonomic failure (PAF) and multiple system atrophy (MSA). No specific treatment is currently available for most etiologies of OH. A reduction in venous capacity, secondary to some physical counter maneuvers (e.g., squatting or leg crossing), or the use of compressive garments, can ameliorate OH. However, there is little information on the differential efficacy, or the mechanisms of improvement, engendered by compression of specific capacitance beds. We therefore evaluated the efficacy of compression of specific compartments (calves, thighs, low abdomen, calves and thighs, and all compartments combined), using a modified antigravity suit, on the end-points of orthostatic blood pressure, and symptoms of orthostatic intolerance. Fourteen patients (PAF, n = 9; MSA, n = 3; diabetic autonomic neuropathy, n = 2; five males and nine females) with clinical OH were studied. The mean age was 62 years (range 31-78). The mean +/- SEM orthostatic systolic blood pressure when all compartments were compressed was 115.9 +/- 7.4 mmHg, significantly improved (p < 0.001) over the head-up tilt value without compression of 89.6 +/- 7.0 mmHg. The abdomen was the only single compartment whose compression significantly reduced OH (p < 0.005). There was a significant increase of peripheral resistance index (PRI) with compression of abdomen (p < 0.001) or all compartments (p < 0.001); end-diastolic index and cardiac index did not change. We conclude that denervation increases vascular capacity, and that venous compression improves OH by reducing this capacity and increasing PRI. Compression of all compartments is the most efficacious, followed by abdominal compression, whereas leg compression alone was less effective, presumably reflecting the large capacity of the abdomen relative to the legs.

  12. Practicality of magnetic compression for plasma density control

    NASA Astrophysics Data System (ADS)

    Gueroult, Renaud; Fisch, Nathaniel J.

    2016-03-01

    Plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators [P. F. Schmit and N. J. Fisch, Phys. Rev. Lett. 109, 255003 (2012)]. Using particle in cell simulations with real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different and more complex than initially envisioned, these features still might be advantageous in particle accelerators.

  13. Practicality of magnetic compression for plasma density control

    DOE PAGES

    Gueroult, Renaud; Fisch, Nathaniel J.

    2016-03-16

    Here, plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators [P. F. Schmit and N. J. Fisch, Phys. Rev. Lett. 109, 255003 (2012)]. Using particle in cell simulations with real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different and more complex than initially envisioned, these features still might be advantageous in particle accelerators.

  14. Envera Variable Compression Ratio Engine

    SciTech Connect

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC) which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low

  15. The Compression Pathway of Quartz

    NASA Astrophysics Data System (ADS)

    Dera, P. K.; Thompson, R. M.; Downs, R. T.

    2011-12-01

    The important Earth material quartz may constitute as much as 20% of the upper continental crust. Quartz is composed solely of corner-sharing SiO4 silica tetrahedra, a primary building block of many of the Earth's crustal and mantle minerals, lunar and Martian minerals, and meteoritic minerals. Quartz is therefore an outstanding model material for investigating the response of this fundamental structural unit to changes in P, T, and x. These facts have spawned a vast literature of experimental and theoretical studies of quartz at ambient and non-ambient conditions. Investigations into the behavior of quartz at high pressure have revealed an anomalous distortion in the silicate tetrahedron with pressure not typically seen in other silicates. The tetrahedron assumes a very distinct geometry, becoming more like the Sommerville tetrahedron of O'Keeffe and Hyde (1996) as pressure increases. Traditionally, this distortion has been considered a compression mechanism for quartz, along with Si-O-Si angle-bending and a very small component of bond compression. However, tetrahedral volume decreases by only 1% between 0.59 GPa and 20.25 GPa, while unit cell volume decreases by 21%. Therefore, most of the compression in quartz is happening in tetrahedral voids, not in the silicate tetrahedron, and the distortion of the silicate tetrahedron may not be the direct consequence of decreasing volume in response to increasing pressure. The structure of quartz at high temperature and high pressure, including new structural refinements from synchrotron single-crystal data collected to 20.25 GPa, is compared to the following three hypothetical quartz crystals: (1) Ideal quartz with perfectly regular tetrahedra and the same volume and Si-O-Si angle as its observed equivalent. (2) Model quartz with the same Si-O-Si angle and cell parameters as its observed equivalent, derived from ideal by altering the axial ratio. (3) BCC quartz with a perfectly body-centered cubic arrangement of oxygen anions and

  16. Chapter 22: Compressed Air Evaluation Protocol

    SciTech Connect

    Benton, N.

    2014-11-01

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  17. Weighted compression of spectral color information.

    PubMed

    Laamanen, Hannu; Jetsu, Tuija; Jaaskelainen, Timo; Parkkinen, Jussi

    2008-06-01

    Spectral color information is used nowadays in many different applications. Accurate spectral images are usually very large files, but a proper compression method can reduce needed storage space remarkably with a minimum loss of information. In this paper we introduce a principal component analysis (PCA)-based compression method of spectral color information. In this approach spectral data is weighted with a proper weight function before forming the correlation matrix and calculating the eigenvector basis. First we give a general framework for how to use weight functions in compression of relevant color information. Then we compare the weighted compression method with the traditional PCA compression method by compressing and reconstructing the Munsell data set consisting of 1,269 reflectance spectra and the Pantone data set consisting of 922 reflectance spectra. Two different weight functions are proposed and tested. We show that weighting clearly improves retention of color information in the PCA-based compression process. PMID:18516149
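
    The sketch below illustrates the weighted-PCA idea in numpy: spectra are weighted per wavelength before the correlation matrix and eigenvector basis are formed. The weight vector here is an arbitrary nonzero placeholder, not one of the weight functions proposed in the paper.

      import numpy as np

      def weighted_pca_compress(spectra, weights, k):
          """spectra: (n_samples, n_wavelengths); weights: (n_wavelengths,), nonzero; keep k components."""
          w = np.asarray(weights, dtype=float)
          weighted = spectra * w                        # apply the weight function per wavelength
          corr = weighted.T @ weighted                  # correlation matrix of the weighted data
          vals, vecs = np.linalg.eigh(corr)
          basis = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
          return weighted @ basis, basis                # compressed coefficients and basis

      def reconstruct(coeffs, basis, weights):
          return (coeffs @ basis.T) / np.asarray(weights, dtype=float)   # undo the weighting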

  18. Compression research on the REINAS Project

    NASA Technical Reports Server (NTRS)

    Rosen, Eric; Macy, William; Montague, Bruce R.; Pi-Sunyer, Carles; Spring, Jim; Kulp, David; Long, Dean; Langdon, Glen, Jr.; Pang, Alex; Wittenbrink, Craig M.

    1995-01-01

    We present approaches to integrating data compression technology into a database system designed to support research of air, sea, and land phenomena of interest to meteorology, oceanography, and earth science. A key element of the Real-Time Environmental Information Network and Analysis System (REINAS) system is the real-time component: to provide data as soon as acquired. Compression approaches being considered for REINAS include compression of raw data on the way into the database, compression of data produced by scientific visualization on the way out of the database, compression of modeling results, and compression of database query results. These compression needs are being incorporated through client-server, API, utility, and application code development.

  19. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent-noise (SDN) sources, such as film-grain and speckle noise, on image compression, using JPEG (Joint Photographic Experts Group) standard image compression. For the improvement of compression ratios noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach an estimator designed specifically for the SDN model is used. In an alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.

  20. Influence of Tension-Compression Asymmetry on the Mechanical Behavior of AZ31B Magnesium Alloy Sheets in Bending

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Beeh, Elmar; Friedrich, Horst E.

    2016-03-01

    Magnesium alloys are promising materials for lightweight design in the automotive industry due to their high strength-to-mass ratio. This study investigates the influence of tension-compression asymmetry on the radius of curvature and energy absorption capacity of AZ31B-O magnesium alloy sheets in bending. The mechanical properties were characterized using tension, compression, and three-point bending tests. The material exhibits significant tension-compression asymmetry in terms of strength and strain hardening rate due to extension twinning in compression. The compressive yield strength is much lower than the tensile yield strength, while the strain hardening rate is much higher in compression. Furthermore, the tension-compression asymmetry in terms of r value (Lankford value) was also observed. The r value in tension is much higher than that in compression. The bending results indicate that the AZ31B-O sheet can outperform steel and aluminum sheets in terms of specific energy absorption in bending mainly due to its low density. In addition, the AZ31B-O sheet was deformed with a larger radius of curvature than the steel and aluminum sheets, which brings a benefit to energy absorption capacity. Finally, finite element simulation for three-point bending was performed using LS-DYNA and the results confirmed that the larger radius of curvature of a magnesium specimen is mainly attributed to the high strain hardening rate in compression.

  1. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We have studied the effect of value-of-interest transformation and found significant masking of artifacts when window is widened. PMID:25804842

  2. Algorithmic height compression of unordered trees.

    PubMed

    Ben-Naoum, Farah; Godin, Christophe

    2016-01-21

    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus limiting partially the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic as they are strictly nested within each other. We thus introduced a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then proposed an algorithm to detect these patterns and to merge them, thus leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures.
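
    The first, standard step mentioned above (merging isomorphic subtrees of an unordered tree into a DAG) can be sketched with canonical signatures, as below; the paper's further height compression via quasi-isomorphism is not shown.

      def compress_to_dag(children):
          """children: dict node -> list of child nodes; returns node -> DAG-class id."""
          signatures, class_of = {}, {}

          def visit(node):
              # sort child signatures so that unordered subtrees compare equal
              sig = tuple(sorted(visit(c) for c in children.get(node, [])))
              if sig not in signatures:
                  signatures[sig] = len(signatures)   # new equivalence class
              class_of[node] = signatures[sig]
              return signatures[sig]

          roots = set(children) - {c for cs in children.values() for c in cs}
          for r in roots:
              visit(r)
          return class_of

      # Two isomorphic subtrees ('b' and 'c') collapse to the same class id:
      print(compress_to_dag({"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"]}))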

  3. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they consume more storage space and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This paper presents an analysis of DCT-based compression. First, the principle of the DCT is described, since this widely used transform underlies practical image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab and the quality of the compressed images is analyzed. The DCT is of course not the only algorithm for image compression, and further algorithms can be expected to yield compressed images of higher quality; image compression technology will continue to be widely used in networks and communications.
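
    A minimal sketch of block-DCT compression follows; it zeroes small coefficients in each 8x8 block and inverse-transforms. Quantization tables and Huffman coding, as used in JPEG and discussed above, are omitted, and the threshold rule is an arbitrary illustration.

      import numpy as np
      from scipy.fft import dctn, idctn

      def block_dct_compress(image, block=8, keep_ratio=0.1):
          """Zero all but the largest coefficients in each block, then invert."""
          h, w = (d - d % block for d in image.shape)   # crop to whole blocks
          out = np.zeros((h, w))
          for y in range(0, h, block):
              for x in range(0, w, block):
                  coeffs = dctn(image[y:y+block, x:x+block], norm="ortho")
                  thresh = np.quantile(np.abs(coeffs), 1 - keep_ratio)
                  coeffs[np.abs(coeffs) < thresh] = 0.0   # discard small coefficients
                  out[y:y+block, x:x+block] = idctn(coeffs, norm="ortho")
          return out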

  4. Algorithmic height compression of unordered trees.

    PubMed

    Ben-Naoum, Farah; Godin, Christophe

    2016-01-21

    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus limiting partially the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic as they are strictly nested within each other. We thus introduced a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then proposed an algorithm to detect these patterns and to merge them, thus leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures. PMID:26551155

  5. Krylov methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Tidriri, M. D.

    1995-01-01

    We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
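
    The matrix-free Newton-Krylov idea can be sketched as below: Jacobian-vector products are approximated by finite differences of the residual and passed to GMRES through a LinearOperator. The residual function is a toy stand-in, not a compressible-flow discretization, and no preconditioner is shown.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):                       # placeholder nonlinear residual F(u)
          return u**3 + u - 1.0

      def newton_krylov_step(u, eps=1e-7):
          f0 = residual(u)
          def jv(v):                         # J(u) v ~ (F(u + eps v) - F(u)) / eps
              return (residual(u + eps * v) - f0) / eps
          J = LinearOperator((u.size, u.size), matvec=jv)
          du, info = gmres(J, -f0, atol=1e-10)
          return u + du

      u = np.full(5, 0.5)
      for _ in range(10):
          u = newton_krylov_step(u)
      print(u)                               # converges toward the root of u^3 + u = 1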

  6. Vapor Compression Distillation Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hutchens, Cindy F.

    2002-01-01

    One of the major requirements associated with operating the International Space Station is the transportation -- space shuttle and Russian Progress spacecraft launches -- necessary to re-supply station crews with food and water. The Vapor Compression Distillation (VCD) Flight Experiment, managed by NASA's Marshall Space Flight Center in Huntsville, Ala., is a full-scale demonstration of technology being developed to recycle crewmember urine and wastewater aboard the International Space Station and thereby reduce the amount of water that must be re-supplied. Based on results of the VCD Flight Experiment, an operational urine processor will be installed in Node 3 of the space station in 2005.

  7. Efficient compression of quantum information

    SciTech Connect

    Plesch, Martin; Buzek, Vladimir

    2010-03-15

    We propose a scheme for an exact efficient transformation of a tensor product state of many identically prepared qubits into a state of a logarithmically small number of qubits. Using a quadratic number of elementary quantum gates we transform N identically prepared qubits into a state, which is nontrivial only on the first [log2(N+1)] qubits. This procedure might be useful for quantum memories, as only a small portion of the original qubits has to be stored. Another possible application is in communicating a direction encoded in a set of quantum states, as the compressed state provides a highly effective method for such an encoding.

  8. Overset grids in compressible flow

    NASA Technical Reports Server (NTRS)

    Eberhardt, S.; Baganoff, D.

    1985-01-01

    Numerical experiments have been performed to investigate the importance of boundary data handling with overset grids in computational fluid dynamics. Experience in using embedded grid techniques in compressible flow has shown that shock waves which cross grid boundaries become ill defined and convergence is generally degraded. Numerical boundary schemes were studied to investigate the cause of these problems and a viable solution was generated using the method of characteristics to define a boundary scheme. The model test problem investigated consisted of a detached shock wave on a 2-dimensional Mach 2 blunt, cylindrical body.

  9. Shear waves in inhomogeneous, compressible fluids in a gravity field.

    PubMed

    Godin, Oleg A

    2014-03-01

    While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves also can be supported by moving fluids as well as quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere.

  10. Shear waves in inhomogeneous, compressible fluids in a gravity field.

    PubMed

    Godin, Oleg A

    2014-03-01

    While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves also can be supported by moving fluids as well as quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere. PMID:24606251

  11. Compression perpendicular-to-grain behaviour of wood

    NASA Astrophysics Data System (ADS)

    Tabarsa, Taghi

    successfully examined for softwood behaviour in radial compression. Cell wall properties (cell wall modulus and yield point) of a fast-grown white spruce were calculated using these mechanical models. Gross properties of a slow-grown white spruce and jack pine were predicted using the calculated cell wall properties and cell dimensions of these species. To predict the entire stress-strain response in radial compression, a parameter called the compression factor (CF) was introduced in this study. CF relates the gross strain to the number of collapsed cells and hence the location of collapse in the growth ring. The plateau region of the stress-strain curve was predicted using the CF parameter, and the final upward part of the stress-strain curve was predicted using the calculated latewood modulus. The predicted stress-strain curve was verified by experimental results. The effects of temperature on cell wall properties were determined through experiments. Empirical models were developed to describe these effects and were successfully incorporated in the mechanical models. The extended models can potentially be used to generate stress-strain curves of softwoods at any given temperature within the range studied in this project, based on cellular structure geometry and dimensions.

  12. Effect of Breast Compression on Lesion Characteristic Visibility with Diffraction-Enhanced Imaging

    SciTech Connect

    Faulconer, L.; Parham, C; Connor, D; Kuzmiak, C; Koomen, M; Lee, Y; Cho, K; Rafoth, J; Livasy, C; et al.

    2010-01-01

    Conventional mammography can not distinguish between transmitted, scattered, or refracted x-rays, thus requiring breast compression to decrease tissue depth and separate overlapping structures. Diffraction-enhanced imaging (DEI) uses monochromatic x-rays and perfect crystal diffraction to generate images with contrast based on absorption, refraction, or scatter. Because DEI possesses inherently superior contrast mechanisms, the current study assesses the effect of breast compression on lesion characteristic visibility with DEI imaging of breast specimens. Eleven breast tissue specimens, containing a total of 21 regions of interest, were imaged by DEI uncompressed, half-compressed, or fully compressed. A fully compressed DEI image was displayed on a soft-copy mammography review workstation, next to a DEI image acquired with reduced compression, maintaining all other imaging parameters. Five breast imaging radiologists scored image quality metrics considering known lesion pathology, ranking their findings on a 7-point Likert scale. When fully compressed DEI images were compared to those acquired with approximately a 25% difference in tissue thickness, there was no difference in scoring of lesion feature visibility. For fully compressed DEI images compared to those acquired with approximately a 50% difference in tissue thickness, across the five readers, there was a difference in scoring of lesion feature visibility. The scores for this difference in tissue thickness were significantly different at one rocking curve position and for benign lesion characterizations. These results should be verified in a larger study because when evaluating the radiologist scores overall, we detected a significant difference between the scores reported by the five radiologists. Reducing the need for breast compression might increase patient comfort during mammography. Our results suggest that DEI may allow a reduction in compression without substantially compromising clinical image

  13. Floating Point Control Library

    2007-08-02

    Floating Point Control is a library that allows for the manipulation of floating point unit exception masking functions to control exceptions in both the Streaming "Single Instruction, Multiple Data" Extensions 2 (SSE2) unit and the floating point unit simultaneously. FPC also provides macros to set floating point rounding and precision control.

  14. Compression creep of filamentary composites

    NASA Technical Reports Server (NTRS)

    Graesser, D. L.; Tuttle, M. E.

    1988-01-01

    Axial and transverse strain fields induced in composite laminates subjected to compressive creep loading were compared for several types of laminate layups. Unidirectional graphite/epoxy as well as multi-directional graphite/epoxy and graphite/PEEK layups were studied. Specimens with and without holes were tested. The specimens were subjected to compressive creep loading for a 10-hour period. In-plane displacements were measured using moire interferometry. A computer based data reduction scheme was developed which reduces the whole-field displacement fields obtained using moire to whole-field strain contour maps. Only slight viscoelastic response was observed in matrix-dominated laminates, except for one test in which catastrophic specimen failure occurred after a 16-hour period. In this case the specimen response was a complex combination of both viscoelastic and fracture mechanisms. No viscoelastic effects were observed for fiber-dominated laminates over the 10-hour creep time used. The experimental results for specimens with holes were compared with results obtained using a finite-element analysis. The comparison between experiment and theory was generally good. Overall strain distributions were very well predicted. The finite element analysis typically predicted slightly higher strain values at the edge of the hole, and slightly lower strain values at positions removed from the hole, than were observed experimentally. It is hypothesized that these discrepancies are due to nonlinear material behavior at the hole edge, which were not accounted for during the finite-element analysis.

  15. Hemifacial Spasm and Neurovascular Compression

    PubMed Central

    Lu, Alex Y.; Yeung, Jacky T.; Gerrard, Jason L.; Michaelides, Elias M.; Sekula, Raymond F.; Bulsara, Ketan R.

    2014-01-01

    Hemifacial spasm (HFS) is characterized by involuntary unilateral contractions of the muscles innervated by the ipsilateral facial nerve, usually starting around the eyes before progressing inferiorly to the cheek, mouth, and neck. Its prevalence is 9.8 per 100,000 persons with an average age of onset of 44 years. The accepted pathophysiology of HFS suggests that it is a disease process of the nerve root entry zone of the facial nerve. HFS can be divided into two types: primary and secondary. Primary HFS is triggered by vascular compression whereas secondary HFS comprises all other causes of facial nerve damage. Clinical examination and imaging modalities such as electromyography (EMG) and magnetic resonance imaging (MRI) are useful to differentiate HFS from other facial movement disorders and for intraoperative planning. The standard medical management for HFS is botulinum neurotoxin (BoNT) injections, which provides low-risk but limited symptomatic relief. The only curative treatment for HFS is microvascular decompression (MVD), a surgical intervention that provides lasting symptomatic relief by reducing compression of the facial nerve root. With a low rate of complications such as hearing loss, MVD remains the treatment of choice for HFS patients as intraoperative technique and monitoring continue to improve. PMID:25405219

  16. Fast spectrophotometry with compressive sensing

    NASA Astrophysics Data System (ADS)

    Starling, David; Storer, Ian

    2015-03-01

    Spectrophotometers and spectrometers have numerous applications in the physical sciences and engineering, resulting in a plethora of designs and requirements. A good spectrophotometer balances the need for high photometric precision, high spectral resolution, high durability and low cost. One way to address these design objectives is to take advantage of modern scanning and detection techniques. A common imaging method that has improved signal acquisition speed and sensitivity in limited signal scenarios is the single pixel camera. Such cameras utilize the sparsity of a signal to sample below the Nyquist rate via a process known as compressive sensing. Here, we show that a single pixel camera using compressive sensing algorithms and a digital micromirror device can replace the common scanning mechanisms found in virtually all spectrophotometers, providing a very low cost solution and improving data acquisition time. We evaluate this single pixel spectrophotometer by studying a variety of samples tested against commercial products. We conclude with an analysis of flame spectra and possible improvements for future designs.
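
    A toy sketch of the compressive-sensing principle behind a single-pixel instrument follows: measure y = Phi x with far fewer rows than unknowns and recover a sparse spectrum by iterative soft thresholding (ISTA). The spectrum, measurement patterns, and parameters are synthetic assumptions, not the instrument's actual acquisition or reconstruction code.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 256, 64                       # spectral channels, measurements
      x_true = np.zeros(n)
      x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1, 2, 5)   # sparse spectrum

      phi = rng.standard_normal((m, n)) / np.sqrt(m)   # DMD-like random patterns
      y = phi @ x_true                                 # single-pixel measurements

      def ista(y, phi, lam=0.01, iters=500):
          x = np.zeros(phi.shape[1])
          step = 1.0 / np.linalg.norm(phi, 2) ** 2     # 1 / Lipschitz constant
          for _ in range(iters):
              grad = phi.T @ (phi @ x - y)
              x = x - step * grad
              x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)   # soft threshold
          return x

      x_hat = ista(y, phi)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))    # relative error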

  17. Hemifacial spasm and neurovascular compression.

    PubMed

    Lu, Alex Y; Yeung, Jacky T; Gerrard, Jason L; Michaelides, Elias M; Sekula, Raymond F; Bulsara, Ketan R

    2014-01-01

    Hemifacial spasm (HFS) is characterized by involuntary unilateral contractions of the muscles innervated by the ipsilateral facial nerve, usually starting around the eyes before progressing inferiorly to the cheek, mouth, and neck. Its prevalence is 9.8 per 100,000 persons with an average age of onset of 44 years. The accepted pathophysiology of HFS suggests that it is a disease process of the nerve root entry zone of the facial nerve. HFS can be divided into two types: primary and secondary. Primary HFS is triggered by vascular compression whereas secondary HFS comprises all other causes of facial nerve damage. Clinical examination and imaging modalities such as electromyography (EMG) and magnetic resonance imaging (MRI) are useful to differentiate HFS from other facial movement disorders and for intraoperative planning. The standard medical management for HFS is botulinum neurotoxin (BoNT) injections, which provides low-risk but limited symptomatic relief. The only curative treatment for HFS is microvascular decompression (MVD), a surgical intervention that provides lasting symptomatic relief by reducing compression of the facial nerve root. With a low rate of complications such as hearing loss, MVD remains the treatment of choice for HFS patients as intraoperative technique and monitoring continue to improve.

  18. Complexity compression: nurses under fire.

    PubMed

    Krichbaum, Kathleen; Diemert, Carol; Jacox, Lynn; Jones, Ann; Koenig, Patty; Mueller, Christine; Disch, Joanne

    2007-01-01

    It has been documented that up to 40% of the workday of nurses is taken up by meeting the ever-increasing demands of the systems of healthcare delivery in which nurses are employed. These demands include the need for increasing documentation, for learning new and seemingly ever-changing procedures, and for adapting to turnover in management and administration. Attention to these issues also means that 40% of that workday is not available to patients. Believing that these increasing demands are affecting nurses' decisions to remain in nursing or to leave, a group of Minnesota nurses and nurse educators examined the work environments of nurses and the issues related to those environments. The result of this examination was discovery of a phenomenon affecting all nurses that may be central to the projected shortage of nurses. The phenomenon is complexity compression-what nurses experience when expected to assume additional, unplanned responsibilities while simultaneously conducting their multiple responsibilities in a condensed time frame. The phenomenon was validated by a group of 58 nurses who participated in focus groups that led to the identification of factors influencing the experience of complexity compression. These factors were clustered into six major themes: personal, environmental, practice, systems and technology, administration/management, and autonomy/control. Further validation studies are planned with the population of practicing professional nurses in the state of Minnesota.

  19. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor makes novel use of a high throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable detection statistics comparable to those obtained from uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating plume detection on compressed hyperspectral images is presented.

  20. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean demonstrate the high potential of this approach, with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate into both finite-difference and finite-element wave propagation codes.
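
    The basic storage trade-off can be shown with a minimal sketch, assuming a fixed temporal decimation factor and a fixed 8-bit re-quantization in place of the adaptive, error-predicting scheme the abstract describes; the synthetic wavefield, the decimation factor of 10, and the int8 precision below are illustrative choices, not the authors' implementation.

      import numpy as np

      # Synthetic forward wavefield u(t, x): a travelling wavelet stored in float64
      nt, nx = 2000, 400
      t = np.linspace(0, 1, nt)[:, None]
      x = np.linspace(0, 1, nx)[None, :]
      u = np.exp(-300 * (x - 0.8 * t) ** 2) * np.sin(60 * np.pi * (x - 0.8 * t))

      # Temporal compression: keep every k-th snapshot only
      k = 10
      u_coarse = u[::k]

      # Re-quantization: 8-bit integers scaled by each snapshot's peak amplitude
      scale = np.abs(u_coarse).max(axis=1, keepdims=True) + 1e-30
      q = np.round(u_coarse / scale * 127).astype(np.int8)

      # "Decompression": dequantize and re-interpolate back to the fine time axis
      u_deq = q.astype(float) / 127 * scale
      t_coarse = t[::k, 0]
      u_rec = np.empty_like(u)
      for j in range(nx):
          u_rec[:, j] = np.interp(t[:, 0], t_coarse, u_deq[:, j])

      factor = u.nbytes / q.nbytes
      err = np.linalg.norm(u - u_rec) / np.linalg.norm(u)
      print(f"compression factor ~{factor:.0f}x, relative L2 error {err:.2%}")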

  1. Atomic effect algebras with compression bases

    SciTech Connect

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-15

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  2. Research on compressive fusion by multiwavelet transform

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin

    2014-02-01

    A new strategy for image fusion is developed on the basis of block compressed sensing (BCS) and the multiwavelet transform (MWT). Since BCS with a structured random matrix requires little memory and enables fast computation, the images, which contain large amounts of data, are first compressively sampled block by block for fusion. Secondly, taking full advantage of multiwavelet properties such as symmetry, orthogonality, short support, and a higher number of vanishing moments, the compressive samples of the block images can be better described by the MWT. The compressive measurements are then fused with a linear weighting strategy based on the MWT decomposition. Finally, the fused compressive samples are reconstructed by the smoothed projection Landweber algorithm, with consideration of blocking artifacts. Experimental results show the validity of the proposed method, and a field test indicates that the compressive fusion gives resolution similar to traditional MWT fusion.
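
    A minimal sketch of the measurement-domain fusion step, under simplifying assumptions: a dense Gaussian matrix stands in for the structured random BCS matrix, the two source images are random arrays, and a minimum-norm least-squares inverse replaces the MWT-sparsity-based smoothed projection Landweber reconstruction.

      import numpy as np

      rng = np.random.default_rng(2)

      B, n_blocks = 8, 16            # 8x8 pixel blocks, 16 blocks per image
      N = B * B                      # unknowns per block
      M = 24                         # sub-Nyquist measurements per block
      Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # one matrix reused for all blocks

      # Two registered source "images", flattened into per-block row vectors
      img_a = rng.random((n_blocks, N))
      img_b = rng.random((n_blocks, N))

      # Block-wise compressive sampling of each source with the same matrix
      y_a = img_a @ Phi.T
      y_b = img_b @ Phi.T

      # Fusion directly in the measurement domain by linear weighting
      w_a, w_b = 0.6, 0.4
      y_fused = w_a * y_a + w_b * y_b

      # Placeholder per-block reconstruction (minimum-norm least squares);
      # the paper instead uses MWT sparsity and smoothed projection Landweber.
      fused_blocks = y_fused @ np.linalg.pinv(Phi).T
      print(fused_blocks.shape)      # (n_blocks, N): one fused vector per block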

  3. Industrial Compressed Air System Energy Efficiency Guidebook.

    SciTech Connect

    United States. Bonneville Power Administration.

    1993-12-01

    Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.

  4. Complex synthetic aperture radar data compression

    NASA Astrophysics Data System (ADS)

    Cirillo, Francis R.; Poehler, Paul L.; Schwartz, Debra S.; Rais, Houra

    2002-08-01

    Existing compression algorithms, primarily designed for visible electro-optical (EO) imagery, do not work well for Synthetic Aperture Radar (SAR) data. The best compression ratios achieved to date are less than 10:1 with minimal degradation to the phase data. Previously, phase data has been discarded with only magnitude data saved for analysis. Now that the importance of phase has been recognized for Interferometric Synthetic Aperture Radar (IFSAR), Coherent Change Detection (CCD), and polarimetry, requirements exist to preserve, transmit, and archive both components. Bandwidth and storage limitations on existing and future platforms make compression of this data a top priority. This paper presents results obtained using a new compression algorithm designed specifically to compress SAR imagery, while preserving both magnitude and phase information at compression ratios of 20:1 and better.

  5. SPS antenna pointing control

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1980-01-01

    The pointing control of a microwave antenna of the Satellite Power System was investigated emphasizing: (1) the SPS antenna pointing error sensing method; (2) a rigid body pointing control design; and (3) approaches for modeling the flexible body characteristics of the solar collector. Accuracy requirements for the antenna pointing control consist of a mechanical pointing control accuracy of three arc-minutes and an electronic phased array pointing accuracy of three arc-seconds. Results, based on the factors considered in the current analysis, show that the three arc-minute overall pointing control accuracy can be achieved in practice.

  6. Compression Techniques for Improved Algorithm Computational Performance

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Howell, Patricia A.; Winfree, William P.

    2005-01-01

    Analysis of thermal data requires the processing of large amounts of temporal image data. Processing the data for quantitative information can be time intensive, especially in the field, where inspection of large areas results in numerous data sets. By applying a temporal compression technique, improved algorithm performance can be obtained. In this study, analysis techniques are applied to compressed and non-compressed thermal data, and a comparison is made based on computational speed and defect signal-to-noise ratio.
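
    The abstract does not name the specific temporal compression technique; as an assumption, the sketch below uses a truncated SVD along the time axis (a common choice for thermographic sequences) to show how the frame dimension can be reduced before further analysis. The synthetic cooling-curve data and the chosen rank are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic thermography sequence: nt frames of ny x nx pixels (cooling curves + noise)
      nt, ny, nx = 300, 64, 64
      t = np.linspace(0.05, 3.0, nt)
      tau = 0.3 + 0.7 * rng.random((ny, nx))            # per-pixel decay constants
      data = np.exp(-t[:, None, None] / tau) + 0.01 * rng.standard_normal((nt, ny, nx))

      # Temporal compression: keep r principal temporal components (truncated SVD)
      X = data.reshape(nt, -1)                          # (time, pixels)
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      r = 5
      X_c = (U[:, :r] * s[:r]) @ Vt[:r]                 # rank-r approximation

      stored = r * (nt + ny * nx)                       # values kept: temporal + spatial factors
      ratio = X.size / stored
      err = np.linalg.norm(X - X_c) / np.linalg.norm(X)
      print(f"storage reduction ~{ratio:.1f}x, relative reconstruction error {err:.2%}")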

  7. Eccentric crank variable compression ratio mechanism

    DOEpatents

    Lawrence, Keith Edward; Moser, William Elliott; Roozenboom, Stephan Donald; Knox, Kevin Jay

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  8. Mammography compression force in New Zealand.

    PubMed

    Poletti, J L

    1994-06-01

    Maximum compression forces have been measured in New Zealand on 37 mammography machines, using a simple hydraulic device. The median measured maximum force was 145 N, and the range 58 to 230 N. Much greater attention needs to be paid to the setting of maximum force for compression devices by service personnel. Compression devices must be included in the quality assurance programme. Where the machine displays the applied force, the accuracy of the indicated force is poor for some machines. PMID:8074619

  9. Compressed data for the movie industry

    NASA Astrophysics Data System (ADS)

    Tice, Bradley S.

    2013-12-01

    The paper will present a compression algorithm that will allow for both random and non-random sequential binary strings of data to be compressed for storage and transmission of media information. The compression system has direct applications to the storage and transmission of digital media such as movies, television, audio signals and other visual and auditory signals needed for engineering practicalities in such industries.

  10. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
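
    A toy illustration of the progressive-transmission idea, assuming a plain 1-D Haar filter bank in place of the paper's subband coder and rate-allocation machinery: the coarse subband is sent first, and each additional detail subband refines the waveform on request.

      import numpy as np

      def haar_analysis(x, levels):
          """Split a 1-D signal into [coarse, detail_coarsest, ..., detail_finest]."""
          subbands = []
          a = x.astype(float)
          for _ in range(levels):
              even, odd = a[0::2], a[1::2]
              subbands.append((even - odd) / np.sqrt(2))   # detail (high-pass)
              a = (even + odd) / np.sqrt(2)                # approximation (low-pass)
          return [a] + subbands[::-1]

      def haar_synthesis(bands):
          """Rebuild the signal from the coarse band plus whichever details were sent."""
          a = bands[0]
          for d in bands[1:]:
              even = (a + d) / np.sqrt(2)
              odd = (a - d) / np.sqrt(2)
              a = np.empty(2 * a.size)
              a[0::2], a[1::2] = even, odd
          return a

      # A toy "seismic" trace; progressive transmission sends the coarse band first
      rng = np.random.default_rng(4)
      trace = np.cumsum(rng.standard_normal(1024))       # random-walk stand-in for a waveform
      bands = haar_analysis(trace, levels=4)

      for k in range(1, len(bands) + 1):
          # Refinement k: the first k subbands are available, the rest are zeroed
          partial = bands[:k] + [np.zeros_like(b) for b in bands[k:]]
          rec = haar_synthesis(partial)
          err = np.linalg.norm(trace - rec) / np.linalg.norm(trace)
          print(f"subbands sent: {k}/{len(bands)}, relative error {err:.3f}")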

  11. Compression algorithm for multideterminant wave functions.

    PubMed

    Weerasinghe, Gihan L; Ríos, Pablo López; Needs, Richard J

    2014-02-01

    A compression algorithm is introduced for multideterminant wave functions which can greatly reduce the number of determinants that need to be evaluated in quantum Monte Carlo calculations. We have devised an algorithm with three levels of compression, the least costly of which yields excellent results in polynomial time. We demonstrate the usefulness of the compression algorithm for evaluating multideterminant wave functions in quantum Monte Carlo calculations, whose computational cost is reduced by factors of between about 2 and over 25 for the examples studied. We have found evidence of sublinear scaling of quantum Monte Carlo calculations with the number of determinants when the compression algorithm is used.

  12. Evaluation and Management of Vertebral Compression Fractures

    PubMed Central

    Alexandru, Daniela; So, William

    2012-01-01

    Compression fractures affect many individuals worldwide. An estimated 1.5 million vertebral compression fractures occur every year in the US. They are common in elderly populations, and 25% of postmenopausal women are affected by a compression fracture during their lifetime. Although these fractures rarely require hospital admission, they have the potential to cause significant disability and morbidity, often causing incapacitating back pain for many months. This review provides information on the pathogenesis and pathophysiology of compression fractures, as well as clinical manifestations and treatment options. Among the available treatment options, kyphoplasty and percutaneous vertebroplasty are two minimally invasive techniques to alleviate pain and correct the sagittal imbalance of the spine. PMID:23251117

  13. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
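
    A toy software model of the row organization described in the patent abstract, with zlib standing in for the hardware compression and decompression logic; the tag layout (block id, offset, length) is an illustrative assumption, not the patented format.

      import zlib

      class CompressedRow:
          """One cache row: tag blocks describing variable-size compressed data blocks."""
          def __init__(self):
              self.tags = []      # (block_id, offset, length) per compressed block
              self.data = b""

          def store(self, block_id, raw: bytes):
              payload = zlib.compress(raw)            # compression logic on the store path
              self.tags.append((block_id, len(self.data), len(payload)))
              self.data += payload

          def load(self, block_id) -> bytes:
              for bid, off, length in self.tags:      # tag lookup, then decompression logic
                  if bid == block_id:
                      return zlib.decompress(self.data[off:off + length])
              raise KeyError(block_id)

      row = CompressedRow()
      row.store(0, b"A" * 4096)                       # highly compressible block
      row.store(1, bytes(range(256)) * 16)            # less compressible block
      print(len(row.data), row.load(0) == b"A" * 4096)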

  14. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  15. [Irreversible image compression in radiology. Current status].

    PubMed

    Pinto dos Santos, D; Jungmann, F; Friese, C; Düber, C; Mildenberger, P

    2013-03-01

    Due to the increasing amounts of data in radiology, methods for image compression are both economically and technically interesting. Irreversible image compression allows a markedly higher reduction of data volume than reversible compression algorithms but is accompanied by a certain amount of mathematical and visual loss of information. Various national and international radiological societies have published recommendations for the use of irreversible image compression. The degree of acceptable compression varies across modalities and regions of interest. The DICOM standard supports JPEG, which achieves compression through tiling, DCT/DWT, and quantization. Although mathematical loss occurs due to rounding errors and the reduction of high-frequency information, the resulting visual degradation is relatively low. It is still unclear where to implement irreversible compression in the radiological workflow, as only a few studies have analyzed the impact of irreversible compression on specialized image postprocessing. As long as this remains within the limits recommended by the German Radiological Society, irreversible image compression could be implemented directly at the imaging modality, as this would comply with § 28 of the German X-ray Ordinance (RöV). PMID:23456043

  16. Single-pixel complementary compressive sampling spectrometer

    NASA Astrophysics Data System (ADS)

    Lan, Ruo-Ming; Liu, Xue-Feng; Yao, Xu-Ri; Yu, Wen-Kai; Zhai, Guang-Jie

    2016-05-01

    A new type of compressive spectroscopy technique employing a complementary sampling strategy is reported. In a single sequence of spectral compressive sampling, positive and negative measurements are performed, in which sensing matrices with a complementary relationship are used. The restricted isometry property condition necessary for accurate recovery of compressive sampling theory is satisfied mathematically. Compared with the conventional single-pixel spectroscopy technique, the complementary compressive sampling strategy can achieve spectral recovery of considerably higher quality within a shorter sampling time. We also investigate the influence of the sampling ratio and integration time on the recovery quality.

  17. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10(-2) Å), we can compress a trajectory file with 1-2 fs time steps to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
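
    A minimal sketch of a linear interframe predictor with residual quantization, in the spirit of the scheme described above but not the authors' code: each frame is predicted by linear extrapolation from the two previously reconstructed frames, the residual is quantized with a fixed step (so the maximum positional error is half the step), and the integer residuals are entropy-coded with zlib. The synthetic trajectory and the step size are illustrative assumptions.

      import numpy as np, zlib

      rng = np.random.default_rng(5)

      # Synthetic trajectory: n_frames x n_atoms x 3 positions (Angstrom), smooth drift
      n_frames, n_atoms = 200, 500
      drift = np.cumsum(0.02 * rng.standard_normal((n_frames, n_atoms, 3)), axis=0)
      traj = 10.0 * rng.random((1, n_atoms, 3)) + drift

      step = 2e-3                    # quantization step; max error is step/2 per coordinate

      residuals = np.empty(traj.shape, dtype=np.int32)
      rec_prev1 = rec_prev2 = None
      for i, frame in enumerate(traj):
          if i == 0:
              pred = np.zeros_like(frame)
          elif i == 1:
              pred = rec_prev1
          else:
              pred = 2 * rec_prev1 - rec_prev2       # linear extrapolation
          q = np.round((frame - pred) / step).astype(np.int32)
          residuals[i] = q
          rec = pred + q * step                      # decoder-side reconstruction (stays in sync)
          rec_prev2, rec_prev1 = rec_prev1, rec

      payload = zlib.compress(residuals.tobytes(), 9)
      print("compressed size: %.1f%% of raw float32"
            % (100 * len(payload) / traj.astype(np.float32).nbytes))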

  18. Spinal cord compression due to ethmoid adenocarcinoma.

    PubMed

    Johns, D R; Sweriduk, S T

    1987-10-15

    Adenocarcinoma of the ethmoid sinus is a rare tumor which has been epidemiologically linked to woodworking in the furniture industry. It has a low propensity to metastasize and has not been previously reported to cause spinal cord compression. A symptomatic epidural spinal cord compression was confirmed on magnetic resonance imaging (MRI) scan in a former furniture worker with widely disseminated metastases. The clinical features of ethmoid sinus adenocarcinoma and neoplastic spinal cord compression, and the comparative value of MRI scanning in the neuroradiologic diagnosis of spinal cord compression are reviewed.

  19. Pulsed spheromak reactor with adiabatic compression

    SciTech Connect

    Fowler, T K

    1999-03-29

    Extrapolating from the Pulsed Spheromak reactor and the LINUS concept, we consider ignition achieved by injecting a conducting liquid into the flux conserver to compress a low temperature spheromak created by gun injection and ohmic heating. The required energy to achieve ignition and high gain by compression is comparable to that required for ohmic ignition and the timescale is similar so that the mechanical power to ignite by compression is comparable to the electrical power to ignite ohmically. Potential advantages and problems are discussed. Like the High Beta scenario achieved by rapid fueling of an ohmically ignited plasma, compression must occur on timescales faster than Taylor relaxation.

  20. A biologically inspired model for signal compression

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Abbott, Derek

    2007-12-01

    A model of a biological sensory neuron stimulated by a noisy analog information source is considered. It is demonstrated that action-potential generation by the neuron model can be described in terms of lossy compression theory. Lossy compression is generally characterized by (i) how much distortion is introduced, on average, due to a loss of information, and (ii) the 'rate,' or the amount of compression. Conventional compression theory is used to measure the performance of the model in terms of both distortion and rate, and the tradeoff between each. The model's applicability to a number of situations relevant to biomedical engineering, including cochlear implants, and bio-sensors is discussed.

  1. Compressible Lagrangian hydrodynamics without Lagrangian cells

    NASA Astrophysics Data System (ADS)

    Clark, Robert A.

    The partial differential Eqs. [2.1, 2.2, and 2.3], along with the equation of state 2.4, which describe the time evolution of compressible fluid flow, can be solved without the use of a Lagrangian mesh. The method follows embedded fluid points and uses finite difference approximations to ∇P and ∇·u to update p, u, and e. We have demonstrated that the method can accurately calculate highly distorted flows without difficulty. The finite difference approximations are not unique; improvements may be found in the near future. The neighbor selection is not unique, but the one being used at present appears to do an excellent job. The method could be directly extended to three dimensions. One drawback to the method is the failure to explicitly conserve mass, momentum, and energy. In fact, at any given time, the mass is not defined. We must perform an auxiliary calculation by integrating the density field over space to obtain mass, energy, and momentum. However, in all cases where we have done this, we have found the drift in these quantities to be no more than a few percent.

  2. Human Identification Using Compressed ECG Signals.

    PubMed

    Camara, Carmen; Peris-Lopez, Pedro; Tapiador, Juan E

    2015-11-01

    As a result of the increased demand for improved lifestyles and the growing number of senior citizens over the age of 65, new home care services are in demand. Simultaneously, the medical sector is increasingly becoming a target of cybercriminals due to the potential value of users' medical information. The use of biometrics seems an effective deterrent for many such attacks. In this paper, we propose the use of electrocardiograms (ECGs) for the identification of individuals. For instance, for a telecare service, a user could be authenticated using the information extracted from her ECG signal. The majority of ECG-based biometric systems extract information (fiducial features) from the characteristic points of an ECG wave. In this article, we propose the use of non-fiducial features via the Hadamard Transform (HT). We show how the use of highly compressed signals (only 24 HT coefficients) is enough to unequivocally identify individuals with high performance (classification accuracy of 0.97 and identification system errors on the order of 10(-2)). PMID:26364201
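
    A minimal sketch of the compressed-template idea, with several stated assumptions: scipy's Walsh-Hadamard matrix in natural ordering, synthetic per-subject beats instead of real ECG, the first 24 coefficients as the compressed feature vector, and a nearest-neighbour match instead of the classifier used in the paper.

      import numpy as np
      from scipy.linalg import hadamard

      rng = np.random.default_rng(6)

      n = 256                          # samples per heartbeat window (power of two)
      H = hadamard(n) / np.sqrt(n)     # orthonormal Walsh-Hadamard transform matrix

      def synthetic_beat(subject_seed, noise=0.02):
          """Toy per-subject ECG beat: a few bumps whose shape depends on the subject."""
          srng = np.random.default_rng(subject_seed)
          t = np.linspace(0, 1, n)
          centers, widths, amps = srng.random(3), 0.02 + 0.05 * srng.random(3), srng.random(3)
          beat = sum(a * np.exp(-((t - c) / w) ** 2) for a, c, w in zip(amps, centers, widths))
          return beat + noise * rng.standard_normal(n)

      def features(beat, k=24):
          """Keep only the first k Hadamard coefficients (the compressed template)."""
          return (H @ beat)[:k]

      # Enrolment: one template per subject; identification: nearest template wins
      subjects = list(range(10))
      templates = {s: features(synthetic_beat(s)) for s in subjects}

      probe_subject = 7
      probe = features(synthetic_beat(probe_subject))
      best = min(subjects, key=lambda s: np.linalg.norm(probe - templates[s]))
      print("identified subject:", best, "(true:", probe_subject, ")")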

  3. Sparsity and Compressed Coding in Sensory Systems

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2014-01-01

    Considering that many natural stimuli are sparse, can a sensory system evolve to take advantage of this sparsity? We explore this question and show that significant downstream reductions in the numbers of neurons transmitting stimuli observed in early sensory pathways might be a consequence of this sparsity. First, we model an early sensory pathway using an idealized neuronal network comprised of receptors and downstream sensory neurons. Then, by revealing a linear structure intrinsic to neuronal network dynamics, our work points to a potential mechanism for transmitting sparse stimuli, related to compressed-sensing (CS) type data acquisition. Through simulation, we examine the characteristics of networks that are optimal in sparsity encoding, and the impact of localized receptive fields beyond conventional CS theory. The results of this work suggest a new network framework of signal sparsity, freeing the notion from any dependence on specific component-space representations. We expect our CS network mechanism to provide guidance for studying sparse stimulus transmission along realistic sensory pathways as well as engineering network designs that utilize sparsity encoding. PMID:25144745

  4. Compression of thick laminated composite beams with initial impact-like damage

    NASA Technical Reports Server (NTRS)

    Breivik, N. L.; Guerdal, Z.; Griffin, O. H., Jr.

    1992-01-01

    While the study of compression after impact of laminated composites has been under consideration for many years, the complexity of the damage initiated by low velocity impact has not lent itself to simple predictive models for compression strength. The damage modes due to non-penetrating, low velocity impact by large diameter objects can be simulated using quasi-static three-point bending. The resulting damage modes are less coupled and more easily characterized than actual impact damage modes. This study includes the compression testing of specimens with well documented initial damage states obtained from three-point bend testing. Compression strengths and failure modes were obtained for quasi-isotropic stacking sequences from 0.24 to 1.1 inches thick with both grouped and interspersed ply stacking. Initial damage prior to compression testing was divided into four classifications based on the type, extent, and location of the damage. These classifications are multiple through-thickness delaminations, isolated delamination, damage near the surface, and matrix cracks. Specimens from each classification were compared to specimens tested without initial damage in order to determine the effects of the initial damage on the final compression strength and failure modes. A finite element analysis was used to aid in the understanding and explanation of the experimental results.

  5. Tomography and Simulation of Microstructure Evolution of a Closed-Cell Polymer Foam in Compression

    SciTech Connect

    Daphalapurkar, N.P.; Hanan, J.C.; Phelps, N.B.; Bale, H.; Lu, H.

    2010-10-25

    Closed-cell foams in compression exhibit complex deformation characteristics that remain incompletely understood. In this paper, the microstructural evolution of closed-cell polymethacrylimide foam was simulated in compression undergoing elastic, compaction, and densification stages. The three-dimensional microstructure of the foam is determined using Micro-Computed Tomography ({micro}-CT), and is converted to material points for simulations using the material point method (MPM). The properties of the cell-walls are determined from nanoindentation on the wall of the foam. MPM simulations captured the three stages of deformation in foam compression. Features of the microstructures from simulations are compared qualitatively with the in-situ observations of the foam under compression using {micro}-CT. The stress-strain curve simulated from MPM compares reasonably with the experimental results. Based on the results from {micro}-CT and MPM simulations, it was found that elastic buckling of cell-walls occurs even in the elastic regime of compression. Within the elastic region, less than 35% of the cell-wall material carries the majority of the compressive load. In the experiment, a shear band was observed as a result of collapse of cells in a weak zone. From this collapsed weak zone, a compaction (collapse) wave was seen traveling, which eventually led to the collapse of the entire foam cell-structure. Overall, this methodology will allow prediction of material properties for microstructures, driving the optimization of processing and performance in foam materials.

  6. Effect of Compression Ratio on Perception of Time Compressed Phonemically Balanced Words in Kannada and Monosyllables

    PubMed Central

    Prabhu, Prashanth; Sujan, Mirale Jagadish; Rakshith, Satish

    2015-01-01

    The present study examined the perception of time-compressed speech and the effect of compression ratio for phonemically balanced (PB) word lists in Kannada and monosyllables. The test was administered to 30 normal-hearing individuals at compression ratios of 40%, 50%, 60%, 70% and 80% for PB words in Kannada and monosyllables. The results of the study showed that the speech identification scores for time-compressed speech decreased with increasing compression ratio. The scores were better for monosyllables compared to PB words, especially at higher compression ratios. The study provides speech identification scores at different compression ratios for PB words and monosyllables in individuals with normal hearing. The results of the study also showed that the scores did not vary across gender for all the compression ratios for both stimuli. The same test material needs to be administered to a clinical population with central auditory processing disorder for clinical validation of the present results. PMID:26557363

  7. PROPOSED PREDICTIVE EQUATION FOR DIAGONAL COMPRESSIVE CAPACITY OF REINFORCED CONCRETE BEAMS

    NASA Astrophysics Data System (ADS)

    Tantipidok, Patarapol; Kobayashi, Chikaharu; Matsumoto, Koji; Watanabe, Ken; Niwa, Junichiro

    The current JSCE standard specifications for the diagonal compressive capacity of RC beams only consider the effect of the compressive strength of concrete and are not applicable to high-strength concrete. This research aims to investigate the effect of various parameters on the diagonal compressive capacity and to propose a predictive equation. Twenty-five I-beams were tested in three-point bending. The effects of concrete strength, stirrup ratio and spacing, shear span to effective depth ratio, flange width to web width ratio, and effective depth were verified. The diagonal compressive capacity had a linear relationship to stirrup spacing regardless of stirrup diameter. The effect of spacing became more significant with higher concrete strength; thus, the effects of concrete strength and stirrup spacing were interrelated. On the other hand, the other parameters had only slight effects on the diagonal compressive capacity. Finally, a simple empirical equation for predicting the diagonal compressive capacity of RC beams was proposed. The proposed equation is adequately simple and provides a more accurate estimation of the diagonal compressive capacity than the existing equations.

  8. Magnetic Flux Compression in Plasmas

    NASA Astrophysics Data System (ADS)

    Velikovich, A. L.

    2012-10-01

    Magnetic flux compression (MFC) as a method for producing ultra-high pulsed magnetic fields originated in the 1950s by Sakharov et al. at Arzamas in the USSR (now VNIIEF, Russia) and by Fowler et al. at Los Alamos in the US. The highest magnetic field produced by an explosively driven MFC generator, 28 MG, was reported by Boyko et al. of VNIIEF. The idea of using MFC to increase the magnetic field in a magnetically confined plasma to 3-10 MG, relaxing the strict requirements on the plasma density and Lawson time, gave rise to the research area known as MTF in the US and MAGO in Russia. To make a difference in ICF, a magnetic field of ˜100 MG should be generated via MFC by a plasma liner as a part of the capsule compression scenario on a laser or pulsed power facility. This approach was first suggested in the mid-1980s by Liberman and Velikovich in the USSR and Felber in the US. It has not been obvious from the start that it could work at all, given that so many mechanisms exist for anomalously fast penetration of magnetic field through plasma. And yet, many experiments stimulated by this proposal since 1986, mostly using pulsed-power drivers, demonstrated reasonably good flux compression up to ˜42 MG, although diagnostics of magnetic fields of such magnitude in HED plasmas are still problematic. New interest in MFC in plasmas has emerged with the advancement of new drivers, diagnostic methods and simulation tools. Experiments on MFC in a deuterium plasma filling a cylindrical plastic liner imploded by OMEGA laser beams, led by Knauer, Betti et al. at LLE, produced peak fields of 36 MG. The novel MagLIF approach to low-cost, high-efficiency ICF pursued by Herrmann, Slutz, Vesey et al. at Sandia involves pulsed-power-driven MFC to a peak field of ˜130 MG in a DT plasma. A review of the progress, current status and future prospects of MFC in plasmas is presented.

  9. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding.

  10. Laser observations of the moon - Normal points for 1973

    NASA Technical Reports Server (NTRS)

    Mulholland, J. D.; Shelus, P. J.; Silverberg, E. C.

    1975-01-01

    McDonald Observatory lunar laser-ranging observations for 1973 are presented in the form of compressed normal points, and amendments for the 1969-1972 data set are given. Observations of the reflector mounted on the Soviet roving vehicle Lunokhod 2 have also been included.

  11. 13. Detail, upper chord connection point on upstream side of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Detail, upper chord connection point on upstream side of truss, showing connection of upper chord, laced vertical compression member, knee-braced strut, counters, and laterals. - Red Bank Creek Bridge, Spanning Red Bank Creek at Rawson Road, Red Bluff, Tehama County, CA

  12. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    The Automatic Ground Collision Avoidance System (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique, resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight while maximizing its contribution to fighter safety.
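
    The binary-tree tip-tilt encoder itself is not spelled out in the abstract, so the sketch below substitutes a simple per-tile plane ("tip/tilt") fit as a stand-in compressor and focuses on the error analysis the study describes: computing the magnitude and distribution of the elevation error between compressed and original terrain. The synthetic terrain and tile size are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic terrain patch (metres) standing in for a DTED tile
      n = 256
      y, x = np.mgrid[0:n, 0:n] / n
      terrain = (300 * np.sin(3 * x) * np.cos(2 * y)
                 + 40 * rng.standard_normal((n, n)).cumsum(0).cumsum(1) / n)

      def plane_fit_compress(z, tile=16):
          """Stand-in for tip/tilt compression: replace each tile by its best-fit plane."""
          out = np.empty_like(z)
          ty, tx = np.mgrid[0:tile, 0:tile].reshape(2, -1)
          A = np.column_stack([ty, tx, np.ones(tile * tile)])   # 3 coefficients per tile
          for i in range(0, z.shape[0], tile):
              for j in range(0, z.shape[1], tile):
                  block = z[i:i + tile, j:j + tile].ravel()
                  coef, *_ = np.linalg.lstsq(A, block, rcond=None)
                  out[i:i + tile, j:j + tile] = (A @ coef).reshape(tile, tile)
          return out

      compressed = plane_fit_compress(terrain)
      err = compressed - terrain
      print(f"max |error| = {np.abs(err).max():.2f} m, RMS = {err.std():.2f} m")
      print("error percentiles (5/50/95):", np.percentile(err, [5, 50, 95]).round(2))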

  13. Genetic disorders producing compressive radiculopathy.

    PubMed

    Corey, Joseph M

    2006-11-01

    Back pain is a frequent complaint seen in neurological practice. In evaluating back pain, neurologists are asked to evaluate patients for radiculopathy, determine whether they may benefit from surgery, and help guide management. Although disc herniation is the most common etiology of compressive radiculopathy, there are many other causes, including genetic disorders. This article is a discussion of genetic disorders that cause or contribute to radiculopathies. These genetic disorders include neurofibromatosis, Paget's disease of bone, and ankylosing spondylitis. Numerous genetic disorders can also lead to deformities of the spine, including spinal muscular atrophy, Friedreich's ataxia, Charcot-Marie-Tooth disease, familial dysautonomia, idiopathic torsional dystonia, Marfan's syndrome, and Ehlers-Danlos syndrome. However, the extent of radiculopathy caused by spine deformities is essentially absent from the literature. Finally, recent investigation into the heritability of disc degeneration and lumbar disc herniation suggests a significant genetic component in the etiology of lumbar disc disease. PMID:17048153

  14. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, Richard W.; Hrubesh, Lawrence W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50-800 kg/m.sup.3 (0.05-0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization.

  15. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, R.W.; Hrubesh, L.W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner is disclosed. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50--800 kg/m{sup 3} (0.05--0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization. 4 figs.

  16. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1995-01-01

    Liquid hydrazine (N{sub 2}H{sub 4}) is a propellant used by the Air Force and NASA for aerospace propulsion and power systems. Because the propellant modules that contain the hydrazine can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted in an attempt to investigate the detonability of liquid hydrazine; however, the experimental results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares the results to those obtained in experiment. We also present the methodology of our approach, which includes chemical kinetic experiments, chemical equilibrium calculations, and characterization of the equation of state of liquid hydrazine.

  17. Grid-free compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter

    2015-04-01

    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. On a discrete angular grid, the CS reconstruction degrades due to basis mismatch when the DOAs do not coincide with the angular directions on the grid. To overcome this limitation, a continuous formulation of the DOA problem is employed and an optimization procedure is introduced, which promotes sparsity on a continuous optimization variable. The DOA estimation problem with infinitely many unknowns, i.e., source locations and amplitudes, is solved over a few optimization variables with semidefinite programming. The grid-free CS reconstruction provides high-resolution imaging even with non-uniform arrays, single-snapshot data and under noisy conditions as demonstrated on experimental towed array data.

  18. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293

  20. High energy femtosecond pulse compression

    NASA Astrophysics Data System (ADS)

    Lassonde, Philippe; Mironov, Sergey; Fourmaux, Sylvain; Payeur, Stéphane; Khazanov, Efim; Sergeev, Alexander; Kieffer, Jean-Claude; Mourou, Gerard

    2016-07-01

    An original method for retrieving the Kerr nonlinear index was proposed and implemented for TF12 heavy flint glass. Then, a defocusing lens made of this highly nonlinear glass was used to generate an almost constant spectral broadening across a Gaussian beam profile. The lens was designed with spherical curvatures chosen in order to match the laser beam profile, such that the product of the thickness with intensity is constant. This solid-state optics in combination with chirped mirrors was used to decrease the pulse duration at the output of a terawatt-class femtosecond laser. We demonstrated compression of a 33 fs pulse to 16 fs with 170 mJ energy.

  2. Variable density compressed image sampling.

    PubMed

    Wang, Zhongmin; Arce, Gonzalo R

    2010-01-01

    Compressed sensing (CS) provides an efficient way to acquire and reconstruct natural images from a limited number of linear projection measurements leading to sub-Nyquist sampling rates. A key to the success of CS is the design of the measurement ensemble. This correspondence focuses on the design of a novel variable density sampling strategy, where the a priori information of the statistical distributions that natural images exhibit in the wavelet domain is exploited. The proposed variable density sampling has the following advantages: 1) the generation of the measurement ensemble is computationally efficient and requires less memory; 2) the necessary number of measurements for image reconstruction is reduced; 3) the proposed sampling method can be applied to several transform domains and leads to simple implementations. Extensive simulations show the effectiveness of the proposed sampling method.
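
    A small sketch of one way to realize a variable density measurement pattern, under the assumption (not taken from the paper) that the sampling probability decays as a power law with distance from the low-frequency origin, reflecting the concentration of natural-image energy at coarse scales; the decay exponent and target sampling rate are illustrative.

      import numpy as np

      rng = np.random.default_rng(8)

      n = 256                      # side length of the 2-D coefficient/measurement grid
      target_fraction = 0.25       # aim to keep ~25% of the locations

      # Distance of each location from the low-frequency origin
      fy = np.fft.fftfreq(n)[:, None]
      fx = np.fft.fftfreq(n)[None, :]
      r = np.sqrt(fx ** 2 + fy ** 2)

      # Variable density: probability ~ (1 + r/r0)^(-p), high near DC, low at high frequency
      p, r0 = 3.0, 0.03
      pdf = (1.0 + r / r0) ** (-p)
      pdf *= target_fraction * pdf.size / pdf.sum()   # rescale toward the target rate
      pdf = np.clip(pdf, 0.0, 1.0)

      mask = rng.random((n, n)) < pdf                 # Bernoulli draw per location
      low = r < np.median(r)                          # the lower-frequency half of the grid
      print(f"overall sampled fraction: {mask.mean():.3f}, "
            f"in the low-frequency half: {mask[low].mean():.3f}")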

  3. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to roughly 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
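
    A toy version of the wavelet + scalar quantization pipeline, with stated simplifications: a separable Haar transform stands in for the 9/7 filters, a single uniform quantizer replaces the per-subband bit allocation, and a first-order entropy estimate stands in for the Huffman coder; the synthetic ridge image and quantization step are illustrative assumptions.

      import numpy as np

      def haar_step(a):
          """One Haar analysis step along axis 0: returns (low, high) halves."""
          return (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)

      def haar2d(img, levels=3):
          """Separable 2-D Haar transform, recursing on the LL band (Mallat layout)."""
          out = img.astype(float).copy()
          n = img.shape[0]
          for _ in range(levels):
              block = out[:n, :n]
              lo, hi = haar_step(block)                    # rows
              block = np.vstack([lo, hi])
              lo, hi = haar_step(block.T)                  # columns
              out[:n, :n] = np.hstack([lo.T, hi.T])
              n //= 2
          return out

      rng = np.random.default_rng(9)
      n = 256
      y, x = np.mgrid[0:n, 0:n] / n
      img = 128 + 100 * np.sin(2 * np.pi * 12 * (x + 0.3 * y)) + 10 * rng.standard_normal((n, n))

      coeffs = haar2d(img)
      q_step = 8.0
      q = np.round(coeffs / q_step).astype(int)            # uniform scalar quantization

      # First-order entropy of the quantized coefficients ~ bits/pixel after entropy coding
      _, counts = np.unique(q, return_counts=True)
      prob = counts / counts.sum()
      bpp = -(prob * np.log2(prob)).sum()
      print(f"~{bpp:.2f} bits/pixel vs 8 bpp raw -> roughly {8 / bpp:.1f}:1 with ideal coding")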

  4. Method of making a non-lead hollow point bullet

    DOEpatents

    Vaughn, Norman L.; Lowden, Richard A.

    2003-10-07

    The method of making a non-lead hollow point bullet has the steps of a) compressing an unsintered powdered metal composite core into a jacket, b) punching a hollow cavity tip portion into the core, c) seating an insert, the insert having a hollow point tip and a tail protrusion, on top of the core such that the tail protrusion couples with the hollow cavity tip portion, and d) swaging the open tip of the jacket.

  5. Survey of data compression techniques

    SciTech Connect

    Gryder, R.; Hake, K.

    1991-09-01

    PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  7. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  8. Bunch length compression method for free electron lasers to avoid parasitic compressions

    SciTech Connect

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R.sub.56>0), and 3) compensating for aberrations by using nonlinear magnets in the compressor beam line.

  9. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... Laboratory, in conjunction with the Hydrogen Storage team of the EERE Fuel Cell Technologies Program, will be.../hydrogenandfuelcells/wkshp_compressedcryo.html . The purpose of the compressed hydrogen workshop on Monday February... Research and Development Strategies for Compressed & Cryo- Compressed Hydrogen Storage Workshops......

  10. Bioimpedance of soft tissue under compression.

    PubMed

    Dodde, R E; Bull, J L; Shih, A J

    2012-06-01

    In this paper, compression-dependent bioimpedance measurements of porcine spleen tissue are presented. Using a Cole-Cole model, nonlinear compositional changes in extracellular and intracellular makeup, related to a loss of fluid from the tissue, are identified during compression. Bioimpedance measurements were made using a custom tetrapolar probe and bioimpedance circuitry. As the tissue is increasingly compressed up to 50%, both intracellular and extracellular resistances increase while bulk membrane capacitance decreases. Increasing compression to 80% results in an increase in intracellular resistance and bulk membrane capacitance while extracellular resistance decreases. Tissues compressed incrementally to 80% show a decreased extracellular resistance of 32%, an increased intracellular resistance of 107%, and an increased bulk membrane capacitance of 64% compared to their uncompressed values. Intracellular resistance exhibits double asymptotic curves when plotted against the peak tissue pressure during compression, possibly indicating two distinct phases of mechanical change in the tissue during compression. Based on these findings, differing theories as to what is happening at a cellular level during high tissue compression are discussed, including the possibility of cell rupture and mass exudation of cellular material.
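
    The Cole-Cole fit itself can be sketched with scipy's curve_fit on synthetic data; the parameter values, the noise level, and the usual Fricke-circuit mapping from (R0, Rinf) to extracellular and intracellular resistances below are illustrative assumptions, not the paper's measured values.

      import numpy as np
      from scipy.optimize import curve_fit

      def cole_cole(omega, r0, rinf, tau, alpha):
          """Cole-Cole impedance Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)^alpha)."""
          z = rinf + (r0 - rinf) / (1.0 + (1j * omega * tau) ** alpha)
          return np.concatenate([z.real, z.imag])        # stacked for real-valued fitting

      # Synthetic tetrapolar sweep: R0 120 ohm, Rinf 40 ohm, plus measurement noise
      rng = np.random.default_rng(10)
      f = np.logspace(2, 6, 40)                          # 100 Hz .. 1 MHz
      omega = 2 * np.pi * f
      true = (120.0, 40.0, 2e-6, 0.85)
      data = cole_cole(omega, *true) + 0.3 * rng.standard_normal(2 * f.size)

      popt, _ = curve_fit(cole_cole, omega, data, p0=(100, 30, 1e-6, 0.8))
      r0, rinf, tau, alpha = popt
      re = r0                                            # extracellular resistance (Fricke model)
      ri = r0 * rinf / (r0 - rinf)                       # intracellular branch resistance
      print(f"R0={r0:.1f} ohm, Rinf={rinf:.1f} ohm, tau={tau:.2e} s, alpha={alpha:.2f}")
      print(f"-> extracellular ~{re:.1f} ohm, intracellular ~{ri:.1f} ohm")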

  11. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.
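
    As context for the decomposition stage described above, the sketch below applies a 2D stationary wavelet transform to one image slice with the PyWavelets library and checks perfect reconstruction. The wavelet choice, decomposition level, and use of PyWavelets are assumptions; the block entropy-coding stage of the proposed codec is not shown.

```python
# Sketch of the 2D stationary wavelet transform (SWT) stage only; the block
# entropy-coding stage described in the abstract is not shown. Wavelet choice
# and decomposition level are illustrative assumptions.
import numpy as np
import pywt

slice_2d = np.random.rand(256, 256)            # stand-in for one MRI slice

# Undecimated (stationary) 2D wavelet transform; side lengths must be
# divisible by 2**level.
coeffs = pywt.swt2(slice_2d, wavelet="bior4.4", level=2)

# Perfect reconstruction check (lossless up to floating-point error).
recon = pywt.iswt2(coeffs, wavelet="bior4.4")
print(np.max(np.abs(recon - slice_2d)))
```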

  12. A Comparative Study of Compression Video Technology.

    ERIC Educational Resources Information Center

    Keller, Chris A.; And Others

    The purpose of this study was to provide an overview of compression devices used to increase the cost effectiveness of teleconferences by reducing satellite bandwidth requirements for the transmission of television pictures and accompanying audio signals. The main body of the report describes the comparison study of compression rates and their…

  13. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increases, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than when simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
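
    Much of the gain over gzipping the whole file comes from compressing each homogeneous column on its own. The toy sketch below illustrates that effect with plain zlib on NumPy columns; it is not the FITS tiled-table convention itself, and the synthetic columns are assumptions.

```python
# Toy illustration of why compressing table columns separately tends to beat
# compressing the interleaved rows as one stream. Plain zlib on NumPy arrays;
# this is NOT the FITS tiled-table convention, just the underlying idea.
import numpy as np
import zlib

rng = np.random.default_rng(0)
n = 100_000
counts = rng.poisson(5, n).astype(np.int32)        # low-entropy integer column
times = np.sort(rng.uniform(0.0, 1e5, n))          # smooth floating-point column

row_major = np.rec.fromarrays([counts, times], names="counts,times").tobytes()
whole = len(zlib.compress(row_major))
per_column = len(zlib.compress(counts.tobytes())) + len(zlib.compress(times.tobytes()))
print(whole, per_column)                           # per-column is typically smaller
```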

  14. Sudden Viscous Dissipation of Compressing Turbulence

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-03-11

    Here we report that compression of turbulent plasma can amplify the turbulent kinetic energy if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion.

  15. Aligned genomic data compression via improved modeling.

    PubMed

    Ochoa, Idoia; Hernaez, Mikel; Weissman, Tsachy

    2014-12-01

    With the release of the latest Next-Generation Sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing the whole genome of a human is expected to drop to a mere $1000. This milestone in sequencing history marks the era of affordable sequencing of individuals and opens the doors to personalized medicine. In accord, unprecedented volumes of genomic data will require storage for processing. There will be dire need not only of compressing aligned data, but also of generating compressed files that can be fed directly to downstream applications to facilitate the analysis of and inference on the data. Several approaches to this challenge have been proposed in the literature; however, focus thus far has been on the low coverage regime and most of the suggested compressors are not based on effective modeling of the data. We demonstrate the benefit of data modeling for compressing aligned reads. Specifically, we show that, by working with data models designed for the aligned data, we can improve considerably over the best compression ratio achieved by previously proposed algorithms. Our results indicate that the Pareto-optimal barrier for compression rate and speed claimed by Bonfield and Mahoney (2013) [Bonfield JK and Mahoney MV, Compression of FASTQ and SAM format sequencing data, PLOS ONE, 8(3):e59190, 2013.] does not apply for high coverage aligned data. Furthermore, our improved compression ratio is achieved by splitting the data in a manner conducive to operations in the compressed domain by downstream applications. PMID:25395305

  17. College Students' Preference for Compressed Speech Lectures.

    ERIC Educational Resources Information Center

    Primrose, Robert A.

    To test student reactions to compressed-speech lectures, tapes for a general education course in oral communication were compressed to 49 to 77 percent of original time. Students were permitted to check them out via a dial access retrieval system. Checkouts and use of tapes were compared with student grades at semester's end. No significant…

  18. Compression and fast retrieval of SNP data

    PubMed Central

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-01-01

    Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564
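
    A toy sketch of idea (i) above, coding each SNP in a linkage-disequilibrium block as its differences from a reference SNP. The 0/1/2 genotype coding and the position-plus-value encoding are illustrative assumptions, not the published file format.

```python
# Toy difference coding of SNPs against a reference SNP within an LD block.
# Genotypes are coded 0/1/2; this is an illustration, not the SNPack format.
import numpy as np

def diff_encode(block):
    """block: (n_snps, n_subjects) genotype matrix; row 0 is the reference SNP."""
    ref = block[0]
    encoded = []
    for row in block[1:]:
        pos = np.flatnonzero(row != ref)       # few mismatches when LD is strong
        encoded.append((pos, row[pos]))        # store positions and differing genotypes
    return ref, encoded

block = np.array([[0, 1, 2, 0, 0, 1],
                  [0, 1, 2, 0, 1, 1],          # differs from the reference at subject 4
                  [0, 1, 2, 0, 0, 1]])         # identical to the reference
ref, encoded = diff_encode(block)
print([(p.tolist(), v.tolist()) for p, v in encoded])   # [([4], [1]), ([], [])]
```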

  19. LOW-VELOCITY COMPRESSIBLE FLOW THEORY

    EPA Science Inventory

    The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...

  20. Compressible turbulent boundary layer interaction experiments

    NASA Technical Reports Server (NTRS)

    Settles, G. S.; Bogdonoff, S. M.

    1981-01-01

    Four phases of research results are reported: (1) experiments on the compressible turbulent boundary layer flow in a streamwise corner; (2) the two dimensional (2D) interaction of incident shock waves with a compressible turbulent boundary layer; (3) three dimensional (3D) shock/boundary layer interactions; and (4) cooperative experiments at Princeton and numerical computations at NASA-Ames.

  1. Recoil Experiments Using a Compressed Air Cannon

    ERIC Educational Resources Information Center

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  2. Hardware compression using common portions of data

    DOEpatents

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.

  3. CRUSH: The NSI data compression utility

    NASA Technical Reports Server (NTRS)

    Seiler, ED

    1991-01-01

    CRUSH is a data compression utility that provides the user with several lossless compression techniques available in a single application. It is intended that the future development of CRUSH will depend upon feedback from the user community to identify new features and capabilities desired by the users. CRUSH provides an extension to the UNIX Compress program and the various VMS implementations of Compress that many users are familiar with. An important capability added by CRUSH is the inclusion of additional compression techniques and the option of automatically determining the best technique for a given data file. The CRUSH software is written in C and is designed to run on both VMS and UNIX systems. VMS files that are compressed will regain their full file characteristics upon decompression. To the extent possible, compressed files can be transferred between VMS and UNIX systems, and thus be decompressed on a different system than they were compressed on. Version 1 of CRUSH is currently available. This version is a VAX VMS implementation. Version 2, which has the full range of capabilities for both VMS and UNIX implementations, will be available shortly.

  4. Magnetic Bunch Compression for a Compact Compton Source

    SciTech Connect

    Gamage, B.; Satogata, Todd J.

    2013-12-01

    A compact electron accelerator suitable for Compton source applications is in design at the Center for Accelerator Science at Old Dominion University and Jefferson Lab. Here we discuss two options for transverse magnetic bunch compression and final focus, each involving a 4-dipole chicane with M_56 tunable over a range of 1.5-2.0 m with independent tuning of the final focus to an interaction-point β* = 5 mm. One design has no net bending, while the other has net bending of 90 degrees and is suitable for compact corner placement.
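
    For orientation, in the usual linear single-particle picture (a textbook relation quoted here for context, neglecting uncorrelated energy spread, and not a statement about this particular design), a chicane with momentum compaction M_56 acting on a bunch with linear energy chirp h compresses the bunch length as:

```latex
% Linear bunch compression through a chicane with momentum compaction M_{56},
% for a bunch with linear energy chirp h = d\delta/dz (uncorrelated energy
% spread neglected).
\sigma_{z,\mathrm{f}} = \left|\,1 + h\,M_{56}\,\right|\,\sigma_{z,\mathrm{i}},
\qquad
C = \frac{\sigma_{z,\mathrm{i}}}{\sigma_{z,\mathrm{f}}} = \frac{1}{\left|\,1 + h\,M_{56}\,\right|}.
```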

  5. Insertion Profiles of 4 Headless Compression Screws

    PubMed Central

    Hart, Adam; Harvey, Edward J.; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A.

    2013-01-01

    Purpose In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. Methods The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. Results The peak compression occurs at an insertion depth of −3.1 mm, −2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of −2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2N, 0.233 ± 0.010 Nm for the Synthes headless compression screws. Conclusions All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of −2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Clinical relevance Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws

  6. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (Multislice CT and Digital Mammography, for example), as well as the implementation and performance of PACS and Teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former, there is no loss of information, the latter presents concerns since there is a loss of information. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus of opinion on the use of irreversible compression in primary diagnosis; however, they are generally positive on the notion of the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversible compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the

  7. Compressive Structured Light for Recovering Inhomogeneous Participating Media.

    PubMed

    Gu, Jinwei; Nayar, Shree K; Grinspun, Eitan; Belhumeur, Peter N; Ramamoorthi, Ravi

    2013-03-01

    We propose a new method named compressive structured light for recovering inhomogeneous participating media. Whereas conventional structured light methods emit coded light patterns onto the surface of an opaque object to establish correspondence for triangulation, compressive structured light projects patterns into a volume of participating medium to produce images which are integral measurements of the volume density along the line of sight. For a typical participating medium encountered in the real world, the integral nature of the acquired images enables the use of compressive sensing techniques that can recover the entire volume density from only a few measurements. This makes the acquisition process more efficient and enables reconstruction of dynamic volumetric phenomena. Moreover, our method requires the projection of multiplexed coded illumination, which has the added advantage of increasing the signal-to-noise ratio of the acquisition. Finally, we propose an iterative algorithm to correct for the attenuation of the participating medium during the reconstruction process. We show the effectiveness of our method with simulations as well as experiments on the volumetric recovery of multiple translucent layers, 3D point clouds etched in glass, and the dynamic process of milk drops dissolving in water.
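
    The reconstruction step at the heart of this approach is sparse recovery from a small number of linear (integral) measurements. A minimal NumPy sketch using plain iterative soft-thresholding (ISTA) is shown below; the measurement matrix, sparsity level, regularization weight, and solver are illustrative assumptions, not the authors' reconstruction pipeline.

```python
# Minimal compressive-sensing recovery sketch: recover a sparse vector x from
# m << n random linear measurements y = A @ x using ISTA (iterative
# soft-thresholding). All sizes and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                      # signal length, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                            # "integral" measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):                      # ISTA iterations
    g = x + (A.T @ (y - A @ x)) / L       # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative recovery error
```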

  8. Avalanches in compressed porous SiO(2)-based materials.

    PubMed

    Nataf, Guillaume F; Castillo-Villa, Pedro O; Baró, Jordi; Illa, Xavier; Vives, Eduard; Planes, Antoni; Salje, Ekhard K H

    2014-08-01

    The failure dynamics in SiO(2)-based porous materials under compression, namely the synthetic glass Gelsil and three natural sandstones, has been studied for slowly increasing compressive uniaxial stress with rates between 0.2 and 2.8 kPa/s. The measured collapse dynamics is similar to that of Vycor, which is another synthetic porous SiO(2) glass similar to Gelsil but with a different porous mesostructure. Compression occurs by jerks of strain release and a major collapse at the failure point. The acoustic emission and shrinking of the samples during jerks are measured and analyzed. The energies of acoustic emission events, their durations, and the waiting times between events show that the failure process follows avalanche criticality with power law statistics over ca. 4 decades with a power law exponent ε ≃ 1.4 for the energy distribution. This exponent is consistent with the mean-field value for the collapse of granular media. Besides the absence of length, energy, and time scales, we demonstrate the existence of aftershock correlations during the failure process.

  9. Application of compressive sensing to radar altimeter design

    NASA Astrophysics Data System (ADS)

    Zhang, Yunhua; Dong, Xiao; Zhai, Wenshuai

    2015-10-01

    We propose to apply the compressive sensing technique to the design of a satellite radar altimeter in order to increase the sampling time window (STW) while keeping the same data rate, so as to enhance the tracking robustness of the altimeter. A satellite radar altimeter measures the range between the satellite platform that carries it and the averaged sea surface with centimeter-level accuracy. The rising edge of the received waveform contains important information about the sea surface; for example, the larger the slope of the waveform, the smoother the sea surface. In addition, the half-power point of the rising edge carries the range information. Because the rising edge occupies only a few of the range bins illuminated by the long pulse signal, the signal is sparse in this sense, and the compressive sensing technique is therefore applicable. Altimeter echoes are simulated and the waveforms are reconstructed both by the traditional method and by the compressive sensing (CS) method; the two agree very well with each other. The advantage of using CS is that the sampling time window can be increased without increasing the data rate, so the tracking capability can be enhanced without sacrificing resolution.

  10. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  11. Compression, cochlear implants, and psychophysical laws

    NASA Astrophysics Data System (ADS)

    Zeng, Fan-Gang

    2001-05-01

    Cochlear compression contributes significantly to sharp frequency tuning and wide dynamic range in audition. The physiological mechanism underlying the compression has been traced to the outer hair cell function. Electric stimulation of the auditory nerve in cochlear implants bypasses this compression function, serving as a research tool to delineate the peripheral and central contributions to auditory functions. In this talk, I will compare psychophysical performance between acoustic and electric hearing in intensity, frequency, and time processing, and pay particular attention to the data that demonstrate the role of cochlear compression. Examples include both the cochlear-implant listeners' extremely narrow dynamic range and poor pitch discrimination and their exquisite sensitivity to changes in amplitude and phase. A unified view on the complementary contributions of cochlear compression and central expansion will be developed to account for Weber's law and Stevens' power law.
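
    For reference, the two psychophysical laws mentioned at the end are conventionally written as follows (standard textbook forms, not notation taken from the talk):

```latex
% Weber's law: the just-noticeable intensity increment is a constant fraction of intensity.
\frac{\Delta I}{I} = k_W
% Stevens' power law: perceived magnitude grows as a power of stimulus intensity.
\psi(I) = k \, I^{\,a}
```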

  12. Compressed bitmap indices for efficient query processing

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2001-09-30

    Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiencies of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression. During query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by reducing compression effectiveness and improving operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster while using only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
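
    A toy sketch of the run-length idea behind word-aligned bitmap compression codes such as those compared above. Real schemes pack runs into machine words and operate directly on the compressed form; the code below is a simplified illustration, not the authors' scheme or the Byte-aligned Bitmap Code.

```python
# Toy run-length compression of a bitmap index column. Real word-aligned
# schemes pack runs into machine words; this is a simplified illustration.
import numpy as np

def rle_encode(bits):
    """Encode a 0/1 array as (value, run_length) pairs."""
    change = np.flatnonzero(np.diff(bits)) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(bits)]))
    return [(int(bits[s]), int(e - s)) for s, e in zip(starts, ends)]

# A sparse bitmap (e.g., "column value == 42") compresses to very few runs.
bitmap = np.zeros(1_000_000, dtype=np.uint8)
bitmap[500_000:500_010] = 1
runs = rle_encode(bitmap)
print(len(bitmap), "bits ->", len(runs), "runs:", runs)
```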

  13. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
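
    The first application described above is essentially a set of closed-form perfect-gas relations; a minimal sketch of such a calculator is given below using the standard textbook isentropic and normal-shock formulas (this is not code from the NASA applications).

```python
# Minimal compressible-flow "calculator": isentropic stagnation ratios and
# normal-shock jump conditions for a perfect gas. Standard textbook relations.

def isentropic_ratios(mach, gamma=1.4):
    """Return (T0/T, p0/p, rho0/rho) for a given Mach number."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    return t_ratio, t_ratio ** (gamma / (gamma - 1.0)), t_ratio ** (1.0 / (gamma - 1.0))

def normal_shock(mach1, gamma=1.4):
    """Return (M2, p2/p1) across a normal shock with upstream Mach number M1 > 1."""
    m2 = ((1.0 + 0.5 * (gamma - 1.0) * mach1 ** 2) /
          (gamma * mach1 ** 2 - 0.5 * (gamma - 1.0))) ** 0.5
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (mach1 ** 2 - 1.0)
    return m2, p_ratio

print(isentropic_ratios(2.0))   # (1.8, 7.824..., 4.346...)
print(normal_shock(2.0))        # (0.577..., 4.5)
```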

  14. Detection and Characterization of Motion in Video Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2003-01-01

    The movement of objects in video sequences comprises a type of spatiotemporal redundancy that can be decreased mathematically to facilitate video compression. This observation holds particularly in the case of periodic motion, for example, bipedal or quadrupedal locomotion or repetitive gestures. Previously published motion detection techniques were based on optical flow, interframe differences represented in terms of transform coefficient perturbations, or changes in eigenvalues between frames in a video sequence. However, such methods have deficits that include sensitivity to noise, burdensome computational requirements (e.g., floating point operations), and prohibitive instability in the presence of spatial or temporal interframe discontinuities. In this paper, we discuss several techniques of motion detection in two-dimensional images of three-dimensional scenes -- pointwise tracking of constant-intensity pixels, region-based vector field characterization of apparent motion, and correlation-based detection. In the latter category is a technique called Interframe Similarity Matrices (ISMs). ISMs were developed and successfully applied by Yacoob, Black, and Davis to address the challenging problem of detecting human and animal motion in surveillance video sequences. In particular, given an N-frame video sequence, an NxN-element interframe correlation matrix can be constructed and Fourier-transformed to obtain an N/2-element power spectrum of interframe periodicities. Different actions (e.g., walking vs. running) and various actors (e.g., quadruped versus human) tend to be characterized by distinct spatiotemporal spectra, and can often be distinguished from one another. Since each spectrum can be computed from a sequence of small image regions, it is possible to represent interframe motion by a pixel tagging technique, thus implementing detection, segmentation, and representation. If there are K objects with M pixels per frame having B bits per pixel (bpp) in N
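
    A minimal sketch of the interframe-similarity idea described above: build an N x N matrix of pairwise frame correlations and examine its Fourier spectrum for periodic motion. The synthetic frame sequence, the normalized-correlation similarity measure, and the row-wise FFT are illustrative assumptions rather than the published ISM algorithm.

```python
# Sketch of an interframe similarity matrix (ISM) for detecting periodic motion:
# correlate every pair of frames, then Fourier-transform the similarities.
import numpy as np

rng = np.random.default_rng(1)
n_frames, h, w, period = 64, 32, 32, 8           # 8-frame motion cycle

yy, xx = np.mgrid[0:h, 0:w]
blob = np.exp(-((xx - w // 2) ** 2 + (yy - h // 2) ** 2) / 20.0)
frames = np.array([np.roll(blob, 2 * (t % period), axis=1)
                   + 0.01 * rng.normal(size=(h, w)) for t in range(n_frames)])

flat = frames.reshape(n_frames, -1)
flat -= flat.mean(axis=1, keepdims=True)
flat /= np.linalg.norm(flat, axis=1, keepdims=True)
ism = flat @ flat.T                              # N x N interframe correlation matrix

spectrum = np.abs(np.fft.rfft(ism[0]))           # periodicities along one row
print(np.argmax(spectrum[1:]) + 1)               # dominant cycle count, expect n_frames/period = 8
```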

  15. Point by Point: Adding up Motivation

    ERIC Educational Resources Information Center

    Marchionda, Denise

    2010-01-01

    Students often view their course grades as a mysterious equation of teacher-given grades, teacher-given grace, and some other ethereal components based on luck. However, giving students the power to earn points based on numerous daily/weekly assignments and attendance makes the grading process objective and personal, freeing the instructor to…

  16. Point specificity in acupuncture

    PubMed Central

    2012-01-01

    The existence of point specificity in acupuncture is controversial, because many acupuncture studies using this principle to select control points have found that sham acupoints have similar effects to those of verum acupoints. Furthermore, the results of pain-related studies based on visual analogue scales have not supported the concept of point specificity. In contrast, hemodynamic, functional magnetic resonance imaging and neurophysiological studies evaluating the responses to stimulation of multiple points on the body surface have shown that point-specific actions are present. This review article focuses on clinical and laboratory studies supporting the existence of point specificity in acupuncture and also addresses studies that do not support this concept. Further research is needed to elucidate the point-specific actions of acupuncture. PMID:22373514

  17. Diagnosis and management of iliac vein compression syndrome.

    PubMed

    Shebel, Nancy D; Whalen, Chyrle C

    2005-03-01

    Iliac vein compression syndrome (IVCS) is the most probable cause of iliofemoral deep venous thrombosis (DVT). One half to two thirds of patients with left-sided iliofemoral DVT have intraluminal webs or spurs from chronic extrinsic compression of the left iliac vein at the crossing point of the right common iliac artery. Approximately 2% to 5% of those with chronic deep venous insufficiency of the left leg may have IVCS. IVCS occurs when compression of the common iliac vein is severe enough to inhibit the rate of venous outflow. In its more severe manifestation, IVCS is known to cause acute iliofemoral DVT. IVCS is caused by the combination of compression and the vibratory pressure of the right iliac artery on the iliac vein that is pinched between the artery and the pelvic bone. With the advent of catheter-directed thrombolytic therapy for patients presenting with iliofemoral DVT, the underlying cause has been unveiled and IVCS is gaining recognition. Patients presenting with symptoms of chronic venous insufficiency often fail conservative treatment, and because of their crippling symptoms, they may have a high rate of work absence or are on permanent disability. If IVCS can be identified as the cause and corrected, the patients' quality of life would improve. With the advent of endovascular stenting, the underlying cause can be easily corrected, and long-term patency is acceptable. Diagnosis can be made by being highly suspicious when patients present in either the acute or chronic state and selecting the best diagnostic tool to confirm the diagnosis. This article discusses the prevalence of IVCS, its significance for the affected population, and the relevance of recognition, and reviews the best methods of its diagnosis and treatment. Special emphasis is placed on diagnostic tools and their efficacy, and our results to date are reported. PMID:15741959

  18. Weakly relativistic and ponderomotive effects on self-focusing and self-compression of laser pulses in near critical plasmas

    SciTech Connect

    Bokaei, B.; Niknam, A. R.

    2014-10-15

    The spatiotemporal dynamics of high power laser pulses in near critical plasmas are studied taking into account the effects of relativistic and ponderomotive nonlinearities. First, within a one-dimensional analysis, the effects of initial parameters such as laser intensity, plasma density, and plasma electron temperature on the self-compression mechanism are discussed. The results illustrate that the ponderomotive nonlinearity obstructs the relativistic self-compression above a certain intensity value. Moreover, the results indicate the existence of a turning-point temperature at which the compression process is strongest. Next, the three-dimensional analysis of laser pulse propagation is investigated by coupling the self-focusing equation with the self-compression one. It is shown that in contrast to the case in which only the relativistic nonlinearity is considered, in the presence of ponderomotive nonlinearity, the self-compression mechanism obstructs the self-focusing and leads to an increase of the laser spot size.

  19. Confluence of the right internal iliac vein into a compressed left common iliac vein.

    PubMed

    Caggiati, Alberto; Amore, Miguel; Sedati, Pietro

    2016-03-01

    The authors describe the abnormal confluence of the right internal iliac vein into a left common iliac vein compressed by the overlying right common iliac artery. The prevalence of this combination of abnormalities, evaluated in cadavers and in living subjects by CT, was 0.9%. The possible obstacle to venous pelvic return by these anomalies is pointed out.

  20. The Effect of Transition Modeling on the Prediction of Compressible Deep Dynamic Stall

    NASA Technical Reports Server (NTRS)

    Geissler, W.; Chandrasekhara, M. S.; Platzer, M. F.; Carr, L. W.; Davis, Sanford S. (Technical Monitor)

    1997-01-01

    The importance of transition modeling in the computation of compressible, unsteady separated flows is discussed. The study showed that it is critical to predict the experimentally attained transition point properly in order to obtain good agreement with data at the same Mach number and Reynolds number.

  1. Sugar Determination in Foods with a Radially Compressed High Performance Liquid Chromatography Column.

    ERIC Educational Resources Information Center

    Ondrus, Martin G.; And Others

    1983-01-01

    Advocates use of Waters Associates Radial Compression Separation System for high performance liquid chromatography. Discusses instrumentation and reagents, outlining procedure for analyzing various foods and discussing typical student data. Points out potential problems due to impurities and pump seal life. Suggests use of ribose as internal…

  2. Static Compression of Tetramethylammonium Borohydride

    SciTech Connect

    Dalton, Douglas Allen; Somayazulu, M.; Goncharov, Alexander F.; Hemley, Russell J.

    2011-11-15

    Raman spectroscopy and synchrotron X-ray diffraction are used to examine the high-pressure behavior of tetramethylammonium borohydride (TMAB) to 40 GPa at room temperature. The measurements reveal weak pressure-induced structural transitions around 5 and 20 GPa. Rietveld analysis and Le Bail fits of the powder diffraction data based on known structures of tetramethylammonium salts indicate that the transitions are mediated by orientational ordering of the BH₄⁻ tetrahedra followed by tilting of the (CH₃)₄N⁺ groups. X-ray diffraction patterns obtained during pressure release suggest reversibility with a degree of hysteresis. Changes in the Raman spectrum confirm that these transitions are not accompanied by bonding changes between the two ionic species. At ambient conditions, TMAB does not possess dihydrogen bonding, and Raman data confirms that this feature is not activated upon compression. The pressure-volume equation of state obtained from the diffraction data gives a bulk modulus [K₀ = 5.9(6) GPa, K′₀ = 9.6(4)] slightly lower than that observed for ammonia borane. Raman spectra obtained over the entire pressure range (spanning over 40% densification) indicate that the intramolecular vibrational modes are largely coupled.
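
    The abstract quotes K₀ and K′₀ but does not state which pressure-volume equation-of-state form was fitted; the third-order Birch-Murnaghan form commonly used for such static-compression data is reproduced below purely for orientation (its use here is an assumption, not taken from the paper).

```latex
% Third-order Birch-Murnaghan equation of state (a common choice for fitting
% static-compression P(V) data; quoted for orientation only).
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
\left\{1 + \frac{3}{4}\left(K_0' - 4\right)\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\}
```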

  3. PHELIX for flux compression studies

    SciTech Connect

    Turchi, Peter J; Rousculp, Christopher L; Reinovsky, Robert E; Reass, William A; Griego, Jeffrey R; Oro, David M; Merrill, Frank E

    2010-06-28

    PHELIX (Precision High Energy-density Liner Implosion eXperiment) is a concept for studying electromagnetic implosions using proton radiography. This approach requires a portable pulsed power and liner implosion apparatus that can be operated in conjunction with an 800 MeV proton beam at the Los Alamos Neutron Science Center. The high resolution (< 100 micron) provided by proton radiography combined with similar precision of liner implosions driven electromagnetically can permit close comparisons of multi-frame experimental data and numerical simulations within a single dynamic event. To achieve a portable implosion system for use at high energy-density in a proton laboratory area requires sub-megajoule energies applied to implosions only a few cms in radial and axial dimension. The associated inductance changes are therefore relatively modest, so a current step-up transformer arrangement is employed to avoid excessive loss to parasitic inductances that are relatively large for low-energy banks comprising only several capacitors and switches. We describe the design, construction and operation of the PHELIX system and discuss application to liner-driven, magnetic flux compression experiments. For the latter, the ability of strong magnetic fields to deflect the proton beam may offer a novel technique for measurement of field distributions near perturbed surfaces.

  4. Compressive sensing for nuclear security.

    SciTech Connect

    Gestner, Brian Joseph

    2013-12-01

    Special nuclear material (SNM) detection has applications in nuclear material control, treaty verification, and national security. The neutron and gamma-ray radiation signature of SNMs can be indirectly observed in scintillator materials, which fluoresce when exposed to this radiation. A photomultiplier tube (PMT) coupled to the scintillator material is often used to convert this weak fluorescence to an electrical output signal. The fluorescence produced by a neutron interaction event differs from that of a gamma-ray interaction event, leading to a slightly different pulse in the PMT output signal. The ability to distinguish between these pulse types, i.e., pulse shape discrimination (PSD), has enabled applications such as neutron spectroscopy, neutron scatter cameras, and dual-mode neutron/gamma-ray imagers. In this research, we explore the use of compressive sensing to guide the development of novel mixed-signal hardware for PMT output signal acquisition. Effectively, we explore smart digitizers that extract sufficient information for PSD while requiring a considerably lower sample rate than conventional digitizers. Given that we determine the feasibility of realizing these designs in custom low-power analog integrated circuits, this research enables the incorporation of SNM detection into wireless sensor networks.
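
    As context for the pulse shape discrimination mentioned above, a conventional full-rate approach computes a charge-comparison (tail-to-total) ratio on every digitized pulse; this baseline is what the proposed compressive-sensing digitizers aim to undercut in sample rate. The window lengths and synthetic pulse shapes below are illustrative assumptions.

```python
# Conventional charge-comparison pulse shape discrimination (PSD): the
# tail-to-total integral ratio separates neutron-like pulses (slower decay)
# from gamma-like pulses. Window lengths and pulse shapes are illustrative.
import numpy as np

def tail_total_ratio(pulse, peak_idx, tail_start=20, total_len=200):
    """Ratio of tail charge to total charge after the pulse peak."""
    total = pulse[peak_idx:peak_idx + total_len].sum()
    tail = pulse[peak_idx + tail_start:peak_idx + total_len].sum()
    return tail / total

t = np.arange(200)
gamma_like = np.exp(-t / 10.0)                                     # fast-decaying pulse
neutron_like = 0.7 * np.exp(-t / 10.0) + 0.3 * np.exp(-t / 80.0)   # extra slow component

print(tail_total_ratio(gamma_like, 0), tail_total_ratio(neutron_like, 0))
```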

  5. Shock compression profiles in ceramics

    SciTech Connect

    Grady, D.E.; Moody, R.L.

    1996-03-01

    An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or by plates of a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al₂O₃, AlN, B₄C, SiC, Si₃N₄, TiB₂, WC and ZrO₂. This report compiles the VISAR wave profiles and experimental impact parameters within a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation on high-strength, brittle solids.

  6. The compression pathway of quartz

    SciTech Connect

    Thompson, Richard M.; Downs, Robert T.; Dera, Przemyslaw

    2011-11-07

    The structure of quartz over the temperature domain (298 K, 1078 K) and pressure domain (0 GPa, 20.25 GPa) is compared to the following three hypothetical quartz crystals: (1) Ideal α-quartz with perfectly regular tetrahedra and the same volume and Si-O-Si angle as its observed equivalent (ideal β-quartz has Si-O-Si angle fixed at 155.6°). (2) Model α-quartz with the same Si-O-Si angle and cell parameters as its observed equivalent, derived from ideal by altering the axial ratio. (3) BCC quartz with a perfectly body-centered cubic arrangement of oxygen anions and the same volume as its observed equivalent. Comparison of experimental data recorded in the literature for quartz with these hypothetical crystal structures shows that quartz becomes more ideal as temperature increases, more BCC as pressure increases, and that model quartz is a very good representation of observed quartz under all conditions. This is consistent with the hypothesis that quartz compresses through Si-O-Si angle-bending, which is resisted by anion-anion repulsion resulting in increasing distortion of the c/a axial ratio from ideal as temperature decreases and/or pressure increases.

  7. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
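
    The FL/CCSDS-123 approach is built around adaptive prediction across spectral bands followed by entropy coding of the residuals. The sketch below uses a much simpler fixed previous-band predictor purely to show why prediction helps; it is not the FL algorithm or CCSDS 123, and the synthetic data cube is an assumption.

```python
# Greatly simplified illustration of inter-band prediction for hyperspectral
# data: predict each band from the previous one and measure how much smaller
# the residuals compress. This is NOT the FL algorithm or CCSDS 123.
import numpy as np
import zlib

rng = np.random.default_rng(2)
bands, h, w = 16, 64, 64
scene = rng.normal(size=(h, w)).cumsum(axis=1)             # spatially smooth scene
cube = np.array([scene * (1.0 + 0.02 * b) + 0.05 * rng.normal(size=(h, w))
                 for b in range(bands)])                    # highly correlated bands
cube = np.round(cube * 100).astype(np.int16)

residuals = cube.copy()
residuals[1:] -= cube[:-1]                                  # previous-band prediction

print(len(zlib.compress(cube.tobytes())),
      len(zlib.compress(residuals.tobytes())))              # residuals typically compress smaller
```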

  8. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

    Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level of detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  9. Method of continuously producing compression molded coal

    SciTech Connect

    Yoshida, H.; Ishihara, N.; Kuwashima, S.

    1986-08-19

    This patent describes a method of producing a continuous cake of compression molded coal for chamber type coke ovens comprising steps of charging raw material coking coal into a molding box, and pressurizing the raw material coking coal with a pressing plate to obtain compression molded coal and to push the compression molded coal out of the molding box through an outlet. The improvement described here includes: the coking coal having a water content of more than 8.5% is charged into a chamber of the molding box at a side opposite the outlet, the pressing plate in the chamber is so advanced in the molding box to compression mold the coking coal into the preceding compression molded coal at a pressure no more than about 100 kg/cm² so that the molded coal has a bulk density of at least 1.0 wet ton/m³, and to push the molded coal in the molding box toward the outlet, the molded coal pressurized by the pressing plate partly remains in the molding box for supporting the coking coal freshly charged for the following cycle of the operation, and the freshly charged coal is pressed by the pressing plate so that the subsequent molded coal is combined with the preceding compression molded coal and the preceding molded coal is pushed out of the molding box whereby by repeating the serial steps of the operation, continuous cake of compression molded coal is produced.

  10. MAFCO: A Compression Tool for MAF Files

    PubMed Central

    Matos, Luís M. O.; Neves, António J. R.; Pratas, Diogo; Pinho, Armando J.

    2015-01-01

    In the last decade, the cost of genomic sequencing has been decreasing so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be efficiently stored, because storage cost is not decreasing as fast as the cost of sequencing. In order to overcome this problem, the most popular general-purpose compression tool, gzip, is usually used. However, these tools were not specifically designed to compress this kind of data, and often fall short when the intention is to reduce the data size as much as possible. There are several compression algorithms available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, containing alignments between entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from 34% to 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source-code and binaries for several operating systems are freely available for non-commercial use at: http://bioinformatics.ua.pt/software/mafco. PMID:25816229

  11. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide for streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  12. Compression of Space for Low Visibility Probes.

    PubMed

    Born, Sabine; Krüger, Hannah M; Zimmermann, Eckart; Cavanagh, Patrick

    2016-01-01

    Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as perisaccadic compression of space (Ross et al., 1997). More recently, we have demonstrated that brief probes are attracted towards a visual reference when followed by a mask, even in the absence of saccades (Zimmermann et al., 2014a). Here, we ask whether spatial compression depends on the transient disruptions of the visual input stream caused by either a mask or a saccade. Both of these degrade the probe visibility but we show that low probe visibility alone causes compression in the absence of any disruption. In a first experiment, we varied the regions of the screen covered by a transient mask, including areas where no stimulus was presented and a condition without masking. In all conditions, we adjusted probe contrast to make the probe equally hard to detect. Compression effects were found in all conditions. To obtain compression without a mask, the probe had to be presented at much lower contrasts than with masking. Comparing mislocalizations at different probe detection rates across masking, saccades and low contrast conditions without mask or saccade, Experiment 2 confirmed this observation and showed a strong influence of probe contrast on compression. Finally, in Experiment 3, we found that compression decreased as probe duration increased both for masks and saccades although here we did find some evidence that factors other than simply visibility as we measured it contribute to compression. Our experiments suggest that compression reflects how the visual system localizes weak targets in the context of highly visible stimuli.

  13. Low compression tennis balls and skill development.

    PubMed

    Hammond, John; Smith, Christina

    2006-01-01

    Coaching aims to improve player performance and coaches have a number of coaching methods and strategies they use to enhance this process. If new methods and ideas can be determined to improve player performance, they will change coaching practices and processes. This study investigated the effects of using low compression balls (LCBs) during coaching sessions with beginning tennis players. In order to assess the effectiveness of LCBs on skill learning, the study employed a quasi-experimental design supported by qualitative and descriptive data. Beginner tennis players took part in coaching sessions, one group using the LCBs while the other group used standard tennis balls. Both groups were administered a skills test at the beginning of a series of coaching sessions and again at the end. A statistical investigation of the difference between pre- and post-test results was carried out to determine the effect of LCBs on skill learning. Additional qualitative data was obtained through interviews, video capture and the use of performance analysis of typical coaching sessions for each group. The skill test results indicated no difference in skill learning when comparing beginners using the LCBs to those using the standard balls. Coaches reported that the LCBs appeared to have a positive effect on technique development, including aspects of technique that are related to improving the power of the shot. Additional benefits were that rallies went on longer and there was more opportunity for positive reinforcement. In order to provide a more conclusive answer to the effects of LCBs on skill learning and technique development, recommendations for future research were established, including a more controlled experimental environment and larger sample sizes across a longer period of time. Key points: LCBs may aid skill learning in tennis; qualitative indicators were positive; statistical evidence was not conclusive; further studies of larger groups are recommended.

  14. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
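
    A minimal sketch of the threshold-the-log-magnitude idea, using an ordinary FFT as a stand-in for the log Gabor transform described in the patent; the transform substitution, threshold, and reconstruction details are illustrative assumptions.

```python
# Sketch of compressing spectral data by keeping only log-magnitudes above a
# threshold plus their phases, then expanding. An ordinary FFT stands in for
# the patent's log Gabor transform, so this only illustrates the selection step.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 4096, endpoint=False)
signal = (np.sin(2 * np.pi * 60 * t)
          + 0.5 * np.sin(2 * np.pi * 170 * t)
          + 0.05 * rng.normal(size=t.size))

spec = np.fft.rfft(signal)
log_mag, phase = np.log(np.abs(spec) + 1e-12), np.angle(spec)

keep = log_mag > (log_mag.max() - 4.0)           # keep only strong components
kept_frac = keep.mean()

# "Transmit" only the kept log-magnitudes and phases, then expand.
recon_spec = np.zeros_like(spec)
recon_spec[keep] = np.exp(log_mag[keep]) * np.exp(1j * phase[keep])
recon = np.fft.irfft(recon_spec, n=t.size)

print(f"kept {kept_frac:.1%} of coefficients, "
      f"RMS error {np.sqrt(np.mean((recon - signal) ** 2)):.3f}")
```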

  15. Compression of a bundle of light rays.

    PubMed

    Marcuse, D

    1971-03-01

    The performance of ray compression devices is discussed on the basis of a phase space treatment using Liouville's theorem. It is concluded that the area in phase space of the input bundle of rays is determined solely by the required compression ratio and possible limitations on the maximum ray angle at the output of the device. The efficiency of tapers and lenses as ray compressors is approximately equal. For linear tapers and lenses the input angle of the useful rays must not exceed the compression ratio. The performance of linear tapers and lenses is compared to a particular ray compressor using a graded refractive index distribution.

  16. Calculation methods for compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.

    1976-01-01

    Calculation procedures for non-reacting compressible two- and three-dimensional turbulent boundary layers were reviewed. Integral, transformation, and correlation methods, as well as finite difference solutions of the complete boundary layer equations, are summarized. Alternative numerical solution procedures were examined, and both mean field and mean turbulence field closure models were considered. Physics and related calculation problems peculiar to compressible turbulent boundary layers are described. A catalog of available solution procedures of the finite difference, finite element, and method of weighted residuals genre is included. The influence of compressibility, low Reynolds number, wall blowing, and pressure gradient upon mean field closure constants is reported.

  17. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  18. Compressive response of Kevlar/epoxy composites

    SciTech Connect

    Yeh, J.R.; Teply, J.L.

    1988-03-01

    A mathematical model is developed from the principle of minimum potential energy to determine the longitudinal compressive response of unidirectional fiber composites. A theoretical study based on this model is conducted to assess the influence of local fiber misalignment and the nonlinear shear deformation of the matrix. Numerical results are compared with experiments to verify this study; it appears that the predicted compressive response coincides well with experimental results. It is also shown that the compressive strength of Kevlar/epoxy is dominated by local shear failure. 12 references.

  19. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  20. Modulation compression for short wavelength harmonic generation

    SciTech Connect

    Qiang, J.

    2010-01-11

    A laser modulator is used to seed free-electron lasers. In this paper, we propose a scheme to compress the initial laser modulation in the longitudinal phase space by using two opposite sign bunch compressors and two opposite sign energy chirpers. This scheme could potentially reduce the initial modulation wavelength by a factor of C and increase the energy modulation amplitude by a factor of C, where C is the compression factor of the first bunch compressor. Such a compressed energy modulation can be directly used to generate short wavelength current modulation with a large bunching factor.
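
    A worked illustration of the stated scaling follows; the seed wavelength of 800 nm and the compression factor C = 10 are assumed values for illustration and do not come from the abstract.

```latex
% Assumed numbers: 800 nm seed modulation, first-compressor factor C = 10.
\[
  \lambda_{\mathrm{mod}} \;\rightarrow\; \frac{\lambda_{\mathrm{mod}}}{C}
    = \frac{800~\mathrm{nm}}{10} = 80~\mathrm{nm},
  \qquad
  \Delta E \;\rightarrow\; C\,\Delta E = 10\,\Delta E .
\]
```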

  1. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
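
    A minimal sketch of the DCT-plus-quantization-matrix step described above; the block size, matrix values, and function names are illustrative assumptions, not the patented perceptual matrix.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, Q):
    """Quantize the 2-D DCT coefficients of an 8x8 image block with matrix Q.
    Larger entries in Q discard more of the (less visible) detail."""
    coeffs = dctn(block, norm="ortho")            # forward 2-D DCT
    quantized = np.round(coeffs / Q)              # coefficient-wise quantization
    reconstructed = idctn(quantized * Q, norm="ortho")
    return quantized, reconstructed

# Illustrative quantization matrix: coarser steps at higher spatial frequencies.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
Q = 16.0 + 4.0 * (u + v)

block = np.random.default_rng(0).uniform(0.0, 255.0, size=(8, 8))
q, rec = quantize_block(block, Q)
```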

  2. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Cadwallader, L.C.

    2005-05-15

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards, such as the wide variety of electrical power, pressurized air, and cooling water systems in use; crane and hoist loads; working at height; and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  3. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Lee C. Cadwallader

    2004-09-01

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards, such as the wide variety of electrical power, pressurized air, and cooling water systems in use; crane and hoist loads; working at height; and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  4. Evolution Of Nonlinear Waves in Compressing Plasma

    SciTech Connect

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size {Delta} during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches {Delta}. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  5. Compressible homogeneous shear: Simulation and modeling

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.

    1992-01-01

    Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.

  6. An efficient compression scheme for bitmap indices

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices, and B-tree indices. In addition, we also verified that the average query response time
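
    A toy sketch of the word-aligned idea: the bitmap is split into word-sized groups, and runs of identical all-zero or all-one groups are replaced by a single counted "fill" token. The exact WAH word layout and counter packing are simplified away here.

```python
def wah_encode(bits: str, word_size: int = 32):
    """Toy word-aligned-hybrid-style encoder: split the bitmap into
    (word_size - 1)-bit groups, emit a 'fill' token for runs of identical
    all-0 or all-1 groups and a 'literal' token otherwise. Real WAH packs
    these tokens into machine words; that packing is omitted here."""
    g = word_size - 1
    groups = [bits[i:i + g].ljust(g, "0") for i in range(0, len(bits), g)]
    out, i = [], 0
    while i < len(groups):
        if set(groups[i]) <= {"0"} or set(groups[i]) <= {"1"}:
            fill_bit, run = groups[i][0], 1
            while i + run < len(groups) and groups[i + run] == groups[i]:
                run += 1
            out.append(("fill", fill_bit, run))   # one token covers `run` groups
            i += run
        else:
            out.append(("literal", groups[i]))    # one token per mixed group
            i += 1
    return out

print(wah_encode("0" * 93 + "1011" + "1" * 31))
```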

  7. 3M Coban 2 Layer Compression Therapy: Intelligent Compression Dynamics to Suit Different Patient Needs

    PubMed Central

    Bernatchez, Stéphanie F.; Tucker, Joseph; Schnobrich, Ellen; Parks, Patrick J.

    2012-01-01

    Problem Chronic venous insufficiency can lead to recalcitrant leg ulcers. Compression has been shown to be effective in healing these ulcers, but most products are difficult to apply and uncomfortable for patients, leading to inconsistent/ineffective clinical application and poor compliance. In addition, compression presents risks for patients with an ankle-brachial pressure index (ABPI) <0.8 because of the possibility of further compromising the arterial circulation. The ABPI is the ratio of systolic leg blood pressure (taken at ankle) to systolic arm blood pressure (taken above elbow, at brachial artery). This is measured to assess a patient's lower extremity arterial perfusion before initiating compression therapy.1 Solution Using materials science, two-layer compression systems with controlled compression and a low profile were developed. These materials allow for a more consistent bandage application with better control of the applied compression, and their low profile is compatible with most footwear, increasing patient acceptance and compliance with therapy. The original 3M™ Coban™ 2 Layer Compression System is suited for patients with an ABPI ≥0.8; 3M™ Coban™ 2 Layer Lite Compression System can be used on patients with ABPI ≥0.5. New Technology Both compression systems are composed of two layers that combine to create an inelastic sleeve conforming to the limb contour to provide a consistent proper pressure profile to reduce edema. In addition, they slip significantly less than other compression products and improve patient daily living activities and physical symptoms. Indications for Use Both compression systems are indicated for patients with venous leg ulcers, lymphedema, and other conditions where compression therapy is appropriate. Caution As with any compression system, caution must be used when mixed venous and arterial disease is present to not induce any damage. These products are not indicated when the ABPI is <0.5. PMID:24527315

  8. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
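
    A minimal serial sketch of Tucker compression via truncated HOSVD; the distributed-memory data layout and performance machinery described above are not reproduced, and the tensor sizes and ranks are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of tensor T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M (shape (new, old)) along `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_compress(T, ranks):
    """Truncated HOSVD: a simple serial stand-in for Tucker compression."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)     # project onto leading subspaces
    return core, factors

def hosvd_reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = mode_multiply(T, U, mode)
    return T

rng = np.random.default_rng(0)
T = rng.normal(size=(20, 20, 20, 8, 16))          # toy 5-way tensor
core, facs = hosvd_compress(T, ranks=(10, 10, 10, 4, 8))
rel_err = np.linalg.norm(hosvd_reconstruct(core, facs) - T) / np.linalg.norm(T)
# Random data compresses poorly; the large ratios quoted above arise from
# smooth, correlated simulation fields.
```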

  9. Compressive fluorescence microscopy for biological and hyperspectral imaging.

    PubMed

    Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed Shams; Candes, Emmanuel; Dahan, Maxime

    2012-06-26

    The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices--especially in optics--have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells, and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibit extreme redundancy. Altogether, our results emphasize the interest of CS schemes for acquisition at a significantly reduced rate and point to some remaining challenges for CS fluorescence microscopy. PMID:22689950

  10. Predicting failure: acoustic emission of berlinite under compression.

    PubMed

    Nataf, Guillaume F; Castillo-Villa, Pedro O; Sellappan, Pathikumar; Kriven, Waltraud M; Vives, Eduard; Planes, Antoni; Salje, Ekhard K H

    2014-07-01

    Acoustic emission has been measured and statistical characteristics analyzed during the stress-induced collapse of porous berlinite, AlPO4, containing up to 50 vol% porosity. Stress collapse occurs in a series of individual events (avalanches), and each avalanche leads to a jerk in sample compression with corresponding acoustic emission (AE) signals. The distribution of AE avalanche energies can be approximately described by a power law p(E)dE ∝ E^(-ε)dE (ε ~ 1.8) over a large stress interval. We observed several collapse mechanisms whereby less porous minerals show the superposition of independent jerks, which were not related to the major collapse at the failure stress. In highly porous berlinite (40% and 50%) an increase of energy emission occurred near the failure point. In contrast, the less porous samples did not show such an increase in energy emission. Instead, in the near vicinity of the main failure point they showed a reduction in the energy exponent to ~ 1.4, which is consistent with the value reported for compressed porous systems displaying critical behavior. This suggests that a critical avalanche regime with a lack of precursor events occurs. In this case, all preceding large events were 'false alarms' and unrelated to the main failure event. Our results identify a method to use pico-seismicity detection of foreshocks to warn of mine collapse before the main failure (the collapse) occurs, which can be applied to highly porous materials only.
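
    One standard way to estimate such an energy exponent from acoustic-emission data is the continuous maximum-likelihood (Hill-type) estimator sketched below. This is a generic method, not the authors' analysis code, and the threshold e_min is an assumed parameter.

```python
import numpy as np

def powerlaw_exponent_mle(energies, e_min):
    """Maximum-likelihood estimate of eps in p(E) ~ E^(-eps) for E >= e_min,
    with the standard-error estimate (eps - 1) / sqrt(N)."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    eps = 1.0 + len(e) / np.sum(np.log(e / e_min))
    err = (eps - 1.0) / np.sqrt(len(e))
    return eps, err

rng = np.random.default_rng(4)
# Synthetic avalanche energies drawn from p(E) ~ E^-1.8 above e_min = 1.0,
# via inverse-CDF sampling: E = (1 - u)^(-1 / (eps - 1)).
u = rng.uniform(size=5000)
sample = (1.0 - u) ** (-1.0 / 0.8)
print(powerlaw_exponent_mle(sample, e_min=1.0))   # should recover ~1.8
```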

  11. Error-resilient pyramid vector quantization for image compression.

    PubMed

    Hung, A C; Tsern, E K; Meng, T H

    1998-01-01

    Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches the optimal entropy-coded scalar quantizer without the necessity of variable length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed with similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.
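
    A sketch of the basic PVQ quantization step, which maps a vector onto the integer points of an L1 "pyramid" carrying K unit pulses. The paper's actual contribution, the error-resilient index assignment for these lattice points, is not shown, and the rounding rule here is a simple largest-remainder heuristic rather than an exact nearest-point search.

```python
import numpy as np

def pvq_quantize(x, K):
    """Map x to a point of the pyramid codebook P(L, K): integer vectors y
    with sum(|y_i|) == K. Greedy largest-remainder rounding sketch."""
    s = np.sign(x)
    s[s == 0] = 1.0
    a = np.abs(x)
    if a.sum() == 0:
        y = np.zeros(len(x), dtype=int)
        y[0] = K
        return y
    scaled = K * a / a.sum()                 # non-negative, sums to K
    base = np.floor(scaled).astype(int)
    deficit = int(K - base.sum())            # pulses still to place
    order = np.argsort(scaled - base)[::-1]  # largest fractional parts first
    base[order[:deficit]] += 1
    return (s * base).astype(int)

x = np.random.default_rng(3).laplace(size=16)   # Laplacian-like transform data
y = pvq_quantize(x, K=10)
assert int(np.abs(y).sum()) == 10               # lies on the K = 10 pyramid
```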

  12. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications to the design of space science data systems, science applications, and data compression techniques. The sequential nature or data dependence between each of the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  13. A new data compression method and its application to cosmic shear analysis

    NASA Astrophysics Data System (ADS)

    Asgari, Marika; Schneider, Peter

    2015-06-01

    Context. Future large scale cosmological surveys will provide huge data sets whose analysis requires efficient data compression. In particular, the calculation of accurate covariances is extremely challenging with an increasing number of observables used in the statistical analysis. Aims: The primary aim of this paper is to introduce a formalism for achieving efficient data compression, based on a local expansion of the observables around a fiducial cosmological model. We specifically apply and test this approach for the case of cosmic shear statistics. In addition, we study how well band powers can be obtained from measuring shear correlation functions over a finite interval of separations. Methods: We demonstrate the performance of our approach, using a Fisher analysis on cosmic shear tomography described in terms of E-/B-mode separating statistics (COSEBIs). Results: We show that our data compression is highly effective in extracting essentially the full cosmological information from a strongly reduced number of observables. Specifically, the number of observables needed decreases by at least one order of magnitude relative to the COSEBIs, which already compress the data substantially compared to the shear two-point correlation functions. The efficiency appears to be affected only slightly if a highly inaccurate covariance is used for defining the compressed data vector, showing the robustness of the method. In addition, we show the strong limitations on the possibility of constructing top-hat filters in Fourier space, for which the real-space analog has a finite support, yielding strong bounds on the accuracy of band power estimates. Conclusions: We conclude that efficient data compression is achievable and that the number of compressed data points depends on the number of model parameters. Furthermore, a band convergence power spectrum inferred from a finite angular range cannot be accurately estimated. The error on an estimated band power is larger for a
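
    A minimal sketch of one common flavor of such parameter-oriented compression (a MOPED-like linear scheme): each model parameter yields one compressed statistic, formed from the model derivatives around the fiducial cosmology and the inverse covariance. The paper's own local-expansion formalism differs in detail, and all array shapes below are illustrative.

```python
import numpy as np

def linear_compress(data, cov, dmu_dtheta):
    """Compress an N-dimensional data vector to one number per parameter:
    t_a = (d mu / d theta_a)^T C^{-1} d, evaluated at the fiducial model."""
    cinv = np.linalg.inv(cov)
    return dmu_dtheta @ cinv @ data

rng = np.random.default_rng(1)
n_data, n_par = 50, 3
cov = np.eye(n_data)                      # toy covariance
dmu = rng.normal(size=(n_par, n_data))    # model derivatives at the fiducial point
d = rng.normal(size=n_data)               # toy data vector
t = linear_compress(d, cov, dmu)          # 50 observables -> 3 compressed numbers
```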

  14. Pulse self-compression to single-cycle pulse widths a few decades above the self-focusing threshold

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Zheltikov, A. M.

    2016-08-01

    We identify a physical scenario whereby optical-field waveforms with peak powers several decades above the critical power of self-focusing can self-compress to subcycle pulse widths. With beam breakup, intense hot spots, and optical damage of the material avoided within the pulse compression length by keeping this length shorter than the modulation-instability buildup length, the beam is shown to preserve its continuity at the point of subcycle pulse generation.

  15. Software For Tie-Point Registration Of SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice

    1995-01-01

    The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. For example, a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image can be registered, as long as the user can generate a binary image for the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.

  16. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.
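
    A compact sketch of the layout idea: flag bits are buffered separately during LZSS-style coding and appended after the token stream, so decompression reads literals and pointers with plain byte and word reads. Token sizes, window handling, and the trailing flag-count field below are simplifications, not the patented format.

```python
def lzss_flags_last(data: bytes, window: int = 255, min_match: int = 3) -> bytes:
    """Toy LZSS-style encoder that buffers flag bits separately and appends
    them after the compressed token stream."""
    flags, tokens, i = [], bytearray(), 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):          # naive match search
            k = 0
            while i + k < len(data) and j + k < i and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            flags.append(1)                             # pointer token follows
            tokens += bytes([best_off, min(best_len, 255)])
            i += min(best_len, 255)
        else:
            flags.append(0)                             # literal byte follows
            tokens.append(data[i])
            i += 1
    packed = bytearray()                                # pack flag bits into bytes
    for b in range(0, len(flags), 8):
        byte = 0
        for bit in flags[b:b + 8]:
            byte = (byte << 1) | bit
        packed.append(byte)
    # Token stream first, then the flag bytes, then a simple flag count.
    return bytes(tokens) + bytes(packed) + len(flags).to_bytes(4, "big")

blob = lzss_flags_last(b"abcabcabcabcxyz" * 4)
```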

  17. Interactive calculation procedures for mixed compression inlets

    NASA Technical Reports Server (NTRS)

    Reshotko, Eli

    1983-01-01

    The proper design of engine nacelle installations for supersonic aircraft depends on a sophisticated understanding of the interactions between the boundary layers and the bounding external flows. The successful operation of mixed external-internal compression inlets depends significantly on the ability to closely control the operation of the internal compression portion of the inlet. This portion of the inlet is one where compression is achieved by multiple reflection of oblique shock waves and weak compression waves in a converging internal flow passage. However weak these shocks and waves may seem gas-dynamically, they are of sufficient strength to separate a laminar boundary layer and generally even strong enough for separation or incipient separation of the turbulent boundary layers. An understanding was developed of the viscous-inviscid interactions and of the shock wave boundary layer interactions and reflections.

  18. Efficient Quantum Information Processing via Quantum Compressions

    NASA Astrophysics Data System (ADS)

    Deng, Y.; Luo, M. X.; Ma, S. Y.

    2016-01-01

    Our purpose is to improve the quantum transmission efficiency and reduce the resource cost by quantum compressions. The lossless quantum compression is accomplished using invertible quantum transformations and applied to the quantum teleportation and the simultaneous transmission over quantum butterfly networks. New schemes can greatly reduce the entanglement cost, and partially solve transmission conflicts over common links. Moreover, the local compression scheme is useful for approximate entanglement creations from pre-shared entanglements. This special task has not been addressed because of the quantum no-cloning theorem. Our scheme depends on the local quantum compression and the bipartite entanglement transfer. Simulations show the success probability depends strongly on the minimal entanglement coefficient. These results may be useful in general quantum network communication.

  19. Compression asphyxia from a human pyramid.

    PubMed

    Tumram, Nilesh Keshav; Ambade, Vipul Namdeorao; Biyabani, Naushad

    2015-12-01

    In compression asphyxia, respiration is stopped by external forces on the body. It is usually due to an external force compressing the trunk, such as a heavy weight on the chest or abdomen, and is associated with internal injuries. In the present case, the victim was trapped and crushed under persons falling from a human pyramid formed for the "Dahi Handi" festival. There was neither any severe blunt force injury nor any significant pathological natural disease contributing to the cause of death. The victim was unable to remove himself from the situation because his cognitive responses and coordination were impaired due to alcohol intake. The victim died from asphyxia due to compression of his chest and abdomen. Compression asphyxia resulting from the collapse of a human pyramid and the dynamics of its impact force in these circumstances is very rare and, to the best of our knowledge, has not been reported previously.

  20. 3D MHD Simulations of Spheromak Compression

    NASA Astrophysics Data System (ADS)

    Stuber, James E.; Woodruff, Simon; O'Bryan, John; Romero-Talamas, Carlos A.; Darpa Spheromak Team

    2015-11-01

    The adiabatic compression of compact tori could lead to a compact and hence low cost fusion energy system. The critical scientific issues in spheromak compression relate both to confinement properties and to the stability of the configuration undergoing compression. We present results from the NIMROD code modified with the addition of magnetic field coils that allow us to examine the role of rotation on the stability and confinement of the spheromak (extending prior work for the FRC). We present results from a scan in initial rotation, from 0 to 100 km/s. We show that strong rotational shear (10 km/s over 1 cm) occurs. We compare the simulation results with analytic scaling relations for adiabatic compression. Work performed under DARPA grant N66001-14-1-4044.

  1. Super high compression of line drawing data

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.

    1976-01-01

    Models which can be used to accurately represent the type of line drawings which occur in teleconferencing and transmission for remote classrooms, and which permit considerable data compression, were described. The objective was to encode these pictures in binary sequences of shortest length but such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compressions in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed font symbols for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be realized by taking a similar approach, or essentially zero-error reproducibility can be achieved, but at a lower level of compression.

  2. Compression behavior of unidirectional fibrous composite

    NASA Technical Reports Server (NTRS)

    Sinclair, J. H.; Chamis, C. C.

    1982-01-01

    The longitudinal compression behavior of unidirectional fiber composites is investigated using a modified Celanese test method with thick and thin test specimens. The test data obtained are interpreted using the stress/strain curves from back-to-back strain gages, examination of fracture surfaces by scanning electron microscope, and predictive equations for distinct failure modes including fiber compression failure, Euler buckling, delamination, and flexure. The results show that the longitudinal compression fracture is induced by a combination of delamination, flexure, and fiber tier breaks. No distinct fracture surface characteristics can be associated with unique failure modes. An equation is described which can be used to extract the longitudinal compression strength knowing the longitudinal tensile and flexural strengths of the same composite system.

  3. Seneca Compressed Air Energy Storage (CAES) Project

    SciTech Connect

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  4. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.

  5. Real time telemetry and data compression

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The generation of telemetry and data compression by the flight program was verified. The adequacy of flight program telemetry control and timing is proven by the analysis of simulation laboratory runs of past programs through the use of telemetry. The telemetry data are correctly issued using specific tags called process input/output (PIO) tags. Verification is accomplished by ensuring that a specific PIO tag and mode register setting identifies the correct parameter and that the data are properly scaled for subsequent ground station reduction. It was checked that the LVDC telemetry correctly adheres to the general requirements specified by the EDD. Data compression specifications are verified using compressed data from nominal flight simulations and from a series of perturbations designed to test data table overflows, data dump rates, and compression of data for occurrence events.

  6. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.

  7. All about compression: A literature review.

    PubMed

    de Carvalho, Magali Rezende; de Andrade, Isabelle Silveira; de Abreu, Alcione Matos; Leite Ribeiro, Andrea Pinto; Peixoto, Bruno Utzeri; de Oliveira, Beatriz Guitton Renaud Baptista

    2016-06-01

    Lower extremity ulcers represent a significant public health problem as they frequently progress to chronicity, significantly impact daily activities and comfort, and represent a huge financial burden to the patient and the health system. The aim of this review was to discuss the best approach for venous leg ulcers (VLUs). Online searches were conducted in Ovid MEDLINE, Ovid EMBASE, EBSCO CINAHL, and reference lists and official guidelines. Keywords considered for this review were VLU, leg ulcer, varicose ulcer, compressive therapy, compression, and stocking. A complete assessment of the patient's overall health should be performed by a trained practitioner, focusing on history of diabetes mellitus, hypertension, dietetic habits, medications, and practice of physical exercises, followed by a thorough assessment of both legs. Compressive therapy is the gold standard treatment for VLUs, and the ankle-brachial index should be measured in all patients before compression application. PMID:27210451

  8. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, C.B.; Hackel, L.A.; George, E.V.; Miller, J.L.; Krupke, W.F.

    1993-11-09

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier 34 wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse 58 away from the SBS oscillator (44).

  9. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, Clifford B.; Hackel, Lloyd A.; George, Edward V.; Miller, John L.; Krupke, William F.

    1993-01-01

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier 34 wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse 58 away from the SBS oscillator (44).

  10. Unusual aetiology of malignant spinal cord compression.

    PubMed

    Boland, Jason; Rennick, Adrienne

    2013-06-01

    Malignant spinal cord compression (MSCC) is an oncological emergency requiring rapid diagnosis and treatment to prevent irreversible spinal cord injury and disability. A case is described of a 45-year-old male with renal cell carcinoma in which the presentation of the MSCC was atypical, with principally proximal left leg weakness and no evidence of bone metastasis. This was due to an unusual aetiology of the MSCC: the renal carcinoma had metastasised to his left psoas muscle, causing a lumbosacral plexopathy, and infiltrated through the intervertebral disc spaces, initially causing left lateral cauda equina and upper lumbar cord compression, before complete spinal cord compression. This case illustrates the varied aetiology of MSCC and reinforces the importance of maintaining a high index of suspicion of the possibility of spinal cord compression. PMID:24644568

  11. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results, including JBIG2.
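
    A sketch of what a row and column elimination step could look like for a binary textual image: all-background rows and columns are dropped and their positions recorded, so the reduced image plus two position lists are what go on to codebook and entropy coding. This is a plausible reading of the step named in the abstract, not the author's exact scheme.

```python
import numpy as np

def row_col_eliminate(img):
    """Drop all-background rows/columns of a binary image; return the
    reduced image together with the indices of the rows and columns kept."""
    keep_rows = np.where(img.any(axis=1))[0]     # rows containing any ink
    keep_cols = np.where(img.any(axis=0))[0]     # columns containing any ink
    reduced = img[np.ix_(keep_rows, keep_cols)]
    return reduced, keep_rows, keep_cols

page = np.zeros((64, 64), dtype=np.uint8)
page[10:14, 5:40] = 1                            # a toy "line of text"
page[30:34, 5:60] = 1
reduced, rows, cols = row_col_eliminate(page)    # 64x64 -> 8x55
```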

  12. SUPG Finite Element Simulations of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Kirk, Benjamin S.

    2006-01-01

    Streamline-Upwind Petrov-Galerkin (SUPG) finite element simulations of compressible flows are presented. The topics include: 1) Introduction; 2) SUPG Galerkin Finite Element Methods; 3) Applications; and 4) Bibliography.

  13. Relativistic laser pulse compression in magnetized plasmas

    SciTech Connect

    Liang, Yun; Sang, Hai-Bo Wan, Feng; Lv, Chong; Xie, Bai-Song

    2015-07-15

    The self-compression of a weakly relativistic Gaussian laser pulse propagating in a magnetized plasma is investigated. The nonlinear Schrödinger equation, which describes the evolution of the laser pulse amplitude, is deduced and solved numerically. Pulse compression is observed for both left- and right-hand circularly polarized lasers. It is found that the compression velocity increases for left-hand circularly polarized laser fields and decreases for right-hand ones, an effect that is reinforced as the external magnetic field is strengthened. We find that a 100 fs left-hand circularly polarized laser pulse is compressed by more than ten times in a magnetized (1757 T) plasma medium. The results in this paper indicate the possibility of generating particularly intense and short pulses.

  14. Fingerprint Compression Based on Sparse Representation.

    PubMed

    Shao, Guangqi; Wu, Yanping; A, Yong; Liu, Xiao; Guo, Tiande

    2014-02-01

    A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new given fingerprint image, we represent its patches according to the dictionary by computing an l0-minimization and then quantize and encode the representation. In this paper, we consider the effect of various factors on compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust with respect to minutiae extraction.
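
    A sketch of the sparse-coding step, using greedy orthogonal matching pursuit as a practical stand-in for the l0-minimization. The dictionary size, patch size, and sparsity level are illustrative, and the quantization and entropy-coding stages are omitted.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate the k-sparse code of
    patch y over dictionary D (columns = unit-norm atoms)."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected atoms, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(144, 512))          # e.g. 12x12 patches, 512 learned atoms
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
patch = rng.normal(size=144)
code = omp(D, patch, k=8)                # sparse representation: 8 of 512 atoms
```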

  15. Principles of Digital Dynamic-Range Compression

    PubMed Central

    Kates, James M.

    2005-01-01

    This article provides an overview of dynamic-range compression in digital hearing aids. Digital technology is becoming increasingly common in hearing aids, particularly because of the processing flexibility it offers and the opportunity to create more-effective devices. The focus of the paper is on the algorithms used to build digital compression systems. Of the various approaches that can be used to design a digital hearing aid, this paper considers broadband compression, multi-channel filter banks, a frequency-domain compressor using the FFT, the side-branch design that separates the filtering operation from the frequency analysis, and the frequency-warped version of the side-branch approach that modifies the analysis frequency spacing to more closely match auditory perception. Examples of the compressor frequency resolution, group delay, and compression behavior are provided for the different design approaches. PMID:16012704
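
    A minimal sketch of the simplest design the article lists, a broadband (single-channel) compressor built from an envelope follower and a static gain curve. The threshold, ratio, and time constants are illustrative values, not figures from the article.

```python
import numpy as np

def broadband_compressor(x, fs, threshold_db=-40.0, ratio=3.0,
                         attack_ms=5.0, release_ms=50.0):
    """Single-channel dynamic-range compressor: envelope follower with fast
    attack / slow release, followed by a static compression gain curve."""
    att = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    env, out = 1e-9, np.empty_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        coeff = att if level > env else rel       # attack when rising, release when falling
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(env + 1e-12)
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)     # static compression curve above threshold
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out

fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t) * np.linspace(0.01, 1.0, fs)  # rising level
y = broadband_compressor(sig, fs)
```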

  16. Pulse power applications of flux compression generators

    SciTech Connect

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.

    1981-01-01

    Characteristics are presented for two different types of explosive driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources.

  17. Musical beauty and information compression: Complex to the ear but simple to the mind?

    PubMed Central

    2011-01-01

    Background The biological origin of music, its universal appeal across human cultures and the cause of its beauty remain mysteries. For example, why is Ludwig Van Beethoven considered a musical genius but Kylie Minogue is not? Possible answers to these questions will be framed in the context of Information Theory. Presentation of the Hypothesis The entire life-long sensory data stream of a human is enormous. The adaptive solution to this problem of scale is information compression, thought to have evolved to better handle, interpret and store sensory data. In modern humans highly sophisticated information compression is clearly manifest in philosophical, mathematical and scientific insights. For example, the Laws of Physics explain apparently complex observations with simple rules. Deep cognitive insights are reported as intrinsically satisfying, implying that at some point in evolution, the practice of successful information compression became linked to the physiological reward system. I hypothesise that the establishment of this "compression and pleasure" connection paved the way for musical appreciation, which subsequently became free (perhaps even inevitable) to emerge once audio compression had become intrinsically pleasurable in its own right. Testing the Hypothesis For a range of compositions, empirically determine the relationship between the listener's pleasure and "lossless" audio compression. I hypothesise that enduring musical masterpieces will possess an interesting objective property: despite apparent complexity, they will also exhibit high compressibility. Implications of the Hypothesis Artistic masterpieces and deep Scientific insights share the common process of data compression. Musical appreciation is a parasite on a much deeper information processing capacity. The coalescence of mathematical and musical talent in exceptional individuals has a parsimonious explanation. Musical geniuses are skilled in composing music that appears highly complex to

  18. Nickel Curie Point Engine

    ERIC Educational Resources Information Center

    Chiaverina, Chris; Lisensky, George

    2014-01-01

    Ferromagnetic materials such as nickel, iron, or cobalt lose the electron alignment that makes them attracted to a magnet when sufficient thermal energy is added. The temperature at which this change occurs is called the "Curie temperature," or "Curie point." Nickel has a Curie point of 627 K, so a candle flame is a sufficient…

  19. Model Breaking Points Conceptualized

    ERIC Educational Resources Information Center

    Vig, Rozy; Murray, Eileen; Star, Jon R.

    2014-01-01

    Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…

  20. A study of compressibility and compactibility of directly compressible tableting materials containing tramadol hydrochloride.

    PubMed

    Mužíková, Jitka; Kubíčková, Alena

    2016-09-01

    The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50 % concentrations, the lubricant being magnesium stearate in a 1 % concentration. Compressibility is evaluated by means of the energy profile of the compression process, and compactibility by the tensile strength of tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders.

  1. Progressive compression versus graduated compression for the management of venous insufficiency.

    PubMed

    Shepherd, Jan

    2016-09-01

    Venous leg ulceration (VLU) is a chronic condition associated with chronic venous insufficiency (CVI), where the most frequent complication is recurrence of ulceration after healing. Traditionally, graduated compression therapy has been shown to increase healing rates and also to reduce recurrence of VLU. Graduated compression occurs because the circumference of the limb is narrower at the ankle, thereby producing a higher pressure than at the calf, which is wider, creating a lower pressure. This phenomenon is explained by the principle known as Laplace's Law. Recently, the view that compression therapy must provide a graduated pressure gradient has been challenged. However, few studies so far have focused on the potential benefits of progressive compression where the pressure profile is inverted. This article will examine the contemporary concept that progressive compression may be as effective as traditional graduated compression therapy for the management of CVI. PMID:27594309
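
    A small worked illustration of the Laplace's Law argument above, in its simplest form; the radii are assumed for illustration, and clinical sub-bandage pressure formulas also include bandage tension, width, and layer count.

```latex
% Simplified Laplace relation for sub-bandage pressure (illustrative radii):
\[
  P \propto \frac{T}{r}, \qquad
  \frac{P_{\text{ankle}}}{P_{\text{calf}}}
    = \frac{r_{\text{calf}}}{r_{\text{ankle}}}
    = \frac{6~\text{cm}}{4~\text{cm}} = 1.5 .
\]
% For equal bandage tension T, the narrower ankle receives roughly 50% more
% pressure than the calf, which is the origin of the graduated profile.
```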

  2. Current density compression of intense ion beams

    NASA Astrophysics Data System (ADS)

    Sefkow, Adam Bennett

    Current density compression of intense ion beams in space and time is required for heavy ion fusion, in order to achieve the necessary intensities to implode an inertial confinement fusion target. Longitudinal compression to high current in a short pulse is achieved by imposing a velocity tilt upon the space-charge-dominated charge bunch, and a variety of means exist for simultaneous transverse focusing to a coincident focal plane. Compression to the desired levels requires sufficient neutralization of the beam by a pre-formed plasma during final transport. The physics of current density compression is studied in scaled experiments relevant for the operating regime of a heavy ion driver, and related theory and advanced particle-in-cell simulations provide valuable insight into the physical and technological limitations involved. A fast Faraday cup measures longitudinal compression ratios greater than 50 with pulse durations less than 5 ns, in excellent agreement with reduced models and sophisticated simulations, which account for many experimental parameters and effects. The detailed physics of achieving current density compression in the laboratory is reviewed. Quantitative examples explore the dependency of longitudinal compression on effects such as the finite-size acceleration gap, voltage waveform accuracy, variation in initial beam temperature, pulse length, intended fractional velocity tilt, and energy uncertainty, as well as aberration within focusing elements and plasma neutralization processes. In addition, plasma evolution in experimental sources responsible for the degree of beam neutralization is studied numerically, since compression stagnation occurs under inadequate neutralization conditions, which may excite nonlinear collective excitations due to beam-plasma interactions. The design of simultaneous focusing experiments using both existing and upgraded hardware is provided, and parametric variations important for compression physics are

  3. Ultralight and highly compressible graphene aerogels.

    PubMed

    Hu, Han; Zhao, Zongbin; Wan, Wubo; Gogotsi, Yury; Qiu, Jieshan

    2013-04-18

    Chemically converted graphene aerogels with ultralight density and high compressibility are prepared by diamine-mediated functionalization and assembly, followed by microwave irradiation. The resulting graphene aerogels with density as low as 3 mg cm(-3) show excellent resilience and can completely recover after more than 90% compression. The ultralight graphene aerogels possessing high elasticity are promising as compliant and energy-absorbing materials. PMID:23418081

  4. Nonlinear compressions in merging plasma jets

    SciTech Connect

    Messer, S.; Case, A.; Wu, L.; Brockington, S.; Witherspoon, F. D.

    2013-03-15

    We investigate the dynamics of merging supersonic plasma jets using an analytic model. The merging structures exhibit supersonic, nonlinear compressions which may steepen into full shocks. We estimate the distance necessary to form such shocks and the resulting jump conditions. These theoretical models are compared to experimental observations and simulated dynamics. We also use those models to extrapolate behavior of the jet-merging compressions in a Plasma Jet Magneto-Inertial Fusion reactor.

  5. The temporal scaling laws of compressible turbulence

    NASA Astrophysics Data System (ADS)

    Sun, Bohua

    2016-08-01

    This paper proposes temporal scaling laws of the density-weighted energy spectrum for compressible turbulence in terms of dissipation rate, frequency, and the Mach number. The study adopts the incomplete similarity theory in the scaling analysis of compressible turbulence motion. The investigation shows that the temporal Eulerian and Lagrangian energy spectra approach the -5/3 and -2 power laws when the Mach number M tends to unity and infinity, respectively.
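
    For reference, the limiting forms being approached can be written as below; these are the standard Eulerian (sweeping) and Lagrangian temporal scalings in terms of the dissipation rate, frequency, and a sweeping velocity U, and the paper's density-weighted prefactors are not reproduced.

```latex
\[
  E_{\mathrm{E}}(\omega) \sim (\varepsilon U)^{2/3}\,\omega^{-5/3},
  \qquad
  E_{\mathrm{L}}(\omega) \sim \varepsilon\,\omega^{-2}.
\]
```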

  6. Lossy compression of weak lensing data

    SciTech Connect

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M.

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
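
    The square-root idea can be sketched as follows: because photon noise on a pixel with N counts scales as sqrt(N), quantizing a square-root transform of the pixel keeps the quantization error a fixed fraction of the Poisson error at every brightness. This is a generic variance-stabilizing quantization in the spirit of, but not identical to, the Bernstein et al. algorithm; the gain and step parameters are assumptions.

```python
import numpy as np

def sqrt_compress(counts, gain=1.0, q=0.5):
    """Quantize 2*sqrt(counts)/q so the rounding error stays a fraction q of
    the per-pixel Poisson noise, independent of brightness."""
    electrons = np.clip(counts * gain, 0.0, None)
    return np.round(2.0 * np.sqrt(electrons) / q).astype(np.int32)

def sqrt_decompress(codes, gain=1.0, q=0.5):
    root = codes * q / 2.0
    return (root ** 2) / gain

pixels = np.random.default_rng(2).poisson(200.0, size=(4, 4)).astype(float)
codes = sqrt_compress(pixels)            # far fewer distinct levels than 16-bit raw
recon = sqrt_decompress(codes)
```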

  7. Prechamber Compression-Ignition Engine Performance

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  8. An image-data-compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1981-01-01

    Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following the satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into a concise format for transmission to the ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing the image. A similar technique could be applied to other types of recorded data to cut the costs of transmitting, storing, distributing, and interpreting complex information.

  9. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M.

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.

  10. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  11. Method and apparatus for signal compression

    DOEpatents

    Carangelo, Robert M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original.
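
    The patent summary does not spell out the transfer function beyond the cubic feedback term, so the sketch below assumes a generic monotone characteristic x = y + A*y^3, with A an arbitrary illustrative constant: the analog stage effectively solves this cubic to compress, and digital decompression simply re-applies the cubic to emulate the original.

        import numpy as np

        A = 0.05  # strength of the cubic term (illustrative value, not from the patent)

        def compress(x):
            """Return y such that x = y + A*y**3 for each sample of a 1-D signal;
            large inputs grow only like x**(1/3) after compression."""
            y = np.empty(len(x), dtype=float)
            for i, xi in enumerate(x):
                roots = np.roots([A, 0.0, 1.0, -xi])              # A*y^3 + y - xi = 0
                y[i] = roots[np.argmin(np.abs(roots.imag))].real  # the single real root
            return y

        def decompress(y):
            """Digital decompression: re-apply the cubic to emulate the original."""
            return y + A * np.asarray(y) ** 3

        x = np.linspace(-10.0, 10.0, 11)
        print(np.allclose(decompress(compress(x)), x))            # True up to round-off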

  12. Mediastinal paraganglioma causing spinal cord compression.

    PubMed Central

    Reyes, M G; Fresco, R; Bruetman, M E

    1977-01-01

    An invasive paraganglioma of the posterior mediastinum caused spinal cord compression in a 31-year-old woman. Electron microscopic examination of the paraganglioma invading the epidural space revealed numerous dense-cored granules in the cytoplasm of the tumour cells. We are reporting this case to present the ultrastructure of mediastinal paraganglioma, and to call attention to an unusual cause of spinal cord compression. PMID:886352

  13. Digital breast tomosynthesis with minimal breast compression

    NASA Astrophysics Data System (ADS)

    Scaduto, David A.; Yang, Min; Ripton-Snyder, Jennifer; Fisher, Paul R.; Zhao, Wei

    2015-03-01

    Breast compression is utilized in mammography to improve image quality and reduce radiation dose. Lesion conspicuity is improved by reducing scatter effects on contrast and by reducing the superposition of tissue structures. However, patient discomfort due to breast compression has been cited as a potential cause of noncompliance with recommended screening practices. Further, compression may also occlude blood flow in the breast, complicating imaging with intravenous contrast agents and preventing accurate quantification of contrast enhancement and kinetics. Previous studies have investigated reducing breast compression in planar mammography and digital breast tomosynthesis (DBT), though this typically comes at the expense of degradation in image quality or increase in mean glandular dose (MGD). We propose to optimize the image acquisition technique for reduced compression in DBT without compromising image quality or increasing MGD. A zero-frequency signal-difference-to-noise ratio model is employed to investigate the relationship between tube potential, SDNR and MGD. Phantom and patient images are acquired on a prototype DBT system using the optimized imaging parameters and are assessed for image quality and lesion conspicuity. A preliminary assessment of patient motion during DBT with minimal compression is presented.

  14. Normal and Time-Compressed Speech

    PubMed Central

    Lemke, Ulrike; Kollmeier, Birger; Holube, Inga

    2016-01-01

    Short-term and long-term learning effects were investigated for the German Oldenburg sentence test (OLSA) using original and time-compressed fast speech in noise. Normal-hearing and hearing-impaired participants completed six lists of the OLSA in five sessions. Two groups of normal-hearing listeners (24 and 12 listeners) and two groups of hearing-impaired listeners (9 listeners each) performed the test with original or time-compressed speech. In general, original speech resulted in better speech recognition thresholds than time-compressed speech. Thresholds decreased with repetition for both speech materials. Confirming earlier results, the largest improvements were observed within the first measurements of the first session, indicating a rapid initial adaptation phase. The improvements were larger for time-compressed than for original speech. The novel results on long-term learning effects when using the OLSA indicate a longer phase of ongoing learning, especially for time-compressed speech, which seems to be limited by a floor effect. In addition, for normal-hearing participants, no complete transfer of learning benefits from time-compressed to original speech was observed. These effects should be borne in mind when inviting listeners repeatedly, for example, in research settings.

  15. Mining, compressing and classifying with extensible motifs

    PubMed Central

    Apostolico, Alberto; Comin, Matteo; Parida, Laxmi

    2006-01-01

    Background Motif patterns of maximal saturation emerged originally in contexts of pattern discovery in biomolecular sequences and have recently proven a valuable notion also in the design of data compression schemes. Informally, a motif is a string of intermittently solid and wild characters that recurs more or less frequently in an input sequence or family of sequences. Motif discovery techniques and tools tend to be computationally imposing; however, special classes of "rigid" motifs have been identified whose discovery is affordable in low polynomial time. Results In the present work, "extensible" motifs are considered such that each sequence of gaps comes endowed with some elasticity, whereby the same pattern may be stretched to fit segments of the source that match all the solid characters but are otherwise of different lengths. A few applications of this notion are then described. In applications of data compression by textual substitution, extensible motifs are seen to bring savings on the size of the codebook, and hence to improve compression. In germane contexts, in which compressibility is used in its dual role as a basis for structural inference and classification, extensible motifs are seen to support unsupervised classification and phylogeny reconstruction. Conclusion Off-line compression based on extensible motifs can be used advantageously to compress and classify biological sequences. PMID:16722593
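
    As a toy illustration of the extensibility itself (the paper's discovery algorithm and the compression codebook are not reproduced), an extensible motif of solid characters separated by elastic gaps can be matched by turning each gap into a bounded wildcard run:

        import re

        def extensible_motif_to_regex(solid_parts, min_gap=1, max_gap=4):
            """Build a regex for an extensible motif: solid strings separated by
            gaps whose length may stretch between min_gap and max_gap characters."""
            gap = ".{%d,%d}" % (min_gap, max_gap)
            return gap.join(re.escape(part) for part in solid_parts)

        pattern = extensible_motif_to_regex(["TATA", "GC", "AAT"])
        sequence = "CCTATAGGGCTTAATCCTATAGGCGTTTAATCC"
        print(pattern)                                   # TATA.{1,4}GC.{1,4}AAT
        print([m.start() for m in re.finditer(pattern, sequence)])
        # the same motif matches two segments with different gap lengths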

  16. Magnetized Plasma Compression for Fusion Energy

    NASA Astrophysics Data System (ADS)

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David

    2013-10-01

    Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to un-magnetized inertial fusion (IFE)--to microseconds, centimeters vs nanoseconds, sub-millimeter. MPC greatly reduces the required confinement time relative to MFE--to microseconds vs minutes. Proof of principle can be demonstrated or refuted using high current pulsed power driven compression of magnetized plasmas using magnetic pressure driven implosions of metal shells, known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for un-magnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed power driver, if feasible, would require engineering development for transient, rapidly replaceable transmission lines such as envisioned by Sandia National Laboratories. Supported by DOE-OFES.

  17. Aging and compressibility of municipal solid wastes.

    PubMed

    Chen, Y M; Zhan, Tony L T; Wei, H Y; Ke, H

    2009-01-01

    The expansion of a municipal solid waste (MSW) landfill requires the ability to predict settlement behavior of the existing landfill. The practice of using a single compressibility value when performing a settlement analysis may lead to inaccurate predictions. This paper gives consideration to changes in the mechanical compressibility of MSW as a function of the fill age of MSW as well as the embedding depth of MSW. Borehole samples representative of various fill ages were obtained from five boreholes drilled to the bottom of the Qizhishan landfill in Suzhou, China. Thirty-one borehole samples were used to perform confined compression tests. Waste composition and volume-mass properties (i.e., unit weight, void ratio, and water content) were measured on all the samples. The test results showed that the compressible components of the MSW (i.e., organics, plastics, paper, wood and textiles) decreased with an increase in the fill age. The in situ void ratio of the MSW was shown to decrease with depth into the landfill. The compression index, Cc, was observed to decrease from 1.0 to 0.3 with depth into the landfill. Settlement analyses were performed on the existing landfill, demonstrating that the variation of MSW compressibility with fill age or depth should be taken into account in the settlement prediction. PMID:18430560
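
    For orientation, the compression index Cc reported above enters the standard one-dimensional primary settlement estimate (a generic textbook formula, not necessarily the exact analysis used in the paper), summed over layers of thickness H_i with initial void ratio e_{0,i} and effective vertical stress increasing from \sigma'_{v0} to \sigma'_{v0} + \Delta\sigma'_v:

        S = \sum_i \frac{C_{c,i}}{1 + e_{0,i}} \, H_i \, \log_{10}\!\left( \frac{\sigma'_{v0,i} + \Delta\sigma'_{v,i}}{\sigma'_{v0,i}} \right)

    Because Cc was observed to fall from 1.0 near the surface to 0.3 at depth, using a single Cc for the whole waste column can substantially misestimate the total settlement.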

  18. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  19. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.
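
    For orientation, the quantities involved are linked by standard continuum relations (given here in generic form; the paper's full reconstruction, which requires third-order derivatives of the wave field, is not reproduced): the volumetric strain is the divergence of the displacement field, and the P-wave modulus combines the bulk and shear moduli,

        \epsilon_v = \nabla \cdot \mathbf{u}, \qquad M = K + \tfrac{4}{3}\mu ,

    so gas-filled cavities that soften the effective bulk modulus K also lower M, which is what the volumetric-strain measurement is sensitive to.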

  20. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  1. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  2. Compressing bitmap indexes for faster search operations

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
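
    A minimal sketch of WAH encoding for 32-bit words is given below (decoding, the surrounding index structure, and the in-place logical operations are omitted): the bitmap is cut into 31-bit groups, runs of identical all-zero or all-one groups collapse into a single fill word, and everything else becomes a literal word.

        def wah_encode(bits):
            """Encode a sequence of 0/1 bits into 32-bit WAH words.

            Literal word: MSB = 0, low 31 bits hold one group verbatim.
            Fill word   : MSB = 1, next bit is the fill value, low 30 bits count
                          how many consecutive 31-bit groups the fill covers.
            """
            pad = (-len(bits)) % 31                       # pad to a multiple of 31 bits
            bits = list(bits) + [0] * pad
            groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]

            words, i = [], 0
            while i < len(groups):
                g = groups[i]
                if all(b == 0 for b in g) or all(b == 1 for b in g):
                    run = 1
                    while i + run < len(groups) and groups[i + run] == g:
                        run += 1
                    words.append((1 << 31) | (g[0] << 30) | run)   # fill word
                    i += run
                else:
                    value = 0
                    for b in g:
                        value = (value << 1) | b
                    words.append(value)                            # literal word
                    i += 1
            return words

        # A sparse bitmap: one literal word plus one fill word covering 99 groups.
        bitmap = [0] * (31 * 100)
        bitmap[5] = 1
        print([hex(w) for w in wah_encode(bitmap)])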

  3. SPH simulation of high density hydrogen compression

    NASA Astrophysics Data System (ADS)

    Ferrel, R.; Romero, V.

    1998-07-01

    The density dependence of the electronic energy band gap of hydrogen has been studied with respect to the insulator-metal (IM) transition. The valence-conduction band gap of solid hydrogen is about 15 eV at zero pressure; therefore very high pressures are required to close the gap and achieve metallization. We propose to investigate the degree to which shockless compression of hydrogen can be maintained at low temperature (close to that of a cold isentrope) and to verify whether it is possible to achieve metallization. Multistage compression will be driven by energetic materials in a cylindrical implosion system, in which we expect a slow compression rate that will maintain the low temperature in the isentropic compression. It is hoped that pressures on the order of 100 Mbar can be achieved while maintaining low temperatures. In order to better understand this multistage compression, a smooth particle hydrodynamics (SPH) analysis has been performed. Since the SPH technique does not use a grid structure, it is well suited to analyzing spatial deformation processes. This analysis will be used to improve the design of possible multistage compression devices.

  4. SPH Simulation of High Density Hydrogen Compression

    NASA Astrophysics Data System (ADS)

    Ferrel, R.; Romero, Van D.

    1997-07-01

    The density dependence of the electronic energy band gap of hydrogen has been studied with respect to the insulator-metal (IM) transition. The valence-conduction band gap of solid hydrogen is about 15 eV at zero pressure; therefore very high pressures are required to close the gap and achieve metallization. We are planning to investigate the degree to which shockless compression of hydrogen can be maintained at low temperature (close to that of a cold isentrope) and to explore the possibility of achieving metallization. Multistage compression will be driven by energetic materials in a cylindrical implosion system, in which we expect a slow compression rate that will maintain the low temperature in the isentropic compression. It is hoped that pressures on the order of 100 Mbar can be achieved while maintaining low temperatures. In order to understand this multistage compression better, a smooth particle hydrodynamics (SPH) analysis has been performed. Since the SPH technique uses a gridless structure, it is well suited to analyzing spatial deformation processes. This paper presents the analysis, which will be used to improve the design of possible multistage compression devices.

  5. Static compression of porous dust aggregates

    NASA Astrophysics Data System (ADS)

    Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji

    2013-07-01

    Understanding the structural evolution of dust aggregates is a key issue in planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they become fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals (Okuzumi et al. 2012, ApJ, 752, 106). Thus, some other compression mechanism is required to form planetesimals. We investigate the static compression of highly porous aggregates. First, we derive the compressive strength by numerical N-body simulations (Kataoka et al. 2013, A&A, 554, 4). Then, we apply the strength to protoplanetary disks, supposing that the highly porous aggregates can be quasi-statically compressed by the ram pressure of the disk gas and by self-gravity. As a result, we find a pathway of dust structure evolution from dust grains via fluffy aggregates to compact planetesimals. Moreover, we find that the fluffy aggregates overcome the barriers to planetesimal formation, namely the radial drift, fragmentation, and bouncing barriers. (The paper is now available on arXiv: http://arxiv.org/abs/1307.7984 )

  6. A dedicated compression device for high resolution X-ray tomography of compressed gas diffusion layers

    SciTech Connect

    Tötzke, C.; Manke, I.; Banhart, J.; Gaiselmann, G.; Schmidt, V.; Bohner, J.; Müller, B. R.; Kupsch, A.; Hentschel, M. P.; Lehnert, W.

    2015-04-15

    We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are exemplarily evaluated for a GDL sample in the uncompressed state and for a compression of 30 vol.%. To mimic the geometry of the flow-field, we employed a compression punch with an integrated channel-rib-profile. It turned out that the GDL material is homogeneously compressed under the ribs but much less compressed underneath the channel. GDL fibers extend far into the channel volume, where they might interfere with the convective gas transport and the removal of liquid water from the cell.

  7. Compression by indexing: an improvement over MPEG-4 body animation parameter compression

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Siddhartha; Bhandarkar, Suchendra M.; Li, Kang

    2006-01-01

    Body Animation Parameters (BAPs) are used to animate MPEG-4 compliant virtual human-like characters. In order to stream BAPs in real time interactive environments, the BAPs are compressed for low bitrate representation using a standard MPEG-4 compression pipeline. However, the standard MPEG-4 compression is inefficient for streaming to power-constrained devices, since the streamed data requires extra power in terms of CPU cycles for decompression. In this paper, we have proposed and implemented an indexing technique for a BAP data stream, resulting in a compressed representation of the motion data. The resulting compressed representation of the BAPs is superior to MPEG-4-based BAP compression in terms of both required network throughput and power consumption at the client end to receive the compressed data stream and extract the original BAP data from the compressed representation. Although the resulting motion after decompression at the client end is lossy, the motion distortion is minimized by intelligent use of the hierarchical structure of the skeletal avatar model. Consequently, the proposed indexing method is ideal for streaming of motion data to power- and network-constrained devices such as PDAs, Pocket PCs and Laptop PCs operating in battery mode and other devices in a mobile network environment.

  8. Linear program relaxation of sparse nonnegative recovery in compressive sensing microarrays.

    PubMed

    Qin, Linxia; Xiu, Naihua; Kong, Lingchen; Li, Yu

    2012-01-01

    Compressive sensing microarrays (CSM) are DNA-based sensors that operate using group testing and compressive sensing principles. Mathematically, one can cast the CSM as sparse nonnegative recovery (SNR), which is to find the sparsest solution subject to an underdetermined system of linear equations and a nonnegativity restriction. In this paper, we discuss the l₁ relaxation of the SNR. By defining nonnegative restricted isometry/orthogonality constants, we give a nonnegative restricted property condition which guarantees that the SNR and the l₁ relaxation share the common unique solution. In addition, we show that any solution to the SNR must be one of the extreme points of the underlying feasible set. PMID:23251229
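
    In symbols, the SNR problem and its l₁ relaxation discussed here read

        \min_x \|x\|_0 \quad \text{s.t.}\quad Ax = b,\; x \ge 0
        \qquad\longrightarrow\qquad
        \min_x \|x\|_1 \quad \text{s.t.}\quad Ax = b,\; x \ge 0,

    and since \|x\|_1 = \mathbf{1}^{T} x on the nonnegative orthant, the relaxation is exactly the linear program \min \mathbf{1}^{T} x subject to Ax = b, x \ge 0, which is what the title refers to.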

  9. Beam dynamics analysis in pulse compression using electron beam compact simulator for Heavy Ion Fusion

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takashi; Horioka, Kazuhiko; Sasaki, Toru; Harada, Nob.

    2013-11-01

    In a final stage of an accelerator system for heavy ion inertial fusion (HIF), pulse shaping and beam current increase by bunch compression are required for effective pellet implosion. A compact simulator with an electron beam was constructed to understand the beam dynamics. In this study, we investigate theoretically and numerically the beam dynamics for the extreme bunch compression in the final stage of HIF accelerator complex. The theoretical and numerical results implied that the compact experimental device simulates the beam dynamics around the stagnation point for initial low temperature condition.

  10. The critical compressibility factor value: Associative fluids and liquid alkali metals

    SciTech Connect

    Kulinskii, V. L.

    2014-08-07

    We show how to obtain the critical compressibility factor Z_c for simple and associative Lennard-Jones fluids using the critical characteristics of the Ising model on different lattices. The results show that low values of the critical compressibility factor are correlated with the associative properties of fluids in the critical region and can be obtained on the basis of the results for the Ising model on lattices with more than one atom per cell. An explanation for the results on the critical point line of the Lennard-Jones fluids and liquid metals is proposed within the global isomorphism approach.
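
    For reference, the critical compressibility factor is built from the critical-point parameters as

        Z_c = \frac{P_c V_c}{N k_B T_c},

    which equals 3/8 for a van der Waals fluid and is close to 0.29 for simple fluids such as argon; markedly lower values are the signature of association referred to above.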

  11. Compression failure of angle-ply laminates

    NASA Technical Reports Server (NTRS)

    Peel, Larry D.; Hyer, Michael W.; Shuart, Mark J.

    1991-01-01

    The present work deals with modes and mechanisms of failure in compression of angle-ply laminates. Experimental results were obtained from 42 angle-ply IM7/8551-7a specimens with a lay-up of ((±θ)/(±θ))_6s, where θ, the off-axis angle, ranged from 0° to 90°. The results showed four failure modes, these modes being a function of off-axis angle. Failure modes include fiber compression, inplane transverse tension, inplane shear, and inplane transverse compression. Excessive interlaminar shear strain was also considered as an important mode of failure. At low off-axis angles, experimentally observed values were considerably lower than published strengths. It was determined that laminate imperfections in the form of layer waviness could be a major factor in reducing compression strength. Previously developed linear buckling and geometrically nonlinear theories were used, with modifications and enhancements, to examine the influence of layer waviness on compression response. The wavy layer is described by a wave amplitude and a wave length. Linear elastic stress-strain response is assumed. The geometrically nonlinear theory, in conjunction with the maximum stress failure criterion, was used to predict compression failure and failure modes for the angle-ply laminates. A range of wave lengths and amplitudes was used. It was found that for 0° ≤ θ ≤ 15° failure was most likely due to fiber compression. For 15° < θ ≤ 35°, failure was most likely due to inplane transverse tension. For 35° < θ ≤ 70°, failure was most likely due to inplane shear. For θ > 70°, failure was most likely due to inplane transverse compression. The fiber compression and transverse tension failure modes depended more heavily on wave length than on wave amplitude. Thus using a single

  12. Compression of echocardiographic scan line data using wavelet packet transform

    NASA Technical Reports Server (NTRS)

    Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.

    2001-01-01

    An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.
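
    A minimal sketch of the transform-and-threshold stage using PyWavelets is shown below (the modified SPIHT bit-plane coder itself is not reproduced; the wavelet, decomposition level, and keep fraction are illustrative assumptions):

        import numpy as np
        import pywt

        def wp_compress_scanline(scanline, wavelet="db4", level=4, keep=0.02):
            """Wavelet-packet transform a 1-D scan line and keep only the largest
            coefficients (a crude stand-in for SPIHT's bit-plane coding)."""
            wp = pywt.WaveletPacket(data=scanline, wavelet=wavelet,
                                    mode="symmetric", maxlevel=level)
            nodes = wp.get_level(level, order="natural")
            coeffs = np.concatenate([n.data for n in nodes])
            thresh = np.quantile(np.abs(coeffs), 1.0 - keep)   # keep ~2% of coefficients
            recon = pywt.WaveletPacket(data=None, wavelet=wavelet,
                                       mode="symmetric", maxlevel=level)
            for n in nodes:
                recon[n.path] = np.where(np.abs(n.data) >= thresh, n.data, 0.0)
            return recon.reconstruct(update=False)

        rng = np.random.default_rng(1)
        line = np.cumsum(rng.standard_normal(1024))            # smooth-ish test signal
        approx = wp_compress_scanline(line)[:1024]
        print(np.sqrt(np.mean((approx - line) ** 2)))          # reconstruction RMSE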

  13. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  14. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  15. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  16. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  17. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  18. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  19. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  20. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  1. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  2. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  3. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  4. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  5. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  6. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  7. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  8. Critical-point nuclei

    SciTech Connect

    Clark, R.M.

    2004-10-01

    It has been suggested that a change of nuclear shape may be described in terms of a phase transition and that specific nuclei may lie close to the critical point of the transition. Analytical descriptions of such critical-point nuclei have been introduced recently and they are described briefly. The results of extensive searches for possible examples of critical-point behavior are presented. Alternative pictures, such as describing bands in the candidate nuclei using simple {Delta}K = 0 and {Delta}K = 2 rotational-coupling models, are discussed, and the limitations of the different approaches highlighted. A possible critical-point description of the transition from a vibrational to rotational pairing phase is suggested.

  9. Arctic climate tipping points.

    PubMed

    Lenton, Timothy M

    2012-02-01

    There is widespread concern that anthropogenic global warming will trigger Arctic climate tipping points. The Arctic has a long history of natural, abrupt climate changes, which together with current observations and model projections, can help us to identify which parts of the Arctic climate system might pass future tipping points. Here the climate tipping points are defined, noting that not all of them involve bifurcations leading to irreversible change. Past abrupt climate changes in the Arctic are briefly reviewed. Then, the current behaviour of a range of Arctic systems is summarised. Looking ahead, a range of potential tipping phenomena are described. This leads to a revised and expanded list of potential Arctic climate tipping elements, whose likelihood is assessed, in terms of how much warming will be required to tip them. Finally, the available responses are considered, especially the prospects for avoiding Arctic climate tipping points.

  10. Triple Point Topological Metals

    NASA Astrophysics Data System (ADS)

    Zhu, Ziming; Winkler, Georg W.; Wu, QuanSheng; Li, Ju; Soluyanov, Alexey A.

    2016-07-01

    Topologically protected fermionic quasiparticles appear in metals, where band degeneracies occur at the Fermi level, dictated by the band structure topology. While in some metals these quasiparticles are direct analogues of elementary fermionic particles of the relativistic quantum field theory, other metals can have symmetries that give rise to quasiparticles, fundamentally different from those known in high-energy physics. Here, we report on a new type of topological quasiparticles—triple point fermions—realized in metals with symmorphic crystal structure, which host crossings of three bands in the vicinity of the Fermi level protected by point group symmetries. We find two topologically different types of triple point fermions, both distinct from any other topological quasiparticles reported to date. We provide examples of existing materials that host triple point fermions of both types and discuss a variety of physical phenomena associated with these quasiparticles, such as the occurrence of topological surface Fermi arcs, transport anomalies, and topological Lifshitz transitions.

  11. Genetic optical design for a compressive sensing task

    NASA Astrophysics Data System (ADS)

    Horisaki, Ryoichi; Niihara, Takahiro; Tanida, Jun

    2016-07-01

    We present a sophisticated optical design method for reducing the number of photodetectors for a specific sensing task. The chosen design parameter is the point spread function, and the selected task is object recognition. The point spread function is optimized iteratively with a genetic algorithm for object recognition based on a neural network. In the experimental demonstration, binary classification of face and non-face datasets was performed with a single measurement using two photodetectors. A spatial light modulator operating in the amplitude modulation mode was provided in the imaging optics and was used to modulate the point spread function. In each generation of the genetic algorithm, the classification accuracy with a pattern displayed on the spatial light modulator was fed back to the next generation to find better patterns. The proposed method increased the accuracy by about 30% compared with a conventional imaging system in which the point spread function was the delta function. This approach is practically useful for compressing the cost, size, and observation time of optical sensors in specific applications, and is robust to imperfections in optical elements.

  12. Genetic optical design for a compressive sensing task

    NASA Astrophysics Data System (ADS)

    Horisaki, Ryoichi; Niihara, Takahiro; Tanida, Jun

    2016-10-01

    We present a sophisticated optical design method for reducing the number of photodetectors for a specific sensing task. The chosen design parameter is the point spread function, and the selected task is object recognition. The point spread function is optimized iteratively with a genetic algorithm for object recognition based on a neural network. In the experimental demonstration, binary classification of face and non-face datasets was performed with a single measurement using two photodetectors. A spatial light modulator operating in the amplitude modulation mode was provided in the imaging optics and was used to modulate the point spread function. In each generation of the genetic algorithm, the classification accuracy with a pattern displayed on the spatial light modulator was fed back to the next generation to find better patterns. The proposed method increased the accuracy by about 30% compared with a conventional imaging system in which the point spread function was the delta function. This approach is practically useful for compressing the cost, size, and observation time of optical sensors in specific applications, and is robust to imperfections in optical elements.

  13. Unpredictable points and chaos

    NASA Astrophysics Data System (ADS)

    Akhmet, Marat; Fen, Mehmet Onur

    2016-11-01

    It is revealed that a special kind of Poisson stable point, which we call an unpredictable point, gives rise to the existence of chaos in the quasi-minimal set. The existing definitions of chaos are formulated in sets of motions. This is the first time in the literature that description of chaos is initiated from a single motion. The theoretical results are exemplified by means of the symbolic dynamics.

  14. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
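
    A compact sketch of the third approach (a 3-D transform as part of the usual transform-quantize-entropy-code chain), written with PyWavelets; the wavelet, decomposition level, and quantization step are illustrative assumptions, and the entropy-coding stage is left out:

        import numpy as np
        import pywt

        def compress_cube(cube, wavelet="db2", level=2, step=0.5):
            """3-D wavelet transform + uniform quantization of a hyperspectral cube
            (rows x cols x bands). Returns quantized integer coefficients plus the
            bookkeeping needed to invert the transform."""
            coeffs = pywt.wavedecn(cube, wavelet=wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            q = np.round(arr / step).astype(np.int32)    # entropy-code these in practice
            return q, slices

        def decompress_cube(q, slices, wavelet="db2", step=0.5):
            arr = q.astype(np.float64) * step
            coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
            return pywt.waverecn(coeffs, wavelet=wavelet)

        rng = np.random.default_rng(2)
        cube = rng.standard_normal((32, 32, 64)).cumsum(axis=2)   # spectrally smooth cube
        q, slices = compress_cube(cube)
        recon = decompress_cube(q, slices)[:32, :32, :64]
        print(np.sqrt(np.mean((recon - cube) ** 2)))              # quantization error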

  15. Clinical implication of latent myofascial trigger point.

    PubMed

    Celik, Derya; Mutlu, Ebru Kaya

    2013-08-01

    Myofascial trigger points (MTrPs) are hyperirritable points located within a taut band of skeletal muscle or fascia, which cause referred pain, local tenderness and autonomic changes when compressed. There are fundamental differences between the effects produced by the two basic types of MTrPs (active and latent). Active trigger points (ATrPs) usually produce referred pain and tenderness. In contrast, latent trigger points (LTrPs) are foci of hyperirritability in a taut band of muscle, which are clinically associated with a local twitch response, tenderness and/or referred pain upon manual examination. LTrPs may be found in many pain-free skeletal muscles and may be "activated" and converted to ATrPs by continuous detrimental stimuli. ATrPs can be inactivated by different treatment strategies; however, they never fully disappear but rather convert to the latent form. Therefore, the diagnosis and treatment of LTrPs is important. This review highlights the clinical implication of LTrPs.

  16. Reference Point Heterogeneity.

    PubMed

    Terzi, Ayse; Koedijk, Kees; Noussair, Charles N; Pownall, Rachel

    2016-01-01

    It is well-established that, when confronted with a decision to be taken under risk, individuals use reference payoff levels as important inputs. The purpose of this paper is to study which reference points characterize decisions in a setting in which there are several plausible reference levels of payoff. We report an experiment, in which we investigate which of four potential reference points: (1) a population average payoff level, (2) the announced expected payoff of peers in a similar decision situation, (3) a historical average level of earnings that others have received in the same task, and (4) an announced anticipated individual payoff level, best describes decisions in a decontextualized risky decision making task. We find heterogeneity among individuals in the reference points they employ. The population average payoff level is the modal reference point, followed by experimenter's stated expectation of a participant's individual earnings, followed in turn by the average earnings of other participants in previous sessions of the same experiment. A sizeable share of individuals show multiple reference points simultaneously. The reference point that best fits the choices of the individual is not affected by a shock to her income. PMID:27672374

  17. Reference Point Heterogeneity

    PubMed Central

    Terzi, Ayse; Koedijk, Kees; Noussair, Charles N.; Pownall, Rachel

    2016-01-01

    It is well-established that, when confronted with a decision to be taken under risk, individuals use reference payoff levels as important inputs. The purpose of this paper is to study which reference points characterize decisions in a setting in which there are several plausible reference levels of payoff. We report an experiment, in which we investigate which of four potential reference points: (1) a population average payoff level, (2) the announced expected payoff of peers in a similar decision situation, (3) a historical average level of earnings that others have received in the same task, and (4) an announced anticipated individual payoff level, best describes decisions in a decontextualized risky decision making task. We find heterogeneity among individuals in the reference points they employ. The population average payoff level is the modal reference point, followed by experimenter's stated expectation of a participant's individual earnings, followed in turn by the average earnings of other participants in previous sessions of the same experiment. A sizeable share of individuals show multiple reference points simultaneously. The reference point that best fits the choices of the individual is not affected by a shock to her income. PMID:27672374

  18. Human grasp point selection.

    PubMed

    Kleinholdermann, Urs; Franz, Volker H; Gegenfurtner, Karl R

    2013-07-25

    When we grasp an object, our visuomotor system has to solve an intricate problem: how to find the best out of an infinity of possible contact points of the fingers with the object? The contact point selection model (CoPS) we present here solves this problem and predicts human grasp point selection in precision grip grasping by combining a few basic rules that have been identified in human and robotic grasping. Usually, not all of the rules can be perfectly satisfied. Therefore, we assessed their relative importance by creating simple stimuli that put them into conflict with each other in pairs. Based on these conflict experiments we made model-based grasp point predictions for another experiment with a novel set of complexly shaped objects. The results show that our model predicts the human choice of grasp points very well, and that observers' preferences for their natural grasp angles are as important as physical stability constraints. Incorporating a human grasp point selection model like the one presented here could markedly improve current approaches to cortically guided arm and hand prostheses by making movements more natural while also allowing for a more efficient use of the available information.

  19. Reference Point Heterogeneity

    PubMed Central

    Terzi, Ayse; Koedijk, Kees; Noussair, Charles N.; Pownall, Rachel

    2016-01-01

    It is well-established that, when confronted with a decision to be taken under risk, individuals use reference payoff levels as important inputs. The purpose of this paper is to study which reference points characterize decisions in a setting in which there are several plausible reference levels of payoff. We report an experiment, in which we investigate which of four potential reference points: (1) a population average payoff level, (2) the announced expected payoff of peers in a similar decision situation, (3) a historical average level of earnings that others have received in the same task, and (4) an announced anticipated individual payoff level, best describes decisions in a decontextualized risky decision making task. We find heterogeneity among individuals in the reference points they employ. The population average payoff level is the modal reference point, followed by experimenter's stated expectation of a participant's individual earnings, followed in turn by the average earnings of other participants in previous sessions of the same experiment. A sizeable share of individuals show multiple reference points simultaneously. The reference point that best fits the choices of the individual is not affected by a shock to her income.

  20. Image encryption and compression based on kronecker compressed sensing and elementary cellular automata scrambling

    NASA Astrophysics Data System (ADS)

    Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You

    2016-10-01

    Because it performs encryption and compression in a single, simple step, compressed sensing (CS) can be used to encrypt and compress an image. Differences in sparsity level among blocks of the sparsely transformed image degrade compression performance. In this paper, motivated by this difference in sparsity levels, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize sparsity levels. A simple approximate evaluation method is introduced to test the sparsity uniformity. Owing to its low computational complexity and storage, in the second stage of encryption, KCS is adopted to encrypt and compress the scrambled and sparsely transformed image, where a small measurement matrix is constructed from the piecewise linear chaotic map. Theoretical analysis and experimental results show that our proposed scrambling method based on ECA has great performance in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method can achieve better secrecy, compression performance and flexibility.
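
    A minimal sketch of the Kronecker measurement step (the ECA scrambling stage and the chaotic-map construction of the measurement matrices are omitted; the matrices below are random placeholders): measuring with A ⊗ B is equivalent to applying B on one side of the image and A^T on the other, so the large Kronecker product never has to be formed.

        import numpy as np

        def kcs_measure(image, A, B):
            """Kronecker compressed sensing measurement y = (A kron B) vec(image),
            computed without materializing the Kronecker product."""
            # (A kron B) vec(X) == vec(B @ X @ A.T) with column-major (Fortran) vec
            return (B @ image @ A.T).flatten(order="F")

        rng = np.random.default_rng(3)
        n, m = 64, 32                                   # n x n image, m*m measurements
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        B = rng.standard_normal((m, n)) / np.sqrt(m)
        X = rng.standard_normal((n, n))

        y_fast = kcs_measure(X, A, B)
        y_full = np.kron(A, B) @ X.flatten(order="F")   # reference (large matrix!)
        print(np.allclose(y_fast, y_full))              # True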