Science.gov

Sample records for alvar variable compression

  1. Alvar variable compression engine development. Final report

    SciTech Connect

    1998-03-30

    The Alvar engine is an invention by Mr. Alvar Gustafsson of Skarblacka, Sweden. It is a four stroke spark ignition internal combustion engine, having variable compression ratio and variable displacement. The compression ratio can be varied by means of small secondary cylinders and pistons which communicate with the main combustion chambers. The secondary pistons can be phase shifted with respect to the main pistons. The engine is suitable for multi-fuel operation. Invention rights are held by Alvar Engine AB of Sweden, a company created to handle the development of the Alvar engine. A project was conceived wherein an optimised experimental engine would be built and tested to verify the advantages claimed for the Alvar engine and also to reveal possible drawbacks, if any. Alvar Engine AB appointed Gunnar Lundholm, professor of Combustion Engines at Lund University, Lund, Sweden, as principal investigator. The project could be seen as having three parts: (1) optimisation of the engine combustion chamber geometry; (2) design and manufacturing of the necessary engine parts; and (3) testing of the engine in an engine laboratory. NUTEK, the Swedish Board for Industrial and Technical Development, granted Gunnar Lundholm SEK 50000 (about $6700) to travel to the US to evaluate potential research and development facilities which seemed able to perform the different project tasks.

  2. Variable compression ratio control

    SciTech Connect

    Johnson, K.A.

    1988-04-19

    In a four cycle engine that includes a crankshaft having a plural number of main shaft sections defining the crankshaft rotational axis and a plural number of crank arms defining orbital shaft sections, a plural number of combustion cylinders, a movable piston within each cylinder, each cylinder and its associated piston defining a combustion chamber, a connecting rod connecting each piston to an orbital shaft section of the crankshaft, and a plural number of stationary support walls spaced along the crankshaft axis for absorbing crankshaft forces: the improvement is described comprising means for adjustably supporting the crankshaft on the stationary walls such that the crankshaft rotational axis is adjustable along the piston-cylinder axis for the purpose of varying a resulting engine compression ratio; the adjustable support means comprising a circular cavity in each stationary wall. A circular disk is swivably seated in each cavity, each circular disk having a circular opening therethrough eccentric to the disk center. The crankshaft is arranged so that respective ones of its main shaft sections are located within respective ones of the circular openings; means for rotating each circular disk around its center so that the main shaft sections of the crankshaft are adjusted toward and away from the combustion chamber; a pinion gear on an output end of the crankshaft in axial alignment with and positioned beyond the respective ones of the main shaft sections, and a rotary output gear located about and engaged with teeth extending from the pinion gear.

  3. Envera Variable Compression Ratio Engine

    SciTech Connect

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing compression ratio to ~9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC) which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock.
When loads on the engine are low
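The geometric relationship described above can be sketched in a few lines: the compression ratio is (swept + clearance)/clearance, so lowering the ratio enlarges the clearance (TDC) volume available to a boosted charge. The cylinder volume and target ratios below are illustrative numbers, not figures from the Envera project.

```python
def compression_ratio(swept_cc, clearance_cc):
    """Geometric compression ratio: (swept + clearance) / clearance."""
    return (swept_cc + clearance_cc) / clearance_cc

def clearance_for_ratio(swept_cc, ratio):
    """Clearance (TDC) volume needed to reach a target compression ratio."""
    return swept_cc / (ratio - 1.0)

swept = 500.0  # cc per cylinder (illustrative)
high_cr_clearance = clearance_for_ratio(swept, 12.0)  # ~45.5 cc
low_cr_clearance = clearance_for_ratio(swept, 9.0)    # 62.5 cc
# Dropping from 12:1 to 9:1 enlarges the TDC volume by about 37%,
# making room for a denser boosted charge at the same peak pressure.
```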

  4. Eccentric crank variable compression ratio mechanism

    DOEpatents

    Lawrence, Keith Edward; Moser, William Elliott; Roozenboom, Stephan Donald; Knox, Kevin Jay

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  5. Crankshaft assembly for variable stroke engine for variable compression

    SciTech Connect

    Heniges, W.B.

    1989-12-19

    This patent describes a crankshaft assembly for a variable compression engine with reciprocating pistons. It comprises: a crankshaft assembly including a web, a crankpin, a crankshaft arm, piston driven means carried by the crankpin, eccentric means including an eccentric bushing rotatably carried by the crankpin and interposed between the crankpin and the piston driven means. The eccentric means includes an eccentric mounted gear whereby adjusted rotation of the eccentric means relative to the crankpin will alter the spatial relationship of the eccentric and the piston driven means to the crankshaft axis to alter piston stroke, and eccentric positioning means including a gear train comprising a first gear driven by the crankshaft arm for rotation about the crankshaft axis, a gear set driven by the first gear with certain gears of the set being displaceable, carrier means supporting the certain gears, control means coupled to the carrier means for positioning same and the certain gears, and driven gears powered by the gear set, with one of the driven gears in mesh with the eccentric mounted gear to impart rotation to same to alter the relationship of the eccentric bushing to the piston driven means and thereby determine stroke.

  6. Alvar soils and ecology in the boreal forest and taiga regions of Canada.

    NASA Astrophysics Data System (ADS)

    Ford, D.

    2012-04-01

    Alvars have been defined as "...a biological association based on a limestone plain with thin or no soil and, as a result, sparse vegetation. Trees and bushes are stunted or absent ... may include prairie spp." (Wikipedia). They were first described in southern Sweden, Estonia, the karst pavements of Yorkshire (UK) and the Burren (Eire). In North America alvars have been recognised and reported only in the Mixed Forest (deciduous/coniferous) Zone around the Great Lakes. An essential feature of the hydrologic controls on vegetation growth on natural alvars is that these terrains were glaciated in the last (Wisconsinan/Würm) ice age: the upper beds of any pre-existing epikarst were stripped away by glacier scour and there has been insufficient time for post-glacial epikarst to achieve the depths and densities required to support the deep rooting needed for mature forest cover. However, in the sites noted above, the alvars have been created, at least in part, by deforestation, overgrazing, burning to create browse, etc. and thus should not be considered wholly natural phenomena. There are extensive natural alvars in the Boreal Forest and Taiga ecozones in Canada. Their nature and variety will be illustrated with examples from cold temperate maritime climate settings in northern Newfoundland and the Gulf of St Lawrence and cold temperate continental to sub-arctic climates in northern Manitoba and the Northwest Territories.

  7. An Efficient Variable-Length Data-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Kiely, Aaron B.

    1996-01-01

    Adaptive variable-length coding scheme for compression of stream of independent and identically distributed source data involves either Huffman code or alternating run-length Huffman (ARH) code, depending on characteristics of data. Enables efficient compression of output of lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for same purpose. In addition, scheme suitable for parallel implementation on hardware with modular structure, provides for rapid adaptation to changing data source, compatible with block orientation to alleviate memory requirements, ensures efficiency over wide range of entropy, and easily combined with such other communication schemes as those for containment of errors and for packetization.
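As context for the Huffman branch of the scheme above, a minimal Huffman coder can be built from a frequency heap; this is a generic textbook sketch, not the NASA implementation (which also switches to run-length Huffman coding depending on the data).

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol source
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tag = len(heap)                          # tiebreaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tag, merged))
        tag += 1
    return heap[0][2]

def huffman_decode(bits, code):
    """Greedy decoder; valid because a Huffman code is prefix-free."""
    inv = {w: s for s, w in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return "".join(out)

data = "abracadabra"
code = huffman_code(data)
encoded = "".join(code[s] for s in data)
```

The most frequent symbol receives the shortest codeword, which is what makes the code efficient over a wide range of source entropies.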

  8. Image Compression Using Vector Quantization with Variable Block Size Division

    NASA Astrophysics Data System (ADS)

    Matsumoto, Hiroki; Kichikawa, Fumito; Sasazaki, Kazuya; Maeda, Junji; Suzuki, Yukinori

    In this paper, we propose a method for compressing a still image using vector quantization (VQ). Local fractal dimension (LFD) is computed to divide an image into variable block sizes. The LFD shows the complexity of local regions of an image, so that a region of an image that shows higher LFD values than those of other regions is partitioned into small blocks of pixels, while a region of an image that shows lower LFD values than those of other regions is partitioned into large blocks. Furthermore, we developed a division and merging algorithm to decrease the number of blocks to encode. This results in improvement of the compression rate. We construct code books for the respective block sizes. To encode an image, a block of pixels is transformed by discrete cosine transform (DCT) and the closest vector is chosen from the code book (CB). In decoding, the code vector corresponding to the index is selected from the CB and then the code vector is transformed by inverse DCT to reconstruct a block of pixels. Computational experiments were carried out to show the effectiveness of the proposed method. Performance of the proposed method is slightly better than that of JPEG. In the case of the learning images used to construct a CB being different from the test images, the compression rate is comparable to compression rates of methods proposed so far, while image quality evaluated by NPIQM (normalized perceptual image quality measure) is almost the highest. The results show that the proposed method is effective for still image compression.
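The core VQ encode/decode loop described above can be sketched as follows. This is a simplified illustration for a single block size, without the DCT step or the LFD-driven partitioning; the codebook here is just a NumPy array of flattened block vectors.

```python
import numpy as np

def vq_encode(img, codebook, bs):
    """Map each bs x bs block to the index of its nearest code vector (L2)."""
    h, w = img.shape
    idx = np.empty((h // bs, w // bs), dtype=int)
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            v = img[i:i + bs, j:j + bs].ravel()
            d = ((codebook - v) ** 2).sum(axis=1)
            idx[i // bs, j // bs] = int(d.argmin())
    return idx

def vq_decode(idx, codebook, bs):
    """Rebuild the image by pasting the selected code vectors back in place."""
    h, w = idx.shape[0] * bs, idx.shape[1] * bs
    out = np.empty((h, w))
    for i in range(idx.shape[0]):
        for j in range(idx.shape[1]):
            out[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs] = \
                codebook[idx[i, j]].reshape(bs, bs)
    return out
```

Only the index map is stored, so the rate is fixed by the block size and codebook size; quality depends on how well the codebook covers the blocks that actually occur.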

  9. Pseudospectral simulation of compressible turbulence using logarithmic variables

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1993-01-01

    The direct numerical simulation of dissipative, highly compressible turbulent flow is performed using a pseudospectral Fourier technique. The governing equations are cast in a form where the important physical variables are the fluid velocity and the natural logarithms of the fluid density and temperature. Bulk viscosity is utilized to model polyatomic gases more accurately and to ensure numerical stability in the presence of strong shocks. Numerical examples include three-dimensional supersonic homogeneous turbulence and two-dimensional shock-turbulence interactions.
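For context, the logarithmic-density change of variables turns the continuity equation into a pure advection form; this is a standard identity, shown here to illustrate the formulation rather than reproduce the paper's full system.

```latex
% Substituting \rho = \rho_0 e^{s} into \partial_t\rho + \nabla\cdot(\rho\mathbf{u}) = 0
% and dividing through by \rho gives
\frac{\partial s}{\partial t} + \mathbf{u}\cdot\nabla s + \nabla\cdot\mathbf{u} = 0,
\qquad s \equiv \ln\!\left(\rho/\rho_0\right),
```

which keeps the density strictly positive in the discrete solution, since the simulated variable is its logarithm.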

  10. Variable Quality Compression of Fluid Dynamical Data Sets Using a 3D DCT Technique

    NASA Astrophysics Data System (ADS)

    Loddoch, A.; Schmalzl, J.

    2005-12-01

    In this work we present a data compression scheme that is especially suited for the compression of data sets resulting from computational fluid dynamics (CFD). By adopting the concept of the JPEG compression standard and extending the approach of Schmalzl (Schmalzl, J. Using standard image compression algorithms to store data from computational fluid dynamics. Computers and Geosciences, 29, 1021-1031, 2003) we employ a three-dimensional discrete cosine transform of the data. The resulting frequency components are rearranged, quantized and finally stored using Huffman-encoding and standard variable length integer codes. The compression ratio and also the introduced loss of accuracy can be adjusted by means of two compression parameters to give the desired compression profile. Using the proposed technique, compression ratios of more than 60:1 are possible with a mean error of the compressed data of less than 0.1%.
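The transform-and-quantize core of such a scheme can be sketched with a separable 3D DCT built from the orthonormal DCT-II matrix; this is a generic illustration (cubic blocks, a single uniform quantizer `q`), not the authors' code, which adds coefficient reordering and entropy coding.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0] /= np.sqrt(2.0)
    return c

def dct3(block):
    """Separable 3-D DCT: apply the 1-D transform along each axis."""
    c = dct_matrix(block.shape[0])  # assumes a cubic block
    return np.einsum('ai,bj,ck,ijk->abc', c, c, c, block)

def idct3(coef):
    """Inverse 3-D DCT (transpose of the orthonormal transform)."""
    c = dct_matrix(coef.shape[0])
    return np.einsum('ai,bj,ck,abc->ijk', c, c, c, coef)

def quantize(coef, q):
    """Uniform quantization; q trades compression ratio against accuracy."""
    return np.round(coef / q) * q
```

Because the transform is orthonormal, the reconstruction error is controlled directly by the quantization step, which is how the accuracy/ratio tradeoff is tuned.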

  11. Working characteristics of variable intake valve in compressed air engine.

    PubMed

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536

  12. Working Characteristics of Variable Intake Valve in Compressed Air Engine

    PubMed Central

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536

  13. Variable valve timing in a homogenous charge compression ignition engine

    DOEpatents

    Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.

    2004-08-03

    The present invention relates generally to the field of homogenous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.

  14. Combustion engine variable compression ratio apparatus and method

    DOEpatents

    Lawrence; Keith E.; Strawbridge, Bryan E.; Dutart, Charles H.

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  15. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. 
The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  16. Acoustic transmission matrix of a variable area duct or nozzle carrying a compressible subsonic flow

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1980-01-01

    The differential equations governing the propagation of sound in a variable area duct or nozzle carrying a one-dimensional subsonic compressible fluid flow are derived and put in state variable form using acoustic pressure and particle velocity as the state variables. The duct or nozzle is divided into a number of regions. The region size is selected so that in each region the Mach number can be assumed constant and the area variation can be approximated by an exponential area variation. Consequently, the state variable equation in each region has constant coefficients. The transmission matrix for each region is obtained by solving the constant coefficient acoustic state variable differential equation. The transmission matrix for the duct or nozzle is the product of the individual transmission matrices of each region. Solutions are presented for several geometries with and without mean flow.
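The chain rule for transmission matrices described above (the duct matrix is the product of the per-region matrices) can be illustrated with the textbook zero-flow, constant-area specialization, where each segment has the plane-wave transfer matrix in the normalized state [p, ρc·u]. The paper's regions additionally carry mean flow and exponential area variation, which this sketch omits.

```python
import numpy as np

def segment_matrix(k, L):
    """Transfer matrix of a uniform, flow-free duct segment of length L,
    relating [p, rho*c*u] at its two ends (harmonic time dependence)."""
    kl = k * L
    return np.array([[np.cos(kl), 1j * np.sin(kl)],
                     [1j * np.sin(kl), np.cos(kl)]])

def duct_matrix(k, lengths):
    """Chain the per-region matrices: the duct matrix is their product."""
    T = np.eye(2, dtype=complex)
    for L in lengths:
        T = segment_matrix(k, L) @ T
    return T
```

A quick consistency check: two half-length segments must compose to one full-length segment, and each segment matrix has unit determinant (the transform is reciprocal).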

  17. Acoustic transmission matrix of a variable area duct or nozzle carrying a compressible subsonic flow

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1980-01-01

    The differential equations governing the propagation of sound in a variable area duct or nozzle carrying a one dimensional subsonic compressible fluid flow are derived and put in state variable form using acoustic pressure and particle velocity as the state variables. The duct or nozzle is divided into a number of regions. The region size is selected so that in each region the Mach number can be assumed constant and the area variation can be approximated by an exponential area variation. Consequently, the state variable equation in each region has constant coefficients. The transmission matrix for each region is obtained by solving the constant coefficient acoustic state variable differential equation. The transmission matrix for the duct or nozzle is the product of the individual transmission matrices of each region. Solutions are presented for several geometries with and without mean flow.

  18. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each subband of wavelet coefficients. Then an optimal quadtree method was employed to partition each subband of wavelet coefficients into several sizes of sub-blocks. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
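The quadtree partitioning step described above can be sketched as a recursive split driven by a local complexity measure. In this illustration, pixel variance stands in for the paper's local fractal dimension, and the image is assumed square with power-of-two size; both are simplifying assumptions.

```python
import numpy as np

def quadtree_blocks(img, threshold, min_size):
    """Recursively split a square image into (x, y, size) blocks, subdividing
    wherever the complexity measure exceeds `threshold`. Pixel variance is
    used here as a stand-in for the paper's local fractal dimension."""
    blocks = []

    def split(x, y, size):
        patch = img[y:y + size, x:x + size]
        if size > min_size and patch.var() > threshold:
            h = size // 2
            for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
                split(x + dx, y + dy, h)
        else:
            blocks.append((x, y, size))

    split(0, 0, img.shape[0])
    return blocks
```

Smooth regions end up as a few large blocks while busy regions are subdivided down to `min_size`, matching the variable-block-size idea.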

  19. Variability and anisotropy of mechanical behavior of cortical bone in tension and compression.

    PubMed

    Li, Simin; Demirci, Emrah; Silberschmidt, Vadim V

    2013-05-01

    The mechanical properties of cortical bone vary not only from bone to bone; they demonstrate spatial variability even within the same bone due to its changing microstructure. They also depend considerably on different loading modes and orientations. To understand the variability and anisotropic mechanical behavior of a cortical bone tissue, specimens cut from four anatomical quadrants of bovine femurs were investigated both in tension and compression tests. The obtained experimental results revealed a highly anisotropic mechanical behavior, depending also on the loading mode (tension and compression). A compressive longitudinal loading regime resulted in the best load-bearing capacity for cortical bone, while tensile transverse loading provided significantly poorer results. The distinctive stress-strain curves obtained for tension and compression demonstrated various damage mechanisms associated with different loading modes. The variability of mechanical properties for different cortices was evaluated with two-way ANOVA analyses. Statistical significances were found among different quadrants for the Young's modulus. The results of microstructure analysis of the entire transverse cross section of a cortical bone also confirmed variations of volume fractions of constituents at the microscopic level between anatomic quadrants: the microstructure of the anterior quadrant was dominated by plexiform bone, whereas secondary osteons were prominent in the posterior quadrant. The effective Young's modulus predicted using the modified Voigt-Reuss-Hill averaging scheme accurately reproduced our experimental results, corroborating additionally a strong effect of random and heterogeneous microstructure on variation of mechanical properties in cortical bone. PMID:23563047
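The classical (unmodified) Voigt-Reuss-Hill average referenced above is simple to state: the Voigt bound mixes moduli by volume fraction, the Reuss bound mixes compliances, and Hill takes their mean. The moduli and fractions below are illustrative numbers, not values from the study.

```python
def voigt_reuss_hill(moduli, fractions):
    """Voigt (upper) and Reuss (lower) bounds and their Hill average for the
    effective modulus of a multiphase composite."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    voigt = sum(f * e for e, f in zip(moduli, fractions))          # iso-strain
    reuss = 1.0 / sum(f / e for e, f in zip(moduli, fractions))    # iso-stress
    return voigt, reuss, 0.5 * (voigt + reuss)

# Two hypothetical constituents, e.g. stiffer vs softer bone tissue (GPa):
v, r, hill = voigt_reuss_hill([22.0, 14.0], [0.6, 0.4])
```

Any physically admissible effective modulus must fall between the two bounds, which is why the Hill average is a common first estimate.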

  20. Ultraspectral sounder data compression using error-detecting reversible variable-length coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Ahuja, Alok; Huang, Hung-Lung; Schmit, Timothy J.; Heymann, Roger W.

    2005-08-01

    Nonreversible variable-length codes (e.g. Huffman coding, Golomb-Rice coding, and arithmetic coding) have been used in source coding to achieve efficient compression. However, a single bit error during noisy transmission can cause many codewords to be misinterpreted by the decoder. In recent years, increasing attention has been given to the design of reversible variable-length codes (RVLCs) for better data transmission in error-prone environments. RVLCs allow instantaneous decoding in both directions, which affords better detection of bit errors due to synchronization losses over a noisy channel. RVLCs have been adopted in emerging video coding standards--H.263+ and MPEG-4--to enhance their error-resilience capabilities. Given the large volume of three-dimensional data that will be generated by future space-borne ultraspectral sounders (e.g. IASI, CrIS, and HES), the use of error-robust data compression techniques will be beneficial to satellite data transmission. In this paper, we investigate a reversible variable-length code for ultraspectral sounder data compression, and present its numerical experiments on error propagation for the ultraspectral sounder data. The results show that the RVLC performs significantly better error containment than JPEG2000 Part 2.
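The two-way decodability of an RVLC can be demonstrated with a small symmetric (palindromic) codeword set: since each codeword reads the same in both directions, a prefix-free set is automatically suffix-free, so the bitstream parses right-to-left as well. The codeword set below is an illustrative toy, not the code evaluated in the paper.

```python
def is_prefix_free(words):
    """True if no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in words for b in words)

# Palindromic codewords: prefix-free forward implies suffix-free as well.
RVLC = {"a": "0", "b": "11", "c": "101", "d": "1001"}

def decode(bits, code, reverse=False):
    """Greedy decode; with reverse=True, parse the bitstream right-to-left."""
    inv = {(w[::-1] if reverse else w): s for s, w in code.items()}
    if reverse:
        bits = bits[::-1]
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    if reverse:
        out.reverse()
    return "".join(out)
```

After a bit error, a decoder can parse forward up to the corruption and backward from the end of the packet, containing the damage to a small span, which is the error-resilience property exploited here.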

  1. Influence of variables on the consolidation and unconfined compressive strength of crushed salt: Technical report

    SciTech Connect

    Pfeifle, T.W.; Senseny, P.E.; Mellegard, K.D.

    1987-01-01

    Eight hydrostatic compression creep tests were performed on crushed salt specimens fabricated from Avery Island dome salt. Following the creep test, each specimen was tested in unconfined compression. The experiments were performed to assess the influence of the following four variables on the consolidation and unconfined strength of crushed salt: grain size distribution, temperature, time, and moisture content. The experiment design comprised a half-fraction factorial matrix at two levels. The levels of each variable investigated were grain size distribution, uniform-graded and well-graded (coefficient of uniformity of 1 and 8); temperature, 25 °C and 100 °C; time, 3.5×10³ s and 950×10³ s (approximately 60 minutes and 11 days, respectively); and moisture content, dry and wet (85% relative humidity for 24 hours). The hydrostatic creep stress was 10 MPa. The unconfined compression tests were performed at an axial strain rate of 1×10⁻⁵ s⁻¹. Results show that the variables time and moisture content have the greatest influence on creep consolidation, while grain size distribution and, to a somewhat lesser degree, temperature have the greatest influence on total consolidation. Time and moisture content and the confounded two-factor interactions between either grain size distribution and time or temperature and moisture content have the greatest influence on unconfined strength. 7 refs., 7 figs., 11 tabs.

  2. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.
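The filter-threshold idea described above can be illustrated with a one-dimensional orthonormal Haar transform: detail coefficients below the threshold are discarded, and the mesh would be refined only where coefficients survive. This is a generic sketch of wavelet thresholding under simplifying assumptions (1D, Haar, power-of-two length), not the adaptive solver itself.

```python
import numpy as np

def haar(signal):
    """Orthonormal Haar wavelet transform of a length-2^n signal.
    Returns detail coefficients per level, coarsest scaling coeffs last."""
    x = np.asarray(signal, dtype=float)
    out = []
    while len(x) > 1:
        out.append((x[0::2] - x[1::2]) / np.sqrt(2.0))  # details
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)          # smooth part
    out.append(x)
    return out

def inverse_haar(coeffs):
    """Reconstruct the signal from Haar coefficients."""
    x = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        s = np.empty(2 * len(x))
        s[0::2] = (x + d) / np.sqrt(2.0)
        s[1::2] = (x - d) / np.sqrt(2.0)
        x = s
    return x

def threshold(coeffs, eps):
    """Zero out detail coefficients below eps; eps plays the role of the
    filter threshold parameter trading cost against fidelity."""
    return [np.where(np.abs(d) >= eps, d, 0.0) for d in coeffs[:-1]] + [coeffs[-1]]
```

Because the basis is orthonormal, the L2 reconstruction error is exactly the norm of the discarded coefficients, which is what makes eps an explicit error bound.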

  3. Structural Response of Compression-Loaded, Tow-Placed, Variable Stiffness Panels

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Guerdal, Zafer; Starnes, James H., Jr.

    2002-01-01

    Results of an analytical and experimental study to characterize the structural response of two compression-loaded variable stiffness composite panels are presented and discussed. These variable stiffness panels are advanced composite structures, in which tows are laid down along precise curvilinear paths within each ply and the fiber orientation angle varies continuously throughout each ply. The panels are manufactured from AS4/977-3 graphite-epoxy pre-preg material using an advanced tow placement system. Both variable stiffness panels have the same layup, but one panel has overlapping tow bands and the other panel has a constant-thickness laminate. A baseline cross-ply panel is also analyzed and tested for comparative purposes. Tests performed on the variable stiffness panels show a linear prebuckling load-deflection response, followed by a nonlinear response to failure at loads between 4 and 53 percent greater than the baseline panel failure load. The structural response of the variable stiffness panels is also evaluated using finite element analyses. Nonlinear analyses of the variable stiffness panels are performed which include mechanical and thermal prestresses. Results from analyses that include thermal prestress conditions correlate well with measured variable stiffness panel results. The predicted response of the baseline panel also correlates well with measured results.

  4. Effects of selected design variables on three-ramp, external compression inlet performance [boundary layer control, bypasses, and mass flow rate]

    NASA Technical Reports Server (NTRS)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  5. A numerical investigation of the finite element method in compressible primitive variable Navier-Stokes flow

    NASA Technical Reports Server (NTRS)

    Cook, C. H.

    1977-01-01

    The results of a comprehensive numerical investigation of the basic capabilities of the finite element method (FEM) for the numerical solution of compressible flow problems governed by the two-dimensional and axisymmetric Navier-Stokes equations in primitive variables are presented. The strong and weak points of the method as a tool for computational fluid dynamics are considered, and the relation of the linear-element finite element method to finite difference methods (FDM) is explored. Calculations of free shear layers and separated flows over aircraft boattail afterbodies with plume simulators indicate that the strongest assets of the method are its capabilities for reliable and accurate calculation on variable grids, which readily approximate complex geometry and adapt to diverse regions of large solution gradients without requiring domain transformation.

  6. Interfraction Liver Shape Variability and Impact on GTV Position During Liver Stereotactic Radiotherapy Using Abdominal Compression

    SciTech Connect

    Eccles, Cynthia L.; Dawson, Laura A.; Moseley, Joanne L.; Brock, Kristy K.

    2011-07-01

    Purpose: For patients receiving liver stereotactic body radiotherapy (SBRT), abdominal compression can reduce organ motion, and daily image guidance can reduce setup error. The reproducibility of liver shape under compression may impact treatment delivery accuracy. The purpose of this study was to measure the interfractional variability in liver shape under compression, after best-fit rigid liver-to-liver registration from kilovoltage (kV) cone beam computed tomography (CBCT) scans to planning computed tomography (CT) scans, and its impact on gross tumor volume (GTV) position. Methods and Materials: Evaluable patients were treated in a Research Ethics Board-approved SBRT six-fraction study with abdominal compression. Kilovoltage CBCT scans were acquired before treatment and reconstructed as respiratory-sorted CBCT scans offline. Manual rigid liver-to-liver registrations were performed from exhale-phase CBCT scans to exhale planning CT scans. Each CBCT liver was contoured, exported, and compared with the planning CT scan for spatial differences, by use of in-house-developed finite-element model-based deformable registration (MORFEUS). Results: We evaluated 83 CBCT scans from 16 patients with 30 GTVs. The mean volume of liver that deformed by greater than 3 mm was 21.7%. Excluding 1 outlier, the maximum volume that deformed by greater than 3 mm was 36.3% in a single patient. Over all patients, the absolute maximum deformations in the left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions were 10.5 mm (SD, 2.2), 12.9 mm (SD, 3.6), and 5.6 mm (SD, 2.7), respectively. The absolute mean predicted impact of liver volume displacements on GTV position, by use of center-of-mass displacements, was 0.09 mm (SD, 0.13), 0.13 mm (SD, 0.18), and 0.08 mm (SD, 0.07) in the LR, AP, and SI directions, respectively. Conclusions: Interfraction liver deformations in patients undergoing SBRT under abdominal compression after rigid liver

  7. Existence of Compressible Current-Vortex Sheets: Variable Coefficients Linear Analysis

    NASA Astrophysics Data System (ADS)

    Trakhinin, Yuri

    2005-09-01

    We study the initial-boundary value problem resulting from the linearization of the equations of ideal compressible magnetohydrodynamics and the Rankine-Hugoniot relations about an unsteady piecewise smooth solution. This solution is supposed to be a classical solution of the system of magnetohydrodynamics on either side of a surface of tangential discontinuity (current-vortex sheet). Under some assumptions on the unperturbed flow, we prove an energy a priori estimate for the linearized problem. Since the tangential discontinuity is characteristic, the functional setting is provided by the anisotropic weighted Sobolev space W_2^{1,σ}. Despite the fact that the constant coefficients linearized problem does not meet the uniform Kreiss-Lopatinskii condition, the estimate we obtain is without loss of smoothness even for the variable coefficients problem and nonplanar current-vortex sheets. The result of this paper is a necessary step in proving the local-in-time existence of current-vortex sheet solutions of the nonlinear equations of magnetohydrodynamics.

  8. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
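
The per-block adaptation can be illustrated as follows: map sample-to-sample prediction residuals to non-negative integers, then pick whichever of three candidate codes is cheapest for the block. The candidate codes below (unary, a Rice code with k = 2, raw 8-bit) are illustrative stand-ins, not the three codes of the flight system:

```python
def zigzag(d):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * d if d >= 0 else -2 * d - 1

def unary_bits(n):
    return n + 1                      # n zeros and a terminating one

def rice_bits(n, k):
    return (n >> k) + 1 + k           # unary quotient plus k remainder bits

def encode_block(pixels, prev):
    """Choose the cheapest of three codes for one block of pixels,
    in the spirit of the Basic Compressor's block-by-block adaptation."""
    residuals = [zigzag(p - q) for q, p in zip([prev] + pixels[:-1], pixels)]
    costs = {
        "unary": sum(unary_bits(n) for n in residuals),
        "rice_k2": sum(rice_bits(n, 2) for n in residuals),
        "raw8": 8 * len(residuals),
    }
    best = min(costs, key=costs.get)
    return best, costs[best]

# Slowly varying samples: the low-entropy code wins at 18 bits for 8 pixels.
print(encode_block([10, 11, 11, 12, 10, 9, 9, 10], prev=10))  # -> ('unary', 18)
```

A noisy block would instead fall back to the raw fixed-length code, which is how such schemes bound their worst-case expansion.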

  9. Performance and exhaust emission characteristics of variable compression ratio diesel engine fuelled with esters of crude rice bran oil.

    PubMed

    Vasudeva, Mohit; Sharma, Sumeet; Mohapatra, S K; Kundu, Krishnendu

    2016-01-01

    As a substitute for petroleum-derived diesel, biodiesel has high potential as a renewable and environmentally friendly energy source. For petroleum-importing countries, the choice of feedstock for biodiesel production within the geographical region is a major influential factor. Crude rice bran oil is found to be a good and viable feedstock for biodiesel production. A two-step esterification is carried out for crude rice bran oil with high free fatty acid content. Blends of 10, 20 and 40 % by vol. crude rice bran biodiesel are tested in a variable compression ratio diesel engine at compression ratios of 15, 16, 17 and 18. Engine performance and exhaust emission parameters are examined, and cylinder pressure-crank angle variation is also plotted. The increase in compression ratio from 15 to 18 resulted in an 18.6 % decrease in brake specific fuel consumption and a 14.66 % increase in brake thermal efficiency on average. Cylinder pressure increases by 15 % when the compression ratio is increased. Carbon monoxide emission decreased by 22.27 %, hydrocarbon emission decreased by 38.4 %, carbon dioxide increased by 17.43 %, and oxides of nitrogen (NOx) increased by 22.76 % on average when the compression ratio was increased from 15 to 18. The blends of crude rice bran biodiesel show better results than diesel with increasing compression ratio. PMID:27066330
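
The two performance figures reported above are linked through the fuel's heating value; a minimal sketch of the conversion (round illustrative numbers, not the paper's measurements):

```python
def brake_thermal_efficiency(bsfc_kg_per_kwh, lhv_mj_per_kg):
    """Brake thermal efficiency = useful output / fuel energy input:
    1 kWh = 3.6 MJ, divided by BSFC [kg/kWh] times lower heating value [MJ/kg]."""
    return 3.6 / (bsfc_kg_per_kwh * lhv_mj_per_kg)

# Illustrative: a BSFC of 0.30 kg/kWh on a fuel with a 40 MJ/kg lower
# heating value corresponds to 30 % brake thermal efficiency.
print(round(brake_thermal_efficiency(0.30, 40.0), 2))  # -> 0.3
```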

  10. The observed compression and expansion of the F2 ionosphere as a major component of ionospheric variability

    NASA Astrophysics Data System (ADS)

    Lynn, K. J. W.; Gardiner-Garden, R. S.; Heitmann, A.

    2016-05-01

    This paper examines a number of sources of ionospheric variability and demonstrates that they have relationships in common which are currently not recognized. The paper initially deals with medium- to large-scale traveling ionospheric disturbances (TIDs). Subsequent sections deal with non-TID ionospheric variations, which are often repetitious from day to day. The latter include the temporary rise in F2 height associated with sunset at equatorial latitudes, resulting from strong upward drift in ionization driven by an E × B force. The following fall in height is often referred to as the premidnight collapse and is accompanied by a temporary increase in foF2 as a result of ionospheric compression. An entirely different repetitious phenomenon, reported recently from middle latitudes in the Southern Hemisphere, consists of strong morning and afternoon peaks in foF2 which define a midday bite-out and occur at the equinoxes. This behavior has been speculated to be tidal in origin. All the sources of ionospheric variability listed above exhibit similar relationships: a temporary expansion and upward lift of the ionospheric profile, followed by a fall involving compression of the profile that produces a peak in foF2 at the time of maximum compression. Such compression/decompression is followed by a period in which the ionospheric profile recovers. These relationships have been noted previously for TIDs; the present paper establishes for the first time that they are also present in association with other drivers of ionospheric variability.

  11. Compression of ECG signals using variable-length classified vector sets and wavelet transforms

    NASA Astrophysics Data System (ADS)

    Gurkan, Hakan

    2012-12-01

    In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines modeling of the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS derived from the ECG signals, which exploit the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by an energy-based segmentation method, and they are provided to both the transmitter and the receiver used in the proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, which is supported by the clinical tests we have carried out.
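
The primary distortion metric named above has a compact definition; a sketch of the plain PRD (without the mean subtraction used in the MPRD variant):

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference, a standard distortion
    metric for scoring ECG compression schemes."""
    original = np.asarray(original, float)
    reconstructed = np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

x = np.array([3.0, 4.0])
print(prd(x, x))   # identical signals -> 0.0
```

Lower PRD means a more faithful reconstruction; clinical evaluation is still needed because PRD does not weight diagnostically important waveform features.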

  12. Numerical solution of the compressible Navier-Stokes equations using density gradients as additional dependent variables. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kwon, J. H.

    1977-01-01

    The numerical solution of the two-dimensional, time-dependent, compressible viscous Navier-Stokes equations about arbitrary bodies was treated using density gradients as additional dependent variables. Thus, six dependent variables were computed with the SOR iteration method. Besides a formulation for the pressure gradient terms, a formulation for computing the density at the body was presented. To approximate the governing equations, an implicit finite difference method was employed. In computing the solution for the flow about a circular cylinder, a problem arose near the wall at both stagnation points, so computations with various conditions were tried to examine the problem. Computations with and without these formulations are also compared. The flow variables were computed on a 37 by 40 field first, then on an 81 by 40 field.
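
The SOR iteration named in the abstract can be shown on a simpler model problem (a Laplace equation, not the Navier-Stokes system) purely to illustrate the update pattern; the relaxation factor and grid are arbitrary:

```python
import numpy as np

def sor_laplace(grid, omega=1.5, iters=500):
    """Successive over-relaxation sweeps for a Dirichlet Laplace problem:
    each interior point is moved omega of the way toward its
    Gauss-Seidel (four-neighbor average) value."""
    u = grid.copy()
    for _ in range(iters):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                u[i, j] += omega * (gs - u[i, j])
    return u

u0 = np.zeros((6, 6))
u0[0, :] = 1.0                      # hot top boundary, cold elsewhere
u = sor_laplace(u0)
print(round(u[3, 2], 3))
```

With omega = 1 this reduces to Gauss-Seidel; omega between 1 and 2 accelerates convergence.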

  13. A burst compression and expansion technique for variable-rate users in satellite-switched TDMA networks

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1990-01-01

    A burst compression and expansion technique is described for asynchronously interconnecting variable-data-rate users with cost-efficient ground terminals in a satellite-switched, time-division-multiple-access (SS/TDMA) network. Compression and expansion buffers in each ground terminal convert between lower-rate, asynchronous, continuous-user data streams and higher-rate TDMA bursts synchronized with the satellite-switched timing. The technique described uses a first-in, first-out (FIFO) memory approach which enables the use of inexpensive clock sources by both the users and the ground terminals and obviates the need for elaborate user clock synchronization processes. A continuous range of data rates from kilobits per second to that approaching the modulator burst rate (hundreds of megabits per second) can be accommodated. The technique was developed for use in the NASA Lewis Research Center System Integration, Test, and Evaluation (SITE) facility. Some key features of the technique have also been implemented in the ground terminals developed at NASA Lewis for use in on-orbit evaluation of the Advanced Communications Technology Satellite (ACTS) high burst rate (HBR) system.
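
The FIFO compression-buffer idea can be sketched in a few lines: a continuous low-rate stream is written in at the user's clock and drained as a high-rate burst whenever at least one burst's worth of data is queued. The class and sizes below are illustrative, not the SITE implementation:

```python
from collections import deque

class BurstCompressor:
    """Toy FIFO burst-compression buffer: asynchronous writes in,
    fixed-length TDMA bursts out."""

    def __init__(self, burst_len):
        self.fifo = deque()
        self.burst_len = burst_len

    def write(self, word):
        """User side: words arrive at the user's own (slower) clock."""
        self.fifo.append(word)

    def read_burst(self):
        """Satellite side: drain one full burst at TDMA frame timing,
        or None if a complete burst is not yet queued."""
        if len(self.fifo) < self.burst_len:
            return None
        return [self.fifo.popleft() for _ in range(self.burst_len)]

bc = BurstCompressor(burst_len=4)
for w in range(6):
    bc.write(w)
print(bc.read_burst())   # -> [0, 1, 2, 3]
print(bc.read_burst())   # only 2 words queued -> None
```

The expansion buffer at the receiving terminal is the mirror image: bursts in, a continuous stream out at the destination user's clock.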

  15. The structure of variable property, compressible mixing layers in binary gas mixtures

    NASA Technical Reports Server (NTRS)

    Kozusko, F.; Grosch, C. E.; Jackson, T. L.; Kennedy, Christopher A.; Gatski, Thomas B.

    1996-01-01

    We present the results of a study of the structure of a parallel compressible mixing layer in a binary mixture of gases. The gases included in this study are hydrogen (H2), helium (He), nitrogen (N2), oxygen (O2), neon (Ne), and argon (Ar). Profiles of the variation of the Lewis and Prandtl numbers across the mixing layer for all thirty combinations of gases are given. It is shown that the Lewis number can vary by as much as a factor of eight and the Prandtl number by a factor of two across the mixing layer. Thus assuming constant values for the Lewis and Prandtl numbers of a binary gas mixture in the shear layer, as is done in many theoretical studies, is a poor approximation. We also present profiles of the velocity, mass fraction, temperature, and density for representative binary gas mixtures at zero and supersonic Mach numbers. We show that the shape of these profiles depends strongly on which gases are in the mixture and on whether the denser gas is in the fast stream or the slow stream.

  16. A model for the compressible, viscoelastic behavior of human amnion addressing tissue variability through a single parameter.

    PubMed

    Mauri, Arabella; Ehret, Alexander E; De Focatiis, Davide S A; Mazza, Edoardo

    2016-08-01

    A viscoelastic, compressible model is proposed to rationalize the recently reported response of human amnion in multiaxial relaxation and creep experiments. The theory includes two viscoelastic contributions responsible for the short- and long-term time-dependent response of the material. These two contributions can be related to physical processes: water flow through the tissue and dissipative characteristics of the collagen fibers, respectively. An accurate agreement of the model with the mean tension and kinematic response of amnion in uniaxial relaxation tests was achieved. By variation of a single linear factor that accounts for the variability among tissue samples, the model provides very sound predictions not only of the uniaxial relaxation but also of the uniaxial creep and strip-biaxial relaxation behavior of individual samples. This suggests that a wide range of viscoelastic behaviors due to patient-specific variations in tissue composition can be represented by the model without the need of recalibration and parameter identification. PMID:26497188
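
The two-contribution structure with a single linear sample factor can be sketched as a two-branch exponential relaxation; the parameter values below are invented for illustration and are not fitted amnion data:

```python
import numpy as np

def relaxation_tension(t, s_inf, s_short, tau_short, s_long, tau_long, scale=1.0):
    """Two-branch relaxation: an equilibrium term plus a short-time branch
    (cf. water flow through the tissue) and a long-time branch (cf. collagen
    fiber dissipation). `scale` is the single linear factor accounting for
    sample-to-sample variability, as described in the abstract."""
    return scale * (s_inf
                    + s_short * np.exp(-t / tau_short)
                    + s_long * np.exp(-t / tau_long))

t = np.array([0.0, 10.0, 1000.0])
print(relaxation_tension(t, 1.0, 0.5, 5.0, 0.3, 300.0))
```

Because `scale` multiplies the whole response linearly, one number rescales the predicted relaxation, creep, and biaxial curves together, which is the point the abstract makes about avoiding per-sample recalibration.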

  17. On Fully Developed Channel Flows: Some Solutions and Limitations, and Effects of Compressibility, Variable Properties, and Body Forces

    NASA Technical Reports Server (NTRS)

    Maslen, Stephen H.

    1959-01-01

    An examination of the effects of compressibility, variable properties, and body forces on fully developed laminar flow has indicated several limitations on such streams. In the absence of a pressure gradient, but in the presence of a body force (e.g., gravity), an exact fully developed gas flow results. For a liquid this follows also for the case of a constant streamwise pressure gradient. These motions are exact in the sense of a Couette flow. In the liquid case two solutions (not a new result) can occur for the same boundary conditions. An approximate analytic solution was found which agrees closely with machine calculations. In the case of approximately exact flows, it turns out that for large temperature variations across the channel the effects of convection (due to, say, a wall temperature gradient) and frictional heating must be negligible. In such a case the energy and momentum equations are separated, and the solutions are readily obtained. If the temperature variations are small, then both convection effects and frictional heating can consistently be considered. This case becomes the constant-property incompressible case (or quasi-incompressible case for free-convection flows) considered by many authors. Finally there is a brief discussion of cases wherein streamwise variations of all quantities are allowed, but only in such a form that the independent variables are separable. For the case where the streamwise velocity varies inversely as the square root of the distance along the channel, a solution is given.

  18. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique much used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and classical lossless algorithms give poor efficiencies in their compression. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm while preserving near-lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible within this algorithm (75%) can be achieved with good precision. PMID:10850759
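
The abstract does not reproduce the BS-VLC byte layout, so the sketch below shows only the generic family it belongs to: delta coding of fixed-point trajectory coordinates followed by byte-structured variable-length codes (7 data bits per byte, high bit as continuation flag). Names and layout are illustrative, not the published format:

```python
def varbyte_encode(values):
    """Delta + zigzag + variable-byte coding of an integer coordinate stream."""
    out = bytearray()
    prev = 0
    for v in values:
        d, prev = v - prev, v
        u = 2 * d if d >= 0 else -2 * d - 1    # zigzag: sign into the low bit
        while u >= 0x80:
            out.append(0x80 | (u & 0x7F))      # continuation byte, 7 data bits
            u >>= 7
        out.append(u)                           # final byte, high bit clear
    return bytes(out)

def varbyte_decode(data):
    values, prev, i = [], 0, 0
    while i < len(data):
        u, shift = 0, 0
        while data[i] & 0x80:                   # gather continuation bytes
            u |= (data[i] & 0x7F) << shift
            shift += 7
            i += 1
        u |= data[i] << shift
        i += 1
        d = (u >> 1) ^ -(u & 1)                 # undo zigzag
        prev += d
        values.append(prev)
    return values

coords = [10000, 10002, 10001, 9998]            # fixed-point coordinates
enc = varbyte_encode(coords)
print(len(enc), varbyte_decode(enc) == coords)  # -> 6 True
```

Because successive trajectory coordinates differ little, most deltas fit in one byte, which is where the large gain over general-purpose lossless coders comes from.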

  19. Spatially Variable Compressibility Estimation Using the Ensemble Smoother with Bathymetry Observations: Application to the Maja Gas Reservoir

    NASA Astrophysics Data System (ADS)

    Zoccarato, C.; Bau, D.; Teatini, P.

    2015-12-01

    A data assimilation (DA) framework is established to characterize the geomechanical response of a strongly compartmentalized hydrocarbon reservoir. The available observations over the offshore gas field consist of bathymetric surveys carried out before and at the end of its ten-year production life. The time-lapse map of vertical displacements is used to infer the most important parameter characterizing reservoir compaction, i.e., the rock formation compressibility cm. The methodology is tested for two different conceptual models: (a) cm varies with depth and the vertical effective stress (heterogeneity due to lithostratigraphic variability) and (b) cm also varies horizontally within the stratigraphic unit. The latter hypothesis is made to account for the behavior of the partitioned reservoir due to the presence of sealing faults and thrusts, which suggest the idea of a block-heterogeneous cm. The calibration of the geomechanical parameters is obtained with the aid of the Ensemble Smoother algorithm, an ensemble-based DA analysis scheme. In scenario (b), the number of reservoir blocks dictates the set of uncertain parameters, whereas scenario (a) is characterized by only one uncertain parameter. The outcome from scenario (a) indicates that DA is effective in reducing the cm uncertainty; however, the maximum measured settlement is underestimated and the areal extent of the subsidence bowl overestimated. Significant improvements are obtained in scenario (b), where the maximum model overestimate is reduced by about 25% and an overall good match of the measured bathymetry is achieved.
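
A minimal Ensemble Smoother analysis step of the kind described, updating an ensemble of compressibility parameters from displacement observations, can be sketched as follows; the toy linear forward model and all numbers are synthetic and illustrative:

```python
import numpy as np

def ensemble_smoother_update(params, preds, obs, obs_err_std):
    """One Ensemble Smoother analysis step with perturbed observations.
    Shapes: params (Ne, Np) parameter ensemble, preds (Ne, Nd) predicted
    data, obs (Nd,) observed data."""
    rng = np.random.default_rng(0)
    ne = params.shape[0]
    A = params - params.mean(0)
    D = preds - preds.mean(0)
    C_md = A.T @ D / (ne - 1)                  # parameter-data cross-covariance
    C_dd = D.T @ D / (ne - 1)                  # predicted-data covariance
    R = np.eye(len(obs)) * obs_err_std ** 2    # observation-error covariance
    K = C_md @ np.linalg.inv(C_dd + R)         # Kalman-type gain
    perturbed = obs + rng.normal(0.0, obs_err_std, (ne, len(obs)))
    return params + (perturbed - preds) @ K.T

# Toy check: synthetic truth cm = 2, forward model "settlement = cm".
true_obs = np.array([2.0])
cm_prior = np.random.default_rng(1).normal(1.0, 0.5, (50, 1))
preds = cm_prior.copy()                        # linear forward model g(cm) = cm
cm_post = ensemble_smoother_update(cm_prior, preds, true_obs, 0.05)
print(round(cm_post.mean(), 1))
```

The posterior ensemble mean moves close to the synthetic truth and its spread shrinks, which is the uncertainty reduction reported in scenario (a).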

  20. The Use of Fuel Chemistry and Property Variations to Evaluate the Robustness of Variable Compression Ratio as a Control Method for Gasoline HCCI

    SciTech Connect

    Szybist, James P; Bunting, Bruce G

    2007-01-01

    On a gasoline engine platform, homogeneous charge compression ignition (HCCI) holds the promise of improved fuel economy and greatly reduced engine-out NOx emissions, without an increase in particulate matter emissions. In this investigation, a variable compression ratio (CR) engine equipped with a throttle and intake air heating was used to test the robustness of these control parameters in accommodating a series of fuels blended from reference gasoline, straight-run refinery naphtha, and ethanol. Higher compression ratios allowed operation with higher-octane fuels, but operation could not be achieved with the reference gasoline, even at the highest compression ratio. Compression ratio and intake heating could be used separately or together to modulate combustion. A lambda of 2 provided optimum fuel efficiency, even though some throttling was necessary to achieve this condition. Ethanol did not appear to assist combustion, although only two ethanol-containing fuels were evaluated. The increased pumping work from throttling was minimal compared to the efficiency gains that resulted from lower unburned hydrocarbon (HC) and carbon monoxide (CO) emissions. Low-temperature heat release was present for all the fuels, but could be suppressed with a higher intake air temperature. Results will be used to design future fuels and combustion studies with this research platform.

  1. Supercharged two-cycle engines employing novel single element reciprocating shuttle inlet valve mechanisms and with a variable compression ratio

    NASA Technical Reports Server (NTRS)

    Wiesen, Bernard (Inventor)

    2008-01-01

    This invention relates to novel reciprocating shuttle inlet valves, effective with every type of two-cycle engine, from small high-speed single-cylinder model engines to large low-speed multiple-cylinder engines employing spark or compression ignition. It also permits the elimination of the out-of-phase piston arrangements used to control scavenging and supercharging of opposed-piston engines. The reciprocating shuttle inlet valve (32) and its operating mechanism (34) are constructed as a single, simple, uncomplicated member, in combination with the lost-motion abutments (46) and (48) formed in a piston skirt, obviating the need for any complex mechanisms or auxiliary drives and remaining unaffected by heat, friction, wear, or inertial forces. The reciprocating shuttle inlet valve retains the simplicity and advantages of two-cycle engines while permitting an increase in volumetric efficiency and performance, thereby extending the usefulness of two-cycle engines into many areas now dominated by the four-cycle engine.

  2. Hierarchical Order of Influence of Mix Variables Affecting Compressive Strength of Sustainable Concrete Containing Fly Ash, Copper Slag, Silica Fume, and Fibres

    PubMed Central

    Natarajan, Sakthieswaran; Karuppiah, Ganesan

    2014-01-01

    Experiments have been conducted to study the effect of addition of fly ash, copper slag, and steel and polypropylene fibres on the compressive strength of concrete and to determine, using cluster analysis, the hierarchical order of influence of the mix variables in affecting the strength. While fly ash and copper slag are used for partial replacement of cement and fine aggregate, respectively, defined quantities of steel and polypropylene fibres were added to the mixes. It is found from the experimental study that, in general, irrespective of the presence or absence of fibres, (i) for a given copper slag-fine aggregate ratio, the concrete strength decreases as the fly ash-cement ratio increases, and the rate of this decrease grows with the copper slag-fine aggregate ratio, and (ii) for a given fly ash-cement ratio, an increase in copper slag-fine aggregate ratio increases the strength of the concrete. From the cluster analysis, it is found that the quantities of coarse and fine aggregate present have a high influence in affecting the strength. It is also observed that the quantities of fly ash and copper slag used as substitutes have equal “influence” in affecting the strength. The cluster analysis also reveals that the addition of fibres has only a marginal effect on the compressive strength of concrete. PMID:24707213

  4. Detection of two-mode compression and degree of entanglement in continuous variables in parametric scattering of light

    SciTech Connect

    Rytikov, G. O.; Chekhova, M. V.

    2008-12-15

    Generation of 'twin beams' (light exhibiting two-mode compression, i.e., two-mode squeezing) in a single-pass optical parametric amplifier (a crystal with nonzero quadratic susceptibility) is considered. Radiation at the output of the nonlinear crystal is essentially multimode, which raises the question of the effect of the detection volume on the extent of suppression of noise in the difference photocurrent of the detectors. In addition, the longitudinal as well as the transverse size of the region in which the parametric transformation takes place is of fundamental importance. It is shown that maximal suppression of noise in the difference photocurrent requires a high degree of entanglement of the two-photon light at the output of the parametric amplifier, defined by Fedorov et al. [Phys. Rev. A 77, 032336 (2008)] as the ratio of the intensity distribution width to the correlation function width. The detection volume should be chosen taking both these quantities into account. Various modes of single-pass generation of twin beams (noncollinear frequency-degenerate and collinear frequency-nondegenerate type I phase matching, as well as collinear frequency-degenerate type II phase matching) are considered in connection with the degree of entanglement.
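
The Fedorov et al. entanglement measure referenced above, the ratio of the unconditional (intensity) width to the conditional (correlation) width, can be evaluated numerically for a double-Gaussian biphoton distribution; widths are rms and all parameter values are illustrative:

```python
import numpy as np

# Gaussian biphoton probability with a broad "sum" width a and a narrow
# "difference" width b; analytically R = (a**2 + b**2) / (2*a*b).
a, b = 4.0, 0.5
x = np.linspace(-15, 15, 801)
x1, x2 = np.meshgrid(x, x, indexing="ij")
p2 = np.exp(-(x1 + x2) ** 2 / (2 * a**2) - (x1 - x2) ** 2 / (2 * b**2))

marginal = p2.sum(axis=1)        # intensity distribution of one photon
conditional = p2[len(x) // 2]    # correlation profile at x1 = 0

def rms_width(p):
    p = p / p.sum()
    mu = (x * p).sum()
    return np.sqrt(((x - mu) ** 2 * p).sum())

R = rms_width(marginal) / rms_width(conditional)   # Fedorov ratio
print(round(R, 2))
```

For these parameters R approaches the analytical value (a² + b²)/(2ab) ≈ 4.06; R ≫ 1 signals strong entanglement, and R = 1 a factorable (unentangled) state.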

  5. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  6. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  7. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  8. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
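    The core manipulation these patents describe, steering quantizer indices that are already uncertain by one unit, can be illustrated with a toy parity code. The function names and the +1 embedding rule below are illustrative assumptions, not the patented procedure:

```python
import numpy as np

def embed_bits(indices, bits):
    """Embed one auxiliary bit per index by steering its parity.

    A one-unit change stays inside the quantizer's inherent
    uncertainty, so the host signal is barely perturbed.
    """
    out = np.array(indices, dtype=int, copy=True)
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            out[i] += 1  # flip the parity to match the payload bit
    return out

def extract_bits(indices, n_bits):
    """Recover the embedded bits from index parities."""
    return [int(v) % 2 for v in indices[:n_bits]]
```

    The payload rides in the least significant bits of the indices, so the reconstructed host signal differs by at most one quantization step per sample.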

  9. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction from prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single shot holographic tomography suffers a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in the coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in the incoherent image basis with the support of multiple speckle realizations. High pixel count holography achieves high resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field

  10. Compressible halftoning

    NASA Astrophysics Data System (ADS)

    Anderson, Peter G.; Liu, Changmeng

    2003-01-01

We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with a compression factor of three or better. Our method involves using novel halftone mask structures which consist of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the masks as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with a highly skewed distribution, allowing Huffman compression coding techniques to be applied. This gives compression ratios in the range 3:1 to 10:1.
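    A minimal sketch of threshold-mask halftoning, assuming a standard repeating 4x4 Bayer dispersed-dot matrix rather than the paper's non-repeated masks:

```python
import numpy as np

# Classic 4x4 Bayer dispersed-dot matrix, thresholds 0..15
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def halftone(gray):
    """Threshold-mask halftoning: a pixel turns black (1) where its
    normalized gray level falls below the tiled mask threshold."""
    h, w = gray.shape
    mask = np.tile((BAYER4 + 0.5) / 16.0, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray < mask).astype(np.uint8)
```

    Dark regions (gray near 0) fall below most thresholds and fill with black dots; bright regions clear most thresholds and stay white.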

  11. Linear analysis on the onset of thermal convection of highly compressible fluids with variable physical properties: Implications for the mantle convection of super-Earths

    NASA Astrophysics Data System (ADS)

    Kameyama, Masanori

    2016-02-01

A series of our linear analyses on the onset of thermal convection was applied to highly compressible fluids in a planar layer whose thermal conductivity and viscosity vary in space, in order to study the influences of spatial variations in physical properties expected in the mantles of massive terrestrial planets. The thermal conductivity and viscosity are assumed to depend exponentially on depth and temperature, respectively, while the variations in thermodynamic properties (thermal expansivity and reference density) with depth are taken to be relevant for super-Earths with 10 times the Earth's mass. Our analysis demonstrated that the nature of incipient thermal convection is strongly affected by the interplay between adiabatic compression and spatial variations in the physical properties of fluids. Owing to the effects of adiabatic compression, a `stratosphere' can occur in the deep mantles of super-Earths, where vertical motion is insignificant. The emergence of a `stratosphere' is greatly enhanced by the increase in thermal conductivity with depth, while it is suppressed by the decrease in thermal expansivity with depth. In addition, through the interplay between the static stability and the strong temperature dependence of viscosity, convection cells tend to be confined in narrow regions around the `tropopause' at the interface between the `stratosphere' of stable stratification and the `troposphere' of unstable stratification. We also found that, depending on the variations in physical properties, two kinds of stagnant regions can develop separately in the fluid layer. One is the well-known `stagnant lid' of cold and highly viscous fluids, and the other is a `basal stagnant region' of hot and less viscous fluids. The occurrence of `basal stagnant regions' may imply that convecting motions can be insignificant in the lowermost part of the mantles of massive super-Earths, even in the absence of a strong increase in viscosity with pressure (or depth).

  12. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  13. A Comparison of Variable Time-Compressed Speech and Normal Rate Speech Based on Time Spent and Performance in a Course Taught by Self-Instructional Methods

    ERIC Educational Resources Information Center

    Short, Sarah Harvey

    1977-01-01

    College students using variable rate controlled speech compressors as compared with normal speed tape recorders had an average time saving of 32 percent and an average grade increase of 4.2 points on post-test scores. (Author)

  14. [Compression material].

    PubMed

    Perceau, Géraldine; Faure, Christine

    2012-01-01

The compression of a venous ulcer is carried out with the use of bandages and, for less exudative ulcers, with socks, stockings or tights. Bandaging systems are complex: different degrees of stretch, and therefore different types of product, exist. PMID:22489428

  15. A Reweighted ℓ1-Minimization Based Compressed Sensing for the Spectral Estimation of Heart Rate Variability Using the Unevenly Sampled Data

    PubMed Central

    Chen, Szi-Wen; Chao, Shih-Chieh

    2014-01-01

In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessments. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), which measures the percentage difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data accordingly in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from unevenly sampled RR
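    Sparse recovery from few measurements can be sketched without a full reweighted ℓ1 solver; the greedy Orthogonal Matching Pursuit below is a much simpler stand-in for the CS reconstruction step, with all names and problem sizes chosen purely for illustration:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit the selected columns
    by least squares."""
    support, residual = [], b.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

    For a 2-sparse signal measured through a random Gaussian matrix with twice-oversampled support, this typically recovers the exact spectrum from half the nominal number of samples.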

  16. A Comparison of Variable Time Compressed Speech and Normal Rate Speech Based on Time Spent and Performance in a Course Taught by Self-Instructional Methods.

    ERIC Educational Resources Information Center

    Short, Sarah Harvey

    The purpose of this study was to determine with precise measurements of time and carefully constructed posttests whether sighted students in a college course would save time and achieve higher scores when listening to cognitive information using variable time compressors as compared with students listening using normal speed tape recorders. The…

  17. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the ’traditional’ compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
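    The group-testing idea can be sketched with the classic binary pooling design: with exactly one carrier among N samples, log2(N) pooled tests identify it directly. This is a textbook scheme for illustration, not the authors' sequencing protocol:

```python
import math

def pool_tests(samples):
    """Assign sample i to pool j iff bit j of i is set; a pool tests
    positive iff it contains at least one carrier."""
    n_pools = max(1, math.ceil(math.log2(len(samples))))
    return [any(samples[i] for i in range(len(samples)) if (i >> j) & 1)
            for j in range(n_pools)]

def decode_single_carrier(results):
    """With exactly one carrier, its index is the binary number
    spelled out by the pool outcomes."""
    return sum(1 << j for j, positive in enumerate(results) if positive)
```

    256 individual tests collapse to 8 pooled tests; the sparsity assumption (rare carriers) is what makes the compression possible, mirroring the CS setting.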

  18. Compression and venous ulcers.

    PubMed

    Stücker, M; Link, K; Reich-Schupke, S; Altmeyer, P; Doerler, M

    2013-03-01

    Compression therapy is considered to be the most important conservative treatment of venous leg ulcers. Until a few years ago, compression bandages were regarded as first-line therapy of venous leg ulcers. However, to date medical compression stockings are the first choice of treatment. With respect to compression therapy of venous leg ulcers the following statements are widely accepted: 1. Compression improves the healing of ulcers when compared with no compression; 2. Multicomponent compression systems are more effective than single-component compression systems; 3. High compression is more effective than lower compression; 4. Medical compression stockings are more effective than compression with short stretch bandages. Healed venous leg ulcers show a high relapse rate without ongoing treatment. The use of medical stockings significantly reduces the amount of recurrent ulcers. Furthermore, the relapse rate of venous leg ulcers can be significantly reduced by a combination of compression therapy and surgery of varicose veins compared with compression therapy alone. PMID:23482538

  19. Chapter 22: Compressed Air Evaluation Protocol

    SciTech Connect

    Benton, N.

    2014-11-01

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.
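    The savings arithmetic behind a leak survey can be sketched as follows; the compressor specific power, operating hours, and electricity price are illustrative assumptions, not values taken from the protocol:

```python
def leak_energy_cost(leak_cfm, kw_per_100cfm=18.0, hours=8760,
                     dollars_per_kwh=0.10):
    """Annual power, energy, and cost attributable to a compressed-air
    leak of a given flow, under assumed plant parameters."""
    kw = leak_cfm * kw_per_100cfm / 100.0   # power to feed the leak
    kwh = kw * hours                         # annual energy
    return kw, kwh, kwh * dollars_per_kwh    # annual cost
```

    A hypothetical 10 cfm leak at these assumed rates draws 1.8 kW continuously, about 15,768 kWh per year.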

  20. Compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus

    2014-07-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex optimization. The DOA estimation problem is formulated in the CS framework and it is shown that CS has superior performance compared to traditional DOA estimation methods especially under challenging scenarios such as coherent arrivals and single-snapshot data. An offset and resolution analysis is performed to indicate the limitations of CS. It is shown that the limitations are related to the beampattern, thus can be predicted. The high-resolution capabilities and the robustness of CS are demonstrated on experimental array data from ocean acoustic measurements for source tracking with single-snapshot data. PMID:24993212
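    For contrast with the CS formulation, a conventional delay-and-sum (Bartlett) beamformer of the kind the paper compares against can be sketched in a few lines; the array geometry and names are assumptions for illustration:

```python
import numpy as np

def bartlett_doa(data, d_over_lambda, angles_deg):
    """Conventional (Bartlett) beamformer power over candidate DOAs
    for a uniform linear array; data is (n_sensors, n_snapshots)."""
    n = data.shape[0]
    power = []
    for theta in np.deg2rad(angles_deg):
        # steering vector for a plane wave arriving from angle theta
        a = np.exp(2j * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))
        w = a / np.sqrt(n)
        power.append(np.mean(np.abs(w.conj() @ data) ** 2))
    return np.array(power)
```

    With a single noiseless plane wave on a half-wavelength array, the power spectrum peaks exactly at the true arrival angle; CS methods sharpen this picture for coherent arrivals and single snapshots.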

  1. Efficient Compression of High Resolution Climate Data

    NASA Astrophysics Data System (ADS)

    Yin, J.; Schuchardt, K. L.

    2011-12-01

High resolution climate data can be massive. Those data can consume a huge amount of disk space for storage, incur significant overhead for outputting data during simulation, introduce high latency for visualization and analysis, and may even make interactive visualization and analysis impossible given the limit of the data that a conventional cluster can handle. These problems can be alleviated with effective and efficient data compression techniques. Even though the HDF5 format supports compression, previous work has mainly focused on employing traditional general-purpose compression schemes such as dictionary coders and block-sorting based schemes. Those schemes mainly encode repeated byte sequences efficiently and are not well suited to climate data, which consist mainly of distinct floating-point numbers. We plan to select and customize our compression schemes according to the characteristics of high-resolution climate data. One observation on high resolution climate data is that as the resolution becomes higher, values of climate variables such as temperature and pressure become closer in nearby cells. This provides excellent opportunities for prediction-based compression schemes. We have performed a preliminary estimation of compression ratios for a simple prediction-based scheme in which we compute the difference between the current and previous floating-point numbers and then encode the exponent and significand parts with an entropy-based compression scheme. Our results show that we can achieve compression ratios between 2 and 3 in lossless compression, significantly higher than traditional compression algorithms achieve. We have also developed lossy compression with our techniques. We can achieve orders-of-magnitude data reduction while ensuring error bounds.
Moreover, our compression scheme is much more efficient and introduces much less overhead
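    The prediction idea above can be sketched by quantizing a smooth field, delta-encoding the indices against the previous value, and handing both versions to a general-purpose entropy coder (zlib here, standing in for the entropy coding stage); the quantization scale is an illustrative assumption:

```python
import zlib

import numpy as np

def compare_entropy_coding(values, scale=1000):
    """Quantize a field, then compare entropy coding (zlib) of the
    raw indices against delta-encoded indices, where each index is
    predicted by its predecessor."""
    q = np.round(np.asarray(values) * scale).astype(np.int32)
    raw = zlib.compress(q.tobytes(), 9)
    delta = np.diff(q, prepend=q[:1]).astype(np.int32)
    predicted = zlib.compress(delta.tobytes(), 9)
    return len(raw), len(predicted)
```

    On a smooth temperature-like field the deltas are tiny integers with very low entropy, so the predicted stream compresses markedly better than the raw indices.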

  2. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  3. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.

  4. Learning in compressed space.

    PubMed

    Fabisch, Alexander; Kassahun, Yohannes; Wöhrle, Hendrik; Kirchner, Frank

    2013-06-01

    We examine two methods which are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we will use orthogonal functions and especially random projections for compression and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly. PMID:23501172
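    The stated equivalence between compressing a layer's weights and compressing its input is, for a linear layer, just associativity of the matrix product; a minimal check with an assumed random projection:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.standard_normal((5, 16))   # learned low-dimensional parameters
phi = rng.standard_normal((16, 100))   # fixed random projection
x = rng.standard_normal(100)           # layer input

# (alpha @ phi) is the decompressed weight matrix acting on x;
# alpha acting on (phi @ x) is the same layer fed a compressed input.
assert np.allclose((alpha @ phi) @ x, alpha @ (phi @ x))
```

    Training then only updates the 5x16 `alpha`, not a full 5x100 weight matrix, which is where the reported training-time reduction comes from.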

  5. Prechamber Compression-Ignition Engine Performance

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  6. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  7. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  8. Compressible turbulent mixing: Effects of compressibility

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2016-04-01

We studied by numerical simulations the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The driven forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and suffered negligible influence from the compressibility. The growth of the Mach number showed (1) a first reduction and second enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) a first stronger and second weaker intermittency of scalar relative to that of velocity; and (4) an increase in the intermittency parameter which measures the intermittency of scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidal-forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressive-forced flow, the field exhibited regions dominated by the large-scale motions of rarefaction and compression.

  9. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
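    The changed-area detection at the heart of conditional replenishment can be sketched as a per-block mean-absolute-difference test; the block size and threshold are illustrative assumptions, not values from the study:

```python
import numpy as np

def changed_blocks(prev, curr, block=8, threshold=4.0):
    """Return the top-left coordinates of blocks whose mean absolute
    change between frames exceeds the threshold; only these blocks
    need retransmission."""
    h, w = curr.shape
    out = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            d = np.abs(curr[r:r + block, c:c + block].astype(float)
                       - prev[r:r + block, c:c + block].astype(float))
            if d.mean() > threshold:
                out.append((r, c))
    return out
```

    Raising the threshold shrinks the changed fraction, which is exactly the variable-change-threshold lever the abstract describes for dealing with varying motion levels.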

  10. Lossy Text Compression Techniques

    NASA Astrophysics Data System (ADS)

    Palaniappan, Venka; Latifi, Shahram

    Most text documents contain a large amount of redundancy. Data compression can be used to minimize this redundancy and increase transmission efficiency or save storage space. Several text compression algorithms have been introduced for lossless text compression used in critical application areas. For non-critical applications, we could use lossy text compression to improve compression efficiency. In this paper, we propose three different source models for character-based lossy text compression: Dropped Vowels (DOV), Letter Mapping (LMP), and Replacement of Characters (ROC). The working principles and transformation methods associated with these methods are presented. Compression ratios obtained are included and compared. Comparisons of performance with those of the Huffman Coding and Arithmetic Coding algorithm are also made. Finally, some ideas for further improving the performance already obtained are proposed.
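    A minimal sketch of the Dropped Vowels (DOV) source model; the exact rule used here (keeping word-initial vowels so words stay roughly readable) is an assumption for illustration, not necessarily the authors' transformation:

```python
def drop_vowels(text):
    """DOV source model: delete vowels except word-initial ones, so
    most words remain reconstructible by a human reader."""
    words = []
    for w in text.split(" "):
        words.append(w[:1] + "".join(ch for ch in w[1:]
                                     if ch.lower() not in "aeiou"))
    return " ".join(words)
```

    The transform is lossy but shortens the source before any lossless coder runs, which is where the extra compression efficiency comes from.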

  11. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
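    The NMSE figure of merit used throughout can be written down directly; normalization by the energy of the original image is the convention assumed here:

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error of the difference image,
    relative to the energy of the original image."""
    original = np.asarray(original, dtype=float)
    diff = original - np.asarray(reconstructed, dtype=float)
    return np.sum(diff ** 2) / np.sum(original ** 2)
```

    A perfect reconstruction gives 0; a reconstruction at half amplitude gives 0.25, so the metric grows with visible degradation at higher compression ratios.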

  12. Stability of compressible boundary layers

    NASA Technical Reports Server (NTRS)

    Nayfeh, Ali H.

    1989-01-01

The stability of compressible 2-D and 3-D boundary layers is reviewed. The stability of 2-D compressible flows differs from that of incompressible flows in two important features: there is more than one mode of instability contributing to the growth of disturbances in supersonic laminar boundary layers, and the most unstable first mode wave is 3-D. Whereas viscosity has a destabilizing effect on incompressible flows, it is stabilizing for high supersonic Mach numbers. Whereas cooling stabilizes first mode waves, it destabilizes second mode waves. However, second mode waves can be stabilized by suction and favorable pressure gradients. The influence of the nonparallelism on the spatial growth rate of disturbances is evaluated. The growth rate depends on the flow variable as well as the distance from the body. Floquet theory is used to investigate the subharmonic secondary instability.

  13. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.
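    A minimal sketch of row and column elimination coding, assuming the simplest variant: drop all-blank rows and columns from a binary textual image and record their positions so the layout is exactly restorable:

```python
import numpy as np

def eliminate_blank(img, blank=0):
    """Drop all-blank rows and columns from a binary textual image,
    recording the kept positions so the layout can be restored."""
    rows = np.where((img != blank).any(axis=1))[0]
    cols = np.where((img != blank).any(axis=0))[0]
    return img[np.ix_(rows, cols)], rows, cols

def restore(core, rows, cols, shape, blank=0):
    """Re-insert the blank rows and columns to recover the image."""
    out = np.full(shape, blank, dtype=core.dtype)
    out[np.ix_(rows, cols)] = core
    return out
```

    For textual images, which are mostly white space between glyph rows and columns, the retained core can be far smaller than the original grid while the round trip stays lossless.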

  14. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  15. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
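    The tiled-compression convention can be sketched with row tiles and zlib standing in for the Rice, GZIP, H-compress, or PLIO codecs; the tile shape and function names are illustrative assumptions, not fpack's actual implementation:

```python
import zlib

import numpy as np

def compress_tiled(img, tile_rows=1):
    """fpack-style tiling sketch: split the image into row tiles and
    compress each tile independently (zlib standing in for Rice)."""
    return [zlib.compress(img[r:r + tile_rows].tobytes())
            for r in range(0, img.shape[0], tile_rows)]

def decompress_tiled(tiles, shape, dtype):
    """Restore the image by decompressing and concatenating tiles."""
    data = b"".join(zlib.decompress(t) for t in tiles)
    return np.frombuffer(data, dtype=dtype).reshape(shape)
```

    Independent tiles are what allow software to read a small region of a large compressed image without decompressing the whole array, which is the point of the convention.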

  16. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  17. Grid-free compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter

    2015-04-01

    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. On a discrete angular grid, the CS reconstruction degrades due to basis mismatch when the DOAs do not coincide with the angular directions on the grid. To overcome this limitation, a continuous formulation of the DOA problem is employed and an optimization procedure is introduced, which promotes sparsity on a continuous optimization variable. The DOA estimation problem with infinitely many unknowns, i.e., source locations and amplitudes, is solved over a few optimization variables with semidefinite programming. The grid-free CS reconstruction provides high-resolution imaging even with non-uniform arrays, single-snapshot data and under noisy conditions as demonstrated on experimental towed array data. PMID:25920844

  18. EEG data compression techniques.

    PubMed

    Antoniol, G; Tonella, P

    1997-02-01

    In this paper, electroencephalograph (EEG) and Holter EEG data compression techniques which allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits one to achieve significant reduction in the space required to store signals and in transmission time. The Huffman coding technique in conjunction with derivative computation reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. By exploiting this result a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, discrete cosine transform (DCT), and repetition count compression methods. Finally, it is shown that the adoption of a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEG's in a compressed format. PMID:9214790
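The derivative-plus-Huffman idea the paper reports is easy to sketch: first differences of a slowly varying signal cluster near zero, so a Huffman code over the differences is short. A minimal lossless roundtrip in Python (the function names are hypothetical, not from the paper):

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a symbol sequence."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    tie = count()                            # tiebreaker avoids comparing trees
    heap = [(f, next(tie), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: record codeword
            code[node] = prefix or "0"
    walk(heap[0][2], "")
    return code

def compress_eeg(samples):
    """Derivative (first-difference) transform, then Huffman-encode the
    differences, which cluster near zero for slowly varying signals."""
    diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    code = huffman_code(diffs)
    bits = "".join(code[d] for d in diffs)
    return bits, code

def decompress_eeg(bits, code):
    """Prefix-free decode, then undo the difference transform."""
    decode = {v: k for k, v in code.items()}
    diffs, buf = [], ""
    for b in bits:
        buf += b
        if buf in decode:
            diffs.append(decode[buf])
            buf = ""
    samples = [diffs[0]]
    for d in diffs[1:]:
        samples.append(samples[-1] + d)
    return samples
```

The paper's reported ratios come from applying this kind of scheme to real Holter and EEG waveforms; the sketch only shows the mechanics of the transform and coding stages.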

  19. Boson core compressibility

    NASA Astrophysics Data System (ADS)

    Khorramzadeh, Y.; Lin, Fei; Scarola, V. W.

    2012-04-01

    Strongly interacting atoms trapped in optical lattices can be used to explore phase diagrams of Hubbard models. Spatial inhomogeneity due to trapping typically obscures distinguishing observables. We propose that measures using boson double occupancy avoid trapping effects to reveal two key correlation functions. We define a boson core compressibility and core superfluid stiffness in terms of double occupancy. We use quantum Monte Carlo on the Bose-Hubbard model to empirically show that these quantities intrinsically eliminate edge effects to reveal correlations near the trap center. The boson core compressibility offers a generally applicable tool that can be used to experimentally map out phase transitions between compressible and incompressible states.

  20. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon, et al., re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  1. Local compressibilities in crystals

    NASA Astrophysics Data System (ADS)

    Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor

    2000-12-01

    An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.
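The paper's central relation, that the crystal's compressibility (inverse bulk modulus) is a weighted average of atomic compressibilities, reduces to a one-line formula. The fractions and values below are made-up illustrative numbers, not data from the paper:

```python
def crystal_compressibility(fractions, kappas):
    """Inverse bulk modulus of the crystal as a weighted average of
    atomic (ionic) compressibilities: kappa = sum_i f_i * kappa_i."""
    return sum(f * k for f, k in zip(fractions, kappas))

# Hypothetical volume fractions and ionic compressibilities (1/GPa):
kappa = crystal_compressibility([0.4, 0.6], [0.01, 0.02])
bulk_modulus = 1.0 / kappa  # GPa
```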

  2. Competing hydrostatic compression mechanisms in nickel cyanide

    NASA Astrophysics Data System (ADS)

    Adamson, J.; Lucas, T. C.; Cairns, A. B.; Funnell, N. P.; Tucker, M. G.; Kleppe, A. K.; Hriljac, J. A.; Goodwin, A. L.

    2015-12-01

    We use variable-pressure neutron and X-ray diffraction measurements to determine the uniaxial and bulk compressibilities of nickel(II) cyanide, Ni(CN)2. Whereas other layered molecular framework materials are known to exhibit negative area compressibility, we find that Ni(CN)2 does not. We attribute this difference to the existence of low-energy in-plane tilt modes that provide a pressure-activated mechanism for layer contraction. The experimental bulk modulus we measure is about four times lower than that reported elsewhere on the basis of density functional theory methods [Phys. Rev. B 83 (2011) 024301].

  3. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  4. Military Data Compression Standard

    NASA Astrophysics Data System (ADS)

    Winterbauer, C. E.

    1982-07-01

    A facsimile interoperability data compression standard is being adopted by the U.S. Department of Defense and other North Atlantic Treaty Organization (NATO) countries. This algorithm has been shown to perform quite well in a noisy communication channel.

  5. Compressive optical image encryption.

    PubMed

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  6. Focus on Compression Stockings

    MedlinePlus

    ... Compression apparel is used to prevent or control edema. The post-thrombotic syndrome (PTS) is a complication ... This swelling is referred to as edema. If you have edema, compression therapy may be ...

  7. Compressible Astrophysics Simulation Code

    Energy Science and Technology Software Center (ESTSC)

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  8. Similarity by compression.

    PubMed

    Melville, James L; Riley, Jenna F; Hirst, Jonathan D

    2007-01-01

    We present a simple and effective method for similarity searching in virtual high-throughput screening, requiring only a string-based representation of the molecules (e.g., SMILES) and standard compression software, available on all modern desktop computers. This method utilizes the normalized compression distance, an approximation of the normalized information distance, based on the concept of Kolmogorov complexity. On representative data sets, we demonstrate that compression-based similarity searching can outperform standard similarity searching protocols, exemplified by the Tanimoto coefficient combined with a binary fingerprint representation and data fusion. Software to carry out compression-based similarity is available from our Web site at http://comp.chem.nottingham.ac.uk/download/zippity. PMID:17238245
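The normalized compression distance the authors use is straightforward to reproduce with any standard compressor. A minimal Python sketch using zlib (the SMILES strings are real molecules, but the choice of compressor and the comparison are purely illustrative):

```python
import zlib

def ncd(x: str, y: str) -> float:
    """Normalized compression distance with zlib as the compressor:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x.encode()))
    cy = len(zlib.compress(y.encode()))
    cxy = len(zlib.compress((x + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

# A string compressed against itself scores lower (more similar)
# than two unrelated molecules.
aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"
caffeine = "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"
```

Because the only requirement is a compressor, the same distance works on any string representation, which is why the authors can apply it directly to SMILES.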

  9. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
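The filling step, in which each non-edge pixel takes a weighted average of its neighbors by solving Laplace's equation, can be approximated with simple iterative relaxation. In the sketch below a plain Jacobi sweep stands in for the patent's multi-grid technique, and the data layout is invented for the example:

```python
def laplace_fill(grid, is_edge, iterations=500):
    """Fill non-edge pixels by relaxing Laplace's equation: each free
    pixel becomes the average of its in-bounds 4-neighbours, while edge
    pixels stay fixed as boundary conditions."""
    h, w = len(grid), len(grid[0])
    for _ in range(iterations):
        nxt = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                if not is_edge[y][x]:
                    nbrs = [grid[ny][nx]
                            for ny, nx in ((y - 1, x), (y + 1, x),
                                           (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(nbrs) / len(nbrs)
        grid = nxt
    return grid
```

Multi-grid solvers reach the same harmonic solution far faster by relaxing on coarser grids first; the Jacobi version is only meant to show what is being solved.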

  10. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace`s equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  11. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.

  12. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  13. Ultraspectral sounder data compression using the Tunstall coding

    NASA Astrophysics Data System (ADS)

    Wei, Shih-Chieh; Huang, Bormin; Gu, Lingjia

    2007-09-01

    In an error-prone environment the compression of ultraspectral sounder data is vulnerable to error propagation. The Tunstall coding is a variable-to-fixed length code which compresses data by mapping a variable number of source symbols to a fixed number of codewords. It avoids the resynchronization difficulty encountered in fixed-to-variable length codes such as Huffman coding and arithmetic coding. This paper explores the use of the Tunstall coding in reducing the error propagation for ultraspectral sounder data compression. The results show that our Tunstall approach has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC and CCSDS IDC 5/3. It also has less error propagation compared with JPEG-2000.
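For reference, a Tunstall code builds its variable-to-fixed dictionary by repeatedly expanding the most probable parse string until the dictionary fills the fixed codeword space; every dictionary word then maps to an equal-length codeword, which is what prevents bit errors from desynchronizing the decoder. A minimal Python sketch (not the authors' implementation; names are illustrative):

```python
import heapq
from itertools import count

def tunstall_dictionary(probs, codeword_bits):
    """Build a Tunstall dictionary: repeatedly split the most probable
    parse string into its one-symbol extensions until no further split
    fits in 2**codeword_bits words. `probs` maps symbol -> probability.
    The resulting words are prefix-free, so greedy parsing is unique."""
    max_words = 2 ** codeword_bits
    tie = count()                      # tiebreaker for equal probabilities
    heap = [(-p, next(tie), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) + len(probs) - 1 <= max_words:
        negp, _, word = heapq.heappop(heap)   # most probable leaf
        for sym, p in probs.items():
            heapq.heappush(heap, (negp * p, next(tie), word + sym))
    words = sorted(w for _, _, w in heap)
    return {w: i for i, w in enumerate(words)}

def tunstall_encode(dictionary, source):
    """Greedily parse the source into dictionary words, emitting one
    fixed-length codeword index per word. Any unparsed tail is returned."""
    out, buf = [], ""
    for sym in source:
        buf += sym
        if buf in dictionary:
            out.append(dictionary[buf])
            buf = ""
    return out, buf
```

Each emitted index occupies exactly `codeword_bits` bits, so a corrupted codeword garbles only its own word rather than shifting the entire downstream bit stream, which is the resynchronization property the abstract exploits.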

  14. Data compression of large document data bases.

    PubMed

    Heaps, H S

    1975-02-01

    Consideration is given to a document data base that is structured for information retrieval purposes by means of an inverted index and term dictionary. Vocabulary characteristics of various fields are described, and it is shown how the data base may be stored in a compressed form by use of restricted variable length codes that produce a compression not greatly in excess of the optimum that could be achieved through use of Huffman codes. The coding is word oriented. An alternative scheme of word fragment coding is described. It has the advantage that it allows the use of a small dictionary, but is less efficient with respect to compression of the data base. PMID:1127034

  15. Wavelet compression of medical imagery.

    PubMed

    Reiter, E

    1996-01-01

    Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355
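The "long runs of zero" mechanism the abstract describes can be seen with the simplest wavelet, the Haar transform: smooth image regions produce near-zero detail coefficients, and thresholding turns them into exact zeros that later entropy-coding stages compress very well. An illustrative one-level Python sketch (not any standard's actual filter bank):

```python
def haar_forward(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (coarse approximation) and pairwise differences (details)."""
    avgs = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    dets = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avgs, dets

def haar_inverse(avgs, dets):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out

def compress(signal, threshold):
    """Lossy step: zero out small detail coefficients, creating the
    long runs of zero that compress efficiently downstream."""
    avgs, dets = haar_forward(signal)
    dets = [d if abs(d) >= threshold else 0.0 for d in dets]
    return avgs, dets

sig = [10, 10, 10, 10, 50, 52, 10, 10]
avgs, dets = compress(sig, threshold=5.0)
```

Practical codecs cascade many such levels with smoother wavelets, but the sparsity argument is the same.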

  16. Transverse Compression of Tendons.

    PubMed

    Samuel Salisbury, S T; Paul Buckley, C; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon. PMID:26833218

  17. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  18. Wave energy devices with compressible volumes

    PubMed Central

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-01-01

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed axisymmetric configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s. PMID:25484609

  19. Self-Similar Compressible Free Vortices

    NASA Technical Reports Server (NTRS)

    vonEllenrieder, Karl

    1998-01-01

    Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependencies of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.

  20. Compressible Flow Toolbox

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2006-01-01

    The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: The isentropic-flow equations, The Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), The Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), The normal-shock equations, The oblique-shock equations, and The expansion equations.
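As an example of the classical relations such a toolbox solves, the isentropic-flow ratios for a perfect gas follow directly from the Mach number. A short Python sketch of three of them (the toolbox itself is MATLAB, so this is an illustration of the equations, not its API):

```python
def isentropic_ratios(mach, gamma=1.4):
    """Stagnation-to-static ratios for 1-D isentropic flow of a perfect
    gas with ratio of specific heats `gamma`:
        T0/T   = 1 + (gamma - 1)/2 * M**2
        p0/p   = (T0/T) ** (gamma / (gamma - 1))
        rho0/rho = (T0/T) ** (1 / (gamma - 1))
    """
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))
    return t_ratio, p_ratio, rho_ratio

t, p, rho = isentropic_ratios(1.0)  # sonic conditions for air
```

The Fanno, Rayleigh, and shock relations solved by the toolbox are built from the same perfect-gas quantities with friction, heat addition, or jump conditions added.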

  1. Isentropic Compression of Argon

    SciTech Connect

    H. Oona; J.C. Solem; L.R. Veeser, C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley

    1997-08-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  2. The compressible mixing layer

    NASA Technical Reports Server (NTRS)

    Vandromme, Dany; Haminh, Hieu

    1991-01-01

    The capability of turbulence models to correctly handle natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing these motions into an 'organized' part, for which equations of motion are solved, and a remaining 'incoherent' part, represented by a turbulence model. Two-equation turbulence models and second-order turbulence models can yield reasonable results. For specific compressible unsteady turbulent flows, graphic presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in rapid flow parts, but shocklets do not yet occur.

  3. Orbiting dynamic compression laboratory

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.

    1984-01-01

    In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kbar, respectively. When the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, as in the case of molybdenum, tensile tests of the recovered samples demonstrate beneficial ultimate tensile strengths.

  4. Isentropic compression of argon

    SciTech Connect

    Veeser, L.R.; Ekdahl, C.A.; Oona, H.

    1997-06-01

    The compression was done in an MC-1 flux compression (explosive) generator, in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe is removed), both the Teflon and the argon are becoming conductive. The conductivity could not be determined (the insulating properties of the Teflon are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω·cm)⁻¹, because when the Teflon breaks down, the dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe, or to measure the conductivity without a probe, are being sought.

  5. COMPREHENSION OF COMPRESSED SPEECH BY ELEMENTARY SCHOOL CHILDREN.

    ERIC Educational Resources Information Center

    WOOD, C. DAVID

    The effects of four variables on the extent of comprehension of compressed speech by elementary school children were investigated. These variables were rate of presentation, grade level in school, intelligence, and amount of practice. Ninety subjects participated in the experiment. The task for each subject was to listen individually to 50 tape…

  6. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation. PMID:26356981
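The core fixed-rate idea, every block compressed to the same number of bits so any block can be located and decoded independently, can be illustrated with a simple per-block scale-and-quantize scheme. This sketch is far cruder than the paper's lifted orthogonal block transform and embedded coding; all names and parameters are invented for the example:

```python
def compress_block(block, bits_per_value):
    """Fixed-rate sketch: store the block's minimum and range, then
    quantize every value to `bits_per_value` bits, so each block
    occupies a constant, predictable number of bits."""
    lo, hi = min(block), max(block)
    scale = (hi - lo) or 1.0          # avoid divide-by-zero for flat blocks
    levels = (1 << bits_per_value) - 1
    q = [round((v - lo) / scale * levels) for v in block]
    return lo, scale, q

def decompress_block(lo, scale, q, bits_per_value):
    """Invert the quantization; the result is near-lossless, with error
    bounded by half a quantization step times the block's range."""
    levels = (1 << bits_per_value) - 1
    return [lo + scale * qi / levels for qi in q]

block = [3.25, 3.5, 3.875, 4.0]
lo, scale, q = compress_block(block, bits_per_value=8)
restored = decompress_block(lo, scale, q, 8)
```

Because every block's compressed size is known in advance, the offset of block `i` is simply `i` times the block size, which is what makes random read and write access cheap.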

  7. Deconstructed transverse mass variables

    NASA Astrophysics Data System (ADS)

    Ismail, Ahmed; Schwienhorst, Reinhard; Virzi, Joseph S.; Walker, Devin G. E.

    2015-04-01

    Traditional searches for R-parity conserving natural supersymmetry (SUSY) require large transverse mass and missing energy cuts to separate the signal from large backgrounds. SUSY models with compressed spectra inherently produce signal events with small amounts of missing energy that are hard to explore. We use this difficulty to motivate the construction of "deconstructed" transverse mass variables which are designed to preserve information on both the norm and direction of the missing momentum. We demonstrate the effectiveness of these variables in searches for the pair production of supersymmetric top-quark partners which subsequently decay into a final state with an isolated lepton, jets and missing energy. We show that the use of deconstructed transverse mass variables extends the accessible compressed spectra parameter space beyond the region probed by traditional methods. The parameter space can further be expanded to neutralino masses that are larger than the difference between the stop and top masses. In addition, we also discuss how these variables allow for novel searches of single stop production, in order to directly probe unconstrained stealth stops in the small stop- and neutralino-mass regime. We also demonstrate the utility of these variables for generic gluino and stop searches in all-hadronic final states. Overall, we demonstrate that deconstructed transverse variables are essential to any search wanting to maximize signal separation from the background when the signal has undetected particles in the final state.

  8. The Compressed Video Experience.

    ERIC Educational Resources Information Center

    Weber, John

    In the fall semester 1995, Southern Arkansas University-Magnolia (SAU-M) began a two-semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…

  9. Compress Your Files

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability to pack many related files into one archive (for example, all files…

  10. Nonlinear Frequency Compression

    PubMed Central

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
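One common textbook formulation of nonlinear frequency compression (a sketch for illustration, not necessarily the exact algorithm the studies above evaluated) leaves frequencies below the cutoff untouched and compresses those above it on a logarithmic axis by the compression ratio:

```python
def nfc_map(f, cutoff=2000.0, ratio=2.0):
    """Map an input frequency (Hz) to its lowered output frequency.

    Below the cutoff the mapping is the identity; above it, log-spaced
    distances from the cutoff are divided by the compression ratio."""
    if f <= cutoff:
        return f
    return cutoff * (f / cutoff) ** (1.0 / ratio)

# With a 2 kHz cutoff and a 2:1 ratio, 8 kHz input energy lands at 4 kHz.
lowered = nfc_map(8000.0, cutoff=2000.0, ratio=2.0)
```

This makes the abstract's finding plausible: the cutoff sets where the identity region ends, so it changes which sounds are processed at all, whereas the ratio only changes how strongly the already-processed region is squeezed.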

  11. Compression: Rent or own

    SciTech Connect

    Cahill, C.

    1997-07-01

    Historically, the decision to purchase or rent compression has been set as a corporate philosophy. As companies decentralize, there seems to be a shift away from corporate philosophy toward individual profit centers. This has led to the rent-versus-purchase decision being evaluated on a regional or project-by-project basis.

  12. Improved compression molding process

    NASA Technical Reports Server (NTRS)

    Heier, W. C.

    1967-01-01

    Modified compression molding process produces plastic molding compounds that are strong, homogeneous, free of residual stresses, and have improved ablative characteristics. The conventional method is modified by applying a vacuum to the mold during the molding cycle, using a volatile sink, and exercising precise control of the mold closure limits.

  13. Mosaic image compression

    NASA Astrophysics Data System (ADS)

    Chaudhari, Kapil A.; Reeves, Stanley J.

    2005-02-01

    Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
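The base-plus-residual structure described above can be sketched with stand-ins: coarse quantization plays the role of the JPEG-coded approximation, and zlib plays the role of the residual entropy coder. Function names and the quantization step are illustrative, not from the paper.

```python
import zlib

def encode(pixels, step=16):
    """Lossy base layer (stands in for the JPEG-coded image) plus a
    losslessly compressed residual."""
    base = bytes((p // step) * step for p in pixels)
    residual = bytes(p - b for p, b in zip(pixels, base))  # values in [0, step)
    return base, zlib.compress(residual)

def decode(base, compressed_residual):
    residual = zlib.decompress(compressed_residual)
    return bytes(b + r for b, r in zip(base, residual))

pixels = bytes((i * 7) % 256 for i in range(1024))
base, comp = encode(pixels)
assert decode(base, comp) == pixels   # base + residual reconstructs exactly
```

The point, as in the paper, is that the overall scheme stays lossless even though the intermediate coded image is lossy: every bit of information the base layer discards is carried by the residual.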

  14. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution. PMID:8172973
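The DPCM-then-compress idea mentioned above is easy to demonstrate: transform the data (here, a smooth byte ramp standing in for an image row) into successive differences, then apply a dictionary coder. zlib stands in for LZW, which is not in the Python standard library.

```python
import zlib

def dpcm(data):
    """Forward DPCM: each byte becomes its difference (mod 256) from the
    previous byte."""
    prev, out = 0, bytearray()
    for b in data:
        out.append((b - prev) % 256)
        prev = b
    return bytes(out)

def idpcm(data):
    """Inverse DPCM: a running sum (mod 256) restores the original bytes."""
    prev, out = 0, bytearray()
    for d in data:
        prev = (prev + d) % 256
        out.append(prev)
    return bytes(out)

# A smooth ramp: the raw bytes keep changing, but the differences repeat.
signal = bytes((i // 4) % 256 for i in range(4096))
assert idpcm(dpcm(signal)) == signal          # the transform is lossless
smaller = len(zlib.compress(dpcm(signal))) < len(zlib.compress(signal))
```

On smooth data the differential image is far more repetitive than the original, which is exactly why the article finds that DPCM preprocessing enhances Huffman, LZW, and arithmetic coding alike.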

  15. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
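The coded aperture measurement model commonly used in this line of work (a standard sketch consistent with, but not quoted from, the abstract) writes the single captured camera frame as a mask-modulated sum of the sub-frames:

```latex
Y \;=\; \sum_{t=1}^{T} C_t \odot X_t \;+\; G
```

Here $X_t$ are the $T$ sub-frames integrated during one exposure, $C_t$ is the (typically binary, shifted) coded aperture pattern applied to sub-frame $t$, $\odot$ is the elementwise product, and $G$ is noise. Compressive sensing inversion recovers the $X_t$ from the single measurement $Y$, which is how one camera frame yields $T$ reconstructed frames.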

  16. Data Compression for Helioseismology

    NASA Astrophysics Data System (ADS)

    Löptien, Björn

    2015-10-01

    Efficient data compression will play an important role for several upcoming and planned space missions involving helioseismology, such as Solar Orbiter, to be launched in October 2018. The main characteristic of Solar Orbiter lies in its orbit. The spacecraft will have an inclined solar orbit, reaching a solar latitude of up to 33 deg. This will allow, for the first time, probing the solar poles using local helioseismology. In addition, combined observations of Solar Orbiter and another helioseismic instrument will be used to study the deep interior of the Sun using stereoscopic helioseismology. The Doppler velocity and continuum intensity images of the Sun required for helioseismology will be provided by the Polarimetric and Helioseismic Imager (PHI). Major constraints for helioseismology with Solar Orbiter are the low telemetry and the (probably) short observing time. In addition, helioseismology of the solar poles requires observations close to the solar limb, even from the inclined orbit of Solar Orbiter. This gives rise to systematic errors. In this thesis, I derived a first estimate of the impact of lossy data compression on helioseismology. I put special emphasis on the Solar Orbiter mission, but my results are applicable to other planned missions as well. First, I studied the performance of PHI for helioseismology. Based on simulations of solar surface convection and a model of the PHI instrument, I generated a six-hour time-series of synthetic Doppler velocity images with the same properties as expected for PHI. Here, I focused on the impact of the point spread function, the spacecraft jitter, and of the photon noise level. The derived power spectra of solar oscillations suggest that PHI will be suitable for helioseismology. The low telemetry of Solar Orbiter requires extensive compression of the helioseismic data obtained by PHI. I evaluated the influence of data compression using

  17. Compression and texture in socks enhance football kicking performance.

    PubMed

    Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham

    2016-08-01

    The purpose of this study was to observe effects of wearing textured insoles and clinical compression socks on organisation of lower limb interceptive actions in developing athletes of different skill levels in association football. Six advanced learners and six completely novice football players (15.4 ± 0.9 years) performed 20 instep kicks with maximum velocity, in four randomly organised insole and sock conditions: (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI); and (d) Compression Socks with Textured Insoles (CSTI). Reflective markers were placed on key anatomical locations and the ball to facilitate three-dimensional (3D) movement recording and analysis. Data on 3D kinematic variables and initial ball velocity were analysed using one-way mixed model ANOVAs. Results revealed that wearing textured and compression materials enhanced performance in key variables, such as the maximum velocity of the instep kick and increased initial ball velocity, among advanced learners compared to the use of non-textured, non-compression materials. Adding texture to football boot insoles appeared to interact with compression materials to improve kicking performance, captured by these important measures. This improvement in kicking performance is likely to have occurred through enhanced somatosensory system feedback utilised for foot placement and movement organisation of the lower limbs. Data suggested that advanced learners were better at harnessing the augmented feedback information from compression and texture to regulate emerging movement patterns compared to novices. PMID:27155962

  18. Embedded memory compression for video and graphics applications

    NASA Astrophysics Data System (ADS)

    Teng, Andy; Gokce, Dane; Aleksic, Mickey; Reznik, Yuriy A.

    2010-08-01

    We describe the design of a low-complexity lossless and near-lossless image compression system with random access, suitable for embedded memory compression applications. This system employs a block-based DPCM coder using variable-length encoding for the residual. As part of this design, we propose to use non-prefix (one-to-one) codes for coding of residuals, and show that they offer improvements in compression performance compared to conventional techniques, such as Golomb-Rice and Huffman codes.
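The non-prefix one-to-one codes are the paper's contribution; the conventional Golomb-Rice baseline it is compared against can be sketched as follows. Signed DPCM residuals are first zigzag-mapped to non-negative integers, then coded as a unary quotient plus a k-bit binary remainder. Bit strings are used here for clarity, not efficiency.

```python
def zigzag(n):
    """Interleave signed residuals into non-negatives: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (n << 1) if n >= 0 else (((-n) << 1) - 1)

def unzigzag(u):
    return (u >> 1) if u % 2 == 0 else -((u + 1) >> 1)

def rice_encode(values, k):
    """Golomb-Rice: unary quotient, a '0' terminator, then a k-bit remainder."""
    out = []
    for v in values:
        u = zigzag(v)
        q, r = u >> k, u & ((1 << k) - 1)
        out.append('1' * q + '0' + (format(r, f'0{k}b') if k else ''))
    return ''.join(out)

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == '1':
            q, i = q + 1, i + 1
        i += 1                                   # skip the '0' terminator
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        out.append(unzigzag((q << k) | r))
    return out

residuals = [0, -1, 3, -7, 12]
stream = rice_encode(residuals, 2)
```

Small residuals get short codewords, which is why DPCM (whose residuals cluster near zero) pairs so well with this family of codes.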

  19. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of measurements needed, is unknown. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme for creation of nearly uniformly distributed sets of samples, allowing the reconstruction of megapixel images. We present good quality reconstruction from compressed data ratios of 1:20.

  20. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  1. Efficiency at Sorting Cards in Compressed Air

    PubMed Central

    Poulton, E. C.; Catton, M. J.; Carpenter, A.

    1964-01-01

    At a site where compressed air was being used in the construction of a tunnel, 34 men sorted cards twice, once at normal atmospheric pressure and once at 3½, 2½, or 2 atmospheres absolute pressure. An additional six men sorted cards twice at normal atmospheric pressure. When the task was carried out for the first time, all the groups of men performing at raised pressure were found to yield a reliably greater proportion of very slow responses than the group of men performing at normal pressure. There was reliably more variability in timing at 3½ and 2½ atmospheres absolute than at normal pressure. At 3½ atmospheres absolute the average performance was also reliably slower. When the task was carried out for the second time, exposure to 3½ atmospheres absolute pressure had no reliable effect. Thus compressed air affected performance only while the task was being learnt; it had little effect after practice. No reliable differences were found related to age, to length of experience in compressed air, or to the duration of the exposure to compressed air, which was never less than 10 minutes at 3½ atmospheres absolute pressure. PMID:14180485

  2. Compressibility of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Rose, J. H.; Smith, J. R.

    1987-01-01

    A universal form is proposed for the equation of state (EOS) of solids. Good agreement is found for a variety of test data. The form of the EOS is used to suggest a method of data analysis, which is applied to materials of geophysical interest. The isothermal bulk modulus is discussed as a function of volume and of pressure. The isothermal compression curves for materials of geophysical interest are examined.
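The universal form referred to here is commonly written in the literature as the Vinet equation of state (reproduced in its standard form, with the usual symbols):

```latex
P(V) \;=\; 3 B_0 \, \frac{1 - x}{x^{2}} \,
\exp\!\left[ \tfrac{3}{2} \left( B_0' - 1 \right) \left( 1 - x \right) \right],
\qquad x = \left( V / V_0 \right)^{1/3}
```

Here $V_0$ is the zero-pressure volume, $B_0$ the isothermal bulk modulus at zero pressure, and $B_0'$ its pressure derivative; the same three parameters describe a wide range of solids, which is the sense in which the form is "universal."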

  3. Compression of Cake

    NASA Astrophysics Data System (ADS)

    Nason, Sarah; Houghton, Brittany; Renfro, Timothy

    2012-03-01

    The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment that involved stretching wire. A question was raised: what would happen if we compressed something instead? We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to it. We worked to derive the compression modulus by applying weight to a cake. In the end, we had our experimental cake and ate it, too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1
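The modulus the class measured is the compressive analogue of Young's modulus, stress over strain (a standard definition, not taken from the abstract):

```latex
E \;=\; \frac{\text{stress}}{\text{strain}} \;=\; \frac{F / A}{\Delta h / h_0}
```

with $F$ the applied weight, $A$ the cake's cross-sectional area, $h_0$ the initial height, and $\Delta h$ the measured change in height under load.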

  4. Piston reciprocating compressed air engine

    SciTech Connect

    Cestero, L.G.

    1987-03-24

    A compressed air engine is described comprising: (a) a reservoir of compressed air; (b) two power cylinders, each containing a reciprocating piston connected to a crankshaft and flywheel; (c) a transfer cylinder which communicates with each power cylinder and the reservoir, and contains a reciprocating piston connected to the crankshaft; (d) valve means controlled by rotation of the crankshaft for supplying compressed air from the reservoir to each power cylinder and for exhausting compressed air from each power cylinder to the transfer cylinder; (e) valve means controlled by rotation of the crankshaft for supplying from the transfer cylinder to the reservoir compressed air supplied to the transfer cylinder on the exhaust strokes of the pistons of the power cylinders; and (f) an externally powered fan for assisting the exhaust of compressed air from each power cylinder to the transfer cylinder and from there to the compressed air reservoir.

  5. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  6. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
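The block-matching step at the heart of motion compensation can be sketched serially (the papers' contribution is a parallel version of this search; the serial form below is just the underlying computation, with illustrative names):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(x - y)
               for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def block(frame, r, c, n):
    """Extract an n x n block whose top-left corner is (r, c)."""
    return [row[c:c + n] for row in frame[r:r + n]]

def best_match(ref, cur, r, c, n, search=4):
    """Exhaustive search: the displacement (dr, dc) into the reference frame
    that minimizes SAD against the current frame's block at (r, c)."""
    target = block(cur, r, c, n)
    best = None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr and rr + n <= len(ref) and 0 <= cc and cc + n <= len(ref[0]):
                cost = sad(block(ref, rr, cc, n), target)
                if best is None or cost < best[0]:
                    best = (cost, dr, dc)
    return best[1], best[2]

# A synthetic frame and a copy shifted by (2, 1): the search recovers the shift.
ref = [[(31 * r + 17 * c) % 256 for c in range(16)] for r in range(16)]
cur = [[(31 * (r + 2) + 17 * (c + 1)) % 256 for c in range(16)] for r in range(16)]
motion = best_match(ref, cur, 4, 4, 4)
```

Each candidate displacement is evaluated independently, which is what makes the search embarrassingly parallel and a natural fit for the simple parallel architecture the report targets.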

  7. Isothermal compressibility determination across Bose-Einstein condensation

    NASA Astrophysics Data System (ADS)

    Poveda-Cuevas, F. J.; Castilho, P. C. M.; Mercado-Gutierrez, E. D.; Fritsch, A. R.; Muniz, S. R.; Lucioni, E.; Roati, G.; Bagnato, V. S.

    2015-07-01

    We apply the global thermodynamic variables approach to experimentally determine the isothermal compressibility parameter κT of a trapped Bose gas across the phase transition. We demonstrate the behavior of κT around the critical pressure, revealing the second-order nature of the phase transition. Compressibility is the most important susceptibility for characterizing the system. The use of global variables shows advantages with respect to the usual local density approximation method and can be applied to a broad range of situations.
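For reference, the isothermal compressibility measured here is defined in the usual thermodynamic way (the experiment's global variables play the roles of pressure and volume):

```latex
\kappa_T \;=\; -\,\frac{1}{V} \left( \frac{\partial V}{\partial P} \right)_{T}
```

A divergence or kink in $\kappa_T$ at the critical point is the signature of the second-order transition the abstract describes.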

  8. Compressible magnetohydrodynamic sawtooth crash

    NASA Astrophysics Data System (ADS)

    Sugiyama, Linda E.

    2014-02-01

    In a toroidal magnetically confined plasma at low resistivity, compressible magnetohydrodynamics (MHD) predicts that an m = 1/n = 1 sawtooth has a fast, explosive crash phase with abrupt onset, rate nearly independent of resistivity, and localized temperature redistribution similar to experimental observations. Large scale numerical simulations show that the 1/1 MHD internal kink grows exponentially at a resistive rate until a critical amplitude, when the plasma motion accelerates rapidly, culminating in fast loss of the temperature and magnetic structure inside q < 1, with somewhat slower density redistribution. Nonlinearly, for small effective growth rate the perpendicular momentum rate of change remains small compared to its individual terms ∇p and J × B until the fast crash, so that the compressible growth rate is determined by higher order terms in a large aspect ratio expansion, as in the linear eigenmode. Reduced MHD fails completely to describe the toroidal mode; no Sweet-Parker-like reconnection layer develops. Important differences result from toroidal mode coupling effects. A set of large aspect ratio compressible MHD equations shows that the large aspect ratio expansion also breaks down in typical tokamaks with r_{q=1}/R_0 ≃ 1/10 and a/R_0 ≃ 1/3. In the large aspect ratio limit, failure extends down to much smaller inverse aspect ratio, at growth-rate scalings γ = O(ε^2). Higher order aspect ratio terms, including the toroidal field perturbation B̃_ϕ, become important. Nonlinearly, higher toroidal harmonics develop faster and to a greater degree than for large aspect ratio and help to accelerate the fast crash. The perpendicular momentum property applies to other transverse MHD instabilities, including m ≥ 2 magnetic islands and the plasma edge.

  9. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness. PMID:26352631
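A very sparse random measurement matrix of the kind described can be generated and applied as follows. This sketch follows the well-known Achlioptas-style construction (mostly zeros, the rest ±√s); the paper's exact matrix and parameters may differ.

```python
import random

def sparse_measurement_matrix(m, n, s=3, seed=0):
    """Very sparse random projection: each entry is +sqrt(s) or -sqrt(s)
    with probability 1/(2s) each, and 0 otherwise, so each of the m
    compressed features touches only a few of the n input dimensions."""
    rng = random.Random(seed)
    scale = s ** 0.5
    mat = []
    for _ in range(m):
        row = []
        for _ in range(n):
            u = rng.random()
            if u < 1.0 / (2 * s):
                row.append(scale)
            elif u < 1.0 / s:
                row.append(-scale)
            else:
                row.append(0.0)
        mat.append(row)
    return mat

def project(mat, x):
    """Compress a feature vector x into the low-dimensional domain."""
    return [sum(a * b for a, b in zip(row, x)) for row in mat]

measurement = sparse_measurement_matrix(8, 64)
features = project(measurement, [1.0] * 64)
```

Because the matrix is data-independent and fixed once generated, both the foreground and background samples can be compressed with the same projection, as the abstract describes.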

  10. International magnetic pulse compression

    SciTech Connect

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12--14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card -- its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  11. Compression retaining piston

    SciTech Connect

    Quaglino, A.V. Jr.

    1987-06-16

    A piston apparatus is described for maintaining compression between the piston wall and the cylinder wall, that comprises the following: a generally cylindrical piston body, including: a head portion defining the forward end of the body; and a continuous side wall portion extending rearward from the head portion; a means for lubricating and preventing compression loss between the side wall portion and the cylinder wall, including an annular recessed area in the continuous side wall portion for receiving a quantity of fluid lubricant in fluid engagement between the wall of the recessed area and the wall of the cylinder; first and second resilient, elastomeric, heat-resistant rings positioned in grooves along the wall of the continuous side wall portion, above and below the annular recessed area, each ring engaging the cylinder wall to reduce loss of lubricant within the recessed area during operation of the piston; a first pump means for providing fluid lubricant to engine components other than the pistons; and a second pump means for providing fluid lubricant to the recessed area in the continuous side wall portion of the piston. The first and second pump means obtain lubricant from a common source, and the second pump means includes a flow line that supplies oil from a predetermined level above the level of oil provided to the first pump means. This is so that, should the oil level to the second pump means fall below the predetermined level, the loss of oil to the recessed area in the continuous side wall portion of the piston would result in loss of compression and shutdown of the engine.

  12. International magnetic pulse compression

    NASA Astrophysics Data System (ADS)

    Kirbie, H. C.; Newton, M. A.; Siemens, P. D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12-14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card - its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  13. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data size, the better the transmission speed and the more time saved. In communication, the goal is always to transmit data efficiently and noise-free. This paper provides some compression techniques for lossless compression of text-type data and compares the results of multiple and single compression, which helps to identify the better compression output and to develop compression algorithms.
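
As a minimal illustration of the single- versus multi-compression comparison described above (a sketch using Python's standard zlib and bz2 modules on made-up repetitive text, not the paper's dataset or algorithms):

```python
# Compress text once and twice to see whether a second pass helps.
# Usually it does not: the first pass removes most statistical redundancy,
# leaving near-random bytes that a second coder cannot shrink further.
import bz2
import zlib

text = b"Business data processing often stores highly repetitive records. " * 200

single_zlib = zlib.compress(text, 9)
single_bz2 = bz2.compress(text, 9)
double = zlib.compress(single_bz2, 9)   # second pass over compressed data

print(len(text), len(single_zlib), len(single_bz2), len(double))
```

A second pass generally adds container overhead without finding new redundancy, which is why comparing single against multiple compression, as the paper does, matters before adopting a multi-pass pipeline.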

  14. Avalanches in Wood Compression.

    PubMed

    Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J

    2015-07-31

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free. PMID:26274428

  15. Compression test apparatus

    NASA Technical Reports Server (NTRS)

    Shanks, G. C. (Inventor)

    1981-01-01

    An apparatus for compressive testing of a test specimen may comprise vertically spaced upper and lower platen members between which a test specimen may be placed. The platen members are supported by a fixed support assembly. A load indicator is interposed between the upper platen member and the support assembly for supporting the total weight of the upper platen member and any additional weight which may be placed on it. Operating means are provided for moving the lower platen member upwardly toward the upper platen member whereby an increasing portion of the total weight is transferred from the load indicator to the test specimen.

  16. Compression and Entrapment Syndromes

    PubMed Central

    Heffernan, L.P.; Benstead, T.J.

    1987-01-01

Family physicians are often confronted by patients who present with pain, numbness and weakness. Such complaints, when confined to a single extremity, most particularly to a restricted portion of the extremity, may indicate focal dysfunction of peripheral nerve structures arising from compression and/or entrapment, to which such nerves are selectively vulnerable. The authors of this article consider the paramount clinical features that allow the clinician to arrive at a correct diagnosis, review major points in differential diagnosis, and suggest appropriate management strategies. PMID:21263858

  17. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

A system for transmitting a video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.

  18. Ultrasound beamforming using compressed data.

    PubMed

    Li, Yen-Feng; Li, Pai-Chi

    2012-05-01

The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event were separated into 8 × 8 blocks before JPEG compression and into tiles before JPEG2000 compression. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression that kept the average error below 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the size of the uncompressed phase data, was lower than 12, the average error in this case was below 1 dB when the compression ratio was lower than 8. PMID:22434817
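
The two figures of merit used above, compression ratio and average error in dB, can be sketched on synthetic data. This toy uses a crude 4-bit quantizer as a stand-in for the JPEG/JPEG2000 codecs; the signal and all parameters are made up:

```python
import math

# Synthetic stand-in for one channel of RF data: a 3.5 MHz tone
# sampled at a hypothetical 40 MHz rate.
original = [math.sin(2 * math.pi * 3.5e6 * n / 40e6) for n in range(256)]

# Pretend lossy codec: requantize [-1, 1] to 4 bits.
levels = 2 ** 4
reconstructed = [round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
                 for x in original]

# Error energy relative to signal energy, expressed in dB
# (the paper's acceptance criterion is phrased in dB as well).
signal_e = sum(x * x for x in original)
error_e = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
error_db = 10 * math.log10(error_e / signal_e)

# 16-bit samples reduced to 4-bit samples -> ratio of 4 (entropy coding ignored).
compression_ratio = 16 / 4
print(round(error_db, 1), compression_ratio)
```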

  19. Dynamic control of a homogeneous charge compression ignition engine

    DOEpatents

    Duffy, Kevin P.; Mehresh, Parag; Schuh, David; Kieser, Andrew J.; Hergart, Carl-Anders; Hardy, William L.; Rodman, Anthony; Liechty, Michael P.

    2008-06-03

A homogeneous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.

  20. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
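
The level-to-frequency relation above follows directly from each DWT level halving the spatial frequency. A small helper (the function name is my own, not from the paper):

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of a DWT band: f = r * 2 ** (-level),
    where r is the display visual resolution in pixels/degree."""
    return r * 2.0 ** (-level)

# Example: a 32 pixels/degree display; each level halves the frequency.
print([wavelet_spatial_frequency(32, L) for L in (1, 2, 3, 4)])
# -> [16.0, 8.0, 4.0, 2.0]
```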

  1. Compressive Sensing DNA Microarrays

    PubMed Central

    2009-01-01

    Compressive sensing microarrays (CSMs) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed. PMID:19158952

  2. Compressive Bilateral Filtering.

    PubMed

    Sugimoto, Kenjiro; Kamata, Sei-Ichiro

    2015-11-01

This paper presents an efficient constant-time bilateral filter that produces a near-optimal performance tradeoff between approximate accuracy and computational complexity without any complicated parameter adjustment, called a compressive bilateral filter (CBLF). Constant-time means that the computational complexity is independent of the filter window size. Although many constant-time bilateral filters have been proposed step-by-step in pursuit of a more efficient performance tradeoff, they have focused less on the optimal tradeoff for their own frameworks. It is important to discuss this question, because it can reveal whether or not a constant-time algorithm still has plenty of room for improvement in its performance tradeoff. This paper tackles the question from the viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet touched the optimal tradeoff. The CBLF achieves a near-optimal performance tradeoff through two key ideas: 1) an approximate Gaussian range kernel obtained through Fourier analysis and 2) a period length optimization. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximate accuracy, computational complexity, and usability. PMID:26068315

  3. Cancer suppression by compression.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2015-01-01

Recent experiments indicate that uniformly compressing a cancer mass at its surface tends to transform many of its cells from proliferative to functional forms. Cancer cells suffer from the Warburg effect, resulting from depleted levels of cell membrane potentials. We show that the compression results in added free energy and that some of the added energy contributes distortional pressure to the cells. This excites the piezoelectric effect on the cell membranes, in particular raising the potentials on the membranes of cancer cells from their depleted levels to near-normal levels. In a sample calculation, a gain of 150 mV is thus attained. This allows the Warburg effect to be reversed. The result is at least partially regained function and accompanying increased molecular order. The transformation remains even when the pressure is turned off, suggesting a change of phase; these possibilities are briefly discussed. It is found that if the pressure is, in particular, applied adiabatically, the process obeys the second law of thermodynamics, further validating the theoretical model. PMID:25520262

  4. libpolycomp: Compression/decompression library

    NASA Astrophysics Data System (ADS)

    Tomasi, Maurizio

    2016-04-01

    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
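
Two of the simpler stages mentioned above, quantization followed by run-length encoding, can be sketched in a few lines. This shows the general idea only (a smooth, slowly varying timeline collapses into long constant runs); it is not libpolycomp's actual API:

```python
from itertools import groupby

def quantize(samples, step):
    """Lossy stage: map floats to integer multiples of `step`."""
    return [round(x / step) for x in samples]

def rle(values):
    """Lossless stage: collapse runs into (value, count) pairs."""
    return [(v, len(list(g))) for v, g in groupby(values)]

# A smooth, slowly varying timeline (think telescope pointing) quantizes
# into long constant runs, which is where the compression comes from.
timeline = [1.0 + 0.001 * t for t in range(1000)]
encoded = rle(quantize(timeline, 0.01))
print(len(timeline), len(encoded))   # 1000 samples shrink to ~100 run pairs
```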

  5. Energy transfer in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre

    1995-01-01

This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped Quasi-Normal Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well-known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy-containing ranges.

  6. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  7. Hardware Accelerated Compression of LIDAR Data Using FPGA Devices

    PubMed Central

    Biasizzo, Anton; Novak, Franc

    2013-01-01

    Airborne Light Detection and Ranging (LIDAR) has become a mainstream technology for terrain data acquisition and mapping. High sampling density of LIDAR enables the acquisition of high details of the terrain, but on the other hand, it results in a vast amount of gathered data, which requires huge storage space as well as substantial processing effort. The data are usually stored in the LAS format which has become the de facto standard for LIDAR data storage and exchange. In the paper, a hardware accelerated compression of LIDAR data is presented. The compression and decompression of LIDAR data is performed by a dedicated FPGA-based circuit and interfaced to the computer via a PCI-E general bus. The hardware compressor consists of three modules: LIDAR data predictor, variable length coder, and arithmetic coder. Hardware compression is considerably faster than software compression, while it also alleviates the processor load. PMID:23673680
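
The first two modules of the pipeline described above, a predictor and a variable-length coder, can be sketched in software (the arithmetic coder is omitted). This is a toy delta predictor with zigzag/varint coding of the residuals, illustrative only and not the actual FPGA design or the LAS point format:

```python
# Predict each coordinate from the previous point; small residuals then
# encode into few bytes via zigzag mapping + varints.

def zigzag(n):
    """Map signed residuals to unsigned ints (0,-1,1,-2,... -> 0,1,2,3,...).
    Valid for |n| < 2**62."""
    return (n << 1) ^ (n >> 63)

def varint(u):
    """7-bits-per-byte variable-length encoding of an unsigned int."""
    out = bytearray()
    while True:
        b = u & 0x7F
        u >>= 7
        if u:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

# Hypothetical integer-scaled (x, y) point coordinates.
points = [(100000, 200000), (100012, 200003), (100025, 200007)]
encoded = bytearray()
prev = (0, 0)
for p in points:
    for d in (p[0] - prev[0], p[1] - prev[1]):
        encoded += varint(zigzag(d))
    prev = p
print(len(encoded))   # vs. 24 bytes for six raw 32-bit integers
```

Only the first point costs full-width varints; every later point costs one byte per coordinate, which is the saving the predictor stage buys the coder.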

  8. ECG data compression by modeling.

    PubMed Central

    Madhukar, B.; Murthy, I. S.

    1992-01-01

    This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940
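
The transform-domain idea, describing the DCT spectrum compactly instead of storing raw samples, can be sketched with a toy signal built from a few DCT basis functions. This shows only the underlying transform-and-truncate step, not the paper's parametric modeling or DPCM stages:

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse of the orthonormal DCT-II."""
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(math.sqrt(2 / N) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

# Toy "ECG": a sum of two DCT basis cosines, so its spectrum is sparse.
signal = [math.cos(math.pi * (t + 0.5) * 3 / 64)
          + 0.3 * math.cos(math.pi * (t + 0.5) * 10 / 64) for t in range(64)]
X = dct(signal)
keep = sorted(range(64), key=lambda k: -abs(X[k]))[:8]   # keep 8 of 64 coeffs
X_c = [X[k] if k in keep else 0.0 for k in range(64)]
approx = idct(X_c)
err = max(abs(a - b) for a, b in zip(signal, approx))    # tiny for this signal
```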

  9. Shock compression of precompressed deuterium

    SciTech Connect

    Armstrong, M R; Crowhurst, J C; Zaug, J M; Bastea, S; Goncharov, A F; Militzer, B

    2011-07-31

    Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 {micro}m). We further report a fast transition in shock wave compressed solid deuterium that is consistent with the ramp to shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.

  10. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  11. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  12. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160
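
The "why compression" question has a simple quantitative core: an ASCII base occupies 8 bits but carries at most 2 bits of information, so even naive bit packing gives a 4x reduction before any statistical coding. A toy sketch (not one of the surveyed tools):

```python
# Pack 4 DNA bases (2 bits each) into one byte; unpack to recover them.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        b = 0
        for j in range(4):                      # pad a short final chunk with A
            base = chunk[j] if j < len(chunk) else "A"
            b = (b << 2) | CODE[base]
        out.append(b)
    return bytes(out)

def unpack(data, n):
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(b >> shift) & 3])
    return "".join(seq[:n])

read = "ACGTACGTGATTACA"           # hypothetical read
packed = pack(read)
print(len(read), len(packed))      # 15 ASCII bytes pack into 4 bytes
```

Real sequencing compressors go much further by modeling base context and quality scores, but this fixed 4x bound is the baseline they improve on.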

  13. On the basic equations for the second-order modeling of compressible turbulence

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Shih, T.-H.

    1991-01-01

Equations for the mean and turbulent quantities for compressible turbulent flows are derived. Both the conventional Reynolds average and the mass-weighted Favre average were employed to decompose the flow variables into a mean and a turbulent quantity. These equations are to be used later in developing second-order Reynolds stress models for high-speed compressible flows. A few recent advances in modeling some of the terms in the equations due to compressibility effects are also summarized.

  14. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
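
The successive-subdivision idea can be sketched as a toy octree-style partition of RGB space: split until each volume holds only a few LUT colors, then search the nearest neighbor inside the pixel's volume. Names and structure here are illustrative, not taken from the patent:

```python
def subdivide(colors, lo=(0, 0, 0), hi=(256, 256, 256), max_colors=4):
    """Split color space into octants until each leaf volume holds at most
    `max_colors` LUT entries. Returns a list of (lo, hi, colors) leaves."""
    inside = [c for c in colors if all(l <= v < h for v, l, h in zip(c, lo, hi))]
    if len(inside) <= max_colors:
        return [(lo, hi, inside)]
    mid = tuple((l + h) // 2 for l, h in zip(lo, hi))
    leaves = []
    for octant in range(8):                     # one bit per color axis
        nlo = tuple(mid[i] if octant >> i & 1 else lo[i] for i in range(3))
        nhi = tuple(hi[i] if octant >> i & 1 else mid[i] for i in range(3))
        leaves += subdivide(inside, nlo, nhi, max_colors)
    return leaves

def nearest(pixel, leaves, lut):
    """Search only the leaf volume containing the pixel, not the whole LUT."""
    for lo, hi, colors in leaves:
        if all(l <= v < h for v, l, h in zip(pixel, lo, hi)):
            candidates = colors or lut          # fall back if volume is empty
            return min(candidates,
                       key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)))

lut = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255),
       (255, 255, 0), (0, 255, 255), (255, 0, 255), (255, 255, 255)]
leaves = subdivide(lut)
print(nearest((200, 30, 40), leaves, lut))   # -> (255, 0, 0)
```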

  15. Vapor compression distillation module

    NASA Technical Reports Server (NTRS)

    Nuccio, P. P.

    1975-01-01

A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken, leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.

  16. Gas compression apparatus

    NASA Technical Reports Server (NTRS)

    Terp, L. S. (Inventor)

    1977-01-01

    Apparatus for transferring gas from a first container to a second container of higher pressure was devised. A free-piston compressor having a driving piston and cylinder, and a smaller diameter driven piston and cylinder, comprise the apparatus. A rod member connecting the driving and driven pistons functions for mutual reciprocation in the respective cylinders. A conduit may be provided for supplying gas to the driven cylinder from the first container. Also provided is apparatus for introducing gas to the driving piston, to compress gas by the driven piston for transfer to the second higher pressure container. The system is useful in transferring spacecraft cabin oxygen into higher pressure containers for use in extravehicular activities.

  17. Compressed hyperspectral sensing

    NASA Astrophysics Data System (ADS)

    Tsagkatakis, Grigorios; Tsakalides, Panagiotis

    2015-03-01

Acquisition of high-dimensional Hyperspectral Imaging (HSI) data using limited-dimensionality imaging sensors has led to designs with restricted capabilities that hinder the proliferation of HSI. To overcome this limitation, novel HSI architectures strive to minimize the strict requirements of HSI by introducing computation into the acquisition process. A framework that allows the integration of acquisition with computation is the recently proposed Compressed Sensing (CS). In this work, we propose a novel HSI architecture that exploits the sampling and recovery capabilities of CS to achieve a dramatic reduction in HSI acquisition requirements. In the proposed architecture, signals from multiple spectral bands are multiplexed before being recorded by the imaging sensor. Reconstruction of the full hyperspectral cube is achieved by exploiting a dictionary of elementary spectral profiles in a unified minimization framework. Simulation results suggest that high-quality recovery is possible from a single or a small number of multiplexed frames.
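
The band-multiplexing step described above (signals from several spectral bands summed before reaching the sensor) can be sketched as a simple forward model; the recovery step, which needs the spectral dictionary, is omitted. All names and sizes are made up:

```python
import random

# Toy hyperspectral cube: `bands` spectral bands of `pixels` pixels each.
random.seed(0)
bands, pixels = 16, 4
cube = [[random.random() for _ in range(pixels)] for _ in range(bands)]

def multiplexed_frame(cube, weights):
    """One sensor frame: per-pixel weighted sum across all spectral bands."""
    return [sum(w * band[p] for w, band in zip(weights, cube))
            for p in range(len(cube[0]))]

weights = [random.choice((0, 1)) for _ in range(bands)]   # on/off band mask
frame = multiplexed_frame(cube, weights)
print(len(frame))   # a single frame mixes information from all 16 bands
```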

  18. Compressively sensed complex networks.

    SciTech Connect

    Dunlavy, Daniel M.; Ray, Jaideep; Pinar, Ali

    2010-07-01

The aim of this project is to develop low-dimension parametric (deterministic) models of complex networks, to use compressive sensing (CS) and multiscale analysis to do so, and to exploit the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general, and so not especially efficient or customized. Other model-based CS techniques exist, but they have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.

  19. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  20. Edge compression manifold apparatus

    DOEpatents

    Renzi, Ronald F.

    2007-02-27

    A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector for coupling at least one fluid conduit to a corresponding port of a substrate that includes: (i) a manifold comprising one or more channels extending therethrough wherein each channel is at least partially threaded, (ii) one or more threaded ferrules each defining a bore extending therethrough with each ferrule supporting a fluid conduit wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface wherein the substrate is positioned below the manifold so that the one or more ports is aligned with the one or more channels of the manifold, and (iv) device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.

  1. Edge compression manifold apparatus

    DOEpatents

    Renzi, Ronald F.

    2004-12-21

    A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector for coupling at least one fluid conduit to a corresponding port of a substrate that includes: (i) a manifold comprising one or more channels extending therethrough wherein each channel is at least partially threaded, (ii) one or more threaded ferrules each defining a bore extending therethrough with each ferrule supporting a fluid conduit wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface wherein the substrate is positioned below the manifold so that the one or more ports is aligned with the one or more channels of the manifold, and (iv) device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.

  2. Compression and compression fatigue testing of composite laminates

    NASA Technical Reports Server (NTRS)

    Porter, T. R.

    1982-01-01

The effects of moisture and temperature on the fatigue and fracture response of composite laminates under compression loads were investigated. The structural laminates studied were intermediate-stiffness graphite-epoxy composites: a typical angle-ply laminate and a typical fan blade laminate. Full and half penetration slits and impact delaminations were the defects examined. Results are presented which show the effects of moisture on the fracture and fatigue strength at room temperature, 394 K (250 F), and 422 K (300 F). Static test results show the effects of defect size and type on the compression-fracture strength under moisture and thermal environments. The cyclic test results compare the fatigue lives and residual compression strength under compression-only and under tension-compression fatigue loading.

  3. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charge comes from the Einstein photoelectric conversion effect. Applying the manufacturing design principle, we only allow altering each working component by a minimum of one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the saving is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to the matured CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as a dual Photon Detector (PD) analog circuit for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket pixel level: the charge transport bias voltage steers charge toward neighboring buckets or, if not, to the ground drainage. Since a snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor the powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing: FFT, thresholding of significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery, done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, during new-frame selection, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data a la [Φ]M,N: M(t) = K(t) log N(t).
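
The closing relation M(t) = K(t)·log N(t) is the standard compressive-sensing measurement-count rule of thumb (constants omitted). A tiny illustration with made-up numbers:

```python
import math

def measurements_needed(k, n):
    """CS rule of thumb from the abstract: M ~ K * log(N) random projections
    suffice for a K-sparse scene of N pixels (constant factors omitted)."""
    return math.ceil(k * math.log(n))

# Hypothetical frame: 1 megapixel with ~100 significant (sparse) components.
print(measurements_needed(100, 1_000_000))   # -> 1382, far below 1,000,000
```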

  4. Compressive optical imaging systems

    NASA Astrophysics Data System (ADS)

    Wu, Yuehao

    Compared to the classic Nyquist sampling theorem, Compressed Sensing or Compressive Sampling (CS) was proposed as a more efficient alternative for sampling sparse signals. In this dissertation, we discuss the implementation of the CS theory in building a variety of optical imaging systems. CS-based Imaging Systems (CSISs) exploit the sparsity of optical images in their transformed domains by imposing incoherent CS measurement patterns on them. The amplitudes and locations of sparse frequency components of optical images in their transformed domains can be reconstructed from the CS measurement results by solving an l1-regularized minimization problem. In this work, we review the theoretical background of the CS theory and present two hardware implementation schemes for CSISs, including a single pixel detector based scheme and an array detector based scheme. The first implementation scheme is suitable for acquiring Two-Dimensional (2D) spatial information of the imaging scene. We demonstrate the feasibility of this implementation scheme by developing a single pixel camera, a multispectral imaging system, and an optical sectioning microscope for fluorescence microscopy. The array detector based scheme is suitable for hyperspectral imaging applications, wherein both the spatial and spectral information of the imaging scene are of interest. We demonstrate the feasibility of this scheme by developing a Digital Micromirror Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement processes on the Three-Dimensional (3D) spatial/spectral information of the imaging scene. Tens of spectral images can be reconstructed from the DMD-SSI system simultaneously without any mechanical or temporal scanning processes.
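
    The l1-regularized reconstruction step common to CSISs can be sketched with a minimal iterative soft-thresholding (ISTA) loop. The problem sizes, Gaussian measurement pattern, and solver parameters below are illustrative assumptions, not the dissertation's actual optics or solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene": n coefficients, only k of them nonzero (sparsity prior).
n, m, k = 30, 15, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0

# Incoherent CS measurement pattern: random Gaussian rows.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA for the l1-regularized problem  min_x 0.5*||y - Ax||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x_hat = np.zeros(n)
for _ in range(500):
    g = x_hat + (A.T @ (y - A @ x_hat)) / L                    # gradient step
    x_hat = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
```

    The soft-threshold step is what enforces sparsity: far fewer measurements than unknowns (m < n) still yield a faithful estimate when the scene is sparse in the chosen transform domain.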

  5. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. If that return path does not exist, however, then neither Van Jacobson's scheme nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). Under link conditions of low delay and low error, all of the schemes perform as expected; based on their methodologies, however, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer increased loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets, so loss propagation is avoided; however, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger headers. The schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507's performance with TCP connections improves on Van Jacobson's through various techniques, but it still suffers under poor link properties. RFC2507 also offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves
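
    A toy Python sketch of the delta-encoding idea discussed above, and of why a lost packet propagates errors, follows. The field names `seq` and `ack` are hypothetical stand-ins; real schemes such as RFC 1144 operate on actual TCP/IP header fields with far more machinery.

```python
def delta_encode(headers):
    """Send the first header in full, then only per-field differences
    (deltas) for later packets."""
    if not headers:
        return []
    out = [dict(headers[0])]
    for prev, cur in zip(headers, headers[1:]):
        out.append({f: cur[f] - prev[f] for f in cur})
    return out

def delta_decode(packets):
    """Rebuild headers by accumulating deltas. Losing one packet corrupts
    every later header until a full header resynchronizes the state,
    which is the loss propagation the survey discusses."""
    if not packets:
        return []
    out = [dict(packets[0])]
    for delta in packets[1:]:
        out.append({f: out[-1][f] + delta[f] for f in delta})
    return out

# Hypothetical integer header fields, not a real TCP/IP layout.
headers = [{'seq': 100, 'ack': 1}, {'seq': 1100, 'ack': 1}, {'seq': 2100, 'ack': 2}]
encoded = delta_encode(headers)
decoded = delta_decode(encoded)
```

    Deltas between consecutive headers are small and compress well, which is the gain; the decoder's dependence on every prior packet is the cost SCPS avoids by not delta-encoding.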

  6. (Finite) statistical size effects on compressive strength.

    PubMed

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-04-29

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules. PMID:24733930
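
    The weakest-link picture that the paper tests can be sketched with a small Monte-Carlo simulation; the Weibull modulus, scale, and sample counts below are arbitrary assumptions for illustration.

```python
import random

def chain_strength(n_links, rng, shape=2.0, scale=1.0):
    """Weakest-link model: a structure of n elements fails when its
    weakest element fails, so its strength is the minimum of n
    Weibull-distributed element strengths."""
    return min(rng.weibullvariate(scale, shape) for _ in range(n_links))

def mean_strength(n_links, trials=2000, seed=1):
    rng = random.Random(seed)
    total = sum(chain_strength(n_links, rng) for _ in range(trials))
    return total / trials

# Mean strength versus structure size under the weakest-link assumption.
means = {n: mean_strength(n) for n in (1, 10, 100)}
```

    Under this model the mean strength decays roughly as n^(-1/shape) and vanishes at large sizes, which is exactly the extreme-value prediction the authors show is not obeyed under compressive loading.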

  8. Compressible turbulent mixing: Effects of compressibility and Schmidt number

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2015-11-01

    Effects of compressibility and Schmidt number on a passive scalar in compressible turbulence were studied. Regarding compressibility, the scalar spectrum followed the k^(-5/3) inertial-range scaling and suffered negligible influence from compressibility. The transfer of scalar flux was reduced by the transition from incompressible to compressible flow, but was enhanced by the growth of the Mach number. The intermittency parameter increased with the Mach number and decreased with the growth of the compressive mode of the driving force. The dependence of the mixing timescale on compressibility showed that, for the driving force, the compressive mode is less efficient in enhancing scalar mixing. Regarding the Schmidt number (Sc), in the inertial-convective range the scalar spectrum obeyed the k^(-5/3) scaling. For Sc >> 1, a k^(-1) power law appeared in the viscous-convective range, while for Sc << 1, a k^(-17/3) power law was identified in the inertial-diffusive range. The transfer of scalar flux grew with Sc. In the Sc >> 1 flow the scalar field rolled up and mixed thoroughly, while the Sc << 1 flow had only large-scale, cloud-like structures. In both Sc >> 1 and Sc << 1 flows, the spectral densities of scalar advection and dissipation followed the k^(-5/3) scaling, indicating that in compressible turbulence the processes of advection and dissipation may defer to the Kolmogorov picture. Finally, comparison with incompressible results showed that the scalar in compressible turbulence lacks a conspicuous bump structure in its spectrum and is more intermittent in the dissipative range.

  9. About the use of stoichiometric hydroxyapatite in compression - incidence of manufacturing process on compressibility.

    PubMed

    Pontier, C; Viana, M; Champion, E; Bernache-Assollant, D; Chulia, D

    2001-05-01

    The literature concerning calcium phosphates in pharmacy shows the chemical diversity of the compounds available. Some excipient manufacturers offer hydroxyapatite as a direct-compression excipient, but chemical analysis of this compound usually reveals variability in its composition: the so-called materials can be hydroxyapatite or other calcium phosphates, uncalcined (i.e., with a low crystallinity) or calcined and well-crystallized hydroxyapatite. This study highlights the influence of the crystallinity of one compound (i.e., hydroxyapatite) on its mechanical properties. Stoichiometric hydroxyapatite is synthesized, and compounds differing in their crystallinity, manufacturing process, and particle size are manufactured. X-ray diffraction analysis is used to investigate the chemical nature of the compounds. The mechanical study (analysis of the compression, diametral compressive strength, and Heckel plots) highlights the negative effect of calcination on the mechanical properties. Porosity and specific surface area measurements show the effect of calcination on compaction. Uncalcined materials show bulk and mechanical properties in accordance with their use as direct-compression excipients. PMID:11343890

  10. Compression and Predictive Distributions for Large Alphabets

    NASA Astrophysics Data System (ADS)

    Yang, Xiao

    Data generated from large alphabets exist almost everywhere in our lives, for example, in texts, images, and videos. Traditional universal compression algorithms mostly involve small alphabets and implicitly assume an asymptotic condition under which the extra bits induced by the compression process vanish as the amount of data grows to infinity. In this thesis, we focus on compression and prediction for large alphabets, with the alphabet size comparable to or larger than the sample size. We first consider sequences of random variables independently and identically generated from a large alphabet; in particular, the sample size is allowed to be variable. A product distribution based on Poisson sampling and tilting is proposed as the coding distribution, which greatly simplifies the implementation and analysis through independence. Moreover, we characterize the behavior of the coding distribution through a condition on the tail sum of the ordered counts, and apply it to sequences satisfying this condition. We further apply this method to envelope classes. This coding distribution provides a convenient way to approximately compute Shtarkov's normalized maximum likelihood (NML) distribution, and the extra price paid for this convenience is small compared to the total cost. Furthermore, we find that this coding distribution can also be used to compute the NML distribution exactly, and the calculation remains simple owing to the independence of the coding distribution. Finally, we consider a more realistic class, the Markov class, and in particular tree sources. A context-tree-based algorithm is designed to describe the dependencies among contexts; it is a greedy algorithm that seeks the greatest savings in codelength when constructing the tree. Compression and prediction of the individual counts associated with the contexts use the same coding distribution as in the i.i.d. case. Combining these two procedures, we demonstrate a compression algorithm based

  11. Variable compression ratio device for internal combustion engine

    DOEpatents

    Maloney, Ronald P.; Faletti, James J.

    2004-03-23

    An internal combustion engine, particularly suitable for use in a work machine, is provided with a combustion cylinder, a cylinder head at an end of the combustion cylinder and a primary piston reciprocally disposed within the combustion cylinder. The cylinder head includes a secondary cylinder and a secondary piston reciprocally disposed within the secondary cylinder. An actuator is coupled with the secondary piston for controlling the position of the secondary piston dependent upon the position of the primary piston. A communication port establishes fluid flow communication between the combustion cylinder and the secondary cylinder.
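
    The geometric effect of a communicating secondary cylinder on the compression ratio can be sketched as follows. The displacement and clearance volumes are assumed example numbers, and the patent's actual actuator control law is not modeled; only the extra clearance volume contributed through the communication port is represented.

```python
def compression_ratio(swept_cc, clearance_cc, secondary_cc):
    """Effective compression ratio when a secondary cylinder communicates
    with the combustion chamber: retracting the secondary piston adds
    volume to the clearance space at both BDC and TDC."""
    v_bdc = swept_cc + clearance_cc + secondary_cc   # total volume at BDC
    v_tdc = clearance_cc + secondary_cc              # total volume at TDC
    return v_bdc / v_tdc

# Illustrative (assumed) geometry: 500 cc swept volume, 50 cc clearance.
base = compression_ratio(500.0, 50.0, 0.0)    # secondary piston fully advanced
low = compression_ratio(500.0, 50.0, 25.0)    # secondary piston retracted
```

    With these example numbers the ratio drops from 11:1 to about 7.7:1 as the secondary piston retracts, which is the kind of load-dependent adjustment the actuator provides.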

  12. Pressure Oscillations in Adiabatic Compression

    ERIC Educational Resources Information Center

    Stout, Roland

    2011-01-01

    After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…

  13. Compression failure of composite laminates

    NASA Technical Reports Server (NTRS)

    Pipes, R. B.

    1983-01-01

    This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.

  14. Data compression by wavelet transforms

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1992-01-01

    A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.

  15. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
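
    A minimal one-level Haar example of the coefficient-zeroing strategy described above, in Python. This is an illustrative 1-D sketch under assumed data, not the report's actual wavelet code; the wavelet family and thresholds used in the study are not specified here.

```python
def haar_forward(x):
    """One-level Haar transform: low-pass averages and high-pass details
    (assumes an even-length input)."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert the one-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def compress(x, threshold):
    """Zero the high-frequency, low-amplitude coefficients (the 'noise'
    band) and reconstruct; larger thresholds give more zeros for a
    back-end lossless entropy coder to exploit."""
    approx, detail = haar_forward(x)
    kept = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_inverse(approx, kept)

signal = [10.0, 10.5, 12.0, 11.5, 50.0, 50.5, 10.0, 10.5]
lossless = compress(signal, 0.0)   # nothing below threshold: exact round trip
lossy = compress(signal, 0.3)      # small details zeroed, large features kept
```

    Zeroing only the small detail coefficients removes noise while leaving the larger, lower-frequency signatures nearly intact, which is why target detection survives aggressive coefficient zeroing.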

  16. Streaming Compression of Hexahedral Meshes

    SciTech Connect

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e., when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice, this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes, even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  17. Compression Shocks of Detached Flow

    NASA Technical Reports Server (NTRS)

    Eggink

    1947-01-01

    It is known that compression shocks which lead from supersonic to subsonic velocity cause the flow to separate on impact on a rigid wall. Such shocks appear at bodies with circular symmetry or wing profiles on locally exceeding sonic velocity, and in Laval nozzles with too high a back pressure. The form of the compression shocks observed therein is investigated.

  18. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high-definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  19. Analytical model for ramp compression

    NASA Astrophysics Data System (ADS)

    Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun

    2016-08-01

    An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic region width, material properties, profile of the pressure pulse, and the pressure pulse duration can be reasonably allocated or chosen. To demonstrate and study this model, laser-direct-driven ramp compression experiments and code simulation are performed successively, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can be used as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.

  20. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.

  1. Compression relief engine brake

    SciTech Connect

    Meneely, V.A.

    1987-10-06

    A compression relief brake is described for four-cycle internal-combustion engines, comprising: a pressurized oil supply; means for selectively pressurizing a hydraulic circuit with oil from the oil supply; a master piston and cylinder communicating with a slave piston and cylinder via the hydraulic circuit; and an engine exhaust valve mechanically coupled to the engine and timed to open during the exhaust cycle of the engine, the exhaust valve being coupled to the slave piston. The exhaust valve is spring-biased to a closed state in which it contacts a valve seat. A sleeve is frictionally and slidably disposed within a cavity defined by the slave piston, which cavity communicates with the hydraulic circuit. When the hydraulic circuit is selectively pressurized and the engine is operating, the sleeve entraps an incompressible volume of oil within the cavity to generate a displacement of the slave piston within the slave cylinder, whereby a first gap is maintained between the exhaust valve and its associated seat. Means are provided for reciprocally activating the master piston to increase the pressure within the previously pressurized hydraulic circuit during at least a portion of the expansion cycle of the engine, whereby a second gap is reciprocally maintained between the exhaust valve and its associated seat.

  2. Compressive sensing by learning a Gaussian mixture model from measurements.

    PubMed

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2015-01-01

    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins. PMID:25361508
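
    For a single Gaussian component, the closed-form reconstruction the abstract mentions is the usual posterior mean; a hedged numpy sketch follows. A full GMM would apply this per component and mix the results by posterior component weights; all dimensions and noise levels below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def gaussian_mmse(y, A, mu, Sigma, noise_var):
    """Closed-form MMSE estimate (posterior mean) of x ~ N(mu, Sigma)
    observed through y = A x + n with n ~ N(0, noise_var * I)."""
    S = A @ Sigma @ A.T + noise_var * np.eye(A.shape[0])
    gain = Sigma @ A.T @ np.linalg.solve(S, y - A @ mu)
    return mu + gain

# Fully observed, nearly noiseless case: the estimate should track y.
A = np.eye(3)
y = np.array([1.0, -2.0, 0.5])
est = gaussian_mmse(y, A, np.zeros(3), np.eye(3), 1e-8)

# Undersampled case: 2 linear measurements of a 3-vector still give a
# well-defined posterior mean under the Gaussian prior.
A2 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
est2 = gaussian_mmse(np.array([1.0, 2.0]), A2, np.zeros(3), np.eye(3), 0.1)
```

    The closed form is what makes GMM-based compressive sensing fast at reconstruction time; the paper's contribution is learning mu and Sigma for each component directly from the compressive measurements.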

  3. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit plane, and finally an algorithm for adaptive image compression. The analysis of the image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made using a difference criterion calculated from the wavelet coefficients of the original image and of the decoded image at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective way to perform adaptive compression. The image classification and image compression algorithms have been implemented in JAVA.

  4. Advances in compressible turbulent mixing

    SciTech Connect

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  5. Best compression: Reciprocating or rotary?

    SciTech Connect

    Cahill, C.

    1997-07-01

    A compressor is a device used to increase the pressure of a compressible fluid. The inlet pressure can vary from a deep vacuum to a high positive pressure. The discharge pressure can range from subatmospheric levels to tens of thousands of pounds per square inch. Compressors come in numerous forms, but for oilfield applications there are two primary types: reciprocating and rotary. Both are grouped in the intermittent mode of compression. Intermittent compression is cyclic in nature, in that a specific quantity of gas is ingested by the compressor, acted upon, and discharged before the cycle is repeated. Reciprocating compression is the most common form of compression used for oilfield applications. Rotary screw compressors have a long history but are relative newcomers to oilfield applications. The rotary screw compressor, technically a helical rotor compressor, dates back to 1878, when the first rotary screw was manufactured for the purpose of compressing air. Today thousands of rotary screw compression packages are used throughout the world to compress natural gas.

  6. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data and then losslessly compress a quantized version of the wavelet-transformed data. Under this approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros followed by a nonzero value; the nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent runlengths, and takes the difference between this average
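
    The two code families can be sketched in a few lines of Python. This is an illustrative encoder only (Golomb codes restricted to the power-of-two Rice special case for simplicity), not the adaptive parameter-selection method the abstract describes.

```python
def golomb_rice(n, k):
    """Golomb code with parameter m = 2**k (the Rice special case):
    a unary-coded quotient terminated by '0', then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, 'b').zfill(k) if k else ''
    return '1' * q + '0' + rem

def exp_golomb(n):
    """Order-0 exponential-Golomb code: write n + 1 in binary, preceded
    by (number of binary digits - 1) zeros."""
    b = format(n + 1, 'b')
    return '0' * (len(b) - 1) + b
```

    Both families are prefix-free, so a decoder can parse a bit stream without separators; the single parameter (k here) is what the adaptive method selects from the statistics of previously encoded values.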

  7. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  8. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  9. Image compression using constrained relaxation

    NASA Astrophysics Data System (ADS)

    He, Zhihai

    2007-01-01

    In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels: the pixels must satisfy a set of imaging constraints so as to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 Intra frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.

  10. Lossy Compression of ACS images

    NASA Astrophysics Data System (ADS)

    Cox, Colin

    2004-01-01

    A method of compressing images stored as floating point arrays was proposed several years ago by White and Greenfield. With the increased image sizes encountered in the last few years and the consequent need to distribute large data volumes, the value of applying such a procedure has become more evident. Methods such as this which offer significant compression ratios are lossy and there is always some concern that statistically important information might be discarded. Several astronomical images have been analyzed and, in the examples tested, compression ratios of about six were obtained with no significant information loss.

  11. Partial transparency of compressed wood

    NASA Astrophysics Data System (ADS)

    Sugimoto, Hiroyuki; Sugimori, Masatoshi

    2016-05-01

    We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path through the sample, because the refractive indices of the cell constituents and of the air in the lumina differ. In this study, wood compressed so as to close the lumina exhibited optical transparency. Because compression of wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  12. Television Compression Algorithms And Transmission On Packet Networks

    NASA Astrophysics Data System (ADS)

    Brainard, R. C.; Othmer, J. H.

    1988-10-01

    Wide-band packet transmission is a subject of strong current interest. The transmission of compressed TV signals over such networks is possible at any quality level. There are some specific advantages in using packet networks for TV transmission: namely, any fixed data rate can be chosen, or a variable data rate can be utilized. However, on the negative side, packet loss must be considered and differential delay in packet arrival must be compensated. The possibility of packet loss has a strong influence on compression algorithm choice. Differential delay of packet arrival is a new problem in codec design. Some issues relevant to mutual design of the transmission networks and compression algorithms will be presented. An assumption is that the packet network will maintain packet sequence integrity. For variable-rate transmission, a reasonable definition of peak data rate is necessary. Rate constraints may be necessary to encourage instituting a variable-rate service on the networks. The charging algorithm for network use will have an effect on selection of compression algorithm. Some values of, and procedures for implementing, packet priorities are discussed. Packet length has only a second-order effect on packet-TV considerations. Some examples of a range of codecs for differing data rates and picture quality are given. These serve to illustrate sensitivities to the various characteristics of packet networks. Perhaps more important, we talk about what we do not know about the design of such systems.

  13. A Motion-Compensating Image-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Wong, Carol

    1994-01-01

    Chrominance used (in addition to luminance) in estimating motion. Variable-rate digital coding scheme for compression of color-video-image data designed to deliver pictures of good quality at moderate compressed-data rate of 1 to 2 bits per pixel, or of fair quality at rate less than 1 bit per pixel. Scheme, in principle, implemented by use of commercially available application-specific integrated circuits. Incorporates elements of some prior coding schemes, including motion compensation (MC) and discrete cosine transform (DCT).

  14. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4∼2 dB compared with the current state-of-the-art, while maintaining a low computational complexity. PMID:25490597

  15. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.

  16. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.

  17. Compression fractures of the back

    MedlinePlus

    Compression fractures of the back are broken vertebrae. Vertebrae are the bones of the spine. ... bone from elsewhere Tumors that start in the spine, such as multiple myeloma Having many fractures of ...

  18. Efficient Decoding of Compressed Data.

    ERIC Educational Resources Information Center

    Bassiouni, Mostafa A.; Mukherjee, Amar

    1995-01-01

    Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
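
Multibit (table-driven) Huffman decoding, one of the speed-up techniques the article addresses, can be sketched as follows; the 3-symbol code table is hypothetical, and the table width must be at least the longest codeword:

```python
def build_decode_table(codes, width):
    """Build a lookup table over all width-bit strings. Each entry gives
    (symbol, code length), so the decoder advances a whole codeword per
    table lookup instead of walking the Huffman tree one bit at a time."""
    table = {}
    for sym, code in codes.items():
        pad = width - len(code)
        if pad == 0:
            table[code] = (sym, len(code))
        else:
            for tail in range(1 << pad):        # enumerate all possible suffixes
                table[code + format(tail, "0{}b".format(pad))] = (sym, len(code))
    return table

def decode(bits, table, width, count):
    """Decode `count` symbols from a bitstring using the lookup table."""
    out, pos = [], 0
    bits += "0" * width          # pad so the final lookup is full width
    while len(out) < count:
        sym, used = table[bits[pos:pos + width]]
        out.append(sym)
        pos += used              # consume only the bits of the matched codeword
    return out

# Hypothetical prefix-free code with maximum codeword length 2.
codes = {"a": "0", "b": "10", "c": "11"}
table = build_decode_table(codes, width=2)
```

Decoding `"010110"` (the encoding of "abca" under this code) then takes one table lookup per symbol rather than one tree step per bit.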

  19. [New aspects of compression therapy].

    PubMed

    Partsch, Bernhard; Partsch, Hugo

    2016-06-01

    In this review article the mechanisms of action of compression therapy are summarized and a survey of materials is presented, together with some practical advice on how and when these different devices should be applied. Some new experimental findings regarding the optimal dosage (= compression pressure) concerning an improvement of venous hemodynamics and a reduction of oedema are discussed. It is shown that stiff, non-yielding material applied with adequate pressure provides hemodynamically superior effects compared to elastic material, and that relatively low pressures reduce oedema. Compression over the calf is more important for increasing calf pump function than graduated compression. In patients with mixed, arterial-venous ulcers and an ABPI over 0.6, inelastic bandages not exceeding a sub-bandage pressure of 40 mmHg may increase the arterial flow and improve venous pumping function. PMID:27259340

  20. Compressed gas fuel storage system

    SciTech Connect

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  1. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.

  2. Shock compression of polyvinyl chloride

    NASA Astrophysics Data System (ADS)

    Neogi, Anupam; Mitra, Nilanjan

    2016-04-01

    This study presents shock compression simulation of atactic polyvinyl chloride (PVC) using ab-initio and classical molecular dynamics. The manuscript also identifies the limits of applicability of classical molecular dynamics based shock compression simulation for PVC. The mechanism of bond dissociation under shock loading and its progression is demonstrated in this manuscript using the density functional theory based molecular dynamics simulations. The rate of dissociation of different bonds at different shock velocities is also presented in this manuscript.

  3. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  4. Anamorphic transformation and its application to time-bandwidth compression.

    PubMed

    Asghari, Mohammad H; Jalali, Bahram

    2013-09-20

    A general method for compressing the modulation time-bandwidth product of analog signals is introduced. As one of its applications, this physics-based signal grooming, performed in the analog domain, allows a conventional digitizer to sample and digitize the analog signal with variable resolution. The net result is that frequency components that were beyond the digitizer bandwidth can now be captured and, at the same time, the total digital data size is reduced. This compression is lossless and is achieved through a feature selective reshaping of the signal's complex field, performed in the analog domain prior to sampling. Our method is inspired by operation of Fovea centralis in the human eye and by anamorphic transformation in visual arts. The proposed transform can also be performed in the digital domain as a data compression algorithm to alleviate the storage and transmission bottlenecks associated with "big data." PMID:24085172

  5. Fixed-rate compressed floating-point arrays

    Energy Science and Technology Software Center (ESTSC)

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.
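
The fixed-rate idea can be illustrated with a toy block codec. This is not ZFP's actual algorithm (ZFP uses a decorrelating transform and embedded coding on 4^d blocks); it is only a sketch, under those stated simplifications, of why equal-sized compressed blocks permit random access:

```python
import numpy as np

BLOCK, BITS_PER_VAL = 4, 6   # toy parameters, not ZFP's

def compress_block(vals):
    """Toy fixed-rate codec: a per-block scale factor plus one signed
    BITS_PER_VAL-bit integer per value. Because every block compresses to
    the same size, block i lives at a fixed offset in the compressed
    stream, enabling O(1) random reads and writes."""
    scale = float(np.max(np.abs(vals))) or 1.0
    qmax = (1 << (BITS_PER_VAL - 1)) - 1
    q = np.round(vals / scale * qmax).astype(int)   # uniform quantization
    return scale, q

def decompress_block(scale, q):
    qmax = (1 << (BITS_PER_VAL - 1)) - 1
    return q / qmax * scale
```

The reconstruction error per value is bounded by half a quantization step times the block scale, which is the lossy, rate-controlled trade-off the fixed-rate mode makes.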

  6. Advanced application flight experiment breadboard pulse compression radar altimeter program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Design, development and performance of the pulse compression radar altimeter is described. The high resolution breadboard system is designed to operate from an aircraft at 10 Kft above the ocean and to accurately measure altitude, sea wave height and sea reflectivity. The minicomputer controlled Ku band system provides six basic variables and an extensive digital recording capability for experimentation purposes. Signal bandwidths of 360 MHz are obtained using a reflective array compression line. Stretch processing is used to achieve 1000:1 pulse compression. The system range command LSB is 0.62 ns or 9.25 cm. A second order altitude tracker, aided by accelerometer inputs is implemented in the system software. During flight tests the system demonstrated an altitude resolution capability of 2.1 cm and sea wave height estimation accuracy of 10%. The altitude measurement performance exceeds that of the Skylab and GEOS-C predecessors by approximately an order of magnitude.

  7. Online Adaptive Vector Quantization with Variable Size Codebook Entries.

    ERIC Educational Resources Information Center

    Constantinescu, Cornel; Storer, James A.

    1994-01-01

    Presents a new image compression algorithm that employs some of the most successful approaches to adaptive lossless compression to perform adaptive online (single pass) vector quantization with variable size codebook entries. Results of tests of the algorithm's effectiveness on standard test images are given. (12 references) (KRN)
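
A minimal sketch of single-pass (online) adaptive vector quantization in this spirit follows; it assumes fixed-size blocks and a hypothetical distortion threshold, and does not model the article's variable-size codebook entries:

```python
import numpy as np

def online_vq(blocks, threshold):
    """Single-pass adaptive VQ: each block is coded as the index of its
    nearest codeword; a block that no existing codeword approximates
    within `threshold` (squared error) is added to the codebook on the
    fly, so the codebook adapts to the image in one pass."""
    codebook, indices = [], []
    for b in blocks:
        if codebook:
            d = [np.sum((b - c) ** 2) for c in codebook]
            j = int(np.argmin(d))
        if not codebook or d[j] > threshold:
            codebook.append(b.copy())      # grow the codebook adaptively
            j = len(codebook) - 1
        indices.append(j)
    return codebook, indices
```

A decoder that grows its codebook by the same rule stays synchronized with the encoder, so no codebook needs to be transmitted.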

  8. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  9. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  10. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  11. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  12. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  13. Glucose Variability

    PubMed Central

    2013-01-01

    The proposed contribution of glucose variability to the development of the complications of diabetes beyond that of glycemic exposure is supported by reports that oxidative stress, the putative mediator of such complications, is greater for intermittent as opposed to sustained hyperglycemia. Variability of glycemia in ambulatory conditions defined as the deviation from steady state is a phenomenon of normal physiology. Comprehensive recording of glycemia is required for the generation of any measurement of glucose variability. To avoid distortion of variability to that of glycemic exposure, its calculation should be devoid of a time component. PMID:23613565

  14. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. 
In this presentation I will describe some of our preliminary explorations of the applications
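
DCT quantization-matrix coding of the kind the first technique targets can be sketched as follows. The uniform matrix Q below is a placeholder; the work described above derives a perceptually optimized Q from viewing distance, display resolution, and brightness, which is not modeled here:

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix: D @ block @ D.T is the 2-D DCT of a block.
k = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
D[0] *= np.sqrt(0.5)

def quantize_block(block, Q):
    """Divide each 8x8 DCT coefficient by its entry in quantization matrix
    Q and round; coarser Q entries discard more (ideally less visible)
    detail, which is where perceptual optimization of Q enters."""
    return np.round((D @ block @ D.T) / Q).astype(int)

def dequantize_block(q, Q):
    return D.T @ (q * Q) @ D       # inverse DCT of the rescaled coefficients
```

A JPEG-style coder then entropy-codes the small integers returned by `quantize_block`.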

  15. Absolutely lossless compression of medical images.

    PubMed

    Ashraf, Robina; Akbar, Muhammad

    2005-01-01

    Data in medical images is very large and therefore for storage and/or transmission of these images, compression is essential. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In the approach an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as a lossy compressor, while for lossless compression Huffman coding is used. Quality of images is evaluated by comparing with standard compression techniques available. PMID:17281110

  16. Are Compression Stockings an Effective Treatment for Orthostatic Presyncope?

    PubMed Central

    Protheroe, Clare Louise; Dikareva, Anastasia; Menon, Carlo; Claydon, Victoria Elizabeth

    2011-01-01

    Background Syncope, or fainting, affects approximately 6.2% of the population, and is associated with significant comorbidity. Many syncopal events occur secondary to excessive venous pooling and capillary filtration in the lower limbs when upright. As such, a common approach to the management of syncope is the use of compression stockings. However, research confirming their efficacy is lacking. We aimed to investigate the effect of graded calf compression stockings on orthostatic tolerance. Methodology/Principal Findings We evaluated orthostatic tolerance (OT) and haemodynamic control in 15 healthy volunteers wearing graded calf compression stockings compared to two placebo stockings in a randomized, cross-over, double-blind fashion. OT (time to presyncope, min) was determined using combined head-upright tilting and lower body negative pressure applied until presyncope. Throughout testing we continuously monitored beat-to-beat blood pressures, heart rate, stroke volume and cardiac output (finger plethysmography), cerebral and forearm blood flow velocities (Doppler ultrasound) and breath-by-breath end tidal gases. There were no significant differences in OT between compression stocking (26.0±2.3 min) and calf (29.3±2.4 min) or ankle (27.6±3.1 min) placebo conditions. Cardiovascular, cerebral and respiratory responses were similar in all conditions. The efficacy of compression stockings was related to anthropometric parameters, and could be predicted by a model based on the subject's calf circumference and shoe size (r = 0.780, p = 0.004). Conclusions/Significance These data question the use of calf compression stockings for orthostatic intolerance and highlight the need for individualised therapy accounting for anthropometric variables when considering treatment with compression stockings. PMID:22194814

  17. An overview of semantic compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2010-08-01

    We live in such perceptually rich natural and manmade environments that detection and recognition of objects is mediated cerebrally by attentional filtering, in order to separate objects of interest from background clutter. In computer models of the human visual system, attentional filtering is often restricted to early processing, where areas of interest (AOIs) are delineated around anomalies of interest, then the pixels within each AOI's subtense are isolated for later processing. In contrast, the human visual system concurrently detects many targets at multiple levels (e.g., retinal center-surround filters, ganglion layer feature detectors, post-retinal spatial filtering, and cortical detection / filtering of features and objects, to name but a few processes). Intracranial attentional filtering appears to play multiple roles, including clutter filtration at all levels of processing - thus, we process individual retinal cell responses, early filtering response, and so forth, on up to the filtering of objects at high levels of semantic complexity. Computationally, image compression techniques have progressed from emphasizing pixels, to considering regions of pixels as foci of computational interest. In more recent research, object-based compression has been investigated with varying rate-distortion performance and computational efficiency. Codecs have been developed for a wide variety of applications, although the majority of compression and decompression transforms continue to concentrate on region- and pixel-based processing, in part because of computational convenience. It is interesting to note that a growing body of research has emphasized the detection and representation of small features in relationship to their surrounding environment, which has occasionally been called semantic compression. In this paper, we overview different types of semantic compression approaches, with particular interest in high-level compression algorithms. Various algorithms and

  18. Compression of spectral meteorological imagery

    NASA Technical Reports Server (NTRS)

    Miettinen, Kristo

    1993-01-01

    Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems for compression ratios from as little as three to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2D-DCT engine. Spectral relations in the data are exploited through block mean extraction followed by orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.

  19. Flux Compression in HTS Films

    NASA Astrophysics Data System (ADS)

    Mikheenko, P.; Colclough, M. S.; Chakalov, R.; Kawano, K.; Muirhead, C. M.

    We report on an experimental investigation of the effect of flux compression in superconducting YBa2Cu3Ox (YBCO) films and YBCO/CMR (Colossal Magnetoresistive) multilayers. Flux compression produces a positive magnetic moment (m) upon cooling in a field from above to below the critical temperature. We found the effect of compression in all measured films and multilayers. In accordance with theoretical calculations, m is proportional to the applied magnetic field. The amplitude of the effect depends on the cooling rate, which suggests inhomogeneous cooling as its origin. The positive moment is always very small, a fraction of a percent of the ideal diamagnetic response. A CMR layer in contact with HTS decreases the amplitude of the effect. The flux compression depends weakly on sample size, but is sensitive to its form and topology. The positive magnetic moment does not appear in bulk samples at low cooling rates. Our results show that the main features of the flux compression are very different from those of the paramagnetic Meissner effect observed in bulk high-temperature superconductors and Nb disks.

  20. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
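
Generic DAG compression of the kind combined here with PXML-specific techniques can be sketched by interning identical subtrees; the `(tag, children)` tuple representation below is an illustrative stand-in for (probabilistic) XML nodes:

```python
def dag_compress(tree, nodes=None):
    """Turn a tree into a DAG by interning identical subtrees: each
    distinct (tag, child ids) shape is stored once in `nodes` and reused,
    so repeated document fragments cost a single node plus references.
    Trees are (tag, [children]) tuples; returns (root_id, nodes)."""
    if nodes is None:
        nodes = {}
    tag, children = tree
    key = (tag, tuple(dag_compress(c, nodes)[0] for c in children))
    if key not in nodes:
        nodes[key] = len(nodes)    # first occurrence: assign a fresh id
    return nodes[key], nodes
```

For a document with two identical `b` children under an `a` root, the DAG stores only two distinct nodes instead of three.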

  1. Effects of Local Compression on Peroneal Nerve Function in Humans

    NASA Technical Reports Server (NTRS)

    Hargens, Alan R.; Botte, Michael J.; Swenson, Michael R.; Gelberman, Richard H.; Rhoades, Charles E.; Akeson, Wayne H.

    1993-01-01

    A new apparatus was developed to compress the anterior compartment selectively and reproducibly in humans. Thirty-five normal volunteers were studied to determine short-term thresholds of local tissue pressure that produce significant neuromuscular dysfunction. Local tissue fluid pressure adjacent to the deep peroneal nerve was elevated by the compression apparatus and continuously monitored for 2-3 h by the slit catheter technique. Elevation of tissue fluid pressure to within 35-40 mm Hg of diastolic blood pressure (approx. 40 mm Hg of in situ pressure in our subjects) elicited a consistent progression of neuromuscular deterioration including, in order, (a) gradual loss of sensation, as assessed by Semmes-Weinstein monofilaments, (b) subjective complaints, (c) reduced nerve conduction velocity, (d) decreased action potential amplitude of the extensor digitorum brevis muscle, and (e) motor weakness of muscles within the anterior compartment. Generally, higher intracompartmental pressures caused more rapid deterioration of neuromuscular function. In two subjects, when in situ compression levels were 0 and 30 mm Hg, normal neuromuscular function was maintained for 3 h. Threshold pressures for significant dysfunction were not always the same for each functional parameter studied, and the magnitudes of each functional deficit did not always correlate with compression level. This variable tolerance to elevated pressure emphasizes the need to monitor clinical signs and symptoms carefully in the diagnosis of compartment syndromes. The nature of the present studies was short term; longer term compression of myoneural tissues may result in dysfunction at lower pressure thresholds.

  2. Formulation development of metoprolol succinate and hydrochlorothiazide compression coated tablets.

    PubMed

    Shah, Ritesh; Parmar, Swatil; Patel, Hetal; Pandey, Sonia; Shah, Dinesh

    2013-12-01

    The purpose of the present research work was to design and optimize a compression coated tablet that provides immediate release of hydrochlorothiazide in the stomach and extended release of metoprolol succinate in the intestine. The compression coated tablet was prepared by the direct compression method and consisted of a metoprolol succinate extended-release core tablet and a hydrochlorothiazide immediate-release coat layer. A barrier coating of Hydroxy Propyl Methyl Cellulose (HPMC) E15LV was applied onto the core tablets to prevent burst release of metoprolol succinate in acidic medium. A 3² full factorial design was employed for optimization of the amount of polymers required to achieve extended release of the drug. The percentage drug release values at given times (Q3, Q6, Q10, Q22) were selected as dependent variables. Core and compression coated tablets were evaluated for pharmaco-technical parameters. In vitro drug release of the optimized batch was found to comply with Pharmacopoeial specifications. The desired release of metoprolol succinate was obtained by a suitable combination of HPMC, which has high gelling capacity, and polyethylene oxide, which has quick gelling capacity. The mechanism of release of metoprolol succinate from all batches was anomalous diffusion. The optimized batch was stable at accelerated conditions for up to 3 months. Thus, a compression coated tablet of metoprolol succinate and hydrochlorothiazide was successfully formulated. PMID:23017092

  3. Isentropic Compression of Multicomponent Mixtures of Fuels and Inert Gases

    NASA Technical Reports Server (NTRS)

    Barragan, Michelle; Julien, Howard L.; Woods, Stephen S.; Wilson, D. Bruce; Saulsberry, Regor L.

    2000-01-01

    In selected aerospace applications of the fuels hydrazine and monomethylhydrazine, conditions occur that can result in the isentropic compression of a multicomponent mixture of fuel and inert gas. One such example is when a driver gas such as helium comes out of solution and mixes with the fuel vapor being compressed. A second example is when product gas from an energetic device mixes with the fuel vapor being compressed. Thermodynamic analysis has shown that under isentropic compression, the fuels hydrazine and monomethylhydrazine must be treated as real fluids using appropriate equations of state: the Peng-Robinson equation of state for hydrazine and the Redlich-Kwong-Soave equation of state for monomethylhydrazine. The addition of an inert gas of variable quantity, input temperature, and pressure to the fuel compounds the problem for safety design or analysis. This work provides the appropriate thermodynamic analysis of isentropic compression for the two examples cited. In addition to an entropy balance describing the change of state, an enthalpy balance is required. The presence of multiple components in the system requires that appropriate mixing rules be identified and applied to the analysis. This analysis is not currently available.

  4. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data. PMID:17080858
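    The predict-then-encode idea behind such lossless floating-point schemes can be sketched as follows (an illustration under stated assumptions, not the authors' code): XOR each value's IEEE-754 bit pattern with a prediction, here simply the previous value, so that smooth data yields residuals with long runs of leading zero bits for the entropy coder:

```python
import struct

def float_bits(x):
    """Raw 64-bit IEEE-754 pattern of a double."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def xor_residuals(values):
    """Residual = XOR of each value's bits with the previous value's bits."""
    prev, out = 0, []
    for v in values:
        bits = float_bits(v)
        out.append(bits ^ prev)
        prev = bits
    return out

def decode(residuals):
    """Exact inverse: XOR restores every bit, so the scheme is lossless."""
    prev, vals = 0, []
    for r in residuals:
        prev ^= r
        vals.append(struct.unpack("<d", struct.pack("<Q", prev))[0])
    return vals
```

    A real coder would replace the previous-value predictor with a data-dependent one and entropy-code the residuals; the point here is only that prediction plus XOR preserves exact bit patterns, unlike grid quantization.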

  5. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  6. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  7. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  8. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  9. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  10. Effects of shock structure on temperature field in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin; Chen, Shiyi

    2014-11-01

    Effects of shock structure on the temperature field in compressible turbulence were investigated. Small-scale shocklets and large-scale shock waves appeared in flows driven by solenoidal and compressive forcings (SFT and CFT, respectively). In SFT the temperature had a Kolmogorov spectrum and ramp-cliff structures, while in CFT it obeyed a Burgers spectrum and was dominated by large-scale rarefaction and compression. The power-law exponents for the p.d.f. of large negative dilatation were -2.5 in SFT and -3.5 in CFT, approximately corresponding to model results. The isentropic approximation of thermodynamic variables showed that in SFT, the deviation from isentropy was reinforced as the turbulent Mach number increased. At similar turbulent Mach numbers, the variables in CFT exhibited more anisentropic behavior. The transport of temperature was increased by the small-scale viscous dissipation and the large-scale pressure-dilatation. The distribution of positive and negative components of pressure-dilatation confirmed the mechanism of negligible pressure-dilatation at small scales. Further, the positive skewness of the p.d.f.s of pressure-dilatation implied that the conversion from kinetic to internal energy by compression was more intense than the opposite process by rarefaction.

  11. Motor commands induce time compression for tactile stimuli.

    PubMed

    Tomassini, Alice; Gori, Monica; Baud-Bovy, Gabriel; Sandini, Giulio; Morrone, Maria Concetta

    2014-07-01

    Saccades cause compression of visual space around the saccadic target, and also a compression of time, both phenomena thought to be related to the problem of maintaining saccadic stability (Morrone et al., 2005; Burr and Morrone, 2011). Interestingly, similar phenomena occur at the time of hand movements, when tactile stimuli are systematically mislocalized in the direction of the movement (Dassonville, 1995; Watanabe et al., 2009). In this study, we measured whether hand movements also cause an alteration of the perceived timing of tactile signals. Human participants compared the temporal separation between two pairs of tactile taps while moving their right hand in response to an auditory cue. The first pair of tactile taps was presented at variable times with respect to movement with a fixed onset asynchrony of 150 ms. Two seconds after test presentation, when the hand was stationary, the second pair of taps was delivered with a variable temporal separation. Tactile stimuli could be delivered to either the right moving or left stationary hand. When the tactile stimuli were presented to the motor effector just before and during movement, their perceived temporal separation was reduced. The time compression was effector-specific, as perceived time was veridical for the left stationary hand. The results indicate that time intervals are compressed around the time of hand movements. As for vision, the mislocalizations of time and space for touch stimuli may be consequences of a mechanism attempting to achieve perceptual stability during tactile exploration of objects, suggesting common strategies within different sensorimotor systems. PMID:24990936

  12. Evaluation of nonlinear frequency compression: Clinical outcomes

    PubMed Central

    Glista, Danielle; Scollie, Susan; Bagatto, Marlene; Seewald, Richard; Parsa, Vijay; Johnson, Andrew

    2009-01-01

    This study evaluated prototype multichannel nonlinear frequency compression (NFC) signal processing on listeners with high-frequency hearing loss. This signal processor applies NFC above a cut-off frequency. The participants were hearing-impaired adults (13) and children (11) with sloping, high-frequency hearing loss. Multiple outcome measures were repeated using a modified withdrawal design. These included speech sound detection, speech recognition, and self-reported preference measures. Group level results provide evidence of significant improvement of consonant and plural recognition when NFC was enabled. Vowel recognition did not change significantly. Analysis of individual results allowed for exploration of individual factors contributing to benefit received from NFC processing. Findings suggest that NFC processing can improve high frequency speech detection and speech recognition ability for adult and child listeners. Variability in individual outcomes related to factors such as degree and configuration of hearing loss, age of participant, and type of outcome measure. PMID:19504379

  13. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
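    The core idea can be sketched with NumPy's Chebyshev utilities (a hedged illustration, not the patented implementation; the telemetry-like signal below is hypothetical): fit a low-degree Chebyshev series to a sampled series and keep only the coefficients:

```python
import numpy as np

# 256 samples of a hypothetical telemetry trace on [-1, 1]
t = np.linspace(-1.0, 1.0, 256)
signal = np.exp(-t) * np.cos(4.0 * t)

# Least-squares Chebyshev fit: 16 coefficients stand in for 256 samples
coeffs = np.polynomial.chebyshev.chebfit(t, signal, deg=15)
recon = np.polynomial.chebyshev.chebval(t, coeffs)

ratio = signal.size / coeffs.size          # 16x fewer numbers to store
max_err = np.max(np.abs(recon - signal))   # reconstruction error
```

    For smooth signals the Chebyshev coefficients decay rapidly, so truncating the series costs little accuracy; the compression is lossy only by the tail of the discarded coefficients.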

  14. Compressive behavior of fine sand.

    SciTech Connect

    Martin, Bradley E.; Kabir, Md. E.; Song, Bo; Chen, Wayne

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic compression, and smaller still after subsequent dynamic axial loading.

  15. Data compression in digitized lines

    SciTech Connect

    Thapa, K.

    1990-04-01

    The problem of data compression is very important in digital photogrammetry, computer assisted cartography, and GIS/LIS. In addition, it is also applicable in many other fields such as computer vision, image processing, pattern recognition, and artificial intelligence. Consequently, there are many algorithms available to solve this problem but none of them are considered to be satisfactory. In this paper, a new method of finding critical points in a digitized curve is explained. This technique, based on the normalized symmetric scattered matrix, is good for both critical points detection and data compression. In addition, the critical points detected by this algorithm are compared with those by zero-crossings. 8 refs.

  16. Simulating Ramp Compression of Diamond

    NASA Astrophysics Data System (ADS)

    Godwal, B. K.; Gonzàlez-Cataldo, F. J.; Jeanloz, R.

    2014-12-01

    We model ramp compression, shock-free dynamic loading, intended to generate a well-defined equation of state that achieves higher densities and lower temperatures than the corresponding shock Hugoniot. Ramp loading ideally approaches isentropic compression for a fluid sample, so is useful for simulating the states deep inside convecting planets. Our model explicitly evaluates the deviation of ramp from "quasi-isentropic" compression. Motivated by recent ramp-compression experiments to 5 TPa (50 Mbar), we calculate the room-temperature isotherm of diamond using first-principles density functional theory and molecular dynamics, from which we derive a principal isentrope and Hugoniot by way of the Mie-Grüneisen formulation and the Hugoniot conservation relations. We simulate ramp compression by imposing a uniaxial strain that then relaxes to an isotropic state, evaluating the change in internal energy and stress components as the sample relaxes toward isotropic strain at constant volume; temperature is well defined for the resulting hydrostatic state. Finally, we evaluate multiple shock- and ramp-loading steps to compare with single-step loading to a given final compression. Temperatures calculated for single-step ramp compression are less than Hugoniot temperatures only above 500 GPa, the two being close to each other at lower pressures. We obtain temperatures of 5095 K and 6815 K for single-step ramp loading to 600 and 800 GPa, for example, which compares well with values of ~5100 K and ~6300 K estimated from previous experiments [PRL,102, 075503, 2009]. At 800 GPa, diamond is calculated to have a temperature of 500 K along the isentrope; 900 K under multi-shock compression (asymptotic result after 8-10 steps); and 3400 K under 3-step ramp loading (200-400-800 GPa). Asymptotic multi-step shock and ramp loading are indistinguishable from the isentrope, within present uncertainties. Our simulations quantify the manner in which current experiments can simulate the

  17. GPU-accelerated compressive holography.

    PubMed

    Endo, Yutaka; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2016-04-18

    In this paper, we show fast signal reconstruction for compressive holography using a graphics processing unit (GPU). We implemented a fast iterative shrinkage-thresholding algorithm on a GPU to solve the ℓ1 and total variation (TV) regularized problems that are typically used in compressive holography. Since the algorithm is highly parallel, GPUs can compute it efficiently by data-parallel computing. For better performance, our implementation exploits the structure of the measurement matrix to compute the matrix multiplications. The results show that GPU-based implementation is about 20 times faster than CPU-based implementation. PMID:27137282
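    The iterative shrinkage-thresholding structure can be sketched as follows (a CPU/NumPy illustration of the ℓ1 case, not the authors' GPU code; the matrix A, weight lam, and step count are hypothetical parameters):

```python
import numpy as np

def ista(A, y, lam=0.1, steps=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding: a gradient step followed by a soft
    threshold. Both steps are dense linear algebra, which is why a
    data-parallel GPU implementation pays off."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```

    The fast (FISTA) variant adds a momentum term on top of this loop, and the TV-regularized problem replaces the soft threshold with a proximal step for total variation; exploiting the structure of the measurement matrix, as the paper does, accelerates the two matrix products.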

  18. Analyzing Ramp Compression Wave Experiments

    NASA Astrophysics Data System (ADS)

    Hayes, D. B.

    2007-12-01

    Isentropic compression of a solid to 100's of GPa by a ramped, planar compression wave allows measurement of material properties at high strain and at modest temperature. Introduction of a measurement plane disturbs the flow, requiring special analysis techniques. If the measurement interface is windowed, the unsteady nature of the wave in the window requires special treatment. When the flow is hyperbolic the equations of motion can be integrated backward in space in the sample to a region undisturbed by the interface interactions, fully accounting for the untoward interactions. For more complex materials like hysteretic elastic/plastic solids or phase changing material, hybrid analysis techniques are required.

  19. Extended testing of compression distillation.

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1972-01-01

    During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering useable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit, developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes, to assure the recovery of sterile potable water from urine and treated urinal flush water.

  20. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
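    Double delta coding amounts to transmitting second differences of adjacent picture elements; a minimal sketch of the idea (illustrative, not the paper's coder):

```python
def double_delta_encode(pixels):
    """Second differences of a scan line. Adjacent picture elements are
    highly correlated, so second differences cluster near zero and
    entropy-code cheaply."""
    first = [b - a for a, b in zip([0] + pixels[:-1], pixels)]
    return [b - a for a, b in zip([0] + first[:-1], first)]

def double_delta_decode(deltas):
    """Invert by two cumulative sums."""
    first, acc = [], 0
    for d in deltas:
        acc += d
        first.append(acc)
    pixels, acc = [], 0
    for f in first:
        acc += f
        pixels.append(acc)
    return pixels
```

    A background-skipping step would then run-length encode the long zero runs that flat image regions produce in the difference stream.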

  1. Compressing the Inert Doublet Model

    DOE PAGESBeta

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-16

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. In conclusion, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  2. Compressing the Inert Doublet Model

    SciTech Connect

    Blinov, Nikita; Morrissey, David E.; de la Puente, Alejandro

    2015-10-29

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. Furthermore, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  3. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and is thus suitable for the fluorescence readout mode. A 2-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  4. Astronomical context coder for image compression

    NASA Astrophysics Data System (ADS)

    Pata, Petr; Schindler, Jaromir

    2015-10-01

    Recent lossless still image compression formats are powerful tools for compression of all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, for a compression algorithm to be successful, it has to take full advantage of the coded image's properties. Astronomical data form a special class of images; they have, among general image properties, some specific characteristics that are unique. If a new coder correctly exploits knowledge of these special properties, it should achieve superior performance on this specific class of images, at least in terms of compression ratio. In this work, a novel lossless astronomical image data compression method is presented. The achievable compression ratio of the new coder is compared to the theoretical lossless compression limit and to recent compression standards from astronomy and general multimedia.

  5. Management-oriented analysis of sediment yield time compression

    NASA Astrophysics Data System (ADS)

    Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed

    2016-04-01

    The understanding of inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield is often highly dependent on the occurrence of a few large events that produce the majority of sediments, as in the Mediterranean. This phenomenon is referred to as time compression, and the relevance of its consideration grows with the increase in magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, providing high resolution but demanding analysis (in terms of data amount, required data precision, and methods). In order to provide an alternative simplified approach, the monthly and yearly time compressions were evaluated in eight Mediterranean catchments (of the R-OSMed network) representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27100 Mg•km-2•a-1, and the monthly sediment yield between 0 and ~11600 Mg•km-2•month-1. The catchments' sediment yield was unequally distributed at inter- and intra-annual scales, and large differences were observed between the catchments. Two types of time compression were distinguished: (i) inter-annual (based on annual values) and (ii) intra-annual (based on monthly values). Four different rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff, or sediment yield; (ii) low time compression of rainfall and runoff, but high compression of sediment yield; (iii) low compression of rainfall and high compression of runoff and sediment yield; and (iv) low, medium, and high compression of rainfall, runoff, and sediment yield, respectively. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the latter two were present. This implies that high sediment yields occurred in

  6. Compression fractures of the back

    MedlinePlus

    ... Meirhaeghe J, et al. Efficacy and safety of balloon kyphoplasty compared with non-surgical care for vertebral compression fracture (FREE): a randomised controlled trial. Lancet . 2009;373(9668):1016-24. PMID: 19246088 www.ncbi.nlm.nih.gov/pubmed/19246088 .

  7. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single-frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled-quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple, consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm-specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  8. COMPRESSIBLE FLOW, ENTRAINMENT, AND MEGAPLUME

    EPA Science Inventory

    It is generally believed that low Mach number, i.e., low-velocity, flow may be assumed to be incompressible flow. Under steady-state conditions, an exact equation of continuity may then be used to show that such flow is non-divergent. However, a rigorous, compressible fluid-dynam...

  9. Teaching Time-Space Compression

    ERIC Educational Resources Information Center

    Warf, Barney

    2011-01-01

    Time-space compression shows students that geographies are plastic, mutable and forever changing. This paper justifies the need to teach this topic, which is rarely found in undergraduate course syllabi. It addresses the impacts of transportation and communications technologies to explicate its dynamics. In summarizing various conceptual…

  10. Hyperspectral imaging using compressed sensing

    NASA Astrophysics Data System (ADS)

    Ramirez I., Gabriel Eduardo; Manian, Vidya B.

    2012-06-01

    Compressed sensing (CS) has attracted a lot of attention in recent years as a promising signal processing technique that exploits a signal's sparsity to reduce its size. It allows for simple compression that does not require a lot of additional computational power, and would allow physical implementation at the sensor using spatial light multiplexers based on the Texas Instruments (TI) digital micro-mirror device (DMD). The DMD can be used as a random measurement matrix: reflecting the image off the DMD is the equivalent of an inner product between the image's individual pixels and the measurement matrix. CS, however, is asymmetrical, meaning that the signal's recovery or reconstruction from the measurements does require a higher level of computation. This makes the prospect of working with the compressed version of the signal in implementations such as detection or classification much more efficient. If an initial analysis shows nothing of interest, the signal need not be reconstructed. Many hyperspectral image applications are precisely focused on these areas, and would greatly benefit from a compression technique like CS that could help reduce the light sensor down to a single pixel, lowering the costs associated with the cameras while reducing the large amounts of data generated by all the bands. The present paper shows an implementation of CS using a single-pixel hyperspectral sensor and compares the reconstructed images to those obtained through the use of a regular sensor.

  11. Data compression preserving statistical independence

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.; Rice, W. M.

    1973-01-01

    The purpose of this study was to determine the optimum points of evaluation of data compressed by means of polynomial smoothing. It is shown that a set Y of m statistically independent observations Y(t₁), Y(t₂), ..., Y(tₘ) of a quantity X(t), which can be described by an (n-1)th-degree polynomial in time, may be represented by a set Z of n statistically independent compressed observations Z(τ₁), Z(τ₂), ..., Z(τₙ), such that the compressed set Z has the same information content as the observed set Y. The times τ₁, τ₂, ..., τₙ are the zeros of an nth-degree polynomial Pₙ, to whose definition and properties the bulk of this report is devoted. The polynomials Pₙ are defined as functions of the observation times t₁, t₂, ..., tₘ, and it is interesting to note that if the observation times are continuously distributed, the polynomials Pₙ degenerate to Legendre polynomials. The proposed data compression scheme is a little more complex than those usually employed, but has the advantage of preserving all the information content of the original observations.
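    The compress-by-fitting scheme can be illustrated with a small sketch; the evaluation nodes below are a hypothetical stand-in (Chebyshev points mapped into the observation span) for the optimal polynomial zeros derived in the report:

```python
import numpy as np

def compress(times, obs, n):
    """Fit a degree-(n-1) polynomial to m observations by least squares,
    then keep only its values at n evaluation times."""
    coeffs = np.polyfit(times, obs, n - 1)
    k = np.arange(1, n + 1)
    tau = np.cos((2 * k - 1) * np.pi / (2 * n))            # nodes on [-1, 1]
    mid = (times.min() + times.max()) / 2
    half = (times.max() - times.min()) / 2
    tau = mid + half * tau                                  # map into the span
    return tau, np.polyval(coeffs, tau)

def expand(tau, z, t):
    """Recover the unique degree-(n-1) polynomial through the n samples."""
    return np.polyval(np.polyfit(tau, z, len(tau) - 1), t)
```

    Because a degree-(n-1) polynomial is determined by n values, the n compressed observations carry the full fitted signal; the report's contribution is choosing the evaluation times so the compressed observations are also statistically independent.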

  12. Prelude to compressed baryonic matter

    NASA Astrophysics Data System (ADS)

    Wilczek, Frank

    Why study compressed baryonic matter, or more generally strongly interacting matter at high densities and temperatures? Most obviously, because it's an important piece of Nature. The whole universe, in the early moments of the big bang, was filled with the stuff. Today, highly compressed baryonic matter occurs in neutron stars and during crucial moments in the development of supernovae. Also, working to understand compressed baryonic matter gives us new perspectives on ordinary baryonic matter, i.e. the matter in atomic nuclei. But perhaps the best answer is a variation on the one George Mallory gave, when asked why he sought to scale Mount Everest: Because, as a prominent feature in the landscape of physics, it's there. Compressed baryonic matter is a material we can produce in novel, challenging experiments that probe new extremes of temperature and density. On the theoretical side, it is a mathematically well-defined domain with a wealth of novel, challenging problems, as well as wide-ranging connections. Its challenges have already inspired a lot of very clever work, and revealed some wonderful surprises, as documented in this volume.

  13. Culture: Copying, Compression, and Conventionality

    ERIC Educational Resources Information Center

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, 2008; Smith, Tamariz, & Kirby, 2013). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning…

  14. Device Assists Cardiac Chest Compression

    NASA Technical Reports Server (NTRS)

    Eichstadt, Frank T.

    1995-01-01

    Portable device facilitates effective and prolonged cardiac resuscitation by chest compression. Developed originally for use in absence of gravitation, also useful in terrestrial environments and situations (confined spaces, water rescue, medical transport) not conducive to standard manual cardiopulmonary resuscitation (CPR) techniques.

  15. Perceptually lossy compression of documents

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.; Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.

    1997-06-01

    The main cost of owning a facsimile machine consists of the telephone charges for communications; thus short transmission times are a key feature for facsimile machines. Similarly, on a packet-routed service such as the Internet, a low number of packets is essential to avoid operator wait times. Concomitantly, user expectations have increased considerably. In facsimile, the switch from binary to full color increases the data size by a factor of 24. On the Internet, the switch from plain text American Standard Code for Information Interchange (ASCII) encoded files to files marked up in the Hypertext Markup Language (HTML) with ample embedded graphics has increased the size of transactions by several orders of magnitude. A common compression method for raster files in these applications is the Joint Photographic Experts Group (JPEG) method, because efficient implementations are readily available. In this method the implementors design the discrete quantization tables (DQT) and the Huffman tables (HT) to maximize the compression factor while maintaining the introduced artifacts at the threshold of perceptual detectability. Unfortunately the achieved compression rates are unsatisfactory for applications such as color facsimile and World Wide Web (W3) browsing. We present a design methodology for image-independent DQTs that, while producing perceptually lossy data, does not impair the reading performance of users. Combined with a text sharpening algorithm that compensates for scanning device limitations, the methodology presented in this paper allows us to achieve compression ratios near 1:100.

  16. Volatile Emissions from Compressed Tissue

    PubMed Central

    Dini, Francesca; Capuano, Rosamaria; Strand, Tillan; Ek, Anna-Christina; Lindgren, Margareta; Paolesse, Roberto; Di Natale, Corrado; Lundström, Ingemar

    2013-01-01

    Since almost every fifth patient treated in hospital care develops pressure ulcers, early identification of risk is important. A non-invasive method for the elucidation of endogenous biomarkers related to pressure ulcers could be an excellent tool for this purpose. We therefore found it of interest to determine if there is a difference in the emissions of volatiles from compressed and uncompressed tissue. The ultimate goal is to find a non-invasive method to obtain an early warning for the risk of developing pressure ulcers for bed-ridden persons. Chemical analysis of the emissions, collected in compresses, was made with gas chromatography-mass spectrometry (GC-MS) and with a chemical sensor array, the so-called electronic nose. It was found that the emissions from healthy and hospitalized persons differed significantly irrespective of the site. Within each group there was a clear difference between the compressed and uncompressed sites. However, no peaks could be identified with certainty as markers of compression. Nonetheless, different compounds connected to the application of local mechanical pressure were found. The results obtained with GC-MS reveal the complexity of the VOC composition; thus an array of non-selective chemical sensors seems to be a suitable choice for the analysis of skin emissions from compressed tissues, and it may represent a practical instrument for bedside diagnostics. The results show that the adopted electronic noses are likely sensitive to the total amount of the emission rather than to its composition. The development of a gas sensor-based device then requires the design of sensor receptors adequate to detect the VOC bouquet typical of pressure. This preliminary experiment points to the need for studies in which each given person is followed for a long time in a ward in order to detect specific VOC pattern changes signalling the occurrence of ulcers. PMID:23874929

  17. Two algorithms for compressing noise like signals

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Akopian, David

    2005-05-01

    Compression is a technique used to encode data so that it needs less storage/memory space. Compression of random data is vital in cases where we need to preserve data that has low redundancy and whose power spectrum is close to that of noise. Noisy signals used in various data hiding schemes have low redundancy and a low energy spectrum. Therefore, upon compression with lossy algorithms the low energy spectrum might get lost. Since LSB plane data has low redundancy, lossless compression algorithms like run length, Huffman coding and arithmetic coding are ineffective in providing a good compression ratio. These problems motivated the development of a new class of compression algorithms for compressing noisy signals. In this paper, we introduce two new compression techniques that compress random, noise-like data with reference to a known pseudo-noise sequence generated using a key. In addition, we develop a representation model for digital media using pseudo-noise signals. In simulation, a comparison between our methods and existing compression techniques like run length shows that run length cannot compress random data but the proposed algorithms can. Furthermore, the proposed algorithms can be extended to all kinds of random data used in various applications.
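    A toy illustration (not the authors' algorithm) of why run-length coding fails on noise-like data yet succeeds once the data is referenced against a known, key-generated pseudo-noise sequence. The key, sizes, and bit-flip positions are all illustrative.

```python
import random

def run_length_encode(bits):
    """Encode a bit sequence as (bit, run-length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

# random.Random stands in for a keyed pseudo-noise (PN) generator that
# sender and receiver both possess.
rng = random.Random(1234)
pn = [rng.randint(0, 1) for _ in range(4096)]

# A noise-like payload: the PN sequence with a handful of embedded changes.
payload = pn.copy()
for i in (100, 700, 1500, 2200, 3000):
    payload[i] ^= 1

direct = run_length_encode(payload)               # many short runs: no gain
residual = [a ^ b for a, b in zip(payload, pn)]   # receiver regenerates PN
referenced = run_length_encode(residual)          # mostly zeros: few runs
print(len(direct), "runs directly vs", len(referenced), "runs referenced")
```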

  18. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
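    The adaptive selection step can be sketched with Rice codes, a common family of optional variable-length codes for nonnegative integers; the code option range and sample blocks below are illustrative, not the Rice coder's actual option set.

```python
def rice_length(v, k):
    # Bits needed to Rice-code nonnegative integer v with parameter k:
    # unary quotient (v >> k), one stop bit, and k remainder bits.
    return (v >> k) + 1 + k

def best_option(block, ks=range(0, 8)):
    # Adaptive step: pick whichever optional code yields the fewest bits.
    costs = {k: sum(rice_length(v, k) for v in block) for k in ks}
    return min(costs, key=costs.get)

low_entropy = [0, 1, 0, 2, 1, 0, 0, 1]            # small mapped values
high_entropy = [40, 55, 37, 62, 48, 51, 59, 44]   # large mapped values
print(best_option(low_entropy), best_option(high_entropy))
```

The preprocessor's mapping to nonnegative integers is assumed to have already happened; only the code-selection logic is shown.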

  19. Length-Limited Data Transformation and Compression

    SciTech Connect

    Senecal, J G

    2005-05-17

    Scientific computation is used for the simulation of increasingly complex phenomena, and generates data sets of ever increasing size, often on the order of terabytes. All of this data creates difficulties. Several problems that have been identified are (1) the inability to effectively handle the massive amounts of data created, (2) the inability to get the data off the computer and into storage fast enough, and (3) the inability of a remote user to easily obtain a rendered image of the data resulting from a simulation run. This dissertation presents several techniques that were developed to address these issues. The first is a prototype bin coder based on variable-to-variable length codes. The codes utilized are created through a process of parse tree leaf merging, rather than the common practice of leaf extension. This coder is very fast and its compression efficiency is comparable to other state-of-the-art coders. The second contribution is the Piecewise-Linear Haar (PLHaar) transform, a reversible n-bit to n-bit wavelet-like transform. PLHaar is simple to implement, ideal for environments where transform coefficients must be kept the same size as the original data, and is the only n-bit to n-bit transform suitable for both lossy and lossless coding.

  20. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    SciTech Connect

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run as low as 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than at slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment, with the best performance in the 75% to 80% range.
The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  1. Variable delivery, fixed displacement pump

    SciTech Connect

    Sommars, Mark F.

    2001-01-01

    A variable delivery, fixed displacement pump comprises a plurality of pistons reciprocated within corresponding cylinders in a cylinder block. The pistons are reciprocated by rotation of a fixed angle swash plate connected to the pistons. The pistons and cylinders cooperate to define a plurality of fluid compression chambers, each having a delivery outlet. A vent port is provided from each fluid compression chamber to vent fluid therefrom during at least a portion of the reciprocal stroke of the piston. Each piston and cylinder combination cooperates to close the associated vent port during another portion of the reciprocal stroke so that fluid is then pumped through the associated delivery outlet. The delivery rate of the pump is varied by adjusting the axial position of the swash plate relative to the cylinder block, which varies the duration of the piston stroke during which the vent port is closed.

  2. Measurement and control for mechanical compressive stress

    NASA Astrophysics Data System (ADS)

    Li, Qing; Ye, Guang; Pan, Lan; Wu, Xiushan

    2001-12-01

    At present, an indirect method is applied to measuring and controlling mechanical compressive stress: the rotating torque of a screw is measured and controlled with a torque transducer as the screw is tightened. Because the friction coefficient differs between every screw-cap and washer and between screw threads, the compressive stress of every screw may differ when the machine is assembled. Accurate measurement and control of mechanical compressive stress is therefore realized by measuring the compressive stress directly. The authors present a comparative study of compressive stress and rotating torque. The structure and working principle of a special washer-type transducer are discussed in detail. A special instrument cooperates with the washer-type transducer to measure and control mechanical compressive stress, and a control strategy based on the rate of change of compressive stress is proposed to realize accurate control.

  3. Infraspinatus muscle atrophy from suprascapular nerve compression.

    PubMed

    Cordova, Christopher B; Owens, Brett D

    2014-02-01

    Muscle weakness without pain may signal a nerve compression injury. Because these injuries should be identified and treated early to prevent permanent muscle weakness and atrophy, providers should consider suprascapular nerve compression in patients with shoulder muscle weakness. PMID:24463748

  4. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data, is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to represent the information in the image data more efficiently. The format of the preprocessed data enables simple look-up-table decoding and direct use of the extracted features, reducing user computation for either image reconstruction or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data that describe the spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multispectral images from LANDSAT and other sources.
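    A minimal sketch of the clustering-plus-feature-map idea on synthetic stand-in pixels; the class vectors and the byte accounting are illustrative assumptions, not LANDSAT specifics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for multispectral pixels: 4-band vectors scattered around three
# spectral classes (the "cluster features").
classes = np.array([[10., 40., 30., 90.],
                    [80., 20., 60., 10.],
                    [50., 50., 50., 50.]])
pixels = classes[rng.integers(0, 3, 1000)] + rng.normal(0.0, 1.0, (1000, 4))

# Feature extraction: assign every pixel the nearest cluster feature.
dists = ((pixels[:, None, :] - classes[None, :, :]) ** 2).sum(axis=2)
feature_map = dists.argmin(axis=1)        # one scalar per picture element

# The feature map plus the small codebook replaces the raw 4-band data.
raw_bytes = pixels.size * 4               # 4-byte samples, illustratively
compressed_bytes = feature_map.size * 1 + classes.size * 4
print(raw_bytes, "->", compressed_bytes)
```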

  5. Lower body predictors of glenohumeral compressive force in high school baseball pitchers.

    PubMed

    Keeley, David W; Oliver, Gretchen D; Dougherty, Christopher P; Torry, Michael R

    2015-06-01

    The purpose of this study was to better understand how lower body kinematics relate to peak glenohumeral compressive force and develop a regression model accounting for variability in peak glenohumeral compressive force. Data were collected for 34 pitchers. Average peak glenohumeral compressive force was 172% ± 33% of body weight (1334.9 ± 257.5 N). Correlation coefficients revealed 5 kinematic variables correlated to peak glenohumeral compressive force (P < .01, α = .025). Regression models indicated 78.5% of the variance in peak glenohumeral compressive force (R2 = .785, P < .01) was explained by stride length, lateral pelvis flexion at maximum external rotation, and axial pelvis rotation velocity at release. These results indicate peak glenohumeral compressive force increases with a combination of decreased stride length, increased pelvic tilt at maximum external rotation toward the throwing arm side, and increased pelvis axial rotation velocity at release. Thus, it may be possible to decrease peak glenohumeral compressive force by optimizing the movements of the lower body while pitching. Focus should be on both training and conditioning the lower extremity in an effort to increase stride length, increase pelvis tilt toward the glove hand side at maximum external rotation, and decrease pelvis axial rotation at release. PMID:25734579

  6. Apparatus for measuring tensile and compressive properties of solid materials at cryogenic temperatures

    DOEpatents

    Gonczy, John D.; Markley, Finley W.; McCaw, William R.; Niemann, Ralph C.

    1992-01-01

    An apparatus for evaluating the tensile and compressive properties of material samples at very low or cryogenic temperatures employs a stationary frame and a dewar mounted below the frame. A pair of coaxial cylindrical tubes extend downward towards the bottom of the dewar. A compressive or tensile load is generated hydraulically and is transmitted by the inner tube to the material sample. The material sample is located near the bottom of the dewar in a liquid refrigerant bath. The apparatus employs a displacement measuring device, such as a linear variable differential transformer, to measure the deformation of the material sample relative to the amount of compressive or tensile force applied to the sample.

  7. Optimum number of technical replicates for the measurement of compression of lamb meat.

    PubMed

    Hoban, J M; van de Ven, R J; Hopkins, D L

    2016-05-01

    Up to six (average 4.63) replicate compression values were collected on cooked m. semimembranosus of lambs that had been raised at six sites across southern Australia (n=1817). Measurements on each sample were made with one of two Lloyd Texture analyser machines, with each machine having a 0.63 cm diameter plunger. Based on a log normal model with common variance on the log scale for within sample replicate results, estimates of the within sample variability of compression values were obtained, resulting in a quality control procedure for compression testing based on the coefficient of variation. PMID:26775151
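    A minimal sketch of a replicate quality-control check based on the coefficient of variation; the readings and the acceptance threshold are illustrative, not the paper's published limit.

```python
import statistics

def replicate_cv(values):
    """Coefficient of variation of within-sample technical replicates."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical compression readings for replicates of one cooked sample.
replicates = [14.2, 15.1, 13.8, 14.6]
cv = replicate_cv(replicates)
print(f"CV = {cv:.3f}")
# Accept the sample if the replicates agree; re-test if CV exceeds the
# (illustrative) QC limit of 0.10.
assert cv < 0.10
```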

  8. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral image is considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  9. Multiphase, Multicomponent Compressibility in Geothermal Reservoir Engineering

    SciTech Connect

    Macias-Chapa, L.; Ramey, H.J. Jr.

    1987-01-20

    Coefficients of compressibility below the bubble point were computed with a thermodynamic model for single and multicomponent systems. Results showed coefficients of compressibility below the bubble point larger than the gas coefficient of compressibility at the same conditions. Two-phase compressibilities computed in the conventional way are underestimated and may lead to errors in reserve estimation and well test analysis. 10 refs., 9 figs.
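    The coefficient in question is the isothermal compressibility c = -(1/V)(dV/dp). A finite-difference sketch, using an ideal-gas volume model as an illustrative stand-in for a reservoir fluid model (for which c = 1/p exactly):

```python
def compressibility(volume, p, dp=1.0e-3):
    """Coefficient of compressibility c = -(1/V) dV/dp,
    estimated by a central finite difference."""
    v0 = volume(p)
    dv_dp = (volume(p + dp) - volume(p - dp)) / (2 * dp)
    return -dv_dp / v0

nRT = 100.0                        # fixed amount and temperature (illustrative)
gas_volume = lambda p: nRT / p     # ideal-gas isotherm: V = nRT / p
p = 50.0
c = compressibility(gas_volume, p)
print(c)                           # ≈ 1/p = 0.02
```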

  10. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios way beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware. PMID:24524158
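    A greedy toy version of referential compression (not FRESCO's actual algorithm): the input is stored as copy operations against the reference plus literals for the differences, and decompression replays the operations. Sequences and match parameters are illustrative.

```python
def referential_compress(target, reference, min_match=4):
    """Greedy sketch: emit ("copy", ref_pos, length) for substrings found
    in the reference, and ("lit", char) for everything else."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        # Longest match of target[i:] inside the reference (naive search).
        for length in range(len(target) - i, min_match - 1, -1):
            pos = reference.find(target[i:i + length])
            if pos != -1:
                best_pos, best_len = pos, length
                break
        if best_len >= min_match:
            ops.append(("copy", best_pos, best_len))
            i += best_len
        else:
            ops.append(("lit", target[i]))
            i += 1
    return ops

def referential_decompress(ops, reference):
    out = []
    for op in ops:
        if op[0] == "copy":
            _, pos, length = op
            out.append(reference[pos:pos + length])
        else:
            out.append(op[1])
    return "".join(out)

reference = "ACGTACGTTTGACCA" * 4
target = reference[:30] + "G" + reference[31:]   # one substitution
ops = referential_compress(target, reference)
print(len(ops), "ops for", len(target), "characters")
```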

  11. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, and proportioning new mixtures and for the quality assurance. Regression techniques are most widely used for prediction tasks where relationship between the independent variables and dependent (prediction) variable is identified. The accuracy of the regression techniques for prediction can be improved if clustering can be used along with regression. Clustering along with regression will ensure the more accurate curve fitting between the dependent and independent variables. In this work cluster regression technique is applied for estimating the compressive strength of the concrete and a novel state of the art is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression ensures less prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group the similar characteristics concrete data and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting compressive strength of concrete; also fuzzy clustering algorithm C-means performs better than K-means algorithm. PMID:25374939
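    The two-stage idea can be sketched with a simple one-dimensional k-means followed by per-cluster linear regression; the data below is a synthetic stand-in for concrete mix measurements, and the feature name is only an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: one feature (say, a mix ratio) and a strength-like
# response whose slope differs between two regimes.
x = rng.uniform(0.0, 1.0, 200)
y = np.where(x < 0.5, 20 + 10 * x, 35 - 15 * (x - 0.5))
y = y + rng.normal(0.0, 0.3, 200)

def kmeans_1d(x, k=2, iters=20):
    """Stage 1: hard clustering of the feature into k groups."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def fit_per_cluster(x, y, labels):
    """Stage 2: a separate linear regression inside each cluster."""
    return {j: np.polyfit(x[labels == j], y[labels == j], 1)
            for j in np.unique(labels)}

labels = kmeans_1d(x)
models = fit_per_cluster(x, y, labels)
pred = np.array([np.polyval(models[l], xi) for l, xi in zip(labels, x)])
global_pred = np.polyval(np.polyfit(x, y, 1), x)   # regression without clustering
print(np.mean((y - pred) ** 2), "vs", np.mean((y - global_pred) ** 2))
```

On regime-structured data like this, the cluster-then-regress fit yields a lower squared error than the single global regression, which is the paper's central claim.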

  13. 29 CFR 1926.803 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 8 2010-07-01 2010-07-01 false Compressed air. 1926.803 Section 1926.803 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Underground Construction, Caissons, Cofferdams and Compressed Air § 1926.803 Compressed...

  14. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  15. General-Purpose Compression for Efficient Retrieval.

    ERIC Educational Resources Information Center

    Cannane, Adam; Williams, Hugh E.

    2001-01-01

    Discusses compression of databases that reduces space requirements and retrieval times; considers compression of documents in text databases based on semistatic modeling with words; and proposes a scheme for general purpose compression that can be applied to all types of data stored in large collections. (Author/LRW)

  16. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  17. Growing concern following compression mammography.

    PubMed

    van Netten, Johannes Pieter; Hoption Cann, Stephen; Thornton, Ian; Finegan, Rory

    2016-01-01

    A patient without clinical symptoms had a mammogram in October 2008. The procedure caused intense persistent pain, swelling and development of a haematoma following mediolateral left breast compression. Three months later, a 9×11 cm mass developed within the same region. Core biopsies showed a necrotizing high-grade ductal carcinoma, with a high mitotic index. Owing to the tumour's extensive size, the patient began chemotherapy, followed by trastuzumab and later radiotherapy, to obtain clear margins for a subsequent mastectomy. The mastectomy in October 2009 revealed an inflammatory carcinoma, with 2 of 3 nodes infiltrated by the tumour. The stage IIIC tumour, oestrogen and progesterone receptor negative, was highly HER2 positive. A recurrence led to further chemotherapy in February 2011. In July 2011, another recurrence was removed from the mastectomy scar. She died of progressive disease in 2012. In this article, we discuss the potential influence of compression on the natural history of the tumour. PMID:27581236

  18. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image capturing devices quickly exceeds storage availability in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the Internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317

  19. Compressive wideband microwave radar holography

    NASA Astrophysics Data System (ADS)

    Wilson, Scott A.; Narayanan, Ram M.

    2014-05-01

    Compressive sensing has emerged as a topic of great interest for radar applications requiring large amounts of data storage. Typically, full sets of data are collected at the Nyquist rate only to be compressed at some later point, where information-bearing data are retained and inconsequential data are discarded. However, under sparse conditions, it is possible to collect data at random sampling intervals less than the Nyquist rate and still gather enough meaningful data for accurate signal reconstruction. In this paper, we employ sparse sampling techniques in the recording of digital microwave holograms over a two-dimensional scanning aperture. Using a simple and fast non-linear interpolation scheme prior to image reconstruction, we show that the reconstituted image quality is well-retained with limited perceptual loss.

  20. Compressed sensing based video multicast

    NASA Astrophysics Data System (ADS)

    Schenkel, Markus B.; Luo, Chong; Frossard, Pascal; Wu, Feng

    2010-07-01

    We propose a new scheme for wireless video multicast based on compressed sensing. It has the property of graceful degradation and, unlike systems adhering to traditional separate coding, it does not suffer from a cliff effect. Compressed sensing is applied to generate measurements of equal importance from a video such that a receiver with a better channel will naturally have more information at hand to reconstruct the content without penalizing others. We experimentally compare different random matrices at the encoder side in terms of their performance for video transmission. We further investigate how properties of natural images can be exploited to improve the reconstruction performance by transmitting a small amount of side information. And we propose a way of exploiting inter-frame correlation by extending only the decoder. Finally we compare our results with a different scheme targeting the same problem with simulations and find competitive results for some channel configurations.

  1. Using autoencoders for mammogram compression.

    PubMed

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

    This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually large, training of autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that the autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and structural similarity index. It is found from the experimental results that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method. PMID:20703586

  2. Digital filtering for data compression in telemetry systems

    SciTech Connect

    Bell, R.M.

    1994-08-01

    There are many obstacles to using data compression in a telemetry system. Non-linear quantization is often too lossy, and the data is too highly structured to make variable-length entropy codes practical. This paper describes a lossless telemetry data compression system that was built using digital FIR filters. The method of compression takes advantage of the fact that the optimal Nyquist sampling rate is rarely achievable due to two factors: (1) Sensor/transducers are not bandlimited to the frequencies of interest, and (2) Accurate, high-order analog filters are not available to perform effective band limiting and prevent aliasing. Real-time digital filtering can enhance the performance of a typical sampling system so that it approaches Nyquist sampling rates, effectively compressing the amount of data and reducing transmission bandwidth. The system that was built reduced the sampling rate of 14 high-frequency vibration channels by a factor of two, and reduced the bandwidth of the data link from 1.8 Mbps to 1.28 Mbps. The entire circuit uses seven function-specific digital-filter DSPs operating in parallel (two 128-tap FIR filters can be implemented on each Motorola DSP56200), one EPROM and a Programmable Logic Device as the controller.
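
    The core filter-then-decimate operation can be sketched as below. The 5-tap low-pass here is a deliberately crude illustration, not the 128-tap DSP56200 designs used in the system; function names are assumptions.

```python
def fir_filter(x, taps):
    """Direct-form FIR filter: y[n] = sum_k taps[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

def decimate_by_two(x, taps):
    """Band-limit with a low-pass FIR, then keep every other sample:
    a 2:1 reduction in data rate, as for the vibration channels above."""
    return fir_filter(x, taps)[::2]

# Illustrative 5-tap low-pass with unit DC gain (not an optimized design).
taps = [0.1, 0.2, 0.4, 0.2, 0.1]
x = [1.0] * 16
half = decimate_by_two(x, taps)
print(len(half))  # 8: half the original sample count
```

    Because the filter removes the out-of-band content that would otherwise alias, the decimated stream is a faithful (lower-rate) representation of the band of interest.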

  3. An Architecture of Embedded Decompressor with Reconfigurability for Test Compression

    NASA Astrophysics Data System (ADS)

    Ichihara, Hideyuki; Saiki, Tomoyuki; Inoue, Tomoo

    A test compression/decompression scheme for reducing the test application time and memory requirement of an LSI tester has been proposed. In this scheme, the employed coding algorithm is tailored to the given test data, so that the tailored coding algorithm can highly compress that data. However, such methods have drawbacks; e.g., the tailored coding algorithm is ineffective on test data other than the data for which it was designed. In this paper, we introduce an embedded decompressor that is reconfigurable according to the coding algorithm and the given test data. Its reconfigurability overcomes the drawbacks of conventional decompressors while maintaining a high compression ratio. Moreover, we propose an architecture of reconfigurable decompressors for four variable-length codings. In the proposed architecture, the functions common to the four codings are implemented as fixed (non-reconfigurable) components so as to reduce the configuration data, which is stored on an ATE and sent to a CUT. Experimental results show that (1) the configuration data size becomes reasonably small by reducing the configuration part of the decompressor, (2) the reconfigurable decompressor is effective for SoC testing with respect to test data size, and (3) it can achieve optimal compression of the test data by Huffman coding.
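
    The "coding tailored to the given test data" idea can be illustrated with a plain Huffman code built from the data's own symbol frequencies. This is a generic sketch of data-tailored variable-length coding, not the paper's decompressor architecture; the symbol string and function names are assumptions.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code table tailored to the symbol frequencies in data."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

def huffman_decode(bits, code):
    """Walk the bit string, emitting a symbol whenever a codeword completes."""
    inverse = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ""
    return "".join(out)

data = "AAAABBBCCD"  # skewed frequencies compress well
code = huffman_code(data)
encoded = "".join(code[s] for s in data)
print(len(encoded))  # 19 bits vs. 20 for a fixed 2-bit code
```

    The catch the paper addresses is visible here: `code` is optimal for this `data` but generally poor for any other test set, which is why a decompressor hard-wired to one code needs reconfigurability.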

  4. Confinement and controlling the effective compressive stiffness of carbyne

    NASA Astrophysics Data System (ADS)

    Kocsis, Ashley J.; Aditya Reddy Yedama, Neta; Cranford, Steven W.

    2014-08-01

    Carbyne is a one-dimensional chain of carbon atoms, consisting of repeating sp-hybridized groups, thereby representing a minimalist molecular rod or chain. While exhibiting exemplary mechanical properties in tension (a 1D modulus on the order of 313 nN and a strength on the order of 11 nN), its use as a structural component at the molecular scale is limited due to its relative weakness in compression and the immediate onset of buckling under load. To circumvent this effect, here, we probe the effect of confinement to enhance the mechanical behavior of carbyne chains in compression. Through full atomistic molecular dynamics, we characterize the mechanical properties of a free (unconfined) chain and explore the effect of confinement radius (R), free chain length (L) and temperature (T) on the effective compressive stiffness of carbyne chains, and demonstrate that the stiffness can be tuned over an order of magnitude (from approximately 0.54 kcal mol^-1 Å^-2 to 46 kcal mol^-1 Å^-2) by geometric control. Confinement may inherently stabilize the chains, potentially providing a platform for the synthesis of extraordinarily long chains (tens of nanometers) with variable compressive response.

  5. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... Research and Development Strategies for Compressed & Cryo- Compressed Hydrogen Storage Workshops AGENCY... Laboratory, in conjunction with the Hydrogen Storage team of the EERE Fuel Cell Technologies Program, will be hosting two days of workshops on compressed and cryo-compressed hydrogen storage in the Washington,...

  6. Turbulence modeling for compressible flows

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.

    1977-01-01

    Material prepared for a course on Applications and Fundamentals of Turbulence given at the University of Tennessee Space Institute, January 10 and 11, 1977, is presented. A complete concept of turbulence modeling is described, and examples of progress for its use in computational aerodynamics are given. Modeling concepts, experiments, and computations using the concepts are reviewed in a manner that provides an up-to-date statement on the status of this problem for compressible flows.

  7. A Simplified Adiabatic Compression Apparatus

    NASA Astrophysics Data System (ADS)

    Moloney, Michael J.; McGarvey, Albert P.

    2007-10-01

    Mottmann described an excellent way to measure the ratio of specific heats for air (γ = Cp/Cv) by suddenly compressing a plastic 2-liter bottle. His arrangement can be simplified so that no valves are involved and only a single connection needs to be made. This is done by adapting the plastic cap of a 2-liter plastic bottle so it connects directly to a Vernier Software Gas Pressure Sensor and the LabPro interface.
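
    For a sufficiently rapid (adiabatic) compression, γ can be inferred from two states on the same adiabat via P·V^γ = const. This is a sketch of that textbook relation with synthetic numbers, not the apparatus's data-reduction procedure; the function name and values are assumptions.

```python
import math

def gamma_from_adiabat(p1, v1, p2, v2):
    """Infer gamma = Cp/Cv from two states on one adiabat:
    P1*V1**gamma == P2*V2**gamma  =>  gamma = ln(P2/P1) / ln(V1/V2)."""
    return math.log(p2 / p1) / math.log(v1 / v2)

# Synthetic check: generate state 2 from state 1 assuming gamma = 1.4 (air).
p1, v1, g = 101.3, 2.0, 1.4
v2 = 1.5
p2 = p1 * (v1 / v2) ** g
print(round(gamma_from_adiabat(p1, v1, p2, v2), 3))  # 1.4
```

    In the real experiment the pressure sensor supplies P1 and P2, and the bottle geometry supplies the volume ratio.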

  8. Direct simulation of compressible turbulence

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Erlebacher, Gordon; Hussaini, M. Y.

    1989-01-01

    Several direct simulations of 3-D homogeneous, compressible turbulence are presented with emphasis on the differences from incompressible turbulence simulations. A fully spectral collocation algorithm, periodic in all directions, coupled with a 3rd-order Runge-Kutta time discretization scheme is sufficient to produce well-resolved flows at Taylor Reynolds numbers below 40 on grids of 128x128x128. A Helmholtz decomposition of velocity is useful to differentiate between the purely compressible effects and those effects solely due to vorticity production. In the context of homogeneous flows, this decomposition is unique. Time-dependent energy and dissipation spectra of the compressible and solenoidal velocity components indicate the presence of localized small-scale structures. These structures are strongly a function of the initial conditions. The researchers concentrate on a regime characterized by very small fluctuating Mach numbers Ma (on the order of 0.03) and density and temperature fluctuations much greater than Ma^2. This leads to a state in which more than 70 percent of the kinetic energy is contained in the so-called compressible component of the velocity. Furthermore, these conditions lead to the formation of curved weak shocks (or shocklets) which travel at approximately the sound speed across the physical domain. Various terms in the vorticity and divergence-of-velocity production equations are plotted versus time to gain some understanding of how small scales are actually formed. Possible links with Burgers turbulence are examined. To better visualize the dynamics of the flow, new graphic visualization techniques have been developed. The 3-D structure of the shocks is visualized with the help of volume rendering algorithms developed in-house. A combination of stereographic projection and animation greatly increases the number of visual cues necessary to properly interpret the complex flow.

  9. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2003-01-01

    Various artificial compressibility methods for calculating three-dimensional, steady and unsteady, laminar and turbulent, incompressible Navier-Stokes equations are compared in this work. Each method is described in detail along with appropriate physical and numerical boundary conditions. Analysis of well-posedness and numerical solutions to test problems for each method are provided. A comparison based on convergence behavior, accuracy, stability and robustness is used to establish the relative positive and negative characteristics of each method.

  10. SNLL materials testing compression facility

    SciTech Connect

    Kawahara, W.A.; Brandon, S.L.; Korellis, J.S.

    1986-04-01

    This report explains software enhancements and fixture modifications which expand the capabilities of a servo-hydraulic test system to include static computer-controlled "constant true strain rate" compression testing on cylindrical specimens. True strains in excess of -1.0 are accessible. Special software features include schemes to correct for system compliance and the ability to perform strain-rate changes; all software for test control and data acquisition/reduction is documented.

  11. Antiproton compression and radial measurements

    SciTech Connect

    Andresen, G. B.; Bowe, P. D.; Hangst, J. S.; Bertsche, W.; Butler, E.; Charlton, M.; Humphries, A. J.; Jenkins, M. J.; Joergensen, L. V.; Madsen, N.; Werf, D. P. van der; Bray, C. C.; Chapman, S.; Fajans, J.; Povilus, A.; Wurtele, J. S.; Cesar, C. L.; Lambo, R.; Silveira, D. M.; Fujiwara, M. C.

    2008-08-08

    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  12. Compressed air energy storage system

    SciTech Connect

    Ahrens, F.W.; Kartsounes, G.T.

    1981-07-28

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  13. Transposed compression piston and cylinder

    SciTech Connect

    Ross, M.A.

    1992-04-14

    This patent describes an improved V-type two piston Stirling engine wherein the improvement is a transposed compression piston slidably engaged in a mating cylinder. It comprises: a cylindrical body which is pivotally connected to a connecting rod at a pivot axis which is relatively nearer the outer end of the cylindrical body and has a seal relatively nearer the inner end of the cylindrical body.

  14. Compressed air energy storage system

    DOEpatents

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  15. Compressed air energy storage system

    DOEpatents

    Ahrens, F.W.; Kartsounes, G.T.

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  16. Compressibility Effects in Aeronautical Engineering

    NASA Technical Reports Server (NTRS)

    Stack, John

    1941-01-01

    Compressible-flow research, while a relatively new field in aeronautics, is very old, dating back almost to the development of the first firearm. Over the last hundred years, researches have been conducted in the ballistics field, but these results have been of practically no use in aeronautical engineering because the phenomena that have been studied have been the more or less steady supersonic condition of flow. Some work that has been done in connection with steam turbines, particularly nozzle studies, has been of value. In general, however, understanding of compressible-flow phenomena has been very incomplete and permitted no real basis for the solution of aeronautical engineering problems in which the flow is likely to be unsteady because regions of both subsonic and supersonic speeds may occur. In the early phases of the development of the airplane, speeds were so low that the effects of compressibility could be justifiably ignored. During the last war and immediately after, however, propellers exhibited losses in efficiency as the tip speeds approached the speed of sound, and the first experiments of an aeronautical nature were therefore conducted with propellers. Results of these experiments indicated serious losses of efficiency, but aeronautical engineers were not seriously concerned at the time because it was generally possible to design propellers with quite low tip speeds. With the development of new engines having increased power and rotational speeds, however, the problems became of increasing importance.

  17. Snapshot colored compressive spectral imager.

    PubMed

    Correa, Claudia V; Arguello, Henry; Arce, Gonzalo R

    2015-10-01

    Traditional spectral imaging approaches require sensing all the voxels of a scene. Colored mosaic FPA detector-based architectures can acquire sets of the scene's spectral components, but the number of spectral planes depends directly on the number of available filters used on the FPA, which leads to reduced spatiospectral resolutions. Instead of sensing all the voxels of the scene, compressive spectral imaging (CSI) captures coded and dispersed projections of the spatiospectral source. This approach mitigates the resolution issues by exploiting optical phenomena in lenses and other elements, which, in turn, compromise the portability of the devices. This paper presents a compact snapshot colored compressive spectral imager (SCCSI) that exploits the benefits of the colored mosaic FPA detectors and the compression capabilities of CSI sensing techniques. The proposed optical architecture has no moving parts and can capture the spatiospectral information of a scene in a single snapshot by using a dispersive element and a color-patterned detector. The optical and the mathematical models of SCCSI are presented along with a testbed implementation of the system. Simulations and real experiments show the accuracy of SCCSI and compare the reconstructions with those of similar CSI optical architectures, such as the CASSI and SSCSI systems, resulting in improvements of up to 6 dB and 1 dB of PSNR, respectively. PMID:26479928

  18. LASER COMPRESSION OF NANOCRYSTALLINE METALS

    SciTech Connect

    Meyers, M. A.; Jarmakani, H. N.; Bringa, E. M.; Earhart, P.; Remington, B. A.; Vo, N. Q.; Wang, Y. M.

    2009-12-28

    Shock compression in nanocrystalline nickel is simulated over a range of pressures (10-80 GPa) and compared with experimental results. Laser compression carried out at Omega and Janus yields new information on the deformation mechanisms of nanocrystalline Ni. Although conventional deformation does not produce hardening, the extreme regime imparted by laser compression generates an increase in hardness, attributed to the residual dislocations observed in the structure by TEM. An analytical model is applied to predict the critical pressure for the onset of twinning in nanocrystalline nickel. The slip-twinning transition pressure is shifted from 20 GPa, for polycrystalline Ni, to 80 GPa, for Ni with a grain size of 10 nm. Contributions to the net strain from the different mechanisms of plastic deformation (partials, perfect dislocations, twinning, and grain boundary shear) were quantified in the nanocrystalline samples through MD calculations. The effect of release, a phenomenon often neglected in MD simulations, on dislocation behavior was established. A large fraction of the dislocations generated at the front are annihilated.

  19. Adiabatic Compression of Oxygen: Real Fluid Temperatures

    NASA Technical Reports Server (NTRS)

    Barragan, Michelle; Wilson, D. Bruce; Stoltzfus, Joel M.

    2000-01-01

    The adiabatic compression of oxygen has been identified as an ignition source for systems operating in enriched oxygen atmospheres. Current practice is to evaluate the temperature rise on compression by treating oxygen as an ideal gas with constant heat capacity. This paper establishes the appropriate thermodynamic analysis for the common occurrence of adiabatic compression of oxygen and in the process defines a satisfactory equation of state (EOS) for oxygen. It uses that EOS to model adiabatic compression as isentropic compression and calculates final temperatures for this system using current approaches for comparison.
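
    The "current practice" baseline the paper improves upon (ideal gas, constant heat capacity) reduces to one line. This sketch shows only that textbook estimate; a real-fluid equation of state, as the paper argues, would correct it. The function name and numbers are illustrative.

```python
def ideal_isentropic_temperature(t1_kelvin, p1, p2, gamma=1.4):
    """Ideal-gas, constant-heat-capacity estimate of the temperature after
    isentropic compression from p1 to p2:
        T2 = T1 * (P2/P1)**((gamma - 1)/gamma)
    A real-fluid equation of state would yield a corrected value."""
    return t1_kelvin * (p2 / p1) ** ((gamma - 1.0) / gamma)

t2 = ideal_isentropic_temperature(300.0, 1.0, 10.0)
print(round(t2, 1))  # ~579 K for a 10:1 pressure ratio from 300 K
```

    Even this idealized estimate shows why adiabatic compression is an ignition hazard in oxygen systems: modest pressure ratios produce large temperature rises.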

  20. Future Prospects of Low Compression Ignition Engines

    NASA Astrophysics Data System (ADS)

    Azim, M. A.

    2014-01-01

    This study presents a review and analysis of the effects of compression ratio and inlet air preheating on engine performance in order to assess the future prospects of low compression ignition engines. Regulation of the inlet air preheating allows some control over the combustion process in compression ignition engines. Literature shows that low compression ratio and inlet air preheating are more beneficial to internal combustion engines than detrimental. Even the disadvantages due to low compression ratio are outweighed by the advantages due to inlet air preheating and vice versa.
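
    The efficiency penalty of lowering the compression ratio can be quantified with the ideal Otto-cycle relation. This is a textbook idealization for illustration, not the review's engine model; the sample ratios are assumptions.

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal Otto-cycle thermal efficiency, eta = 1 - r**(1 - gamma):
    lowering the compression ratio r lowers the ideal efficiency, a loss
    that inlet-air preheating strategies must offset elsewhere."""
    return 1.0 - r ** (1.0 - gamma)

for r in (12.0, 16.0):
    print(r, round(otto_efficiency(r), 3))
```

    The gap between, say, r = 12 and r = 16 is only a few percentage points of ideal efficiency, which is consistent with the review's conclusion that the drawbacks of a low compression ratio can be outweighed by other gains.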

  1. COMPRESSION WAVES AND PHASE PLOTS: SIMULATIONS

    SciTech Connect

    Orlikowski, D; Minich, R

    2011-08-01

    Compression wave analysis started nearly 50 years ago with Fowles. Cowperthwaite and Williams gave a method that helps identify simple and steady waves. We have been developing a method that describes the non-isentropic character of compression waves in general. One result of that work is a simple analysis tool. Our method helps clearly identify when a compression wave is a simple wave, when it is a steady wave (shock), and when it is in transition. This affects the analysis of compression wave experiments and the resulting extraction of the high-pressure equation of state.

  2. Spatial versus spectral compression ratio in compressive sensing of hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    August, Yitzhak; Vachman, Chaim; Stern, Adrian

    2013-05-01

    Compressive hyperspectral imaging is based on the fact that hyperspectral data is highly redundant. However, there is no symmetry between the compressibility of the spatial and spectral domains, and that should be taken into account for optimal compressive hyperspectral imaging system design. Here we present a study of the influence of the ratio between the compression in the spatial and spectral domains on the performance of a 3D separable compressive hyperspectral imaging method we recently developed.

  3. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent noise (SDN) sources, such as film-grain and speckle noise, on image compression, using JPEG (Joint Photographic Experts Group) standard image compression. To improve compression ratios, noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach, an estimator designed specifically for the SDN model is used. In an alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.
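
    For multiplicative (speckle-like) noise, the SDN-to-SIN transformation of the second approach can be as simple as a log transform. This is a sketch of the general idea under that multiplicative-noise assumption, not the paper's specific estimator; names and values are illustrative.

```python
import math

def log_transform(pixels):
    """For I = S * N (multiplicative, speckle-like noise), taking logs gives
    log I = log S + log N: the noise term no longer depends on the signal,
    so an estimator designed for signal-independent noise can be applied."""
    return [math.log(p) for p in pixels]

signal = [10.0, 100.0]
noise_factor = 1.1            # same multiplicative noise at both intensities
noisy = [s * noise_factor for s in signal]
residuals = [li - math.log(s) for li, s in zip(log_transform(noisy), signal)]
# Both residuals equal log(1.1): the transformed noise is signal-independent.
print(abs(residuals[0] - residuals[1]) < 1e-9)  # True
```

    After denoising in the log domain, exponentiation returns the estimate to the intensity domain before compression.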

  4. Cataclysmic variables

    NASA Technical Reports Server (NTRS)

    Szkody, Paula; Cropper, Mark

    1988-01-01

    Recent observations of cataclysmic variables (CVs) at different wavelengths are reviewed, with a focus on their implications for theoretical models. Consideration is given to disk CVs (the flux distribution of the disk and changes during dwarf-nova outbursts), magnetic CVs (flux distributions and components), and the underlying stars. Typical data are presented in graphs, tables, and sample spectra, and it is concluded that more detailed multiwavelength observations are needed to improve models of radiative transfer and viscosity effects in accretion disks.

  5. An automotive suspension strut using compressible magnetorheological fluids

    NASA Astrophysics Data System (ADS)

    Hong, Sung-Ryong; Wang, Gang; Hu, Wei; Wereley, Norman M.; Niemczuk, Jack

    2005-05-01

    An automotive suspension strut is proposed that utilizes compressible magnetorheological (CMR) fluid. A CMR strut consists of a double-ended rod in a hydraulic cylinder and a bypass comprising tubing and an MR valve. The rod diameters on the two sides of the piston are different, so that a spring force develops by compressing the MR fluid hydrostatically. The MR bypass valve is adopted to develop a controllable damping force. A hydro-mechanical model of the CMR strut is derived, and the spring force due to fluid compressibility and the pressure drop in the MR bypass valve are analytically investigated on the basis of the model. Finally, a CMR strut filled with silicone-oil-based MR fluid is fabricated and tested. The spring force and variable damping force of the CMR strut are clearly observed in the measured data, which compare favorably with the analytical model.

  6. Towards a geometrical interpretation of quantum-information compression

    SciTech Connect

    Mitchison, Graeme; Jozsa, Richard

    2004-03-01

    Let S be the von Neumann entropy of a finite ensemble E of pure quantum states. We show that S may be naturally viewed as a function of a set of geometrical volumes in Hilbert space defined by the states and that S is monotonically increasing in each of these variables. Since S is the Schumacher compression limit of E, this monotonicity property suggests a geometrical interpretation of the quantum redundancy involved in the compression process. It provides clarification of previous work in which it was shown that S may be increased while increasing the overlap of each pair of states in the ensemble. As a by-product, our mathematical techniques also provide an interpretation of the subentropy of E.
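
    The quantity S and its dependence on state overlap can be computed directly for a small ensemble. This sketch (entropy in nats, a 2x2 example with an equal mixture of two pure states; all names are assumptions) illustrates the monotonic relation between overlap and entropy discussed above, not the paper's geometrical construction.

```python
import math

def von_neumann_entropy_2x2(rho):
    """S(rho) = -sum_i lam_i ln(lam_i) for a 2x2 Hermitian density matrix,
    with eigenvalues from the trace/determinant formula."""
    tr = rho[0][0] + rho[1][1]
    det = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    eigs = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    return -sum(l * math.log(l) for l in eigs if l > 1e-12)

def ensemble_rho(theta):
    """Equal mixture of |0> and cos(theta)|0> + sin(theta)|1>;
    larger theta means smaller overlap between the two states."""
    c, s = math.cos(theta), math.sin(theta)
    return [[0.5 + 0.5 * c * c, 0.5 * c * s],
            [0.5 * c * s, 0.5 * s * s]]

# The Schumacher compression limit grows as the overlap shrinks.
print(von_neumann_entropy_2x2(ensemble_rho(0.2)) <
      von_neumann_entropy_2x2(ensemble_rho(1.0)))  # True
```

    At theta = 0 the two states coincide and S = 0 (perfect compressibility); at theta = pi/2 they are orthogonal and S reaches its maximum of ln 2 for this ensemble.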

  7. An electron beam injector for pulse compression experiments

    SciTech Connect

    Wang, J.G.; Boggasch, E.; Kehne, D.; Reiser, M.; Shea, T.; Wang, D.X.

    1990-01-01

    An electron beam injector has been constructed to study the physics of longitudinal pulse compression in the University of Maryland electron beam transport experiment. The injector consists of a variable-perveance gridded electron gun followed by three matching lenses and one induction linac module. It produces a 50 ns, 40 mA electron pulse with a 2.5 to 7.5 keV, quadratically time-dependent energy shear. This beam will be injected into the existing 5-m long periodic transport channel with 38 short solenoid lenses. With the given beam parameters and initial conditions, the pulse will be compressed by a factor of 4 to 5 before reaching the end of the existing solenoid channel. This paper reports on the design features and the measured general performance characteristics of the injector system, including its mechanical, electrical, and beam-optical properties.

  8. Modeling Compressibility Effects in High-Speed Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Sarkar, S.

    2004-01-01

    Man has strived to make objects fly faster, first from subsonic to supersonic and then to hypersonic speeds. Spacecraft and high-speed missiles routinely fly at hypersonic Mach numbers, M greater than 5. In defense applications, aircraft reach hypersonic speeds at high altitude, and so may civilian aircraft in the future. Hypersonic flight, while presenting opportunities, has formidable challenges that have spurred vigorous research and development, mainly by NASA and the Air Force in the USA. Although NASP, the premier hypersonic concept of the eighties and early nineties, did not lead to a flight demonstration, much basic research and technology development was possible. There is renewed interest in supersonic and hypersonic flight, with the HyTech program of the Air Force and the Hyper-X program at NASA being examples of current thrusts in the field. At high-subsonic to supersonic speeds, fluid compressibility becomes increasingly important in the turbulent boundary layers and shear layers associated with the flow around aerospace vehicles. Changes in the thermodynamic variables (density, temperature, and pressure) interact strongly with the underlying vortical, turbulent flow. The ensuing changes to the flow may be qualitative, such as shocks, which have no incompressible counterpart, or quantitative, such as the reduction of skin friction with Mach number, large heat transfer rates due to viscous heating, and the dramatic reduction of fuel/oxidant mixing at high convective Mach number. The peculiarities of compressible turbulence, so-called compressibility effects, have been reviewed by Fernholz and Finley. Predictions of aerodynamic performance in high-speed applications require accurate computational modeling of these "compressibility effects" on turbulence. During the course of the project we have made fundamental advances in modeling the pressure-strain correlation and developed a code to evaluate alternative turbulence models in the compressible shear layer.

  9. Compressive rendering: a rendering application of compressed sensing.

    PubMed

    Sen, Pradeep; Darabi, Soheil

    2011-04-01

    Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics, it has only been used so far to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing by using it to accelerate ray-traced rendering in a manner that exploits the sparsity of the final image in the wavelet basis. To do this, we raytrace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, fewer samples are required for a result of a given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75 percent of the pixel samples using a nonadaptive sampling scheme. In addition, we also perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity. PMID:21311092
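
    The premise that a natural image's energy concentrates in few wavelet coefficients can be seen with a one-level Haar transform. This is an illustrative sketch of wavelet energy compaction on a smooth 1D signal, not the paper's greedy CS reconstruction; names and data are assumptions.

```python
import math

def haar_1d(x):
    """One level of the 1D Haar transform: pairwise averages, then details."""
    avgs = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    dets = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return avgs + dets

# A smooth ramp: almost all energy lands in the average (low-pass) half,
# which is why a sparse wavelet-domain estimate needs few pixel samples.
x = [float(i) for i in range(8)]
coeffs = haar_1d(x)
low = sum(c * c for c in coeffs[:4])
high = sum(c * c for c in coeffs[4:])
print(low > 10 * high)  # True
```

    In the rendering context this sparsity means most detail coefficients are near zero, so estimating the few significant ones from a subset of ray-traced pixels suffices.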

  10. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high-frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We studied the effect of the VOI transformation and found significant masking of artifacts when the window is widened. PMID:25804842

  11. Recognising metastatic spinal cord compression.

    PubMed

    Bowers, Ben

    2015-04-01

    Metastatic spinal cord compression (MSCC) is a potentially life changing oncological emergency. Neurological function and quality of life can be preserved if patients receive an early diagnosis and rapid access to acute interventions to prevent or reduce nerve damage. Symptoms include developing spinal pain, numbness or weakness in arms or legs, or unexplained changes in bladder and bowel function. Community nurses are well placed to pick up on the 'red flag' symptoms of MSCC and ensure patients access prompt, timely investigations to minimise damage. PMID:25839873

  12. Vapor Compression Distillation Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hutchens, Cindy F.

    2002-01-01

    One of the major requirements associated with operating the International Space Station is the transportation -- space shuttle and Russian Progress spacecraft launches - necessary to re-supply station crews with food and water. The Vapor Compression Distillation (VCD) Flight Experiment, managed by NASA's Marshall Space Flight Center in Huntsville, Ala., is a full-scale demonstration of technology being developed to recycle crewmember urine and wastewater aboard the International Space Station and thereby reduce the amount of water that must be re-supplied. Based on results of the VCD Flight Experiment, an operational urine processor will be installed in Node 3 of the space station in 2005.

  13. Krylov methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Tidriri, M. D.

    1995-01-01

    We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
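
    The heart of the matrix-free Newton-Krylov approach mentioned above is a finite-difference Jacobian-vector product, which lets GMRES iterate without ever forming the Jacobian. This is a generic sketch of that technique, not the paper's implementation; the residual function and step size are assumptions.

```python
def jacobian_vector_product(residual, u, v, eps=1e-6):
    """Matrix-free J*v approximation used inside Newton-Krylov solvers:
        J(u) v ~= (F(u + eps*v) - F(u)) / eps
    so the Krylov method only needs residual evaluations."""
    fu = residual(u)
    fp = residual([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(fp, fu)]

# Check on F(u) = (u0^2, u0*u1), whose exact Jacobian is [[2u0, 0], [u1, u0]].
def F(u):
    return [u[0] * u[0], u[0] * u[1]]

u, v = [2.0, 3.0], [1.0, 1.0]
jv = jacobian_vector_product(F, u, v)
print([round(c, 3) for c in jv])  # ~[4.0, 5.0], matching the exact J*v
```

    In a flow solver, `residual` would be the discretized Navier-Stokes residual, and the mixed-discretization idea in the abstract amounts to using a cheaper discretization inside this product and the preconditioner.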

  14. Limiting SUSY compressed spectra scenarios

    NASA Astrophysics Data System (ADS)

    Nelson, Andy; Tanedo, Philip; Whiteson, Daniel

    2016-06-01

    Typical searches for supersymmetry cannot test models in which the two lightest particles have a small ("compressed") mass splitting, due to the small momentum of the particles produced in the decay of the second-to-lightest particle. However, data sets with large missing transverse momentum (ETmiss) can generically search for invisible particle production and therefore provide constraints on such models. We apply data from the ATLAS monojet (jet+ETmiss ) and vector-boson-fusion (forward jets and ETmiss ) searches to such models. In all cases, experimental limits are at least five times weaker than theoretical predictions.

  15. Variable Valve Actuation

    SciTech Connect

    Jeffrey Gutterman; A. J. Lasley

    2008-08-31

    Many approaches exist to enable advanced mode, low temperature combustion systems for diesel engines, such as premixed charge compression ignition (PCCI), homogeneous charge compression ignition (HCCI), or other HCCI-like combustion modes. The fuel properties and the quantity, distribution and temperature profile of air, fuel and residual fraction in the cylinder can have a marked effect on the heat release rate and combustion phasing. Figure 1 shows that a systems approach is required for HCCI-like combustion. While the exact requirements remain unclear (and will vary depending on fuel, engine size and application), some form of substantially variable valve actuation is a likely element in such a system. Variable valve actuation (VVA), for both intake and exhaust valve events, is a potent tool for controlling the parameters that are critical to HCCI-like combustion and expanding its operational range. Additionally, VVA can be used to optimize the combustion process as well as exhaust temperatures, and to impact the aftertreatment system requirements and associated cost. Delphi Corporation has major manufacturing, product development and applied R&D expertise in the valve train area. Historical R&D experience includes the development of a fully variable electro-hydraulic valve train on research engines as well as several generations of mechanical VVA for gasoline systems. This experience has enabled us to evaluate various implementations and determine the strengths and weaknesses of each. While a fully variable electro-hydraulic valve train system might be the 'ideal' solution technically for maximum flexibility in the timing and control of the valve events, its complexity, associated costs, and high power consumption make its implementation on low cost high volume applications unlikely. Conversely, a simple mechanical system might be a low cost solution but not deliver the flexibility required for HCCI operation. After modeling more than 200 variations of the

  16. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High quality images are appealing, but they occupy more storage space and consume more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is explained; given how widely the technique is used, it is important to understand how it realizes image compression. Second, a deeper understanding of the DCT is developed through the use of Matlab, covering the process of image compression based on the DCT and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm for image compression, and algorithms yielding higher quality compressed images will continue to appear. The technology of image compression will be widely used in networks and communications in the future.
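As a minimal sketch of DCT-based compression of the kind the abstract describes (in Python rather than the author's Matlab; the magnitude-thresholding step is a crude illustrative stand-in for real quantization tables):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n (inverse = transpose)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def compress_block(block, keep):
    """2-D DCT of a square block, keeping only the `keep` largest coefficients."""
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T                  # separable 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0     # crude stand-in for quantization
    return coeffs

def decompress_block(coeffs):
    """Inverse 2-D DCT."""
    d = dct_matrix(coeffs.shape[0])
    return d.T @ coeffs @ d
```

Because smooth image blocks concentrate their energy in few DCT coefficients, discarding the small ones loses little visual information; the surviving sparse coefficients are what an entropy coder (e.g. Huffman) would then compress.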

  17. Algorithmic height compression of unordered trees.

    PubMed

    Ben-Naoum, Farah; Godin, Christophe

    2016-01-21

    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus partially limiting the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic as they are strictly nested within each other. We thus introduced a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then proposed an algorithm to detect these patterns and to merge them, thus leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures. PMID:26551155
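The first step the abstract builds on (merging isomorphic subtrees into a DAG) is commonly realized by bottom-up canonical hashing. The sketch below shows that baseline only, not the paper's height compression of quasi-isomorphic patterns; the tree encoding (nested lists) is an illustrative assumption:

```python
def tree_to_dag(tree):
    """Merge isomorphic subtrees of an unordered tree into a DAG.

    `tree` is a nested list: a node is the list of its children; a leaf is [].
    Returns (num_unique, root_id): the number of distinct subtree classes
    (i.e. DAG nodes) and the id assigned to the root's class.
    """
    table = {}  # canonical key (sorted tuple of child ids) -> node id

    def visit(node):
        # Sorting child ids makes the key order-independent (unordered tree).
        key = tuple(sorted(visit(ch) for ch in node))
        if key not in table:
            table[key] = len(table)
        return table[key]

    root = visit(tree)
    return len(table), root
```

Every repeated subtree maps to the same id, so the DAG has one node per distinct subtree class; the DAG's height, however, equals the tree's height, which is exactly the limitation the paper addresses.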

  18. Effect of compressibility on the annihilation process

    NASA Astrophysics Data System (ADS)

    Hnatich, M.; Honkonen, J.; Lučivjanský, T.

    2013-07-01

    Using the renormalization group in perturbation theory, we study the influence of a random velocity field on the kinetics of the single-species annihilation reaction at and below its critical dimension d c = 2. The advecting velocity field is modeled by a Gaussian variable self-similar in space with a finite-radius time correlation (the Antonov-Kraichnan model). We take the effect of the compressibility of the velocity field into account and analyze the model near its critical dimension using a three-parameter expansion in ε, Δ, and η, where ε is the deviation from the Kolmogorov scaling, Δ is the deviation from the (critical) space dimension two, and η is the deviation from the parabolic dispersion law. Depending on the values of these exponents and the compressibility parameter α, the studied model can exhibit various asymptotic (long-time) regimes corresponding to infrared fixed points of the renormalization group. We summarize the possible regimes and calculate the decay rates for the mean particle number in the leading order of the perturbation theory.

  19. Compressible Alfvenic Turbulence in One Dimension

    NASA Astrophysics Data System (ADS)

    Fleischer, J.; Diamond, P. H.

    1997-11-01

    Burgers' equation for 1-D compressible fluid dynamics is extended to a two-equation system which includes the effects of magnetic pressure. For the special case of equal fluid viscosity and magnetic diffusivity, the system reduces to two decoupled Burgers' equations in the characteristic (Elsasser) variables v ± v_A. Energy transfer, with and without external forcing, is examined for arbitrary molecular diffusivities. For forced turbulence, renormalized perturbation theory is used to calculate the effective transport coefficients. It is found that energy equi-dissipation, not equipartition, is fundamental to the turbulent state. In other words, the system dynamically self-adjusts to propagate disturbances along its characteristics. However, shock formation due to wave steepening is inhibited by the presence of small-scale forcing. Alternate large-scale structures, propagating ballistically, lead to asymmetry in the characteristic velocity pdf. These non-Gaussian tails, a hallmark of intermittency, are examined through the pdf generating functional. It is argued that the probability path integral may be approximated by the instanton contribution. Corresponding distribution functions for velocity and magnetic field fluctuations are given. Finally, implications for the spectra of turbulence and self-organization phenomena in MHD are discussed.
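The decoupling summarized above can be reconstructed in standard notation (a hedged reconstruction of the generic 1-D MHD-Burgers model; the exact form and signs used in the paper may differ):

```latex
% 1-D velocity / magnetic-pressure system (assumed form):
\partial_t v   + v\,\partial_x v    = -\,v_A\,\partial_x v_A + \nu\,\partial_x^2 v, \qquad
\partial_t v_A + \partial_x(v\,v_A) = \mu\,\partial_x^2 v_A .
% For equal diffusivities \nu = \mu, the Elsasser variables
% z^{\pm} = v \pm v_A decouple into two independent Burgers' equations:
\partial_t z^{\pm} + z^{\pm}\,\partial_x z^{\pm} = \nu\,\partial_x^2 z^{\pm}.
```

Adding and subtracting the two equations verifies this directly: the nonlinear terms combine as (v ± v_A)∂x(v ± v_A) = z±∂x z±, so disturbances propagate independently along each characteristic, consistent with the self-adjustment the abstract describes.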

  20. Energy Preserved Sampling for Compressed Sensing MRI

    PubMed Central

    Peterson, Bradley S.; Ji, Genlin; Dong, Zhengchao

    2014-01-01

    The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To improve on these further, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region of support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brain images of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function can achieve better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time. PMID:24971155
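For context, a generic variable-density (VD) Cartesian sampling mask of the kind the abstract contrasts with ePRESS can be sketched as follows (this is not the ePRESS method; the density law, parameter names, and defaults are all illustrative assumptions):

```python
import numpy as np

def vd_mask(shape, center_frac=0.08, decay=2.0, target=0.3, seed=0):
    """Variable-density Bernoulli sampling mask for 2-D Cartesian k-space.

    Sampling probability falls off with distance from the k-space center
    (where most MR image energy lives); a small central disk is fully sampled.
    """
    rng = np.random.default_rng(seed)
    ny, nx = shape
    y, x = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                       indexing='ij')
    r = np.sqrt(x**2 + y**2)                 # normalized distance from center
    p = (1.0 - np.clip(r, 0.0, 1.0)) ** decay
    p *= target * p.size / p.sum()           # rescale toward the target rate
    mask = rng.random(shape) < np.clip(p, 0.0, 1.0)
    mask[r < center_frac] = True             # fully sample the k-space center
    return mask
```

Denser sampling near the center preserves the dominant low-frequency energy, which is the intuition behind both VD patterns and the energy-preserving refinement proposed in the paper.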

  1. Compression creep of filamentary composites

    NASA Technical Reports Server (NTRS)

    Graesser, D. L.; Tuttle, M. E.

    1988-01-01

    Axial and transverse strain fields induced in composite laminates subjected to compressive creep loading were compared for several types of laminate layups. Unidirectional graphite/epoxy as well as multi-directional graphite/epoxy and graphite/PEEK layups were studied. Specimens with and without holes were tested. The specimens were subjected to compressive creep loading for a 10-hour period. In-plane displacements were measured using moire interferometry. A computer based data reduction scheme was developed which reduces the whole-field displacement fields obtained using moire to whole-field strain contour maps. Only slight viscoelastic response was observed in matrix-dominated laminates, except for one test in which catastrophic specimen failure occurred after a 16-hour period. In this case the specimen response was a complex combination of both viscoelastic and fracture mechanisms. No viscoelastic effects were observed for fiber-dominated laminates over the 10-hour creep time used. The experimental results for specimens with holes were compared with results obtained using a finite-element analysis. The comparison between experiment and theory was generally good. Overall strain distributions were very well predicted. The finite element analysis typically predicted slightly higher strain values at the edge of the hole, and slightly lower strain values at positions removed from the hole, than were observed experimentally. It is hypothesized that these discrepancies are due to nonlinear material behavior at the hole edge, which was not accounted for in the finite-element analysis.

  2. Fusion in Magnetically Compressed Targets

    NASA Astrophysics Data System (ADS)

    Mokhov, V. N.

    2004-11-01

    A comparative analysis is presented of the positive and negative features of systems using magnetic compression of the thermonuclear fusion target (MAGO/MTF) aimed at solving the controlled thermonuclear fusion (CTF) problem. The niche for the MAGO/MTF system, among the other CTF systems, is shown in the parameter space of the energy delivered to the target and the time over which it is delivered. This approach was investigated at RFNC-VNIIEF for more than 15 years using the unique technique of applying explosive magnetic generators (EMG) as the energy source to preheat fusion plasma, and to accelerate a liner that compresses the preheated fusion plasma to the parameters required for ignition. EMG-based systems already produce fusion neutrons, and their relatively low cost and record energy yield enable full-scale experiments to study the possibility of reaching the ignition threshold without constructing expensive stationary installations. A short review of the milestone results on the road to solving the CTF problem in the MAGO/MTF system is given.

  3. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor makes novel use of a high throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable detection statistics comparable to those obtained with uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating plume detection on compressed hyperspectral images is presented.

  4. Fast spectrophotometry with compressive sensing

    NASA Astrophysics Data System (ADS)

    Starling, David; Storer, Ian

    2015-03-01

    Spectrophotometers and spectrometers have numerous applications in the physical sciences and engineering, resulting in a plethora of designs and requirements. A good spectrophotometer balances the need for high photometric precision, high spectral resolution, high durability and low cost. One way to address these design objectives is to take advantage of modern scanning and detection techniques. A common imaging method that has improved signal acquisition speed and sensitivity in limited signal scenarios is the single pixel camera. Such cameras utilize the sparsity of a signal to sample below the Nyquist rate via a process known as compressive sensing. Here, we show that a single pixel camera using compressive sensing algorithms and a digital micromirror device can replace the common scanning mechanisms found in virtually all spectrophotometers, providing a very low cost solution and improving data acquisition time. We evaluate this single pixel spectrophotometer by studying a variety of samples tested against commercial products. We conclude with an analysis of flame spectra and possible improvements for future designs.
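The reconstruction step behind a single-pixel instrument recovers a sparse signal from far fewer measurements than Nyquist sampling would require. A minimal sketch using orthogonal matching pursuit on simulated measurements (OMP is an illustrative stand-in; the abstract does not specify which compressive sensing algorithm the authors used):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = Phi x.

    Phi is the (m x n) measurement matrix, m << n; each row models one
    DMD pattern integrated onto the single detector.
    """
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the dictionary column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the signal on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

With noiseless measurements and a well-conditioned random Phi, a few-line greedy solver like this recovers a sparse spectrum exactly from m well below the signal length n, which is what allows the scanning mechanism to be replaced.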

  5. Industrial Compressed Air System Energy Efficiency Guidebook.

    SciTech Connect

    United States. Bonneville Power Administration.

    1993-12-01

    Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.

  6. Color space selection for JPEG image compression

    NASA Astrophysics Data System (ADS)

    Moroney, Nathan; Fairchild, Mark D.

    1995-10-01

    The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be utilized when implementing the JPEG algorithm. Currently, the JPEG algorithm is set up for use with any three-component color space. The objective of this research is to determine whether or not the color space selected will significantly improve the image compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces to compress images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.
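For context on the color spaces compared above: baseline JPEG implementations conventionally transform device RGB to YCbCr before the DCT stage so that the chroma channels can be subsampled. A small sketch of that standard (JFIF) transform, here applied to float-valued pixels:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """JFIF RGB -> YCbCr, the color transform used by baseline JPEG.

    `rgb` is an array whose last axis holds (R, G, B) in 0..255.
    Returns (Y, Cb, Cr), with chroma offset to center on 128.
    """
    m = np.array([[ 0.299,     0.587,     0.114   ],   # luma (Y)
                  [-0.168736, -0.331264,  0.5     ],   # blue-difference (Cb)
                  [ 0.5,      -0.418688, -0.081312]])  # red-difference (Cr)
    ycc = rgb @ m.T
    ycc[..., 1:] += 128.0   # offset the chroma channels
    return ycc
```

Decorrelating luma from chroma is the same motivation behind testing YIQ, CIELAB, and CIELUV in the study: spaces that separate lightness from color tend to compress better than raw RGB.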

  7. Atomic effect algebras with compression bases

    SciTech Connect

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-15

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  8. Atomic effect algebras with compression bases

    NASA Astrophysics Data System (ADS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  9. Micromechanics of composite laminate compression failure

    NASA Technical Reports Server (NTRS)

    Guynn, E. Gail; Bradley, Walter L.

    1986-01-01

    The Dugdale analysis for metals loaded in tension was adapted to model the failure of notched composite laminates loaded in compression. Compression testing details, MTS alignment verification, and equipment needs were resolved. Thus far, only 2 ductile material systems, HST7 and F155, were selected for study. A Wild M8 Zoom Stereomicroscope and necessary attachments for video taping and 35 mm pictures were purchased. Currently, this compression test system is fully operational. A specimen is loaded in compression, and load vs shear-crippling zone size is monitored and recorded. Data from initial compression tests indicate that the Dugdale model does not accurately predict the load vs damage zone size relationship of notched composite specimens loaded in compression.

  10. Compression map, functional groups and fossilization: A chemometric approach (Pennsylvanian neuropteroid foliage, Canada)

    USGS Publications Warehouse

    D'Angelo, J. A.; Zodrow, E.L.; Mastalerz, Maria

    2012-01-01

    Nearly all of the spectrochemical studies involving Carboniferous foliage of seed-ferns are based on a limited number of pinnules, mainly compressions. In contrast, in this paper we illustrate working with a larger pinnate segment, i.e., a 22-cm long neuropteroid specimen, compression-preserved with cuticle: the compression map. The objective is to study preservation variability on a larger scale, where observation of transparency/opacity of constituent pinnules is used as a first approximation for assessing the degree of pinnule coalification/fossilization. Spectrochemical methods by Fourier transform infrared spectrometry furnish semi-quantitative data for principal component analysis. The compression map shows a high degree of preservation variability, which ranges from comparatively more coalified pinnules to less coalified pinnules that resemble fossilized-cuticles, noting that the pinnule midveins are preserved more like fossilized-cuticles. A general trend of coalified pinnules towards fossilized-cuticles, i.e., variable chemistry, is inferred from the semi-quantitative FTIR data, as higher contents of aromatic compounds occur in the visually more opaque upper part of the compression map. The latter also shows a higher condensation of the aromatic nuclei along with some variation in both ring size and degree of aromatic substitution. From principal component analysis we infer a correspondence between transparency/opacity observations and chemical information, which correlates with the varying degree of fossilization/coalification among pinnules. © 2011 Elsevier B.V.

  11. Compressive strength, chloride permeability, and freeze-thaw resistance of MWNT concretes under different chemical treatments.

    PubMed

    Wang, Xingang; Rhee, Inkyu; Wang, Yao; Xi, Yunping

    2014-01-01

    This study investigated compressive strength, chloride penetration, and freeze-thaw resistance of multiwalled carbon nanotube (MWNT) concrete. More than 100 cylindrical specimens were used to assess test variables during sensitivity observations, including water-cement ratios (0.75, 0.5, and 0.4) and exposure to chemical agents (including gum arabic, propanol, ethanol, sodium polyacrylate, methylcellulose, sodium dodecyl sulfate, and silane). To determine the adequate sonication time for MWNT dispersal in water, the compressive strengths of MWNT concrete cylinders were measured after sonication times ranging from 2 to 24 minutes. The results demonstrated that the addition of MWNT can increase the compressive strength of concrete by up to 108%. However, without chemical treatment, MWNT concretes tend to have poor freeze-thaw resistance. Among the different chemical treatments, MWNT concrete treated with sodium polyacrylate has the best compressive strength, chloride resistance, and freeze-thaw durability. PMID:25140336

  12. Test Compression for Robust Testable Path Delay Fault Testing Using Interleaving and Statistical Coding

    NASA Astrophysics Data System (ADS)

    Namba, Kazuteru; Ito, Hideo

    This paper proposes a method providing efficient test compression. The proposed method is for robust testable path delay fault testing with scan design facilitating two-pattern testing. In the proposed method, test data are interleaved before test compression using statistical coding. This paper also presents a test architecture for two-pattern testing using the proposed method. The proposed method is experimentally evaluated from several viewpoints, such as compression rate, test application time and area overhead. For robust testable path delay fault testing on 11 out of 20 ISCAS89 benchmark circuits, the proposed method provides better compression rates than existing methods such as Huffman coding, run-length coding, Golomb coding, frequency-directed run-length (FDR) coding and variable-length input Huffman coding (VIHC).
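As an illustration of the statistical-coding building block the abstract compares against, a minimal Huffman code construction (plain Huffman only; the paper's interleaving step and test architecture are not shown):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a symbol sequence.

    Rare symbols get longer codewords, frequent ones shorter; the resulting
    prefix-free code minimizes the total encoded length.
    """
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): '0'}
    # Heap entries: (weight, unique tiebreak, {symbol: code-so-far}).
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        count += 1
        heapq.heappush(heap, (w1 + w2, count, merged))
    return heap[0][2]
```

Interleaving the test data first, as the paper proposes, reshapes the symbol statistics so that a coder like this compresses the scan patterns more effectively.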

  13. Compressive Strength, Chloride Permeability, and Freeze-Thaw Resistance of MWNT Concretes under Different Chemical Treatments

    PubMed Central

    Wang, Xingang; Wang, Yao; Xi, Yunping

    2014-01-01

    This study investigated compressive strength, chloride penetration, and freeze-thaw resistance of multiwalled carbon nanotube (MWNT) concrete. More than 100 cylindrical specimens were used to assess test variables during sensitivity observations, including water-cement ratios (0.75, 0.5, and 0.4) and exposure to chemical agents (including gum arabic, propanol, ethanol, sodium polyacrylate, methylcellulose, sodium dodecyl sulfate, and silane). To determine the adequate sonication time for MWNT dispersal in water, the compressive strengths of MWNT concrete cylinders were measured after sonication times ranging from 2 to 24 minutes. The results demonstrated that the addition of MWNT can increase the compressive strength of concrete by up to 108%. However, without chemical treatment, MWNT concretes tend to have poor freeze-thaw resistance. Among the different chemical treatments, MWNT concrete treated with sodium polyacrylate has the best compressive strength, chloride resistance, and freeze-thaw durability. PMID:25140336

  14. Technique for chest compressions in adult CPR

    PubMed Central

    2011-01-01

    Chest compressions have saved the lives of countless patients in cardiac arrest as they generate a small but critical amount of blood flow to the heart and brain. This is achieved by direct cardiac massage as well as a thoracic pump mechanism. In order to optimize blood flow excellent chest compression technique is critical. Thus, the quality of the delivered chest compressions is a pivotal determinant of successful resuscitation. If a patient is found unresponsive without a definite pulse or normal breathing then the responder should assume that this patient is in cardiac arrest, activate the emergency response system and immediately start chest compressions. Contra-indications to starting chest compressions include a valid Do Not Attempt Resuscitation Order. Optimal technique for adult chest compressions includes positioning the patient supine, and pushing hard and fast over the center of the chest with the outstretched arms perpendicular to the patient's chest. The rate should be at least 100 compressions per minute and any interruptions should be minimized to achieve a minimum of 60 actually delivered compressions per minute. Aggressive rotation of compressors prevents decline of chest compression quality due to fatigue. Chest compressions are terminated following return of spontaneous circulation. Unconscious patients with normal breathing are placed in the recovery position. If there is no return of spontaneous circulation, then the decision to terminate chest compressions is based on the clinical judgment that the patient's cardiac arrest is unresponsive to treatment. Finally, it is important that family and patients' loved ones who witness chest compressions be treated with consideration and sensitivity. PMID:22152601

  15. Compressed data for the movie industry

    NASA Astrophysics Data System (ADS)

    Tice, Bradley S.

    2013-12-01

    The paper will present a compression algorithm that will allow for both random and non-random sequential binary strings of data to be compressed for storage and transmission of media information. The compression system has direct applications to the storage and transmission of digital media such as movies, television, audio signals and other visual and auditory signals needed for engineering practicalities in such industries.

  16. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
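A one-level subband split of the kind used in subband coders can be sketched with the Haar filter pair (an illustrative choice, the simplest perfect-reconstruction filter bank; the paper's actual filters are not specified here):

```python
import numpy as np

def haar_analysis(x):
    """One-level Haar subband split into low-pass (average) and high-pass (detail).

    Each subband has half the samples of the input, so bits can be allocated
    per subband -- the basis of progressive refinement.
    """
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    return low, high

def haar_synthesis(low, high):
    """Perfect-reconstruction inverse of haar_analysis."""
    even = (low + high) / np.sqrt(2.0)
    odd = (low - high) / np.sqrt(2.0)
    x = np.empty(2 * low.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Transmitting the low band first gives the coarse waveform the abstract describes; sending the high band afterwards refines it, and recursing on the low band yields the usual multi-level subband decomposition.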

  17. Cauda equina compression presenting as spontaneous priapism.

    PubMed Central

    Ravindran, M

    1979-01-01

    Disturbance of autonomic function is an unusual feature of compression of the cauda equina. A 61 year old man who had complete occlusion of the lumbar spinal canal with compression of the cauda equina from a large centrally prolapsed disc, had spontaneous priapism, precipitated by walking and relieved by resting. This symptom was comparable to claudication by compression of cauda equina. It subsided completely after surgical removal of a prolapsed L4-5 disc. Images PMID:438839

  18. Energy efficiency improvements in Chinese compressed airsystems

    SciTech Connect

    McKane, Aimee; Li, Li; Li, Yuqi; Taranto, T.

    2007-06-01

    Industrial compressed air systems use more than 9 percent of all electricity used in China. Experience in China and elsewhere has shown that these systems can be much more energy efficient when viewed as a whole system rather than as isolated components. This paper presents a summary and analysis of several compressed air system assessments. Through these assessments, typical compressed air management practices in China are analyzed. Recommendations are made concerning immediate actions that China's enterprises can take to improve compressed air system efficiency using best available technology and management strategies.

  19. Compressible turbulent flows: Modeling and similarity considerations

    NASA Technical Reports Server (NTRS)

    Zeman, Otto

    1991-01-01

    With the recent revitalization of high speed flow research, compressibility presents a new set of challenging problems to turbulence researchers. Questions arise as to what extent compressibility affects turbulence dynamics, structures, the Reynolds stress-mean velocity (constitutive) relation, and the accompanying processes of heat transfer and mixing. In astrophysical applications, compressible turbulence is believed to play an important role in intergalactic gas cloud dynamics and in accretion disk convection. Understanding and modeling of the compressibility effects in free shear flows, boundary layers, and boundary layer/shock interactions is discussed.

  20. Spectral image compression for data communications

    NASA Astrophysics Data System (ADS)

    Hauta-Kasari, Markku; Lehtonen, Juha; Parkkinen, Jussi P. S.; Jaeaeskelaeinen, Timo

    2000-12-01

    We report a technique for spectral image compression to be used in the field of data communications. The spectral domain of the images is represented by a low-dimensional component image set, which is used to obtain an efficient compression of the high-dimensional spectral data. The component images are compressed using a technique similar to the chrominance subsampling employed by JPEG- and MPEG-type compression. The spectral compression is based on Principal Component Analysis (PCA) combined with the color image transmission coding technique of 'chromatic channel subsampling' of the component images. The component images are subsampled using 4:2:2, 4:2:0, and 4:1:1-based compressions. In addition, we extended the tests to larger block sizes and larger numbers of component images than in the original JPEG and MPEG standards. In total, 50 natural spectral images were used as test material in our experiments. Several error measures of the compression are reported. The same compressions are performed using Independent Component Analysis and the results are compared with PCA. These methods give a good compression ratio while keeping the visual quality of the color images good. Quantitative comparisons between the original and reconstructed spectral images are presented.
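A minimal sketch of the PCA step described above, reducing the spectral dimension of an image (pixels x bands) to a few component images (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def pca_compress(spectra, n_components):
    """Compress spectra (n_pixels x n_bands) to n_components PCA scores.

    Returns (scores, basis, mean) such that spectra ~ scores @ basis + mean.
    `scores` are the low-dimensional component images that would then be
    subsampled and transmitted.
    """
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # SVD of the centered data; rows of Vt are the principal components.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]
    scores = centered @ basis.T
    return scores, basis, mean
```

Natural reflectance spectra are highly correlated across bands, so a handful of components typically captures almost all of the spectral variance; the residual error is what the reported error measures quantify.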

  1. Compression of digital chest x-rays

    NASA Astrophysics Data System (ADS)

    Cohn, Michael; Trefler, Martin; Young, Tzay S.

    1990-07-01

    The application of digital technologies to chest radiography holds the promise of routine application of image processing techniques to effect image enhancement. However, due to their high inherent spatial resolution, digital chest images impose severe constraints on data storage devices. Compression of these images will relax such constraints and facilitate image transmission on a digital network. We have evaluated image processing algorithms aimed at compression of digital chest images while improving the diagnostic quality of the image. Image quality has been measured with respect to the task of tumor detection. Compression ratios as high as 2:1 have been achieved. This compression can then be supplemented by irreversible methods.

  2. Pulsed spheromak reactor with adiabatic compression

    SciTech Connect

    Fowler, T K

    1999-03-29

    Extrapolating from the Pulsed Spheromak reactor and the LINUS concept, we consider ignition achieved by injecting a conducting liquid into the flux conserver to compress a low temperature spheromak created by gun injection and ohmic heating. The required energy to achieve ignition and high gain by compression is comparable to that required for ohmic ignition and the timescale is similar so that the mechanical power to ignite by compression is comparable to the electrical power to ignite ohmically. Potential advantages and problems are discussed. Like the High Beta scenario achieved by rapid fueling of an ohmically ignited plasma, compression must occur on timescales faster than Taylor relaxation.

  3. Evaluation and Management of Vertebral Compression Fractures

    PubMed Central

    Alexandru, Daniela; So, William

    2012-01-01

    Compression fractures affect many individuals worldwide. An estimated 1.5 million vertebral compression fractures occur every year in the US. They are common in elderly populations, and 25% of postmenopausal women are affected by a compression fracture during their lifetime. Although these fractures rarely require hospital admission, they have the potential to cause significant disability and morbidity, often causing incapacitating back pain for many months. This review provides information on the pathogenesis and pathophysiology of compression fractures, as well as clinical manifestations and treatment options. Among the available treatment options, kyphoplasty and percutaneous vertebroplasty are two minimally invasive techniques to alleviate pain and correct the sagittal imbalance of the spine. PMID:23251117

  4. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is derived using visual masking by luminance and contrast, together with an error-pooling technique, resulting in minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
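
    A minimal sketch of quantization-matrix DCT coding on a single 8x8 block. The ramp quantization matrix below is a stand-in for illustration, not the patent's perceptually derived matrix:

```python
import numpy as np

# Orthonormal DCT-II basis for an 8x8 block.
N = 8
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(N, N)).astype(float)

coeffs = C @ block @ C.T                  # forward 2-D DCT
Q = 1.0 + np.add.outer(n, n)              # step size grows with frequency:
                                          # coarser quantization of detail
quantized = np.round(coeffs / Q)          # the lossy step: integer levels
restored = C.T @ (quantized * Q) @ C      # dequantize + inverse DCT

max_err = np.abs(restored - block).max()
```

    The invention's point is choosing `Q` per image so that the quantization error is perceptually minimal; the ramp here only shows the mechanism.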

  5. [Irreversible image compression in radiology. Current status].

    PubMed

    Pinto dos Santos, D; Jungmann, F; Friese, C; Düber, C; Mildenberger, P

    2013-03-01

    Due to increasing amounts of data in radiology, methods for image compression are both economically and technically interesting. Irreversible image compression allows a markedly higher reduction of data volume than reversible compression algorithms but is accompanied by a certain amount of mathematical and visual loss of information. Various national and international radiological societies have published recommendations for the use of irreversible image compression. The degree of acceptable compression varies across modalities and regions of interest. The DICOM standard supports JPEG, which achieves compression through tiling, DCT/DWT and quantization. Although mathematical loss occurs due to rounding errors and reduction of high-frequency information, this results in relatively low visual degradation. It is still unclear where to implement irreversible compression in the radiological workflow, as only few studies have analyzed the impact of irreversible compression on specialized image postprocessing. As long as the compression is within the limits recommended by the German Radiological Society, irreversible image compression could be implemented directly at the imaging modality, as this would comply with § 28 of the German X-ray ordinance (RöV). PMID:23456043

  6. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  7. Single-pixel complementary compressive sampling spectrometer

    NASA Astrophysics Data System (ADS)

    Lan, Ruo-Ming; Liu, Xue-Feng; Yao, Xu-Ri; Yu, Wen-Kai; Zhai, Guang-Jie

    2016-05-01

    A new type of compressive spectroscopy technique employing a complementary sampling strategy is reported. In a single sequence of spectral compressive sampling, positive and negative measurements are performed, in which sensing matrices with a complementary relationship are used. The restricted isometry property condition necessary for accurate recovery of compressive sampling theory is satisfied mathematically. Compared with the conventional single-pixel spectroscopy technique, the complementary compressive sampling strategy can achieve spectral recovery of considerably higher quality within a shorter sampling time. We also investigate the influence of the sampling ratio and integration time on the recovery quality.
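
    The complementary measurement idea can be illustrated numerically. The sizes, background level, and plain least-squares recovery below are assumptions for the sketch, not the instrument's actual parameters:

```python
import numpy as np

# Each binary pattern A (0/1 entries) is paired with its complement 1-A.
# Subtracting the two single-pixel readings cancels any additive background
# common to both and yields an effective +/-1 (bipolar) sensing matrix.
rng = np.random.default_rng(4)
n, m = 64, 80                             # more measurements than unknowns,
                                          # to keep the recovery a simple
                                          # least-squares fit
spectrum = np.convolve(rng.random(n), np.ones(5) / 5, mode="same")

A = rng.integers(0, 2, size=(m, n)).astype(float)
background = 3.0                          # stray light hitting the detector
y_pos = A @ spectrum + background         # positive measurements
y_neg = (1 - A) @ spectrum + background   # complementary measurements

y_diff = y_pos - y_neg                    # background cancels exactly
B = 2 * A - 1                             # effective bipolar sensing matrix
recovered = np.linalg.lstsq(B, y_diff, rcond=None)[0]

err = np.linalg.norm(recovered - spectrum) / np.linalg.norm(spectrum)
```

    In the compressive setting of the paper, m would be smaller than n and the least-squares step would be replaced by a sparse-recovery solver.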

  8. Shear/compressive fatigue of insulation systems at low temperatures

    NASA Astrophysics Data System (ADS)

    Reed, R. P.; Fabian, P. E.; Bauer-McDaniel, T. S.

    Fatigue tests under combined compression and shear loading were conducted at 76 K on four types of insulation systems fabricated by vacuum-pressure impregnation and pre-impregnation. Fixtures developed for static tests with loading angles of 15°, 45°, 75°, 84°, and 90° were used to apply cyclic loads. Fatigue tests were conducted for each material over a fatigue-life range from 1 to 10^6 cycles. The constructed fatigue S-N curves were approximately linear for all materials; data variability was remarkably low.

  9. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.
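
    A sketch of the frequency-sensitive competitive learning (FSCL) codebook training named above. The data, codebook size, and learning rate are illustrative choices, not the paper's:

```python
import numpy as np

# FSCL biases the winner search by each codeword's win count, which keeps
# all codewords in use (no "dead" codebook entries).
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(c, 0.1, size=(200, 2))
                  for c in [(0, 0), (1, 0), (0, 1), (1, 1)]])

K = 4
codebook = rng.normal(0.5, 0.1, size=(K, 2))  # poor initialization on purpose
wins = np.ones(K)

for x in rng.permutation(data):
    dist = np.linalg.norm(codebook - x, axis=1)
    j = np.argmin(dist * wins)            # frequency-sensitive winner
    codebook[j] += 0.05 * (x - codebook[j])
    wins[j] += 1

usage = wins / wins.sum()                 # how evenly codewords are used
quant_err = np.mean([np.min(np.linalg.norm(codebook - x, axis=1))
                     for x in data])
```

    In the differential scheme of the paper, the quantizer would be applied to prediction residuals rather than raw vectors.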

  10. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  11. Predicting catastrophes in nonlinear dynamical systems by compressive sensing

    PubMed Central

    Wang, Wen-Xu; Yang, Rui; Lai, Ying-Cheng; Kovanis, Vassilios; Grebogi, Celso

    2013-01-01

    An extremely challenging problem of significant interest is to predict catastrophes in advance of their occurrences. We present a general approach to predicting catastrophes in nonlinear dynamical systems under the assumption that the system equations are completely unknown and only time series reflecting the evolution of the dynamical variables of the system are available. Our idea is to expand the vector field or map of the underlying system into a suitable function series and then to use the compressive-sensing technique to accurately estimate the various terms in the expansion. Examples using paradigmatic chaotic systems are provided to demonstrate our idea and potential challenges are discussed. PMID:21568562
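
    The sparse-recovery step can be sketched as follows. A random Gaussian matrix stands in for the data matrix built from the function series, and ISTA (iterative soft thresholding) is one standard lasso solver; the paper does not specify this particular choice, and all sizes are illustrative:

```python
import numpy as np

# The unknown expansion coefficients form a sparse vector x_true; we recover
# it from fewer measurements than unknowns.
rng = np.random.default_rng(3)
n, m, k = 64, 32, 3                       # unknowns, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = np.array([1.5, -2.0, 1.0])

A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                            # noiseless measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):                     # ISTA: gradient step + shrinkage
    g = x + (A.T @ (y - A @ x)) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

# Debias: least squares on the k largest-magnitude entries.
top = np.argsort(np.abs(x))[-k:]
coef, *_ = np.linalg.lstsq(A[:, top], y, rcond=None)
x_hat = np.zeros(n)
x_hat[top] = coef
```

    In the paper's setting, each row of `A` would contain the basis functions evaluated along the measured time series, and the nonzero entries of `x_hat` identify which terms actually appear in the system equations.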

  12. Ultraspectral sounder data compression using a novel marker-based error-resilient arithmetic coder

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Sriraja, Y.; Wei, Shih-Chieh

    2006-08-01

    Entropy coding techniques aim to achieve the entropy of the source data by assigning variable-length codewords to symbols, with the code lengths linked to the corresponding symbol probabilities. Entropy coders (e.g. Huffman coding, arithmetic coding), in one form or another, are commonly used as the last stage in various compression schemes. While these variable-length coders provide better compression than fixed-length coders, they are vulnerable to transmission errors: even a single bit error can wreak havoc in the subsequent decoded stream. To cope with this, this research proposes a marker-based sentinel mechanism in entropy coding for error detection and recovery. We use arithmetic coding as an example to demonstrate this error-resilient technique for entropy coding. Experimental results on ultraspectral sounder data indicate that the marker-based error-resilient arithmetic coder provides remarkable robustness to transmission errors without significantly compromising the compression gains.
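
    The sentinel idea can be illustrated with a toy byte stream. A real variable-length decoder would resynchronize by scanning for the marker; fixed-size segments keep this sketch short, and the segment size and marker value are arbitrary assumptions:

```python
# A known marker is appended to each segment of the compressed stream. A
# decoder that finds a corrupted marker discards only that segment, so an
# error cannot propagate past the next marker (unlike in an unprotected
# variable-length stream, where it corrupts everything downstream).
MARKER = b"\xff\x00"

def protect(payload, seg=8):
    """Cut payload into segments and append the sentinel marker to each."""
    out = bytearray()
    for i in range(0, len(payload), seg):
        out += payload[i:i + seg] + MARKER
    return bytes(out)

def recover(stream, seg=8):
    """Return decodable segments; None marks a damaged one."""
    step = seg + len(MARKER)
    segments = []
    for i in range(0, len(stream), step):
        chunk = stream[i:i + step]
        if chunk.endswith(MARKER):
            segments.append(chunk[:-len(MARKER)])
        else:
            segments.append(None)         # error detected; resync at next marker
    return segments

data = bytes(range(32))
stream = bytearray(protect(data))
stream[8] ^= 0x40                         # flip a bit in the first marker
segments = recover(bytes(stream))
```

    Here only the first segment is lost; the remaining three decode intact.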

  13. Underwing compression vortex attenuation device

    NASA Technical Reports Server (NTRS)

    Patterson, James C., Jr. (Inventor)

    1993-01-01

    A vortex attenuation device is presented which dissipates a lift-induced vortex generated by a lifting aircraft wing. The device consists of a positive-pressure-gradient-producing means in the form of a compression panel attached to the lower surface of the wing and facing perpendicular to the airflow across the wing. The panel is located between the midpoint of the local wing chord and the trailing edge in the chordwise direction, and at a point approximately 55 percent of the wing span, measured from the fuselage center line, in the spanwise direction. When deployed in flight, this panel produces a positive pressure gradient aligned with the final roll-up of the total vortex system, which interrupts the axial flow in the vortex core and causes the vortex to collapse.

  14. Genetic disorders producing compressive radiculopathy.

    PubMed

    Corey, Joseph M

    2006-11-01

    Back pain is a frequent complaint seen in neurological practice. In evaluating back pain, neurologists are asked to evaluate patients for radiculopathy, determine whether they may benefit from surgery, and help guide management. Although disc herniation is the most common etiology of compressive radiculopathy, there are many other causes, including genetic disorders. This article is a discussion of genetic disorders that cause or contribute to radiculopathies. These genetic disorders include neurofibromatosis, Paget's disease of bone, and ankylosing spondylitis. Numerous genetic disorders can also lead to deformities of the spine, including spinal muscular atrophy, Friedreich's ataxia, Charcot-Marie-Tooth disease, familial dysautonomia, idiopathic torsional dystonia, Marfan's syndrome, and Ehlers-Danlos syndrome. However, the extent of radiculopathy caused by spine deformities is essentially absent from the literature. Finally, recent investigation into the heritability of disc degeneration and lumbar disc herniation suggests a significant genetic component in the etiology of lumbar disc disease. PMID:17048153

  15. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293

  16. Compressive imaging in scattering media.

    PubMed

    Durán, V; Soldevila, F; Irles, E; Clemente, P; Tajahuerce, E; Andrés, P; Lancis, J

    2015-06-01

    One challenge that has long held the attention of scientists is that of clearly seeing objects hidden by turbid media, such as smoke, fog, or biological tissue, which has major implications in fields such as remote sensing and early diagnosis of diseases. Here, we combine structured incoherent illumination and bucket detection for imaging an absorbing object completely embedded in a scattering medium. A sequence of low-intensity microstructured light patterns is launched onto the object, whose image is accurately reconstructed from the light fluctuations measured by a single-pixel detector. Our technique is noninvasive, does not require coherent sources, raster scanning, or time-gated detection, and benefits from the compressive sensing strategy. As a proof of concept, we experimentally retrieve the image of a transilluminated target both sandwiched between two holographic diffusers and embedded in a 6 mm-thick sample of chicken breast. PMID:26072804

  17. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, R.W.; Hrubesh, L.W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner is disclosed. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50--800 kg/m{sup 3} (0.05--0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization. 4 figs.

  18. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1995-01-01

    Liquid hydrazine (N₂H₄) is a propellant used by the Air Force and NASA for aerospace propulsion and power systems. Because the propellant modules that contain the hydrazine can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted to investigate the detonability of liquid hydrazine; however, the experiments' results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares them to those obtained in experiment. We also present the methodology of our approach, which includes chemical kinetic experiments, chemical equilibrium calculations, and characterization of the equation of state of liquid hydrazine.

  19. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1996-05-01

    Liquid hydrazine (N₂H₄) is a propellant used for aerospace propulsion and power systems. Because the propellant modules can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted to investigate the shock detonability of liquid hydrazine; however, the experiments' results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares the results to those obtained in experiment. © 1996 American Institute of Physics.

  20. Compressibility of Mercury's dayside magnetosphere

    NASA Astrophysics Data System (ADS)

    Zhong, J.; Wan, W. X.; Wei, Y.; Slavin, J. A.; Raines, J. M.; Rong, Z. J.; Chai, L. H.; Han, X. H.

    2015-12-01

    Mercury experiences significant variations in solar wind forcing along its highly eccentric orbit. With 12 Mercury years of data from the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission, we demonstrate that Mercury's distance from the Sun has a great effect on the size of the dayside magnetosphere, one much larger than the temporal variations. The mean solar wind standoff distance was found to be about 0.27 Mercury radii (RM) closer to Mercury at perihelion than at aphelion. At perihelion the subsolar magnetopause can be compressed below 1.2 RM ~2.5% of the time. The relationship between the average magnetopause standoff distance and heliocentric distance suggests that, on average, the effects of the erosion process appear to counterbalance those of induction in Mercury's interior at perihelion. However, at aphelion, where solar wind pressure is lower and the Alfvénic Mach number is higher, the effects of induction appear dominant.

  1. Compressed Sensing Based Interior Tomography

    PubMed Central

    Yu, Hengyong; Wang, Ge

    2010-01-01

    While the conventional wisdom is that the interior problem does not have a unique solution, by analytic continuation we recently showed that the interior problem can be uniquely and stably solved if we have a known sub-region inside a region-of-interest (ROI). However, such a known sub-region is not always readily available, and in some cases it is even impossible to find. Based on compressed sensing theory, here we prove that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via total variation minimization. Because many objects in CT applications can be approximately modeled as piecewise constant, our approach is practically useful and suggests a new research direction for interior tomography. To illustrate the merits of our finding, we develop an iterative interior reconstruction algorithm that minimizes the total variation of the reconstructed image, and we evaluate its performance in numerical simulation. PMID:19369711
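
    The total variation (TV) minimization at the heart of this result can be sketched in 1-D. This is plain TV denoising of a piecewise-constant signal with a smoothed penalty and gradient descent, not the paper's CT reconstruction algorithm; the signal, noise level, and weights are illustrative:

```python
import numpy as np

# Minimize 0.5*||x - y||^2 + lam * sum_i sqrt(d_i^2 + eps^2), where
# d_i = x_{i+1} - x_i is a smoothed stand-in for the total variation.
rng = np.random.default_rng(5)
clean = np.concatenate([np.full(30, 0.0), np.full(30, 1.0), np.full(30, 0.4)])
noisy = clean + 0.1 * rng.normal(size=clean.size)

lam, eps = 0.3, 0.05
x = noisy.copy()
step = 1.0 / (1.0 + 4.0 * lam / eps)      # safe step for the smoothed energy

for _ in range(5000):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps * eps)    # gradient of the smoothed |d_i|
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= w
    tv_grad[1:] += w
    x -= step * ((x - noisy) + lam * tv_grad)

err_noisy = np.linalg.norm(noisy - clean)
err_tv = np.linalg.norm(x - clean)
```

    TV favors piecewise-constant solutions: noise in the flat regions is suppressed while the two jumps survive, which is exactly the property the interior tomography proof exploits.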

  2. The inviscid compressible Goertler problem

    NASA Technical Reports Server (NTRS)

    Dando, Andrew; Seddougui, Sharon O.

    1991-01-01

    The growth rate of Goertler vortices in a compressible flow is studied in the inviscid limit of large Goertler number. Numerical solutions are obtained for O(1) wavenumbers. The further limits of large Mach number, and of large wavenumber with O(1) Mach number, are considered. It is shown that two different types of disturbance modes can appear in this problem. The first is a wall layer mode, so named as its eigenfunctions are concentrated in a thin layer near the wall. The second mode has its eigenfunctions trapped in a thin layer away from the wall; it is termed a trapped-layer mode for large wavenumbers and an adjustment-layer mode for large Mach numbers, since in that limit its eigenfunctions are concentrated in the temperature adjustment layer. The near crossing of the modes which occurs in each of the limits mentioned is investigated.

  3. High energy femtosecond pulse compression

    NASA Astrophysics Data System (ADS)

    Lassonde, Philippe; Mironov, Sergey; Fourmaux, Sylvain; Payeur, Stéphane; Khazanov, Efim; Sergeev, Alexander; Kieffer, Jean-Claude; Mourou, Gerard

    2016-07-01

    An original method for retrieving the Kerr nonlinear index was proposed and implemented for TF12 heavy flint glass. A defocusing lens made of this highly nonlinear glass was then used to generate an almost constant spectral broadening across a Gaussian beam profile. The lens was designed with spherical curvatures chosen to match the laser beam profile, such that the product of thickness and intensity is constant. This solid-state optic, in combination with chirped mirrors, was used to decrease the pulse duration at the output of a terawatt-class femtosecond laser. We demonstrated compression of a 33 fs pulse to 16 fs with 170 mJ of energy.

  4. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, Richard W.; Hrubesh, Lawrence W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50-800 kg/m.sup.3 (0.05-0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization.

  5. Compressed natural gas measurement issues

    SciTech Connect

    Blazek, C.F.; Kinast, J.A.; Freeman, P.M.

    1993-12-31

    The Natural Gas Vehicle Coalition's Measurement and Metering Task Group (MMTG) was established on July 1, 1992 to develop suggested revisions to National Institute of Standards and Technology (NIST) Handbook 44-1992 (Specifications, Tolerances, and Other Technical Requirements for Weighing and Measuring Devices) and NIST Handbook 130-1991 (Uniform Laws and Regulations). Specifically, the suggested revisions address the sale and measurement of compressed natural gas when sold as a motor vehicle fuel. This paper briefly discusses the activities of the MMTG and its interaction with NIST. The paper also discusses the Institute of Gas Technology's (IGT) support of the MMTG in the areas of natural gas composition and its impact on metering technology applicable to high-pressure fueling stations, as well as conversion factors for establishing the "gallon gasoline equivalent" of natural gas. The final portion of this paper discusses IGT's meter research activities and its meter test facility.

  6. Survey of data compression techniques

    SciTech Connect

    Gryder, R.; Hake, K.

    1991-09-01

    PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  8. Compressed natural gas (CNG) measurement

    SciTech Connect

    Husain, Z.D.; Goodson, F.D.

    1995-12-01

    The increased level of environmental awareness has raised concerns about pollution. One area of high attention is the internal combustion engine. The internal combustion engine in and of itself is not a major pollution threat; however, the vast number of motor vehicles in use release large quantities of pollutants. Recent technological advances in ignition and engine controls, coupled with unleaded fuels and catalytic converters, have reduced vehicular emissions significantly. Alternative fuels have the potential to produce even greater reductions in emissions. The Natural Gas Vehicle (NGV) has been a significant alternative for accomplishing the goal of cleaner combustion. Of the many alternative fuels under investigation, compressed natural gas (CNG) has demonstrated the lowest levels of emission. The only vehicle certified by the State of California as an Ultra Low Emission Vehicle (ULEV) was powered by CNG. The California emissions tests of the ULEV-CNG vehicle revealed the following concentrations: non-methane hydrocarbons, 0.005 grams/mile; carbon monoxide, 0.300 grams/mile; nitrogen oxides, 0.040 grams/mile. Unfortunately, CNG vehicles will not gain significant popularity until compressed natural gas is readily available in convenient locations in urban areas and in proximity to the Interstate highway system. Approximately 150,000 gasoline filling stations exist in the United States, while the number of CNG stations is about 1000, and many of those are limited to fleet service only. The discussion in this paper concentrates on CNG flow measurement for fuel dispensers. Since regulatory changes and market demands affect flow metering and dispenser station design, those aspects are discussed. The CNG industry faces a number of challenges.
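
    The "gallon gasoline equivalent" (GGE) conversion mentioned above reduces to a short calculation. NIST Handbook 44 defines 1 GGE as 5.660 lb of natural gas; the gas density used below is an assumed typical value, since the actual figure varies with composition:

```python
# GGE conversion sketch. LB_PER_GGE is the NIST Handbook 44 definition;
# GAS_DENSITY_KG_M3 is an assumed pipeline-quality value at standard
# conditions, not a measured one.
LB_PER_GGE = 5.660
LB_PER_KG = 2.20462
GAS_DENSITY_KG_M3 = 0.68                  # assumption: varies with composition

kg_per_gge = LB_PER_GGE / LB_PER_KG       # mass of gas per GGE
m3_per_gge = kg_per_gge / GAS_DENSITY_KG_M3  # volume at standard conditions
```

    A mass-based definition is what makes dispenser metering tractable: the dispensed mass is composition-independent, while any volume figure must be corrected for gas quality.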

  9. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
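
    The wavelet-plus-scalar-quantization pipeline can be sketched with a one-level Haar transform on a toy image. The real WSQ uses 9/7 biorthogonal filters, a 64-subband decomposition, and Huffman coding, so everything below is a simplification with made-up step sizes:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: [LL LH; HL HH] subband layout."""
    a = (img[0::2] + img[1::2]) / 2       # vertical averages
    d = (img[0::2] - img[1::2]) / 2       # vertical details
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2
    return np.hstack([a2, d2])

def ihaar2d(coef):
    """Exact inverse of haar2d."""
    n = coef.shape[1] // 2
    a2, d2 = coef[:, :n], coef[:, n:]
    rows = np.empty_like(coef)
    rows[:, 0::2] = a2 + d2
    rows[:, 1::2] = a2 - d2
    m = coef.shape[0] // 2
    a, d = rows[:m], rows[m:]
    img = np.empty_like(coef)
    img[0::2] = a + d
    img[1::2] = a - d
    return img

# Toy 32x32 image, piecewise constant on 4x4 blocks.
rng = np.random.default_rng(6)
img = np.kron(rng.random((8, 8)), np.ones((4, 4))) * 255

coef = haar2d(img)
n = img.shape[0] // 2
Q = np.full_like(coef, 16.0)              # coarse step for detail subbands
Q[:n, :n] = 2.0                           # fine step for the low-pass band
restored = ihaar2d(np.round(coef / Q) * Q)

max_err = np.abs(restored - img).max()
```

    Per-subband step sizes are the "scalar quantization" stage; in WSQ the bit allocation chooses them so that visually important subbands keep more precision.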

  10. Shock compression response of Ti+B reactive powder mixtures

    NASA Astrophysics Data System (ADS)

    Gonzales, Manny; Gurumurthy, Ashok; Kennedy, Gregory; Gokhale, Arun; Thadhani, Naresh

    2013-06-01

    The shock compression response of Ti+2B (1:2 Ti:B stoichiometric ratio) reactive powder mixtures at ~50% theoretical material density (TMD) is investigated for shock pressures up to 5 GPa, to assess the possible shock-induced chemical reactivity of this highly exothermic mixture. The shock adiabat is produced from instrumented parallel-plate gas-gun impact experiments on encapsulated powders, using poly-vinylidene fluoride (PVDF) stress gauges to measure the input and propagated stress and the wave speed in the powder. The shock compression regime is probed from crush-up to full density and beyond, to assess the potential onset of a shock-induced chemical reaction event in the powder mixture. A series of two-dimensional continuum meso-scale simulations on real and simulated microstructures is performed to predict the shock compression response and identify the meso-scale mechanics that are essential for the so-called ``ballotechnic'' reaction. These meso-scale mechanics are investigated through stereological evolution metrics that track particle interface evolution and the respective field variables. The suitability of the synthetic microstructural representations is evaluated by comparing the experimental and predicted pressure traces. We gratefully acknowledge support and funding from DTRA through Grant No. HDTRA1-10-1-0038 and the National Defense Science and Engineering Graduate (NDSEG) Fellowship through the High Performance Computing and Modernization Office (HPCMO).

  11. The stability of compressible mixing layers in binary gases

    NASA Technical Reports Server (NTRS)

    Kozusko, F.; Lasseigne, D. G.; Grosch, C. E.; Jackson, T. L.

    1996-01-01

    We present the results of a study of the inviscid two-dimensional spatial stability of a parallel compressible mixing layer in a binary gas. The parameters of this study are the Mach number of the fast stream, the ratio of the velocity of the slow stream to that of the fast stream, the ratio of the temperatures, the composition of the gas in the slow stream and in the fast stream, and the frequency of the disturbance wave. The ratio of the molecular weight of the slow stream to that of the fast stream is found to be an important quantity and is used as an independent variable in presenting the stability characteristics of the flow. It is shown that differing molecular weights have a significant effect on the neutral-mode phase speeds, the phase speeds of the unstable modes, the maximum growth rates, and the unstable frequency range of the disturbances. The molecular weight ratio is a reasonable predictor of the stability trends. We have further demonstrated that the normalized growth rate as a function of the convective Mach number is relatively insensitive (to within approximately 25%) to changes in the composition of the mixing layer. Thus, the normalized growth rate is a key element when considering the stability of compressible mixing layers, since once the basic stability characteristics for a particular combination of gases are known at zero Mach number, the decrease in growth rates due to compressibility effects at the larger convective Mach numbers is somewhat predictable.

  12. Effect of force feeder on tablet strength during compression.

    PubMed

    Narang, Ajit S; Rao, Venkatramana M; Guo, Hang; Lu, Jian; Desai, Divyakant S

    2010-11-30

    Mechanical strength of tablets is an important quality attribute, which depends on both formulation and process. In this study, the effect of process variables during compression on tablet tensile strength and tabletability (the ratio of tensile strength to compression pressure) was investigated using a model formulation. Increases in turret and force-feeder speeds reduced tablet tensile strength and tabletability. Turret speed affected tabletability through changes in dwell time under the compression cam and the kinetics of consolidation of granules in the die cavity. The effect of the force feeder was attributed to the shearing of the granulation, leading to its over-lubrication. A dimensionless equation was derived to estimate the total shear imparted by the force feeder on the granulation in terms of a shear number. Scale-independence of the relationship of tabletability with the shear number was explored on a 6-station Korsch press, a 16-station Betapress, and a 35-station Korsch XL-400 press. The use of this relationship, the exact nature of which may be formulation dependent, during tablet development is expected to provide guidance for the scale-up and interchangeability of tablet presses. PMID:20816733

  13. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Compressed file sizes are reduced substantially, by about 50% on average, for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
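
The distance-based foveation idea (blur growing with distance from a high-priority region) can be sketched as a per-pixel blend between the sharp frame and a blurred copy. The Gaussian distance weighting and the crude 3x3 box blur below are our own simplifications for illustration, not the paper's foveation filter:

```python
import numpy as np

def foveate(frame, cx, cy, sigma=20.0):
    """Blend each pixel of a grayscale frame between the sharp original
    and a blurred copy, with blur weight growing with distance from the
    foveation center (cx, cy)."""
    h, w = frame.shape
    # Crude blur: 3x3 box filter built from shifted views (edge-padded).
    p = np.pad(frame, 1, mode='edge')
    blurred = sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - cx, yy - cy)
    wgt = 1.0 - np.exp(-(dist ** 2) / (2 * sigma ** 2))  # 0 at fovea, -> 1 far away
    return (1 - wgt) * frame + wgt * blurred

# A single bright pixel: sharp at the fovea, smeared far from it.
frame = np.zeros((16, 16))
frame[8, 8] = 9.0
out = foveate(frame, cx=8, cy=8, sigma=2.0)
```

In a real encoder the blurred periphery compresses far better than the sharp original, which is where the reported file-size reduction comes from.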

  14. Bunch length compression method for free electron lasers to avoid parasitic compressions

    DOEpatents

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch-length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56 > 0), and 3) compensating for aberration by using nonlinear magnets in the compressor beam line.

  15. Underwing Compression Vortex-Attenuation Device

    NASA Technical Reports Server (NTRS)

    Patterson, James C., Jr.

    1994-01-01

    Underwing compression vortex-attenuation device designed to provide method for attenuating lift-induced vortex generated by wings of airplane. Includes compression panel attached to lower surface of wing, facing perpendicular to streamwise airflow. Concept effective on all types of aircraft. Causes increase in wing lift rather than reduction when deployed. Device of interest to aircraft designers and enhances air safety in general.

  16. Hardware compression using common portions of data

    DOEpatents

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.
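
The sequence of steps in the claim (receive chunks, sample some of them, extract a common portion, store the remainders) might look roughly like this toy sketch. Treating the "common portion" as a shared byte prefix, and all function names here, are our assumptions for illustration, not the patented method:

```python
def common_prefix(chunks):
    """Longest byte prefix shared by every chunk in the sample."""
    shortest = min(chunks, key=len)
    for i, byte in enumerate(shortest):
        if any(c[i] != byte for c in chunks):
            return shortest[:i]
    return shortest

def compress(chunks, sample_size=4):
    """Sample a few chunks, extract their shared prefix, and store one
    copy of the common portion plus per-chunk remainders."""
    common = common_prefix(chunks[:sample_size])
    n = len(common)
    flags = [c.startswith(common) for c in chunks]       # which chunks match
    remainders = [c[n:] if f else c for c, f in zip(chunks, flags)]
    return common, flags, remainders

def decompress(common, flags, remainders):
    return [common + r if f else r for f, r in zip(flags, remainders)]

chunks = [b"HDR:aaa", b"HDR:bbb", b"HDR:ccc"]
common, flags, remainders = compress(chunks, sample_size=2)
```

Chunks that do not share the sampled prefix keep a flag so they round-trip unchanged.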

  17. Sudden Viscous Dissipation of Compressing Turbulence

    NASA Astrophysics Data System (ADS)

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-03-01

    Compression of turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion.

  18. Sudden Viscous Dissipation of Compressing Turbulence.

    PubMed

    Davidovits, Seth; Fisch, Nathaniel J

    2016-03-11

    Compression of turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion. PMID:27015488

  19. LOW-VELOCITY COMPRESSIBLE FLOW THEORY

    EPA Science Inventory

    The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...

  20. Sudden Viscous Dissipation of Compressing Turbulence

    DOE PAGESBeta

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-03-11

    Here we report that compression of turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion.

  1. Compression of turbulence-affected video signals

    NASA Astrophysics Data System (ADS)

    Mahpod, Shahar; Yitzhaky, Yitzhak

    2009-08-01

    A video signal obtained through a relatively long-distance atmospheric medium suffers from blur and spatiotemporal image movements caused by air turbulence. These phenomena reduce the visual quality of the signal, reduce the compression rate achievable with motion-estimation-based video compression techniques, and increase the required bandwidth of the compressed signal. The reduction in compression rate results from the many random local image movements, which differ from one image to the next as a result of the turbulence effects. In this research we examined how the compression rate can be increased, by developing and comparing two approaches. In the first approach, a pre-processing image restoration is performed first, which includes reduction of the random movements in the video signal and, optionally, de-blurring of the images; a standard compression process is then carried out. In this case, the final decompressed video signal is a restored version of the recorded one. The second approach attempts to predict turbulence-induced motion vectors from the latest images in the sequence. In this approach, the final decompressed image should be as close as possible to the recorded image (including the spatiotemporal movements). It was found that the first approach improves the compression ratio. For the second approach, it was found that after running a short temporal median on the video sequence the progress of the turbulence optical flow can be predicted very well, but this result was not enough to produce a significant improvement at this stage.
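
The short temporal median mentioned for the second approach can be sketched as a running median over a small window of frames; static scene content survives while short-lived turbulent jitter is suppressed. The window handling below is our own minimal choice, not the paper's exact procedure:

```python
import numpy as np

def temporal_median(frames, window=3):
    """Running per-pixel median over a short temporal window of frames."""
    frames = np.asarray(frames, dtype=float)   # shape: (time, height, width)
    out = np.empty_like(frames)
    n = len(frames)
    for t in range(n):
        lo = max(0, t - window // 2)
        hi = min(n, t + window // 2 + 1)
        out[t] = np.median(frames[lo:hi], axis=0)
    return out

# Middle frame is a transient outlier; the median rejects it.
frames = [np.ones((2, 2)), 5 * np.ones((2, 2)), np.ones((2, 2))]
out = temporal_median(frames, window=3)
```

The median of each pixel's short history approximates the undistorted scene, giving a stable reference from which turbulence-induced motion vectors can be predicted.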

  2. A Comparative Study of Compression Video Technology.

    ERIC Educational Resources Information Center

    Keller, Chris A.; And Others

    The purpose of this study was to provide an overview of compression devices used to increase the cost effectiveness of teleconferences by reducing satellite bandwidth requirements for the transmission of television pictures and accompanying audio signals. The main body of the report describes the comparison study of compression rates and their…

  3. Recoil Experiments Using a Compressed Air Cannon

    ERIC Educational Resources Information Center

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  4. Classical data compression with quantum side information

    SciTech Connect

    Devetak, I.; Winter, A.

    2003-10-01

    The problem of classical data compression when the decoder has quantum side information at his disposal is considered. This is a quantum generalization of the classical Slepian-Wolf theorem. The optimal compression rate is found to be reduced from the Shannon entropy of the source by the Holevo information between the source and side information.
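
The rate result stated in the abstract can be written compactly (the notation is ours: $X$ is the classical source, $B$ the decoder's quantum side information, and $I(X;B)$ the Holevo information of the induced ensemble $\{p_x, \rho_x^B\}$):

```latex
R_{\min} \;=\; H(X) \;-\; I(X;B),
\qquad
I(X;B) \;=\; S\!\Big(\sum_x p_x \rho_x^B\Big) \;-\; \sum_x p_x\, S\big(\rho_x^B\big),
```

where $H$ is the Shannon entropy and $S$ the von Neumann entropy. Setting the side information to a classical register recovers the classical Slepian-Wolf rate $H(X|B)$.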

  5. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.

  6. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression, as the information in every pixel is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to a 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposed method. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage, as it deals with multiple subslices. PMID:24848945

  7. Factors modulating effective chest compressions in the neonatal period.

    PubMed

    Mildenhall, Lindsay F J; Huynh, Trang K

    2013-12-01

    The need for chest compressions in the newborn is a rare occurrence. The methods employed for delivery of chest compressions have been poorly researched. Techniques that have been studied include compression:ventilation ratios, thumb versus finger method of delivering compressions, depth of compression, site on chest of compression, synchrony or asynchrony of breaths with compressions, and modalities to improve the compression technique and consistency. Although still in its early days, an evidence-based guideline for chest compressions is beginning to take shape. PMID:23920076

  8. Insertion Profiles of 4 Headless Compression Screws

    PubMed Central

    Hart, Adam; Harvey, Edward J.; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A.

    2013-01-01

    Purpose In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. Methods The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. Results The peak compression occurs at an insertion depth of −3.1 mm, −2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of −2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2N, 0.233 ± 0.010 Nm for the Synthes headless compression screws. Conclusions All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of −2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Clinical relevance Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws

  9. Hyperelastic Material Properties of Mouse Skin under Compression.

    PubMed

    Wang, Yuxiang; Marshall, Kara L; Baba, Yoshichika; Gerling, Gregory J; Lumpkin, Ellen A

    2013-01-01

    The skin is a dynamic organ whose complex material properties are capable of withstanding continuous mechanical stress while accommodating insults and organism growth. Moreover, synchronized hair cycles, comprising waves of hair growth, regression and rest, are accompanied by dramatic fluctuations in skin thickness in mice. Whether such structural changes alter skin mechanics is unknown. Mouse models are extensively used to study skin biology and pathophysiology, including aging, UV-induced skin damage and somatosensory signaling. As the skin serves a pivotal role in the transfer function from sensory stimuli to neuronal signaling, we sought to define the mechanical properties of mouse skin over a range of normal physiological states. Skin thickness, stiffness and modulus were quantitatively surveyed in adult, female mice (Mus musculus). These measures were analyzed under uniaxial compression, which is relevant for touch reception and compression injuries, rather than tension, which is typically used to analyze skin mechanics. Compression tests were performed with 105 full-thickness, freshly isolated specimens from the hairy skin of the hind limb. Physiological variables included body weight, hair-cycle stage, maturity level, skin site and individual animal differences. Skin thickness and stiffness were dominated by hair-cycle stage at young (6-10 weeks) and intermediate (13-19 weeks) adult ages but by body weight in mature mice (26-34 weeks). Interestingly, stiffness varied inversely with thickness so that hyperelastic modulus was consistent across hair-cycle stages and body weights. By contrast, the mechanics of hairy skin differs markedly with anatomical location. In particular, skin containing fascial structures such as nerves and blood vessels showed significantly greater modulus than adjacent sites. Collectively, this systematic survey indicates that, although its structure changes dramatically throughout adult life, mouse skin at a given location maintains a

  10. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.

  11. Hyperelastic Material Properties of Mouse Skin under Compression

    PubMed Central

    Wang, Yuxiang; Marshall, Kara L.; Baba, Yoshichika; Gerling, Gregory J.; Lumpkin, Ellen A.

    2013-01-01

    The skin is a dynamic organ whose complex material properties are capable of withstanding continuous mechanical stress while accommodating insults and organism growth. Moreover, synchronized hair cycles, comprising waves of hair growth, regression and rest, are accompanied by dramatic fluctuations in skin thickness in mice. Whether such structural changes alter skin mechanics is unknown. Mouse models are extensively used to study skin biology and pathophysiology, including aging, UV-induced skin damage and somatosensory signaling. As the skin serves a pivotal role in the transfer function from sensory stimuli to neuronal signaling, we sought to define the mechanical properties of mouse skin over a range of normal physiological states. Skin thickness, stiffness and modulus were quantitatively surveyed in adult, female mice (Mus musculus). These measures were analyzed under uniaxial compression, which is relevant for touch reception and compression injuries, rather than tension, which is typically used to analyze skin mechanics. Compression tests were performed with 105 full-thickness, freshly isolated specimens from the hairy skin of the hind limb. Physiological variables included body weight, hair-cycle stage, maturity level, skin site and individual animal differences. Skin thickness and stiffness were dominated by hair-cycle stage at young (6–10 weeks) and intermediate (13–19 weeks) adult ages but by body weight in mature mice (26–34 weeks). Interestingly, stiffness varied inversely with thickness so that hyperelastic modulus was consistent across hair-cycle stages and body weights. By contrast, the mechanics of hairy skin differs markedly with anatomical location. In particular, skin containing fascial structures such as nerves and blood vessels showed significantly greater modulus than adjacent sites. Collectively, this systematic survey indicates that, although its structure changes dramatically throughout adult life, mouse skin at a given location

  12. Compressed bitmap indices for efficient query processing

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2001-09-30

    Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiencies of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression; during query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by trading some compression effectiveness for improved operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster and use only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
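
The trade-off described is easiest to see with run-length coding, the building block underlying byte- and word-aligned bitmap codes. This toy version works on a Python list of bits rather than aligned machine words, and is only a sketch of the idea, not the paper's scheme:

```python
def rle_compress(bits):
    """Run-length code a non-empty bitmap: runs of identical bits
    become (value, length) pairs. Bitmap-index columns are typically
    sparse, so runs are long and the encoding is compact."""
    runs = []
    prev, count = bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def rle_decompress(runs):
    out = []
    for val, count in runs:
        out.extend([val] * count)
    return out

bits = [1] * 100 + [0] * 50 + [1] * 3
runs = rle_compress(bits)
```

Logical operations such as AND and OR can be performed directly on the run representation, which is what makes aligned run-length codes fast at query time even when they compress less tightly than general-purpose coders.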

  13. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods-JPEG, JPEG2000, and HEVC. PMID:27214878
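
A generic Plug-and-Play ADMM iteration of the kind described, with the linearized compression-decompression process written as an operator $A$, the compressed observation as $y$, and a Gaussian denoiser $\mathcal{D}_\sigma$ standing in for the regularization step (the notation is ours, not the paper's):

```latex
x^{k+1} = \arg\min_x \tfrac{1}{2}\|Ax - y\|_2^2
          + \tfrac{\rho}{2}\|x - (v^k - u^k)\|_2^2, \\
v^{k+1} = \mathcal{D}_\sigma\!\big(x^{k+1} + u^k\big), \\
u^{k+1} = u^k + x^{k+1} - v^{k+1}.
```

The first step is a quadratic data-fit solve against the linearized codec, and the second is exactly the "sequence of Gaussian denoising steps" the abstract refers to.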

  14. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  15. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.

  16. An efficient medical image compression scheme.

    PubMed

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen

    2005-01-01

    In this paper, a fast lossless compression scheme is presented for the medical image. This scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method for images can be obtained. At the same time, this method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression. PMID:17280962
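
The first-stage DPCM decorrelation is simple to sketch for a single scan line. The first-order predictor below (each pixel predicted by its left neighbour) is an assumption for illustration; the paper does not specify its predictor here:

```python
import numpy as np

def dpcm_encode(row):
    """First-order DPCM: store the first sample, then each sample's
    difference from its left neighbour. Residuals cluster near zero,
    which makes the subsequent Huffman stage far more effective."""
    row = np.asarray(row, dtype=np.int32)
    residual = np.empty_like(row)
    residual[0] = row[0]
    residual[1:] = row[1:] - row[:-1]
    return residual

def dpcm_decode(residual):
    """Invert the differencing by a cumulative sum: lossless round trip."""
    return np.cumsum(residual)

row = [100, 102, 101, 105]
res = dpcm_encode(row)
rec = dpcm_decode(res)
```

The residuals (here 2, -1, 4 after the initial sample) have a much narrower distribution than the raw pixel values, which is exactly what a Huffman table exploits.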

  17. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
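
The core of the first application, solving isentropic flow and normal shocks, comes down to the standard compressible-flow relations for a perfect gas. A minimal sketch (the function names are ours, not the application's):

```python
import math

def isentropic_ratios(M, gamma=1.4):
    """Stagnation-to-static ratios T0/T, p0/p, rho0/rho for isentropic
    flow of a perfect gas at Mach number M."""
    f = 1.0 + 0.5 * (gamma - 1.0) * M * M
    return f, f ** (gamma / (gamma - 1.0)), f ** (1.0 / (gamma - 1.0))

def normal_shock_mach(M1, gamma=1.4):
    """Mach number downstream of a normal shock with upstream Mach M1 >= 1."""
    num = 1.0 + 0.5 * (gamma - 1.0) * M1 * M1
    den = gamma * M1 * M1 - 0.5 * (gamma - 1.0)
    return math.sqrt(num / den)

T0_T, p0_p, rho0_rho = isentropic_ratios(2.0)  # air at Mach 2
M2 = normal_shock_mach(2.0)
```

For air at Mach 2 these give the familiar tabulated values T0/T = 1.8, p0/p = 7.82, and a post-shock Mach number of about 0.577; oblique-shock and ramp solutions in the second and third applications build on the same relations via the shock angle.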

  18. Compressive Hyperspectral Imaging With Side Information

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Tsai, Tsung-Han; Zhu, Ruoyu; Llull, Patrick; Brady, David; Carin, Lawrence

    2015-09-01

    A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally-compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary in situ from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.

  19. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.

  20. Application of PCA-based data compression in the ANN-supported conceptual cost estimation of residential buildings

    NASA Astrophysics Data System (ADS)

    Juszczyk, Michał

    2016-06-01

    The paper concisely presents research results on the application of principal component analysis (PCA) for data compression, with the compressed data used as the variables describing the model in conceptual cost estimation of residential buildings. The goal of the research was to investigate the possibility of using compressed input data in neural modelling of construction cost, the inputs being the basic information about residential buildings available in the early stage of design. The results for chosen neural networks trained with the compressed input data are presented in the paper. In the summary, the results obtained for the neural networks with PCA-based data compression are compared with the results obtained for the network committees in the previous stage of the research.
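
    The abstract does not give implementation details; as a rough sketch of the PCA compression step, the dependency-free code below extracts the leading principal components by power iteration and projects the (hypothetical) building descriptors onto them:

```python
def pca_compress(data, k=1, iters=300):
    """Project rows of `data` onto the top-k principal components (power iteration)."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - mean[j] for j in range(d)] for row in data]      # center
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]                     # covariance
    comps = []
    for _ in range(k):
        v = [1.0] * d
        for _ in range(iters):                                      # power iteration
            w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        lam = sum(v[a] * sum(C[a][b] * v[b] for b in range(d)) for a in range(d))
        C = [[C[a][b] - lam * v[a] * v[b] for b in range(d)] for a in range(d)]  # deflate
        comps.append(v)
    # compressed representation: k scores per row instead of d raw features
    scores = [[sum(X[i][j] * c[j] for j in range(d)) for c in comps] for i in range(n)]
    return mean, comps, scores
```

The scores (a few numbers per building instead of the full descriptor vector) would then serve as the neural network inputs.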

  1. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. 
At high coverage (50-100 fold), BARCODE saves 80-90% of the running time compared to the best tested compressors.
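
    The encode/decode asymmetry described above can be sketched with a toy Bloom filter; the reference, read length, and filter parameters below are invented for illustration, and BARCODE's cascade and final encoding are not reproduced:

```python
import hashlib

class Bloom:
    """Minimal Bloom filter: k hash positions per item in an m-bit array."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)
    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

# "Encode": hash every read into the filter -- no alignment step.
reference = "ACGTACGGTTACGATTACCGGA"   # toy reference genome
L = 6                                  # toy read length
reads = [reference[i:i + L] for i in (0, 4, 9, 14)]
bf = Bloom(m=512, k=4)
for r in reads:
    bf.add(r)

# "Decode": query every read-length window of the reference against the filter.
recovered = {reference[i:i + L] for i in range(len(reference) - L + 1)
             if reference[i:i + L] in bf}
assert set(reads) <= recovered
```

Reads are recovered by sliding a read-length window along the reference; any false positives would be caught by the further filters in the cascade.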

  2. Energy requirements for quantum data compression and 1-1 coding

    SciTech Connect

    Rallan, Luke; Vedral, Vlatko

    2003-10-01

    By looking at quantum data compression in the second quantization, we present a model for the efficient generation and use of variable length codes. In this picture, lossless data compression can be seen as the minimum energy required to faithfully represent or transmit classical information contained within a quantum state. In order to represent information, we create quanta in some predefined modes (i.e., frequencies) prepared in one of the two possible internal states (the information carrying degrees of freedom). Data compression is now seen as the selective annihilation of these quanta, the energy of which is effectively dissipated into the environment. As any increase in the energy of the environment is intricately linked to any information loss and is subject to Landauer's erasure principle, we use this principle to distinguish lossless and lossy schemes and to suggest bounds on the efficiency of our lossless compression protocol. In line with the work of Bostroem and Felbinger [Phys. Rev. A 65, 032313 (2002)], we also show that when using variable length codes the classical notions of prefix or uniquely decipherable codes are unnecessarily restrictive given the structure of quantum mechanics and that a 1-1 mapping is sufficient. In the absence of this restraint, we translate existing classical results on 1-1 coding to the quantum domain to derive a new upper bound on the compression of quantum information. Finally, we present a simple quantum circuit to implement our scheme.

  3. Shock Compression of Liquid Hydrazine.

    NASA Astrophysics Data System (ADS)

    Voskoboinikov, I. M.

    1999-06-01

    The possibility of calculating the parameters of shock compression of liquid hydrazine within the framework of the proposed schemes is shown. When the mass velocity behind the shock front does not exceed 3.1 km/s, the calculation can be carried out under the assumption that the initial compound (hydrazine) is retained behind the shock front. The detonation velocities of hydrazine solutions with nitromethane and hydrazine nitrate correspond to the decomposition of hydrazine into ammonia and nitrogen, which is accompanied by a noticeable energy release. The estimates performed demonstrate the possibility of detonation of liquid hydrazine at a velocity of 8 km/s, during which the heating of the substance behind the shock front (approximately 2000 K) is comparable with that observed during the detonation of liquid explosives. Large values of the critical detonation diameter are expected because the activation energy of hydrazine decomposition is 53.2 kcal/mol; these values decrease upon addition of a certain amount of liquid explosives, whose more rapid decomposition behind the shock front raises the temperature sufficiently to decompose the hydrazine.

  4. Compressive sensing for nuclear security.

    SciTech Connect

    Gestner, Brian Joseph

    2013-12-01

    Special nuclear material (SNM) detection has applications in nuclear material control, treaty verification, and national security. The neutron and gamma-ray radiation signature of SNMs can be indirectly observed in scintillator materials, which fluoresce when exposed to this radiation. A photomultiplier tube (PMT) coupled to the scintillator material is often used to convert this weak fluorescence to an electrical output signal. The fluorescence produced by a neutron interaction event differs from that of a gamma-ray interaction event, leading to a slightly different pulse in the PMT output signal. The ability to distinguish between these pulse types, i.e., pulse shape discrimination (PSD), has enabled applications such as neutron spectroscopy, neutron scatter cameras, and dual-mode neutron/gamma-ray imagers. In this research, we explore the use of compressive sensing to guide the development of novel mixed-signal hardware for PMT output signal acquisition. Effectively, we explore smart digitizers that extract sufficient information for PSD while requiring a considerably lower sample rate than conventional digitizers. Given that we determine the feasibility of realizing these designs in custom low-power analog integrated circuits, this research enables the incorporation of SNM detection into wireless sensor networks.

  5. Spectral compression of single photons

    NASA Astrophysics Data System (ADS)

    Lavoie, J.; Donohue, J. M.; Wright, L. G.; Fedrizzi, A.; Resch, K. J.

    2013-05-01

    Photons are critical to quantum technologies because they can be used for virtually all quantum information tasks, for example, in quantum metrology, as the information carrier in photonic quantum computation, as a mediator in hybrid systems, and to establish long-distance networks. The physical characteristics of photons in these applications differ drastically; spectral bandwidths span 12 orders of magnitude from 50 THz (ref. 6) for quantum-optical coherence tomography to 50 Hz for certain quantum memories. Combining these technologies requires coherent interfaces that reversibly map centre frequencies and bandwidths of photons to avoid excessive loss. Here, we demonstrate bandwidth compression of single photons by a factor of 40 as well as tunability over a range 70 times that bandwidth via sum-frequency generation with chirped laser pulses. This constitutes a time-to-frequency interface for light capable of converting time-bin to colour entanglement, and enables ultrafast timing measurements. It is a step towards arbitrary waveform generation for single and entangled photons.

  6. PHELIX for flux compression studies

    SciTech Connect

    Turchi, Peter J; Rousculp, Christopher L; Reinovsky, Robert E; Reass, William A; Griego, Jeffrey R; Oro, David M; Merrill, Frank E

    2010-06-28

    PHELIX (Precision High Energy-density Liner Implosion eXperiment) is a concept for studying electromagnetic implosions using proton radiography. This approach requires a portable pulsed power and liner implosion apparatus that can be operated in conjunction with an 800 MeV proton beam at the Los Alamos Neutron Science Center. The high resolution (< 100 micron) provided by proton radiography combined with similar precision of liner implosions driven electromagnetically can permit close comparisons of multi-frame experimental data and numerical simulations within a single dynamic event. To achieve a portable implosion system for use at high energy-density in a proton laboratory area requires sub-megajoule energies applied to implosions only a few cms in radial and axial dimension. The associated inductance changes are therefore relatively modest, so a current step-up transformer arrangement is employed to avoid excessive loss to parasitic inductances that are relatively large for low-energy banks comprising only several capacitors and switches. We describe the design, construction and operation of the PHELIX system and discuss application to liner-driven, magnetic flux compression experiments. For the latter, the ability of strong magnetic fields to deflect the proton beam may offer a novel technique for measurement of field distributions near perturbed surfaces.

  7. Artificial Compressibility with Entropic Damping

    NASA Astrophysics Data System (ADS)

    Clausen, Jonathan; Roberts, Scott

    2012-11-01

    Artificial Compressibility (AC) methods relax the strict incompressibility constraint associated with the incompressible Navier-Stokes equations. Instead, they rely on an artificial equation of state relating pressure and density fluctuations through a numerical Mach number. Such methods are not new: the first AC methods date back to Chorin (1967). More recent applications can be found in the lattice-Boltzmann method, which is a kinetic/mesoscopic method that converges to an AC form of the Navier-Stokes equations. With computing hardware trending towards massively parallel architectures in order to achieve high computational throughput, AC style methods have become attractive due to their local information propagation and concomitant parallelizable algorithms. In this work, we examine a damped form of AC in the context of finite-difference and finite-element methods, with a focus on achieving time-accurate simulations. Also, we comment on the scalability of the various algorithms. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. The compression pathway of quartz

    SciTech Connect

    Thompson, Richard M.; Downs, Robert T.; Dera, Przemyslaw

    2011-11-07

    The structure of quartz over the temperature domain (298 K, 1078 K) and pressure domain (0 GPa, 20.25 GPa) is compared to the following three hypothetical quartz crystals: (1) Ideal {alpha}-quartz with perfectly regular tetrahedra and the same volume and Si-O-Si angle as its observed equivalent (ideal {beta}-quartz has Si-O-Si angle fixed at 155.6{sup o}). (2) Model {alpha}-quartz with the same Si-O-Si angle and cell parameters as its observed equivalent, derived from ideal by altering the axial ratio. (3) BCC quartz with a perfectly body-centered cubic arrangement of oxygen anions and the same volume as its observed equivalent. Comparison of experimental data recorded in the literature for quartz with these hypothetical crystal structures shows that quartz becomes more ideal as temperature increases, more BCC as pressure increases, and that model quartz is a very good representation of observed quartz under all conditions. This is consistent with the hypothesis that quartz compresses through Si-O-Si angle-bending, which is resisted by anion-anion repulsion resulting in increasing distortion of the c/a axial ratio from ideal as temperature decreases and/or pressure increases.

  9. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.

  10. Static Compression of Tetramethylammonium Borohydride

    SciTech Connect

    Dalton, Douglas Allen; Somayazulu, M.; Goncharov, Alexander F.; Hemley, Russell J.

    2011-11-15

    Raman spectroscopy and synchrotron X-ray diffraction are used to examine the high-pressure behavior of tetramethylammonium borohydride (TMAB) to 40 GPa at room temperature. The measurements reveal weak pressure-induced structural transitions around 5 and 20 GPa. Rietveld analysis and Le Bail fits of the powder diffraction data based on known structures of tetramethylammonium salts indicate that the transitions are mediated by orientational ordering of the BH{sub 4}{sup -} tetrahedra followed by tilting of the (CH{sub 3}){sub 4}N{sup +} groups. X-ray diffraction patterns obtained during pressure release suggest reversibility with a degree of hysteresis. Changes in the Raman spectrum confirm that these transitions are not accompanied by bonding changes between the two ionic species. At ambient conditions, TMAB does not possess dihydrogen bonding, and Raman data confirms that this feature is not activated upon compression. The pressure-volume equation of state obtained from the diffraction data gives a bulk modulus [K{sub 0} = 5.9(6) GPa, K'{sub 0} = 9.6(4)] slightly lower than that observed for ammonia borane. Raman spectra obtained over the entire pressure range (spanning over 40% densification) indicate that the intramolecular vibrational modes are largely coupled.

  11. Shock compression profiles in ceramics

    SciTech Connect

    Grady, D.E.; Moody, R.L.

    1996-03-01

    An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary and struck by plates of either a similar ceramic or a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al{sub 2}O{sub 3}, AlN, B{sub 4}C, SiC, Si{sub 3}N{sub 4}, TiB{sub 2}, WC and ZrO{sub 2}. This report compiles the VISAR wave profiles and experimental impact parameters into a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation in high-strength, brittle solids.

  12. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, and data dependence between, the sample values within a block of compressed data introduce an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.
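
    The error-multiplication effect, and the role of re-initialization points in containing it, can be demonstrated with a toy DPCM stream (illustrative only):

```python
def encode(samples, block=4):
    """DPCM with a re-initialization point (absolute value) at each block start."""
    return [s if i % block == 0 else s - samples[i - 1]
            for i, s in enumerate(samples)]

def decode(stream, block=4):
    out = []
    for i, v in enumerate(stream):
        out.append(v if i % block == 0 else out[-1] + v)
    return out

samples = [10, 12, 11, 13, 9, 8, 10, 11]
enc = encode(samples)
corrupted = list(enc)
corrupted[1] += 5           # one communication error in a residual...
bad = decode(corrupted)     # ...corrupts every sample up to the next re-init point
assert decode(enc) == samples
```

Shorter blocks bound the error propagation but raise the data volume, which is exactly the tradeoff described above.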

  13. Key importance of compression properties in the biophysical characteristics of hyaluronic acid soft-tissue fillers.

    PubMed

    Gavard Molliard, Samuel; Albert, Séverine; Mondon, Karine

    2016-08-01

    Hyaluronic acid (HA) soft-tissue fillers are the most popular degradable injectable products used for correcting skin depressions and restoring facial volume loss. From a rheological perspective, HA fillers are commonly characterised through their viscoelastic properties under shear-stress. However, despite the continuous mechanical pressure that the skin applies on the fillers, compression properties in static and dynamic modes are rarely considered. In this article, three different rheological tests (shear-stress test and compression tests in static and dynamic mode) were carried out on nine CE-marked cross-linked HA fillers. Corresponding shear-stress (G', tanδ) and compression (E', tanδc, normal force FN) parameters were measured. We show here that the tested products behave differently under shear-stress and under compression even though they are used for the same indications. G' showed the expected influence on the tissue volumising capacity, and the same influence was also observed for the compression parameters E'. In conclusion, HA soft-tissue fillers exhibit widely different biophysical characteristics and many variables contribute to their overall performance. The elastic modulus G' is not the only critical parameter to consider amongst the rheological properties: the compression parameters E' and FN also provide key information, which should be taken into account for a better prediction of clinical outcomes, especially for predicting the volumising capacity and probably the ability to stimulate collagen production by fibroblasts. PMID:27093589

  14. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  15. Multishock Compression Properties of Warm Dense Argon.

    PubMed

    Zheng, Jun; Chen, Qifeng; Gu, Yunjun; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm(3) from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density after the first to fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock states across the different experimental conditions: ηi' increases with pressure in the lower-density regime and, conversely, decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the present multishock compression states of argon lie in the warm dense regime. PMID:26515505
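
    A quick sketch of the two ratios, using made-up densities (the abstract does not state ρ0, so the initial density below is an assumption):

```python
# Illustrative values only: rho0 is assumed (the abstract does not state it),
# and the shocked densities are chosen within the reported 1.9-5.3 g/cm^3 range.
rho0 = 0.6                    # assumed initial argon density, g/cm^3
rho = [1.9, 3.2, 4.4, 5.3]    # hypothetical first- to fourth-shock densities

eta = [r / rho0 for r in rho]  # cumulative ratio eta_i = rho_i / rho_0
eta_rel = [rho[0] / rho0] + [rho[i] / rho[i - 1] for i in range(1, len(rho))]
# relative ratio eta_i' = rho_i / rho_(i-1) shrinks shock by shock
```

With these numbers the cumulative ratio climbs from about 3.2 to about 8.8 while each successive relative ratio is smaller than the last, matching the qualitative trend reported above.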

  16. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

    Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level of detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an {Omicron}(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  17. Aerodynamics inside a rapid compression machine

    SciTech Connect

    Mittal, Gaurav; Sung, Chih-Jen

    2006-04-15

    The aerodynamics inside a rapid compression machine after the end of compression is investigated using planar laser-induced fluorescence (PLIF) of acetone. To study the effect of reaction chamber configuration on the resulting aerodynamics and temperature field, experiments are conducted and compared using a creviced piston and a flat piston under varying conditions. Results show that the flat piston design leads to significant mixing of the cold vortex with the hot core region, which causes alternating hot and cold regions inside the combustion chamber. At higher pressures, the effect of the vortex is reduced. The creviced piston head configuration is demonstrated to result in a drastic reduction of the effect of the vortex. Experimental conditions are also simulated using the Star-CD computational fluid dynamics package. Computed results closely match experimental observations. Numerical results indicate that with a flat piston design, gas velocity after compression is very high and the core region shrinks quickly due to rapid entrainment of cold gases, whereas for a creviced piston head design, gas velocity after compression is significantly lower and the core region remains unaffected for a long duration. As a consequence, for the flat piston, the adiabatic core assumption can significantly overpredict the maximum temperature after the end of compression. For the creviced piston, the adiabatic core assumption is found to be valid even up to 100 ms after compression. This work therefore experimentally and numerically substantiates the importance of piston head design for achieving a homogeneous core region inside a rapid compression machine. (author)

  18. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
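
    The FL compressor's exact predictor is not reproduced here; as a loose illustration of adaptive-filtering-based lossless prediction, the toy below uses a sign-LMS linear predictor whose decoder mirrors the encoder exactly, so reconstruction is bit-exact:

```python
import math

def lms_encode(samples, mu=0.001, order=3):
    """Adaptive linear predictor (sign-LMS): emit integer prediction residuals."""
    w, hist, res = [0.0] * order, [0.0] * order, []
    for s in samples:
        pred = round(sum(wi * hi for wi, hi in zip(w, hist)))
        e = s - pred
        res.append(e)
        sgn = (e > 0) - (e < 0)
        w = [wi + mu * sgn * hi for wi, hi in zip(w, hist)]  # adapt the weights
        hist = [float(s)] + hist[:-1]
    return res

def lms_decode(res, mu=0.001, order=3):
    """Run the identical predictor on the reconstructed samples."""
    w, hist, out = [0.0] * order, [0.0] * order, []
    for e in res:
        pred = round(sum(wi * hi for wi, hi in zip(w, hist)))
        s = pred + e
        out.append(s)
        sgn = (e > 0) - (e < 0)
        w = [wi + mu * sgn * hi for wi, hi in zip(w, hist)]
        hist = [float(s)] + hist[:-1]
    return out

signal = [100 + round(20 * math.sin(i / 5)) for i in range(200)]
assert lms_decode(lms_encode(signal)) == signal  # lossless round trip
```

In the real compressor the (much more refined) prediction runs across spectral bands and the residuals are then entropy-coded; it is this per-sample, data-parallel structure that maps well onto a GPU.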

  19. Segmentation-based CT image compression

    NASA Astrophysics Data System (ADS)

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya

    2004-04-01

    The existing image compression standards, such as JPEG and JPEG 2000, compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, such as medical image compression. If the spatial characteristics of the image are considered, a more efficient coding scheme can be devised. For example, CT reconstructed images have a uniform background outside the field of view (FOV). Even the portion within the FOV can be divided into anatomically relevant and irrelevant parts. These regions have distinctly different statistics, so coding them separately results in more efficient compression. Segmentation is done by thresholding, and shape information is stored using an 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the first-order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and decoding speed, Huffman coding is chosen for entropy coding. Segment-based coding carries an overhead of one table per segment, but the overhead is minimal. Lossless compression based on segmentation reduced the bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same prediction coder. The segmentation-based scheme also has the advantage of natural ROI-based progressive decoding. If the diagnostically irrelevant portions may be deleted, the bit budget can drop by as much as 40%. This concept can be extended to other modalities.
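
    The benefit of coding segments separately is easy to reproduce on a toy scanline (values invented): the length-weighted entropy of the segments sits well below the first-order entropy of the whole line.

```python
from math import log2
from collections import Counter

def entropy(vals):
    """First-order (empirical) entropy in bits per symbol."""
    n = len(vals)
    return -sum((c / n) * log2(c / n) for c in Counter(vals).values())

# Toy CT scanline: uniform background (0) outside the FOV, varied values inside.
scanline = [0] * 40 + [50, 52, 55, 53, 51, 60, 62, 58, 57, 61] * 2 + [0] * 40

fov = [v for v in scanline if v > 0]   # threshold-based segmentation
bg = [v for v in scanline if v == 0]

whole = entropy(scanline)
# length-weighted entropy when each segment is coded separately
split = (len(fov) * entropy(fov) + len(bg) * entropy(bg)) / len(scanline)
assert split < whole
```

The constant background segment has zero entropy, so nearly all the coding budget goes to the anatomically relevant part.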

  20. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data. PMID:16948299
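    Combinatorial (enumerative) coding of a fixed-weight binary block can be illustrated directly; this sketch ranks a block among all blocks with the same number of ones (the underlying idea only, not the C4 codec itself):

```python
import math

def enumerative_rank(bits):
    """Lexicographic rank of a binary block among all blocks having the
    same length and the same number of ones (enumerative coding)."""
    n, k, rank = len(bits), sum(bits), 0
    for i, b in enumerate(bits):
        if b == 1:
            # count the blocks that put a 0 here and all remaining ones later
            rank += math.comb(n - i - 1, k)
            k -= 1
    return rank

def enumerative_unrank(n, k, rank):
    """Inverse mapping: recover the block from (n, k, rank)."""
    bits = []
    for i in range(n):
        c = math.comb(n - i - 1, k) if k > 0 else 0
        if k > 0 and rank >= c:
            bits.append(1)
            rank -= c
            k -= 1
        else:
            bits.append(0)
    return bits

block = [0, 1, 0, 0, 1, 1, 0, 0]
n, k = len(block), sum(block)
rank = enumerative_rank(block)
# the rank fits in ceil(log2(C(n, k))) bits -- here 6 bits instead of 8
payload_bits = math.ceil(math.log2(math.comb(n, k)))
assert enumerative_unrank(n, k, rank) == block  # lossless round trip
print(rank, payload_bits)
```

    Transmitting the weight plus the rank approaches the entropy bound like arithmetic coding, yet decoding is simple table-free arithmetic, which is the speed/efficiency trade-off the abstract highlights.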

  1. Electrorheological fluid under elongation, compression, and shearing

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Meng, Y.; Mao, H.; Wen, S.

    2002-03-01

    Electrorheological (ER) fluid based on zeolite and silicone oil under elongation, compression, and shearing was investigated at room temperature. Dc electric fields were applied to the ER fluid while elongation and compression were carried out on a self-constructed test system. The shear yield stress, representing the macroscopic interactions of particles in the ER fluid along the direction of shearing and perpendicular to the direction of the electric field, was also obtained with a HAAKE RV20 rheometer. The tensile yield stress, representing the macroscopic interactions of particles in the ER fluid along the direction of the electric field, was obtained as the peak value in the elongation curve at an elongation yield strain of 0.15-0.20. A shear yield angle of about 15°-18.5° reasonably connected tensile yield stress with shear yield stress, agreeing well with the shear yield angle measured by other researchers. The compression tests showed that the ER fluid has a high compressive modulus at small compressive strains below 0.1. The compressive stress has an exponential relationship with the compressive strain when the strain is higher than 0.1, and it is much higher than the shear yield stress.

  2. MAFCO: A Compression Tool for MAF Files

    PubMed Central

    Matos, Luís M. O.; Neves, António J. R.; Pratas, Diogo; Pinho, Armando J.

    2015-01-01

    In the last decade, the cost of genomic sequencing has been decreasing so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be efficiently stored, because storage cost is not decreasing as fast as the cost of sequencing. In order to overcome this problem, general-purpose compression tools, most popularly gzip, are usually used. However, these tools were not specifically designed to compress this kind of data, and often fall short when the intention is to reduce the data size as much as possible. There are several compression algorithms available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, containing alignments between entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from 34% to 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source-code and binaries for several operating systems are freely available for non-commercial use at: http://bioinformatics.ua.pt/software/mafco. PMID:25816229

  3. Compression of Space for Low Visibility Probes

    PubMed Central

    Born, Sabine; Krüger, Hannah M.; Zimmermann, Eckart; Cavanagh, Patrick

    2016-01-01

    Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as perisaccadic compression of space (Ross et al., 1997). More recently, we have demonstrated that brief probes are attracted towards a visual reference when followed by a mask, even in the absence of saccades (Zimmermann et al., 2014a). Here, we ask whether spatial compression depends on the transient disruptions of the visual input stream caused by either a mask or a saccade. Both of these degrade the probe visibility but we show that low probe visibility alone causes compression in the absence of any disruption. In a first experiment, we varied the regions of the screen covered by a transient mask, including areas where no stimulus was presented and a condition without masking. In all conditions, we adjusted probe contrast to make the probe equally hard to detect. Compression effects were found in all conditions. To obtain compression without a mask, the probe had to be presented at much lower contrasts than with masking. Comparing mislocalizations at different probe detection rates across masking, saccades and low contrast conditions without mask or saccade, Experiment 2 confirmed this observation and showed a strong influence of probe contrast on compression. Finally, in Experiment 3, we found that compression decreased as probe duration increased both for masks and saccades although here we did find some evidence that factors other than simply visibility as we measured it contribute to compression. Our experiments suggest that compression reflects how the visual system localizes weak targets in the context of highly visible stimuli. PMID:27013989

  4. Data compression for full motion video transmission

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of, the SEI communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally several approaches which could satisfy some of the compression requirements are presented and possible future approaches which show promise for more substantial compression performance improvement are discussed.

  5. Compressible homogeneous shear: Simulation and modeling

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.

    1992-01-01

    The effects of compressibility on turbulence were studied by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and the structure of the rate of strain tensor.

  6. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Lee C. Cadwallader

    2004-09-01

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards: a wide variety of electrical power, pressurized air, and cooling water systems are in use, and there are crane and hoist loads, work at height, and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  7. An efficient compression scheme for bitmap indices

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These results indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices, and B-tree indices.
In addition, we also verified that the average query response time
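    The word-aligned idea can be sketched with a toy encoder (the tuple representation below is an invented simplification of the real bit-packed WAH format): 31-bit groups either survive as literal words or collapse into fill words that count consecutive all-0 or all-1 groups.

```python
def wah_encode(bits):
    """Toy word-aligned run-length encoder: 31-bit groups become either
    a ('literal', group) word or merge into a ('fill', bit, count) word."""
    bits = bits + [0] * (-len(bits) % 31)  # pad to a multiple of 31
    groups = [tuple(bits[i:i + 31]) for i in range(0, len(bits), 31)]
    words = []
    for g in groups:
        if g == (0,) * 31 or g == (1,) * 31:
            fill_bit = g[0]
            if words and words[-1][0] == 'fill' and words[-1][1] == fill_bit:
                words[-1] = ('fill', fill_bit, words[-1][2] + 1)
            else:
                words.append(('fill', fill_bit, 1))
        else:
            words.append(('literal', g))
    return words

# sparse bitmap: long zero runs around a few set bits, typical of a bitmap
# index on a high cardinality attribute
bitmap = [0] * 200 + [1, 0, 1] + [0] * 150
encoded = wah_encode(bitmap)
print(len(bitmap), len(encoded))  # hundreds of bits collapse to 3 words
```

    Because each output unit is exactly one machine word, logical operations can work directly on the compressed form without bit-level unpacking, which is the source of the CPU-friendliness claimed in the abstract.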

  8. Properties of compressible elastica from relativistic analogy.

    PubMed

    Oshri, Oz; Diamant, Haim

    2016-01-21

    Kirchhoff's kinetic analogy relates the deformation of an incompressible elastic rod to the classical dynamics of rigid body rotation. We extend the analogy to compressible filaments and find that the extension is similar to the introduction of relativistic effects into the dynamical system. The extended analogy reveals a surprising symmetry in the deformations of compressible elastica. In addition, we use known results for the buckling of compressible elastica to derive the explicit solution for the motion of a relativistic nonlinear pendulum. We discuss cases where the extended Kirchhoff analogy may be useful for the study of other soft matter systems. PMID:26563905

  9. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Cadwallader, L.C.

    2005-05-15

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards: a wide variety of electrical power, pressurized air, and cooling water systems are in use, and there are crane and hoist loads, work at height, and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  10. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
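    A minimal sketch of the select-and-transmit step, assuming a plain DFT as a stand-in for the log Gabor transformation: only bins whose magnitude clears a threshold are kept, as (log-magnitude, phase) pairs.

```python
import cmath
import math

def dft(x):
    """Plain O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
            for f in range(n)]

# Toy time series: two strong tones plus a very weak third component.
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n)
          + 0.5 * math.sin(2 * math.pi * 12 * t / n)
          + 0.01 * math.sin(2 * math.pi * 27 * t / n) for t in range(n)]

spectrum = dft(signal)
# keep only bins whose magnitude exceeds a threshold, storing the
# log-magnitude and the corresponding phase for each surviving bin
kept = [(f, math.log(abs(c)), cmath.phase(c))
        for f, c in enumerate(spectrum) if abs(c) > 1.0]
print(len(spectrum), len(kept))  # 64 bins in, only the strong tones kept
```

    Most of the spectrum is discarded; the receiver would apply the inverse transform to the surviving (log-magnitude, phase) pairs to reconstruct an approximation of the time series.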

  11. Evolution Of Nonlinear Waves in Compressing Plasma

    SciTech Connect

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  12. Reversible intraframe compression of medical images.

    PubMed

    Roos, P; Viergever, M A; van Dijke, M A; Peters, J H

    1988-01-01

    The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably less for MR images whose noise level is substantially higher. PMID:18230486

  13. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
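    The DCT-plus-quantization-matrix pipeline can be sketched as follows; the matrix Q below is an invented placeholder with coarser steps at higher frequencies, not the patent's perceptually derived matrix:

```python
import math

N = 8

# orthonormal DCT-II basis
def c(k):
    return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

D = [[c(k) * math.cos(math.pi * (2 * i + 1) * k / (2 * N)) for i in range(N)]
     for k in range(N)]

def dct2(block):
    """2-D DCT of an NxN block: D * block * D^T."""
    tmp = [[sum(D[k][i] * block[i][j] for i in range(N)) for j in range(N)]
           for k in range(N)]
    return [[sum(tmp[k][j] * D[l][j] for j in range(N)) for l in range(N)]
            for k in range(N)]

# smooth toy block: a gentle gradient, so energy concentrates at low frequency
block = [[8 * i + j for j in range(N)] for i in range(N)]
coeffs = dct2(block)

# hypothetical quantization matrix: coarser steps at higher frequencies,
# standing in for a luminance/contrast-masking derived matrix
Q = [[1 + 2 * (u + v) for v in range(N)] for u in range(N)]
quantized = [[round(coeffs[u][v] / Q[u][v]) for v in range(N)] for u in range(N)]

zeros = sum(row.count(0) for row in quantized)
print(zeros)  # most high-frequency coefficients quantize to zero
```

    The large runs of zeros after quantization are what the downstream entropy coder exploits; tuning Q per frequency is where the perceptual model enters.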

  14. Compression of a bundle of light rays.

    PubMed

    Marcuse, D

    1971-03-01

    The performance of ray compression devices is discussed on the basis of a phase space treatment using Liouville's theorem. It is concluded that the area in phase space of the input bundle of rays is determined solely by the required compression ratio and possible limitations on the maximum ray angle at the output of the device. The efficiency of tapers and lenses as ray compressors is approximately equal. For linear tapers and lenses the input angle of the useful rays must not exceed the compression ratio. The performance of linear tapers and lenses is compared to a particular ray compressor using a graded refractive index distribution. PMID:20094478

  15. Modulation compression for short wavelength harmonic generation

    SciTech Connect

    Qiang, J.

    2010-01-11

    A laser modulator is used to seed free-electron lasers. In this paper, we propose a scheme to compress the initial laser modulation in the longitudinal phase space by using two opposite-sign bunch compressors and two opposite-sign energy chirpers. This scheme could potentially reduce the initial modulation wavelength by a factor of C and increase the energy modulation amplitude by a factor of C, where C is the compression factor of the first bunch compressor. Such a compressed energy modulation can be directly used to generate short wavelength current modulation with a large bunching factor.

  16. Calculation methods for compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.

    1976-01-01

    Calculation procedures for non-reacting compressible two- and three-dimensional turbulent boundary layers were reviewed. Integral, transformation, and correlation methods, as well as finite difference solutions of the complete boundary layer equations, were summarized. Alternative numerical solution procedures were examined, and both mean field and mean turbulence field closure models were considered. Physics and related calculation problems peculiar to compressible turbulent boundary layers are described. A catalog of available solution procedures of the finite difference, finite element, and method of weighted residuals genre is included. The influence of compressibility, low Reynolds number, wall blowing, and pressure gradient upon mean field closure constants is reported.

  17. Compression of Complex-Valued SAR Imagery

    SciTech Connect

    Eichel P.; Ives, R.W.

    1999-03-03

    Synthetic Aperture Radars are coherent imaging systems that produce complex-valued images of the ground. Because modern systems can generate large amounts of data, there is substantial interest in applying image compression techniques to these products. In this paper, we examine the properties of complex-valued SAR images relevant to the task of data compression. We advocate the use of transform-based compression methods but employ radically different quantization strategies than those commonly used for incoherent optical images. The theory, methodology, and examples are presented.

  18. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  19. Data compression for full motion video transmission

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally several approaches which could satisfy some of the compression requirements are presented and possible future approaches which show promise for more substantial compression performance improvement are discussed.

  20. Analysis of kink band formation under compression

    NASA Technical Reports Server (NTRS)

    Hahn, H. Thomas

    1987-01-01

    The kink band formation in unidirectional composites under compression is analyzed in the present paper. The kinematics of kink band formation is described in terms of a deformation tensor. Equilibrium conditions are then applied to relate the compression load to the deformation of fibers. Since the in situ shear behavior of the matrix resin is not known, an analysis-experiment correlation is used to find the shear failure strain in the kink band. The present analysis thus elucidates the mechanisms and identifies the controlling parameters of compression failure.

  1. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  2. Dependability Improvement for PPM Compressed Data by Using Compression Pattern Matching

    NASA Astrophysics Data System (ADS)

    Kitakami, Masato; Okura, Toshihiro

    Data compression is popularly applied to computer systems and communication systems in order to reduce storage size and communication time, respectively. Since large data are used frequently, string matching for such data takes a long time. If the data are compressed, the time gets much longer because decompression is necessary. Long string matching time makes computer virus scan time longer and seriously affects the security of the data. For this reason, CPM (Compression Pattern Matching) methods for several compression methods have been proposed. This paper proposes a CPM method for PPM which achieves fast virus scan and improves dependability of the compressed data, where PPM is based on a Markov model, uses context information, and achieves a better compression ratio than the Burrows-Wheeler transform and Ziv-Lempel coding. The proposed method encodes the context information, which is generated in the compression process, and appends the encoded data at the beginning of the compressed data as a header. The proposed method uses only the header information. Computer simulation shows that the penalty in compression ratio is less than 5 percent if the order of the PPM is less than 5 and the source file size is more than 1 Mbyte, where the order is the maximum length of the context used in PPM compression. String matching time is independent of the source file size and is very short, less than 0.3 microseconds on the PC used for the simulation.
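    The context information PPM relies on can be illustrated with a minimal order-2 model (invented toy code, not the proposed CPM method): symbol counts conditioned on the two preceding symbols. Real PPM adds escape symbols and fallback to shorter contexts.

```python
from collections import defaultdict

def build_contexts(text, order=2):
    """Count symbol frequencies conditioned on the preceding `order` symbols,
    the statistical core of PPM-style modeling."""
    contexts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(text)):
        contexts[text[i - order:i]][text[i]] += 1
    return contexts

model = build_contexts("abracadabra abracadabra")
# after the context "ab", 'r' is the only symbol ever observed,
# so an entropy coder can encode it in almost zero bits
print(dict(model["ab"]))
```

    Sharply peaked conditional distributions like this one are what gives PPM its compression advantage; the proposed CPM method ships exactly this kind of context table in the file header so that string matching can use it without full decompression.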

  3. 3M Coban 2 Layer Compression Therapy: Intelligent Compression Dynamics to Suit Different Patient Needs

    PubMed Central

    Bernatchez, Stéphanie F.; Tucker, Joseph; Schnobrich, Ellen; Parks, Patrick J.

    2012-01-01

    Problem Chronic venous insufficiency can lead to recalcitrant leg ulcers. Compression has been shown to be effective in healing these ulcers, but most products are difficult to apply and uncomfortable for patients, leading to inconsistent/ineffective clinical application and poor compliance. In addition, compression presents risks for patients with an ankle-brachial pressure index (ABPI) <0.8 because of the possibility of further compromising the arterial circulation. The ABPI is the ratio of systolic leg blood pressure (taken at ankle) to systolic arm blood pressure (taken above elbow, at brachial artery). This is measured to assess a patient's lower extremity arterial perfusion before initiating compression therapy.1 Solution Using materials science, two-layer compression systems with controlled compression and a low profile were developed. These materials allow for a more consistent bandage application with better control of the applied compression, and their low profile is compatible with most footwear, increasing patient acceptance and compliance with therapy. The original 3M™ Coban™ 2 Layer Compression System is suited for patients with an ABPI ≥0.8; 3M™ Coban™ 2 Layer Lite Compression System can be used on patients with ABPI ≥0.5. New Technology Both compression systems are composed of two layers that combine to create an inelastic sleeve conforming to the limb contour to provide a consistent proper pressure profile to reduce edema. In addition, they slip significantly less than other compression products and improve patient daily living activities and physical symptoms. Indications for Use Both compression systems are indicated for patients with venous leg ulcers, lymphedema, and other conditions where compression therapy is appropriate. Caution As with any compression system, caution must be used when mixed venous and arterial disease is present to not induce any damage. These products are not indicated when the ABPI is <0.5. PMID:24527315

  4. Stent Compression in Iliac Vein Compression Syndrome Associated with Acute Ilio-Femoral Deep Vein Thrombosis

    PubMed Central

    Cho, Hun; Kim, Jin Woo; Hong, You Sun; Lim, Sang Hyun

    2015-01-01

    Objective This study was conducted to evaluate stent compression in iliac vein compression syndrome (IVCS) and to identify its association with stent patency. Materials and Methods Between May 2005 and June 2014, after stent placement for the treatment of IVCS with acute ilio-femoral deep vein thrombosis, follow-up CT venography was performed in 48 patients (35 women, 13 men; age range 23-87 years; median age 56 years). Using follow-up CT venography, the degree of the stent compression was calculated and used to divide patients into two groups. Possible factors associated with stent compression and patency were evaluated. The cumulative degree of stent compression and patency rate were analyzed. Results All of the stents used were laser-cut nitinol stents. The proportion of limbs showing significant stent compression was 33%. Fifty-six percent of limbs in the significant stent compression group developed stent occlusion. On the other hand, only 9% of limbs in the insignificant stent compression group developed stent occlusion. Significant stent compression was inversely correlated with stent patency (p < 0.001). The median patency period evaluated with Kaplan-Meier analysis was 20.0 months for patients with significant stent compression. Other factors including gender, age, and type of stent were not correlated with stent patency. Significant stent compression occurred most frequently (87.5%) at the upper end of the stent (ilio-caval junction). Conclusion Significant compression of nitinol stents placed in IVCS highly affects stent patency. Therefore, in order to prevent stent compression in IVCS, nitinol stents with higher radial resistive force may be required. PMID:26175570

  5. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression. PMID:22722754
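    The reported accuracy figures reduce to simple confusion-matrix arithmetic; a sketch with invented counts (not the paper's data):

```python
# Hypothetical reader-study counts: findings judged present/absent on
# compressed images versus the original (uncompressed) interpretation.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of true findings detected
    specificity = tn / (tn + fp)   # fraction of negatives correctly cleared
    return sensitivity, specificity

# e.g. at a hypothetical high compression ratio
sens, spec = sens_spec(tp=45, fn=5, tn=20, fp=0)
print(round(sens, 2), round(spec, 2))  # 0.9 1.0
```

    In the study's terms, sensitivity falling as the compression ratio rises (90 % at 171:1 versus 100 % at 41:1) is the signal that lossy compression has begun to erase diagnostically relevant detail.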

  6. Athena3D: Flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hawley, John; Simon, Jake; Stone, James; Gardiner, Thomas; Teuben, Peter

    2015-05-01

    Written in FORTRAN, Athena3D, based on Athena (ascl:1010.014), is an implementation of a flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics. Features of the Athena3D code include compressible hydrodynamics and ideal MHD in one, two or three spatial dimensions in Cartesian coordinates; adiabatic and isothermal equations of state; 1st, 2nd or 3rd order reconstruction using the characteristic variables; and numerical fluxes computed using the Roe scheme. In addition, it offers the ability to add source terms to the equations and is parallelized based on MPI.

  7. A stable penalty method for the compressible Navier-Stokes equations. 1: Open boundary conditions

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.; Gottlieb, D.

    1994-01-01

    The purpose of this paper is to present asymptotically stable open boundary conditions for the numerical approximation of the compressible Navier-Stokes equations in three spatial dimensions. The treatment uses the conservation form of the Navier-Stokes equations and utilizes linearization and localization at the boundaries based on these variables. The proposed boundary conditions are applied through a penalty procedure, thus ensuring correct behavior of the scheme as the Reynolds number tends to infinity. The versatility of this method is demonstrated for the problem of a compressible flow past a circular cylinder.

  8. Progressive lossless compression of volumetric data using small memory load.

    PubMed

    Klajnsek, Gregor; Zalik, Borut

    2005-06-01

    Nowadays, applications dealing with volumetric datasets, medical applications being a typical representative, have become possible even on low-cost computers due to the rapid increase in computer memory and processing power. However, even today, volumetric datasets pose two considerable problems: slow visualization and large file sizes. While real-time or near real-time volume visualization has recently become possible thanks to significant progress in graphics hardware, volume compression remains a problematic issue. This paper introduces a new method for lossless compression of volumetric datasets based on quadtree encoding. The method consists of three steps: during initialization, a so-called division quadtree is built, whose smallest unit is called a basic macro-block. During the processing phase, a Boolean intersection is built on pairs of quadtrees, and the differences are stored. In the last phase, variable-length encoding is applied to reduce the entropy of the differences. The proposed method supports progressive visualization, which is especially important when transfer through the Internet is needed. To test its efficiency, the method was compared to the popular octree encoding scheme. The results showed that data coherence is exploited more effectively by the proposed quadtree approach. An additional advantage is that the algorithm does not need much memory: only the quadtrees of two consecutive slices need to be loaded at the same time. This feature makes the algorithm attractive for possible hardware implementation.
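
    The slice-pair differencing that keeps the memory load small can be sketched as follows (an illustrative reconstruction on binary slices, not the authors' exact division-quadtree format):

```python
import numpy as np

def quadtree_encode(block):
    """Encode a square boolean block as a quadtree: uniform blocks
    collapse to a single leaf, mixed blocks split into four quadrants."""
    if block.all() or not block.any():
        return ('leaf', bool(block[0, 0]))
    h = block.shape[0] // 2
    quads = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    return ('node', [quadtree_encode(q) for q in quads])

def node_count(tree):
    if tree[0] == 'leaf':
        return 1
    return 1 + sum(node_count(child) for child in tree[1])

# A "busy" slice (checkerboard) and the next slice, which differs
# only in one small corner region.
slice0 = ((np.indices((8, 8)).sum(axis=0) % 2) == 1)
slice1 = slice0.copy()
slice1[6:8, 6:8] = True

# Encoding the XOR of consecutive slices is far cheaper than encoding
# each slice outright, and only two slices are in memory at a time.
n_diff = node_count(quadtree_encode(slice0 ^ slice1))
n_full = node_count(quadtree_encode(slice1))
```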

  9. Seneca Compressed Air Energy Storage (CAES) Project

    SciTech Connect

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  10. Pulse power applications of flux compression generators

    NASA Astrophysics Data System (ADS)

    Fowler, C. M.; Caird, R. S.; Erickson, D. J.; Freeman, B. L.

    Characteristics are presented for two different types of explosive driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources.

  11. Efficient Quantum Information Processing via Quantum Compressions

    NASA Astrophysics Data System (ADS)

    Deng, Y.; Luo, M. X.; Ma, S. Y.

    2016-01-01

    Our purpose is to improve quantum transmission efficiency and reduce resource cost through quantum compression. Lossless quantum compression is accomplished using invertible quantum transformations and applied to quantum teleportation and to simultaneous transmission over quantum butterfly networks. The new schemes can greatly reduce the entanglement cost and partially resolve transmission conflicts over common links. Moreover, the local compression scheme is useful for approximate entanglement creation from pre-shared entanglement, a task not previously addressed because of the quantum no-cloning theorem. Our scheme depends on local quantum compression and bipartite entanglement transfer. Simulations show that the success probability depends strongly on the minimal entanglement coefficient. These results may be useful in general quantum network communication.

  12. Super high compression of line drawing data

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.

    1976-01-01

    Models are described that accurately represent the type of line drawings occurring in teleconferencing and transmission for remote classrooms while permitting considerable data compression. The objective was to encode these pictures in binary sequences of the shortest length such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compression in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but fixed-font symbols can be used for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be realized by taking a similar approach, or essentially zero-error reproducibility can be achieved, but at a lower level of compression.

  13. All about compression: A literature review.

    PubMed

    de Carvalho, Magali Rezende; de Andrade, Isabelle Silveira; de Abreu, Alcione Matos; Leite Ribeiro, Andrea Pinto; Peixoto, Bruno Utzeri; de Oliveira, Beatriz Guitton Renaud Baptista

    2016-06-01

    Lower extremity ulcers represent a significant public health problem as they frequently progress to chronicity, significantly impact daily activities and comfort, and represent a huge financial burden to the patient and the health system. The aim of this review was to discuss the best approach for venous leg ulcers (VLUs). Online searches were conducted in Ovid MEDLINE, Ovid EMBASE, EBSCO CINAHL, and reference lists and official guidelines. Keywords considered for this review were VLU, leg ulcer, varicose ulcer, compressive therapy, compression, and stocking. A complete assessment of the patient's overall health should be performed by a trained practitioner, focusing on history of diabetes mellitus, hypertension, dietetic habits, medications, and practice of physical exercises, followed by a thorough assessment of both legs. Compressive therapy is the gold standard treatment for VLUs, and the ankle-brachial index should be measured in all patients before compression application. PMID:27210451

  14. Compression behavior of unidirectional fibrous composite

    NASA Technical Reports Server (NTRS)

    Sinclair, J. H.; Chamis, C. C.

    1982-01-01

    The longitudinal compression behavior of unidirectional fiber composites is investigated using a modified Celanese test method with thick and thin test specimens. The test data obtained are interpreted using the stress/strain curves from back-to-back strain gages, examination of fracture surfaces by scanning electron microscope, and predictive equations for distinct failure modes including fiber compression failure, Euler buckling, delamination, and flexure. The results show that the longitudinal compression fracture is induced by a combination of delamination, flexure, and fiber tier breaks. No distinct fracture surface characteristics can be associated with unique failure modes. An equation is described which can be used to extract the longitudinal compression strength knowing the longitudinal tensile and flexural strengths of the same composite system.

  15. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte-length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
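
    A minimal sketch of the idea (a toy LZSS with one-byte offsets and lengths; as the method describes, the flag bits are buffered separately and appended after the token stream, so the decompressor needs bit manipulation only in the flag area):

```python
def lzss_compress(data: bytes, window=255, min_match=3, max_match=18):
    """Toy LZSS; flag bits go to a separate buffer appended at the end."""
    flags, tokens = [], bytearray()
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):       # greedy match search
            n = 0
            while (n < max_match and i + n < len(data)
                   and data[j + n] == data[i + n]):
                n += 1
            if n > best_len:
                best_len, best_off = n, i - j
        if best_len >= min_match:
            flags.append(1)
            tokens += bytes([best_off, best_len])    # 2-byte pointer
            i += best_len
        else:
            flags.append(0)
            tokens.append(data[i])                   # literal byte
            i += 1
    flag_bytes = bytearray()                         # pack flags, MSB first
    for k in range(0, len(flags), 8):
        b = 0
        for idx, bit in enumerate(flags[k:k + 8]):
            b |= bit << (7 - idx)
        flag_bytes.append(b)
    header = len(flags).to_bytes(4, 'big') + len(tokens).to_bytes(4, 'big')
    return header + bytes(tokens) + bytes(flag_bytes)

def lzss_decompress(blob: bytes) -> bytes:
    nflags = int.from_bytes(blob[:4], 'big')
    ntok = int.from_bytes(blob[4:8], 'big')
    tokens, flag_bytes = blob[8:8 + ntok], blob[8 + ntok:]
    out, t = bytearray(), 0
    for k in range(nflags):
        if (flag_bytes[k // 8] >> (7 - k % 8)) & 1:  # pointer
            off, length = tokens[t], tokens[t + 1]
            for _ in range(length):
                out.append(out[-off])                # handles overlap
            t += 2
        else:                                        # literal
            out.append(tokens[t])
            t += 1
    return bytes(out)
```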

  16. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.

  17. Compression of digital holographic data: an overview

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Xing, Yafei; Pesquet-Popescu, Beatrice; Schelkens, Peter

    2015-09-01

    Holography has the potential to become the ultimate 3D experience. Nevertheless, in order to achieve practical working systems, major scientific and technological challenges have to be tackled. In particular, as digital holographic data represents a huge amount of information, the development of efficient compression techniques is a key component. This problem has gained significant attention by the research community during the last 10 years. Given that holograms have very different signal properties when compared to natural images and video sequences, existing compression techniques (e.g. JPEG or MPEG) remain suboptimal, calling for innovative compression solutions. In this paper, we will review and analyze past and on-going work for the compression of digital holographic data.

  18. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, Clifford B.; Hackel, Lloyd A.; George, Edward V.; Miller, John L.; Krupke, William F.

    1993-01-01

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).

  19. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, C.B.; Hackel, L.A.; George, E.V.; Miller, J.L.; Krupke, W.F.

    1993-11-09

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44). 5 figs.

  20. Large Hiatal Hernia Compressing the Heart.

    PubMed

    Matar, Andrew; Mroue, Jad; Camporesi, Enrico; Mangar, Devanand; Albrink, Michael

    2016-02-01

    We describe a 41-year-old man with De Mosier's syndrome who presented with exercise intolerance and dyspnea on exertion caused by a giant hiatal hernia compressing the heart with relief by surgical treatment. PMID:26704030

  1. Wavelet transform in electrocardiography--data compression.

    PubMed

    Provazník, I; Kozumplík, J

    1997-06-01

    An application of the wavelet transform to electrocardiography is described in the paper. The transform is used as the first stage of a lossy compression algorithm for efficient coding of resting ECG signals. The proposed technique is based on the decomposition of the ECG signal into a set of basis functions covering the time-frequency domain, so the non-stationary character of ECG data is taken into account. Some of the time-frequency signal components are removed because of their low influence on signal characteristics. The resulting components are efficiently coded by quantization, composition into a sequence of coefficients, and compression by a run-length coder and an entropy (Huffman) coder. The proposed wavelet-based compression algorithm can compress data to an average code length of about 1 bit/sample. The algorithm can also be implemented in a real-time processing system when the wavelet transform is computed by the fast linear filters described in the paper. PMID:9291025
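
    The decompose-threshold-code pipeline can be illustrated with an orthonormal Haar transform (a sketch only; the paper's wavelet, thresholds, and coders are not reproduced here):

```python
import numpy as np

def haar_forward(x, levels=4):
    """Orthonormal multi-level Haar decomposition; returns the coarse
    approximation and the detail bands, coarsest first."""
    details, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details[::-1]

def haar_inverse(a, details):
    for d in details:
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

# Smooth baseline plus a sharp QRS-like spike (synthetic stand-in)
t = np.linspace(0.0, 1.0, 512)
sig = np.sin(2 * np.pi * 3 * t) + 3.0 * np.exp(-((t - 0.5) / 0.01) ** 2)

# Zero out low-magnitude time-frequency components, then reconstruct;
# the long runs of zeros are what a run-length + Huffman stage exploits.
a, ds = haar_forward(sig)
thr = [np.where(np.abs(d) > 0.05, d, 0.0) for d in ds]
rec = haar_inverse(a, thr)
zero_frac = sum(int((d == 0).sum()) for d in thr) / sum(d.size for d in thr)
```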

  2. Relativistic laser pulse compression in magnetized plasmas

    SciTech Connect

    Liang, Yun; Sang, Hai-Bo Wan, Feng; Lv, Chong; Xie, Bai-Song

    2015-07-15

    The self-compression of a weakly relativistic Gaussian laser pulse propagating in a magnetized plasma is investigated. The nonlinear Schrödinger equation, which describes the evolution of the laser pulse amplitude, is deduced and solved numerically. Pulse compression is observed for both left- and right-hand circularly polarized lasers. It is found that the compression velocity increases for left-hand circularly polarized laser fields and decreases for right-hand ones, an effect that is reinforced as the external magnetic field is increased. We find that a 100 fs left-hand circularly polarized laser pulse is compressed by more than ten times in a magnetized (1757 T) plasma medium. The results in this paper indicate the possibility of generating particularly intense and short pulses.

  3. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transmission and storage, so an effective compression method is needed. Through analysis and comparison of current algorithms, a mixed compression algorithm based on prediction, integer wavelet transform, and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We adopted a high-performance digital signal processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm runs much faster on the DSP than on a personal computer. The proposed method achieves near real-time compression with excellent image quality and compression performance.

  4. Compression asphyxia from a human pyramid.

    PubMed

    Tumram, Nilesh Keshav; Ambade, Vipul Namdeorao; Biyabani, Naushad

    2015-12-01

    In compression asphyxia, respiration is stopped by external forces on the body. It is usually due to an external force compressing the trunk, such as a heavy weight on the chest or abdomen, and is associated with internal injuries. In the present case, the victim was trapped and crushed under persons falling from a human pyramid formed for the "Dahi Handi" festival. There was neither any severe blunt force injury nor any significant natural disease contributing to the cause of death. The victim was unable to extricate himself because his cognitive responses and coordination were impaired by alcohol intake. He died from asphyxia due to compression of his chest and abdomen. Compression asphyxia resulting from the collapse of a human pyramid, and the dynamics of its impact force in these circumstances, is very rare and, to the best of our knowledge, has not been reported previously. PMID:26059277

  5. Ramp Compression Experiments - a Sensitivity Study

    SciTech Connect

    Bastea, M; Reisman, D

    2007-02-26

    We present the first sensitivity study of the material isentropes extracted from ramp compression experiments. We perform hydrodynamic simulations of representative experimental geometries associated with ramp compression experiments and discuss the major factors determining the accuracy of the equation of state information extracted from such data. In conclusion, we analyzed both qualitatively and quantitatively the major experimental factors that determine the accuracy of equations of state extracted from ramp compression experiments. Since in actual experiments essentially all the effects discussed here will compound, factoring out individual signatures and magnitudes, as done in the present work, is especially important. This study should provide some guidance for the effective design and analysis of ramp compression experiments, as well as for further improvements of ramp generators performance.

  6. Enhancement and compression of digital chest radiographs.

    PubMed

    Cohn, M; Trefler, M; Young, T Y

    1990-01-01

    The application of digital technologies to chest radiography holds the promise of routine application of image processing techniques to effect image enhancement. Because of their inherent spatial resolution, however, digital chest images impose severe constraints on data storage devices. Compression of these images will relax such constraints and facilitate image transmission on a digital network. We evaluated an algorithm for enhancing digital chest images that allows significant data compression while improving the diagnostic quality of the image. This algorithm is based on the photographic technique of unsharp masking. Image quality was measured with respect to the task of tumor detection, and compression ratios as high as 2:1 were achieved. This compression can be supplemented by irreversible methods. PMID:2299708
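
    Unsharp masking itself is simple to sketch (a generic version using a separable box blur; the paper's exact kernel and gain are not specified here):

```python
import numpy as np

def unsharp_mask(img, radius=2, amount=1.5):
    """Unsharp masking: sharpened = img + amount * (img - blurred),
    here with a separable box blur of width 2*radius + 1."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    pad = np.pad(np.asarray(img, dtype=float), radius, mode='edge')
    # blur rows, then columns ('valid' trims the padding back off)
    blur = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, pad)
    blur = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, blur)
    return img + amount * (img - blur)
```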

  7. 3D MHD Simulations of Spheromak Compression

    NASA Astrophysics Data System (ADS)

    Stuber, James E.; Woodruff, Simon; O'Bryan, John; Romero-Talamas, Carlos A.; Darpa Spheromak Team

    2015-11-01

    The adiabatic compression of compact tori could lead to a compact and hence low cost fusion energy system. The critical scientific issues in spheromak compression relate both to confinement properties and to the stability of the configuration undergoing compression. We present results from the NIMROD code modified with the addition of magnetic field coils that allow us to examine the role of rotation on the stability and confinement of the spheromak (extending prior work for the FRC). We present results from a scan in initial rotation, from 0 to 100 km/s. We show that strong rotational shear (10 km/s over 1 cm) occurs. We compare the simulation results with analytic scaling relations for adiabatic compression. Work performed under DARPA grant N66001-14-1-4044.

  8. Compressive behavior of unidirectional fibrous composites

    NASA Technical Reports Server (NTRS)

    Sinclair, J. H.; Chamis, C. C.

    1983-01-01

    The longitudinal compressive behavior of unidirectional fiber composites was investigated by using the Illinois Institute of Technology Research Institute (IITRI) test method with thick and thin test specimens. The test data obtained are interpreted by means of stress/strain curves from back-to-back strain gages, examination of fracture surfaces by scanning electron microscope, and predictive equations for distinct failure modes including fiber compression failure, Euler buckling, delamination, and flexure. The results show that longitudinal compressive fracture is induced by a combination of delamination, flexure, and fiber tier breaks. No distinct fracture surface characteristics can be associated with unique failure modes. An equation is described that can be used to extract the longitudinal compressive strength from the longitudinal tensile and flexural strengths of the same composite system.

  9. A study of compressibility and compactibility of directly compressible tableting materials containing tramadol hydrochloride.

    PubMed

    Mužíková, Jitka; Kubíčková, Alena

    2016-09-01

    The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50 % concentrations, the lubricant being magnesium stearate in a 1 % concentration. Compressibility is evaluated by means of the energy profile of the compression process, and compactibility by the tensile strength of tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders. PMID:27383891

  10. Progressive compression versus graduated compression for the management of venous insufficiency.

    PubMed

    Shepherd, Jan

    2016-09-01

    Venous leg ulceration (VLU) is a chronic condition associated with chronic venous insufficiency (CVI), where the most frequent complication is recurrence of ulceration after healing. Traditionally, graduated compression therapy has been shown to increase healing rates and also to reduce recurrence of VLU. Graduated compression occurs because the circumference of the limb is narrower at the ankle, thereby producing a higher pressure than at the calf, which is wider, creating a lower pressure. This phenomenon is explained by the principle known as Laplace's Law. Recently, the view that compression therapy must provide a graduated pressure gradient has been challenged. However, few studies so far have focused on the potential benefits of progressive compression where the pressure profile is inverted. This article will examine the contemporary concept that progressive compression may be as effective as traditional graduated compression therapy for the management of CVI. PMID:27594309
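
    The pressure gradient described above follows from Laplace's Law. The bandaging literature commonly quotes a modified form relating sub-bandage pressure to bandage tension, layers, limb circumference, and bandage width (a hedged sketch: the constant 4630 is the unit-conversion factor usually quoted, and the function name and example values are illustrative):

```python
def sub_bandage_pressure(tension_kgf, layers, circumference_cm, width_cm):
    """Modified Laplace formula often quoted for compression bandaging:
    pressure (mmHg) = tension (kgf) * layers * 4630
                      / (limb circumference (cm) * bandage width (cm))."""
    return tension_kgf * layers * 4630.0 / (circumference_cm * width_cm)

# Same bandage at the same tension: the narrower ankle sees a higher
# pressure than the wider calf, i.e. graduated compression by geometry.
p_ankle = sub_bandage_pressure(1.0, 2, 25.0, 10.0)
p_calf = sub_bandage_pressure(1.0, 2, 35.0, 10.0)
```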

  11. Ultralight and highly compressible graphene aerogels.

    PubMed

    Hu, Han; Zhao, Zongbin; Wan, Wubo; Gogotsi, Yury; Qiu, Jieshan

    2013-04-18

    Chemically converted graphene aerogels with ultralight density and high compressibility are prepared by diamine-mediated functionalization and assembly, followed by microwave irradiation. The resulting graphene aerogels with density as low as 3 mg cm(-3) show excellent resilience and can completely recover after more than 90% compression. The ultralight graphene aerogels possessing high elasticity are promising as compliant and energy-absorbing materials. PMID:23418081

  12. Lossy compression of weak lensing data

    SciTech Connect

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M.

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
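
    The square-root idea can be sketched generically (not Bernstein et al.'s exact algorithm: quantizing the variance-stabilized value 2*sqrt(x) with a uniform step q < 1 guarantees the discarded information stays below the Poisson shot noise, whose sigma is sqrt(x)):

```python
import numpy as np

def sqrt_compress(counts, q=0.5):
    """Quantize 2*sqrt(x) uniformly with step q; the decode error is
    at most ~(q/2)*sqrt(x), i.e. below Poisson noise for q < 1."""
    return np.round(2.0 * np.sqrt(counts) / q).astype(np.int32)

def sqrt_decompress(codes, q=0.5):
    return (codes * q / 2.0) ** 2

x = np.linspace(0.0, 60000.0, 1000)   # e.g. 16-bit photon counts
dec = sqrt_decompress(sqrt_compress(x))
```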

  13. Compressibility of Quantum Mixed-State Signals

    SciTech Connect

    Koashi, Masato; Imoto, Nobuyuki

    2001-07-02

    We present a formula that determines the optimal number of qubits per message that allows asymptotically faithful compression of the quantum information carried by an ensemble of mixed states. The set of mixed states determines a decomposition of the Hilbert space into the redundant part and the irreducible part. After removing the redundancy, the optimal compression rate is shown to be given by the von Neumann entropy of the reduced ensemble.
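
    The rate in question is the von Neumann entropy of the (reduced) ensemble; numerically it is straightforward to evaluate (an illustrative sketch for a single-qubit source):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho): the optimal number of qubits per
    message for asymptotically faithful compression."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0*log(0) = 0
    return float(-np.sum(evals * np.log2(evals)))

# A qubit source emitting |0> with p = 0.75 and |1> with p = 0.25
# needs only about 0.81 qubits per message.
rho = np.diag([0.75, 0.25])
rate = von_neumann_entropy(rho)
```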

  14. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
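
    The authors' backward-adaptive spatio-temporal predictor is not reproduced in the abstract; a standard spatial predictor, the median edge detector (MED) from JPEG-LS, illustrates the basic idea of predictive coding, where residuals in smooth regions are small and entropy-code far better than raw pixels:

```python
import numpy as np

def med_residuals(frame):
    """Prediction residuals using the JPEG-LS median edge detector
    (MED): predict each pixel from its left (a), upper (b), and
    upper-left (c) neighbors, then store pixel minus prediction."""
    f = np.asarray(frame, dtype=int)
    res = f.copy()                 # first row/column stay unpredicted
    for y in range(1, f.shape[0]):
        for x in range(1, f.shape[1]):
            a, b, c = f[y, x - 1], f[y - 1, x], f[y - 1, x - 1]
            if c >= max(a, b):
                pred = min(a, b)
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c
            res[y, x] = f[y, x] - pred
    return res
```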

  15. Fracture in compression of brittle solids

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fracture of brittle solids in monotonic compression is reviewed from both the mechanistic and phenomenological points of view. The fundamental theoretical developments based on the extension of pre-existing cracks in general multiaxial stress fields are recognized as explaining extrinsic behavior where a single crack is responsible for the final failure. In contrast, shear faulting in compression is recognized to be the result of an evolutionary localization process involving en echelon action of cracks and is termed intrinsic.

  16. Lossy compression of weak lensing data

    DOE PAGESBeta

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M.

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.

  17. Motion-adaptive compressive coded apertures

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca

    2011-09-01

    This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.

  18. Selfsimilar Spherical Compression Waves in Gas Dynamics

    NASA Astrophysics Data System (ADS)

    Meyer-ter-Vehn, J.; Schalk, C.

    1982-08-01

    A synopsis of different self-similar spherical compression waves is given, pointing out their fundamental importance for the gas dynamics of inertial confinement fusion. Strong blast waves, various forms of isentropic compression waves, imploding shock waves, and the solution for non-isentropic collapsing hollow spheres are included. A classification is given in terms of six singular points which characterise the different solutions and the relations between them. The presentation closely follows Guderley's original work on imploding shock waves.

  19. Method and apparatus for signal compression

    DOEpatents

    Carangelo, Robert M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original.
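
    The cubic-feedback compression can be sketched numerically (illustrative only: y = x - a*x**3 stands in for the analog nonlinear negative feedback, and digital decompression inverts the cubic with Newton's method; the function names and the value of a are assumptions):

```python
def compress_cubic(x, a=0.2):
    """Cubic compressive nonlinearity: y = x - a*x**3, monotonic (and
    hence invertible) for |x| < 1/sqrt(3*a)."""
    return x - a * x ** 3

def decompress_cubic(y, a=0.2, iters=40):
    """Digitally invert the cubic with Newton's method, recovering a
    signal that emulates the original."""
    x = y
    for _ in range(iters):
        x -= (x - a * x ** 3 - y) / (1.0 - 3.0 * a * x ** 2)
    return x
```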

  20. Compressing bitmap indexes for faster search operations

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
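    The word-aligned idea can be illustrated with a toy encoder. This is a simplification of WAH for exposition, not the authors' implementation: the bit vector is cut into (w-1)-bit groups, runs of identical all-0 or all-1 groups collapse into a single "fill" word, and everything else becomes a "literal" word, so bitwise operations can work a machine word at a time.

```python
def wah_encode(bits, w=32):
    # Simplified word-aligned hybrid sketch: groups of w-1 bits become either
    # a fill word (MSB set, next bit = fill value, low bits = run length)
    # or a literal word holding the group verbatim.
    g = w - 1
    groups = [bits[i:i + g] for i in range(0, len(bits), g)]
    words, i = [], 0
    while i < len(groups):
        grp = groups[i]
        if len(grp) == g and len(set(grp)) == 1:        # uniform group: fill
            bit, run = grp[0], 1
            while i + run < len(groups) and groups[i + run] == grp:
                run += 1
            words.append((1 << (w - 1)) | (bit << (w - 2)) | run)
            i += run
        else:                                           # mixed group: literal
            val = 0
            for b in grp:
                val = (val << 1) | b
            words.append(val)
            i += 1
    return words
```

For example, 62 zero bits followed by the pattern 1,0,1 encode into just two 32-bit words: one fill word covering two zero groups and one literal word.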

  1. Digital breast tomosynthesis with minimal breast compression

    NASA Astrophysics Data System (ADS)

    Scaduto, David A.; Yang, Min; Ripton-Snyder, Jennifer; Fisher, Paul R.; Zhao, Wei

    2015-03-01

    Breast compression is utilized in mammography to improve image quality and reduce radiation dose. Lesion conspicuity is improved by reducing scatter effects on contrast and by reducing the superposition of tissue structures. However, patient discomfort due to breast compression has been cited as a potential cause of noncompliance with recommended screening practices. Further, compression may also occlude blood flow in the breast, complicating imaging with intravenous contrast agents and preventing accurate quantification of contrast enhancement and kinetics. Previous studies have investigated reducing breast compression in planar mammography and digital breast tomosynthesis (DBT), though this typically comes at the expense of degradation in image quality or increase in mean glandular dose (MGD). We propose to optimize the image acquisition technique for reduced compression in DBT without compromising image quality or increasing MGD. A zero-frequency signal-difference-to-noise ratio model is employed to investigate the relationship between tube potential, SDNR and MGD. Phantom and patient images are acquired on a prototype DBT system using the optimized imaging parameters and are assessed for image quality and lesion conspicuity. A preliminary assessment of patient motion during DBT with minimal compression is presented.

  2. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.

  3. Compression-sensitive magnetic resonance elastography.

    PubMed

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus. PMID:23852144

  4. Friction of Compression-ignition Engines

    NASA Technical Reports Server (NTRS)

    Moore, Charles S.; Collins, John H., Jr.

    1936-01-01

    The cost in mean effective pressure of generating air flow in the combustion chambers of single-cylinder compression-ignition engines was determined for the prechamber and the displaced-piston types of combustion chamber. For each type a wide range of air-flow quantities, speeds, and boost pressures was investigated. Supplementary tests were made to determine the effect of lubricating-oil temperature, cooling-water temperature, and compression ratio on the friction mean effective pressure of the single-cylinder test engine. Friction curves are included for two 9-cylinder, radial, compression-ignition aircraft engines. The results indicate that generating the optimum forced air flow increased the motoring losses approximately 5 pounds per square inch mean effective pressure regardless of chamber type or engine speed. With a given type of chamber, the rate of increase in friction mean effective pressure with engine speed is independent of the air-flow speed. The effect of boost pressure on the friction cannot be predicted because the friction was decreased, unchanged, or increased depending on the combustion-chamber type and design details. High compression ratio accounts for approximately 5 pounds per square inch mean effective pressure of the friction of these single-cylinder compression-ignition engines. The single-cylinder test engines used in this investigation had a much higher friction mean effective pressure than conventional aircraft engines or than the 9-cylinder, radial, compression-ignition engines tested so that performance should be compared on an indicated basis.

  5. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x-ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
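    The differencing step (without the 3-D registration machinery) can be sketched as follows; `zlib` stands in for the "conventional compression algorithms" mentioned above, and the images here are synthetic. Because the residual between a subject image and a well-registered reference is small and low-entropy, it compresses far better than the subject image alone.

```python
import zlib
import numpy as np

def diff_compress(subject, reference):
    # Difference against the registered reference, then apply a
    # conventional lossless coder to the (low-entropy) residual.
    resid = subject.astype(np.int16) - reference.astype(np.int16)
    return zlib.compress(resid.tobytes(), 9)

def diff_decompress(blob, reference, shape):
    # Decompress the residual and add the reference back to
    # reconstitute the subject image.
    resid = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
    return (resid + reference.astype(np.int16)).astype(np.uint8)
```

With a subject image that differs from the reference only by small perturbations, the residual stream compresses to a fraction of what direct compression of the subject achieves.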

  6. On the Compressibility of Arterial Tissue.

    PubMed

    Nolan, D R; McGarry, J P

    2016-04-01

    Arterial tissue is commonly assumed to be incompressible. While this assumption is convenient for both experimentalists and theorists, the compressibility of arterial tissue has not been rigorously investigated. In the current study we present an experimental-computational methodology to determine the compressibility of aortic tissue and we demonstrate that specimens excised from an ovine descending aorta are significantly compressible. Specimens are stretched in the radial direction in order to fully characterise the mechanical behaviour of the tissue ground matrix. Additionally biaxial testing is performed to fully characterise the anisotropic contribution of reinforcing fibres. Due to the complexity of the experimental tests, which entail non-uniform finite deformation of a non-linear anisotropic material, it is necessary to implement an inverse finite element analysis scheme to characterise the mechanical behaviour of the arterial tissue. Results reveal that ovine aortic tissue is highly compressible; an effective Poisson's ratio of 0.44 is determined for the ground matrix component of the tissue. It is also demonstrated that correct characterisation of material compressibility has important implications for the calibration of anisotropic fibre properties using biaxial tests. Finally it is demonstrated that correct treatment of material compressibility has significant implications for the accurate prediction of the stress state in an artery under in vivo type loading. PMID:26297340

  7. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  8. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  9. Robust retrieval from compressed medical image archives

    NASA Astrophysics Data System (ADS)

    Sidorov, Denis N.; Lerallut, Jean F.; Cocquerez, Jean-Pierre; Azpiroz, Joaquin

    2005-04-01

    This paper addresses the computational aspects of extracting important features directly from compressed images for the purpose of aiding content-based biomedical image retrieval. The proposed method for the treatment of compressed medical archives follows the JPEG compression standard and exploits an algorithm based on spatial analysis of the amplitude and location of the image cosine spectrum coefficients. Experiments on a modality-specific archive of osteoarticular images show the robustness of the method based on measured spectral spatial statistics. The features, which were based on the cosine spectrum coefficients' values, could satisfy different query modalities (MRI, US, etc.) that emphasized texture and edge properties. In particular, it has been shown that there is a wealth of information in the AC coefficients of the DCT transform which can be utilized to support fast content-based image retrieval. The computational cost of the proposed signature generation algorithm is low. The influence of conventional and state-of-the-art compression techniques based on cosine and wavelet integral transforms on the performance of content-based medical image retrieval has also been studied. We found no significant differences in retrieval efficiency between non-compressed and JPEG2000-compressed images, even at the lowest bit rate tested.

  10. Lossless compression of instrumentation data. Final report

    SciTech Connect

    Stearns, S.D.

    1995-11-01

    This is our final report on Sandia National Laboratories Laboratory- Directed Research and Development (LDRD) project 3517.070. Its purpose has been to investigate lossless compression of digital waveform and image data, particularly the types of instrumentation data generated and processed at Sandia Labs. The three-year project period ran from October 1992 through September 1995. This report begins with a descriptive overview of data compression, with and without loss, followed by a summary of the activities on the Sandia project, including research at several universities and the development of waveform compression software. Persons who participated in the project are also listed. The next part of the report contains a general discussion of the principles of lossless compression. Two basic compression stages, decorrelation and entropy coding, are described and discussed. An example of seismic data compression is included. Finally, there is a bibliography of published research. Taken together, the published papers contain the details of most of the work and accomplishments on the project. This final report is primarily an overview, without the technical details and results found in the publications listed in the bibliography.
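    The two stages named above can be illustrated on a synthetic waveform; this is an explanatory sketch, not the project's software. A first-difference decorrelator concentrates the sample distribution near zero, lowering the empirical entropy that the second-stage entropy coder would have to pay for.

```python
import numpy as np

def entropy_bits(samples):
    # Empirical entropy in bits per sample: a lower bound on the rate
    # an ideal entropy coder (stage 2) would achieve.
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def decorrelate(x):
    # Stage 1: first-difference predictor, the simplest decorrelator.
    x = np.asarray(x, dtype=np.int64)
    return np.concatenate(([x[0]], np.diff(x)))

# Smooth "instrumentation" waveform: after decorrelation the residuals
# cluster near zero and cost far fewer bits per sample to encode.
t = np.linspace(0, 4 * np.pi, 2000)
raw = np.round(1000 * np.sin(t)).astype(np.int64)
res = decorrelate(raw)
```

The transform is exactly invertible (a cumulative sum restores the waveform), which is what makes the overall pipeline lossless.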

  11. Hydrodynamic effects in the atmosphere of variable stars

    NASA Technical Reports Server (NTRS)

    Davis, C. G., Jr.; Bunker, S. S.

    1975-01-01

    Numerical models of variable stars are established, using a nonlinear radiative transfer coupled hydrodynamics code. The variable Eddington method of radiative transfer is used. Comparisons are for models of W Virginis, beta Doradus, and eta Aquilae. From these models it appears that shocks are formed in the atmospheres of classical Cepheids as well as W Virginis stars. In classical Cepheids, with periods from 7 to 10 days, the bumps occurring in the light and velocity curves appear as the result of a compression wave that reflects from the star's center. At the head of the outward going compression wave, shocks form in the atmosphere. Comparisons between the hydrodynamic motions in W Virginis and classical Cepheids are made. The strong shocks in W Virginis do not penetrate into the interior as do the compression waves formed in classical Cepheids. The shocks formed in W Virginis stars cause emission lines, while in classical Cepheids the shocks are weaker.

  12. A dedicated compression device for high resolution X-ray tomography of compressed gas diffusion layers

    NASA Astrophysics Data System (ADS)

    Tötzke, C.; Manke, I.; Gaiselmann, G.; Bohner, J.; Müller, B. R.; Kupsch, A.; Hentschel, M. P.; Schmidt, V.; Banhart, J.; Lehnert, W.

    2015-04-01

    We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are exemplarily evaluated for a GDL sample in the uncompressed state and for a compression of 30 vol.%. To mimic the geometry of the flow-field, we employed a compression punch with an integrated channel-rib-profile. It turned out that the GDL material is homogeneously compressed under the ribs, however, much less compressed underneath the channel. GDL fibers extend far into the channel volume where they might interfere with the convective gas transport and the removal of liquid water from the cell.

  13. A dedicated compression device for high resolution X-ray tomography of compressed gas diffusion layers

    SciTech Connect

    Tötzke, C.; Manke, I.; Banhart, J.; Gaiselmann, G.; Schmidt, V.; Bohner, J.; Müller, B. R.; Kupsch, A.; Hentschel, M. P.; Lehnert, W.

    2015-04-15

    We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are exemplarily evaluated for a GDL sample in the uncompressed state and for a compression of 30 vol.%. To mimic the geometry of the flow-field, we employed a compression punch with an integrated channel-rib-profile. It turned out that the GDL material is homogeneously compressed under the ribs, however, much less compressed underneath the channel. GDL fibers extend far into the channel volume where they might interfere with the convective gas transport and the removal of liquid water from the cell.

  14. Compressing Data by Source Separation

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Tréguier, E.; Schmidt, F.; Moussaoui, S.

    2012-04-01

    We interpret source separation of hyperspectral data as a way of applying lossy compression. In settings where datacubes can be interpreted as a linear combination of source spectra and their abundances and the number of sources is small, we try to quantify the trade-offs and the benefits of source separation and its implementation with non-negative source factorisation. Various methods to implement non-negative matrix factorisation have been used successfully for factoring hyperspectral images into physically meaningful sources which linearly combine to an approximation of the original image. This is useful for modelling the processes which make up the image. At the same time, the approximation opens up the potential for a significant reduction of the data by keeping only the sources and their corresponding abundances, instead of the original complete data cube. This presentation will try to explore the potential of the idea and also to establish the limits of its use. Formally, the setting is as follows: we consider P pixels of a hyperspectral image which are acquired at L frequency bands and which are represented as a PxL data matrix X. Each row of this matrix represents a spectrum at a pixel with spatial index p = 1..P; this implies that the original topology may be disregarded. Since we work under the assumption of linear mixing, the p-th spectrum, 1 <= p <= P, can be expressed as a linear combination of R source spectra. Thus X = A x S + E, with E an error matrix to be minimised, and X, A, and S having only non-negative entries. The rows of the matrix S are the estimations of the R source spectra, and each entry of A expresses the contribution of the r-th component, 1 <= r <= R, to the pixel with spatial index p. There are applications where we may interpret the rows of S as physical sources which can be combined using the columns of A to approximate the original data. If the source signals are few and strong (but not even necessarily meaningful), the data volume that has to be stored can be reduced substantially.
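    Under the linear mixing model above, the factorisation can be sketched with Lee-Seung multiplicative updates. This is an illustrative implementation; the rank R, iteration count, and random initialisation are assumptions, not the authors' settings.

```python
import numpy as np

def nmf(X, R, iters=500, eps=1e-9):
    # Lee-Seung multiplicative updates for X ~ A @ S with A, S >= 0.
    # Storing only A (PxR) and S (RxL) instead of X (PxL) is the lossy
    # compression: (P+L)*R values instead of P*L when R is small.
    rng = np.random.default_rng(0)
    P, L = X.shape
    A = rng.random((P, R))
    S = rng.random((R, L))
    for _ in range(iters):
        S *= (A.T @ X) / (A.T @ A @ S + eps)
        A *= (X @ S.T) / (A @ S @ S.T + eps)
    return A, S
```

For the toy sizes P = 30, L = 20, R = 3 used below, the factors hold 150 values versus 600 in the full cube, a factor-of-4 reduction before any further coding.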

  15. Compression failure of angle-ply laminates

    NASA Technical Reports Server (NTRS)

    Peel, Larry D.; Hyer, Michael W.; Shuart, Mark J.

    1991-01-01

    The present work deals with modes and mechanisms of failure in compression of angle-ply laminates. Experimental results were obtained from 42 angle-ply IM7/8551-7a specimens with a lay-up of [(±θ/±θ)]_6s, where θ, the off-axis angle, ranged from 0 degrees to 90 degrees. The results showed four failure modes, these modes being a function of off-axis angle. Failure modes include fiber compression, inplane transverse tension, inplane shear, and inplane transverse compression. Excessive interlaminar shear strain was also considered an important mode of failure. At low off-axis angles, experimentally observed strengths were considerably lower than published strengths. It was determined that laminate imperfections in the form of layer waviness could be a major factor in reducing compression strength. Previously developed linear buckling and geometrically nonlinear theories were used, with modifications and enhancements, to examine the influence of layer waviness on compression response. The wavy layer is described by a wave amplitude and a wave length. Linear elastic stress-strain response is assumed. The geometrically nonlinear theory, in conjunction with the maximum stress failure criterion, was used to predict compression failure and failure modes for the angle-ply laminates. A range of wave lengths and amplitudes was used. It was found that for 0° ≤ θ ≤ 15° failure was most likely due to fiber compression. For 15° < θ ≤ 35°, failure was most likely due to inplane transverse tension. For 35° < θ ≤ 70°, failure was most likely due to inplane shear. For θ > 70°, failure was most likely due to inplane transverse compression. The fiber compression and transverse tension failure modes depended more heavily on wave length than on wave amplitude. Thus using a single

  16. Apparatus for measuring tensile and compressive properties of solid materials at cryogenic temperatures

    DOEpatents

    Gonczy, J.D.; Markley, F.W.; McCaw, W.R.; Niemann, R.C.

    1992-04-21

    An apparatus for evaluating the tensile and compressive properties of material samples at very low or cryogenic temperatures employs a stationary frame and a dewar mounted below the frame. A pair of coaxial cylindrical tubes extend downward towards the bottom of the dewar. A compressive or tensile load is generated hydraulically and is transmitted by the inner tube to the material sample. The material sample is located near the bottom of the dewar in a liquid refrigerant bath. The apparatus employs a displacement measuring device, such as a linear variable differential transformer, to measure the deformation of the material sample relative to the amount of compressive or tensile force applied to the sample. 7 figs.

  17. Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery

    NASA Technical Reports Server (NTRS)

    Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj

    1994-01-01

    Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.

  18. Sublaminate buckling and compression strength of stitched uniweave graphite/epoxy laminates

    SciTech Connect

    Sharma, S.K.; Sankar, B.V.

    1995-12-31

    Effects of through-the-thickness stitching on the sublaminate buckling and residual compression strength (often referred to as compression-after-impact or CAI strength) of graphite/epoxy uniweave laminates are experimentally investigated. Primarily, three stitching variables were studied: type of stitch yarn, linear density of stitch yarn, and stitch density. Delaminations were created by implanting teflon inserts during processing. The improvement in the CAI strength of the stitched laminates was up to 400% compared to the unstitched laminates. Stitching was observed to effectively restrict sublaminate buckling failure of the laminates. The CAI strength increases rapidly with increase in stitch density. It reaches a peak CAI strength that is very close to the compression strength of the undamaged material. All the stitch yarns in this study demonstrated very close performance in improving the CAI strength. It appears that any stitch yarn with adequate breaking strength and stiffness successfully restricts the sublaminate buckling.

  19. Coherent Vortex Simulation of weakly compressible turbulent mixing layers using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Roussel, Olivier; Schneider, Kai

    2010-03-01

    An adaptive multiresolution method based on a second-order finite volume discretization is presented for solving the three-dimensional compressible Navier-Stokes equations in Cartesian geometry. The explicit time discretization is of second order, and a 2-4 MacCormack scheme is used for flux evaluation. Coherent Vortex Simulations (CVS) are performed by decomposing the flow variables into coherent and incoherent contributions. The coherent part is computed deterministically on a locally refined grid using the adaptive multiresolution method, while the influence of the incoherent part is neglected to model turbulent dissipation. The computational efficiency of this approach in terms of memory and CPU time compression is illustrated for turbulent mixing layers in the weakly compressible regime and for Reynolds numbers based on the mixing layer thickness between 50 and 200. Comparisons with direct numerical simulations allow the precision and efficiency of CVS to be assessed.

  20. An investigation of the compressive strength of Kevlar 49/epoxy composites

    NASA Technical Reports Server (NTRS)

    Kulkarni, S. V.; Rosen, B. W.; Rice, J. S.

    1975-01-01

    Tests were performed to evaluate the effect of a wide range of variables including matrix properties, interface properties, fiber prestressing, secondary reinforcement, and others on the ultimate compressive strength of Kevlar 49/epoxy composites. Scanning electron microscopy is used to assess the resulting failure surfaces. In addition, a theoretical study is conducted to determine the influence of fiber anisotropy and lack of perfect bond between fiber and matrix on the shear mode microbuckling. The experimental evaluation of the effect of various constituent and process characteristics on the behavior of these unidirectional composites in compression did not reveal any substantial increase in strength. However, theoretical evaluations indicate that the high degree of fiber anisotropy results in a significant drop in the predicted stress level for internal instability. Scanning electron microscope data analysis suggests that internal fiber failure and smooth surface debonding could be responsible for the measured low compressive strengths.