Science.gov

Sample records for alvar variable compression

  1. Alvar variable compression engine development. Final report

    SciTech Connect

    1998-03-30

    The Alvar engine is an invention by Mr. Alvar Gustafsson of Skarblacka, Sweden. It is a four-stroke spark ignition internal combustion engine with variable compression ratio and variable displacement. The compression ratio can be varied by means of small secondary cylinders and pistons that communicate with the main combustion chambers. The secondary pistons can be phase-shifted with respect to the main pistons. The engine is suitable for multi-fuel operation. Invention rights are held by Alvar Engine AB of Sweden, a company created to handle the development of the Alvar engine. A project was conceived wherein an optimised experimental engine would be built and tested to verify the advantages claimed for the Alvar engine and also to reveal possible drawbacks, if any. Alvar Engine AB appointed Gunnar Lundholm, professor of Combustion Engines at Lund University, Lund, Sweden, as principal investigator. The project can be seen as having three parts: (1) optimisation of the engine combustion chamber geometry; (2) design and manufacturing of the necessary engine parts; and (3) testing of the engine in an engine laboratory. NUTEK, the Swedish Board for Industrial and Technical Development, granted Gunnar Lundholm SEK 50,000 (about $6,700) to travel to the US to evaluate potential research and development facilities which seemed able to perform the different project tasks.

  2. Variable compression ratio control

    SciTech Connect

    Johnson, K.A.

    1988-04-19

    In a four cycle engine that includes a crankshaft having a plural number of main shaft sections defining the crankshaft rotational axis and a plural number of crank arms defining orbital shaft sections, a plural number of combustion cylinders, a movable piston within each cylinder, each cylinder and its associated piston defining a combustion chamber, a connecting rod connecting each piston to an orbital shaft section of the crankshaft, and a plural number of stationary support walls spaced along the crankshaft axis for absorbing crankshaft forces: the improvement is described comprising means for adjustably supporting the crankshaft on the stationary walls such that the crankshaft rotational axis is adjustable along the piston-cylinder axis for the purpose of varying a resulting engine compression ratio; the adjustable support means comprising a circular cavity in each stationary wall. A circular disk is swivably seated in each cavity, each circular disk having a circular opening therethrough eccentric to the disk center. The crankshaft is arranged so that respective ones of its main shaft sections are located within respective ones of the circular openings; means for rotating each circular disk around its center so that the main shaft sections of the crankshaft are adjusted toward and away from the combustion chamber; a pinion gear on an output end of the crankshaft in axial alignment with and positioned beyond the respective ones of the main shaft sections; and a rotary output gear located about and engaged with teeth extending from the pinion gear.

  3. Envera Variable Compression Ratio Engine

    SciTech Connect

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing the compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock.
When loads on the engine are low
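
    The compression-ratio arithmetic described in this abstract can be sketched numerically. The swept volume and ratio settings below are illustrative assumptions, not Envera design figures:

```python
# Hypothetical numbers, not from the Envera design: a sketch of how
# lowering the compression ratio enlarges the clearance (TDC) volume.
def clearance_volume(swept_cc, cr):
    """Clearance (TDC) volume implied by a swept volume and compression ratio,
    from CR = (V_swept + V_clearance) / V_clearance."""
    return swept_cc / (cr - 1.0)

swept = 500.0                                # cc per cylinder (assumed)
v_tdc_high = clearance_volume(swept, 12.0)   # high-CR, light-load setting
v_tdc_low = clearance_volume(swept, 9.0)     # knock-limited boost setting

# Lowering CR from 12:1 to 9:1 enlarges the TDC volume, so more boosted
# charge fits in the cylinder at the same peak pressure.
```

    The same relation explains the abstract's claim: at 9:1 the TDC volume is larger than at a higher fixed ratio, so boost pressure can rise without exceeding the knock-limited peak cylinder pressure.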

  4. Eccentric crank variable compression ratio mechanism

    DOEpatents

    Lawrence, Keith Edward; Moser, William Elliott; Roozenboom, Stephan Donald; Knox, Kevin Jay

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  5. Crankshaft assembly for variable stroke engine for variable compression

    SciTech Connect

    Heniges, W.B.

    1989-12-19

    This patent describes a crankshaft assembly for a variable compression engine with reciprocating pistons. It comprises: a crankshaft assembly including a web, a crankpin, a crankshaft arm, piston driven means carried by the crankpin, and eccentric means including an eccentric bushing rotatably carried by the crankpin and interposed between the crankpin and the piston driven means. The eccentric means includes an eccentric mounted gear whereby adjusted rotation of the eccentric means relative to the crankpin will alter the spatial relationship of the eccentric and the piston driven means to the crankshaft axis to alter piston stroke; and eccentric positioning means including a gear train comprising a first gear driven by the crankshaft arm for rotation about the crankshaft axis, a gear set driven by the first gear with certain gears of the set being displaceable, carrier means supporting the certain gears, control means coupled to the carrier means for positioning same and the certain gears, and driven gears powered by the gear set, with one of the driven gears in mesh with the eccentric mounted gear to impart rotation to same to alter the relationship of the eccentric bushing to the piston driven means and thereby determine stroke.

  6. Alvar soils and ecology in the boreal forest and taiga regions of Canada.

    NASA Astrophysics Data System (ADS)

    Ford, D.

    2012-04-01

    Alvars have been defined as "...a biological association based on a limestone plain with thin or no soil and, as a result, sparse vegetation. Trees and bushes are stunted or absent ... may include prairie spp." (Wikipedia). They were first described in southern Sweden, Estonia, the karst pavements of Yorkshire (UK) and the Burren (Eire). In North America alvars have been recognised and reported only in the Mixed Forest (deciduous/coniferous) Zone around the Great Lakes. An essential feature of the hydrologic controls on vegetation growth on natural alvars is that these terrains were glaciated in the last (Wisconsinan/Würm) ice age: the upper beds of any pre-existing epikarst were stripped away by glacier scour and there has been insufficient time for post-glacial epikarst to achieve the depths and densities required to support the deep rooting needed for mature forest cover. However, in the sites noted above, the alvars have been created, at least in part, by deforestation, overgrazing, burning to create browse, etc. and thus should not be considered wholly natural phenomena. There are extensive natural alvars in the Boreal Forest and Taiga ecozones in Canada. Their nature and variety will be illustrated with examples from cold temperate maritime climate settings in northern Newfoundland and the Gulf of St Lawrence and cold temperate continental to sub-arctic climates in northern Manitoba and the Northwest Territories.

  7. An Efficient Variable-Length Data-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Kiely, Aaron B.

    1996-01-01

    Adaptive variable-length coding scheme for compression of stream of independent and identically distributed source data involves either Huffman code or alternating run-length Huffman (ARH) code, depending on characteristics of data. Enables efficient compression of output of lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for same purpose. In addition, scheme suitable for parallel implementation on hardware with modular structure, provides for rapid adaptation to changing data source, compatible with block orientation to alleviate memory requirements, ensures efficiency over wide range of entropy, and easily combined with such other communication schemes as those for containment of errors and for packetization.
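
    The Huffman component of such a scheme can be sketched in a few lines; this is a generic textbook construction, not the NASA implementation, and the input string is an arbitrary example:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix-free variable-length code: frequent symbols
    receive shorter codewords (standard Huffman merging via a heap)."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]

data = "aaaabbc"                      # toy source stream
code = huffman_code(Counter(data))
# The most frequent symbol 'a' gets the shortest codeword.
```

    An adaptive scheme like the one described would switch between this code and a run-length variant depending on the measured source statistics.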

  8. Image Compression Using Vector Quantization with Variable Block Size Division

    NASA Astrophysics Data System (ADS)

    Matsumoto, Hiroki; Kichikawa, Fumito; Sasazaki, Kazuya; Maeda, Junji; Suzuki, Yukinori

    In this paper, we propose a method for compressing a still image using vector quantization (VQ). Local fractal dimension (LFD) is computed to divide an image into variable-size blocks. The LFD shows the complexity of local regions of an image, so that a region of an image that shows higher LFD values than those of other regions is partitioned into small blocks of pixels, while a region of an image that shows lower LFD values than those of other regions is partitioned into large blocks. Furthermore, we developed a division and merging algorithm to decrease the number of blocks to encode. This results in an improvement of the compression rate. We construct code books for the respective block sizes. To encode an image, a block of pixels is transformed by discrete cosine transform (DCT) and the closest vector is chosen from the code book (CB). In decoding, the code vector corresponding to the index is selected from the CB and then the code vector is transformed by inverse DCT to reconstruct a block of pixels. Computational experiments were carried out to show the effectiveness of the proposed method. Performance of the proposed method is slightly better than that of JPEG. When the learning images used to construct a CB differ from the test images, the compression rate is comparable to the compression rates of methods proposed so far, while image quality evaluated by NPIQM (normalized perceptual image quality measure) is almost the highest. The results show that the proposed method is effective for still image compression.
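
    The core VQ step of such a pipeline (after the DCT) reduces to a nearest-neighbor search in the codebook; the tiny codebook and coefficient values below are invented for illustration:

```python
# Minimal vector-quantization step: each block (here a flat list of DCT
# coefficients, values assumed) is encoded as the index of its closest
# code vector; decoding is a simple table lookup.
def nearest(block, codebook):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(block, codebook[i]))

codebook = [[0, 0, 0, 0], [10, 0, 0, 0], [10, 5, 0, 0]]  # toy CB
block = [9, 1, 0, 0]                    # toy DCT-coefficient block

idx = nearest(block, codebook)          # encode: only the index is stored
reconstructed = codebook[idx]           # decode: look the index up again
```

    Variable block sizes simply mean maintaining one such codebook per block size, as the abstract describes.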

  9. Pseudospectral simulation of compressible turbulence using logarithmic variables

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1993-01-01

    The direct numerical simulation of dissipative, highly compressible turbulent flow is performed using a pseudospectral Fourier technique. The governing equations are cast in a form where the important physical variables are the fluid velocity and the natural logarithms of the fluid density and temperature. Bulk viscosity is utilized to model polyatomic gases more accurately and to ensure numerical stability in the presence of strong shocks. Numerical examples include three-dimensional supersonic homogeneous turbulence and two-dimensional shock-turbulence interactions.
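
    A toy illustration of why logarithmic variables are attractive here: evolving s = ln(rho) and exponentiating to recover the density keeps it positive by construction, whereas the same decrement applied to rho directly could drive it negative. The update value is arbitrary:

```python
import math

# Sketch of the log-variable idea: the solver advances s = ln(rho),
# so the reconstructed density rho = exp(s) is positive by construction.
s = math.log(1.0)   # initial density rho = 1
s += -5.0           # a large (assumed) update; applied directly to rho
                    # as "rho -= 5" this would give a negative density
rho = math.exp(s)   # still strictly positive
```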

  10. Variable Quality Compression of Fluid Dynamical Data Sets Using a 3D DCT Technique

    NASA Astrophysics Data System (ADS)

    Loddoch, A.; Schmalzl, J.

    2005-12-01

    In this work we present a data compression scheme that is especially suited for the compression of data sets resulting from computational fluid dynamics (CFD). By adopting the concept of the JPEG compression standard and extending the approach of Schmalzl (Schmalzl, J. Using standard image compression algorithms to store data from computational fluid dynamics. Computers and Geosciences, 29, 1021-1031, 2003) we employ a three-dimensional discrete cosine transform of the data. The resulting frequency components are rearranged, quantized and finally stored using Huffman-encoding and standard variable length integer codes. The compression ratio and also the introduced loss of accuracy can be adjusted by means of two compression parameters to give the desired compression profile. Using the proposed technique, compression ratios of more than 60:1 are possible with a mean error of the compressed data of less than 0.1%.
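
    The lossy, tunable part of such a scheme is the quantization of the transformed coefficients; a minimal sketch, with invented coefficient values and step size:

```python
# Quantization of (DCT) frequency components: the step size is the
# compression parameter that trades accuracy for compression ratio.
def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]   # small integers, easy to entropy-code

def dequantize(q, step):
    return [v * step for v in q]

coeffs = [12.7, -3.2, 0.4, 0.05]   # toy frequency components (assumed)
step = 0.5
rec = dequantize(quantize(coeffs, step), step)

# The reconstruction error of each coefficient is bounded by step/2.
err = max(abs(a - b) for a, b in zip(coeffs, rec))
```

    Halving the step roughly halves the worst-case error while enlarging the integers to be entropy-coded, which is the accuracy/ratio trade-off the abstract mentions.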

  11. Working characteristics of variable intake valve in compressed air engine.

    PubMed

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, a performance analysis was introduced to inform the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536

  12. Working Characteristics of Variable Intake Valve in Compressed Air Engine

    PubMed Central

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, a performance analysis was introduced to inform the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536

  13. Variable valve timing in a homogenous charge compression ignition engine

    DOEpatents

    Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.

    2004-08-03

    The present invention relates generally to the field of homogenous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.

  14. Combustion engine variable compression ratio apparatus and method

    DOEpatents

    Lawrence; Keith E.; Strawbridge, Bryan E.; Dutart, Charles H.

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  15. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images of volunteers have been collected. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed at varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. 
The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  16. Acoustic transmission matrix of a variable area duct or nozzle carrying a compressible subsonic flow

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1980-01-01

    The differential equations governing the propagation of sound in a variable area duct or nozzle carrying a one-dimensional subsonic compressible fluid flow are derived and put in state variable form using acoustic pressure and particle velocity as the state variables. The duct or nozzle is divided into a number of regions. The region size is selected so that in each region the Mach number can be assumed constant and the area variation can be approximated by an exponential area variation. Consequently, the state variable equation in each region has constant coefficients. The transmission matrix for each region is obtained by solving the constant coefficient acoustic state variable differential equation. The transmission matrix for the duct or nozzle is the product of the individual transmission matrices of each region. Solutions are presented for several geometries with and without mean flow.
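
    The final assembly step described here, multiplying the per-region matrices, can be sketched directly; the 2x2 entries below are arbitrary stand-ins, not acoustic solutions:

```python
# The duct's overall transmission matrix is the ordered product of the
# per-region matrices (state vector: acoustic pressure, particle velocity).
def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

# Toy per-region transmission matrices (values assumed); in the paper each
# comes from solving a constant-coefficient state-variable equation.
regions = [[[1, 2], [0, 1]], [[1, 0], [3, 1]]]

total = regions[0]
for t in regions[1:]:
    total = matmul2(total, t)
```

    Splitting the duct finely enough that Mach number and area variation are effectively constant per region is what makes each factor a constant-coefficient solution.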

  17. Acoustic transmission matrix of a variable area duct or nozzle carrying a compressible subsonic flow

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1980-01-01

    The differential equations governing the propagation of sound in a variable area duct or nozzle carrying a one dimensional subsonic compressible fluid flow are derived and put in state variable form using acoustic pressure and particle velocity as the state variables. The duct or nozzle is divided into a number of regions. The region size is selected so that in each region the Mach number can be assumed constant and the area variation can be approximated by an exponential area variation. Consequently, the state variable equation in each region has constant coefficients. The transmission matrix for each region is obtained by solving the constant coefficient acoustic state variable differential equation. The transmission matrix for the duct or nozzle is the product of the individual transmission matrices of each region. Solutions are presented for several geometries with and without mean flow.

  18. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented for the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  19. Variability and anisotropy of mechanical behavior of cortical bone in tension and compression.

    PubMed

    Li, Simin; Demirci, Emrah; Silberschmidt, Vadim V

    2013-05-01

    The mechanical properties of cortical bone vary not only from bone to bone; they demonstrate spatial variability even within the same bone due to its changing microstructure. They also depend considerably on different loading modes and orientations. To understand the variability and anisotropic mechanical behavior of cortical bone tissue, specimens cut from four anatomical quadrants of bovine femurs were investigated in both tension and compression tests. The obtained experimental results revealed a highly anisotropic mechanical behavior, depending also on the loading mode (tension or compression). A compressive longitudinal loading regime resulted in the best load-bearing capacity for cortical bone, while tensile transverse loading provided significantly poorer results. The distinctive stress-strain curves obtained for tension and compression demonstrated various damage mechanisms associated with different loading modes. The variability of mechanical properties for different cortices was evaluated with two-way ANOVA analyses. Statistically significant differences were found among quadrants for the Young's modulus. The results of microstructure analysis of the entire transverse cross section of a cortical bone also confirmed variations of volume fractions of constituents at the microscopic level between anatomic quadrants: the microstructure of the anterior quadrant was dominated by plexiform bone, whereas secondary osteons were prominent in the posterior quadrant. The effective Young's modulus predicted using the modified Voigt-Reuss-Hill averaging scheme accurately reproduced our experimental results, corroborating additionally a strong effect of random and heterogeneous microstructure on variation of mechanical properties in cortical bone. PMID:23563047
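
    The classical (unmodified) Voigt-Reuss-Hill average underlying the scheme mentioned above is easy to state in code; the volume fractions and moduli below are invented for a two-phase toy composite, not bone data:

```python
# Classical Voigt-Reuss-Hill (VRH) average for a multi-phase composite:
# the Hill estimate is the mean of the Voigt (iso-strain, upper) and
# Reuss (iso-stress, lower) bounds on the effective modulus.
def voigt_reuss_hill(fractions, moduli):
    voigt = sum(f * e for f, e in zip(fractions, moduli))          # upper bound
    reuss = 1.0 / sum(f / e for f, e in zip(fractions, moduli))    # lower bound
    return 0.5 * (voigt + reuss)

# Toy two-phase composite (assumed volume fractions; moduli in GPa)
e_eff = voigt_reuss_hill([0.6, 0.4], [20.0, 10.0])
# e_eff lies between the constituent moduli, as the bounds require.
```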

  20. Influence of variables on the consolidation and unconfined compressive strength of crushed salt: Technical report

    SciTech Connect

    Pfeifle, T.W.; Senseny, P.E.; Mellegard, K.D.

    1987-01-01

    Eight hydrostatic compression creep tests were performed on crushed salt specimens fabricated from Avery Island dome salt. Following the creep test, each specimen was tested in unconfined compression. The experiments were performed to assess the influence of the following four variables on the consolidation and unconfined strength of crushed salt: grain size distribution, temperature, time, and moisture content. The experiment design comprised a half-fraction factorial matrix at two levels. The levels of each variable investigated were: grain size distribution, uniform-graded and well-graded (coefficient of uniformity of 1 and 8); temperature, 25°C and 100°C; time, 3.5 × 10³ s and 950 × 10³ s (approximately 60 minutes and 11 days, respectively); and moisture content, dry and wet (85% relative humidity for 24 hours). The hydrostatic creep stress was 10 MPa. The unconfined compression tests were performed at an axial strain rate of 1 × 10⁻⁵ s⁻¹. Results show that the variables time and moisture content have the greatest influence on creep consolidation, while grain size distribution and, to a somewhat lesser degree, temperature have the greatest influence on total consolidation. Time and moisture content and the confounded two-factor interactions between either grain size distribution and time or temperature and moisture content have the greatest influence on unconfined strength. 7 refs., 7 figs., 11 tabs.

  1. Ultraspectral sounder data compression using error-detecting reversible variable-length coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Ahuja, Alok; Huang, Hung-Lung; Schmit, Timothy J.; Heymann, Roger W.

    2005-08-01

    Nonreversible variable-length codes (e.g. Huffman coding, Golomb-Rice coding, and arithmetic coding) have been used in source coding to achieve efficient compression. However, a single bit error during noisy transmission can cause many codewords to be misinterpreted by the decoder. In recent years, increasing attention has been given to the design of reversible variable-length codes (RVLCs) for better data transmission in error-prone environments. RVLCs allow instantaneous decoding in both directions, which affords better detection of bit errors due to synchronization losses over a noisy channel. RVLCs have been adopted in emerging video coding standards--H.263+ and MPEG-4--to enhance their error-resilience capabilities. Given the large volume of three-dimensional data that will be generated by future space-borne ultraspectral sounders (e.g. IASI, CrIS, and HES), the use of error-robust data compression techniques will be beneficial to satellite data transmission. In this paper, we investigate a reversible variable-length code for ultraspectral sounder data compression, and present numerical experiments on error propagation for the ultraspectral sounder data. The results show that the RVLC provides significantly better error containment than JPEG2000 Part 2.
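
    The bidirectional-decoding property of RVLCs can be demonstrated with a toy symmetric code built from palindromic codewords (a common construction; this specific code and message are invented, not the paper's code):

```python
# A symmetric RVLC: every codeword is a palindrome, and the set is both
# prefix-free and suffix-free, so a bitstream decodes in either direction.
code = {"a": "0", "b": "11", "c": "101"}   # toy RVLC (assumed)

def decode(bits, table):
    """Greedy prefix decoding against a prefix-free codeword table."""
    inv = {v: k for k, v in table.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return out

bits = "".join(code[s] for s in "abca")                 # "0111010"
fwd = decode(bits, code)                                # forward pass
bwd = decode(bits[::-1],                                # backward pass:
             {k: v[::-1] for k, v in code.items()})[::-1]
# Both passes recover the same message, which is what lets a decoder
# localize a channel error by decoding in from both ends.
```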

  2. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.
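
    The filtering step this abstract describes reduces, in its simplest form, to hard-thresholding of detail coefficients; the coefficient values and threshold below are arbitrary illustrations, not from the method:

```python
# Sketch of wavelet-style coefficient filtering: keep coefficients whose
# magnitude exceeds the threshold eps, zero the rest. A larger eps gives a
# coarser, cheaper representation (LES-like); eps -> 0 approaches DNS.
def threshold_filter(coeffs, eps):
    return [c if abs(c) >= eps else 0.0 for c in coeffs]

coeffs = [0.9, -0.02, 0.3, 0.001, -0.5]   # toy detail coefficients (assumed)
kept = threshold_filter(coeffs, 0.1)
# Only the dynamically significant coefficients survive; the adaptive mesh
# would be refined where they do.
```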

  3. Structural Response of Compression-Loaded, Tow-Placed, Variable Stiffness Panels

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Guerdal, Zafer; Starnes, James H., Jr.

    2002-01-01

    Results of an analytical and experimental study to characterize the structural response of two compression-loaded variable stiffness composite panels are presented and discussed. These variable stiffness panels are advanced composite structures, in which tows are laid down along precise curvilinear paths within each ply and the fiber orientation angle varies continuously throughout each ply. The panels are manufactured from AS4/977-3 graphite-epoxy pre-preg material using an advanced tow placement system. Both variable stiffness panels have the same layup, but one panel has overlapping tow bands and the other panel has a constant-thickness laminate. A baseline cross-ply panel is also analyzed and tested for comparative purposes. Tests performed on the variable stiffness panels show a linear prebuckling load-deflection response, followed by a nonlinear response to failure at loads between 4 and 53 percent greater than the baseline panel failure load. The structural response of the variable stiffness panels is also evaluated using finite element analyses. Nonlinear analyses of the variable stiffness panels are performed which include mechanical and thermal prestresses. Results from analyses that include thermal prestress conditions correlate well with measured variable stiffness panel results. The predicted response of the baseline panel also correlates well with measured results.

  4. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control, bypasses, and mass flow rate]

    NASA Technical Reports Server (NTRS)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  5. A numerical investigation of the finite element method in compressible primitive variable Navier-Stokes flow

    NASA Technical Reports Server (NTRS)

    Cook, C. H.

    1977-01-01

    The results of a comprehensive numerical investigation of the basic capabilities of the finite element method (FEM) for numerical solution of compressible flow problems governed by the two-dimensional and axisymmetric Navier-Stokes equations in primitive variables are presented. The strong and weak points of the method as a tool for computational fluid dynamics are considered. The relation of the linear element finite element method to finite difference methods (FDM) is explored. Calculations of free shear layer and separated flows over aircraft boattail afterbodies with plume simulators indicate that the strongest assets of the method are its capabilities for reliable and accurate calculation employing variable grids which readily approximate complex geometry and capably adapt to the presence of diverse regions of large solution gradients without the necessity of domain transformation.

  6. Interfraction Liver Shape Variability and Impact on GTV Position During Liver Stereotactic Radiotherapy Using Abdominal Compression

    SciTech Connect

    Eccles, Cynthia L.; Dawson, Laura A.; Moseley, Joanne L.; Brock, Kristy K.

    2011-07-01

    Purpose: For patients receiving liver stereotactic body radiotherapy (SBRT), abdominal compression can reduce organ motion, and daily image guidance can reduce setup error. The reproducibility of liver shape under compression may impact treatment delivery accuracy. The purpose of this study was to measure the interfractional variability in liver shape under compression, after best-fit rigid liver-to-liver registration from kilovoltage (kV) cone beam computed tomography (CBCT) scans to planning computed tomography (CT) scans, and its impact on gross tumor volume (GTV) position. Methods and Materials: Evaluable patients were treated in a Research Ethics Board-approved SBRT six-fraction study with abdominal compression. Kilovoltage CBCT scans were acquired before treatment and reconstructed as respiratory-sorted CBCT scans offline. Manual rigid liver-to-liver registrations were performed from exhale-phase CBCT scans to exhale planning CT scans. Each CBCT liver was contoured, exported, and compared with the planning CT scan for spatial differences, by use of in-house-developed finite-element model-based deformable registration (MORFEUS). Results: We evaluated 83 CBCT scans from 16 patients with 30 GTVs. The mean volume of liver that deformed by greater than 3 mm was 21.7%. Excluding 1 outlier, the maximum volume that deformed by greater than 3 mm was 36.3% in a single patient. Over all patients, the absolute maximum deformations in the left-right (LR), anterior-posterior (AP), and superior-inferior directions were 10.5 mm (SD, 2.2), 12.9 mm (SD, 3.6), and 5.6 mm (SD, 2.7), respectively. The absolute mean predicted impact of liver volume displacements on GTV by use of center of mass displacements was 0.09 mm (SD, 0.13), 0.13 mm (SD, 0.18), and 0.08 mm (SD, 0.07) in the left-right, anterior-posterior, and superior-inferior directions, respectively. Conclusions: Interfraction liver deformations in patients undergoing SBRT under abdominal compression after rigid liver

  7. Existence of Compressible Current-Vortex Sheets: Variable Coefficients Linear Analysis

    NASA Astrophysics Data System (ADS)

    Trakhinin, Yuri

    2005-09-01

    We study the initial-boundary value problem resulting from the linearization of the equations of ideal compressible magnetohydrodynamics and the Rankine-Hugoniot relations about an unsteady piecewise smooth solution. This solution is supposed to be a classical solution of the system of magnetohydrodynamics on either side of a surface of tangential discontinuity (current-vortex sheet). Under some assumptions on the unperturbed flow, we prove an energy a priori estimate for the linearized problem. Since the tangential discontinuity is characteristic, the functional setting is provided by the anisotropic weighted Sobolev space W_2^{1,σ}. Despite the fact that the constant coefficients linearized problem does not meet the uniform Kreiss-Lopatinskii condition, the estimate we obtain is without loss of smoothness even for the variable coefficients problem and nonplanar current-vortex sheets. The result of this paper is a necessary step in proving the local-in-time existence of current-vortex sheet solutions of the nonlinear equations of magnetohydrodynamics.

  8. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
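The per-block code selection described above can be sketched with Golomb-Rice codes. This is a hedged illustration of the mechanism, not the Rice/Plaunt coder itself; the residual values and the use of Golomb-Rice codes as the candidate code set are assumptions for illustration.

```python
# Hedged sketch of per-block adaptive code selection (not the exact coder):
# each block of 21 mapped, non-negative prediction residuals is encoded with
# the Golomb-Rice parameter k that minimizes the encoded length.

def rice_encode(values, k):
    """Golomb-Rice: unary-coded quotient, '0' stop bit, k-bit remainder."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0" + (format(r, "b").zfill(k) if k else ""))
    return "".join(bits)

def best_k(values, kmax=8):
    """Pick the code option giving the shortest bitstream for this block."""
    return min(range(kmax + 1), key=lambda k: len(rice_encode(values, k)))

block = [0, 1, 0, 2, 1, 0, 3, 1, 0, 0, 1, 2, 0, 1, 0, 4, 1, 0, 2, 1, 0]
k = best_k(block)
print(k, len(rice_encode(block, k)))  # low-entropy block -> small k wins
```

Because the choice of k is recomputed for every 21-sample block, the coder tracks rapid changes in source statistics without storing any code tables, which is the property the abstract highlights.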

  9. Performance and exhaust emission characteristics of variable compression ratio diesel engine fuelled with esters of crude rice bran oil.

    PubMed

    Vasudeva, Mohit; Sharma, Sumeet; Mohapatra, S K; Kundu, Krishnendu

    2016-01-01

    As a substitute for petroleum-derived diesel, biodiesel has high potential as a renewable and environmentally friendly energy source. For petroleum-importing countries, the choice of feedstock for biodiesel production within the geographical region is a major influential factor. Crude rice bran oil is found to be a good and viable feedstock for biodiesel production. A two-step esterification is carried out for the higher free fatty acid crude rice bran oil. Blends of 10, 20 and 40 % by vol. crude rice bran biodiesel are tested in a variable compression ratio diesel engine at compression ratios of 15, 16, 17 and 18. Engine performance and exhaust emission parameters are examined. Cylinder pressure-crank angle variation is also plotted. The increase in compression ratio from 15 to 18 resulted in an 18.6 % decrease in brake specific fuel consumption and a 14.66 % increase in brake thermal efficiency on average. Cylinder pressure increases by 15 % when the compression ratio is increased. Carbon monoxide emission decreased by 22.27 %, hydrocarbon decreased by 38.4 %, carbon dioxide increased by 17.43 % and oxides of nitrogen (NOx) emission increased by 22.76 % on average when the compression ratio is increased from 15 to 18. The blends of crude rice bran biodiesel show better results than diesel with increase in compression ratio. PMID:27066330

  10. The observed compression and expansion of the F2 ionosphere as a major component of ionospheric variability

    NASA Astrophysics Data System (ADS)

    Lynn, K. J. W.; Gardiner-Garden, R. S.; Heitmann, A.

    2016-05-01

    This paper examines a number of sources of ionospheric variability and demonstrates that they have relationships in common which are currently not recognized. The paper initially deals with medium to large-scale traveling ionospheric disturbances. Following sections deal with nontraveling ionospheric disturbance (TID) ionospheric variations which are often repetitious from day to day. The latter includes the temporary rise in F2 height associated with sunset in equatorial latitudes resulting from strong upward drift in ionization driven by an E × B force. The following fall in height is often referred to as the premidnight collapse and is accompanied by a temporary increase in foF2 as a result of ionospheric compression. An entirely different repetitious phenomenon reported recently from middle latitudes in the Southern Hemisphere consists of strong morning and afternoon peaks in foF2 which define a midday bite-out and occur at the equinoxes. This behavior has been speculated to be tidal in origin. All the sources of ionospheric variability listed above exhibit similar relationships associated with a temporary expansion and upward lift of the ionospheric profile and a fall involving a compression of the ionospheric profile producing a peak in foF2 at the time of maximum compression. Such ionospheric compression/decompression is followed by a period in which the ionospheric profile recovers. Such relationships in traveling ionospheric disturbances (TIDs) have been noted previously. The present paper establishes for the first time that relationships hitherto seen as occurring only with TIDs are also present in association with other drivers of ionospheric variability.

  11. Compression of ECG signals using variable-length classified vector sets and wavelet transforms

    NASA Astrophysics Data System (ADS)

    Gurkan, Hakan

    2012-12-01

    In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines the processes of modeling the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS), and residual error coding via wavelet transform. In particular, we form the VL-CSEVS derived from the ECG signals, which exploits the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by the energy-based segmentation method; they are then presented to both the transmitter and the receiver used in our proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and MIT-BIH Compression Test Database and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results indicate that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, as supported by the clinical tests we carried out.
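The PRD metric named in this record has a standard definition; a minimal implementation is below (MPRD additionally subtracts the signal baseline before normalizing, which is omitted here). The sample values are assumptions for illustration.

```python
import math

# Percentage root-mean-square difference (PRD), the standard distortion
# metric for ECG compression: 100 * sqrt(sum((x - x')^2) / sum(x^2)).

def prd(original, reconstructed):
    """PRD between two equal-length signals."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

print(prd([3.0, 4.0], [3.0, 4.5]))  # a 0.5-sample error on this toy signal -> about 10
```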

  12. Numerical solution of the compressible Navier-Stokes equations using density gradients as additional dependent variables. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kwon, J. H.

    1977-01-01

    Numerical solution of the two-dimensional, time-dependent, compressible viscous Navier-Stokes equations about arbitrary bodies was treated using density gradients as additional dependent variables. Thus, six dependent variables were computed with the SOR iteration method. Besides the formulation for the pressure gradient terms, a formulation for computing the body density was presented. To approximate the governing equations, an implicit finite difference method was employed. In computing the solution for the flow about a circular cylinder, a problem arose near the wall at both stagnation points, so computations with various conditions were tried to examine the problem. Computations with and without these formulations are also compared. The flow variables were computed on a 37 by 40 field first, then on an 81 by 40 field.

  13. A burst compression and expansion technique for variable-rate users in satellite-switched TDMA networks

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1990-01-01

    A burst compression and expansion technique is described for asynchronously interconnecting variable-data-rate users with cost-efficient ground terminals in a satellite-switched, time-division-multiple-access (SS/TDMA) network. Compression and expansion buffers in each ground terminal convert between lower rate, asynchronous, continuous-user data streams and higher-rate TDMA bursts synchronized with the satellite-switched timing. The technique described uses a first-in, first-out (FIFO) memory approach which enables the use of inexpensive clock sources by both the users and the ground terminals and obviates the need for elaborate user clock synchronization processes. A continuous range of data rates from kilobits per second to that approaching the modulator burst rate (hundreds of megabits per second) can be accommodated. The technique was developed for use in the NASA Lewis Research Center System Integration, Test, and Evaluation (SITE) facility. Some key features of the technique have also been implemented in the ground terminals developed at NASA Lewis for use in on-orbit evaluation of the Advanced Communications Technology Satellite (ACTS) high burst rate (HBR) system.

  14. A burst compression and expansion technique for variable-rate users in satellite-switched TDMA networks

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1990-01-01

    A burst compression and expansion technique is described for asynchronously interconnecting variable-data-rate users with cost-efficient ground terminals in a satellite-switched, time-division-multiple-access (SS/TDMA) network. Compression and expansion buffers in each ground terminal convert between lower rate, asynchronous, continuous-user data streams and higher-rate TDMA bursts synchronized with the satellite-switched timing. The technique described uses a first-in, first-out (FIFO) memory approach which enables the use of inexpensive clock sources by both the users and the ground terminals and obviates the need for elaborate user clock synchronization processes. A continuous range of data rates from kilobits per second to that approaching the modulator burst rate (hundreds of megabits per second) can be accommodated. The technique was developed for use in the NASA Lewis Research Center System Integration, Test, and Evaluation (SITE) facility. Some key features of the technique have also been implemented in the ground terminals developed at NASA Lewis for use in on-orbit evaluation of the Advanced Communications Technology Satellite (ACTS) high burst rate (HBR) system.
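The FIFO-based rate conversion described in the two records above can be sketched in a few lines. This is a minimal software illustration of the buffering idea, not the SITE/ACTS hardware design; the burst size and sample values are assumptions.

```python
from collections import deque

# Sketch of a FIFO "compression buffer": a continuous low-rate user stream
# is written in, and fixed-size high-rate bursts are read out whenever a
# full burst has accumulated. An expansion buffer at the receiver would do
# the reverse.

class BurstCompressor:
    def __init__(self, burst_size):
        self.fifo = deque()
        self.burst_size = burst_size

    def write(self, sample):
        """Continuous-rate (user clock) side."""
        self.fifo.append(sample)

    def read_burst(self):
        """TDMA-frame (burst clock) side; returns None if the slot is skipped."""
        if len(self.fifo) >= self.burst_size:
            return [self.fifo.popleft() for _ in range(self.burst_size)]
        return None

comp = BurstCompressor(burst_size=4)
bursts = []
for t in range(10):          # ten user samples arriving one per user-clock tick
    comp.write(t)
    burst = comp.read_burst()
    if burst is not None:
        bursts.append(burst)
print(bursts)                # two full bursts released; two samples still buffered
```

Because the two clock domains interact only through the FIFO's fill level, neither side needs to be synchronized to the other, which is the property the abstract credits for allowing inexpensive clock sources.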

  15. The structure of variable property, compressible mixing layers in binary gas mixtures

    NASA Technical Reports Server (NTRS)

    Kozusko, F.; Grosch, C. E.; Jackson, T. L.; Kennedy, Christopher A.; Gatski, Thomas B.

    1996-01-01

    We present the results of a study of the structure of a parallel compressible mixing layer in a binary mixture of gases. The gases included in this study are hydrogen (H2), helium (He), nitrogen (N2), oxygen (O2), neon (Ne) and argon (Ar). Profiles of the variation of the Lewis and Prandtl numbers across the mixing layer for all thirty combinations of gases are given. It is shown that the Lewis number can vary by as much as a factor of eight and the Prandtl number by a factor of two across the mixing layer. Thus, assuming constant values for the Lewis and Prandtl numbers of a binary gas mixture in the shear layer, as is done in many theoretical studies, is a poor approximation. We also present profiles of the velocity, mass fraction, temperature and density for representative binary gas mixtures at zero and supersonic Mach numbers. We show that the shape of these profiles is strongly dependent on which gases are in the mixture as well as on whether the denser gas is in the fast stream or the slow stream.

  16. A model for the compressible, viscoelastic behavior of human amnion addressing tissue variability through a single parameter.

    PubMed

    Mauri, Arabella; Ehret, Alexander E; De Focatiis, Davide S A; Mazza, Edoardo

    2016-08-01

    A viscoelastic, compressible model is proposed to rationalize the recently reported response of human amnion in multiaxial relaxation and creep experiments. The theory includes two viscoelastic contributions responsible for the short- and long-term time-dependent response of the material. These two contributions can be related to physical processes: water flow through the tissue and dissipative characteristics of the collagen fibers, respectively. An accurate agreement of the model with the mean tension and kinematic response of amnion in uniaxial relaxation tests was achieved. By variation of a single linear factor that accounts for the variability among tissue samples, the model provides very sound predictions not only of the uniaxial relaxation but also of the uniaxial creep and strip-biaxial relaxation behavior of individual samples. This suggests that a wide range of viscoelastic behaviors due to patient-specific variations in tissue composition can be represented by the model without the need of recalibration and parameter identification. PMID:26497188

  17. On Fully Developed Channel Flows: Some Solutions and Limitations, and Effects of Compressibility, Variable Properties, and Body Forces

    NASA Technical Reports Server (NTRS)

    Maslen, Stephen H.

    1959-01-01

    An examination of the effects of compressibility, variable properties, and body forces on fully developed laminar flow has indicated several limitations on such streams. In the absence of a pressure gradient, but presence of a body force (e.g., gravity), an exact fully developed gas flow results. For a liquid this follows also for the case of a constant streamwise pressure gradient. These motions are exact in the sense of a Couette flow. In the liquid case two solutions (not a new result) can occur for the same boundary conditions. An approximate analytic solution was found which agrees closely with machine calculations. In the case of approximately exact flows, it turns out that for large temperature variations across the channel the effects of convection (due to, say, a wall temperature gradient) and frictional heating must be negligible. In such a case the energy and momentum equations are separated, and the solutions are readily obtained. If the temperature variations are small, then both convection effects and frictional heating can consistently be considered. This case becomes the constant-property incompressible case (or quasi-incompressible case for free-convection flows) considered by many authors. Finally, there is a brief discussion of cases wherein streamwise variations of all quantities are allowed, but only in such a form that the independent variables are separable. For the case where the streamwise velocity varies inversely as the square root of the distance along the channel, a solution is given.

  18. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique very much used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and the classical lossless algorithms give poor efficiencies in their compression. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results obtained show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm, preserving near-lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible (75%) within this algorithm can be achieved with good precision. PMID:10850759
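The efficiency figures quoted above are consistent with the usual definition of compression efficiency as fractional size reduction (our assumption about the paper's convention): a 75% ceiling means each stored value shrinks to a quarter of its original size.

```python
# Compression efficiency as percent size reduction. Under this definition,
# a 4:1 size ratio (e.g. 4 bytes per value reduced to 1 byte) gives the
# 75% ceiling quoted in the abstract.

def compression_efficiency(original_bytes, compressed_bytes):
    """Percent size reduction achieved by the compressor."""
    return 100.0 * (1.0 - compressed_bytes / original_bytes)

print(compression_efficiency(4_000_000, 1_000_000))  # 4:1 ratio -> 75.0
print(compression_efficiency(100, 50))               # 2:1 ratio -> 50.0
```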

  19. Spatially Variable Compressibility Estimation Using the Ensemble Smoother with Bathymetry Observations: Application to the Maja Gas Reservoir

    NASA Astrophysics Data System (ADS)

    Zoccarato, C.; Bau, D.; Teatini, P.

    2015-12-01

    A data assimilation (DA) framework is established to characterize the geomechanical response of a strongly compartmentalized hydrocarbon reservoir. The available observations over the offshore gas field consist of a bathymetric survey carried out before and at the end of the ten-year production life. The time-lapse map of vertical displacements is used to infer the most important parameter characterizing the reservoir compaction, i.e. the rock formation compressibility cm. The methodology is tested for two different conceptual models: (a) cm varies with depth and the vertical effective stress (heterogeneity due to lithostratigraphic variability) and (b) cm also varies horizontally within the stratigraphic unit. The latter hypothesis is made to account for the behavior of the partitioned reservoir due to the presence of sealing faults and thrusts, which suggests a block-heterogeneous cm. The calibration of the geomechanical parameters is obtained with the aid of the Ensemble Smoother algorithm, that is, an ensemble-based DA analysis scheme. In scenario (b), the number of reservoir blocks dictates the set of uncertain parameters, whereas scenario (a) is characterized by only one uncertain parameter. The outcome from scenario (a) indicates that DA is effective in reducing the cm uncertainty. However, the maximum measured settlement is underestimated, with an overestimation of the areal extent of the subsidence bowl. Significant improvements are obtained in scenario (b), where the maximum model overestimate is reduced by about 25% and an overall good match of the measured bathymetry is achieved.
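The Ensemble Smoother update used in this record can be illustrated on a toy version of scenario (a): a single uncertain compressibility parameter conditioned on one settlement observation. Everything below is an assumption for illustration; in particular the linear forward model g(cm) = 2.0 * cm merely stands in for the geomechanical simulator.

```python
import random

# Toy Ensemble Smoother: Kalman-type update of each ensemble member using
# perturbed observations. All numbers here are illustrative assumptions.

random.seed(0)

def ensemble_smoother(prior, g, d_obs, obs_var):
    """Update an ensemble of scalar parameters against one observation."""
    preds = [g(m) for m in prior]
    m_bar = sum(prior) / len(prior)
    p_bar = sum(preds) / len(preds)
    n = len(prior) - 1
    c_md = sum((m - m_bar) * (p - p_bar) for m, p in zip(prior, preds)) / n
    c_dd = sum((p - p_bar) ** 2 for p in preds) / n
    gain = c_md / (c_dd + obs_var)
    return [m + gain * (d_obs + random.gauss(0, obs_var ** 0.5) - p)
            for m, p in zip(prior, preds)]

prior = [random.gauss(1.0, 0.3) for _ in range(200)]   # prior cm ensemble
post = ensemble_smoother(prior, lambda m: 2.0 * m, d_obs=3.0, obs_var=0.01)
print(sum(post) / len(post))   # posterior mean moves toward the data-implied 1.5
```

The posterior ensemble both shifts toward the value implied by the bathymetry data and tightens around it, which is the uncertainty reduction the abstract reports for scenario (a).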

  20. The Use of Fuel Chemistry and Property Variations to Evaluate the Robustness of Variable Compression Ratio as a Control Method for Gasoline HCCI

    SciTech Connect

    Szybist, James P; Bunting, Bruce G

    2007-01-01

    On a gasoline engine platform, homogeneous charge compression ignition (HCCI) holds the promise of improved fuel economy and greatly reduced engine-out NOx emissions, without an increase in particulate matter emissions. In this investigation, a variable compression ratio (CR) engine equipped with a throttle and intake air heating was used to test the robustness of these control parameters to accommodate a series of fuels blended from reference gasoline, straight run refinery naphtha, and ethanol. Higher compression ratios allowed for operation with higher octane fuels, but operation could not be achieved with the reference gasoline, even at the highest compression ratio. Compression ratio and intake heat could be used separately or together to modulate combustion. A lambda of 2 provided optimum fuel efficiency, even though some throttling was necessary to achieve this condition. Ethanol did not appear to assist combustion, although only two ethanol-containing fuels were evaluated. The increased pumping work from throttling was minimal compared to the efficiency increases that were the result of lower unburned hydrocarbon (HC) and carbon monoxide (CO) emissions. Low temperature heat release was present for all the fuels, but could be suppressed with a higher intake air temperature. Results will be used to design future fuels and combustion studies with this research platform.

  1. Supercharged two-cycle engines employing novel single element reciprocating shuttle inlet valve mechanisms and with a variable compression ratio

    NASA Technical Reports Server (NTRS)

    Wiesen, Bernard (Inventor)

    2008-01-01

    This invention relates to novel reciprocating shuttle inlet valves, effective with every type of two-cycle engine, from small high-speed single cylinder model engines, to large low-speed multiple cylinder engines, employing spark or compression ignition. Also permitting the elimination of out-of-phase piston arrangements to control scavenging and supercharging of opposed-piston engines. The reciprocating shuttle inlet valve (32) and its operating mechanism (34) is constructed as a single and simple uncomplicated member, in combination with the lost-motion abutments, (46) and (48), formed in a piston skirt, obviating the need for any complex mechanisms or auxiliary drives, unaffected by heat, friction, wear or inertial forces. The reciprocating shuttle inlet valve retains the simplicity and advantages of two-cycle engines, while permitting an increase in volumetric efficiency and performance, thereby increasing the range of usefulness of two-cycle engines into many areas that are now dominated by the four-cycle engine.

  2. Hierarchical Order of Influence of Mix Variables Affecting Compressive Strength of Sustainable Concrete Containing Fly Ash, Copper Slag, Silica Fume, and Fibres

    PubMed Central

    Natarajan, Sakthieswaran; Karuppiah, Ganesan

    2014-01-01

    Experiments have been conducted to study the effect of the addition of fly ash, copper slag, and steel and polypropylene fibres on the compressive strength of concrete and to determine, using cluster analysis, the hierarchical order of influence of the mix variables in affecting the strength. While fly ash and copper slag are used for partial replacement of cement and fine aggregate, respectively, defined quantities of steel and polypropylene fibres were added to the mixes. It is found from the experimental study that, in general, irrespective of the presence or absence of fibres, (i) for a given copper slag-fine aggregate ratio, the concrete strength decreases as the fly ash-cement ratio increases, with the rate of decrease growing as the copper slag-fine aggregate ratio increases, and (ii) for a given fly ash-cement ratio, an increase in the copper slag-fine aggregate ratio increases the strength of the concrete. From the cluster analysis, it is found that the quantities of coarse and fine aggregate present have a high influence in affecting the strength. It is also observed that the quantities of fly ash and copper slag used as substitutes have equal “influence” in affecting the strength. A marginal effect of the addition of fibres on the compressive strength of concrete is also revealed by the cluster analysis. PMID:24707213

  3. Hierarchical order of influence of mix variables affecting compressive strength of sustainable concrete containing fly ash, copper slag, silica fume, and fibres.

    PubMed

    Natarajan, Sakthieswaran; Karuppiah, Ganesan

    2014-01-01

    Experiments have been conducted to study the effect of the addition of fly ash, copper slag, and steel and polypropylene fibres on the compressive strength of concrete and to determine, using cluster analysis, the hierarchical order of influence of the mix variables in affecting the strength. While fly ash and copper slag are used for partial replacement of cement and fine aggregate, respectively, defined quantities of steel and polypropylene fibres were added to the mixes. It is found from the experimental study that, in general, irrespective of the presence or absence of fibres, (i) for a given copper slag-fine aggregate ratio, the concrete strength decreases as the fly ash-cement ratio increases, with the rate of decrease growing as the copper slag-fine aggregate ratio increases, and (ii) for a given fly ash-cement ratio, an increase in the copper slag-fine aggregate ratio increases the strength of the concrete. From the cluster analysis, it is found that the quantities of coarse and fine aggregate present have a high influence in affecting the strength. It is also observed that the quantities of fly ash and copper slag used as substitutes have equal "influence" in affecting the strength. A marginal effect of the addition of fibres on the compressive strength of concrete is also revealed by the cluster analysis. PMID:24707213

  4. Detection of two-mode compression and degree of entanglement in continuous variables in parametric scattering of light

    SciTech Connect

    Rytikov, G. O.; Chekhova, M. V.

    2008-12-15

    Generation of 'twin beams' (of light with two-mode compression) in a single-pass optical parametric amplifier (a crystal with a nonzero quadratic susceptibility) is considered. Radiation at the output of the nonlinear crystal is essentially multimode, which raises the question of the effect of the detection volume on the extent of suppression of noise from the difference photocurrent of the detectors. In addition, the longitudinal as well as transverse size of the region in which parametric transformation takes place is of fundamental importance. It is shown that maximal suppression of noise from the difference photocurrent requires a high degree of entanglement of two-photon light at the output of the parametric amplifier, which is defined by Fedorov et al. [Phys. Rev. A 77, 032336 (2008)] as the ratio of the intensity distribution width to the correlation function width. The detection volume should be chosen taking into account both these quantities. Various modes of single-pass generation of twin beams (noncollinear frequency-degenerate and collinear frequency-nondegenerate synchronism of type I, as well as collinear frequency-degenerate synchronism of type II) are considered in connection with the degree of entanglement.

  5. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce the intermediate index representation to its final size. The efficiency of the loss-less compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
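The central idea of this patent family, manipulating indices that are adjacent in value to carry auxiliary bits, can be sketched as a parity embedding. This is a hedged illustration of the concept only, not the patented method; the index values and payload are assumptions.

```python
# Sketch of adjacent-value index manipulation: one auxiliary bit is hidden
# per quantized index by nudging the index to the matching parity, which
# exploits the one-unit value uncertainty the abstract describes.

def embed(indices, bits):
    """Force each index's parity to equal the payload bit (moves it by at most 1)."""
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:
            idx += 1 if idx % 2 == 0 else -1
        out.append(idx)
    return out

def extract(indices):
    """Recover the payload from the index parities."""
    return [idx % 2 for idx in indices]

quantized = [12, 7, 3, 40, 21, 8]   # e.g. quantized transform coefficients
payload   = [1, 1, 0, 0, 1, 0]      # auxiliary bits to hide
stego = embed(quantized, payload)
print(stego)                        # each index moved by at most one unit
```

Since each index changes by at most one quantization step, the distortion stays within the uncertainty the lossy compressor already introduced.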

  6. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce the intermediate index representation to its final size. The efficiency of the loss-less compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  7. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  8. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
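
    The adjacent-index manipulation these patent abstracts describe can be illustrated with a minimal sketch. The pairing rule below (even/odd index pairs, one bit per index) is a hypothetical stand-in for the patented key-pair table; all names are invented for illustration.

    ```python
    # Hypothetical sketch: quantization indices that differ by one unit are
    # treated as an (even, odd) pair, and the choice of pair member encodes
    # one auxiliary bit. This is an illustration, not the patented method.

    def embed_bits(indices, bits):
        """Replace each index with the member of its (even, odd) pair
        whose parity matches the auxiliary bit."""
        out = list(indices)
        for i, b in enumerate(bits):
            pair_base = out[i] - (out[i] % 2)   # even member of the pair
            out[i] = pair_base + b              # choose member by bit value
        return out

    def extract_bits(indices, n):
        """Recover the auxiliary bits from index parity."""
        return [idx % 2 for idx in indices[:n]]

    quantized = [12, 7, 40, 5, 18, 33]          # indices from a lossy coder
    payload = [1, 0, 1, 1, 0, 0]
    stego = embed_bits(quantized, payload)
    assert extract_bits(stego, len(payload)) == payload
    assert all(abs(a - b) <= 1 for a, b in zip(stego, quantized))  # ±1 change
    ```

    Because each index moves by at most one unit, the distortion stays within the one-unit uncertainty the abstracts attribute to the indices.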

  9. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography shows the uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to hold the sparsity in an incoherent image basis with the support of multiple speckle realizations. High pixel count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis can be one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with support of computational and optical co-design. Compressive sparse aperture holography can be another method. Compressive sparse sampling collects most of the significant field

  10. Compressible halftoning

    NASA Astrophysics Data System (ADS)

    Anderson, Peter G.; Liu, Changmeng

    2003-01-01

    We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with compression factor of three or better. Our method involves using novel halftone mask structures which consist of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the masks as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with a highly skewed distribution allowing Huffman compression coding techniques to be applied. This gives compression ratios in the range 3:1 to 10:1.
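
    The mask-as-sort-key idea in this abstract can be sketched with a toy experiment. The mask construction and image statistics below are invented for illustration, not the authors' dispersed-dot or clustered-dot masks; the point is only that bits reordered by threshold value become highly skewed and compress better.

    ```python
    # Illustrative sketch: a halftone made with a known mask of non-repeated
    # thresholds can be reversibly rearranged by sorting bits in mask order;
    # in that order the bit probability varies monotonically, giving skewed
    # runs that a generic entropy coder (zlib here) compresses better.
    import random, zlib

    random.seed(0)
    n = 4096
    mask = list(range(n))                 # non-repeated thresholds 0..n-1
    random.shuffle(mask)                  # spatial arrangement of the mask
    image = [min(n - 1, max(0, int(random.gauss(n // 2, n // 16))))
             for _ in range(n)]           # toy gray-scale data
    halftone = bytes(1 if image[i] > mask[i] else 0 for i in range(n))

    # Reversible rearrangement: order bits by threshold value (the sort key).
    order = sorted(range(n), key=lambda i: mask[i])
    rearranged = bytes(halftone[i] for i in order)

    plain = len(zlib.compress(halftone, 9))
    sorted_len = len(zlib.compress(rearranged, 9))
    assert sorted_len < plain             # skewed runs compress better
    ```

    Since the mask is known to the decoder, the permutation is invertible, so no information is lost by the rearrangement itself.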

  11. Linear analysis on the onset of thermal convection of highly compressible fluids with variable physical properties: Implications for the mantle convection of super-Earths

    NASA Astrophysics Data System (ADS)

    Kameyama, Masanori

    2016-02-01

    A series of our linear analyses on the onset of thermal convection was applied to that of highly compressible fluids in a planar layer whose thermal conductivity and viscosity vary in space, in order to study the influences of spatial variations in physical properties expected in the mantles of massive terrestrial planets. The thermal conductivity and viscosity are assumed to exponentially depend on depth and temperature, respectively, while the variations in thermodynamic properties (thermal expansivity and reference density) with depth are taken to be relevant for super-Earths with 10 times the Earth's mass. Our analysis demonstrated that the nature of incipient thermal convection is strongly affected by the interplay between the adiabatic compression and spatial variations in physical properties of fluids. Owing to the effects of adiabatic compression, a `stratosphere' can occur in the deep mantles of super-Earths, where a vertical motion is insignificant. An emergence of `stratosphere' is greatly enhanced by the increase in thermal conductivity with depth, while it is suppressed by the decrease in thermal expansivity with depth. In addition, by the interplay between the static stability and strong temperature dependence in viscosity, convection cells tend to be confined in narrow regions around the `tropopause' at the interface between the `stratosphere' of stable stratification and the `troposphere' of unstable stratification. We also found that, depending on the variations in physical properties, two kinds of stagnant regions can separately develop in the fluid layer. One is well-known `stagnant-lids' of cold and highly viscous fluids, and the other is `basal stagnant regions' of hot and less viscous fluids. The occurrence of `basal stagnant regions' may imply that convecting motions can be insignificant in the lowermost part of the mantles of massive super-Earths, even in the absence of strong increase in viscosity with pressure (or depth).

  12. A Comparison of Variable Time-Compressed Speech and Normal Rate Speech Based on Time Spent and Performance in a Course Taught by Self-Instructional Methods

    ERIC Educational Resources Information Center

    Short, Sarah Harvey

    1977-01-01

    College students using variable rate controlled speech compressors as compared with normal speed tape recorders had an average time saving of 32 percent and an average grade increase of 4.2 points on post-test scores. (Author)

  13. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  14. [Compression material].

    PubMed

    Perceau, Géraldine; Faure, Christine

    2012-01-01

    The compression of a venous ulcer is carried out with the use of bandages, and for less exudative ulcers, with socks, stockings or tights. The system of bandages is complex. Different forms of extension and therefore different types of models exist. PMID:22489428

  15. A Reweighted ℓ1-Minimization Based Compressed Sensing for the Spectral Estimation of Heart Rate Variability Using the Unevenly Sampled Data

    PubMed Central

    Chen, Szi-Wen; Chao, Shih-Chieh

    2014-01-01

    In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessments. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), that measures the percentage of difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data accordingly in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from unevenly sampled RR
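
    The reweighting idea behind this abstract can be sketched on a generic sparse-recovery problem. This is a minimal illustration in the spirit of reweighted ℓ1-minimization (Candès-Wakin-Boyd style weights), not the paper's IPFM-specific formulation; the sizes, the ISTA solver, and all parameter values are assumptions.

    ```python
    # Sketch of reweighted l1-minimization: solve a weighted lasso with a
    # simple ISTA loop, then re-solve with weights w = 1/(|x| + eps) so that
    # small coefficients are penalized more and large ones less.
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 40, 100, 4                  # n measurements, m unknowns, k nonzeros
    A = rng.standard_normal((n, m)) / np.sqrt(n)
    x_true = np.zeros(m)
    x_true[rng.choice(m, k, replace=False)] = rng.choice([-3.0, 3.0], k)
    y = A @ x_true                        # underdetermined measurements

    def weighted_ista(A, y, w, lam=0.01, iters=3000):
        L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x + A.T @ (y - A @ x) / L
            x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
        return x

    x = weighted_ista(A, y, np.ones(m))   # plain l1 solution first
    for _ in range(3):                    # reweighting rounds
        w = 1.0 / (np.abs(x) + 1e-3)
        x = weighted_ista(A, y, w)

    assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 0.1
    ```

    Each round sharpens the support: coordinates that came out near zero receive large weights and are suppressed, while genuine spectral peaks are penalized less.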

  16. A Comparison of Variable Time Compressed Speech and Normal Rate Speech Based on Time Spent and Performance in a Course Taught by Self-Instructional Methods.

    ERIC Educational Resources Information Center

    Short, Sarah Harvey

    The purpose of this study was to determine with precise measurements of time and carefully constructed posttests whether sighted students in a college course would save time and achieve higher scores when listening to cognitive information using variable time compressors as compared with students listening using normal speed tape recorders. The…

  17. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the ’traditional’ compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
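
    The group-testing side of this scheme can be sketched with a toy pooled design. The random pooling and the simple COMP ("combinatorial orthogonal matching pursuit") decoder below are illustrative assumptions, not the paper's sequencing-aware design.

    ```python
    # Toy sketch of pooled genotyping as group testing: each sample joins a
    # few pools; with rare carriers, a sample is flagged only if every pool
    # it joined tested positive (COMP decoding, no false negatives).
    import random

    random.seed(7)
    n_samples, n_pools, pools_per_sample = 200, 40, 5
    carriers = {13, 77, 150}              # sparse signal: few true carriers

    membership = {s: random.sample(range(n_pools), pools_per_sample)
                  for s in range(n_samples)}
    positive = {p for s in carriers for p in membership[s]}

    # COMP decoding: any sample touching a negative pool is surely negative.
    decoded = {s for s in range(n_samples)
               if all(p in positive for p in membership[s])}

    assert carriers <= decoded            # no false negatives by construction
    assert len(decoded) < n_samples // 4  # candidate set stays small
    ```

    The decoded set may contain a few false positives, which is why practical protocols follow up candidates individually or use more informative decoders.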

  18. Compression and venous ulcers.

    PubMed

    Stücker, M; Link, K; Reich-Schupke, S; Altmeyer, P; Doerler, M

    2013-03-01

    Compression therapy is considered to be the most important conservative treatment of venous leg ulcers. Until a few years ago, compression bandages were regarded as first-line therapy for venous leg ulcers; to date, however, medical compression stockings are the first choice of treatment. With respect to compression therapy of venous leg ulcers the following statements are widely accepted: 1. Compression improves the healing of ulcers when compared with no compression; 2. Multicomponent compression systems are more effective than single-component compression systems; 3. High compression is more effective than lower compression; 4. Medical compression stockings are more effective than compression with short stretch bandages. Healed venous leg ulcers show a high relapse rate without ongoing treatment. The use of medical stockings significantly reduces the number of recurrent ulcers. Furthermore, the relapse rate of venous leg ulcers can be significantly reduced by a combination of compression therapy and surgery of varicose veins compared with compression therapy alone. PMID:23482538

  19. Chapter 22: Compressed Air Evaluation Protocol

    SciTech Connect

    Benton, N.

    2014-11-01

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  20. Compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus

    2014-07-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex optimization. The DOA estimation problem is formulated in the CS framework and it is shown that CS has superior performance compared to traditional DOA estimation methods especially under challenging scenarios such as coherent arrivals and single-snapshot data. An offset and resolution analysis is performed to indicate the limitations of CS. It is shown that the limitations are related to the beampattern, thus can be predicted. The high-resolution capabilities and the robustness of CS are demonstrated on experimental array data from ocean acoustic measurements for source tracking with single-snapshot data. PMID:24993212

  1. Efficient Compression of High Resolution Climate Data

    NASA Astrophysics Data System (ADS)

    Yin, J.; Schuchardt, K. L.

    2011-12-01

    High resolution climate data can be massive. Those data can consume a huge amount of disk space for storage, incur significant overhead for outputting data during simulation, introduce high latency for visualization and analysis, and may even make interactive visualization and analysis impossible given the limit of the data that a conventional cluster can handle. These problems can be alleviated with effective and efficient data compression techniques. Even though the HDF5 format supports compression, previous work has mainly focused on employing traditional general purpose compression schemes such as dictionary coders and block-sorting based compression schemes. Those compression schemes mainly focus on encoding repeated byte sequences efficiently and are not well suited for compressing climate data consisting mainly of distinct floating point numbers. We plan to select and customize our compression schemes according to the characteristics of high-resolution climate data. One observation on high resolution climate data is that as the resolution becomes higher, values of various climate variables, such as temperature and pressure, become closer in nearby cells. This provides excellent opportunities for prediction-based compression schemes. We have performed a preliminary estimation of the compression ratio of a very simple prediction-based compression scheme, in which we compute the difference between the current floating point number and the previous floating point number and then encode the exponent and significand parts of the result with an entropy-based compression scheme. Our results show that we can achieve compression ratios between 2 and 3 in lossless compression, which is significantly higher than traditional compression algorithms. We have also developed lossy compression with our techniques. We can achieve orders of magnitude data reduction while ensuring error bounds. Moreover, our compression scheme is much more efficient and introduces much less overhead.
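
    The prediction idea in this abstract can be sketched in a few lines. The predictor below (XOR each float64 with its predecessor, then entropy-code the residual with zlib) is an assumed simplification of the approach described, using a synthetic smooth field in place of real climate data.

    ```python
    # Minimal sketch: values of a smooth field sampled on a fine grid are
    # close to their neighbors, so XOR-ing each float64 bit pattern with its
    # predecessor zeroes the sign/exponent/high-mantissa bytes, and a generic
    # entropy coder (zlib) then compresses the residual far better.
    import math, struct, zlib

    values = [math.sin(i / 500.0) for i in range(20000)]   # smooth toy field
    raw = struct.pack(f"<{len(values)}d", *values)

    bits = [struct.unpack("<Q", struct.pack("<d", v))[0] for v in values]
    deltas = [bits[0]] + [bits[i] ^ bits[i - 1] for i in range(1, len(bits))]
    residual = struct.pack(f"<{len(deltas)}Q", *deltas)

    plain = len(zlib.compress(raw, 9))
    predicted = len(zlib.compress(residual, 9))
    assert predicted < plain              # prediction raises the ratio
    ```

    The transform is exactly invertible (XOR the deltas back in order), so this remains a lossless scheme.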

  2. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  3. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
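
    The activity-estimator idea can be sketched as a simple bit-allocation rule. The estimator (sum of absolute neighbor differences) and the proportional split are assumptions for illustration, not the exact estimator of the NASA technique.

    ```python
    # Illustrative sketch: an activity estimator per line segment guides how
    # a fixed per-line bit budget N is split, so busy segments get more bits
    # and flat segments tolerate truncation.

    def allocate_bits(line, seg_len, n_bits):
        segments = [line[i:i + seg_len] for i in range(0, len(line), seg_len)]
        activity = [sum(abs(s[j] - s[j - 1]) for j in range(1, len(s))) + 1
                    for s in segments]    # +1 avoids zero-activity division
        total = sum(activity)
        return [n_bits * a // total for a in activity]

    line = [10] * 16 + [10, 80, 12, 200, 5, 90, 30, 160] + [50] * 8
    alloc = allocate_bits(line, 8, 256)
    assert sum(alloc) <= 256              # never exceeds the per-line budget
    assert alloc[2] > alloc[0]            # busy segment outranks flat segment
    ```

    A real coder would then pick a variable-length code per segment according to its allocation, truncating only the low-activity segments.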

  4. Learning in compressed space.

    PubMed

    Fabisch, Alexander; Kassahun, Yohannes; Wöhrle, Hendrik; Kirchner, Frank

    2013-06-01

    We examine two methods which are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we will use orthogonal functions and especially random projections for compression and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly. PMID:23501172
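
    The stated equivalence (compressing a layer's weights equals compressing its input) follows from associativity of matrix multiplication, and can be checked directly. The matrix names below are assumptions for illustration.

    ```python
    # Check of the equivalence: if a layer's weights are constrained to
    # W = A @ P for a fixed random projection P, then W @ x = A @ (P @ x),
    # i.e. learning compressed weights is the same as projecting the input.
    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out, d_comp = 64, 8, 12       # 12 learned params per unit, not 64
    P = rng.standard_normal((d_comp, d_in))    # fixed random projection
    A = rng.standard_normal((d_out, d_comp))   # learned, compressed weights
    x = rng.standard_normal(d_in)

    assert np.allclose((A @ P) @ x, A @ (P @ x))
    ```

    This is why training in compressed parameter space can reuse ordinary backpropagation: gradients flow through `A` while `P` stays fixed.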

  5. Prechamber Compression-Ignition Engine Performance

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  6. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  7. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  8. Compressible turbulent mixing: Effects of compressibility

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2016-04-01

    We studied by numerical simulations the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The difference in driven forcing was the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^{-5/3} scaling and suffered negligible influence from the compressibility. The growth of the Mach number showed (1) a first reduction and second enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) a first stronger and second weaker intermittency of scalar relative to that of velocity; and (4) an increase in the intermittency parameter which measures the intermittency of scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidal-forced flow, the field was filled with the small-scale, highly convoluted structures, while in the compressive-forced flow, the field was exhibited as the regions dominated by the large-scale motions of rarefaction and compression.

  9. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.

  10. Lossy Text Compression Techniques

    NASA Astrophysics Data System (ADS)

    Palaniappan, Venka; Latifi, Shahram

    Most text documents contain a large amount of redundancy. Data compression can be used to minimize this redundancy and increase transmission efficiency or save storage space. Several text compression algorithms have been introduced for lossless text compression used in critical application areas. For non-critical applications, we could use lossy text compression to improve compression efficiency. In this paper, we propose three different source models for character-based lossy text compression: Dropped Vowels (DOV), Letter Mapping (LMP), and Replacement of Characters (ROC). The working principles and transformation methods associated with these methods are presented. Compression ratios obtained are included and compared. Comparisons of performance with those of the Huffman Coding and Arithmetic Coding algorithm are also made. Finally, some ideas for further improving the performance already obtained are proposed.
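
    The Dropped Vowels (DOV) source model can be sketched directly. The exact dropping rule below (keep the first letter of each word, drop interior vowels) is an assumption; the paper's transformation details may differ.

    ```python
    # Sketch of the Dropped Vowels (DOV) lossy text model: interior vowels
    # are removed before lossless coding, trading exact reconstruction for a
    # smaller, still mostly readable input.
    import zlib

    def drop_vowels(text):
        out = []
        for w in text.split(" "):
            # keep the first letter so words stay recognizable (assumed rule)
            out.append(w[:1] + "".join(c for c in w[1:]
                                       if c.lower() not in "aeiou"))
        return " ".join(out)

    text = ("data compression can be used to minimize redundancy "
            "and increase transmission efficiency ") * 20
    lossy = drop_vowels(text)
    assert len(lossy) < len(text)
    assert len(zlib.compress(lossy.encode())) < len(zlib.compress(text.encode()))
    ```

    Since vowels account for roughly a third of English letters, the lossy transform shrinks the input before any lossless coder is applied, compounding the two gains.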

  11. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement on the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  12. Stability of compressible boundary layers

    NASA Technical Reports Server (NTRS)

    Nayfeh, Ali H.

    1989-01-01

    The stability of compressible 2-D and 3-D boundary layers is reviewed. The stability of 2-D compressible flows differs from that of incompressible flows in two important features: There is more than one mode of instability contributing to the growth of disturbances in supersonic laminar boundary layers and the most unstable first mode wave is 3-D. Whereas viscosity has a destabilizing effect on incompressible flows, it is stabilizing for high supersonic Mach numbers. Whereas cooling stabilizes first mode waves, it destabilizes second mode waves. However, second mode waves can be stabilized by suction and favorable pressure gradients. The influence of the nonparallelism on the spatial growth rate of disturbances is evaluated. The growth rate depends on the flow variable as well as the distance from the body. Floquet theory is used to investigate the subharmonic secondary instability.

  13. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms were introduced and employed for this purpose including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and Burrow-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.
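
    The row-and-column elimination part of this scheme can be sketched on a toy binary image. The bookkeeping format below (row/column presence flags plus the ink-bearing core) is an assumption for illustration; the paper's actual coding of the eliminated positions may differ.

    ```python
    # Sketch of row/column elimination: all-blank rows and columns of a
    # binary textual image are removed and their positions kept as flags, so
    # only the ink-bearing core needs further coding. Exactly invertible.

    def eliminate(img):
        rows = [any(r) for r in img]
        cols = [any(img[i][j] for i in range(len(img)))
                for j in range(len(img[0]))]
        core = [[img[i][j] for j in range(len(cols)) if cols[j]]
                for i in range(len(rows)) if rows[i]]
        return rows, cols, core

    def restore(rows, cols, core):
        it = iter(core)
        out = []
        for r in rows:
            it2 = iter(next(it)) if r else None
            out.append([next(it2) if (r and c) else 0 for c in cols])
        return out

    img = [[0, 0, 0, 0],
           [0, 1, 0, 1],
           [0, 0, 0, 0],
           [0, 1, 0, 0]]
    rows, cols, core = eliminate(img)
    assert core == [[1, 1], [1, 0]]       # only ink-bearing rows/cols remain
    assert restore(rows, cols, core) == img
    ```

    Textual images are dominated by blank inter-line and inter-character space, which is why eliminating empty rows and columns alone already removes much of the input before the codebook stage.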

  14. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  15. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
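
    The tiled-compression convention described here can be sketched in pure Python. This is an assumed simplification (gzip-compressed tiles keyed by position), not CFITSIO's binary-table storage format.

    ```python
    # Simplified sketch of tiled image compression: the image is cut into
    # rectangular tiles and each tile is compressed independently, so any
    # single tile can be restored without decompressing the others.
    import zlib

    def tile_compress(img, th, tw):
        h, w = len(img), len(img[0])
        tiles = {}
        for y in range(0, h, th):
            for x in range(0, w, tw):
                raw = bytes(img[i][j]
                            for i in range(y, min(y + th, h))
                            for j in range(x, min(x + tw, w)))
                tiles[(y, x)] = zlib.compress(raw, 9)
        return tiles

    img = [[(i * 7 + j) % 256 for j in range(16)] for i in range(16)]
    tiles = tile_compress(img, 8, 8)
    assert len(tiles) == 4                # a 16x16 image in four 8x8 tiles

    # Restore one tile without touching the others (row-major in the tile).
    tile = zlib.decompress(tiles[(8, 8)])
    assert tile[0] == img[8][8]
    ```

    Independent tiles are what make random access cheap: a reader seeking one image region decompresses only the tiles that cover it.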

  16. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal with an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach for reaching high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  17. Grid-free compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter

    2015-04-01

    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. On a discrete angular grid, the CS reconstruction degrades due to basis mismatch when the DOAs do not coincide with the angular directions on the grid. To overcome this limitation, a continuous formulation of the DOA problem is employed and an optimization procedure is introduced, which promotes sparsity on a continuous optimization variable. The DOA estimation problem with infinitely many unknowns, i.e., source locations and amplitudes, is solved over a few optimization variables with semidefinite programming. The grid-free CS reconstruction provides high-resolution imaging even with non-uniform arrays, single-snapshot data and under noisy conditions as demonstrated on experimental towed array data. PMID:25920844

  18. EEG data compression techniques.

    PubMed

    Antoniol, G; Tonella, P

    1997-02-01

    In this paper, electroencephalograph (EEG) and Holter EEG data compression techniques which allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits one to achieve significant reductions in the space required to store signals and in transmission time. The Huffman coding technique in conjunction with derivative computation reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. By exploiting this result, a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, discrete cosine transform (DCT), and repetition count compression methods. Finally, it is shown that the adoption of a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low-cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEGs in a compressed format. PMID:9214790
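
    The derivative-plus-Huffman pipeline described above can be sketched in a few lines of Python (a generic illustration with made-up sample values, not the authors' encoder): first differences concentrate a slowly varying signal near zero, and a Huffman code then exploits the skewed symbol distribution:

```python
import heapq
from collections import Counter
from itertools import accumulate

def delta_encode(samples):
    # First differences ("derivative computation"): slowly varying
    # signals yield mostly small values, skewing the symbol histogram.
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_code(symbols):
    # Standard Huffman construction; returns {symbol: bit string}.
    heap = [[n, i, {s: ""}] for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tick = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tick, merged])
        tick += 1
    return heap[0][2]

signal = [100, 101, 103, 102, 102, 101, 100, 100, 99, 100]
deltas = delta_encode(signal)
code = huffman_code(deltas)
bits = "".join(code[d] for d in deltas)

# Decoding: walk the prefix-free code, then undo the differencing.
inverse = {c: s for s, c in code.items()}
decoded, word = [], ""
for bit in bits:
    word += bit
    if word in inverse:
        decoded.append(inverse[word])
        word = ""
assert list(accumulate(decoded)) == signal
```

Because decoding is a table walk over a prefix-free code, it is cheap enough for the low-cost microcontrollers the paper targets.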

  19. Boson core compressibility

    NASA Astrophysics Data System (ADS)

    Khorramzadeh, Y.; Lin, Fei; Scarola, V. W.

    2012-04-01

    Strongly interacting atoms trapped in optical lattices can be used to explore phase diagrams of Hubbard models. Spatial inhomogeneity due to trapping typically obscures distinguishing observables. We propose that measures using boson double occupancy avoid trapping effects to reveal two key correlation functions. We define a boson core compressibility and core superfluid stiffness in terms of double occupancy. We use quantum Monte Carlo on the Bose-Hubbard model to empirically show that these quantities intrinsically eliminate edge effects to reveal correlations near the trap center. The boson core compressibility offers a generally applicable tool that can be used to experimentally map out phase transitions between compressible and incompressible states.

  20. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon, et al., re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  1. Local compressibilities in crystals

    NASA Astrophysics Data System (ADS)

    Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor

    2000-12-01

    An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.

  2. Competing hydrostatic compression mechanisms in nickel cyanide

    NASA Astrophysics Data System (ADS)

    Adamson, J.; Lucas, T. C.; Cairns, A. B.; Funnell, N. P.; Tucker, M. G.; Kleppe, A. K.; Hriljac, J. A.; Goodwin, A. L.

    2015-12-01

    We use variable-pressure neutron and X-ray diffraction measurements to determine the uniaxial and bulk compressibilities of nickel(II) cyanide, Ni(CN)2. Whereas other layered molecular framework materials are known to exhibit negative area compressibility, we find that Ni(CN)2 does not. We attribute this difference to the existence of low-energy in-plane tilt modes that provide a pressure-activated mechanism for layer contraction. The experimental bulk modulus we measure is about four times lower than that reported elsewhere on the basis of density functional theory methods [Phys. Rev. B 83 (2011) 024301].

  3. Military Data Compression Standard

    NASA Astrophysics Data System (ADS)

    Winterbauer, C. E.

    1982-07-01

    A facsimile interoperability data compression standard is being adopted by the U.S. Department of Defense and other North Atlantic Treaty Organization (NATO) countries. This algorithm has been shown to perform quite well in a noisy communication channel.

  4. Compressive optical image encryption.

    PubMed

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  5. Focus on Compression Stockings

    MedlinePlus

    ... Compression apparel is used to prevent or control edema. The post-thrombotic syndrome (PTS) is a complication ... This swelling is referred to as edema. If you have edema, compression therapy may be ...

  6. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  7. Compressible Astrophysics Simulation Code

    Energy Science and Technology Software Center (ESTSC)

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  8. Similarity by compression.

    PubMed

    Melville, James L; Riley, Jenna F; Hirst, Jonathan D

    2007-01-01

    We present a simple and effective method for similarity searching in virtual high-throughput screening, requiring only a string-based representation of the molecules (e.g., SMILES) and standard compression software, available on all modern desktop computers. This method utilizes the normalized compression distance, an approximation of the normalized information distance, based on the concept of Kolmogorov complexity. On representative data sets, we demonstrate that compression-based similarity searching can outperform standard similarity searching protocols, exemplified by the Tanimoto coefficient combined with a binary fingerprint representation and data fusion. Software to carry out compression-based similarity is available from our Web site at http://comp.chem.nottingham.ac.uk/download/zippity. PMID:17238245
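
    The normalized compression distance is short enough to state directly (a minimal sketch using zlib and toy byte strings as stand-ins for SMILES; the authors' tool uses standard desktop compressors in the same way): two strings are close when compressing their concatenation costs little more than compressing the larger one alone.

```python
import zlib

def C(x: bytes) -> int:
    # Compressed length under a real compressor stands in for the
    # (uncomputable) Kolmogorov complexity.
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

# Hypothetical stand-ins for SMILES strings of similar/dissimilar molecules.
ethanol_like = b"CCO" * 30
near_copy = b"CCO" * 29 + b"CCN"
phenol_like = b"c1ccccc1O" * 10
assert ncd(ethanol_like, near_copy) < ncd(ethanol_like, phenol_like)
```

Smaller values mean more shared structure; ranking database molecules by NCD against a query is the whole similarity-search protocol.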

  9. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
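
    The filling step can be illustrated with a plain relaxation solver (the patent specifies a faster multi-grid method; the Jacobi iteration below is a simplified stand-in): non-edge pixels are repeatedly replaced by the average of their neighbors until they satisfy a discrete Laplace equation, while edge pixels stay fixed.

```python
def fill_laplace(grid, known, iters=2000):
    """Fill unknown pixels by Jacobi relaxation toward a solution of the
    discrete Laplace equation; pixels flagged in `known` stay fixed.
    (A simplified stand-in for the patent's multi-grid solver.)"""
    h, w = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    for _ in range(iters):
        nxt = [row[:] for row in g]
        for y in range(h):
            for x in range(w):
                if not known[y][x]:
                    nbrs = [g[y + dy][x + dx]
                            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= y + dy < h and 0 <= x + dx < w]
                    nxt[y][x] = sum(nbrs) / len(nbrs)
        g = nxt
    return g

# Known "edge" columns hold 0 and 100; the filled interior becomes linear,
# so subtracting it from a smooth image leaves a small-valued difference.
grid = [[0, 0, 0, 0, 100] for _ in range(5)]
known = [[True, False, False, False, True] for _ in range(5)]
filled = fill_laplace(grid, known)
assert all(abs(filled[y][2] - 50.0) < 1e-3 for y in range(5))
```

The smooth interpolant is why the scheme works: the difference array (image minus filled edge array) is mostly near zero and compresses well.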

  10. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace`s equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  11. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  12. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.

  13. Ultraspectral sounder data compression using the Tunstall coding

    NASA Astrophysics Data System (ADS)

    Wei, Shih-Chieh; Huang, Bormin; Gu, Lingjia

    2007-09-01

    In an error-prone environment the compression of ultraspectral sounder data is vulnerable to error propagation. Tunstall coding is a variable-to-fixed length code which compresses data by mapping variable-length strings of source symbols to fixed-length codewords. It avoids the resynchronization difficulty encountered in fixed-to-variable length codes such as Huffman coding and arithmetic coding. This paper explores the use of Tunstall coding in reducing the error propagation for ultraspectral sounder data compression. The results show that our Tunstall approach has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC and CCSDS IDC 5/3. It also has less error propagation compared with JPEG-2000.
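
    Tunstall's construction is compact enough to sketch (a generic textbook illustration, not the paper's sounder-specific coder): the most probable parse string is repeatedly extended by every alphabet symbol until the dictionary fills the fixed codeword budget. Every codeword then occupies exactly nbits bits, so a channel error corrupts one codeword without desynchronizing the rest of the stream.

```python
import heapq

def tunstall_dict(probs, nbits):
    """Variable-to-fixed: map source strings to at most 2**nbits codewords."""
    heap = [(-p, s) for s, p in probs.items()]   # max-heap on probability
    heapq.heapify(heap)
    # Expanding a leaf adds len(probs) children and removes the leaf itself.
    while len(heap) + len(probs) - 1 <= 2 ** nbits:
        negp, s = heapq.heappop(heap)            # most probable parse string
        for sym, p in probs.items():
            heapq.heappush(heap, (negp * p, s + sym))
    words = sorted(s for _, s in heap)
    return {w: i for i, w in enumerate(words)}

def tunstall_encode(text, table):
    # The dictionary is prefix-free and complete, so exactly one entry
    # is a prefix of the remaining input at each step.  (The demo input
    # is chosen to parse into whole dictionary words.)
    out, i = [], 0
    while i < len(text):
        for j in range(i + 1, len(text) + 1):
            if text[i:j] in table:
                out.append(table[text[i:j]])
                i = j
                break
    return out

table = tunstall_dict({"a": 0.7, "b": 0.3}, nbits=2)   # {"aaa","aab","ab","b"}
codes = tunstall_encode("aabab", table)
inverse = {i: w for w, i in table.items()}
assert "".join(inverse[c] for c in codes) == "aabab"
```

Contrast with Huffman coding: there a flipped bit can shift every subsequent codeword boundary, which is exactly the propagation the paper is trying to limit.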

  14. Data compression of large document data bases.

    PubMed

    Heaps, H S

    1975-02-01

    Consideration is given to a document data base that is structured for information retrieval purposes by means of an inverted index and term dictionary. Vocabulary characteristics of various fields are described, and it is shown how the data base may be stored in a compressed form by use of restricted variable length codes that produce a compression not greatly in excess of the optimum that could be achieved through use of Huffman codes. The coding is word oriented. An alternative scheme of word fragment coding is described. It has the advantage that it allows the use of a small dictionary, but is less efficient with respect to compression of the data base. PMID:1127034
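
    A "restricted" variable-length code in this spirit can be sketched as a byte-aligned scheme (an illustrative guess at the flavor of code meant, not Heaps' exact construction): frequent dictionary terms get a one-byte code and the rest two bytes, trading a little compression for far simpler decoding than bit-level Huffman codes.

```python
def encode_id(term_id):
    """1 byte for ids < 128; otherwise 2 bytes with the high bit of the
    first byte set as a continuation flag (ids up to 32767)."""
    if term_id < 128:
        return bytes([term_id])
    assert term_id < 128 * 256
    return bytes([0x80 | (term_id >> 8), term_id & 0xFF])

def decode_ids(data):
    out, i = [], 0
    while i < len(data):
        if data[i] & 0x80:                       # two-byte code
            out.append(((data[i] & 0x7F) << 8) | data[i + 1])
            i += 2
        else:                                    # one-byte code
            out.append(data[i])
            i += 1
    return out

ids = [3, 500, 127, 128, 20000]   # frequent terms get the small ids
blob = b"".join(encode_id(i) for i in ids)
assert decode_ids(blob) == ids and len(blob) == 8
```

Assigning the shortest codes to the most frequent vocabulary items is what keeps such a restricted code within a few percent of the Huffman optimum on natural-language text.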

  15. Wavelet compression of medical imagery.

    PubMed

    Reiter, E

    1996-01-01

    Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355
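
    The "long runs of zero" effect is easy to see with the simplest wavelet, the Haar transform (a one-level 1-D sketch; medical codecs use more elaborate 2-D biorthogonal wavelets): smooth regions produce zero detail coefficients, and only edges leave a trace.

```python
def haar_1d(x):
    """One level of the 1-D Haar transform: pairwise averages (the coarse
    signal) and pairwise half-differences (the detail coefficients)."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

# A piecewise-constant "image row": smooth except for one edge.
signal = [10, 10, 10, 10, 10, 50, 50, 50]
avg, det = haar_1d(signal)
assert det == [0.0, 0.0, -20.0, 0.0]   # only the edge yields a detail
```

The sparse detail band is the payload the later entropy-coding stages consume; its runs of zeros are what compress so efficiently.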

  16. Wave energy devices with compressible volumes

    PubMed Central

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-01-01

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m3 and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s. PMID:25484609

  17. Transverse Compression of Tendons.

    PubMed

    Samuel Salisbury, S T; Paul Buckley, C; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon. PMID:26833218

  18. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  19. Self-Similar Compressible Free Vortices

    NASA Technical Reports Server (NTRS)

    vonEllenrieder, Karl

    1998-01-01

    Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependencies of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.

  20. Compressible Flow Toolbox

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2006-01-01

    The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in the open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: the isentropic-flow equations; the Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction); the Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section); the normal-shock equations; the oblique-shock equations; and the expansion equations.
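
    As an example of the kind of relation the toolbox solves, the isentropic-flow equations for a calorically perfect gas give closed-form stagnation-to-static ratios as a function of Mach number (standard textbook formulas transcribed in Python; the MATLAB toolbox itself is not reproduced here):

```python
def isentropic_ratios(mach, gamma=1.4):
    """Stagnation-to-static ratios T0/T, p0/p, rho0/rho for isentropic
    flow of a calorically perfect gas at a given Mach number:
      T0/T   = 1 + (gamma - 1)/2 * M**2
      p0/p   = (T0/T)**(gamma / (gamma - 1))
      rho0/rho = (T0/T)**(1 / (gamma - 1))"""
    t = 1.0 + 0.5 * (gamma - 1.0) * mach * mach
    return t, t ** (gamma / (gamma - 1.0)), t ** (1.0 / (gamma - 1.0))

t, p, rho = isentropic_ratios(1.0)   # sonic conditions in air
assert abs(t - 1.2) < 1e-12          # T0/T at M = 1
assert abs(p - 1.2 ** 3.5) < 1e-12   # p0/p, about 1.893
```

The ideal-gas identity p0/p = (T0/T) * (rho0/rho) provides a quick consistency check on the three returned ratios.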

  1. Isentropic Compression of Argon

    SciTech Connect

    H. Oona; J.C. Solem; L.R. Veeser; C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley

    1997-08-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed, the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  2. The compressible mixing layer

    NASA Technical Reports Server (NTRS)

    Vandromme, Dany; Haminh, Hieu

    1991-01-01

    The capability of turbulence models to correctly handle the natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing these motions into an 'organized' part, for which equations of motion are solved, and a remaining 'incoherent' part represented by a turbulence model. Two-equation turbulence models and second-order turbulence models can yield reasonable results. For a specific compressible unsteady turbulent flow, graphic presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in rapid flow parts, but shocklets do not yet occur.

  3. COMPREHENSION OF COMPRESSED SPEECH BY ELEMENTARY SCHOOL CHILDREN.

    ERIC Educational Resources Information Center

    WOOD, C. DAVID

    THE EFFECTS OF FOUR VARIABLES ON THE EXTENT OF COMPREHENSION OF COMPRESSED SPEECH BY ELEMENTARY SCHOOL CHILDREN WERE INVESTIGATED. THESE VARIABLES WERE RATE OF PRESENTATION, GRADE LEVEL IN SCHOOL, INTELLIGENCE, AND AMOUNT OF PRACTICE. NINETY SUBJECTS PARTICIPATED IN THE EXPERIMENT. THE TASK FOR EACH SUBJECT WAS TO LISTEN INDIVIDUALLY TO 50 TAPE…

  4. Isentropic compression of argon

    SciTech Connect

    Veeser, L.R.; Ekdahl, C.A.; Oona, H.

    1997-06-01

    The compression was done in an MC-1 flux compression (explosive) generator in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe is removed), both the Teflon and the argon are becoming conductive. The conductivity could not be determined (the Teflon insulation properties are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω cm)^-1, because when the Teflon breaks down, its dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe, or to measure the conductivity without a probe, are being sought.

  5. Orbiting dynamic compression laboratory

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.

    1984-01-01

    In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kbar, respectively. When the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, as in the case of molybdenum, tensile tests of the recovered samples demonstrate beneficial ultimate tensile strengths.

  6. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation. PMID:26356981
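
    The fixed-rate idea can be illustrated with plain block floating point (a deliberately simplified sketch: the paper's compressor additionally applies an orthogonal block transform and embedded coding, neither of which is reproduced here). Each block of four doubles becomes one shared exponent plus four fixed-width integers, so every block costs the same number of bits and can be addressed randomly:

```python
import math

def compress_block(block, bits=12):
    """Block floating point: one shared exponent plus four signed
    `bits`-bit integers per block of 4 values (a fixed-rate sketch,
    not the paper's transform-based compressor)."""
    e = max((math.frexp(v)[1] for v in block if v), default=0)
    scale = 2.0 ** (bits - 1 - e)
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = [max(lo, min(hi, round(v * scale))) for v in block]
    return e, q

def decompress_block(e, q, bits=12):
    scale = 2.0 ** (bits - 1 - e)
    return [v / scale for v in q]

block = [3.14159, -2.71828, 0.5, 1.41421]
e, q = compress_block(block)
out = decompress_block(e, q)
# Quantization error is bounded by one step of the shared scale.
assert all(abs(a - b) <= 2.0 ** (e - 11) for a, b in zip(block, out))
```

Because every block occupies a known, fixed size, the offset of any block in the compressed stream is a simple multiplication, which is what makes read/write random access cheap.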

  7. Deconstructed transverse mass variables

    NASA Astrophysics Data System (ADS)

    Ismail, Ahmed; Schwienhorst, Reinhard; Virzi, Joseph S.; Walker, Devin G. E.

    2015-04-01

    Traditional searches for R-parity conserving natural supersymmetry (SUSY) require large transverse mass and missing energy cuts to separate the signal from large backgrounds. SUSY models with compressed spectra inherently produce signal events with small amounts of missing energy that are hard to explore. We use this difficulty to motivate the construction of "deconstructed" transverse mass variables which are designed to preserve information on both the norm and direction of the missing momentum. We demonstrate the effectiveness of these variables in searches for the pair production of supersymmetric top-quark partners which subsequently decay into a final state with an isolated lepton, jets and missing energy. We show that the use of deconstructed transverse mass variables extends the accessible compressed spectra parameter space beyond the region probed by traditional methods. The parameter space can further be expanded to neutralino masses that are larger than the difference between the stop and top masses. In addition, we also discuss how these variables allow for novel searches of single stop production, in order to directly probe unconstrained stealth stops in the small stop- and neutralino-mass regime. We also demonstrate the utility of these variables for generic gluino and stop searches in all-hadronic final states. Overall, we demonstrate that deconstructed transverse variables are essential to any search wanting to maximize signal separation from the background when the signal has undetected particles in the final state.
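
    For reference, the traditional transverse mass these variables deconstruct is, in the massless limit, m_T = sqrt(2 pT ETmiss (1 - cos Δφ)). A direct transcription (the standard collider formula, not the authors' new variables, which separately retain the magnitude and direction information this single number collapses):

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """m_T**2 = 2 * pT(lepton) * ET(miss) * (1 - cos dphi),
    with lepton and neutrino masses neglected."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# Back-to-back lepton and missing momentum, 40 GeV each -> m_T = 80 GeV.
assert abs(transverse_mass(40.0, 40.0, math.pi) - 80.0) < 1e-9
```

In compressed spectra both met and dphi tend to be small, pushing m_T under the traditional cut thresholds, which is the failure mode that motivates deconstructing the variable.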

  8. Compress Your Files

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability to pack many related files into one archive (for example, all files…

  9. The Compressed Video Experience.

    ERIC Educational Resources Information Center

    Weber, John

    In the fall semester 1995, Southern Arkansas University-Magnolia (SAU-M) began a two-semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…

  10. Nonlinear Frequency Compression

    PubMed Central

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
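The two NFC parameters varied in the first study can be made concrete with a sketch of one common formulation of nonlinear frequency compression (an assumption for illustration; the exact curve used by the hearing aids in the study is not given in the abstract): frequencies below the cutoff pass through unchanged, while those above it are compressed on a log-frequency scale by the compression ratio.

```python
# Hypothetical NFC input-output frequency map, for illustration only.

def nfc_map(f_in, cutoff_hz, ratio):
    """Map an input frequency (Hz) to its frequency-lowered output (Hz)."""
    if f_in <= cutoff_hz:
        return f_in                      # below cutoff: unchanged
    # Above cutoff: compress log-frequency distance from the cutoff.
    return cutoff_hz * (f_in / cutoff_hz) ** (1.0 / ratio)

# With a 2 kHz cutoff and a 2:1 ratio, 8 kHz maps to 2000 * (8/2)^(1/2) = 4 kHz.
```

A lower cutoff moves more of the speech spectrum into the compressed region, which is consistent with the finding that the cutoff parameter affects sound quality more than the ratio does.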

  11. Compression: Rent or own

    SciTech Connect

    Cahill, C.

    1997-07-01

    Historically, the decision to purchase or rent compression has been set as a corporate philosophy. As companies decentralize, there seems to be a shift away from corporate philosophy toward individual profit centers. This has led the decision to rent versus purchase to be looked at on a regional or project-by-project basis.

  12. Improved compression molding process

    NASA Technical Reports Server (NTRS)

    Heier, W. C.

    1967-01-01

    Modified compression molding process produces plastic molding compounds that are strong, homogeneous, free of residual stresses, and have improved ablative characteristics. The conventional method is modified by applying a vacuum to the mold during the molding cycle, using a volatile sink, and exercising precise control of the mold closure limits.

  13. Mosaic image compression

    NASA Astrophysics Data System (ADS)

    Chaudhari, Kapil A.; Reeves, Stanley J.

    2005-02-01

    Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
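The residual scheme described above can be sketched as follows, with coarse quantization standing in for the JPEG step (an assumption made so the sketch is self-contained; the paper uses an actual JPEG codec) and zlib standing in for the entropy coder:

```python
import zlib
import numpy as np

# Lossless coding via a lossy base layer plus a losslessly coded residual.
rng = np.random.default_rng(0)
mosaic = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

lossy = (mosaic // 16) * 16              # stand-in for the JPEG-coded image
residual = mosaic.astype(np.int16) - lossy.astype(np.int16)

packed = zlib.compress(residual.tobytes())   # entropy-code the residual

# Decoder: lossy image + residual reproduces the mosaic bit-exactly.
restored = lossy.astype(np.int16) + np.frombuffer(
    zlib.decompress(packed), dtype=np.int16).reshape(mosaic.shape)
assert np.array_equal(restored.astype(np.uint8), mosaic)
```

The point of the design is that the residual is small and narrowly distributed when the base coder is good, so it entropy-codes cheaply while still guaranteeing exact recovery of the original mosaic data.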

  14. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution. PMID:8172973
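The benefit of the DPCM transformation described above can be shown with a small sketch. zlib (LZ77-based, from the Python standard library) stands in for the LZW coder discussed in the article, which is not available in the standard library:

```python
import zlib
import numpy as np

# A smooth signal (random walk) has high byte entropy but small differences,
# so differencing (DPCM) concentrates values near zero and helps the coder.
rng = np.random.default_rng(1)
steps = rng.integers(-8, 9, 4096)
row = np.cumsum(steps).astype(np.uint8)  # smooth signal, wraps mod 256

dpcm = row.copy()
dpcm[1:] = row[1:] - row[:-1]            # uint8 arithmetic wraps mod 256

raw_size = len(zlib.compress(row.tobytes(), 9))
dpcm_size = len(zlib.compress(dpcm.tobytes(), 9))   # markedly smaller

# Decoder inverts DPCM with a cumulative sum (mod 256).
restored = np.cumsum(dpcm.astype(np.uint64)).astype(np.uint8)
assert np.array_equal(restored, row)
```

The differenced stream draws on only ~17 byte values instead of all 256, which is exactly the enhancement the article describes for Huffman, dictionary, and arithmetic coders applied after a DPCM transform.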

  15. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
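The coded-aperture forward model described above can be sketched as follows (an illustrative toy, not the authors' simulation; the sparsity-regularized inversion used for reconstruction is omitted):

```python
import numpy as np

# Forward model: T sub-frames are each multiplied by a different random
# binary mask and summed into one integrated camera frame during exposure.
rng = np.random.default_rng(2)
T, H, W = 8, 32, 32                      # 8 sub-frames coded into 1 readout
subframes = rng.random((T, H, W))
masks = rng.integers(0, 2, size=(T, H, W))   # per-sub-frame coded aperture

coded_frame = (masks * subframes).sum(axis=0)  # single camera frame

# The camera reads out H*W values in place of T*H*W: an 8x gain in
# effective frame rate here, paid for by a compressive reconstruction
# (e.g. a sparsity prior over a learned dictionary) at readout time.
assert coded_frame.shape == (H, W)
```

Recovering the T sub-frames from one coded frame is underdetermined pixel-by-pixel, which is why the statistical CS inversion mentioned in the abstract is essential rather than optional.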

  16. Compression and texture in socks enhance football kicking performance.

    PubMed

    Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham

    2016-08-01

    The purpose of this study was to observe effects of wearing textured insoles and clinical compression socks on organisation of lower limb interceptive actions in developing athletes of different skill levels in association football. Six advanced learners and six completely novice football players (15.4 ± 0.9 years) performed 20 instep kicks with maximum velocity, in four randomly organised insole and sock conditions: (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI); and (d) Compression Socks with Textured Insoles (CSTI). Reflective markers were placed on key anatomical locations and the ball to facilitate three-dimensional (3D) movement recording and analysis. Data on 3D kinematic variables and initial ball velocity were analysed using one-way mixed model ANOVAs. Results revealed that wearing textured and compression materials enhanced performance in key variables, such as the maximum velocity of the instep kick and increased initial ball velocity, among advanced learners compared to the use of non-textured and non-compression materials. Adding texture to football boot insoles appeared to interact with compression materials to improve kicking performance, captured by these important measures. This improvement in kicking performance is likely to have occurred through enhanced somatosensory system feedback utilised for foot placement and movement organisation of the lower limbs. Data suggested that advanced learners were better at harnessing the augmented feedback information from compression and texture to regulate emerging movement patterns compared to novices. PMID:27155962

  17. Embedded memory compression for video and graphics applications

    NASA Astrophysics Data System (ADS)

    Teng, Andy; Gokce, Dane; Aleksic, Mickey; Reznik, Yuriy A.

    2010-08-01

    We describe design of a low-complexity lossless and near-lossless image compression system with random access, suitable for embedded memory compression applications. This system employs a block-based DPCM coder using variable-length encoding for the residual. As part of this design, we propose to use non-prefix (one-to-one) codes for coding of residuals, and show that they offer improvements in compression performance compared to conventional techniques, such as Golomb-Rice and Huffman codes.
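For context, here is a minimal sketch of the conventional Golomb-Rice residual coding that the proposed one-to-one (non-prefix) codes are compared against (illustrative; the parameter choices are assumptions, not the paper's design):

```python
# Golomb-Rice coding of DPCM residuals: map signed residuals to unsigned
# integers, then emit a unary quotient followed by a k-bit binary remainder.

def zigzag(v):
    """Map signed residual to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(n, k):
    """Golomb-Rice codeword of n >= 0 with parameter k, as a bit string."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Example: residual -3 -> zigzag 5 -> quotient 1, remainder 01 -> "1001".
codeword = rice_encode(zigzag(-3), 2)
```

Because every Golomb-Rice codeword is a prefix code, some coding overhead is unavoidable; the one-to-one codes proposed in the paper drop the prefix property (decodability is recovered from known residual lengths) to shave that overhead.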

  18. Data Compression for Helioseismology

    NASA Astrophysics Data System (ADS)

    Löptien, Björn

    2015-10-01

    Efficient data compression will play an important role for several upcoming and planned space missions involving helioseismology, such as Solar Orbiter. Solar Orbiter, to be launched in October 2018, will be the next space mission involving helioseismology. The main characteristic of Solar Orbiter lies in its orbit. The spacecraft will have an inclined solar orbit, reaching a solar latitude of up to 33 deg. This will allow, for the first time, probing the solar poles using local helioseismology. In addition, combined observations of Solar Orbiter and another helioseismic instrument will be used to study the deep interior of the Sun using stereoscopic helioseismology. The Doppler velocity and continuum intensity images of the Sun required for helioseismology will be provided by the Polarimetric and Helioseismic Imager (PHI). Major constraints for helioseismology with Solar Orbiter are the low telemetry and the (probably) short observing time. In addition, helioseismology of the solar poles requires observations close to the solar limb, even from the inclined orbit of Solar Orbiter. This gives rise to systematic errors. In this thesis, I derived a first estimate of the impact of lossy data compression on helioseismology. I put special emphasis on the Solar Orbiter mission, but my results are applicable to other planned missions as well. First, I studied the performance of PHI for helioseismology. Based on simulations of solar surface convection and a model of the PHI instrument, I generated a six-hour time-series of synthetic Doppler velocity images with the same properties as expected for PHI. Here, I focused on the impact of the point spread function, the spacecraft jitter, and of the photon noise level. The derived power spectra of solar oscillations suggest that PHI will be suitable for helioseismology. The low telemetry of Solar Orbiter requires extensive compression of the helioseismic data obtained by PHI. I evaluated the influence of data compression using

  19. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  20. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of measurements needed, is unknown in advance. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of mega-pixel images. We present good quality reconstruction from compressed data at ratios of 1:20.

  1. Efficiency at Sorting Cards in Compressed Air

    PubMed Central

    Poulton, E. C.; Catton, M. J.; Carpenter, A.

    1964-01-01

    At a site where compressed air was being used in the construction of a tunnel, 34 men sorted cards twice, once at normal atmospheric pressure and once at 3½, 2½, or 2 atmospheres absolute pressure. An additional six men sorted cards twice at normal atmospheric pressure. When the task was carried out for the first time, all the groups of men performing at raised pressure were found to yield a reliably greater proportion of very slow responses than the group of men performing at normal pressure. There was reliably more variability in timing at 3½ and 2½ atmospheres absolute than at normal pressure. At 3½ atmospheres absolute the average performance was also reliably slower. When the task was carried out for the second time, exposure to 3½ atmospheres absolute pressure had no reliable effect. Thus compressed air affected performance only while the task was being learnt; it had little effect after practice. No reliable differences were found related to age, to length of experience in compressed air, or to the duration of the exposure to compressed air, which was never less than 10 minutes at 3½ atmospheres absolute pressure. PMID:14180485

  2. Compressibility of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Rose, J. H.; Smith, J. R.

    1987-01-01

    A universal form is proposed for the equation of state (EOS) of solids. Good agreement is found for a variety of test data. The form of the EOS is used to suggest a method of data analysis, which is applied to materials of geophysical interest. The isothermal bulk modulus is discussed as a function of the volume and of the pressure. The isothermal compression curves for materials of geophysical interest are examined.
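The universal equation of state referred to above is commonly known as the Vinet EOS. Its standard statement is supplied here for reference, since the abstract itself does not reproduce the formula:

```latex
P(V) = 3 B_0 \, \frac{1 - x}{x^{2}} \,
       \exp\!\left[\tfrac{3}{2}\left(B_0' - 1\right)\left(1 - x\right)\right],
\qquad x = \left(\frac{V}{V_0}\right)^{1/3},
```

where \(B_0\) is the isothermal bulk modulus at zero pressure, \(B_0'\) its pressure derivative, and \(V_0\) the zero-pressure volume. The two-parameter form is what makes the suggested data-analysis method practical: fitting \(B_0\) and \(B_0'\) to compression data characterizes a material's entire isothermal compression curve.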

  3. Compression of Cake

    NASA Astrophysics Data System (ADS)

    Nason, Sarah; Houghton, Brittany; Renfro, Timothy

    2012-03-01

    The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment which involved stretching wire. A question was raised: what would happen if we compressed something else? We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to the cake. We worked to derive the compression modulus by applying weight to a cake. In the end, we had our experimental cake and ate it, too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1

  4. Piston reciprocating compressed air engine

    SciTech Connect

    Cestero, L.G.

    1987-03-24

    A compressed air engine is described comprising: (a). a reservoir of compressed air, (b). two power cylinders each containing a reciprocating piston connected to a crankshaft and flywheel, (c). a transfer cylinder which communicates with each power cylinder and the reservoir, and contains a reciprocating piston connected to the crankshaft, (d). valve means controlled by rotation of the crankshaft for supplying compressed air from the reservoir to each power cylinder and for exhausting compressed air from each power cylinder to the transfer cylinder, (e). valve means controlled by rotation of the crankshaft for supplying from the transfer cylinder to the reservoir compressed air supplied to the transfer cylinder on the exhaust strokes of the pistons of the power cylinders, and (f). an externally powered fan for assisting the exhaust of compressed air from each power cylinder to the transfer cylinder and from there to the compressed air reservoir.

  5. Isothermal compressibility determination across Bose-Einstein condensation

    NASA Astrophysics Data System (ADS)

    Poveda-Cuevas, F. J.; Castilho, P. C. M.; Mercado-Gutierrez, E. D.; Fritsch, A. R.; Muniz, S. R.; Lucioni, E.; Roati, G.; Bagnato, V. S.

    2015-07-01

    We apply the global thermodynamic variables approach to experimentally determine the isothermal compressibility parameter κT of a trapped Bose gas across the phase transition. We demonstrate the behavior of κT around the critical pressure, revealing the second-order nature of the phase transition. Compressibility is the most important susceptibility to characterize the system. The use of global variables shows advantages with respect to the usual local density approximation method and can be applied to a broad range of situations.
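The isothermal compressibility measured above has the standard thermodynamic definition, supplied here for reference (in the global-variables approach of the paper, the local volume and pressure are replaced by their global analogues):

```latex
\kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T ,
```

so a second-order phase transition appears as a sharp feature in \(\kappa_T\) at the critical pressure rather than a discontinuity in the volume itself, which is the behavior the experiment resolves across Bose-Einstein condensation.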

  6. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  7. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  8. International magnetic pulse compression

    SciTech Connect

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12--14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card -- its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  9. Compression retaining piston

    SciTech Connect

    Quaglino, A.V. Jr.

    1987-06-16

    A piston apparatus is described for maintaining compression between the piston wall and the cylinder wall, that comprises the following: a generally cylindrical piston body, including: a head portion defining the forward end of the body; and a continuous side wall portion extending rearward from the head portion; a means for lubricating and preventing compression loss between the side wall portion and the cylinder wall, including an annular recessed area in the continuous side wall portion for receiving a quantity of fluid lubricant in fluid engagement between the wall of the recessed area and the wall of the cylinder; first and second resilient, elastomeric, heat resistant rings positioned in grooves along the wall of the continuous side wall portion, above and below the annular recessed area. Each ring engages the cylinder wall to reduce loss of lubricant within the recessed area during operation of the piston; a first pump means for providing fluid lubricant to engine components other than the pistons; and a second pump means that provides fluid lubricant to the recessed area in the continuous side wall portion of the piston. The first and second pump means obtain lubricant from a common source, and the second pump means includes a flow line that supplies oil from a predetermined level above the level of oil provided to the first pump means. This is so that, should the oil level to the second pump means fall below the predetermined level, the loss of oil to the recessed area in the continuous side wall portion of the piston would result in loss of compression and shutdown of the engine.

  10. International magnetic pulse compression

    NASA Astrophysics Data System (ADS)

    Kirbie, H. C.; Newton, M. A.; Siemens, P. D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12-14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card - its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  11. Compressible magnetohydrodynamic sawtooth crash

    NASA Astrophysics Data System (ADS)

    Sugiyama, Linda E.

    2014-02-01

    In a toroidal magnetically confined plasma at low resistivity, compressible magnetohydrodynamics (MHD) predicts that an m = 1, n = 1 sawtooth has a fast, explosive crash phase with abrupt onset, rate nearly independent of resistivity, and localized temperature redistribution similar to experimental observations. Large scale numerical simulations show that the 1/1 MHD internal kink grows exponentially at a resistive rate until a critical amplitude, when the plasma motion accelerates rapidly, culminating in fast loss of the temperature and magnetic structure inside q < 1, with somewhat slower density redistribution. Nonlinearly, for small effective growth rate the perpendicular momentum rate of change remains small compared to its individual terms ∇p and J × B until the fast crash, so that the compressible growth rate is determined by higher order terms in a large aspect ratio expansion, as in the linear eigenmode. Reduced MHD fails completely to describe the toroidal mode; no Sweet-Parker-like reconnection layer develops. Important differences result from toroidal mode coupling effects. A set of large aspect ratio compressible MHD equations shows that the large aspect ratio expansion also breaks down in typical tokamaks with r_{q=1}/R_0 ≃ 1/10 and a/R_0 ≃ 1/3. In the large aspect ratio limit, failure extends down to much smaller inverse aspect ratio, at growth rate scalings γ = O(ε^2). Higher order aspect ratio terms, including the toroidal field perturbation B̃_ϕ, become important. Nonlinearly, higher toroidal harmonics develop faster and to a greater degree than for large aspect ratio and help to accelerate the fast crash. The perpendicular momentum property applies to other transverse MHD instabilities, including m ≥ 2 magnetic islands and the plasma edge.

  12. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness. PMID:26352631
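The data-independent compressive features described above can be sketched with a very sparse random projection (an illustrative sketch under standard sparse-random-projection assumptions, not the authors' code; dimensions are made up):

```python
import numpy as np

# A very sparse random measurement matrix projects a high-dimensional image
# feature vector into a low-dimensional compressed space while approximately
# preserving its geometry (Johnson-Lindenstrauss style). Because the matrix
# is data-independent, it never needs retraining as the target's appearance
# changes.
rng = np.random.default_rng(3)
n, m, s = 10_000, 50, 3                  # ambient dim, compressed dim, sparsity

# Entries are sqrt(s) * {+1, 0, -1} with probs {1/(2s), 1 - 1/s, 1/(2s)};
# most entries are zero, so the projection is cheap to apply.
R = rng.choice([1.0, 0.0, -1.0], size=(m, n),
               p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]) * np.sqrt(s)

x = rng.random(n)                        # e.g. multiscale image features
z = R @ x                                # compressed feature vector, length m
```

Foreground and background samples compressed with the same matrix can then feed a naive Bayes classifier, which is the formulation the abstract describes.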

  13. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved; in communication, the goal is always to transmit data efficiently and free of noise. This paper provides several compression techniques for lossless text-type data and comparative results for multiple versus single compression, which will help in finding better compression outputs and in developing compression algorithms.
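The paper's specific codecs are not named in the abstract; as a stand-in, Python's standard lossless codecs illustrate the single-versus-multiple-compression comparison it describes. The sample text is synthetic.

```python
import bz2
import lzma
import zlib

# Repetitive sample text compresses well under any lossless codec.
text = ("Data compression saves storage and transmission time. " * 400).encode()

# Single compression: each codec applied once.
single = {
    "zlib": len(zlib.compress(text)),
    "bz2": len(bz2.compress(text)),
    "lzma": len(lzma.compress(text)),
}

# Multiple compression: chain two codecs. The second pass typically gains
# little, because the first pass output is already close to high-entropy.
multi = len(bz2.compress(zlib.compress(text)))

ratios = {name: len(text) / size for name, size in single.items()}
```

Running such a comparison over representative business data is how one would decide whether a multi-compression pipeline is worth its extra cost.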

  14. Compression and Entrapment Syndromes

    PubMed Central

    Heffernan, L.P.; Benstead, T.J.

    1987-01-01

    Family physicians are often confronted by patients who present with pain, numbness and weakness. Such complaints, when confined to a single extremity, most particularly to a restricted portion of the extremity, may indicate focal dysfunction of peripheral nerve structures arising from compression and/or entrapment, to which such nerves are selectively vulnerable. The authors of this article consider the paramount clinical features that allow the clinician to arrive at a correct diagnosis, review major points in differential diagnosis, and suggest appropriate management strategies. PMID:21263858

  15. Avalanches in Wood Compression.

    PubMed

    Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J

    2015-07-31

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free. PMID:26274428

  16. Compression test apparatus

    NASA Technical Reports Server (NTRS)

    Shanks, G. C. (Inventor)

    1981-01-01

    An apparatus for compressive testing of a test specimen may comprise vertically spaced upper and lower platen members between which a test specimen may be placed. The platen members are supported by a fixed support assembly. A load indicator is interposed between the upper platen member and the support assembly for supporting the total weight of the upper platen member and any additional weight which may be placed on it. Operating means are provided for moving the lower platen member upwardly toward the upper platen member whereby an increasing portion of the total weight is transferred from the load indicator to the test specimen.

  17. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.

  18. Ultrasound beamforming using compressed data.

    PubMed

    Li, Yen-Feng; Li, Pai-Chi

    2012-05-01

    The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event are separated into 8 × 8 blocks for JPEG compression and into several tiles for JPEG2000 compression. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression to produce an average error of lower than 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the data size of uncompressed phase data, was lower than 12, the average error in this case was lower than 1 dB when the compression ratio was lower than 8. PMID:22434817

  19. Dynamic control of a homogeneous charge compression ignition engine

    DOEpatents

    Duffy, Kevin P.; Mehresh, Parag; Schuh, David; Kieser, Andrew J.; Hergart, Carl-Anders; Hardy, William L.; Rodman, Anthony; Liechty, Michael P.

    2008-06-03

    A homogenous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.

  20. Compressive Sensing DNA Microarrays

    PubMed Central

    2009-01-01

    Compressive sensing microarrays (CSMs) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed. PMID:19158952

  1. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet at level L is r 2^(-L), where r is the display visual resolution in pixels/degree. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
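The level-to-frequency relation above is simple enough to sketch directly; the display resolution of 32 pixels/degree below is an illustrative assumption, and the paper's threshold-model parameters are not reproduced here.

```python
# Spatial frequency f of a DWT basis function at level L on a display
# with visual resolution r pixels/degree, per the relation f = r * 2**(-L).
def wavelet_spatial_frequency(r, L):
    return r * 2.0 ** (-L)

# For a 32 pixels/degree display, levels 1..4 map to 16, 8, 4, 2 cycles/degree:
# coarser levels carry lower spatial frequencies, hence lower thresholds.
freqs = [wavelet_spatial_frequency(32.0, L) for L in range(1, 5)]
```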

  2. Cancer suppression by compression.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2015-01-01

    Recent experiments indicate that uniformly compressing a cancer mass at its surface tends to transform many of its cells from proliferative to functional forms. Cancer cells suffer from the Warburg effect, resulting from depleted levels of cell membrane potentials. We show that the compression results in added free energy and that some of the added energy contributes distortional pressure to the cells. This excites the piezoelectric effect on the cell membranes, in particular raising the potentials on the membranes of cancer cells from their depleted levels to near-normal levels. In a sample calculation, a gain of 150 mV is attained in this way. This allows the Warburg effect to be reversed. The result is at least partially regained function and accompanying increased molecular order. The transformation remains even when the pressure is turned off, suggesting a change of phase; these possibilities are briefly discussed. It is found that if the pressure is applied adiabatically, in particular, the process obeys the second law of thermodynamics, further validating the theoretical model. PMID:25520262

  3. Compressive Bilateral Filtering.

    PubMed

    Sugimoto, Kenjiro; Kamata, Sei-Ichiro

    2015-11-01

    This paper presents an efficient constant-time bilateral filter that produces a near-optimal performance tradeoff between approximate accuracy and computational complexity without any complicated parameter adjustment, called a compressive bilateral filter (CBLF). Constant-time means that the computational complexity is independent of the filter window size. Although many existing constant-time bilateral filters have been proposed step-by-step in pursuit of a more efficient performance tradeoff, they have focused less on the optimal tradeoff for their own frameworks. It is important to discuss this question, because it can reveal whether or not a constant-time algorithm still has plenty of room for improvement in its performance tradeoff. This paper tackles the question from the viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet reached the optimal tradeoff. The CBLF achieves a near-optimal performance tradeoff by two key ideas: 1) an approximate Gaussian range kernel through Fourier analysis and 2) a period length optimization. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximate accuracy, computational complexity, and usability. PMID:26068315

  4. Energy transfer in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre

    1995-01-01

    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped Quasi-Normal Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy containing ranges.
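The Helmholtz decomposition used above splits a velocity field into solenoidal (divergence-free) and compressible (curl-free) parts. For a periodic 2D field this is a per-mode Fourier projection; the NumPy sketch below is a generic illustration, not the EDQNM machinery of the letter.

```python
import numpy as np

def helmholtz_split(u, v):
    """Split a periodic 2D velocity field (u, v) into solenoidal and
    compressible parts by projecting each Fourier mode onto (and off)
    the wavevector direction."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                        # mean mode has no direction; avoid 0/0
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    proj = (kx * uh + ky * vh) / k2       # longitudinal (along-k) coefficient
    comp_u = np.fft.ifft2(kx * proj).real # curl-free (compressible) part
    comp_v = np.fft.ifft2(ky * proj).real
    return (u - comp_u, v - comp_v), (comp_u, comp_v)
```

Applying this to a pure gradient field returns it entirely in the compressible part, with a vanishing solenoidal remainder, which is the sanity check used when analyzing the compressible mode directly.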

  5. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  6. libpolycomp: Compression/decompression library

    NASA Astrophysics Data System (ADS)

    Tomasi, Maurizio

    2016-04-01

    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
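A toy sketch of the "polynomial compression" idea: fit a low-degree polynomial per chunk and store only its coefficients when the fit is good, falling back to raw samples otherwise. The chunk size, degree, and tolerance are illustrative; libpolycomp itself is a C library and additionally filters the fit residuals with a Fourier-series step, omitted here.

```python
import numpy as np

def poly_compress(samples, chunk=32, deg=3, tol=1e-3):
    """Per-chunk polynomial fit with a raw fallback (sketch)."""
    out = []
    x = np.arange(chunk, dtype=float)
    for i in range(0, len(samples), chunk):
        y = samples[i:i + chunk]
        if len(y) == chunk:
            c = np.polyfit(x, y, deg)
            if np.max(np.abs(np.polyval(c, x) - y)) <= tol:
                out.append(("poly", c))   # deg+1 numbers replace `chunk` samples
                continue
        out.append(("raw", y))
    return out

def poly_decompress(blocks, chunk=32):
    x = np.arange(chunk, dtype=float)
    parts = [np.polyval(b, x) if kind == "poly" else b for kind, b in blocks]
    return np.concatenate(parts)
```

Smooth, noise-free streams such as ephemerides compress to deg+1 coefficients per chunk, which is where the large ratios quoted above come from.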

  7. Hardware Accelerated Compression of LIDAR Data Using FPGA Devices

    PubMed Central

    Biasizzo, Anton; Novak, Franc

    2013-01-01

    Airborne Light Detection and Ranging (LIDAR) has become a mainstream technology for terrain data acquisition and mapping. High sampling density of LIDAR enables the acquisition of high details of the terrain, but on the other hand, it results in a vast amount of gathered data, which requires huge storage space as well as substantial processing effort. The data are usually stored in the LAS format which has become the de facto standard for LIDAR data storage and exchange. In the paper, a hardware accelerated compression of LIDAR data is presented. The compression and decompression of LIDAR data is performed by a dedicated FPGA-based circuit and interfaced to the computer via a PCI-E general bus. The hardware compressor consists of three modules: LIDAR data predictor, variable length coder, and arithmetic coder. Hardware compression is considerably faster than software compression, while it also alleviates the processor load. PMID:23673680

  8. ECG data compression by modeling.

    PubMed Central

    Madhukar, B.; Murthy, I. S.

    1992-01-01

    This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940
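The paper fits a parametric model to the DCT of the signal; as a simpler illustration of why the DCT domain suits ECG-like signals, merely truncating the coefficients already gives a large reduction. The synthetic signal and the 10:1 keep ratio below are illustrative assumptions, not the paper's method or data.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the cosine basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def dct_truncate(signal, keep):
    """Keep only the first `keep` DCT coefficients and invert."""
    m = dct_matrix(len(signal))
    c = m @ signal
    c[keep:] = 0.0
    return m.T @ c

t = np.linspace(0.0, 1.0, 400)
ecg_like = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
recon = dct_truncate(ecg_like, keep=40)   # 10:1 coefficient reduction
rmse = np.sqrt(np.mean((recon - ecg_like) ** 2))
```

Modeling the retained coefficients parametrically, as the paper does, then pushes the ratio well past what plain truncation achieves.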

  9. Shock compression of precompressed deuterium

    SciTech Connect

    Armstrong, M R; Crowhurst, J C; Zaug, J M; Bastea, S; Goncharov, A F; Militzer, B

    2011-07-31

    Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 {micro}m). We further report a fast transition in shock wave compressed solid deuterium that is consistent with the ramp to shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.

  10. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160

  11. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  12. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  13. On the basic equations for the second-order modeling of compressible turbulence

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Shih, T.-H.

    1991-01-01

    Equations for the mean and turbulent quantities for compressible turbulent flows are derived. Both the conventional Reynolds average and the mass-weighted, Favre average were employed to decompose the flow variable into a mean and a turbulent quality. These equations are to be used later in developing second order Reynolds stress models for high speed compressible flows. A few recent advances in modeling some of the terms in the equations due to compressibility effects are also summarized.

  14. Vapor compression distillation module

    NASA Technical Reports Server (NTRS)

    Nuccio, P. P.

    1975-01-01

    A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken, leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.

  15. Gas compression apparatus

    NASA Technical Reports Server (NTRS)

    Terp, L. S. (Inventor)

    1977-01-01

    Apparatus for transferring gas from a first container to a second container of higher pressure was devised. A free-piston compressor having a driving piston and cylinder, and a smaller diameter driven piston and cylinder, comprise the apparatus. A rod member connecting the driving and driven pistons functions for mutual reciprocation in the respective cylinders. A conduit may be provided for supplying gas to the driven cylinder from the first container. Also provided is apparatus for introducing gas to the driving piston, to compress gas by the driven piston for transfer to the second higher pressure container. The system is useful in transferring spacecraft cabin oxygen into higher pressure containers for use in extravehicular activities.

  16. Compressed hyperspectral sensing

    NASA Astrophysics Data System (ADS)

    Tsagkatakis, Grigorios; Tsakalides, Panagiotis

    2015-03-01

    Acquisition of high dimensional Hyperspectral Imaging (HSI) data using limited dimensionality imaging sensors has led to designs with restricted capabilities that hinder the proliferation of HSI. To overcome this limitation, novel HSI architectures strive to minimize the strict requirements of HSI by introducing computation into the acquisition process. A framework that allows the integration of acquisition with computation is the recently proposed framework of Compressed Sensing (CS). In this work, we propose a novel HSI architecture that exploits the sampling and recovery capabilities of CS to achieve a dramatic reduction in HSI acquisition requirements. In the proposed architecture, signals from multiple spectral bands are multiplexed before being recorded by the imaging sensor. Reconstruction of the full hyperspectral cube is achieved by exploiting a dictionary of elementary spectral profiles in a unified minimization framework. Simulation results suggest that high quality recovery is possible from a single or a small number of multiplexed frames.

  17. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
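The patent's exact subdivision rule is not reproduced here; a median-cut-style sketch shows the same idea of recursively subdividing color space into boxes, deriving a small LUT, and storing per-pixel pointers into it. The 8-entry LUT and random pixels below are illustrative (the patent uses an 8-bit pointer, i.e., a 256-entry LUT).

```python
import numpy as np

def build_lut(colors, n_entries=8):
    """Recursively split the box with the widest color spread until
    n_entries boxes remain; each box mean becomes a LUT entry."""
    boxes = [colors]
    while len(boxes) < n_entries:
        i = max(range(len(boxes)), key=lambda j: np.ptp(boxes[j], axis=0).max())
        box = boxes.pop(i)
        axis = int(np.ptp(box, axis=0).argmax())
        box = box[box[:, axis].argsort()]      # sort along the widest axis
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]        # split at the median
    return np.array([b.mean(axis=0) for b in boxes])

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(1000, 3)).astype(float)
lut = build_lut(pixels, n_entries=8)
# Each pixel stores only a small pointer to its nearest LUT color,
# from which the full 24-bit value is recovered at display time.
pointers = np.argmin(((pixels[:, None, :] - lut) ** 2).sum(-1), axis=1)
```

The subdivision guarantees each box holds few LUT candidates, so the nearest-neighbor search per pixel stays cheap, which is the speedup the patent describes.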

  18. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected with the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation with a new algebraic tool, namely Randon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  19. Edge compression manifold apparatus

    DOEpatents

    Renzi, Ronald F.

    2007-02-27

    A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector for coupling at least one fluid conduit to a corresponding port of a substrate that includes: (i) a manifold comprising one or more channels extending therethrough wherein each channel is at least partially threaded, (ii) one or more threaded ferrules each defining a bore extending therethrough with each ferrule supporting a fluid conduit wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface wherein the substrate is positioned below the manifold so that the one or more ports is aligned with the one or more channels of the manifold, and (iv) device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.

  20. Edge compression manifold apparatus

    DOEpatents

    Renzi, Ronald F.

    2004-12-21

    A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector for coupling at least one fluid conduit to a corresponding port of a substrate that includes: (i) a manifold comprising one or more channels extending therethrough wherein each channel is at least partially threaded, (ii) one or more threaded ferrules each defining a bore extending therethrough with each ferrule supporting a fluid conduit wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface wherein the substrate is positioned below the manifold so that the one or more ports is aligned with the one or more channels of the manifold, and (iv) device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.

  1. Compressively sensed complex networks.

    SciTech Connect

    Dunlavy, Daniel M.; Ray, Jaideep; Pinar, Ali

    2010-07-01

    The aim of this project is to develop low dimension parametric (deterministic) models of complex networks, to use compressive sensing (CS) and multiscale analysis to do so, and to exploit the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on a multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general (and so not very efficient or customized). Other model-based CS techniques exist, but have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.

  2. Compression and compression fatigue testing of composite laminates

    NASA Technical Reports Server (NTRS)

    Porter, T. R.

    1982-01-01

    The effects of moisture and temperature on the fatigue and fracture response of composite laminates under compression loads were investigated. The structural laminates studied were intermediate stiffness graphite-epoxy composites (a typical angle-ply lamina laminate and a typical fan blade laminate). Full and half penetration slits and impact delaminations were the defects examined. Results are presented which show the effects of moisture on the fracture and fatigue strength at room temperature, 394 K (250 F), and 422 K (300 F). Static test results show the effects of defect size and type on the compression fracture strength under moisture and thermal environments. The cyclic test results compare the fatigue lives and residual compression strength under compression-only and under tension-compression fatigue loading.

  3. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simplest concept that each pixel is a charge bucket whose charges come from the Einstein photoelectric conversion effect. Applying the manufacturing design principle, we allow altering each working component by at most one step. We then simulated what such a camera can do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the saving is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to matured CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as a dual Photon Detector (PD) analog circuit for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at each bucket pixel level as a charge-transport bias voltage toward neighboring buckets; charges not biased toward a neighbor go to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor the powerful WaveNet wrapper, at the sensor level. We shall compare (i) pre-processing by FFT, thresholding of significant Fourier mode components, and inverse FFT to check the PSNR; and (ii) post-processing image recovery, done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii) we need to determine, in new-frame selection by the SAH circuitry, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]M,N, M(t) = K(t) log N(t).

  4. Compressive optical imaging systems

    NASA Astrophysics Data System (ADS)

    Wu, Yuehao

    Compared to the classic Nyquist sampling theorem, Compressed Sensing or Compressive Sampling (CS) was proposed as a more efficient alternative for sampling sparse signals. In this dissertation, we discuss the implementation of the CS theory in building a variety of optical imaging systems. CS-based Imaging Systems (CSISs) exploit the sparsity of optical images in their transformed domains by imposing incoherent CS measurement patterns on them. The amplitudes and locations of sparse frequency components of optical images in their transformed domains can be reconstructed from the CS measurement results by solving an l1-regularized minimization problem. In this work, we review the theoretical background of the CS theory and present two hardware implementation schemes for CSISs, including a single pixel detector based scheme and an array detector based scheme. The first implementation scheme is suitable for acquiring Two-Dimensional (2D) spatial information of the imaging scene. We demonstrate the feasibility of this implementation scheme by developing a single pixel camera, a multispectral imaging system, and an optical sectioning microscope for fluorescence microscopy. The array detector based scheme is suitable for hyperspectral imaging applications, wherein both the spatial and spectral information of the imaging scene are of interest. We demonstrate the feasibility of this scheme by developing a Digital Micromirror Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement processes on the Three-Dimensional (3D) spatial/spectral information of the imaging scene. Tens of spectral images can be reconstructed from the DMD-SSI system simultaneously without any mechanical or temporal scanning processes.
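    The l1-regularized minimization mentioned above is commonly solved by iterative shrinkage-thresholding (ISTA). A minimal pure-Python sketch, assuming a toy partial-DCT sensing matrix rather than any hardware from the dissertation:

```python
import math

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return [math.copysign(max(abs(u) - t, 0.0), u) for u in v]

def ista(A, y, lam=1e-3, step=0.1, iters=2000):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    a simple stand-in for the l1 solvers used in CS reconstruction."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [ai - yi for ai, yi in zip(matvec(A, x), y)]               # Ax - y
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T (Ax - y)
        x = soft_threshold([xj - step * gj for xj, gj in zip(x, g)], step * lam)
    return x

# Tiny demo: 4 partial-DCT measurements of a 1-sparse signal of length 6
A = [[math.sqrt(2.0 / 6.0) * math.cos(math.pi * (j + 0.5) * i / 6.0)
      for j in range(6)] for i in range(4)]
y = matvec(A, [0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
x_hat = ista(A, y)
```

    The step size must stay below 1/||A^T A|| for the iteration to descend; production systems use accelerated or specialized solvers, but the objective is the same.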

  5. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodologies for compression and error correction in these schemes are described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error-rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves
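    Delta encoding, the mechanism whose loss propagation is discussed above, can be illustrated with a toy dictionary diff. The field names and the dict representation are illustrative, not the RFC 1144 wire format:

```python
def delta_compress(prev, cur):
    """Send only the header fields that changed since the last packet."""
    return {k: v for k, v in cur.items() if prev.get(k) != v}

def delta_decompress(prev, delta):
    """Rebuild the full header from the stored context plus the deltas."""
    hdr = dict(prev)
    hdr.update(delta)
    return hdr

prev = {'seq': 1000, 'ack': 500, 'win': 8192, 'sport': 4321, 'dport': 80}
cur = dict(prev, seq=1100, ack=540)
delta = delta_compress(prev, cur)   # only the two changed fields go on the wire
assert delta_decompress(prev, delta) == cur
```

    The sketch also makes the failure mode concrete: if one delta packet is lost, every later header rebuilt from the stale context is wrong until the context is resynchronized, which is the loss propagation the report attributes to Van Jacobson's scheme.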

  6. (Finite) statistical size effects on compressive strength.

    PubMed

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-04-29

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules. PMID:24733930
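    The weakest-link baseline the authors test against can be written down directly: the minimum of n i.i.d. Weibull(m, sigma0) link strengths is again Weibull with scale sigma0 * n^(-1/m), so mean strength decays as n^(-1/m) and vanishes at large sizes. A sketch of that classical prediction (not the paper's depinning model):

```python
import math

def weakest_link_strength(n, m, sigma0):
    """Mean strength of a chain of n independent Weibull(m, sigma0) links.
    The minimum of n such variables is Weibull(m, sigma0 * n**(-1.0 / m)),
    so the mean decays as n**(-1/m) and vanishes as n grows; this is the
    prediction the compressive-strength measurements contradict."""
    return sigma0 * n ** (-1.0 / m) * math.gamma(1.0 + 1.0 / m)

# With Weibull modulus m = 3, an 8x larger sample is predicted to be
# half as strong on average, since 8**(1/3) == 2.
ratio = weakest_link_strength(1, 3, 1.0) / weakest_link_strength(8, 3, 1.0)
```

    The abstract's point is precisely that compressive strength does not follow this curve: it saturates at a nonzero value at large scales.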

  7. (Finite) statistical size effects on compressive strength

    PubMed Central

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-01-01

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules. PMID:24733930

  8. Compressible turbulent mixing: Effects of compressibility and Schmidt number

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2015-11-01

    Effects of compressibility and Schmidt number on a passive scalar in compressible turbulence were studied. On the effect of compressibility: the scalar spectrum followed the k^(-5/3) inertial-range scaling and suffered negligible influence from compressibility. The transfer of scalar flux was reduced by the transition from incompressible to compressible flow, but was enhanced by the growth of the Mach number. The intermittency parameter was increased by the growth of the Mach number, and was decreased by the growth of the compressive mode of the driven forcing. The dependence of the mixing timescale on compressibility showed that, for the driven forcing, the compressive mode was less efficient in enhancing scalar mixing. On the effect of Schmidt number (Sc): in the inertial-convective range the scalar spectrum obeyed the k^(-5/3) scaling. For Sc >> 1, a k^(-1) power law appeared in the viscous-convective range, while for Sc << 1, a k^(-17/3) power law was identified in the inertial-diffusive range. The transfer of scalar flux grew with Sc. In the Sc >> 1 flow the scalar field rolled up and mixed thoroughly, while the Sc << 1 flow had only large-scale, cloudlike structures. In both the Sc >> 1 and Sc << 1 flows, the spectral densities of scalar advection and dissipation followed the k^(-5/3) scaling, indicating that in compressible turbulence the processes of advection and dissipation might defer to the Kolmogorov picture. Finally, comparison with incompressible results showed that the scalar in compressible turbulence lacked a conspicuous bump structure in its spectrum, and was more intermittent in the dissipative range.

  9. About the use of stoichiometric hydroxyapatite in compression - incidence of manufacturing process on compressibility.

    PubMed

    Pontier, C; Viana, M; Champion, E; Bernache-Assollant, D; Chulia, D

    2001-05-01

    Literature concerning calcium phosphates in pharmacy exhibits the chemical diversity of the compounds available. Some excipient manufacturers offer hydroxyapatite as a direct compression excipient, but the chemical analysis of this compound usually shows a variability of the composition: the so-called materials can be hydroxyapatite or other calcium phosphates, uncalcined (i.e. with a low crystallinity) or calcined and well-crystallized hydroxyapatite. This study points out the incidence of the crystallinity of one compound (i.e. hydroxyapatite) on the mechanical properties. Stoichiometric hydroxyapatite is synthesized and compounds differing in their crystallinity, manufacturing process and particle size are manufactured. X-Ray diffraction analysis is used to investigate the chemical nature of the compounds. The mechanical study (study of the compression, diametral compressive strength, Heckel plots) highlights the negative effect of calcination on the mechanical properties. Porosity and specific surface area measurements show the effect of calcination on compaction. Uncalcined materials show bulk and mechanical properties in accordance with their use as direct compression excipients. PMID:11343890

  10. Compression and Predictive Distributions for Large Alphabets

    NASA Astrophysics Data System (ADS)

    Yang, Xiao

    Data generated from large alphabets exist almost everywhere in our lives, for example, texts, images and videos. Traditional universal compression algorithms mostly involve small alphabets and implicitly assume an asymptotic condition under which the extra bits induced in the compression process vanish as the amount of data grows to infinity. In this thesis, we put the main focus on compression and prediction for large alphabets, with the alphabet size comparable to or larger than the sample size. We first consider sequences of random variables independently and identically generated from a large alphabet. In particular, the size of the sample is allowed to be variable. A product distribution based on Poisson sampling and tiling is proposed as the coding distribution, which greatly simplifies the implementation and analysis through independence. Moreover, we characterize the behavior of the coding distribution through a condition on the tail sum of the ordered counts, and apply it to sequences satisfying this condition. Further, we apply this method to envelope classes. This coding distribution provides a convenient method to approximately compute Shtarkov's normalized maximum likelihood (NML) distribution, and the extra price paid for this convenience is small compared to the total cost. Furthermore, we find this coding distribution can also be used to calculate the NML distribution exactly, and this calculation remains simple due to the independence of the coding distribution. Finally, we consider a more realistic class, the Markov class, and in particular, tree sources. A context-tree-based algorithm is designed to describe the dependencies among the contexts. It is a greedy algorithm which seeks the greatest savings in codelength when constructing the tree. Compression and prediction of the individual counts associated with the contexts use the same coding distribution as in the i.i.d. case. 
Combining these two procedures, we demonstrate a compression algorithm based
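    The Shtarkov NML distribution mentioned above has a normalizer that can be computed exactly by brute force for tiny alphabets and sample sizes, which makes the cost of exact computation concrete. A sketch (function names are illustrative):

```python
from itertools import product

def max_likelihood(seq, alphabet):
    """Maximized i.i.d. likelihood of a sequence: plug in empirical frequencies."""
    n = len(seq)
    p = 1.0
    for a in alphabet:
        k = seq.count(a)
        if k:
            p *= (k / n) ** k
    return p

def shtarkov_sum(n, alphabet):
    """Normalizer C = sum over all length-n sequences of their maximized
    likelihood; log2(C) is the worst-case coding regret. Exact but
    exponential in n, hence the interest in tractable approximations."""
    return sum(max_likelihood(s, alphabet) for s in product(alphabet, repeat=n))

C = shtarkov_sum(3, (0, 1))   # 26/9 for a binary alphabet with n = 3
```

    The NML probability of a particular sequence is then its maximized likelihood divided by C; the thesis's product distribution is a way to approximate (and, per the abstract, exactly compute) this quantity without the exponential sum.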

  11. Variable compression ratio device for internal combustion engine

    DOEpatents

    Maloney, Ronald P.; Faletti, James J.

    2004-03-23

    An internal combustion engine, particularly suitable for use in a work machine, is provided with a combustion cylinder, a cylinder head at an end of the combustion cylinder and a primary piston reciprocally disposed within the combustion cylinder. The cylinder head includes a secondary cylinder and a secondary piston reciprocally disposed within the secondary cylinder. An actuator is coupled with the secondary piston for controlling the position of the secondary piston dependent upon the position of the primary piston. A communication port establishes fluid flow communication between the combustion cylinder and the secondary cylinder.

  12. Compression failure of composite laminates

    NASA Technical Reports Server (NTRS)

    Pipes, R. B.

    1983-01-01

    This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.

  13. Pressure Oscillations in Adiabatic Compression

    ERIC Educational Resources Information Center

    Stout, Roland

    2011-01-01

    After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…

  14. Compression Shocks of Detached Flow

    NASA Technical Reports Server (NTRS)

    Eggink

    1947-01-01

    It is known that compression shocks which lead from supersonic to subsonic velocity cause the flow to separate on impact on a rigid wall. Such shocks appear at bodies with circular symmetry or wing profiles when sonic velocity is locally exceeded, and in Laval nozzles with too high a back pressure. The form of the compression shocks observed therein is investigated.

  15. Streaming Compression of Hexahedral Meshes

    SciTech Connect

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
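    The finalization-tag idea above can be modeled in a few lines: vertices enter a dictionary when announced and leave it when finalized, so the peak dictionary size tracks the coder's working set rather than the whole mesh. Event names and the two-vertex 'cell' are toy choices; real hexahedra reference eight vertices:

```python
def stream_peak_memory(stream):
    """Toy model of streaming memory behavior: a vertex stays resident only
    until its finalization tag ('fin') says it is never referenced again."""
    live = {}
    peak = 0
    for kind, payload in stream:
        if kind == 'v':                       # new vertex: (id, position)
            vid, pos = payload
            live[vid] = pos
        elif kind == 'cell':                  # cell references live vertices
            corners = [live[v] for v in payload]
        elif kind == 'fin':                   # vertex finalized: release it
            live.pop(payload, None)
        peak = max(peak, len(live))
    return peak

events = [('v', (0, (0.0, 0.0, 0.0))),
          ('v', (1, (1.0, 0.0, 0.0))),
          ('cell', (0, 1)),
          ('fin', 0),
          ('v', (2, (0.0, 1.0, 0.0))),
          ('fin', 1), ('fin', 2)]
```

    Here the peak working set is two vertices even though three appear in the stream; without finalization tags, all three would have to stay resident.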

  16. Data compression by wavelet transforms

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1992-01-01

    A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.

  17. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
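    The zero-the-noise-coefficients strategy described above can be demonstrated with a one-level Haar transform, the simplest wavelet. This is a generic illustration, not the filters used in the study:

```python
def haar_step(x):
    """One level of the Haar transform: pairwise averages (low-pass)
    and pairwise half-differences (high-pass), for even-length input."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def inv_haar_step(avg, det):
    out = []
    for a, d in zip(avg, det):
        out.extend((a + d, a - d))
    return out

def compress(x, threshold):
    """Zero the small detail coefficients (the high-frequency, low-amplitude
    part attributed to noise), keeping the larger low-frequency content."""
    avg, det = haar_step(x)
    det = [d if abs(d) >= threshold else 0.0 for d in det]
    return avg, det

signal = [10.0, 10.2, 10.1, 9.9, 50.0, 50.3, 10.0, 10.1]
avg, det = compress(signal, 0.5)
approx = inv_haar_step(avg, det)   # small pairwise jitter removed, step preserved
```

    After thresholding, the zeroed coefficient stream has low entropy and many zeros, which is what lets the subsequent lossless stage reach the higher compression factors reported above.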

  18. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  19. Analytical model for ramp compression

    NASA Astrophysics Data System (ADS)

    Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun

    2016-08-01

    An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic region width, material properties, profile of the pressure pulse, and the pressure pulse duration can be reasonably allocated or chosen. To demonstrate and study this model, laser-direct-driven ramp compression experiments and code simulation are performed successively, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can be used as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.

  20. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
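    The connected-subtree prior described above can be illustrated by greedily growing a rooted support set in a coefficient tree stored as a heap (children of node i at 2i+1 and 2i+2). This is a toy selection rule sketching the structured-support idea, not the modified CoSaMP step from the paper:

```python
def rooted_support(mag, k):
    """Grow a rooted subtree of up to k nodes by repeatedly adding the
    largest-magnitude coefficient whose parent is already selected.
    Enforces the wavelet-tree connectedness prior: a coefficient may be
    kept only if its coarser-scale ancestor is kept too."""
    chosen = {0}                              # the root is always included
    while len(chosen) < k:
        frontier = [i for i in range(1, len(mag))
                    if i not in chosen and (i - 1) // 2 in chosen]
        if not frontier:
            break
        chosen.add(max(frontier, key=lambda i: mag[i]))
    return sorted(chosen)

support = rooted_support([5, 1, 4, 9, 0, 3, 0], 3)
```

    Note that the coefficient of magnitude 9 (index 3) is excluded despite being the largest, because its parent (index 1) is weak; an unstructured top-k selection would have kept it. That parent-child constraint is what reduces the number of admissible supports, and hence the required number of measurements.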

  1. Compression relief engine brake

    SciTech Connect

    Meneely, V.A.

    1987-10-06

    A compression relief brake is described for four-cycle internal-combustion engines, comprising: a pressurized oil supply; means for selectively pressurizing a hydraulic circuit with oil from the oil supply; a master piston and cylinder communicating with a slave piston and cylinder via the hydraulic circuit; an engine exhaust valve mechanically coupled to the engine and timed to open during the exhaust cycle of the engine, the exhaust valve coupled to the slave piston. The exhaust valve is spring-biased in a closed state to contact a valve seat; a sleeve is frictionally and slidably disposed within a cavity defined by the slave piston, which cavity communicates with the hydraulic circuit. When the hydraulic circuit is selectively pressurized and the engine is operating, the sleeve entraps an incompressible volume of oil within the cavity to generate a displacement of the slave piston within the slave cylinder, whereby a first gap is maintained between the exhaust valve and its associated seat; and means for reciprocally activating the master piston for increasing the pressure within the previously pressurized hydraulic circuit during at least a portion of the expansion cycle of the engine, whereby a second gap is reciprocally maintained between the exhaust valve and its associated seat.

  2. Compressive sensing by learning a Gaussian mixture model from measurements.

    PubMed

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2015-01-01

    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins. PMID:25361508
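    The closed-form reconstruction that motivates the paper is easiest to see in one dimension: for y = x + noise with scalar x drawn from a GMM, each mixture component yields a Gaussian posterior, and the MMSE estimate is their evidence-weighted combination. A scalar stand-in for the paper's matrix formula (the component values below are made up for the demo):

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_mmse(y, components, noise_var):
    """Closed-form MMSE estimate of scalar x ~ GMM given y = x + noise.
    components: iterable of (weight, mean, variance) triples."""
    num = den = 0.0
    for w, mu, var in components:
        evidence = w * gauss(y, mu, var + noise_var)          # p(y | component)
        post_mean = (mu * noise_var + y * var) / (var + noise_var)
        num += evidence * post_mean
        den += evidence
    return num / den

comps = [(0.5, -2.0, 1.0), (0.5, 2.0, 1.0)]   # two symmetric components
x_hat = gmm_mmse(1.8, comps, 0.25)            # pulled toward the +2 component
```

    The matrix version in the paper replaces the scalar divisions with inverses of A*Sigma*A^T + noise covariance; the MMLE contribution is learning the (w, mu, Sigma) triples from the compressive measurements themselves.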

  3. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit plane, and finally an algorithm for adaptive image compression. The analysis of the image content is based on a valuation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made by a difference criterion calculated from the wavelet coefficients of the original image and the decoded image at the first and second iteration steps of the wavelet transformation. This adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective method for adaptive compression. The image classification algorithm and the image compression algorithms have been implemented in JAVA.

  4. Best compression: Reciprocating or rotary?

    SciTech Connect

    Cahill, C.

    1997-07-01

    A compressor is a device used to increase the pressure of a compressible fluid. The inlet pressure can vary from a deep vacuum to a high positive pressure. The discharge pressure can range from subatmospheric levels to tens of thousands of pounds per square inch. Compressors come in numerous forms, but for oilfield applications there are two primary types, reciprocating and rotary. Both reciprocating and rotary compressors are grouped in the intermittent mode of compression. Intermittent compression is cyclic in nature, in that a specific quantity of gas is ingested by the compressor, acted upon, and discharged before the cycle is repeated. Reciprocating compression is the most common form of compression used for oilfield applications. Rotary screw compressors have a long history but are relative newcomers to oilfield applications. The rotary screw compressor (technically a helical rotor compressor) dates back to 1878, when the first rotary screw was manufactured for the purpose of compressing air. Today thousands of rotary screw compression packages are being used throughout the world to compress natural gas.

  5. Advances in compressible turbulent mixing

    SciTech Connect

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  6. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. 
The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
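    The two code families named above are easy to write down. A sketch restricted to power-of-two Golomb parameters (the Rice special case) for simplicity; the adaptive parameter selection described in the abstract is not reproduced here:

```python
def unary(q):
    return '1' * q + '0'

def golomb(n, m):
    """Golomb code for n >= 0: unary quotient n // m, then the remainder.
    For simplicity m is assumed a power of two (the Rice special case),
    so the remainder is plain k-bit binary."""
    k = m.bit_length() - 1
    assert m == 1 << k, "this sketch handles power-of-two m only"
    q, r = divmod(n, m)
    return unary(q) + (format(r, '0%db' % k) if k else '')

def exp_golomb(n):
    """Order-0 exponential-Golomb code for n >= 0: write n + 1 in binary,
    preceded by one fewer zeros than the binary length."""
    b = format(n + 1, 'b')
    return '0' * (len(b) - 1) + b
```

    For example, golomb(5, 4) gives '1001' (quotient 1 in unary, remainder 1 in two bits) and exp_golomb(4) gives '00101'. Golomb codes suit geometrically distributed values; exponential-Golomb codes tolerate heavier tails, which matches their use here for run lengths.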

  7. Lossy Compression of ACS images

    NASA Astrophysics Data System (ADS)

    Cox, Colin

    2004-01-01

    A method of compressing images stored as floating point arrays was proposed several years ago by White and Greenfield. With the increased image sizes encountered in the last few years and the consequent need to distribute large data volumes, the value of applying such a procedure has become more evident. Methods such as this which offer significant compression ratios are lossy and there is always some concern that statistically important information might be discarded. Several astronomical images have been analyzed and, in the examples tested, compression ratios of about six were obtained with no significant information loss.

  8. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.
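
    The reconstruction step can be illustrated in plain finite dimensions (the paper itself works in a product of Hilbert spaces with a finite element basis, which a snippet cannot do justice). This is a hedged sketch using the standard iterative shrinkage-thresholding algorithm (ISTA) for sparse recovery; the sizes, seed, and sparsity pattern are invented for illustration.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    # Iterative shrinkage-thresholding: minimizes 0.5*||Ax - y||^2 + lam*||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))      # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)   # 30 "experiments", 100 unknowns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]             # sparse ground truth
y = A @ x_true                                     # the few measured outcomes
x_hat = ista(A, y)                                 # reconstruct the full data set
```

    The point of the design question is visible in the shapes: 30 physical measurements suffice to recover a 100-dimensional but sparse profile.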

  9. Image compression using constrained relaxation

    NASA Astrophysics Data System (ADS)

    He, Zhihai

    2007-01-01

    In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels: its pixels must satisfy a set of imaging constraints in order to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 Intra frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.

  10. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  11. Partial transparency of compressed wood

    NASA Astrophysics Data System (ADS)

    Sugimoto, Hiroyuki; Sugimori, Masatoshi

    2016-05-01

    We have developed a novel wood composite with optical transparency in arbitrary regions. The pores in wood cells vary greatly in size. These pores lengthen the light path through the sample, because the refractive indices of the cell constituents and of the air in the lumina differ. In this study, wood compressed so as to close the lumina exhibited optical transparency. Because compressing wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  12. Television Compression Algorithms And Transmission On Packet Networks

    NASA Astrophysics Data System (ADS)

    Brainard, R. C.; Othmer, J. H.

    1988-10-01

    Wide-band packet transmission is a subject of strong current interest. The transmission of compressed TV signals over such networks is possible at any quality level. There are some specific advantages in using packet networks for TV transmission: namely, either a fixed data rate can be chosen or a variable data rate can be utilized. However, on the negative side, packet loss must be considered and differential delay in packet arrival must be compensated. The possibility of packet loss has a strong influence on the choice of compression algorithm. Differential delay of packet arrival is a new problem in codec design. Some issues relevant to the mutual design of transmission networks and compression algorithms are presented. An assumption is that the packet network will maintain packet sequence integrity. For variable-rate transmission, a reasonable definition of peak data rate is necessary. Rate constraints may be necessary to encourage instituting a variable-rate service on the networks. The charging algorithm for network use will have an effect on the selection of a compression algorithm. Some values of, and procedures for implementing, packet priorities are discussed. Packet length has only a second-order effect on packet-TV considerations. Some examples of a range of codecs for differing data rates and picture quality are given. These serve to illustrate sensitivities to the various characteristics of packet networks. Perhaps more importantly, we discuss what we do not know about the design of such systems.

  13. A Motion-Compensating Image-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Wong, Carol

    1994-01-01

    Chrominance used (in addition to luminance) in estimating motion. Variable-rate digital coding scheme for compression of color-video-image data designed to deliver pictures of good quality at moderate compressed-data rate of 1 to 2 bits per pixel, or of fair quality at rate less than 1 bit per pixel. Scheme, in principle, implemented by use of commercially available application-specific integrated circuits. Incorporates elements of some prior coding schemes, including motion compensation (MC) and discrete cosine transform (DCT).

  14. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4∼2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  15. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.

  16. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.

  17. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described, and numerical solutions of test problems are presented. A comparison based on convergence behavior, accuracy, and robustness is given.

  18. Efficient Decoding of Compressed Data.

    ERIC Educational Resources Information Center

    Bassiouni, Mostafa A.; Mukherjee, Amar

    1995-01-01

    Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
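
    The table-driven "multibit decoding" idea alluded to here can be sketched as follows (names illustrative; this assumes every code fits in the k-bit window, whereas real implementations bound the maximum code length separately): instead of walking the decoding tree bit by bit, precompute a table indexed by every possible k-bit window.

```python
import heapq
from itertools import count

def huffman_codes(freq):
    # Standard Huffman construction; returns {symbol: bit string}.
    tie = count()                       # tiebreaker so dicts are never compared
    heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def build_table(codes, k):
    # Multibit decoding: one table lookup per symbol instead of a bit-by-bit
    # walk down the Huffman tree, by enumerating every k-bit window.
    table = {}
    for sym, code in codes.items():
        for pad in range(1 << (k - len(code))):
            suffix = format(pad, "b").zfill(k - len(code)) if k > len(code) else ""
            table[code + suffix] = (sym, len(code))
    return table

def decode(bits, table, k, n_symbols):
    out, pos = [], 0
    bits += "0" * k                     # pad so the final window is full width
    while len(out) < n_symbols:
        sym, used = table[bits[pos:pos + k]]
        out.append(sym)
        pos += used
    return out
```

    Each lookup consumes a whole codeword at once, which is the speed advantage the record discusses.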

  19. [New aspects of compression therapy].

    PubMed

    Partsch, Bernhard; Partsch, Hugo

    2016-06-01

    In this review article the mechanisms of action of compression therapy are summarized and a survey of materials is presented, together with some practical advice on how and when these different devices should be applied. Some new experimental findings regarding the optimal dosage (= compression pressure) for improving venous hemodynamics and reducing oedema are discussed. It is shown that stiff, non-yielding material applied with adequate pressure provides hemodynamically superior effects compared to elastic material, and that relatively low pressures reduce oedema. Compression over the calf is more important for increasing calf pump function than graduated compression. In patients with mixed arterial-venous ulcers and an ABPI over 0.6, inelastic bandages not exceeding a sub-bandage pressure of 40 mmHg may increase arterial flow and improve venous pumping function. PMID:27259340

  20. Compression fractures of the back

    MedlinePlus

    Compression fractures of the back are broken vertebrae. Vertebrae are the bones of the spine. ... bone from elsewhere Tumors that start in the spine, such as multiple myeloma Having many fractures of ...

  1. Compressed gas fuel storage system

    SciTech Connect

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  2. Shock compression of polyvinyl chloride

    NASA Astrophysics Data System (ADS)

    Neogi, Anupam; Mitra, Nilanjan

    2016-04-01

    This study presents shock compression simulation of atactic polyvinyl chloride (PVC) using ab-initio and classical molecular dynamics. The manuscript also identifies the limits of applicability of classical molecular dynamics based shock compression simulation for PVC. The mechanism of bond dissociation under shock loading and its progression is demonstrated in this manuscript using the density functional theory based molecular dynamics simulations. The rate of dissociation of different bonds at different shock velocities is also presented in this manuscript.

  3. Advanced application flight experiment breadboard pulse compression radar altimeter program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Design, development and performance of the pulse compression radar altimeter are described. The high-resolution breadboard system is designed to operate from an aircraft at 10 kft above the ocean and to accurately measure altitude, sea wave height and sea reflectivity. The minicomputer-controlled Ku-band system provides six basic variables and an extensive digital recording capability for experimentation purposes. Signal bandwidths of 360 MHz are obtained using a reflective array compression line. Stretch processing is used to achieve 1000:1 pulse compression. The system range command LSB is 0.62 ns, or 9.25 cm. A second-order altitude tracker, aided by accelerometer inputs, is implemented in the system software. During flight tests the system demonstrated an altitude resolution capability of 2.1 cm and sea wave height estimation accuracy of 10%. The altitude measurement performance exceeds that of the Skylab and GEOS-C predecessors by approximately an order of magnitude.
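
    The pulse-compression principle behind the 1000:1 figure (correlating the received echo against a replica of the transmitted chirp) can be sketched numerically; the sample rate, pulse parameters, and target delay below are illustrative, not the altimeter's.

```python
import numpy as np

fs = 1000.0                       # sample rate (Hz); all values here illustrative
T, B = 1.0, 100.0                 # 1 s pulse, 100 Hz swept bandwidth
t = np.arange(0, T, 1.0 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)    # linear-FM (chirp) pulse

echo = np.concatenate([np.zeros(300), chirp, np.zeros(300)])  # target at delay 300
mf = np.correlate(echo, chirp, mode="valid")     # matched filter: correlate replica
delay = int(np.argmax(np.abs(mf)))               # compressed peak marks the delay
# Main-lobe width after compression is ~1/B, versus the pulse length T: a
# time-bandwidth product of T*B = 100 in range resolution for this toy case.
```

    Stretch processing achieves the same correlation in analog hardware by mixing the echo with a reference chirp so the delay appears as a beat frequency.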

  4. Fixed-rate compressed floating-point arrays

    Energy Science and Technology Software Center (ESTSC)

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.
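
    To see why a fixed rate buys random access, consider this toy sketch. It is emphatically not ZFP's actual transform; the shared-exponent quantizer, the 16-bit rate, and the block size of 4 are invented for illustration. The point is only that every block compresses to the same number of bits, so block i starts at a computable offset and can be decoded alone.

```python
import numpy as np

RATE_BITS = 16   # fixed bits per value: block i always starts at i * 4 * RATE_BITS

def compress_block(block):
    # Toy fixed-rate codec (NOT zfp's algorithm): one shared block exponent
    # plus uniformly quantized integers.
    e = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-300)))
    scale = (1 << (RATE_BITS - 1)) - 1
    q = np.round(block / 2.0 ** e * scale).astype(np.int32)
    return e, q

def decompress_block(e, q):
    scale = (1 << (RATE_BITS - 1)) - 1
    return q.astype(np.float64) / scale * 2.0 ** e

data = np.linspace(0.0, 1.0, 16)
blocks = [compress_block(data[i:i + 4]) for i in range(0, 16, 4)]  # independent
third = decompress_block(*blocks[2])     # random access: decode only block 2
```

    A variable-rate coder could not jump straight to block 2 without an index, which is the trade-off the fixed-rate mode resolves.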

  5. Anamorphic transformation and its application to time-bandwidth compression.

    PubMed

    Asghari, Mohammad H; Jalali, Bahram

    2013-09-20

    A general method for compressing the modulation time-bandwidth product of analog signals is introduced. As one of its applications, this physics-based signal grooming, performed in the analog domain, allows a conventional digitizer to sample and digitize the analog signal with variable resolution. The net result is that frequency components that were beyond the digitizer bandwidth can now be captured and, at the same time, the total digital data size is reduced. This compression is lossless and is achieved through a feature selective reshaping of the signal's complex field, performed in the analog domain prior to sampling. Our method is inspired by operation of Fovea centralis in the human eye and by anamorphic transformation in visual arts. The proposed transform can also be performed in the digital domain as a data compression algorithm to alleviate the storage and transmission bottlenecks associated with "big data." PMID:24085172

  6. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  7. Online Adaptive Vector Quantization with Variable Size Codebook Entries.

    ERIC Educational Resources Information Center

    Constantinescu, Cornel; Storer, James A.

    1994-01-01

    Presents a new image compression algorithm that employs some of the most successful approaches to adaptive lossless compression to perform adaptive online (single pass) vector quantization with variable size codebook entries. Results of tests of the algorithm's effectiveness on standard test images are given. (12 references) (KRN)

  8. Glucose Variability

    PubMed Central

    2013-01-01

    The proposed contribution of glucose variability to the development of the complications of diabetes beyond that of glycemic exposure is supported by reports that oxidative stress, the putative mediator of such complications, is greater for intermittent as opposed to sustained hyperglycemia. Variability of glycemia in ambulatory conditions defined as the deviation from steady state is a phenomenon of normal physiology. Comprehensive recording of glycemia is required for the generation of any measurement of glucose variability. To avoid distortion of variability to that of glycemic exposure, its calculation should be devoid of a time component. PMID:23613565

  9. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  10. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  11. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  12. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  13. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  14. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications
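
    The first technique can be illustrated with a JPEG-style 8x8 DCT and a quantization matrix whose step sizes grow with spatial frequency, where the eye is less sensitive. The matrix below is a hypothetical stand-in for the psychophysically derived formula, and the test block is invented.

```python
import numpy as np

N = 8
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)               # orthonormal 8-point DCT-II matrix

def dct2(b):  return C @ b @ C.T         # 2-D DCT of an 8x8 block
def idct2(b): return C.T @ b @ C

# Hypothetical quantization matrix: step size grows with spatial frequency
# (a stand-in for the viewing-condition-dependent psychophysical formula).
Q = 8.0 + 4.0 * (n[:, None] + n[None, :])

block = np.outer(np.hanning(N), np.hanning(N)) * 100.0   # smooth test block
coeffs = np.round(dct2(block) / Q)       # most high-frequency coeffs quantize to 0
recon = idct2(coeffs * Q)                # decoded block, visually close to original
```

    The perceptual optimization amounts to choosing Q so that the quantization error it introduces stays below the visibility threshold for the given display and viewing distance.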

  15. Absolutely lossless compression of medical images.

    PubMed

    Ashraf, Robina; Akbar, Muhammad

    2005-01-01

    The data in medical images are very large, and compression is therefore essential for their storage and transmission. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In the approach, an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as the lossy compressor, while Huffman coding is used for the lossless compression. Quality of images is evaluated by comparison with available standard compression techniques. PMID:17281110
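
    The two-stage structure is easy to make concrete. In this hedged sketch, coarse scalar quantization stands in for the NNVQ and zlib stands in for Huffman coding; the point is only that a lossy stage plus a losslessly coded error image reconstructs the original bit-exactly.

```python
import zlib
import numpy as np

def encode(img, step=16):
    # Stage 1 (lossy): coarse scalar quantization stands in for the NNVQ.
    lossy = (img // step).astype(np.uint8)
    # Stage 2 (lossless): compress the error image; zlib stands in for Huffman.
    resid = (img - lossy * step).astype(np.uint8)
    return lossy.tobytes(), zlib.compress(resid.tobytes())

def decode(lossy_b, resid_b, shape, step=16):
    lossy = np.frombuffer(lossy_b, np.uint8).reshape(shape).astype(np.int32)
    resid = np.frombuffer(zlib.decompress(resid_b), np.uint8).reshape(shape)
    return (lossy * step + resid).astype(np.uint8)   # bit-exact reconstruction

img = np.arange(64, dtype=np.uint8).reshape(8, 8)    # toy stand-in "radiograph"
restored = decode(*encode(img), img.shape)
```

    The better the lossy stage predicts the image, the smaller and more compressible the residual, which is why the overall ratio improves with a good lossy compressor.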

  16. Are Compression Stockings an Effective Treatment for Orthostatic Presyncope?

    PubMed Central

    Protheroe, Clare Louise; Dikareva, Anastasia; Menon, Carlo; Claydon, Victoria Elizabeth

    2011-01-01

    Background Syncope, or fainting, affects approximately 6.2% of the population, and is associated with significant comorbidity. Many syncopal events occur secondary to excessive venous pooling and capillary filtration in the lower limbs when upright. As such, a common approach to the management of syncope is the use of compression stockings. However, research confirming their efficacy is lacking. We aimed to investigate the effect of graded calf compression stockings on orthostatic tolerance. Methodology/Principal Findings We evaluated orthostatic tolerance (OT) and haemodynamic control in 15 healthy volunteers wearing graded calf compression stockings compared to two placebo stockings in a randomized, cross-over, double-blind fashion. OT (time to presyncope, min) was determined using combined head-upright tilting and lower body negative pressure applied until presyncope. Throughout testing we continuously monitored beat-to-beat blood pressures, heart rate, stroke volume and cardiac output (finger plethysmography), cerebral and forearm blood flow velocities (Doppler ultrasound) and breath-by-breath end tidal gases. There were no significant differences in OT between compression stocking (26.0±2.3 min) and calf (29.3±2.4 min) or ankle (27.6±3.1 min) placebo conditions. Cardiovascular, cerebral and respiratory responses were similar in all conditions. The efficacy of compression stockings was related to anthropometric parameters, and could be predicted by a model based on the subject's calf circumference and shoe size (r = 0.780, p = 0.004). Conclusions/Significance These data question the use of calf compression stockings for orthostatic intolerance and highlight the need for individualised therapy accounting for anthropometric variables when considering treatment with compression stockings. PMID:22194814

  17. An overview of semantic compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2010-08-01

    We live in such perceptually rich natural and manmade environments that detection and recognition of objects is mediated cerebrally by attentional filtering, in order to separate objects of interest from background clutter. In computer models of the human visual system, attentional filtering is often restricted to early processing, where areas of interest (AOIs) are delineated around anomalies of interest, then the pixels within each AOI's subtense are isolated for later processing. In contrast, the human visual system concurrently detects many targets at multiple levels (e.g., retinal center-surround filters, ganglion layer feature detectors, post-retinal spatial filtering, and cortical detection / filtering of features and objects, to name but a few processes). Intracranial attentional filtering appears to play multiple roles, including clutter filtration at all levels of processing - thus, we process individual retinal cell responses, early filtering response, and so forth, on up to the filtering of objects at high levels of semantic complexity. Computationally, image compression techniques have progressed from emphasizing pixels, to considering regions of pixels as foci of computational interest. In more recent research, object-based compression has been investigated with varying rate-distortion performance and computational efficiency. Codecs have been developed for a wide variety of applications, although the majority of compression and decompression transforms continue to concentrate on region- and pixel-based processing, in part because of computational convenience. It is interesting to note that a growing body of research has emphasized the detection and representation of small features in relationship to their surrounding environment, which has occasionally been called semantic compression. In this paper, we overview different types of semantic compression approaches, with particular interest in high-level compression algorithms. Various algorithms and

  18. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data needed to be transfered. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data. PMID:17080858
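
    The predict-then-code structure common to this family of schemes can be sketched with a last-value predictor and an XOR residual (an illustrative reduction, not the authors' actual data-dependent predictor or entropy coder): consecutive similar doubles share their high-order bits, so the XOR residuals carry many leading zeros for an entropy coder to exploit, while the round trip stays exactly lossless.

```python
import struct

def float_to_bits(x): return struct.unpack("<Q", struct.pack("<d", x))[0]
def bits_to_float(b): return struct.unpack("<d", struct.pack("<Q", b))[0]

def encode(values):
    # Predict each value by its predecessor; emit the XOR of the IEEE-754
    # bit patterns. Close doubles differ only in low-order bits.
    prev, out = 0, []
    for v in values:
        bits = float_to_bits(v)
        out.append(bits ^ prev)
        prev = bits
    return out

def decode(residuals):
    # Invert exactly: XOR is its own inverse, so no precision is lost.
    prev, out = 0, []
    for r in residuals:
        bits = r ^ prev
        out.append(bits_to_float(bits))
        prev = bits
    return out
```

    A real coder would follow this with leading-zero counting and entropy coding of the residuals; the lossless property, the key requirement cited above, is already visible here.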

  19. Effects of Local Compression on Peroneal Nerve Function in Humans

    NASA Technical Reports Server (NTRS)

    Hargens, Alan R.; Botte, Michael J.; Swenson, Michael R.; Gelberman, Richard H.; Rhoades, Charles E.; Akeson, Wayne H.

    1993-01-01

    A new apparatus was developed to compress the anterior compartment selectively and reproducibly in humans. Thirty-five normal volunteers were studied to determine short-term thresholds of local tissue pressure that produce significant neuromuscular dysfunction. Local tissue fluid pressure adjacent to the deep peroneal nerve was elevated by the compression apparatus and continuously monitored for 2-3 h by the slit catheter technique. Elevation of tissue fluid pressure to within 35-40 mm Hg of diastolic blood pressure (approx. 40 mm Hg of in situ pressure in our subjects) elicited a consistent progression of neuromuscular deterioration including, in order, (a) gradual loss of sensation, as assessed by Semmes-Weinstein monofilaments, (b) subjective complaints, (c) reduced nerve conduction velocity, (d) decreased action potential amplitude of the extensor digitorum brevis muscle, and (e) motor weakness of muscles within the anterior compartment. Generally, higher intracompartmental pressures caused more rapid deterioration of neuromuscular function. In two subjects, when in situ compression levels were 0 and 30 mm Hg, normal neuromuscular function was maintained for 3 h. Threshold pressures for significant dysfunction were not always the same for each functional parameter studied, and the magnitudes of each functional deficit did not always correlate with compression level. This variable tolerance to elevated pressure emphasizes the need to monitor clinical signs and symptoms carefully in the diagnosis of compartment syndromes. The present studies were short term in nature; longer-term compression of myoneural tissues may result in dysfunction at lower pressure thresholds.

  20. Formulation development of metoprolol succinate and hydrochlorothiazide compression coated tablets.

    PubMed

    Shah, Ritesh; Parmar, Swatil; Patel, Hetal; Pandey, Sonia; Shah, Dinesh

    2013-12-01

    The purpose of the present research work was to design and optimize a compression coated tablet to provide an immediate release of hydrochlorothiazide in the stomach and extended release of metoprolol succinate in the intestine. The compression coated tablet was prepared by the direct compression method and consisted of a metoprolol succinate extended-release core tablet and a hydrochlorothiazide immediate-release coat layer. A barrier coating of Hydroxy Propyl Methyl Cellulose (HPMC) E15LV was applied onto the core tablets to prevent burst release of metoprolol succinate in acidic medium. A 3² full factorial design was employed for optimization of the amount of polymers required to achieve extended release of the drug. The percentage drug release values at given times (Q3, Q6, Q10, Q22) were selected as dependent variables. Core and compression coated tablets were evaluated for pharmaco-technical parameters. In vitro drug release of the optimized batch was found to comply with Pharmacopoeial specifications. The desired release of metoprolol succinate was obtained by a suitable combination of HPMC, having high gelling capacity, and polyethylene oxide, having quick gelling capacity. The mechanism of release of metoprolol succinate from all batches was anomalous diffusion. The optimised batch was stable under accelerated conditions for up to 3 months. Thus, a compression coated tablet of metoprolol succinate and hydrochlorothiazide was successfully formulated. PMID:23017092

  1. Isentropic Compression of Multicomponent Mixtures of Fuels and Inert Gases

    NASA Technical Reports Server (NTRS)

    Barragan, Michelle; Julien, Howard L.; Woods, Stephen S.; Wilson, D. Bruce; Saulsberry, Regor L.

    2000-01-01

    In selected aerospace applications of the fuels hydrazine and monomethylhydrazine, conditions can occur that result in the isentropic compression of a multicomponent mixture of fuel and inert gas. One such example is when a driver gas such as helium comes out of solution and mixes with the fuel vapor being compressed. A second example is when product gas from an energetic device mixes with the fuel vapor being compressed. Thermodynamic analysis has shown that under isentropic compression, the fuels hydrazine and monomethylhydrazine must be treated as real fluids using appropriate equations of state: the Peng-Robinson equation of state for hydrazine and the Redlich-Kwong-Soave equation of state for monomethylhydrazine. The addition of an inert gas of variable quantity, input temperature, and pressure to the fuel compounds the problem for safety design or analysis. This work provides the appropriate thermodynamic analysis of isentropic compression for the two examples cited. In addition to an entropy balance describing the change of state, an enthalpy balance is required. The presence of multiple components in the system requires that appropriate mixing rules be identified and applied to the analysis. This analysis is not currently available.
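    The Peng-Robinson relation named above can be sketched numerically. The critical constants and acentric factor below are approximate literature values for hydrazine, used here only for illustration; this is a minimal sketch, not the report's analysis:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def peng_robinson_pressure(T, V, Tc, Pc, omega):
    """Pressure (Pa) from the Peng-Robinson equation of state.

    T: temperature (K), V: molar volume (m^3/mol),
    Tc, Pc: critical temperature (K) / pressure (Pa), omega: acentric factor.
    """
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (V - b) - a * alpha / (V**2 + 2.0 * b * V - b**2)

# Approximate critical constants for hydrazine (illustrative values only).
P = peng_robinson_pressure(T=400.0, V=1.0e-3, Tc=653.0, Pc=14.7e6, omega=0.316)
print(f"P = {P / 1e5:.1f} bar")
```

    At this state the attractive term dominates the co-volume correction, so the real-fluid pressure falls below the ideal-gas value, which is the kind of deviation that motivates treating the fuels as real fluids.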

  2. Flux Compression in HTS Films

    NASA Astrophysics Data System (ADS)

    Mikheenko, P.; Colclough, M. S.; Chakalov, R.; Kawano, K.; Muirhead, C. M.

    We report an experimental investigation of the effect of flux compression in superconducting YBa2Cu3Ox (YBCO) films and YBCO/CMR (Colossal Magnetoresistive) multilayers. Flux compression produces a positive magnetic moment (m) upon cooling in a field from above to below the critical temperature. We found the compression effect in all measured films and multilayers. In accordance with theoretical calculations, m is proportional to the applied magnetic field. The amplitude of the effect depends on the cooling rate, which suggests inhomogeneous cooling as its origin. The positive moment is always very small, a fraction of a percent of the ideal diamagnetic response. A CMR layer in contact with the HTS decreases the amplitude of the effect. Flux compression depends weakly on sample size but is sensitive to sample shape and topology. The positive magnetic moment does not appear in bulk samples at low cooling rates. Our results show that the main features of flux compression are very different from those of the paramagnetic Meissner effect observed in bulk high-temperature superconductors and Nb disks.

  3. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query, and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents; it can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
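    The generic DAG-compression idea — sharing identical subtrees so a tree becomes a directed acyclic graph — can be sketched with hash-consing. This is a minimal illustration of the general technique, not the paper's implementation:

```python
def share_subtrees(node, cache=None):
    """Hash-consing: identical subtrees become one shared object,
    turning an XML-like tree (tag, (children...)) into a DAG."""
    if cache is None:
        cache = {}
    tag, children = node
    # Canonicalize children first, then look the whole node up in the cache.
    key = (tag, tuple(share_subtrees(c, cache) for c in children))
    return cache.setdefault(key, key)

# Two identical <list> branches collapse into one shared node.
leaf = ("item", ())
branch = ("list", (leaf, leaf))
doc = ("root", (branch, branch))
dag = share_subtrees(doc)
assert dag[1][0] is dag[1][1]  # both branches are now the same object
```

    Storing each distinct node once is what yields the size reduction; a real coder would then serialize node references instead of repeated subtrees.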

  4. Compression of spectral meteorological imagery

    NASA Technical Reports Server (NTRS)

    Miettinen, Kristo

    1993-01-01

    Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems for compression ratios from as little as three-to-one to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2D-DCT engine. Spectral relations in the data are exploited through block mean extraction followed by orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.
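    The block-DCT stage described above can be sketched in plain NumPy: transform a block, keep only the largest coefficients, and invert. This is a generic illustration of transform coding on a 32x32 block under assumed parameters, not Astro-Space Division's actual coder:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (satisfies C @ C.T == I)."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0, :] /= np.sqrt(2.0)
    return C

def compress_block(block, keep=0.25):
    """2-D DCT, zero all but the largest `keep` fraction of coefficients, invert."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                       # forward 2-D DCT
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return C.T @ coeffs @ C                        # inverse 2-D DCT

# A smooth synthetic 32x32 block reconstructs well from 25% of its coefficients.
x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
block = np.sin(2 * np.pi * x) * np.cos(np.pi * y)
rec = compress_block(block, keep=0.25)
err = np.abs(rec - block).max()
```

    Smooth imagery concentrates its energy in a few low-frequency DCT coefficients, which is why ratios of several-to-one survive high-fidelity reconstruction.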

  5. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  6. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  7. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  8. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  9. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  10. Effects of shock structure on temperature field in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin; Chen, Shiyi

    2014-11-01

    Effects of shock structure on the temperature field in compressible turbulence were investigated. Small-scale shocklets and large-scale shock waves appeared in flows driven by solenoidal and compressive forcings (SFT and CFT, respectively). In SFT the temperature had a Kolmogorov spectrum and ramp-cliff structures, while in CFT it obeyed a Burgers spectrum and was dominated by large-scale rarefaction and compression. The power-law exponents for the p.d.f. of large negative dilatation were -2.5 in SFT and -3.5 in CFT, approximately corresponding to model results. The isentropic approximation of thermodynamic variables showed that in SFT the deviation from isentropy was reinforced as the turbulent Mach number increased. At similar turbulent Mach numbers, the variables in CFT were more anisentropic. The transport of temperature was increased by small-scale viscous dissipation and large-scale pressure-dilatation. The distribution of the positive and negative components of pressure-dilatation confirmed the mechanism of negligible pressure-dilatation at small scales. Further, the positive skewness of the p.d.f.s of pressure-dilatation implied that the conversion from kinetic to internal energy by compression was more intense than the opposite process by rarefaction.

  11. Motor commands induce time compression for tactile stimuli.

    PubMed

    Tomassini, Alice; Gori, Monica; Baud-Bovy, Gabriel; Sandini, Giulio; Morrone, Maria Concetta

    2014-07-01

    Saccades cause compression of visual space around the saccadic target, and also a compression of time, both phenomena thought to be related to the problem of maintaining saccadic stability (Morrone et al., 2005; Burr and Morrone, 2011). Interestingly, similar phenomena occur at the time of hand movements, when tactile stimuli are systematically mislocalized in the direction of the movement (Dassonville, 1995; Watanabe et al., 2009). In this study, we measured whether hand movements also cause an alteration of the perceived timing of tactile signals. Human participants compared the temporal separation between two pairs of tactile taps while moving their right hand in response to an auditory cue. The first pair of tactile taps was presented at variable times with respect to movement with a fixed onset asynchrony of 150 ms. Two seconds after test presentation, when the hand was stationary, the second pair of taps was delivered with a variable temporal separation. Tactile stimuli could be delivered to either the right moving or left stationary hand. When the tactile stimuli were presented to the motor effector just before and during movement, their perceived temporal separation was reduced. The time compression was effector-specific, as perceived time was veridical for the left stationary hand. The results indicate that time intervals are compressed around the time of hand movements. As for vision, the mislocalizations of time and space for touch stimuli may be consequences of a mechanism attempting to achieve perceptual stability during tactile exploration of objects, suggesting common strategies within different sensorimotor systems. PMID:24990936

  12. Evaluation of nonlinear frequency compression: Clinical outcomes

    PubMed Central

    Glista, Danielle; Scollie, Susan; Bagatto, Marlene; Seewald, Richard; Parsa, Vijay; Johnson, Andrew

    2009-01-01

    This study evaluated prototype multichannel nonlinear frequency compression (NFC) signal processing on listeners with high-frequency hearing loss. This signal processor applies NFC above a cut-off frequency. The participants were hearing-impaired adults (13) and children (11) with sloping, high-frequency hearing loss. Multiple outcome measures were repeated using a modified withdrawal design. These included speech sound detection, speech recognition, and self-reported preference measures. Group-level results provide evidence of significant improvement of consonant and plural recognition when NFC was enabled. Vowel recognition did not change significantly. Analysis of individual results allowed for exploration of individual factors contributing to benefit received from NFC processing. Findings suggest that NFC processing can improve high-frequency speech detection and speech recognition ability for adult and child listeners. Variability in individual outcomes was related to factors such as the degree and configuration of hearing loss, participant age, and type of outcome measure. PMID:19504379
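    One common form of nonlinear frequency compression maps input frequencies above the cut-off toward it on a log-frequency scale while passing lower frequencies unchanged. The sketch below illustrates that general idea with hypothetical parameter values (2 kHz cut-off, 2:1 ratio); it is not the prototype processor's actual rule:

```python
def nfc_map(freq_hz, cutoff_hz=2000.0, ratio=2.0):
    """Output frequency under a simple log-scale nonlinear frequency
    compression rule: below the cut-off, frequencies pass unchanged;
    above it, the distance (in octaves) from the cut-off is divided
    by the compression ratio."""
    if freq_hz <= cutoff_hz:
        return freq_hz
    return cutoff_hz * (freq_hz / cutoff_hz) ** (1.0 / ratio)

print(nfc_map(1000.0))  # below cut-off: unchanged
print(nfc_map(8000.0))  # 2 octaves above cut-off -> 1 octave above
```

    This is how high-frequency cues (e.g., fricatives and plural /s/) can be moved into a listener's audible range, consistent with the consonant and plural recognition gains reported above.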

  13. Compressive behavior of fine sand.

    SciTech Connect

    Martin, Bradley E.; Kabir, Md. E.; Song, Bo; Chen, Wayne

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density, and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic compression and smaller still after dynamic axial loading.

  14. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size, and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
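    The core idea — approximating a sampled series by a truncated Chebyshev expansion and storing only the coefficients — can be sketched with NumPy's Chebyshev utilities. The signal and degree below are arbitrary illustrative choices, not the patented implementation:

```python
import numpy as np

# 512 samples of a smooth synthetic "telemetry" signal on [-1, 1].
t = np.linspace(-1.0, 1.0, 512)
signal = np.exp(-t**2) * np.cos(4.0 * t)

# Fit a degree-15 Chebyshev polynomial: 16 coefficients stand in for
# 512 samples, a 32:1 reduction whose loss is controlled by the degree.
coeffs = np.polynomial.chebyshev.chebfit(t, signal, deg=15)
approx = np.polynomial.chebyshev.chebval(t, coeffs)
max_err = np.abs(approx - signal).max()
```

    For smooth data the Chebyshev coefficients decay rapidly, so a short coefficient vector reconstructs the series to high accuracy; noisy or discontinuous data would need segmentation or a higher degree.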

  15. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and thus is suitable for the fluorescence readout mode. A 2-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  16. Analyzing Ramp Compression Wave Experiments

    NASA Astrophysics Data System (ADS)

    Hayes, D. B.

    2007-12-01

    Isentropic compression of a solid to hundreds of GPa by a ramped, planar compression wave allows measurement of material properties at high strain and modest temperature. Introduction of a measurement plane disturbs the flow, requiring special analysis techniques. If the measurement interface is windowed, the unsteady nature of the wave in the window requires special treatment. When the flow is hyperbolic, the equations of motion can be integrated backward in space through the sample to a region undisturbed by the interface interactions, fully accounting for those interactions. For more complex materials, such as hysteretic elastic/plastic solids or phase-changing materials, hybrid analysis techniques are required.

  17. Extended testing of compression distillation.

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1972-01-01

    During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering useable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit, developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes, to assure the recovery of sterile potable water from urine and treated urinal flush water.

  18. Data compression in digitized lines

    SciTech Connect

    Thapa, K. )

    1990-04-01

    The problem of data compression is very important in digital photogrammetry, computer-assisted cartography, and GIS/LIS. It is also applicable in many other fields, such as computer vision, image processing, pattern recognition, and artificial intelligence. Consequently, many algorithms are available to solve this problem, but none of them is considered satisfactory. In this paper, a new method of finding critical points in a digitized curve is explained. This technique, based on the normalized symmetric scattered matrix, is good for both critical point detection and data compression. In addition, the critical points detected by this algorithm are compared with those found by zero-crossings. 8 refs.
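    The abstract does not give the scattered-matrix method itself; as a point of reference, the classic Ramer-Douglas-Peucker algorithm performs the same kind of critical-point selection on a digitized line, keeping only points that deviate from a chord by more than a tolerance (a standard baseline, not the paper's method):

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker line simplification: keep a point only if it
    lies more than `eps` from the chord between the current endpoints."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    seg = end - start
    rel = points - start
    seg_len = np.hypot(seg[0], seg[1])
    if seg_len == 0.0:
        dists = np.hypot(rel[:, 0], rel[:, 1])
    else:
        # Perpendicular distance via the 2-D cross product.
        dists = np.abs(seg[0] * rel[:, 1] - seg[1] * rel[:, 0]) / seg_len
    i = int(np.argmax(dists))
    if dists[i] > eps:
        left = rdp(points[: i + 1], eps)
        right = rdp(points[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# A nearly straight digitized line collapses to its two endpoints.
line = [(0, 0), (1, 0.01), (2, -0.02), (3, 0.015), (4, 0.0), (5, 0.01)]
simplified = rdp(line, eps=0.1)
```

    Retained points play the role of the critical points: the rest of the curve can be discarded and regenerated to within `eps`, which is the data-compression view of the problem.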

  19. Simulating Ramp Compression of Diamond

    NASA Astrophysics Data System (ADS)

    Godwal, B. K.; Gonzàlez-Cataldo, F. J.; Jeanloz, R.

    2014-12-01

    We model ramp compression, shock-free dynamic loading, intended to generate a well-defined equation of state that achieves higher densities and lower temperatures than the corresponding shock Hugoniot. Ramp loading ideally approaches isentropic compression for a fluid sample, so is useful for simulating the states deep inside convecting planets. Our model explicitly evaluates the deviation of ramp from "quasi-isentropic" compression. Motivated by recent ramp-compression experiments to 5 TPa (50 Mbar), we calculate the room-temperature isotherm of diamond using first-principles density functional theory and molecular dynamics, from which we derive a principal isentrope and Hugoniot by way of the Mie-Grüneisen formulation and the Hugoniot conservation relations. We simulate ramp compression by imposing a uniaxial strain that then relaxes to an isotropic state, evaluating the change in internal energy and stress components as the sample relaxes toward isotropic strain at constant volume; temperature is well defined for the resulting hydrostatic state. Finally, we evaluate multiple shock- and ramp-loading steps to compare with single-step loading to a given final compression. Temperatures calculated for single-step ramp compression are less than Hugoniot temperatures only above 500 GPa, the two being close to each other at lower pressures. We obtain temperatures of 5095 K and 6815 K for single-step ramp loading to 600 and 800 GPa, for example, which compares well with values of ~5100 K and ~6300 K estimated from previous experiments [PRL,102, 075503, 2009]. At 800 GPa, diamond is calculated to have a temperature of 500 K along the isentrope; 900 K under multi-shock compression (asymptotic result after 8-10 steps); and 3400 K under 3-step ramp loading (200-400-800 GPa). Asymptotic multi-step shock and ramp loading are indistinguishable from the isentrope, within present uncertainties. Our simulations quantify the manner in which current experiments can simulate the

  20. GPU-accelerated compressive holography.

    PubMed

    Endo, Yutaka; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2016-04-18

    In this paper, we show fast signal reconstruction for compressive holography using a graphics processing unit (GPU). We implemented a fast iterative shrinkage-thresholding algorithm on a GPU to solve the ℓ1 and total variation (TV) regularized problems that are typically used in compressive holography. Since the algorithm is highly parallel, GPUs can compute it efficiently by data-parallel computing. For better performance, our implementation exploits the structure of the measurement matrix to compute the matrix multiplications. The results show that GPU-based implementation is about 20 times faster than CPU-based implementation. PMID:27137282
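    The iteration the authors accelerate on a GPU can be sketched in plain NumPy for the ℓ1-regularized problem: a fast iterative shrinkage-thresholding (FISTA) loop whose dominant cost is the matrix products that the GPU parallelizes. The matrix and sparse signal below are synthetic stand-ins, not holographic data:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam, n_iter=500):
    """FISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128)) / 8.0   # random sensing matrix, ~unit-norm columns
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]     # sparse ground truth
y = A @ x_true                              # noiseless measurements
x_hat = fista(A, y, lam=0.01)
```

    Every step is an elementwise operation or a matrix product, which is why the algorithm maps so cleanly onto data-parallel GPU kernels.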

  1. Compressing the Inert Doublet Model

    DOE PAGESBeta

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-16

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. In conclusion, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  2. Compressing the Inert Doublet Model

    SciTech Connect

    Blinov, Nikita; Morrissey, David E.; de la Puente, Alejandro

    2015-10-29

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. Furthermore, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  3. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed; it requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
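    A common reading of double delta coding is storing second differences, so slowly varying scan lines produce long runs of small residuals that entropy-code well. The round-trip sketch below illustrates that generic idea, not the paper's exact scheme:

```python
import numpy as np

def double_delta_encode(x):
    """Second differences; the first value rides along in the stream."""
    d = np.diff(x, prepend=0)     # first differences (d[0] = x[0])
    return np.diff(d, prepend=0)  # second differences

def double_delta_decode(dd):
    """Invert by integrating twice."""
    return np.cumsum(np.cumsum(dd))

row = np.array([10, 12, 15, 19, 24, 30, 37], dtype=np.int64)
enc = double_delta_encode(row)
assert np.array_equal(double_delta_decode(enc), row)  # lossless round trip
```

    For this smoothly increasing row the encoded tail is a run of ±1 residuals, which a short variable-length code represents far more compactly than the raw values.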

  4. Astronomical context coder for image compression

    NASA Astrophysics Data System (ADS)

    Pata, Petr; Schindler, Jaromir

    2015-10-01

    Recent lossless still-image compression formats are powerful tools for compressing all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, for a compression algorithm to be successful, it has to take full advantage of the coded image's properties. Astronomical data form a special class of images: besides general image properties, they have some specific characteristics that are unique. A new coder that correctly exploits these special properties should achieve superior performance on this class of images, at least in terms of compression ratio. In this work, a novel lossless compression method for astronomical image data is presented. The achievable compression ratio of the new coder is compared to the theoretical lossless compression limit and to recent compression standards from astronomy and general multimedia.

  5. Management-oriented analysis of sediment yield time compression

    NASA Astrophysics Data System (ADS)

    Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed

    2016-04-01

    The understanding of inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield often depends heavily on the occurrence of a few large events that produce the majority of sediments, as in the Mediterranean. This phenomenon is referred to as time compression, and the relevance of considering it grows with the increasing magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, which provide high resolution but demand laborious analysis (in terms of data volume, required data precision, and methods). To provide an alternative, simplified approach, monthly and yearly time compression were evaluated in eight Mediterranean catchments (of the R-OSMed network) representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27100 Mg•km-2•a-1, and the monthly sediment yield between 0 and ~11600 Mg•km-2•month-1. The catchments' sediment yield was unequally distributed at inter- and intra-annual scales, and large differences were observed between the catchments. Two types of time compression were distinguished: inter-annual (based on annual values) and intra-annual (based on monthly values). Four rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff, or sediment yield; (ii) low time compression of rainfall and runoff but high compression of sediment yield; (iii) low compression of rainfall and high compression of runoff and sediment yield; and (iv) low, medium, and high compression of rainfall, runoff, and sediment yield, respectively. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the latter two were present. This implies that high sediment yields occurred in

  6. Teaching Time-Space Compression

    ERIC Educational Resources Information Center

    Warf, Barney

    2011-01-01

    Time-space compression shows students that geographies are plastic, mutable and forever changing. This paper justifies the need to teach this topic, which is rarely found in undergraduate course syllabi. It addresses the impacts of transportation and communications technologies to explicate its dynamics. In summarizing various conceptual…

  7. Hyperspectral imaging using compressed sensing

    NASA Astrophysics Data System (ADS)

    Ramirez I., Gabriel Eduardo; Manian, Vidya B.

    2012-06-01

    Compressed sensing (CS) has attracted a lot of attention in recent years as a promising signal processing technique that exploits a signal's sparsity to reduce its size. It allows for simple compression that does not require much additional computational power and permits physical implementation at the sensor using spatial light multiplexers based on Texas Instruments' (TI) digital micro-mirror device (DMD). The DMD can be used as a random measurement matrix: reflecting the image off the DMD is the equivalent of an inner product between the image's individual pixels and the measurement matrix. CS, however, is asymmetrical, meaning that recovery or reconstruction of the signal from the measurements does require a higher level of computation. This makes working with the compressed version of the signal in implementations such as detection or classification much more efficient: if an initial analysis shows nothing of interest, the signal need not be reconstructed. Many hyperspectral image applications are precisely focused on these areas and would greatly benefit from a compression technique like CS, which could help reduce the light sensor down to a single pixel, lowering camera costs while reducing the large amounts of data generated by all the bands. The present paper shows an implementation of CS using a single-pixel hyperspectral sensor and compares the reconstructed images to those obtained through the use of a regular sensor.
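    The single-pixel measurement model — each DMD pattern yields one inner product with the scene — and a greedy sparse recovery can be sketched as a 1-D toy. The ±1 patterns and the hypothetical sparse scene below are illustrative assumptions, not the authors' hardware pipeline:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse signal."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with what is still unexplained.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n_pixels, n_meas = 256, 128
# Each row is one +/-1 DMD pattern; each entry of y is one single-pixel reading.
Phi = rng.choice([-1.0, 1.0], size=(n_meas, n_pixels)) / np.sqrt(n_meas)
scene = np.zeros(n_pixels)
scene[[3, 77, 200]] = [2.0, -1.0, 1.5]   # hypothetical sparse scene
y = Phi @ scene                          # 128 readings for 256 unknown pixels
recovered = omp(Phi, y, k=3)
```

    The asymmetry the abstract describes is visible here: acquisition is a single matrix-vector product, while recovery requires an iterative solver, so deferring reconstruction until something interesting is detected saves most of the computation.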

  8. Compression fractures of the back

    MedlinePlus

    ... Meirhaeghe J, et al. Efficacy and safety of balloon kyphoplasty compared with non-surgical care for vertebral compression fracture (FREE): a randomised controlled trial. Lancet . 2009;373(9668):1016-24. PMID: 19246088 www.ncbi.nlm.nih.gov/pubmed/19246088 .

  9. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single-frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled-quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple, consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm-specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  10. COMPRESSIBLE FLOW, ENTRAINMENT, AND MEGAPLUME

    EPA Science Inventory

    It is generally believed that low Mach number, i.e., low-velocity, flow may be assumed to be incompressible flow. Under steady-state conditions, an exact equation of continuity may then be used to show that such flow is non-divergent. However, a rigorous, compressible fluid-dynam...