Science.gov

Sample records for alvar variable compression

  1. Alvar variable compression engine development. Final report

    SciTech Connect

    1998-03-30

    The Alvar engine is an invention of Mr. Alvar Gustafsson of Skarblacka, Sweden. It is a four-stroke spark ignition internal combustion engine with variable compression ratio and variable displacement. The compression ratio can be varied by means of small secondary cylinders and pistons that communicate with the main combustion chambers. The secondary pistons can be phase-shifted with respect to the main pistons. The engine is suitable for multi-fuel operation. Invention rights are held by Alvar Engine AB of Sweden, a company created to manage the development of the Alvar engine. A project was conceived wherein an optimised experimental engine would be built and tested to verify the advantages claimed for the Alvar engine and to reveal any drawbacks. Alvar Engine AB appointed Gunnar Lundholm, professor of Combustion Engines at Lund University, Lund, Sweden, as principal investigator. The project can be seen as having three parts: (1) optimisation of the engine combustion chamber geometry; (2) design and manufacturing of the necessary engine parts; and (3) testing of the engine in an engine laboratory. NUTEK, the Swedish Board for Industrial and Technical Development, granted Gunnar Lundholm SEK 50000 (about $6700) to travel to the US to evaluate potential research and development facilities able to perform the different project tasks.

  2. Variable compression ratio control

    SciTech Connect

    Johnson, K.A.

    1988-04-19

    In a four cycle engine that includes a crankshaft having a plural number of main shaft sections defining the crankshaft rotational axis and a plural number of crank arms defining orbital shaft sections, a plural number of combustion cylinders, a movable piston within each cylinder, each cylinder and its associated piston defining a combustion chamber, a connecting rod connecting each piston to an orbital shaft section of the crankshaft, and a plural number of stationary support walls spaced along the crankshaft axis for absorbing crankshaft forces: the improvement is described comprising means for adjustably supporting the crankshaft on the stationary walls such that the crankshaft rotational axis is adjustable along the piston-cylinder axis for the purpose of varying a resulting engine compression ratio; the adjustable support means comprising a circular cavity in each stationary wall. A circular disk is swivelably seated in each cavity, each circular disk having a circular opening therethrough eccentric to the disk center. The crankshaft is arranged so that respective ones of its main shaft sections are located within respective ones of the circular openings; means for rotating each circular disk around its center so that the main shaft sections of the crankshaft are adjusted toward and away from the combustion chamber; a pinion gear on an output end of the crankshaft in axial alignment with and positioned beyond the respective ones of the main shaft sections; and a rotary output gear located about and engaged with teeth extending from the pinion gear.
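
As a rough sketch of how an eccentric mounting varies the compression ratio, the toy calculation below (with made-up geometry, not figures from the patent) shifts the crankshaft axis toward the combustion chamber as the disk rotates, shrinking the clearance volume:

```python
import math

# Hypothetical geometry (not from the patent): 0.5 L cylinder, base CR 9:1.
V_D = 500.0                  # swept volume per cylinder, cm^3
BORE_AREA = 50.0             # piston crown area, cm^2
V_C0 = V_D / (9.0 - 1.0)     # clearance volume at the base compression ratio

def compression_ratio(theta_deg, ecc=0.2):
    """Compression ratio after rotating the eccentric disk by theta.

    ecc is the disk eccentricity in cm; rotating the disk shifts the
    crankshaft axis toward the head by delta = ecc * (1 - cos(theta)),
    which removes BORE_AREA * delta from the clearance volume.
    """
    delta = ecc * (1.0 - math.cos(math.radians(theta_deg)))
    v_c = V_C0 - BORE_AREA * delta
    return (V_D + v_c) / v_c
```

With these assumed numbers, rotating the disk from 0 to 180 degrees raises the ratio smoothly from 9:1 to roughly 12.8:1, illustrating why a half-turn of the disks is enough authority for a practical VCR range.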

  3. Variable density compressed image sampling.

    PubMed

    Wang, Zhongmin; Arce, Gonzalo R

    2010-01-01

    Compressed sensing (CS) provides an efficient way to acquire and reconstruct natural images from a limited number of linear projection measurements leading to sub-Nyquist sampling rates. A key to the success of CS is the design of the measurement ensemble. This correspondence focuses on the design of a novel variable density sampling strategy, where the a priori information of the statistical distributions that natural images exhibit in the wavelet domain is exploited. The proposed variable density sampling has the following advantages: 1) the generation of the measurement ensemble is computationally efficient and requires less memory; 2) the necessary number of measurements for image reconstruction is reduced; 3) the proposed sampling method can be applied to several transform domains and leads to simple implementations. Extensive simulations show the effectiveness of the proposed sampling method.
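
A minimal sketch of a variable-density measurement ensemble of this flavor: sampling probabilities decay radially away from DC, mimicking the energy concentration of natural images at low frequencies. The power-law decay and the normalization loop are illustrative assumptions, not the construction used in the paper:

```python
import numpy as np

def variable_density_prob(n=64, target_rate=0.25, decay=2.0):
    """Radially decaying sampling probabilities: dense near DC, sparse at
    high frequencies. `decay` and the rescaling are illustrative choices."""
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    r = np.hypot(fx, fy)
    p = 1.0 / (1.0 + (r * n) ** decay)          # r * n = radius in grid units
    for _ in range(100):                         # rescale so mean ~= target
        p = np.clip(p * (target_rate / p.mean()), 0.0, 1.0)
    return p

prob = variable_density_prob()
# One Bernoulli measurement mask drawn from the ensemble:
mask = np.random.default_rng(0).random(prob.shape) < prob
```

Because the ensemble is just a probability map plus independent Bernoulli draws, it is cheap to generate and store, which is the first advantage the abstract lists.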

  4. Envera Variable Compression Ratio Engine

    SciTech Connect

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power.

    Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology, however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near-term commercialization are key attributes of the Envera VCR engine.

    VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo- or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo- or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing the compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high-load demand periods there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low
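
The knock-limit argument can be put in numbers. In a hedged sketch with assumed values (the polytropic exponent and the notional knock-limited TDC pressure are illustrative, not figures from the report), the intake pressure the engine can tolerate scales inversely with the compression ratio raised to the polytropic exponent:

```python
# Illustrative numbers only: knock limits the peak compression pressure; with
# polytropic compression p_tdc = p_intake * CR**n, a lower CR admits a higher
# knock-limited intake (boost) pressure.
N_POLY = 1.3          # assumed polytropic exponent for the compression stroke
P_TDC_LIMIT = 60.0    # assumed knock-limited pressure at TDC, bar

def knock_limited_boost(cr):
    """Maximum intake manifold pressure (bar, abs) for the assumed TDC limit."""
    return P_TDC_LIMIT / cr ** N_POLY

boost_low_cr = knock_limited_boost(9.0)    # full-load compression ratio
boost_high_cr = knock_limited_boost(12.0)  # cruise compression ratio
```

Dropping from 12:1 to 9:1 raises the allowable boost by roughly 45 percent under these assumptions, which is the headroom the VCR mechanism trades on at high load.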

  5. Eccentric crank variable compression ratio mechanism

    DOEpatents

    Lawrence, Keith Edward; Moser, William Elliott; Roozenboom, Stephan Donald; Knox, Kevin Jay

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  6. Crankshaft assembly for variable stroke engine for variable compression

    SciTech Connect

    Heniges, W.B.

    1989-12-19

    This patent describes a crankshaft assembly for a variable compression engine with reciprocating pistons. It comprises: a crankshaft assembly including a web, a crankpin, a crankshaft arm, piston-driven means carried by the crankpin, and eccentric means including an eccentric bushing rotatably carried by the crankpin and interposed between the crankpin and the piston-driven means. The eccentric means include an eccentric-mounted gear whereby adjusted rotation of the eccentric means relative to the crankpin alters the spatial relationship of the eccentric and the piston-driven means to the crankshaft axis, thereby altering the piston stroke; and eccentric positioning means including a gear train comprising a first gear driven by the crankshaft arm for rotation about the crankshaft axis, a gear set driven by the first gear with certain gears of the set being displaceable, carrier means supporting the certain gears, control means coupled to the carrier means for positioning same and the certain gears, and driven gears powered by the gear set, one of the driven gears being in mesh with the eccentric-mounted gear to impart rotation to same, altering the relationship of the eccentric bushing to the piston-driven means and thereby determining the stroke.

  7. Statistical conditional sampling for variable-resolution video compression.

    PubMed

    Wong, Alexander; Shafiee, Mohammad Javad; Azimifar, Zohreh

    2012-01-01

    In this study, we investigate a variable-resolution approach to video compression based on conditional random field (CRF) modeling and statistical conditional sampling, in order to further improve compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, making use of the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  8. Statistical Conditional Sampling for Variable-Resolution Video Compression

    PubMed Central

    Wong, Alexander; Shafiee, Mohammad Javad; Azimifar, Zohreh

    2012-01-01

    In this study, we investigate a variable-resolution approach to video compression based on conditional random field (CRF) modeling and statistical conditional sampling, in order to further improve compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, making use of the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution. PMID:23056188

  9. Alvar soils and ecology in the boreal forest and taiga regions of Canada.

    NASA Astrophysics Data System (ADS)

    Ford, D.

    2012-04-01

    Alvars have been defined as "...a biological association based on a limestone plain with thin or no soil and, as a result, sparse vegetation. Trees and bushes are stunted or absent ... may include prairie spp." (Wikipedia). They were first described in southern Sweden, Estonia, the karst pavements of Yorkshire (UK) and the Burren (Eire). In North America alvars have been recognised and reported only in the Mixed Forest (deciduous/coniferous) Zone around the Great Lakes. An essential feature of the hydrologic controls on vegetation growth on natural alvars is that these terrains were glaciated in the last (Wisconsinan/Würm) ice age: the upper beds of any pre-existing epikarst were stripped away by glacier scour and there has been insufficient time for post-glacial epikarst to achieve the depths and densities required to support the deep rooting needed for mature forest cover. However, in the sites noted above, the alvars have been created, at least in part, by deforestation, overgrazing, burning to create browse, etc. and thus should not be considered wholly natural phenomena. There are extensive natural alvars in the Boreal Forest and Taiga ecozones in Canada. Their nature and variety will be illustrated with examples from cold temperate maritime climate settings in northern Newfoundland and the Gulf of St Lawrence and cold temperate continental to sub-arctic climates in northern Manitoba and the Northwest Territories.

  10. An Efficient Variable-Length Data-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Kiely, Aaron B.

    1996-01-01

    Adaptive variable-length coding scheme for compression of stream of independent and identically distributed source data involves either Huffman code or alternating run-length Huffman (ARH) code, depending on characteristics of data. Enables efficient compression of output of lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for same purpose. In addition, scheme suitable for parallel implementation on hardware with modular structure, provides for rapid adaptation to changing data source, compatible with block orientation to alleviate memory requirements, ensures efficiency over wide range of entropy, and easily combined with such other communication schemes as those for containment of errors and for packetization.
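
The fixed-Huffman half of such a scheme can be sketched as follows; the ARH variant would additionally run-length encode runs of the most probable symbol before Huffman coding. This is an illustrative reimplementation, not the authors' code:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix-free Huffman code for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:                     # degenerate single-symbol source
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique tick so ties never compare the dicts.
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)    # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]

code = huffman_code("abracadabra")
bits = sum(len(code[s]) for s in "abracadabra")
```

An adaptive coder in the spirit of the abstract would track the source statistics and switch to the run-length variant whenever the most probable symbol dominates, since plain Huffman cannot spend less than one bit per symbol.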

  11. Fluctuations of thermodynamic variables in compressible isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Donzis, Diego; Jagannathan, Shriram

    2014-11-01

    A distinguishing feature of compressible turbulence is the appearance of fluctuations of thermodynamic variables. While their importance is well-known in understanding these flows, some of their basic characteristics such as the Reynolds and Mach number dependence are not well understood. We use a large database of Direct Numerical Simulation of stationary compressible isotropic turbulence on up to 2048^3 grids at Taylor Reynolds numbers up to 450 and a range of Mach numbers (Mt ~ 0.1-0.6) to examine statistical properties of thermodynamic variables. Our focus is on the PDFs and moments of pressure, density and temperature. While results at low Mt are consistent with incompressible results, qualitative changes are observed at higher Mt with a transition around Mt ~ 0.3. For example, the PDF of pressure changes from negatively to positively skewed as Mt increases. Similar changes are observed for temperature and density. We suggest that large fluctuations of thermodynamic variables will be log-normal at high Mt. We also find that, relative to incompressible turbulence, the correlation between enstrophy and low-pressure regions is weakened at high Mt which can be explained by the dominance of the so-called dilatational pressure.

  12. High frequency chest compression effects heart rate variability.

    PubMed

    Lee, Jongwon; Lee, Yong W; Warwick, Warren J

    2007-01-01

    High frequency chest compression (HFCC) supplies a sequence of air pulses through a jacket worn by a patient to remove excess mucus, for the treatment or prevention of lung disease. The air pulses produced by the pulse generator propagate over the thorax, delivering vibration and compression energy. A number of studies have demonstrated that the HFCC system increases the ability to clear mucus and improves lung function. Few studies have examined the change in instantaneous heart rate (iHR) and heart rate variability (HRV) during HFCC therapy. The purpose of this study is to measure the change of HRV under four experimental protocols: (a) without HFCC, (b) with the jacket inflated, (c) HFCC at 6 Hz, and (d) HFCC at 21 Hz. The nonlinearity and regularity of HRV were assessed by approximate entropy (ApEn), a method used to quantify complexity and randomness. To compute the ApEn, we sectioned the data into a total of eight epochs and evaluated the ApEn over each epoch. Our results show significant differences in both the iHR and HRV between the experimental protocols. The iHR was elevated in both the (c) 6 Hz and (d) 21 Hz conditions relative to the condition without HFCC (by 10% and 16%, respectively). We also found that the HFCC system tends to increase the HRV. Our study suggests that the iHR and HRV are important physiological indices to monitor during HFCC therapy.
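
Approximate entropy as used here follows the standard Pincus formulation; the sketch below assumes the common choices m = 2 and r = 0.2 times the standard deviation, which the abstract does not specify:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series: low for regular
    signals, high for irregular ones."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # common clinical tolerance choice

    def phi(mm):
        n = len(x) - mm + 1
        # Embed the series as n overlapping windows of length mm.
        emb = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between every pair of windows.
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (d <= r).mean(axis=1)  # self-match included, as in Pincus
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

A perfectly regular series scores zero, while broadband noise scores high, which is why ApEn separates the HFCC protocols by how much structure remains in the heart-rate series.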

  13. Working characteristics of variable intake valve in compressed air engine.

    PubMed

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which allows the compressed air energy to be reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, a performance analysis was introduced for the design of the compressed air engine. Results show that, firstly, the simulation results are in good agreement with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536

  14. Working Characteristics of Variable Intake Valve in Compressed Air Engine

    PubMed Central

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which allows the compressed air energy to be reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, a performance analysis was introduced for the design of the compressed air engine. Results show that, firstly, the simulation results are in good agreement with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536

  15. Variable valve timing in a homogenous charge compression ignition engine

    DOEpatents

    Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.

    2004-08-03

    The present invention relates generally to the field of homogenous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.

  16. Variable percolation threshold of composites with fiber fillers under compression

    NASA Astrophysics Data System (ADS)

    Lin, Chuan; Wang, Hongtao; Yang, Wei

    2010-07-01

    The piezoresistive effect in conducting fiber-filled composites has been studied with a continuum percolation model. Simulation was performed by a Monte Carlo method that took into account both deformation-induced fiber bending and rotation. The percolation threshold was found to rise with compression strain, which explains the observed positive piezoresistive coefficients in such composites. The simulations unveiled the effect of microstructure evolution during deformation: the fibers align perpendicularly to the compression direction, and as a fiber is bent, its effective length for forming a conductive network is shortened. Both effects contribute to a larger percolation threshold and imply a positive piezoresistive coefficient according to the universal power law.
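
A minimal continuum stick-percolation check in the spirit of this simulation (rigid straight sticks only; the paper's fiber bending and rotation are omitted) can be built from a segment-crossing test plus union-find:

```python
def _ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """Proper-crossing test (collinear overlaps ignored for simplicity)."""
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def percolates(sticks):
    """True if a connected cluster of sticks bridges x=0 to x=1 in the
    unit square. sticks: list of ((x1, y1), (x2, y2))."""
    n = len(sticks)
    LEFT, RIGHT = n, n + 1          # virtual boundary terminals
    parent = list(range(n + 2))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (a, b) in enumerate(sticks):
        if min(a[0], b[0]) <= 0.0:
            union(i, LEFT)
        if max(a[0], b[0]) >= 1.0:
            union(i, RIGHT)
        for j in range(i):
            c, d = sticks[j]
            if segments_cross(a, b, c, d):
                union(i, j)
    return find(LEFT) == find(RIGHT)
```

Sweeping the stick density and recording the fraction of random configurations for which `percolates` returns True is the basic Monte Carlo estimate of the threshold; compressing the sample then amounts to transforming the stick coordinates before the test.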

  17. Combustion engine variable compression ratio apparatus and method

    DOEpatents

    Lawrence; Keith E.; Strawbridge, Bryan E.; Dutart, Charles H.

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  18. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images of volunteers have been collected. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. 
The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  19. Aalto University Undergraduate Centre. Protected Alvar Aalto Building Awarded for Accessibility After Renovation.

    PubMed

    Raike, Antti; Ahlava, Antti; Tuomi, Teemu; Skyttä, Pauliina; Verma, Ira

    2016-01-01

    The main building of the former Helsinki University of Technology (TKK), designed by Alvar Aalto, is part of the cultural heritage of Finland. The building underwent a major renovation in 2011-2015 and has now become an award-winning Undergraduate Centre for the modern interdisciplinary education of Aalto University. This paper presents how the architectural masterpiece from the 1960s was renovated and updated into a modern and accessible university building. Particular attention was paid to entry to the building by wheelchair, pram and pushchair. The successful renovation was recognised in 2015 with the 'Esteetön Suomi -palkinto' (Accessible Finland Award), given every two years as a mark of recognition to activities or locations implementing the principles of accessibility and Universal Design for all on a broad scale and in a nationally significant way. PMID:27534312

  20. Scalar Damage Variable Determined in the Uniaxial and Triaxial Compression Conditions of Sandstone Samples

    NASA Astrophysics Data System (ADS)

    Cieślik, Jerzy

    2013-03-01

    The article is based on the results of uniaxial and triaxial compression tests performed on Wustenzeller sandstone. The first part of the article presents an overview of possible definitions of the damage variable describing the process of damage development, based on various hypotheses. In the main part, the author presents the results of laboratory investigations in which the state of damage and its changes in rock samples under uniaxial and triaxial compression conditions were observed. Using a modified triaxial test procedure, a definition of the damage variable determined from changes in the volumetric stiffness of the examined rock has been developed. The damage variable defined in this way, compared with a variable determined from axial stiffness changes, points to some anisotropy of the damage phenomenon. The results obtained from both methods have been compared, and relations determining the evolution of the damage variable in the loading process have been established.
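
A common scalar damage definition of the kind surveyed here measures damage from stiffness degradation; the sketch below uses the generic form D = 1 - K/K0 (the paper's modified volumetric-stiffness procedure is more involved than this):

```python
def damage_from_stiffness(k_current, k_initial):
    """Scalar damage D = 1 - K/K0 from degradation of a measured stiffness
    (axial or volumetric). D = 0 for the undamaged rock; D -> 1 as the
    stiffness is lost."""
    if not 0.0 < k_current <= k_initial:
        raise ValueError("expected 0 < K <= K0")
    return 1.0 - k_current / k_initial
```

Evaluating D from axial stiffness and from volumetric stiffness on the same sample, as the article does, gives two evolution curves whose divergence indicates anisotropy of the damage process.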

  1. Fixed-quality/variable bit-rate on-board image compression for future CNES missions

    NASA Astrophysics Data System (ADS)

    Camarero, Roberto; Delaunay, Xavier; Thiebaut, Carole

    2012-10-01

    The huge improvements in resolution and dynamic range of current [1][2] and future CNES remote sensing missions (from 5m/2.5m in Spot5 to 70cm in Pleiades) illustrate the increasing need for efficient on-board image compressors. Many techniques have been considered by CNES during the last years in order to go beyond usual compression ratios: new image transforms or post-transforms [3][4], exceptional processing [5], selective compression [6]. However, even if significant improvements have been obtained, none of those techniques has ever challenged an essential limitation of current on-board compression schemes: fixed rate (or compression ratio). This classical assumption provides highly predictable data volumes that simplify storage and transmission. On the other hand, it demands that every image segment (strip) of the scene be compressed within the same amount of data. This fixed bit-rate is therefore dimensioned on worst-case assessments to guarantee the quality requirements in all areas of the image, which is obviously not the most economical way of achieving the required image quality for every single segment. Thus, CNES has started a study to re-use existing compressors [7] in a fixed-quality/variable bit-rate mode. The main idea is to compute a local complexity metric in order to assign the optimum bit-rate to comply with quality requirements. Consequently, complex areas are less compressed than simple ones, offering better image quality for an equivalent global bit-rate. The "near-lossless bit-rate" of image segments has proven to be an efficient image complexity estimator: it links quality criteria and bit-rates through a single theoretical relationship. Compression parameters are thus computed automatically in accordance with the quality requirements. In addition, this complexity estimator could be implemented in a one-pass compression and truncation scheme.
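
The allocation idea can be sketched as follows, using each segment's near-lossless coded size as the complexity metric the abstract suggests; the proportional rule and the toy numbers are assumptions for illustration:

```python
import numpy as np

def allocate_rates(complexity_bits, total_budget):
    """Fixed-quality/variable-rate allocation sketch: assign the global bit
    budget to image segments in proportion to a per-segment complexity
    estimate (e.g. the segment's near-lossless coded size), so complex
    strips receive more bits than a fixed-rate scheme would give them."""
    c = np.asarray(complexity_bits, dtype=float)
    return total_budget * c / c.sum()

segments = np.array([120.0, 45.0, 300.0, 80.0])   # toy near-lossless sizes
rates = allocate_rates(segments, total_budget=1000.0)
```

Under a fixed-rate scheme every segment would get 250 bits here; the proportional rule instead concentrates the same global budget on the complex third segment, which is the quality-equalizing behavior the study aims for.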

  2. Acoustic transmission matrix of a variable area duct or nozzle carrying a compressible subsonic flow

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1980-01-01

    The differential equations governing the propagation of sound in a variable area duct or nozzle carrying a one dimensional subsonic compressible fluid flow are derived and put in state variable form using acoustic pressure and particle velocity as the state variables. The duct or nozzle is divided into a number of regions. The region size is selected so that in each region the Mach number can be assumed constant and the area variation can be approximated by an exponential area variation. Consequently, the state variable equation in each region has constant coefficients. The transmission matrix for each region is obtained by solving the constant coefficient acoustic state variable differential equation. The transmission matrix for the duct or nozzle is the product of the individual transmission matrices of each region. Solutions are presented for several geometries with and without mean flow.
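
The chaining step can be illustrated numerically, with the classical no-flow uniform-duct transmission matrix standing in for the paper's Mach-number-dependent, exponential-area region matrices (which the abstract describes but does not give):

```python
import numpy as np

def duct_matrix(k, length, z0=1.0):
    """Plane-wave transmission matrix of a lossless uniform duct WITHOUT
    mean flow, relating the state (p, u) at the two ends. Stand-in for the
    per-region matrices of the paper, which also carry Mach-number and
    exponential-area terms."""
    kl = k * length
    return np.array([[np.cos(kl), 1j * z0 * np.sin(kl)],
                     [1j * np.sin(kl) / z0, np.cos(kl)]])

# The overall matrix is the ordered product of the region matrices.
k = 2.0
region_lengths = [0.1, 0.25, 0.15]
T = np.eye(2, dtype=complex)
for L in region_lengths:
    T = duct_matrix(k, L) @ T
```

For this uniform stand-in the product of the three region matrices equals the single matrix for the total length, a useful consistency check; with the paper's region-dependent coefficients the factors would no longer commute, but the ordered product construction is the same.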

  3. Influence of variables on the consolidation and unconfined compressive strength of crushed salt: Technical report

    SciTech Connect

    Pfeifle, T.W.; Senseny, P.E.; Mellegard, K.D.

    1987-01-01

    Eight hydrostatic compression creep tests were performed on crushed salt specimens fabricated from Avery Island dome salt. Following the creep test, each specimen was tested in unconfined compression. The experiments were performed to assess the influence of the following four variables on the consolidation and unconfined strength of crushed salt: grain size distribution, temperature, time, and moisture content. The experiment design comprised a half-fraction factorial matrix at two levels. The levels of each variable investigated were grain size distribution, uniform-graded and well-graded (coefficient of uniformity of 1 and 8); temperature 25/sup 0/C and 100/sup 0/C; time, 3.5 x 10/sup 3/s and 950 x 10/sup 3/s (approximately 60 minutes and 11 days, respectively); and moisture content, dry and wet (85% relative humidity for 24 hours). The hydrostatic creep stress was 10 MPa. The unconfined compression tests were performed at an axial strain rate of 1 x 10/sup -5/s/sup -1/. Results show that the variables time and moisture content have the greatest influence on creep consolidation, while grain size distribution and, to a somewhat lesser degree, temperature have the greatest influence on total consolidation. Time and moisture content and the confounded two-factor interactions between either grain size distribution and time or temperature and moisture content have the greatest influence on unconfined strength. 7 refs., 7 figs., 11 tabs.

  4. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.
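
The filter-threshold idea can be illustrated in one dimension with an orthonormal Haar transform: detail coefficients below the threshold are discarded, much as the adaptive mesh omits the corresponding grid points. This is a generic sketch, not the authors' solver:

```python
import numpy as np

def haar_forward(x):
    """Multi-level orthonormal Haar transform (length must be a power of 2).
    Returns detail coefficients per level, coarsest average last."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    out = []
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # local averages
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # local details
        out.append(d)
        x[:n // 2] = a
        n //= 2
    out.append(x[:1].copy())
    return out

def haar_inverse(coeffs):
    a = coeffs[-1].copy()
    for d in reversed(coeffs[:-1]):
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2.0)
        up[1::2] = (a - d) / np.sqrt(2.0)
        a = up
    return a

def threshold(coeffs, eps):
    """Hard-threshold the details: the wavelet analogue of the refinement
    criterion that decides which grid points to keep."""
    return [np.where(np.abs(d) > eps, d, 0.0) for d in coeffs[:-1]] + [coeffs[-1]]
```

Because the transform is orthonormal, the squared reconstruction error equals exactly the energy of the discarded coefficients, which is what lets the threshold parameter act as a direct error bound in the numerical-fidelity mode described above.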

  5. A Variable Splitting based Algorithm for Fast Multi-Coil Blind Compressed Sensing MRI reconstruction

    PubMed Central

    Bhave, Sampada; Lingala, Sajan Goud; Jacob, Mathews

    2015-01-01

Recent work on blind compressed sensing (BCS) has shown that exploiting sparsity in dictionaries learnt directly from the data at hand can outperform compressed sensing (CS) with fixed dictionaries. A challenge with BCS, however, is the large computational complexity of its optimization, which limits its practical use in several MRI applications. In this paper, we propose a novel optimization algorithm that utilizes variable splitting strategies to significantly improve the convergence speed of the BCS optimization. The splitting allows us to efficiently decouple the sparse coefficient and dictionary update steps from the data fidelity term, resulting in subproblems with closed-form analytical solutions, which otherwise require slower iterative conjugate gradient algorithms. Through experiments on multi-coil parametric MRI data, we demonstrate the superior performance of BCS, while achieving convergence speedups of over 15-fold over the previously proposed implementation of the BCS algorithm. PMID:25570473
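
The benefit of variable splitting is that the decoupled sparse-coefficient subproblem has a closed-form solution instead of needing inner conjugate-gradient iterations. For an l1 penalty that solution is the classic soft-thresholding operator, shown below as a scalar sketch (names and numbers are ours, not the paper's).

```python
# Closed-form minimizer of the decoupled scalar subproblem
#   min_x 0.5*(x - z)**2 + lam*|x|
# known as soft thresholding: shrink z toward zero by lam.
def soft_threshold(z, lam):
    """Return the exact minimizer of 0.5*(x - z)**2 + lam*abs(x)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

print([soft_threshold(z, 1.0) for z in (-3.0, -0.5, 0.0, 2.5)])
# -> [-2.0, 0.0, 0.0, 1.5]
```

Because each coefficient update is this one-line formula, the per-iteration cost drops sharply, which is where the reported speedup comes from.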

  6. Structural Response of Compression-Loaded, Tow-Placed, Variable Stiffness Panels

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Guerdal, Zafer; Starnes, James H., Jr.

    2002-01-01

    Results of an analytical and experimental study to characterize the structural response of two compression-loaded variable stiffness composite panels are presented and discussed. These variable stiffness panels are advanced composite structures, in which tows are laid down along precise curvilinear paths within each ply and the fiber orientation angle varies continuously throughout each ply. The panels are manufactured from AS4/977-3 graphite-epoxy pre-preg material using an advanced tow placement system. Both variable stiffness panels have the same layup, but one panel has overlapping tow bands and the other panel has a constant-thickness laminate. A baseline cross-ply panel is also analyzed and tested for comparative purposes. Tests performed on the variable stiffness panels show a linear prebuckling load-deflection response, followed by a nonlinear response to failure at loads between 4 and 53 percent greater than the baseline panel failure load. The structural response of the variable stiffness panels is also evaluated using finite element analyses. Nonlinear analyses of the variable stiffness panels are performed which include mechanical and thermal prestresses. Results from analyses that include thermal prestress conditions correlate well with measured variable stiffness panel results. The predicted response of the baseline panel also correlates well with measured results.

  7. A numerical investigation of the finite element method in compressible primitive variable Navier-Stokes flow

    NASA Technical Reports Server (NTRS)

    Cook, C. H.

    1977-01-01

The results of a comprehensive numerical investigation of the basic capabilities of the finite element method (FEM) for numerical solution of compressible flow problems governed by the two-dimensional and axisymmetric Navier-Stokes equations in primitive variables are presented. The strong and weak points of the method as a tool for computational fluid dynamics are considered. The relation of the linear-element finite element method to finite difference methods (FDM) is explored. Calculations of free shear layers and of separated flows over aircraft boattail afterbodies with plume simulators indicate that the method's strongest assets are its capability for reliable and accurate calculation on variable grids, which readily approximate complex geometry and adapt to the presence of diverse regions of large solution gradients without the necessity of domain transformation.

  8. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control bypasses, and mass flow rate

    NASA Technical Reports Server (NTRS)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  9. Interfraction Liver Shape Variability and Impact on GTV Position During Liver Stereotactic Radiotherapy Using Abdominal Compression

    SciTech Connect

    Eccles, Cynthia L.; Dawson, Laura A.; Moseley, Joanne L.; Brock, Kristy K.

    2011-07-01

Purpose: For patients receiving liver stereotactic body radiotherapy (SBRT), abdominal compression can reduce organ motion, and daily image guidance can reduce setup error. The reproducibility of liver shape under compression may impact treatment delivery accuracy. The purpose of this study was to measure the interfractional variability in liver shape under compression, after best-fit rigid liver-to-liver registration from kilovoltage (kV) cone beam computed tomography (CBCT) scans to planning computed tomography (CT) scans, and its impact on gross tumor volume (GTV) position. Methods and Materials: Evaluable patients were treated in a Research Ethics Board-approved SBRT six-fraction study with abdominal compression. Kilovoltage CBCT scans were acquired before treatment and reconstructed as respiratory sorted CBCT scans offline. Manual rigid liver-to-liver registrations were performed from exhale-phase CBCT scans to exhale planning CT scans. Each CBCT liver was contoured, exported, and compared with the planning CT scan for spatial differences, using in-house-developed finite-element-model-based deformable registration (MORFEUS). Results: We evaluated 83 CBCT scans from 16 patients with 30 GTVs. The mean volume of liver that deformed by greater than 3 mm was 21.7%. Excluding 1 outlier, the maximum volume that deformed by greater than 3 mm was 36.3% in a single patient. Over all patients, the absolute maximum deformations in the left-right (LR), anterior-posterior (AP), and superior-inferior directions were 10.5 mm (SD, 2.2), 12.9 mm (SD, 3.6), and 5.6 mm (SD, 2.7), respectively. The absolute mean predicted impact of liver volume displacements on GTV, using center of mass displacements, was 0.09 mm (SD, 0.13), 0.13 mm (SD, 0.18), and 0.08 mm (SD, 0.07) in the left-right, anterior-posterior, and superior-inferior directions, respectively. Conclusions: Interfraction liver deformations in patients undergoing SBRT under abdominal compression after rigid liver

  10. Lateral-torsional buckling of compressed and highly variable cross section beams

    NASA Astrophysics Data System (ADS)

    Mascolo, Ida; Pasquino, Mario

    2016-06-01

At the critical state of a beam under central compression, a flexural-torsional equilibrium shape becomes possible in addition to the fundamental straight equilibrium shape and Euler bending. In particular, a torsional configuration arises whenever the line of shear centres does not coincide with the line of centres of mass. This condition is studied here for a beam whose cross section varies strongly along the z-axis, under the assumptions that the shear centres are aligned and that the line of centres does not deform. For this purpose, an open thin-walled C-cross section is considered, with flange width and web height varying linearly along the z-axis, so that the shear-centre axis is approximately aligned with the axis of the centres of gravity. The differential equations governing the problem are then obtained. Because of the section variability, numerical integration of these equations to obtain the true critical load is complex and lengthy; an energetic formulation of the problem is therefore given via the theorem of minimum total potential energy (Ritz-Rayleigh method). An experimental validation of the proposed model is planned.
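
The Ritz-Rayleigh idea can be illustrated in its simplest setting: estimate the critical load of a uniform pinned-pinned column from the Rayleigh quotient with a trial deflection shape. For the uniform column the sine trial shape is exact, so the quotient reproduces the Euler load π²EI/L²; the paper applies the same principle with section properties varying along the z-axis. All numbers below are illustrative, not from the paper.

```python
# Rayleigh-quotient estimate of the buckling load of a pinned-pinned column:
#   P_cr ~= integral(EI * w''^2) / integral(w'^2)  with trial w = sin(pi x / L).
import math

def rayleigh_buckling_load(EI, L, n=10000):
    """Trapezoid-rule evaluation of the Rayleigh quotient for the sine trial shape."""
    h = L / n
    num = den = 0.0
    for i in range(n + 1):
        x = i * h
        w2 = -(math.pi / L) ** 2 * math.sin(math.pi * x / L)  # second derivative w''
        w1 = (math.pi / L) * math.cos(math.pi * x / L)        # first derivative w'
        weight = 0.5 if i in (0, n) else 1.0
        num += weight * EI * w2 * w2 * h
        den += weight * w1 * w1 * h
    return num / den

EI, L = 210e9 * 1e-6, 2.0  # e.g. a steel section with I = 1e-6 m^4, 2 m column
print(rayleigh_buckling_load(EI, L) / (math.pi ** 2 * EI / L ** 2))  # -> ~1.0
```

With variable EI(x) and a constrained trial family the same quotient gives an upper bound on the true critical load, which is why the energetic route avoids the lengthy numerical integration of the governing equations.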

  11. Performance and exhaust emission characteristics of variable compression ratio diesel engine fuelled with esters of crude rice bran oil.

    PubMed

    Vasudeva, Mohit; Sharma, Sumeet; Mohapatra, S K; Kundu, Krishnendu

    2016-01-01

As a substitute for petroleum-derived diesel, biodiesel has high potential as a renewable and environmentally friendly energy source. For petroleum-importing countries, the choice of feedstock for biodiesel production within the geographical region is a major influential factor. Crude rice bran oil is found to be a good and viable feedstock for biodiesel production. A two-step esterification is carried out for the high-free-fatty-acid crude rice bran oil. Blends of 10, 20 and 40 % by vol. crude rice bran biodiesel are tested in a variable compression ratio diesel engine at compression ratios of 15, 16, 17 and 18. Engine performance and exhaust emission parameters are examined, and the cylinder pressure-crank angle variation is also plotted. The increase in compression ratio from 15 to 18 resulted in an 18.6 % decrease in brake specific fuel consumption and a 14.66 % increase in brake thermal efficiency on average. Cylinder pressure increases by 15 % when the compression ratio is increased. Carbon monoxide emission decreased by 22.27 %, hydrocarbon decreased by 38.4 %, carbon dioxide increased by 17.43 % and oxides of nitrogen (NOx) increased by 22.76 % on average when the compression ratio was increased from 15 to 18. The blends of crude rice bran biodiesel show better results than diesel as the compression ratio increases. PMID:27066330
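
The direction of the reported efficiency trend matches ideal-cycle theory: air-standard cycle efficiency rises with compression ratio. The sketch below uses the Otto-cycle formula η = 1 - r^(1-γ) as a back-of-envelope check; the test engine is a diesel with real losses, so only the trend, not the magnitude, carries over.

```python
# Air-standard Otto-cycle thermal efficiency as a function of compression
# ratio r, evaluated at the four ratios used in the study (gamma = 1.4 for air).
def otto_efficiency(r, gamma=1.4):
    """Ideal-cycle thermal efficiency: eta = 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

for r in (15, 16, 17, 18):
    print(r, round(otto_efficiency(r), 4))
```

Efficiency increases monotonically from r = 15 to r = 18, consistent with the measured gain in brake thermal efficiency.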

  13. Development and validation of a turbulent-mix model for variable-density and compressible flows

    NASA Astrophysics Data System (ADS)

    Banerjee, Arindam; Gore, Robert A.; Andrews, Malcolm J.

    2010-10-01

    The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.

  14. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
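
This coder is the lineage of the Rice algorithm, whose core building block is a variable-length code in which each nonnegative residual n is split into a unary-coded quotient n >> k and a k-bit remainder. The sketch below encodes and decodes one block with a fixed k; the adaptive system described above additionally selects among codes per 21-pixel block, which is not shown here.

```python
# Minimal Rice/Golomb-power-of-two coder: unary quotient + k-bit remainder.
def rice_encode(values, k):
    """Encode nonnegative integers as a bit string with Rice parameter k."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")                          # unary quotient, 0-terminated
        bits.append(format(r, "0{}b".format(k)) if k else "")  # k-bit remainder
    return "".join(bits)

def rice_decode(bits, k, count):
    """Decode `count` integers from a Rice-coded bit string."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == "1":
            q, i = q + 1, i + 1
        i += 1                                              # skip the terminating 0
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

block = [3, 0, 7, 2, 5, 1]
coded = rice_encode(block, k=2)
print(rice_decode(coded, k=2, count=len(block)) == block)  # -> True
```

Small residuals (the common case after sample-to-sample prediction) get short codewords, which is how the coder tracks the difference entropy without storing any code tables.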

  15. The observed compression and expansion of the F2 ionosphere as a major component of ionospheric variability

    NASA Astrophysics Data System (ADS)

    Lynn, K. J. W.; Gardiner-Garden, R. S.; Heitmann, A.

    2016-05-01

This paper examines a number of sources of ionospheric variability and demonstrates that they have relationships in common which are currently not recognized. The paper initially deals with medium- to large-scale traveling ionospheric disturbances (TIDs). The following sections deal with non-TID ionospheric variations, which are often repetitious from day to day. The latter include the temporary rise in F2 height associated with sunset at equatorial latitudes, resulting from strong upward drift in ionization driven by an E × B force. The subsequent fall in height is often referred to as the premidnight collapse and is accompanied by a temporary increase in foF2 as a result of ionospheric compression. An entirely different repetitious phenomenon, reported recently from middle latitudes in the Southern Hemisphere, consists of strong morning and afternoon peaks in foF2 which define a midday bite-out and occur at the equinoxes. This behavior has been speculated to be tidal in origin. All the sources of ionospheric variability listed above exhibit similar relationships: a temporary expansion and upward lift of the ionospheric profile, followed by a fall involving a compression of the profile that produces a peak in foF2 at the time of maximum compression. Such compression/decompression is followed by a period in which the ionospheric profile recovers. These relationships have been noted previously in TIDs; the present paper establishes for the first time that they are also present in association with other drivers of ionospheric variability.

  16. Compression of ECG signals using variable-length classified vector sets and wavelet transforms

    NASA Astrophysics Data System (ADS)

    Gurkan, Hakan

    2012-12-01

In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines modeling of the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS from the ECG signals, exploiting the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by an energy-based segmentation method; they are provided to both the transmitter and the receiver in the proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, as supported by the clinical tests we have carried out.
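
The PRD metric used above has a standard form, sketched below; the paper's modified PRD (MPRD) normalizes differently (e.g. baseline-subtracted), which is not shown here.

```python
# Percentage root-mean-square difference between an original signal and its
# reconstruction: PRD = 100 * sqrt( sum (x - y)^2 / sum x^2 ).
import math

def prd(original, reconstructed):
    """PRD distortion metric; 0.0 means a perfect reconstruction."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

x = [1.0, 2.0, 3.0, 4.0]
print(prd(x, x))                                  # -> 0.0
print(round(prd(x, [1.1, 2.0, 3.0, 4.0]), 3))     # small distortion, small PRD
```

Because the denominator is the signal energy, PRD is scale-invariant, which makes it comparable across records of different amplitude.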

  17. Improving smooth muscle cell exposure to drugs from drug-eluting stents at early time points: a variable compression approach.

    PubMed

    O'Connell, Barry M; Cunnane, Eoghan M; Denny, William J; Carroll, Grainne T; Walsh, Michael T

    2014-08-01

The emergence of drug-eluting stents (DES) as a viable replacement for bare metal stenting has led to a significant decrease in the incidence of clinical restenosis. This is due to the transport of anti-restenotic drugs from within the polymer coating of a DES into the artery wall, which arrests the cell cycle before restenosis can occur. The efficacy of DES is still under close scrutiny in the medical field, as many issues regarding the effectiveness of DES drug transport in vivo still exist. One such issue that has received less attention is the limiting effect that stent strut compression has on the transport of drug species in the artery wall. Once the artery wall is compressed, the stent's ability to transfer drug species into the arterial wall can be reduced. This leads to a reduction in the spatial therapeutic transfer of drug species to binding sites within the arterial wall. This paper investigates the concept of idealised variable compression as a means of demonstrating how such a stent design approach could improve the spatial delivery of drug species in the arterial wall. The study focused on assessing how the trends in concentration levels changed as a result of artery wall compression. Five idealised stent designs were created with a combination of thick struts that provide the necessary compression to restore luminal patency and thin, uncompressive struts that improve the transport of drugs therein. By conducting numerical simulations of diffusive mass transport, this study found that the use of uncompressive struts results in a more uniform spatial distribution of drug species in the arterial wall.

  18. A burst compression and expansion technique for variable-rate users in satellite-switched TDMA networks

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1990-01-01

A burst compression and expansion technique is described for asynchronously interconnecting variable-data-rate users with cost-efficient ground terminals in a satellite-switched, time-division-multiple-access (SS/TDMA) network. Compression and expansion buffers in each ground terminal convert between lower rate, asynchronous, continuous-user data streams and higher-rate TDMA bursts synchronized with the satellite-switched timing. The technique described uses a first-in, first-out (FIFO) memory approach which enables the use of inexpensive clock sources by both the users and the ground terminals and obviates the need for elaborate user clock synchronization processes. A continuous range of data rates from kilobits per second to that approaching the modulator burst rate (hundreds of megabits per second) can be accommodated. The technique was developed for use in the NASA Lewis Research Center System Integration, Test, and Evaluation (SITE) facility. Some key features of the technique have also been implemented in the ground terminals developed at NASA Lewis for use in on-orbit evaluation of the Advanced Communications Technology Satellite (ACTS) high burst rate (HBR) system.
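
The FIFO idea can be shown in miniature: a compression buffer accepts a slow, continuous user stream and releases it in fixed-size bursts at the high-rate TDMA side, preserving order without aligning the user and terminal clocks. The burst size and sample stream below are illustrative, not parameters of the SITE hardware.

```python
# Toy compression buffer: continuous low-rate writes in, fixed-size bursts out.
from collections import deque

class CompressionBuffer:
    """First-in, first-out buffer that emits fixed-size bursts."""
    def __init__(self, burst_size):
        self.burst_size = burst_size
        self.fifo = deque()

    def write(self, sample):          # continuous low-rate user side
        self.fifo.append(sample)

    def read_burst(self):             # high-rate TDMA side
        if len(self.fifo) < self.burst_size:
            return None               # not enough data buffered yet
        return [self.fifo.popleft() for _ in range(self.burst_size)]

buf = CompressionBuffer(burst_size=4)
received = []
for sample in range(10):              # the user writes 10 samples
    buf.write(sample)
    burst = buf.read_burst()
    if burst:
        received.extend(burst)
print(received)  # -> [0, 1, 2, 3, 4, 5, 6, 7]; two samples remain buffered
```

The expansion buffer on the receiving terminal is the mirror image: bursts in, a continuous stream out at the user's own clock rate.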

  20. The structure of variable property, compressible mixing layers in binary gas mixtures

    NASA Technical Reports Server (NTRS)

    Kozusko, F.; Grosch, C. E.; Jackson, T. L.; Kennedy, Christopher A.; Gatski, Thomas B.

    1996-01-01

We present the results of a study of the structure of a parallel compressible mixing layer in a binary mixture of gases. The gases included in this study are hydrogen (H2), helium (He), nitrogen (N2), oxygen (O2), neon (Ne) and argon (Ar). Profiles of the variation of the Lewis and Prandtl numbers across the mixing layer for all thirty combinations of gases are given. It is shown that the Lewis number can vary by as much as a factor of eight and the Prandtl number by a factor of two across the mixing layer. Thus assuming constant values for the Lewis and Prandtl numbers of a binary gas mixture in the shear layer, as is done in many theoretical studies, is a poor approximation. We also present profiles of the velocity, mass fraction, temperature and density for representative binary gas mixtures at zero and supersonic Mach numbers. We show that the shape of these profiles is strongly dependent on which gases are in the mixture as well as on whether the denser gas is in the fast stream or the slow stream.

  1. On Fully Developed Channel Flows: Some Solutions and Limitations, and Effects of Compressibility, Variable Properties, and Body Forces

    NASA Technical Reports Server (NTRS)

    Maslen, Stephen H.

    1959-01-01

An examination of the effects of compressibility, variable properties, and body forces on fully developed laminar flow has indicated several limitations on such streams. In the absence of a pressure gradient, but in the presence of a body force (e.g., gravity), an exact fully developed gas flow results. For a liquid this also follows for the case of a constant streamwise pressure gradient. These motions are exact in the sense of a Couette flow. In the liquid case two solutions (not a new result) can occur for the same boundary conditions. An approximate analytic solution was found which agrees closely with machine calculations. In the case of approximately exact flows, it turns out that for large temperature variations across the channel the effects of convection (due to, say, a wall temperature gradient) and frictional heating must be negligible. In such a case the energy and momentum equations are separated, and the solutions are readily obtained. If the temperature variations are small, then both convection effects and frictional heating can consistently be considered. This case becomes the constant-property incompressible case (or quasi-incompressible case for free-convection flows) considered by many authors. Finally there is a brief discussion of cases wherein streamwise variations of all quantities are allowed, but only in such form that the independent variables are separable. For the case where the streamwise velocity varies inversely as the square root of the distance along the channel, a solution is given.

  2. Assessing impact of formulation and process variables on in-vitro performance of directly compressed abuse deterrent formulations.

    PubMed

    Rahman, Ziyaur; Yang, Yang; Korang-Yeboah, Maxwell; Siddiqui, Akhtar; Xu, Xiaoming; Ashraf, Muhammad; Khan, Mansoor A

    2016-04-11

Abuse and misuse of prescription drug products is epidemic in the United States, and opioid drugs form a major portion of prescription drug abuse. Abuse-deterrent formulation (ADF) is one of many approaches taken by sponsors to tackle this problem: it involves formulating opioids into dosage forms that are difficult to abuse or misuse. The current investigation focused on evaluating the abuse-deterrent properties (ADP) of an ADF manufactured by the direct compression method. The effect of process and formulation variables on ADP was investigated by statistical design of experiments (fractional factorial design). The independent factors studied were the molecular weight of polyethylene oxide (Polyox™), curing time, temperature and method, and antioxidant type. Sotalol hydrochloride was selected as a model drug. The ADP investigated were hardness/crush resistance, syringeability and injectability, physical manipulation (reduction into powder), and drug extraction in water and alcohol. Hardness and syringeability were evaluated by a newly developed quantitative procedure. Other properties were also investigated, such as morphology, crystallinity, assay and dissolution. Hardness and drug extraction were significantly (p<0.05) affected by curing temperature. Formulations could be powdered in 3 min irrespective of their hardness. Syringeability and injectability were intrinsic properties of the polymer used in the formulation and were not affected by the investigated factors. Crystallinity of the polymer and drug changed, depending on curing temperature and time. Dissolution and assay were independent of the formulation and process parameters studied. In conclusion, the study indicated some advantages of the ADF product compared with a non-ADF prepared by direct compression. However, the ADF should be viewed not as an abuse-proof product but as an incrementally improved one.

  3. Spatially Variable Compressibility Estimation Using the Ensemble Smoother with Bathymetry Observations: Application to the Maja Gas Reservoir

    NASA Astrophysics Data System (ADS)

    Zoccarato, C.; Bau, D.; Teatini, P.

    2015-12-01

A data assimilation (DA) framework is established to characterize the geomechanical response of a strongly compartmentalized hydrocarbon reservoir. The available observations over the offshore gas field consist of bathymetric surveys carried out before and at the end of the ten-year production life. The time-lapse map of vertical displacements is used to infer the most important parameter characterizing the reservoir compaction, i.e., the rock formation compressibility cm. The methodology is tested for two different conceptual models: (a) cm varies with depth and the vertical effective stress (heterogeneity due to lithostratigraphic variability) and (b) cm also varies horizontally within the stratigraphic unit. The latter hypothesis is made to account for the behavior of the partitioned reservoir due to the presence of sealing faults and thrusts, which suggest the idea of a block-heterogeneous cm. The calibration of the geomechanical parameters is obtained with the aid of the Ensemble Smoother, an ensemble-based DA analysis scheme. In scenario (b), the number of reservoir blocks dictates the set of uncertain parameters, whereas scenario (a) is characterized by only one uncertain parameter. The outcome from scenario (a) indicates that DA is effective in reducing the cm uncertainty; however, the maximum measured settlement is underestimated, with an overestimation of the areal extent of the subsidence bowl. Significant improvements are obtained in scenario (b), where the maximum model overestimate is reduced by about 25% and an overall good match of the measured bathymetry is achieved.
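
A one-parameter sketch of the Ensemble Smoother update: each ensemble member's parameter is shifted by a Kalman-type gain times its mismatch with the (perturbed) observation, K = cov(m, d) / (var(d) + R). The linear toy forward model (settlement proportional to cm) and all numbers below are illustrative, not the reservoir model of the study.

```python
# Ensemble Smoother update of a scalar parameter against one observation.
import random

def ensemble_smoother_update(members, forward, d_obs, obs_var):
    """Shift each member toward values that explain the observed datum."""
    d = [forward(m) for m in members]
    m_mean = sum(members) / len(members)
    d_mean = sum(d) / len(d)
    cov_md = sum((m - m_mean) * (di - d_mean)
                 for m, di in zip(members, d)) / (len(d) - 1)
    var_d = sum((di - d_mean) ** 2 for di in d) / (len(d) - 1)
    gain = cov_md / (var_d + obs_var)            # Kalman-type gain
    return [m + gain * (d_obs + random.gauss(0, obs_var ** 0.5) - di)
            for m, di in zip(members, d)]

random.seed(0)
forward = lambda cm: 10.0 * cm                   # toy model: settlement = 10 * cm
prior = [random.gauss(1.0, 0.3) for _ in range(200)]
post = ensemble_smoother_update(prior, forward, d_obs=5.0, obs_var=0.01)
# The posterior ensemble clusters near cm = 0.5, the value explaining the datum,
# with a much smaller spread than the prior.
```

In scenario (b) the same update runs with one cm per reservoir block, so the state vector and the gain become multidimensional, but the structure of the scheme is unchanged.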

  4. Effects of density, velocity gradient, and compressibility on side-jet formation in round jets with variable density

    NASA Astrophysics Data System (ADS)

    Muramatsu, Akinori

    2013-11-01

When a gas of lower density than the ambient gas is discharged from a round nozzle, side jets (radial ejections of jet fluid) are generated in the initial region of the jet. The density ratio between the jet fluid and the ambient fluid is the main parameter for side-jet formation. Since side-jet formation is also related to the instability of the shear layer, it depends on the velocity gradient of the shear layer in the jet. The velocity gradient is evaluated by the ratio of the momentum thickness to the nozzle diameter at the nozzle exit. Compressibility suppresses the instability and the generation of side jets; it is evaluated by a Mach number defined as the ratio of the issuing velocity of the jet to the sound velocity in the ambient fluid. The influence of these three parameters on side-jet formation was examined experimentally. The density ratio and momentum thickness ratio were varied from 0.14 to 1.53 and from 14 to 155, respectively, and the Mach number was varied up to 0.7. The existence of side jets was confirmed by flow visualization using a laser sheet. The domains of side-jet formation in terms of the density ratio, the momentum thickness ratio, and the Mach number were determined.

  5. Supercharged two-cycle engines employing novel single element reciprocating shuttle inlet valve mechanisms and with a variable compression ratio

    NASA Technical Reports Server (NTRS)

    Wiesen, Bernard (Inventor)

    2008-01-01

    This invention relates to novel reciprocating shuttle inlet valves, effective with every type of two-cycle engine, from small high-speed single cylinder model engines, to large low-speed multiple cylinder engines, employing spark or compression ignition. Also permitting the elimination of out-of-phase piston arrangements to control scavenging and supercharging of opposed-piston engines. The reciprocating shuttle inlet valve (32) and its operating mechanism (34) is constructed as a single and simple uncomplicated member, in combination with the lost-motion abutments, (46) and (48), formed in a piston skirt, obviating the need for any complex mechanisms or auxiliary drives, unaffected by heat, friction, wear or inertial forces. The reciprocating shuttle inlet valve retains the simplicity and advantages of two-cycle engines, while permitting an increase in volumetric efficiency and performance, thereby increasing the range of usefulness of two-cycle engines into many areas that are now dominated by the four-cycle engine.

  6. Variable-Length Character String Analyses of Three Data-Bases, and their Application for File Compression.

    ERIC Educational Resources Information Center

    Barton, Ian J.; And Others

    A novel text analysis and characterization method involves the generation from text samples of sets of variable-length character strings. These sets are intermediate in number between the character set and the total number of words in a data base; their distribution is less disparate than those of either characters or words. The size of the sets…

  7. Hierarchical Order of Influence of Mix Variables Affecting Compressive Strength of Sustainable Concrete Containing Fly Ash, Copper Slag, Silica Fume, and Fibres

    PubMed Central

    Natarajan, Sakthieswaran; Karuppiah, Ganesan

    2014-01-01

    Experiments have been conducted to study the effect of adding fly ash, copper slag, and steel and polypropylene fibres on the compressive strength of concrete, and to determine by cluster analysis the hierarchical order in which the mix variables influence the strength. While fly ash and copper slag were used for partial replacement of cement and fine aggregate, respectively, defined quantities of steel and polypropylene fibres were added to the mixes. It is found from the experimental study that, in general, irrespective of the presence or absence of fibres, (i) for a given copper slag-fine aggregate ratio, the concrete strength decreases as the fly ash-cement ratio increases, and the rate of this decrease grows with the copper slag-sand ratio, and (ii) for a given fly ash-cement ratio, increasing the copper slag-fine aggregate ratio increases the strength of the concrete. From the cluster analysis, it is found that the quantities of coarse and fine aggregate present have a high influence on the strength. It is also observed that the quantities of fly ash and copper slag used as substitutes have an equal “influence” on the strength. The cluster analysis also reveals that the addition of fibres has only a marginal effect on the compressive strength of concrete. PMID:24707213


  9. Influence of admixed citric acid and physiological variables on the vinpocetine release from sodium alginate compressed matrix tablets.

    PubMed

    Nie, Shufang; Wu, Jie; Liu, Hui; Pan, Weisan; Liu, Yanli

    2011-08-01

    In this study, the controlled release matrix tablets of vinpocetine were prepared by direct compression using sodium alginate (SAL) as hydrophilic polymer and different amounts of citric acid as hydrosoluble acidic excipient to set up a system bringing about zero-order release of this drug in distilled water containing 0.5% sodium dodecyl sulfate. At the critical content of admixed citric acid (60 mg/tab.), the lowest drug-release rate was observed. In order to explain the effect of this critical content on the drug-release rate from SAL matrices, the possibility of interaction of citric acid with SAL was investigated using differential scanning calorimetric analysis and infrared analysis, which confirmed the existence of a direct citric acid-SAL interaction when these two excipients came in contact with water. A zero-order drug-release system could be obtained by regulating the ratio of citric acid to SAL, and the capacity of this system in controlling the drug-release rate depended on the extent of interaction between citric acid and SAL. It is worth noting that the pH and the ionic strength of the dissolution medium were found to exert an influence on the drug-release performance of SAL tablets.

  10. Accelerated MRI with CIRcular Cartesian UnderSampling (CIRCUS): a variable density Cartesian sampling strategy for compressed sensing and parallel imaging

    PubMed Central

    Saloner, David

    2014-01-01

    Purpose: This study proposes and evaluates a novel method for generating efficient undersampling patterns for 3D Cartesian acquisition with compressed sensing (CS) and parallel imaging (PI). Methods: The image quality achieved with schemes that accelerate data acquisition, including CS and PI, is sensitive to the design of the specific undersampling scheme used. Ideally, random sampling is required to recover MR images from undersampled data with CS; in practice, pseudo-random sampling schemes are usually applied. Radial or spiral sampling, for either Cartesian or non-Cartesian acquisitions, has been used because of favorable features such as interleaving flexibility. In this study, we propose to undersample data on the ky-kz plane of the 3D Cartesian acquisition by circularly selecting sampling points in a way that maintains the features of both random and radial or spiral sampling. Results: Based on results with retrospective undersampling, the proposed sampling scheme is shown to outperform conventional random and radial or spiral samplings for 3D Cartesian acquisition and is found to be comparable to advanced variable-density Poisson-Disk sampling (vPDS) while retaining interleaving flexibility for dynamic imaging. Our preliminary results with a prospective implementation of the proposed undersampling strategy demonstrated its favorable features. Conclusions: The proposed undersampling patterns for 3D Cartesian acquisition possess the desirable properties of randomization and radial or spiral trajectories. They provide easy implementation, flexible sampling, and high accuracy of image reconstruction with CS and PI. PMID:24649436
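To make the idea of variable-density undersampling on the ky-kz plane concrete, here is a generic mask generator that samples more densely near the k-space centre. This is NOT the CIRCUS pattern itself; the fall-off law and every parameter are invented for illustration:

```python
# Generic variable-density random undersampling mask for a ky-kz plane.
# The probability law (1 - r)^2 + 1/accel is an arbitrary illustration.
import math
import random

def vd_mask(n, accel, seed=0):
    """Return an n x n boolean mask; sampling probability falls off with radius."""
    rng = random.Random(seed)
    c = (n - 1) / 2.0
    mask = [[False] * n for _ in range(n)]
    for ky in range(n):
        for kz in range(n):
            r = math.hypot(ky - c, kz - c) / (c * math.sqrt(2))  # 0 at centre, 1 at corner
            p = min(1.0, (1.0 - r) ** 2 + 1.0 / accel)
            if rng.random() < p:
                mask[ky][kz] = True
    return mask

mask = vd_mask(64, accel=4)
kept = sum(row.count(True) for row in mask)
print(0 < kept < 64 * 64)
```

The centre of k-space is always sampled (probability saturates at 1 there), while the periphery is sampled sparsely, which is the property CS reconstructions rely on.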

  11. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, especially for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of DNA sequence (exact repeats and reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
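The bits/base figures quoted above are measured against the trivial baseline of packing each base into a fixed 2-bit code. That baseline (not the DNABIT Compress algorithm itself, which assigns variable-length codes to repeated fragments) can be sketched as:

```python
# Naive fixed 2-bits-per-base packing of a DNA string: the 2.0 bits/base
# upper bound that repeat-aware coders like DNABIT Compress improve upon.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string into bytes, 4 bases per byte, first base in high bits."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        chunk = seq[i:i + 4]
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(chunk))   # left-align a final partial chunk
        out.append(b)
    return bytes(out)

seq = "ACGTACGTAC"
packed = pack(seq)
bits_per_base = 8 * len(packed) / len(seq)
print(len(packed), round(bits_per_base, 2))
```

Here the 10-base string needs 3 bytes, i.e. 2.4 bits/base including the padding of the last partial byte; on long sequences the rate approaches 2.0, against which the reported 1.58 bits/base is the improvement.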

  12. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
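The core idea of the patent, under my own simplified reading, is that quantized indices are uncertain by one unit, so nudging an index by one to a chosen parity can carry an auxiliary bit without perceptibly changing the reconstruction. A toy sketch (the parity convention and all names here are assumptions, not the patented method):

```python
# Toy index-parity embedding: each quantized index carries one auxiliary bit.

def embed(indices, bits):
    """Return indices adjusted so that each index's parity equals its bit."""
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:
            idx += 1 if idx % 2 == 0 else -1   # move by exactly one unit
        out.append(idx)
    return out

def extract(indices):
    """Recover the auxiliary bits from index parities."""
    return [idx % 2 for idx in indices]

quantized = [12, 7, 7, 3, 40, 41]        # e.g. entropy-coder input indices
payload = [1, 0, 1, 1, 0, 0]             # auxiliary data to hide
stego = embed(quantized, payload)
print(extract(stego) == payload)
print(all(abs(a - b) <= 1 for a, b in zip(quantized, stego)))
```

No index moves by more than one unit, matching the "uncertainty of value by one unit" the abstract relies on.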

  13. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  14. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  15. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  16. Knock-Limited Performance of Triptane and 28-R Fuel Blends as Affected by Changes in Compression Ratio and in Engine Operating Variables

    NASA Technical Reports Server (NTRS)

    Brun, Rinaldo J.; Feder, Melvin S.; Fisher, William F.

    1947-01-01

    A knock-limited performance investigation was conducted on blends of triptane and 28-R fuel with a 12-cylinder, V-type, liquid-cooled aircraft engine of 1710-cubic-inch displacement at three compression ratios: 6.65, 7.93, and 9.68. At each compression ratio, the effects of changes in the temperature of the inlet air to the auxiliary-stage supercharger and in fuel-air ratio were investigated at engine speeds of 2280 and 3000 rpm. The results show that knock-limited engine performance, as improved by the use of triptane, allowed operation at both take-off and cruising power at a compression ratio of 9.68. At an inlet-air temperature of 60 deg F, an engine speed of 3000 rpm, and a fuel-air ratio of 0.095 (approximately take-off conditions), a knock-limited engine output of 1500 brake horsepower was possible with 100-percent 28-R fuel at a compression ratio of 6.65; 20-percent triptane was required for the same power output at a compression ratio of 7.93, and 75 percent at a compression ratio of 9.68 allowed an output of 1480 brake horsepower. Knock-limited power output was more sensitive to changes in fuel-air ratio as the engine speed was increased from 2280 to 3000 rpm, as the compression ratio was raised from 6.65 to 9.68, or as the inlet-air temperature was raised from 0 deg to 120 deg F.

  17. Linear analysis on the onset of thermal convection of highly compressible fluids with variable physical properties: Implications for the mantle convection of super-Earths

    NASA Astrophysics Data System (ADS)

    Kameyama, Masanori

    2016-02-01

    Our linear analysis of the onset of thermal convection was applied to highly compressible fluids in a planar layer whose thermal conductivity and viscosity vary in space, in order to study the influences of the spatial variations in physical properties expected in the mantles of massive terrestrial planets. The thermal conductivity and viscosity are assumed to depend exponentially on depth and temperature, respectively, while the variations in thermodynamic properties (thermal expansivity and reference density) with depth are taken to be relevant for super-Earths with 10 times the Earth's mass. Our analysis demonstrated that the nature of incipient thermal convection is strongly affected by the interplay between adiabatic compression and the spatial variations in the physical properties of the fluids. Owing to the effects of adiabatic compression, a 'stratosphere' can occur in the deep mantles of super-Earths, where vertical motion is insignificant. The emergence of a 'stratosphere' is greatly enhanced by an increase in thermal conductivity with depth, while it is suppressed by a decrease in thermal expansivity with depth. In addition, through the interplay between the static stability and the strong temperature dependence of viscosity, convection cells tend to be confined to narrow regions around the 'tropopause' at the interface between the 'stratosphere' of stable stratification and the 'troposphere' of unstable stratification. We also found that, depending on the variations in physical properties, two kinds of stagnant regions can develop separately in the fluid layer. One is the well-known 'stagnant lid' of cold and highly viscous fluid, and the other is a 'basal stagnant region' of hot and less viscous fluid. The occurrence of 'basal stagnant regions' may imply that convecting motions can be insignificant in the lowermost part of the mantles of massive super-Earths, even in the absence of a strong increase in viscosity with pressure (or depth).

  18. Effect of processing variables (different compression packing processes and investment material types) and time on the dimensional accuracy of polymethyl methacrylate denture bases.

    PubMed

    Baydas, Seyfettin; Bayindir, Funda; Akyil, M Samil

    2003-06-01

    In this study we determined the effect of different compression packing processes, investment materials (a hemihydrate and a dental stone) and time on the dimensional accuracy of polymethyl methacrylate denture bases. Square stainless steel plates (15 mm x 15 mm x 5 mm) were prepared to make the acrylic resin specimens. The linear dimensional changes of the acrylic resin were determined by measuring the distances between fixed points. Measurements were made at 24 hours, 48 hours, 12 days and 30 days after setting with a digital compass. Dimensional changes of test specimens that were obtained with three different flasks and two press techniques were compared by univariate analysis. Measurements of the linear dimensions of specimens cured by different compression packing techniques suggested that differences existed, while the differences between time intervals were not significant. According to the results, flask and investment material types affect the dimensional accuracy of the test specimens (p < 0.05). The least dimensional change was observed in the specimens obtained with the Type 1 flask-dental stone-manual press combination.

  19. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  20. A Reweighted ℓ1-Minimization Based Compressed Sensing for the Spectral Estimation of Heart Rate Variability Using the Unevenly Sampled Data

    PubMed Central

    Chen, Szi-Wen; Chao, Shih-Chieh

    2014-01-01

    In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessments. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), which measures the percentage of difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from unevenly sampled RR
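The Percent Error Power (PEP) measure described in this abstract is a percentage difference between a reference spectrum and an estimated one. A minimal sketch; the exact normalisation (summed absolute difference over summed reference power) is my assumption, not necessarily the paper's definition:

```python
# Percent Error Power between two power spectral densities on matching bins.

def pep(true_psd, est_psd):
    """100 * sum(|P_true - P_est|) / sum(P_true)."""
    num = sum(abs(t - e) for t, e in zip(true_psd, est_psd))
    den = sum(true_psd)
    return 100.0 * num / den

reference = [4.0, 3.0, 2.0, 1.0]   # invented PSD values over four bins
estimate = [4.0, 2.8, 2.1, 1.0]    # e.g. a CS reconstruction from partial data
print(round(pep(reference, estimate), 2))
```

A perfect reconstruction gives PEP of 0%, and the single-digit percentages quoted in the abstract correspond to small spectral distortions under this kind of measure.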

  1. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the ’traditional’ compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
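The group-testing idea the abstract builds on can be shown in miniature: because carriers are rare (a sparse signal), pooled assays can identify them with far fewer tests than one per person. This toy binary-pooling design decodes a SINGLE carrier among 2**k samples using only k pools; the paper's actual sequencing-based design is far richer, and everything below is an invented illustration:

```python
# Binary pooling: pool j contains every sample whose index has bit j set.

def pool_results(carrier, k):
    """Return the k pool outcomes (1 = pool tested positive)."""
    return [(carrier >> j) & 1 for j in range(k)]

def decode(hits):
    """With exactly one carrier, the positive-pool pattern IS its index."""
    return sum(bit << j for j, bit in enumerate(hits))

k = 7                          # 7 pooled assays cover 2**7 = 128 individuals
carrier = 93                   # the one carrier, unknown to the lab
hits = pool_results(carrier, k)
print(decode(hits))
```

Seven assays instead of 128 individual genotyping reactions; handling multiple simultaneous carriers is what requires the compressed-sensing machinery of the paper.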

  2. Chapter 22: Compressed Air Evaluation Protocol

    SciTech Connect

    Benton, N.

    2014-11-01

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  3. Scale adaptive compressive tracking.

    PubMed

    Zhao, Pengpeng; Cui, Shaohui; Gao, Min; Fang, Dan

    2016-01-01

    Recently, the compressive tracking (CT) method (Zhang et al. in Proceedings of European conference on computer vision, pp 864-877, 2012) has attracted much attention due to its high efficiency, but it cannot deal well with scale-changing objects because of its constant tracking box. To address this issue, in this paper we propose a scale adaptive CT approach, which adaptively adjusts the scale of the tracking box with the size variation of the objects. Our method significantly improves CT in three aspects. Firstly, the scale of the tracking box is adaptively adjusted according to the size of the objects. Secondly, in the CT method, all the compressive features are assumed to be independent and to contribute equally to the classifier. Actually, different compressive features have different confidence coefficients. In our proposed method, the confidence coefficients of the features are computed and used to weight their contributions to the classifier. Finally, in the CT method, the learning parameter λ is constant, which will result in large tracking drift in cases of object occlusion or large-scale appearance variation. In our proposed method, a variable learning parameter λ is adopted, which can be adjusted according to the rate of object appearance variation. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of the proposed method compared to state-of-the-art tracking algorithms. PMID:27386298
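The variable learning parameter λ described above can be pictured as an exponential moving average whose rate depends on how fast the appearance is changing. This is a toy rendering of the idea only; the mapping from change rate to λ and all constants are invented, not the paper's update rule:

```python
# Appearance-model update with a change-rate-dependent learning parameter.

def update_model(mean, sample, change_rate, lam_min=0.5, lam_max=0.95):
    """EMA update: fast appearance change -> smaller lambda -> faster adaptation."""
    lam = max(lam_min, lam_max - change_rate)
    return lam * mean + (1.0 - lam) * sample

# Slow appearance change: lambda stays high, model moves only slightly
m = update_model(mean=10.0, sample=20.0, change_rate=0.05)
print(round(m, 2))
```

With a constant λ (as in the original CT method), occlusion or rapid scale change pollutes the model at a fixed rate; tying λ to the measured change rate is what limits the drift the abstract mentions.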

  4. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  5. 19. VAL, DETAIL OF 'Y' JOINT CONNECTING THE COMPRESSION TANK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. VAL, DETAIL OF 'Y' JOINT CONNECTING THE COMPRESSION TANK TO THE LAUNCHING TUBES. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  6. 20. VAL, DETAIL OF QUICK-ACTING VALVE (QAV) ABOVE COMPRESSION TANK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. VAL, DETAIL OF QUICK-ACTING VALVE (QAV) ABOVE COMPRESSION TANK ON THE LAUNCHER BRIDGE. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  7. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
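The per-segment activity estimator and bit budget described above can be sketched simply: estimate each segment's activity, give every segment a fine code, then truncate the least active segments until the line fits within N bits. The activity measure (sum of absolute differences) and the two code rates are illustrative assumptions, not the NASA coder's actual values:

```python
# Activity-driven bit allocation for one image line under a per-line budget.

def allocate_bits(line, seg_len, n_bits_per_line):
    segs = [line[i:i + seg_len] for i in range(0, len(line), seg_len)]
    acts = [sum(abs(s[j] - s[j - 1]) for j in range(1, len(s))) for s in segs]
    alloc = [8 * len(s) for s in segs]            # fine code: 8 bits/sample
    # Truncate the least active segments first until the budget is met.
    for i in sorted(range(len(segs)), key=lambda i: acts[i]):
        if sum(alloc) <= n_bits_per_line:
            break
        alloc[i] = 2 * len(segs[i])               # coarse code: 2 bits/sample
    return alloc

line = [10, 10, 11, 10, 200, 20, 190, 15, 50, 50, 50, 50]
alloc = allocate_bits(line, seg_len=4, n_bits_per_line=64)
print(alloc, sum(alloc) <= 64)
```

The busy middle segment keeps its fine code while the two flat segments absorb the truncation, which is exactly the behaviour the abstract describes.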

  8. Prechamber Compression-Ignition Engine Performance

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  9. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  10. Compressed gas manifold

    SciTech Connect

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  11. Compressible turbulent mixing: Effects of compressibility

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2016-04-01

    We studied by numerical simulations the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The driving forcings differed in the magnitude ratio of their compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and suffered negligible influence from the compressibility. Growth of the Mach number showed (1) a reduction followed by an enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) an intermittency of the scalar that was first stronger and then weaker relative to that of the velocity; and (4) an increase in the intermittency parameter, which measures the intermittency of the scalar in the dissipative range. Furthermore, growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. Visualization of the scalar dissipation showed that, in the solenoidally forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressively forced flow, the field exhibited regions dominated by the large-scale motions of rarefaction and compression.

  12. Variability for Categorical Variables

    ERIC Educational Resources Information Center

    Kader, Gary D.; Perry, Mike

    2007-01-01

    Introductory statistics textbooks rarely discuss the concept of variability for a categorical variable and thus, in this case, do not provide a measure of variability. The impression is thus given that there is no measurement of variability for a categorical variable. A measure of variability depends on the concept of variability. Research has…

  13. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
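The conditional-replenishment principle described above (transmit only the changed parts of the image) can be shown in miniature. Block size and the change threshold below are illustrative assumptions, not the simulation's parameters:

```python
# Conditional replenishment: only blocks whose change versus the previous
# frame exceeds a threshold are "transmitted".

def changed_blocks(prev, curr, block, threshold):
    """Return indices of blocks whose summed absolute change exceeds threshold."""
    sent = []
    for i in range(0, len(curr), block):
        diff = sum(abs(a - b) for a, b in zip(prev[i:i + block], curr[i:i + block]))
        if diff > threshold:
            sent.append(i // block)
    return sent

prev_frame = [50] * 16
curr_frame = [50] * 16
curr_frame[4:8] = [90, 91, 92, 93]       # motion confined to the second block
sent = changed_blocks(prev_frame, curr_frame, block=4, threshold=10)
change_fraction = len(sent) / (len(curr_frame) // 4)
print(sent, change_fraction)
```

Here only 1 of 4 blocks is transmitted, a change fraction of 0.25, matching the abstract's point that a channel sized for a one-quarter change fraction is overloaded whenever more of the image moves.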

  14. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. a row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results, including JBIG2.
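
    The row and column elimination idea can be illustrated with a toy scheme (the paper's actual coder is not specified here; the bitmap bookkeeping below is an assumption made for illustration):

```python
# Toy row/column elimination for binary textual images: all-blank rows and
# columns are dropped and their positions recorded as keep-bitmaps, so the
# decoder can reinsert them losslessly.

def eliminate(img):
    """img: list of rows of 0/1 pixels (1 = ink). Returns (kept_rows,
    kept_cols, core), where core holds only the non-blank rows/columns."""
    kept_rows = [any(row) for row in img]
    kept_cols = [any(row[c] for row in img) for c in range(len(img[0]))]
    core = [[row[c] for c in range(len(row)) if kept_cols[c]]
            for row, keep in zip(img, kept_rows) if keep]
    return kept_rows, kept_cols, core

def restore(kept_rows, kept_cols, core):
    """Reinsert the blank rows and columns recorded by eliminate()."""
    it = iter(core)
    blank = [0] * len(kept_cols)
    out = []
    for keep in kept_rows:
        if not keep:
            out.append(list(blank))
            continue
        row_core = iter(next(it))
        out.append([next(row_core) if kc else 0 for kc in kept_cols])
    return out
```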

  15. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, and how they are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  16. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio), which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms, to perform the actual compression and uncompression of the FITS images.
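
    The tile-by-tile organization can be sketched in a few lines (an illustrative stand-in, not CFITSIO; zlib plays the role of the per-tile compressor, and image dimensions are assumed to be exact multiples of the tile size):

```python
import zlib

# Sketch of tile-based compression in the spirit of the FITS tiled-image
# convention: the image is cut into rectangular tiles and each tile is
# compressed independently, so a single tile can be decompressed on demand
# without touching the rest of the image.

def compress_tiled(img, tile_h, tile_w):
    """img: list of rows of ints (0..255). Returns a dict mapping
    (tile_row, tile_col) -> independently compressed bytes."""
    tiles = {}
    for r in range(0, len(img), tile_h):
        for c in range(0, len(img[0]), tile_w):
            raw = bytes(v for row in img[r:r+tile_h] for v in row[c:c+tile_w])
            tiles[(r // tile_h, c // tile_w)] = zlib.compress(raw)
    return tiles

def decompress_tile(tiles, key, tile_w):
    """Decompress a single tile back into a list of rows."""
    raw = zlib.decompress(tiles[key])
    return [list(raw[i:i+tile_w]) for i in range(0, len(raw), tile_w)]
```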

  17. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
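
    The predict-then-code principle can be sketched without a neural network (an assumption-laden toy: a Laplace-smoothed adaptive frequency model stands in for the predictive net, and we sum ideal code lengths of -log2 P per character instead of running an actual arithmetic coder):

```python
import math

# Predict-then-code sketch: an adaptive model supplies P(next char), and an
# entropy coder would spend about -log2 P bits per character. Summing these
# ideal code lengths estimates the compressed size without building a coder.

def ideal_compressed_bits(text):
    counts = {}
    total = 0
    bits = 0.0
    for ch in text:
        # Laplace-smoothed probability over a 256-symbol alphabet
        p = (counts.get(ch, 0) + 1) / (total + 256)
        bits += -math.log2(p)
        counts[ch] = counts.get(ch, 0) + 1
        total += 1
    return bits
```

    Highly predictable text costs far fewer bits than the 8 bits per character of raw storage, which is exactly the gap a predictive compressor exploits.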

  18. Bilateral compressive optic neuropathy secondary to bilateral sphenoethmoidal mucoceles.

    PubMed

    Newton, N; Baratham, G; Sinniah, R; Lim, A

    1989-01-01

    We have presented a rare case of bilateral posterior sphenoethmoidal sinus mucoceles with bilateral compressive optic neuropathy. Although compression had been present for a variable duration over a 10-month period, there was nevertheless significant improvement in visual acuity of the right eye and in the visual fields bilaterally following extensive optic nerve decompression.

  19. Compression Ratio Adjuster

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1982-01-01

    New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  20. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
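
    The decoding side of the technique, and the stability property mentioned above, can be sketched on a one-dimensional signal (a minimal sketch under assumed block sizes, not Barnsley and Sloan's actual scheme): the stored code maps each range block to a spatially contracted, scaled copy of a domain block, and because every map is contractive, iterating the code from any starting signal converges to the same fixed point.

```python
# Fractal decoding sketch: `code` holds one (domain_start, scale, offset)
# triple per length-2 range block. Domain blocks have length 4 and are
# shrunk by pairwise averaging; |scale| < 1 makes each map contractive, so
# iteration converges regardless of the starting signal.

def decode(code, n, iterations=30, start=None):
    x = list(start) if start is not None else [0.0] * n
    for _ in range(iterations):
        y = []
        for d, s, o in code:
            dom = x[d:d+4]
            shrunk = [(dom[0] + dom[1]) / 2, (dom[2] + dom[3]) / 2]
            y.extend(s * v + o for v in shrunk)
        x = y
    return x
```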

  1. Grid-free compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter

    2015-04-01

    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. On a discrete angular grid, the CS reconstruction degrades due to basis mismatch when the DOAs do not coincide with the angular directions on the grid. To overcome this limitation, a continuous formulation of the DOA problem is employed and an optimization procedure is introduced, which promotes sparsity on a continuous optimization variable. The DOA estimation problem with infinitely many unknowns, i.e., source locations and amplitudes, is solved over a few optimization variables with semidefinite programming. The grid-free CS reconstruction provides high-resolution imaging even with non-uniform arrays, single-snapshot data and under noisy conditions as demonstrated on experimental towed array data.

  3. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help develop and calibrate models of species mixing effects in compressed turbulence. The Cambon et al. rescaling has been extended to buoyancy-driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  4. Local compressibilities in crystals

    NASA Astrophysics Data System (ADS)

    Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor

    2000-12-01

    An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.
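
    In symbols, the weighted-average relation described above can be written as follows (notation assumed, with V_i the volume of atomic basin i):

```latex
% Volume-weighted average relation described in the abstract (notation
% assumed): kappa_i is the compressibility of atomic basin i with volume
% V_i, and B is the crystal bulk modulus.
\frac{1}{B} \;=\; \kappa \;=\; \sum_i \frac{V_i}{V}\,\kappa_i ,
\qquad
\kappa_i = -\frac{1}{V_i}\left(\frac{\partial V_i}{\partial p}\right)_T ,
\qquad
V = \sum_i V_i .
```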

  5. Compressive wavefront sensing with weak values.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Howell, John C

    2014-08-11

    We demonstrate a wavefront sensor that unites weak measurement and the compressive-sensing, single-pixel camera. Using a high-resolution spatial light modulator (SLM) as a variable waveplate, we weakly couple an optical field's transverse-position and polarization degrees of freedom. By placing random, binary patterns on the SLM, polarization serves as a meter for directly measuring random projections of the wavefront's real and imaginary components. Compressive-sensing optimization techniques can then recover the wavefront. We acquire high quality, 256 × 256 pixel images of the wavefront from only 10,000 projections. Photon-counting detectors give sub-picowatt sensitivity.

  6. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  7. Focus on Compression Stockings

    MedlinePlus

    ... compression apparel is used to prevent or control edema. The post-thrombotic syndrome (PTS) is a complication ... This swelling is referred to as edema. If you have edema, compression therapy may be ...

  8. Muon cooling: longitudinal compression.

    PubMed

    Bao, Yu; Antognini, Aldo; Bertl, Wilhelm; Hildebrandt, Malte; Khaw, Kim Siang; Kirch, Klaus; Papa, Angela; Petitjean, Claude; Piegsa, Florian M; Ritt, Stefan; Sedlak, Kamil; Stoykov, Alexey; Taqqu, David

    2014-06-01

    A 10 MeV/c positive muon beam was stopped in helium gas of a few mbar in a magnetic field of 5 T. The muon "swarm" has been efficiently compressed from a length of 16 cm down to a few mm along the magnetic field axis (longitudinal compression) using electrostatic fields. The simulation reproduces the low energy interactions of slow muons in helium gas. Phase space compression occurs on the order of microseconds, compatible with the muon lifetime of 2 μs. This paves the way for the preparation of a high-quality low-energy muon beam, with an increase in phase space density relative to a standard surface muon beam of 10^7. The achievable phase space compression by using only the longitudinal stage presented here is of the order of 10^4.

  11. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  12. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
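
    The Laplace-equation filling step can be sketched with plain Jacobi relaxation (the patent specifies a multigrid solver; Jacobi is shown only for clarity, and the grid handling below is an assumption):

```python
# Jacobi-relaxation sketch of the fill step: edge pixels keep their image
# values, and every other pixel is repeatedly replaced by the average of its
# neighbors, which iterates toward a solution of Laplace's equation.

def laplace_fill(values, is_edge, iterations=500):
    """values/is_edge: lists of rows. Non-edge entries of `values` are
    initial guesses (e.g. 0) and are replaced by the harmonic fill."""
    h, w = len(values), len(values[0])
    x = [row[:] for row in values]
    for _ in range(iterations):
        nxt = [row[:] for row in x]
        for r in range(h):
            for c in range(w):
                if is_edge[r][c]:
                    continue
                nbrs = [x[rr][cc] for rr, cc in
                        ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < h and 0 <= cc < w]
                nxt[r][c] = sum(nbrs) / len(nbrs)
        x = nxt
    return x
```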

  14. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  15. Transverse Compression of Tendons.

    PubMed

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon. PMID:26833218

  17. Self-Similar Compressible Free Vortices

    NASA Technical Reports Server (NTRS)

    vonEllenrieder, Karl

    1998-01-01

    Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependencies of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.

  18. The compressible mixing layer

    NASA Technical Reports Server (NTRS)

    Vandromme, Dany; Haminh, Hieu

    1991-01-01

    The capability of turbulence models to correctly handle natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing these motions into an 'organized' part, for which equations of motion are solved, and a remaining 'incoherent' part represented by a turbulence model. Two-equation turbulence models and second-order turbulence models can yield reasonable results. For specific compressible unsteady turbulent flows, graphic presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in rapid flow parts, but shocklets do not yet occur.

  19. Isentropic Compression of Argon

    SciTech Connect

    H. Oona; J.C. Solem; L.R. Veeser, C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley

    1997-08-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  20. Compressive Shift Retrieval

    NASA Astrophysics Data System (ADS)

    Ohlsson, Henrik; Eldar, Yonina C.; Yang, Allen Y.; Sastry, S. Shankar

    2014-08-01

    The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
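
    The single-coefficient observation can be sketched directly (a minimal illustration, assuming a noiseless circular shift and a nonzero first DFT coefficient): for y[n] = x[(n - s) mod N], the first DFT coefficient obeys Y[1] = X[1]·exp(-2πis/N), so the shift can be read off from one coefficient's phase.

```python
import cmath

# Shift estimation from a single Fourier coefficient: the phase of
# Y[1]/X[1] equals -2*pi*s/N for a circular shift s, so one coefficient
# determines the shift when X[1] is nonzero.

def dft_coeff(x, k):
    """k-th DFT coefficient of the sequence x."""
    n = len(x)
    return sum(v * cmath.exp(-2j * cmath.pi * k * i / n)
               for i, v in enumerate(x))

def estimate_shift(x, y):
    """Recover the circular shift s such that y[n] = x[(n - s) mod N]."""
    n = len(x)
    phase = cmath.phase(dft_coeff(y, 1) / dft_coeff(x, 1))
    return round(-phase * n / (2 * cmath.pi)) % n
```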

  1. Isentropic compression of argon

    SciTech Connect

    Veeser, L.R.; Ekdahl, C.A.; Oona, H.

    1997-06-01

    The compression was done in an MC-1 flux compression (explosive) generator, in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe was removed), both the Teflon and the argon are becoming conductive. The conductivity could not be determined (the Teflon insulation properties are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω cm)^-1, because when the Teflon breaks down the dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe or to measure the conductivity without a probe are being sought.

  2. Orbiting dynamic compression laboratory

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.

    1984-01-01

    In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kb, respectively. In the case of molybdenum, when the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, tensile tests of the recovered samples demonstrate beneficial ultimate tensile strengths.

  3. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
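
    A greatly simplified sketch of the fixed-rate block idea (the actual compressor uses a lifted orthogonal transform and embedded coding; here each block of four values merely shares one exponent and stores fixed-point mantissas at a user-chosen precision, so every block costs the same number of bits):

```python
import math

# Fixed-rate toy block coder: a block shares one exponent, and each value is
# stored as a signed fixed-point integer of `bits` bits, giving a constant,
# user-selected cost per block and near-lossless reconstruction.

def encode_block(block, bits):
    """Return (shared_exponent, list of signed fixed-point ints)."""
    m = max(abs(v) for v in block)
    e = math.frexp(m)[1] if m else 0           # smallest e with m < 2**e
    scale = 2.0 ** (bits - 1 - e)
    # clamp the positive side so values fit in `bits`-bit signed ints
    q = [min(int(round(v * scale)), 2**(bits - 1) - 1) for v in block]
    return e, q

def decode_block(e, q, bits):
    scale = 2.0 ** (bits - 1 - e)
    return [v / scale for v in q]
```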

  5. Nonlinear Frequency Compression

    PubMed Central

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
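
    The cutoff/ratio trade-off above can be made concrete with one common log-domain formulation of frequency compression (an illustrative sketch; the exact mapping of the NFC algorithm studied here is not given in the abstract): frequencies below the cutoff pass through unchanged, while the log-distance above the cutoff is divided by the compression ratio.

```python
def nfc_map(f_hz, cutoff_hz, ratio):
    """Log-domain frequency compression: frequencies below the cutoff pass
    unchanged; above it, the log-distance from the cutoff is divided by
    the compression ratio."""
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz * (f_hz / cutoff_hz) ** (1.0 / ratio)

# With a 2 kHz cutoff and a 2:1 ratio, 8 kHz maps down to 4 kHz:
print(nfc_map(8000, 2000, 2.0))  # 4000.0
```

    A lower cutoff moves more of the spectrum into the compressed region, which is consistent with the finding that the cutoff parameter affects sound quality more than the ratio.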

  6. Compress Your Files

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability to pack many related files into one archive (for example, all files…
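
    In practice this squeezing is a one-liner with any general-purpose codec; a small sketch using Python's standard zlib module (the sample bytes are illustrative):

```python
import zlib

text = b"many related files often share content, " * 100
packed = zlib.compress(text, 9)          # squeeze the bytes together
assert zlib.decompress(packed) == text   # lossless: nothing is thrown away
print(len(text), "->", len(packed))      # far fewer bytes to send or store
```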

  7. Compression: Rent or own

    SciTech Connect

    Cahill, C.

    1997-07-01

    Historically, the decision to purchase or rent compression has been set as a corporate philosophy. As companies decentralize, there seems to be a shift away from corporate philosophy toward individual profit centers. This has led the rent-versus-purchase decision to be evaluated on a regional or project-by-project basis.

  8. The Compressed Video Experience.

    ERIC Educational Resources Information Center

    Weber, John

    In the fall semester 1995, Southern Arkansas University- Magnolia (SAU-M) began a two semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…

  9. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since that publication, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
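
    The coding step described above — multiple sub-frames coded and integrated into a single camera frame — can be sketched as a forward model. The frame size, 1-D layout, and binary shifting mask below are illustrative assumptions, not the authors' setup:

```python
import random

# Forward model of coded-aperture temporal compression: T sub-frames are
# each multiplied elementwise by a (shifted) binary mask and summed into
# a single detector frame.
T, N = 8, 16                       # sub-frames, pixels per (1-D) frame
random.seed(0)
subframes = [[random.random() for _ in range(N)] for _ in range(T)]
mask = [random.randint(0, 1) for _ in range(N)]

snapshot = [0.0] * N
for t, frame in enumerate(subframes):
    shifted = mask[t:] + mask[:t]          # mask translated by t pixels
    snapshot = [s + m * x for s, m, x in zip(snapshot, shifted, frame)]

# 'snapshot' is the single coded frame; recovering the T sub-frames from
# it is the separate compressive-sensing inversion step.
print(len(snapshot))
```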

  10. Coded aperture compressive temporal imaging.

    PubMed

    Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J

    2013-05-01

    We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.

  11. Compression and texture in socks enhance football kicking performance.

    PubMed

    Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham

    2016-08-01

    The purpose of this study was to observe effects of wearing textured insoles and clinical compression socks on organisation of lower limb interceptive actions in developing athletes of different skill levels in association football. Six advanced learners and six completely novice football players (15.4±0.9years) performed 20 instep kicks with maximum velocity, in four randomly organised insoles and socks conditions, (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI) and (d), Compression Socks with Textured Insoles (CSTI). Reflective markers were placed on key anatomical locations and the ball to facilitate three-dimensional (3D) movement recording and analysis. Data on 3D kinematic variables and initial ball velocity were analysed using one-way mixed model ANOVAs. Results revealed that wearing textured and compression materials enhanced performance in key variables, such as the maximum velocity of the instep kick and increased initial ball velocity, among advanced learners compared to the use of non-textured and compression materials. Adding texture to football boot insoles appeared to interact with compression materials to improve kicking performance, captured by these important measures. This improvement in kicking performance is likely to have occurred through enhanced somatosensory system feedback utilised for foot placement and movement organisation of the lower limbs. Data suggested that advanced learners were better at harnessing the augmented feedback information from compression and texture to regulate emerging movement patterns compared to novices.

  12. Compression and texture in socks enhance football kicking performance.

    PubMed

    Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham

    2016-08-01

    The purpose of this study was to observe effects of wearing textured insoles and clinical compression socks on organisation of lower limb interceptive actions in developing athletes of different skill levels in association football. Six advanced learners and six completely novice football players (15.4±0.9years) performed 20 instep kicks with maximum velocity, in four randomly organised insoles and socks conditions, (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI) and (d), Compression Socks with Textured Insoles (CSTI). Reflective markers were placed on key anatomical locations and the ball to facilitate three-dimensional (3D) movement recording and analysis. Data on 3D kinematic variables and initial ball velocity were analysed using one-way mixed model ANOVAs. Results revealed that wearing textured and compression materials enhanced performance in key variables, such as the maximum velocity of the instep kick and increased initial ball velocity, among advanced learners compared to the use of non-textured and compression materials. Adding texture to football boot insoles appeared to interact with compression materials to improve kicking performance, captured by these important measures. This improvement in kicking performance is likely to have occurred through enhanced somatosensory system feedback utilised for foot placement and movement organisation of the lower limbs. Data suggested that advanced learners were better at harnessing the augmented feedback information from compression and texture to regulate emerging movement patterns compared to novices. PMID:27155962

  13. Space-time compressive imaging.

    PubMed

    Treeaporn, Vicha; Ashok, Amit; Neifeld, Mark A

    2012-02-01

    Compressive imaging systems typically exploit the spatial correlation of the scene to facilitate a lower dimensional measurement relative to a conventional imaging system. In natural time-varying scenes there is a high degree of temporal correlation that may also be exploited to further reduce the number of measurements. In this work we analyze space-time compressive imaging using Karhunen-Loève (KL) projections for the read-noise-limited measurement case. Based on a comprehensive simulation study, we show that a KL-based space-time compressive imager offers higher compression relative to space-only compressive imaging. For a relative noise strength of 10% and reconstruction error of 10%, we find that space-time compressive imaging with 8×8×16 spatiotemporal blocks yields about 292× compression compared to a conventional imager, while space-only compressive imaging provides only 32× compression. Additionally, under high read-noise conditions, a space-time compressive imaging system yields lower reconstruction error than a conventional imaging system due to the multiplexing advantage. We also discuss three electro-optic space-time compressive imaging architecture classes, including charge-domain processing by a smart focal plane array (FPA). Space-time compressive imaging using a smart FPA provides an alternative method to capture the nonredundant portions of time-varying scenes.
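
    A minimal numerical sketch of KL-based block compression (synthetic correlated data, not the paper's imagery): project each spatiotemporal block onto the leading eigenvectors of the sample covariance and reconstruct from the few retained coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 blocks, each flattened to 64 samples, with strong correlation so a
# few KL (principal) components capture most of the energy.
basis = rng.standard_normal((64, 4))
blocks = rng.standard_normal((500, 4)) @ basis.T \
    + 0.01 * rng.standard_normal((500, 64))

# KL transform: eigenvectors of the sample covariance, largest first.
cov = np.cov(blocks, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
kl = vecs[:, ::-1][:, :4]                 # keep the top 4 components

coeffs = blocks @ kl                      # 64 -> 4 measurements per block
recon = coeffs @ kl.T
rel_err = np.linalg.norm(recon - blocks) / np.linalg.norm(blocks)
print(rel_err)  # small: 16x fewer measurements, low reconstruction error
```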

  14. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object is unknown, and thus so is the number of measurements needed. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples, allowing the reconstruction of Mega-Pixel images. We present good-quality reconstruction from compressed data at ratios of 1:20.

  15. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  16. Compressibility of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Rose, J. H.; Smith, J. R.

    1987-01-01

    A universal form is proposed for the equation of state (EOS) of solids. Good agreement is found for a variety of test data. The form of the EOS is used to suggest a method of data analysis, which is applied to materials of geophysical interest. The isothermal bulk modulus is discussed as a function of the volume and of the pressure. The isothermal compression curves for materials of geophysical interest are examined.
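
    The universal form referred to above is commonly written as the Vinet equation of state, P(V) = 3*B0*(1-x)/x^2 * exp(eta*(1-x)) with x = (V/V0)^(1/3) and eta = (3/2)(B0' - 1). A direct transcription (the B0 value below is an illustrative magnitude, not data from the paper):

```python
import math

def vinet_pressure(V, V0, B0, B0prime):
    """Universal (Vinet) equation of state: pressure as a function of
    volume, given zero-pressure volume V0, bulk modulus B0, and its
    pressure derivative B0'."""
    x = (V / V0) ** (1.0 / 3.0)
    eta = 1.5 * (B0prime - 1.0)
    return 3.0 * B0 * (1.0 - x) / x**2 * math.exp(eta * (1.0 - x))

# At V = V0 the pressure vanishes; compression (V < V0) gives P > 0.
print(vinet_pressure(1.0, 1.0, 160.0, 4.0))   # 0.0
print(vinet_pressure(0.9, 1.0, 160.0, 4.0) > 0)  # True
```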

  17. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.
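
    The clustering step can be illustrated with a toy scalar k-means (a sketch of the general idea only; the actual BCCA feature extraction differs in detail): transmitting one centroid index per sample, instead of the sample itself, is the compression.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny k-means (k >= 2): cluster scalar samples, return centroids."""
    svals = sorted(values)
    # Spread the initial centroids across the value range.
    centroids = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            groups[i].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids

pixels = [10, 11, 9, 50, 52, 49, 90, 91]
print(sorted(kmeans_1d(pixels, 3)))  # ~ [10.0, 50.3, 90.5]
```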

  18. Compression of Cake

    NASA Astrophysics Data System (ADS)

    Nason, Sarah; Houghton, Brittany; Renfro, Timothy

    2012-03-01

    The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment which involved stretching wire. A question was raised: what would happen if we compressed something instead? We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to it. We worked to derive the compression modulus by applying weight to a cake. In the end, we had our experimental cake and ate it too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1
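
    The measurement reduces to stress over strain: E = (F/A) / ((h0 - h)/h0). A worked example with made-up illustrative numbers (not the class's data):

```python
def compression_modulus(force_N, area_m2, h0_m, h_m):
    """Compression modulus from a squeeze test: stress / strain."""
    stress = force_N / area_m2           # applied pressure, Pa
    strain = (h0_m - h_m) / h0_m         # fractional height change
    return stress / strain

# A 2 kg weight (19.6 N) on a 0.03 m^2 cake squeezing it from 8 cm to 6 cm:
E = compression_modulus(19.6, 0.03, 0.08, 0.06)
print(round(E))  # ~2613 Pa: cake is very soft compared to, say, steel
```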

  19. Compression of multiwall microbubbles

    NASA Astrophysics Data System (ADS)

    Lebedeva, Natalia; Moore, Sam; Dobrynin, Andrey; Rubinstein, Michael; Sheiko, Sergei

    2012-02-01

    Optical monitoring of structural transformations and transport processes is precluded if the objects to be studied are bulky and/or non-transparent. This paper is focused on the development of a microbubble platform for acoustic imaging of heterogeneous media under harsh environmental conditions including high pressure (<500 atm), temperature (<100 C), and salinity (<10 wt%). We have studied the compression behavior of gas-filled microbubbles composed of multiple layers of surfactants and stabilizers. Upon hydrostatic compression, these bubbles undergo significant (up to 100x) changes in volume, which are completely reversible. Under repeated compression/expansion cycles, the pressure-volume P(V) characteristics of these microbubbles deviate from ideal-gas-law predictions. A theoretical model was developed to explain the observed deviations through contributions of shell elasticity and gas effusion. In addition, some of the microbubbles undergo peculiar buckling/smoothing transitions exhibiting intermittent formation of facetted structures, which suggest a solid-like nature of the pressurized shell. Preliminary studies illustrate that these pressure-resistant microbubbles maintain their mechanical stability and acoustic response at pressures greater than 1000 psi.

  20. Improved SUPG formulations for compressible flows

    NASA Astrophysics Data System (ADS)

    Senga, Masayoshi

    Stabilization and shock-capturing parameters introduced recently for the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation of compressible flows based on conservation variables are assessed in test computations with inviscid supersonic flows and different types of finite element meshes. The new shock-capturing parameters, categorized as "YZbeta Shock-Capturing" in this paper, are compared to earlier parameters derived based on the entropy variables. In addition to being much simpler, the new shock-capturing parameters yield better shock quality in the test computations, with more substantial improvements seen for triangular elements. Numerical experiments with inviscid supersonic flows around cylinders and spheres are carried out to evaluate the stabilization and shock-capturing parameters introduced recently for the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation of compressible flows based on conservation variables. The tests with the cylinders are carried out for both structured and unstructured meshes. The new shock-capturing parameters, which we call "YZbeta Shock-Capturing", are compared to earlier SUPG parameters derived based on the entropy variables. In addition to being much simpler, the new shock-capturing parameters yield better shock quality in the test computations, with more substantial improvements seen for unstructured meshes with triangular and tetrahedral elements. Furthermore, the results obtained with YZbeta Shock-Capturing compare very favorably to those obtained with the well established OVERFLOW code.

  1. Piston reciprocating compressed air engine

    SciTech Connect

    Cestero, L.G.

    1987-03-24

    A compressed air engine is described comprising: (a). a reservoir of compressed air, (b). two power cylinders each containing a reciprocating piston connected to a crankshaft and flywheel, (c). a transfer cylinder which communicates with each power cylinder and the reservoir, and contains a reciprocating piston connected to the crankshaft, (d). valve means controlled by rotation of the crankshaft for supplying compressed air from the reservoir to each power cylinder and for exhausting compressed air from each power cylinder to the transfer cylinder, (e). valve means controlled by rotation of the crankshaft for supplying from the transfer cylinder to the reservoir compressed air supplied to the transfer cylinder on the exhaust strokes of the pistons of the power cylinders, and (f). an externally powered fan for assisting the exhaust of compressed air from each power cylinder to the transfer cylinder and from there to the compressed air reservoir.

  2. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
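
    The block-matching idea in the second paper can be sketched in one dimension: exhaustively test displacements and keep the one minimizing the matching error (sum of absolute differences here; the paper's parallel algorithm and error metric may differ).

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_match(prev, cur_block, start, search=3):
    """Exhaustive 1-D block matching: find the displacement of cur_block
    within prev (around position start) minimizing SAD."""
    n = len(cur_block)
    best = None
    for d in range(-search, search + 1):
        pos = start + d
        if 0 <= pos and pos + n <= len(prev):
            err = sad(prev[pos:pos + n], cur_block)
            if best is None or err < best[1]:
                best = (d, err)
    return best

prev = [0, 0, 5, 9, 5, 0, 0, 0]
cur_block = [5, 9, 5]                  # the same feature, shifted by one
print(best_match(prev, cur_block, 3))  # (-1, 0): perfect match at d = -1
```

    Each candidate displacement is independent of the others, which is what makes the search embarrassingly parallel on simple hardware.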

  3. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  4. Estimates on compressed neural networks regression.

    PubMed

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the neural element number n of neural networks is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A, which does not need to satisfy the condition of the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of the feedforward neural networks (FNNs), we prove that solving the FNNs regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.
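
    The "more variables than constraints" situation can be demonstrated with plain least-squares regression and a random compressed projection (a sketch of the general idea, not the paper's FNN construction): with n > m the full model interpolates the training data exactly, while fitting in a k-dimensional random subspace trades that perfect fit for far fewer parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 100, 500, 20          # samples, original features, compressed dim
X = rng.standard_normal((m, n))
w_true = np.zeros(n)
w_true[:5] = [3, -2, 1, 4, -1]
y = X @ w_true + 0.01 * rng.standard_normal(m)

# Full model, n > m: the min-norm solution interpolates (overfits).
full_coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Compressed projection: solve the regression in a k-dimensional random
# subspace instead of the overparameterized n-dimensional one.
A = rng.standard_normal((n, k)) / np.sqrt(k)
comp_coef, *_ = np.linalg.lstsq(X @ A, y, rcond=None)

train_full = np.linalg.norm(X @ full_coef - y)    # essentially zero
train_comp = np.linalg.norm((X @ A) @ comp_coef - y)
print(train_full < 1e-6, train_comp > train_full)
```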

  5. Isothermal compressibility determination across Bose-Einstein condensation

    NASA Astrophysics Data System (ADS)

    Poveda-Cuevas, F. J.; Castilho, P. C. M.; Mercado-Gutierrez, E. D.; Fritsch, A. R.; Muniz, S. R.; Lucioni, E.; Roati, G.; Bagnato, V. S.

    2015-07-01

    We apply the global thermodynamic variables approach to experimentally determine the isothermal compressibility parameter κT of a trapped Bose gas across the phase transition. We demonstrate the behavior of κT around the critical pressure, revealing the second-order nature of the phase transition. Compressibility is the most important susceptibility to characterize the system. The use of global variables shows advantages with respect to the usual local density approximation method and can be applied to a broad range of situations.
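
    The susceptibility in question is kappa_T = -(1/V)(dV/dP) at constant T; a central finite-difference estimate from tabulated isothermal (P, V) data (ideal-gas test data below, for which kappa_T = 1/P exactly):

```python
def kappa_T(P, V):
    """Isothermal compressibility -(1/V) dV/dP estimated at the interior
    points of tabulated (P, V) data by central differences."""
    out = []
    for i in range(1, len(P) - 1):
        dVdP = (V[i + 1] - V[i - 1]) / (P[i + 1] - P[i - 1])
        out.append(-dVdP / V[i])
    return out

P = [1.00, 1.01, 1.02, 1.03]
V = [1.0 / p for p in P]          # isothermal ideal-gas data, V = c/P
print(kappa_T(P, V))              # close to 1/P at the interior points
```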

  6. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved. In communication, we always want to transmit data efficiently and without noise. This paper provides some compression techniques for lossless text-type data compression and comparative results for multiple versus single compression, which will help identify better compression output and develop compression algorithms.
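
    The single-versus-multiple comparison can be reproduced in a few lines with Python's standard lossless codecs (the sample text is illustrative; the paper's test corpus is not specified):

```python
import bz2
import lzma
import zlib

data = b"the quick brown fox jumps over the lazy dog. " * 500

# Single compression: each codec trades speed for ratio differently.
sizes = {name: len(codec.compress(data))
         for name, codec in (("zlib", zlib), ("bz2", bz2), ("lzma", lzma))}
print(sizes)

# Multiple compression rarely helps: the first pass removes most
# redundancy, so a second pass may even add container overhead.
once = zlib.compress(data)
twice = zlib.compress(once)
print(len(once), len(twice))
```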

  7. Compression retaining piston

    SciTech Connect

    Quaglino, A.V. Jr.

    1987-06-16

    A piston apparatus is described for maintaining compression between the piston wall and the cylinder wall, comprising the following: a generally cylindrical piston body, including: a head portion defining the forward end of the body; and a continuous side wall portion extending rearward from the head portion; a means for lubricating and preventing compression loss between the side wall portion and the cylinder wall, including an annular recessed area in the continuous side wall portion for receiving a quantity of fluid lubricant in fluid engagement between the wall of the recessed area and the wall of the cylinder; first and second resilient, elastomeric, heat-resistant rings positioned in grooves along the wall of the continuous side wall portion, above and below the annular recessed area. Each ring engages the cylinder wall to reduce loss of lubricant within the recessed area during operation of the piston; a first pump means for providing fluid lubricant to engine components other than the pistons; and a second pump means for providing fluid lubricant to the recessed area in the continuous side wall portion of the piston. The first and second pump means obtain lubricant from a common source, and the second pump means includes a flow line that supplies oil from a predetermined level above the level of oil provided to the first pump means. This is so that, should the oil level to the second pump means fall below the predetermined level, the loss of oil to the recessed area in the continuous side wall portion of the piston would result in loss of compression and shutdown of the engine.

  8. International magnetic pulse compression

    SciTech Connect

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12--14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card -- its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  9. International magnetic pulse compression

    NASA Astrophysics Data System (ADS)

    Kirbie, H. C.; Newton, M. A.; Siemens, P. D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12-14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card - its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  10. Avalanches in Wood Compression

    NASA Astrophysics Data System (ADS)

    Mäkinen, T.; Miksic, A.; Ovaska, M.; Alava, Mikko J.

    2015-07-01

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free.
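
    The power-law distributions mentioned can be checked with a standard maximum-likelihood exponent estimate (the Hill estimator; the data below are synthetic Pareto samples, not the paper's acoustic-emission measurements):

```python
import math
import random

# Draw samples with CCDF (x/xmin)^-(alpha-1) by inverse-transform sampling.
random.seed(0)
alpha_true, xmin = 2.5, 1.0
samples = [xmin * random.random() ** (-1.0 / (alpha_true - 1.0))
           for _ in range(20000)]

# Hill/ML estimator for the power-law exponent above xmin.
alpha_hat = 1.0 + len(samples) / sum(math.log(x / xmin) for x in samples)
print(round(alpha_hat, 2))  # close to 2.5
```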

  11. Avalanches in Wood Compression.

    PubMed

    Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J

    2015-07-31

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free.

  12. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.
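
    The sampling scheme reads as follows in miniature: each block sends one diagonal of samples per frame, cycling through diagonals on successive frames, so a static scene fills in over time. Block size and diagonal wrapping below are illustrative assumptions:

```python
BLOCK = 4

def sample_diagonal(block, d):
    """Return the d-th (wrapped) diagonal of a BLOCK x BLOCK block."""
    return [block[r][(r + d) % BLOCK] for r in range(BLOCK)]

block = [[r * BLOCK + c for c in range(BLOCK)] for r in range(BLOCK)]
for d in range(BLOCK):
    print(sample_diagonal(block, d))  # 4 samples per frame instead of 16
# Over BLOCK frames the diagonals together cover every pixel in the block.
```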

  14. Dynamic control of a homogeneous charge compression ignition engine

    DOEpatents

    Duffy, Kevin P.; Mehresh, Parag; Schuh, David; Kieser, Andrew J.; Hergart, Carl-Anders; Hardy, William L.; Rodman, Anthony; Liechty, Michael P.

    2008-06-03

    A homogeneous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter, in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.

  15. Free compression tube. Applications

    NASA Astrophysics Data System (ADS)

    Rusu, Ioan

    2012-11-01

    During flight, a vehicle's propulsion energy must overcome gravity, displace air masses along the trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy of air masses deflected by the impact with the flying vehicle. Flight optimization, increasing speed while reducing fuel consumption, has driven research in aerodynamics. Vehicle shapes obtained through wind-tunnel studies optimize the impact with air masses and the airflow along the vehicle. Through energy-balance studies of vehicles in flight, the author Ioan Rusu directed his research toward reducing the energy lost at the vehicle's impact with air masses. In contrast to the classical approach of shaping aerodynamic surfaces to reduce the impact and friction with air masses, the author has invented a device he calls the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania (OSIM), deposit f 2011 0352. Mounted at the front of a flight vehicle, it largely eliminates the impact and friction of air masses against the vehicle body: the oncoming air comes into contact with the air inside the free compression tube, so air-solid friction is replaced by air-air friction.

  16. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
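
    The level-to-frequency mapping stated in the abstract is simple to compute; the display resolution used below is an illustrative value, not one from the paper.

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of a DWT level: f = r * 2**(-level),
    where r is the display visual resolution in pixels/degree."""
    return r * 2.0 ** (-level)

# e.g. on a 32 pixel/degree display, level 1 sits at 16 c/deg and
# level 5 at 1 c/deg, so coarser levels carry lower spatial frequencies
f1 = wavelet_spatial_frequency(32, 1)
f5 = wavelet_spatial_frequency(32, 5)
```

    Because amplitude thresholds rise rapidly with spatial frequency, this mapping is what lets a per-level, per-orientation quantization matrix be tuned to keep every band's error below visual threshold.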

  17. Compressive sensing in medical imaging.

    PubMed

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  18. Energy transfer in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre

    1995-01-01

    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of a weak compressible turbulence based on Eddy-Damped-Quasi-Normal-Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both inertial and energy containing ranges.

  19. Recent progress in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Chen, Shiyi; Xia, Zhenhua; Wang, Jianchun; Yang, Yantao

    2015-06-01

    In this paper, we review some recent studies on compressible turbulence conducted by the authors' group, which include fundamental studies on compressible isotropic turbulence (CIT) and applied studies on developing a constrained large eddy simulation (CLES) for wall-bounded turbulence. In the first part, we begin with a newly proposed hybrid compact-weighted essentially nonoscillatory (WENO) scheme for a CIT simulation that has been used to construct a systematic database of CIT. Using this database various fundamental properties of compressible turbulence have been examined, including the statistics and scaling of compressible modes, the shocklet-turbulence interaction, the effect of local compressibility on small scales, the kinetic energy cascade, and some preliminary results from a Lagrangian point of view. In the second part, the idea and formulas of the CLES are reviewed, followed by the validations of CLES and some applications in compressible engineering problems.

  20. libpolycomp: Compression/decompression library

    NASA Astrophysics Data System (ADS)

    Tomasi, Maurizio

    2016-04-01

    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
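
    The polynomial half of the "polynomial compression" idea, fit a smooth chunk with a low-order polynomial and keep only the coefficients when the residual is tolerable, can be sketched as below. Chunk length, degree, and tolerance are illustrative assumptions, and libpolycomp's Fourier-filtering step is omitted.

```python
import numpy as np

def poly_compress(chunk, deg, tol):
    """Fit a degree-`deg` polynomial to one chunk; keep the coefficients
    only if the worst-case residual stays below `tol`."""
    x = np.arange(len(chunk))
    coeffs = np.polyfit(x, chunk, deg)
    err = np.max(np.abs(np.polyval(coeffs, x) - chunk))
    return (coeffs if err <= tol else chunk), err

# smooth, noise-free data (ephemeris-like timeline) compresses very well:
t = np.linspace(0.0, 1.0, 256)
chunk = 3.0 * t**2 - 2.0 * t + 1.0
stored, err = poly_compress(chunk, deg=2, tol=1e-9)
ratio = chunk.size / np.size(stored)   # 256 samples stored as 3 coefficients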

  2. The compressive strengths of ice cubes of different sizes

    SciTech Connect

    Kuehn, G.A.; Schulson, E.M.; Jones, D.E.; Zhang, J. (Thayer School of Engineering)

    1993-05-01

    Cubes of side length from 10 to 150 mm were prepared from freshwater granular ice of about 1 mm grain size and then compressed uniaxially to failure at −10 °C. In addition to size, the variables were strain rate (10^−5 s^−1 and 10^−2 s^−1) and boundary conditions (ground brass plates, ground and polished brass plates, and brass brushes). The results showed that over the range investigated, size is not an important factor when considering the ductile compressive strength of ice. It also appears that size is not a factor when considering the brittle compressive failure strength under more ideal loading conditions. However, under less ideal conditions where perturbations on the loading surface may be significant, the brittle compressive strength decreases as the size of cube increases. In this case, the effect is attributed to nonsimultaneous failure.

  3. Hardware accelerated compression of LIDAR data using FPGA devices.

    PubMed

    Biasizzo, Anton; Novak, Franc

    2013-05-14

    Airborne Light Detection and Ranging (LIDAR) has become a mainstream technology for terrain data acquisition and mapping. High sampling density of LIDAR enables the acquisition of high details of the terrain, but on the other hand, it results in a vast amount of gathered data, which requires huge storage space as well as substantial processing effort. The data are usually stored in the LAS format which has become the de facto standard for LIDAR data storage and exchange. In the paper, a hardware accelerated compression of LIDAR data is presented. The compression and decompression of LIDAR data is performed by a dedicated FPGA-based circuit and interfaced to the computer via a PCI-E general bus. The hardware compressor consists of three modules: LIDAR data predictor, variable length coder, and arithmetic coder. Hardware compression is considerably faster than software compression, while it also alleviates the processor load.
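
    A software sketch of the first two stages of the pipeline described above, a predictor followed by a variable-length coder (the arithmetic-coder stage is omitted). The delta predictor and LEB128-style varint used here are simplifying assumptions, not the actual FPGA design or the LAS-specific predictor.

```python
def encode_points(values):
    """Delta predictor + zigzag + varint: small residuals between
    consecutive coordinates get short byte codes."""
    out, prev = bytearray(), 0
    for v in values:
        r = v - prev                         # prediction residual
        prev = v
        u = 2 * r if r >= 0 else -2 * r - 1  # zigzag: small |r| -> small code
        while u >= 0x80:
            out.append((u & 0x7F) | 0x80)
            u >>= 7
        out.append(u)
    return bytes(out)

def decode_points(data):
    vals, prev, i = [], 0, 0
    while i < len(data):
        u, shift = 0, 0
        while True:
            b = data[i]; i += 1
            u |= (b & 0x7F) << shift
            shift += 7
            if b < 0x80:
                break
        r = u // 2 if u % 2 == 0 else -(u // 2) - 1
        prev += r
        vals.append(prev)
    return vals

z = [100000, 100004, 100003, 100007, 100002]   # slowly varying coordinates
enc = encode_points(z)
```

    Neighboring LIDAR returns have strongly correlated coordinates, so residuals are small and the variable-length stage shrinks them well below their fixed-width size; the FPGA realizes the same pipeline in hardware for throughput.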

  4. ECG data compression by modeling.

    PubMed Central

    Madhukar, B.; Murthy, I. S.

    1992-01-01

    This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940
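
    The DCT-domain energy-compaction idea behind the method can be sketched as follows. This simplification truncates high-frequency coefficients directly rather than modeling the two transform-domain regions parametrically and DPCM-coding the parameters as the paper does, and the test signal is synthetic.

```python
import numpy as np

def dct2(x):
    """Unnormalized DCT-II."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / len(x)))
                     for k in range(len(x))])

def idct2(X):
    """Exact inverse of dct2 (DCT-III with the matching scaling)."""
    N = len(X)
    k = np.arange(1, N)
    return np.array([X[0] / N + (2.0 / N) * np.sum(
        X[1:] * np.cos(np.pi * (m + 0.5) * k / N)) for m in range(N)])

t = np.linspace(0.0, 1.0, 128)
sig = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)
X = dct2(sig)
Xt = X.copy()
Xt[32:] = 0.0                 # crude 4:1 truncation in the DCT domain
rec = idct2(Xt)
err = np.max(np.abs(rec - sig))   # bounded error for a smooth signal
```

    Because a smooth quasi-periodic signal concentrates its energy in low-order DCT coefficients, most coefficients can be discarded (or, as in the paper, replaced by a few model parameters) with limited distortion.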

  5. Shock compression of precompressed deuterium

    SciTech Connect

    Armstrong, M R; Crowhurst, J C; Zaug, J M; Bastea, S; Goncharov, A F; Militzer, B

    2011-07-31

    Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 µm). We further report a fast transition in shock wave compressed solid deuterium that is consistent with the ramp to shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.

  6. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  8. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160

  9. Compressive sensing of sparse tensors.

    PubMed

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan

    2014-10-01

    Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than that offered by KCS.
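
    The structural point, that mode-wise sensing avoids one huge sampling matrix over the vectorized tensor, can be checked numerically in the 2-D case. The matrices and sizes below are arbitrary illustrations, not GTCS itself.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 8))      # 2-D "tensor" (matrix) signal
A1 = rng.standard_normal((3, 6))     # mode-1 sensing matrix
A2 = rng.standard_normal((4, 8))     # mode-2 sensing matrix

# Mode-wise acquisition keeps the tensor structure ...
Y_modewise = A1 @ X @ A2.T           # 3x4 measurements

# ... and equals Kronecker-structured sensing of the vectorized signal,
# which would need a single (3*4) x (6*8) matrix instead:
y_vec = np.kron(A1, A2) @ X.reshape(-1)
```

    The two small mode matrices hold 3·6 + 4·8 = 50 entries versus 12·48 = 576 for the Kronecker matrix, which is the computational and memory saving the paper exploits at acquisition and reconstruction.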

  10. On the basic equations for the second-order modeling of compressible turbulence

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Shih, T.-H.

    1991-01-01

    Equations for the mean and turbulent quantities for compressible turbulent flows are derived. Both the conventional Reynolds average and the mass-weighted, Favre average were employed to decompose the flow variable into a mean and a turbulent quality. These equations are to be used later in developing second order Reynolds stress models for high speed compressible flows. A few recent advances in modeling some of the terms in the equations due to compressibility effects are also summarized.
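
    A minimal numerical illustration of the two averages mentioned above: the Favre (mass-weighted) mean differs from the Reynolds mean, and Favre fluctuations f'' satisfy mean(ρ f'') = 0 by construction. The sample distributions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 1.0 + 0.3 * rng.random(10000)      # fluctuating density samples
f = 2.0 + rng.standard_normal(10000)     # some flow variable

f_reynolds = f.mean()                     # conventional Reynolds mean
f_favre = (rho * f).mean() / rho.mean()   # mass-weighted (Favre) mean

# Favre fluctuations are density-orthogonal: mean(rho * f'') = 0
f_dd = f - f_favre
residual = (rho * f_dd).mean()
```

    Mass-weighting is what removes explicit density-fluctuation correlations from the averaged compressible equations, which is why Favre decomposition is standard in second-order modeling of high-speed flows.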

  11. Compressibility of Nanocrystalline Forsterite

    SciTech Connect

    Couvy, H.; Chen, J; Drozd, V

    2010-01-01

    We established an equation of state for nanocrystalline forsterite using a multi-anvil press and a diamond anvil cell. Comparative high-pressure and high-temperature experiments have been performed up to 9.6 GPa and 1,300 °C. We found that nanocrystalline forsterite is more compressible than macro-powder forsterite. The bulk modulus of nanocrystalline forsterite is 123.3 (±3.4) GPa, whereas the bulk modulus of macro-powder forsterite is 129.6 (±3.2) GPa. This difference is attributed to a weakening of the elastic properties of grain boundaries and triple junctions and their significant contribution in the nanocrystalline sample compared to its bulk counterpart. The bulk modulus at zero pressure of the forsterite grain boundary was determined to be 83.5 GPa.

  12. Compressively sensed complex networks.

    SciTech Connect

    Dunlavy, Daniel M.; Ray, Jaideep; Pinar, Ali

    2010-07-01

    The aim of this project is to develop low-dimension parametric (deterministic) models of complex networks, to use compressive sensing (CS) and multiscale analysis to do so, and to exploit the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on a multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general (and so not very efficient or customized). Other model-based CS techniques exist, but have not yet been adapted to networks. The obvious starting point for future work is to increase the efficiency of reconstruction.

  13. Vapor compression distillation module

    NASA Technical Reports Server (NTRS)

    Nuccio, P. P.

    1975-01-01

    A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken, leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.

  14. Compressed quantum simulation

    SciTech Connect

    Kraus, B.

    2014-12-04

    Here, I summarize the results presented in B. Kraus, Phys. Rev. Lett. 107, 250503 (2011). Recently, it has been shown that certain circuits, the so-called match gate circuits, can be compressed to an exponentially smaller universal quantum computation. We use this result to demonstrate that the simulation of a 1-D Ising chain consisting of n qubits can be performed on a universal quantum computer running on only log(n) qubits. We show how the adiabatic evolution can be simulated on this exponentially smaller system and how the magnetization can be measured. Since the Ising model displays a quantum phase transition, this result implies that a quantum phase transition of a very large system can be observed with current technology.

  15. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
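
    The successive-subdivision idea can be sketched with a median-cut-style quantizer. The split rule (widest channel, split at the median) and the leaf size are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def build_lut(colors, max_leaf=8):
    """Recursively split color space along the widest channel until each
    box holds at most `max_leaf` colors, then keep one mean color per box."""
    boxes, lut = [colors], []
    while boxes:
        box = boxes.pop()
        if len(box) <= max_leaf:
            lut.append(box.mean(axis=0))
            continue
        ch = np.argmax(box.max(axis=0) - box.min(axis=0))  # widest channel
        order = box[:, ch].argsort()
        mid = len(box) // 2
        boxes += [box[order[:mid]], box[order[mid:]]]
    return np.array(lut)

rng = np.random.default_rng(3)
pixels = rng.integers(0, 256, size=(1024, 3)).astype(float)
lut = build_lut(pixels, max_leaf=8)
# each pixel stores only a small pointer to its nearest LUT entry
idx = np.argmin(((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(-1), axis=1)
```

    With at most 256 LUT entries, each 24-bit color collapses to an 8-bit pointer, and the subdivision keeps the nearest-neighbor search confined to a few candidate entries per volume, which is the speedup the patent describes.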

  16. Compressed Wavefront Sensing

    PubMed Central

    Polans, James; McNabb, Ryan P.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    We report on an algorithm for fast wavefront sensing that incorporates sparse representation for the first time in practice. The partial derivatives of optical wavefronts were sampled sparsely with a Shack-Hartmann wavefront sensor (SHWFS) by randomly subsampling the original SHWFS data to as little as 5%. Reconstruction was performed by a sparse representation algorithm that utilized the Zernike basis. We name this method SPARZER. Experiments on real and simulated data attest to the accuracy of the proposed techniques as compared to traditional sampling and reconstruction methods. We have made the corresponding data set and software freely available online. Compressed wavefront sensing offers the potential to increase the speed of wavefront acquisition and to defray the cost of SHWFS devices. PMID:24690703

  17. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  18. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust Header Compression (ROHC); and (4) the header compression techniques in RFC 2507 and RFC 2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets; thus, loss propagation is avoided. However, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC 2507 and RFC 2508 perform well for non-TCP connections in poor conditions. RFC 2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC 2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves

  19. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charge comes from the Einstein photoelectric effect. Following manufacturing design principles, we alter each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data-storage savings are immense, with the order of magnitude of the savings inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as dual photon-detector (PD) analog circuitry for change detection that decides, at a sufficient sampling frame rate, whether to skip or admit a frame. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level: the charge-transport bias voltage either steers charge toward neighboring buckets or, if not, sends it to ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codecs, nor the powerful WaveNet Wrapper, at the sensor level. We compare (i) pre-processing by FFT, thresholding of significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), in new-frame selection by the SAH circuitry, the degree of information (d.o.i.) K(t) dictates the purely random linear sparse combination of measurement data via [Φ]_{M,N} with M(t) = K(t) log N(t).

  20. (Finite) statistical size effects on compressive strength

    PubMed Central

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-01-01

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules. PMID:24733930

  1. Lossless compression for 3D PET

    SciTech Connect

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.; Baker, K.; Jones, B.

    1994-12-01

    A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). Contrasting with Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks. This is a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimations of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners and reaches asymptotically the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an application specific integrated circuit (ASIC) implementing a similar UVLC scheme is available today, a similar one should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engine.
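
    The UVLC stage can be illustrated with Elias gamma coding, a classic universal variable-length code; this is an illustrative stand-in (the paper does not specify which universal code its UVLC uses), and the ADPCM stage is reduced here to "mostly small positive symbols".

```python
def elias_gamma_encode(values):
    """Elias gamma code: a value n >= 1 is written as (bitlen(n) - 1)
    zero bits followed by n in binary, so small symbols (e.g. small
    prediction residuals from an ADPCM stage) get short codewords."""
    bits = []
    for n in values:
        b = bin(n)[2:]                     # binary, MSB first
        bits.append('0' * (len(b) - 1) + b)
    return ''.join(bits)

def elias_gamma_decode(bits):
    vals, i = [], 0
    while i < len(bits):
        z = 0
        while bits[i] == '0':              # count the zero prefix
            z += 1; i += 1
        vals.append(int(bits[i:i + z + 1], 2))
        i += z + 1
    return vals

residuals = [1, 1, 2, 5, 1, 3]             # mostly-small positive symbols
code = elias_gamma_encode(residuals)
```

    Because the code needs no precomputed symbol table, it can be applied independently to short data blocks, the property the abstract highlights as the advantage over whole-sinogram LZ for real-time hardware.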

  2. Lossless compression for 3D PET

    SciTech Connect

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C. (Telecommunication Lab.); Baker, K.; Jones, B.

    1994-08-01

    A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). In contrast to Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks. This is a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimation of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners and asymptotically reaches the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an ASIC implementing a similar UVLC scheme is available today, a similar one should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines.

  3. Charts for checking the stability of compression members in trusses

    NASA Technical Reports Server (NTRS)

    Borkmann, K.

    1936-01-01

    The present report contains a set of charts developed for computing the fixity effect on a compression member in a truss through its adjacent members, the amount of fixity being considered variable with the particular total truss load. The use of the charts is illustrated on two- and three-bay systems, as well as on a triangular truss.

  4. Compression and Predictive Distributions for Large Alphabets

    NASA Astrophysics Data System (ADS)

    Yang, Xiao

    Data generated from large alphabets exist almost everywhere in our lives, for example, texts, images and videos. Traditional universal compression algorithms mostly involve small alphabets and implicitly assume an asymptotic condition under which the extra bits induced in the compression process vanish as the amount of data grows without bound. In this thesis, we put the main focus on compression and prediction for large alphabets, with the alphabet size comparable to or larger than the sample size. We first consider sequences of random variables independently and identically generated from a large alphabet. In particular, the size of the sample is allowed to be variable. A product distribution based on Poisson sampling and tiling is proposed as the coding distribution, which greatly simplifies the implementation and analysis through independence. Moreover, we characterize the behavior of the coding distribution through a condition on the tail sum of the ordered counts, and apply it to sequences satisfying this condition. Further, we apply this method to envelope classes. This coding distribution provides a convenient method to approximately compute Shtarkov's normalized maximum likelihood (NML) distribution, and the extra price paid for this convenience is small compared to the total cost. Furthermore, we find this coding distribution can also be used to calculate the NML distribution exactly, and this calculation remains simple due to the independence of the coding distribution. Finally, we consider a more realistic class, the Markov class, and in particular, tree sources. A context tree based algorithm is designed to describe the dependencies among the contexts. It is a greedy algorithm which seeks the greatest savings in codelength when constructing the tree. Compression and prediction of individual counts associated with the contexts uses the same coding distribution as in the i.i.d. case. 
Combining these two procedures, we demonstrate a compression algorithm based

  5. Direct compression of chitosan: process and formulation factors to improve powder flow and tablet performance.

    PubMed

    Buys, Gerhard M; du Plessis, Lissinda H; Marais, Andries F; Kotze, Awie F; Hamman, Josias H

    2013-06-01

    Chitosan is a polymer derived from chitin that is widely available at relatively low cost, but due to compression challenges it has limited application for the production of direct compression tablets. The aim of this study was to use certain process and formulation variables to improve manufacturing of tablets containing chitosan as bulking agent. Chitosan particle size and flow properties were determined, which included bulk density, tapped density, compressibility and moisture uptake. The effect of process variables (i.e. compression force, punch depth, percentage compaction in a novel double fill compression process) and formulation variables (i.e. type of glidant, citric acid, pectin, coating with Eudragit S®) on chitosan tablet performance (i.e. mass variation, tensile strength, dissolution) was investigated. Moisture content of the chitosan powder, particle size and the inclusion of glidants had a pronounced effect on its flowability. Varying the percentage compaction during the first cycle of a double fill compression process produced chitosan tablets with more acceptable tensile strength and dissolution rate properties. The inclusion of citric acid and pectin into the formulation significantly decreased the dissolution rate of isoniazid from the tablets due to gel formation. Direct compression of chitosan powder into tablets can be significantly improved by the investigated process and formulation variables as well as by applying a double fill compression process.

  6. Compression failure of composite laminates

    NASA Technical Reports Server (NTRS)

    Pipes, R. B.

    1983-01-01

    This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.

  7. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
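
    The zero-and-encode idea described in the report can be sketched in miniature. This toy assumes a single-level Haar transform (the report does not state which wavelet family was used) and a hand-picked threshold:

```python
# Toy version of the zero-and-encode idea, assuming a single-level Haar
# transform (the report does not state which wavelet was used) and a
# hand-picked threshold.

def haar_forward(signal):
    """One Haar level: pairwise averages (low-pass) and details (high-pass)."""
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, det

def haar_inverse(avg, det):
    """Invert one Haar level."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

def denoise(signal, threshold):
    """Zero small detail coefficients; return coefficients and the zero fraction."""
    avg, det = haar_forward(signal)
    det = [0.0 if abs(d) < threshold else d for d in det]
    return avg, det, det.count(0.0) / len(det)

base = [0, 0, 2, 2, 4, 4, 6, 6]                           # the 'target' signature
sig = [b + 0.01 * (-1) ** i for i, b in enumerate(base)]  # plus low-amplitude noise
avg, det, zero_frac = denoise(sig, threshold=0.1)
recon = haar_inverse(avg, det)                            # noise gone, target intact
```

    The runs of zeros produced this way are exactly what a subsequent lossless entropy coder exploits; in the study, zeroing over 80% of coefficients still left target detection unaffected.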

  8. Streaming Compression of Hexahedral Meshes

    SciTech Connect

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  9. Pressure Oscillations in Adiabatic Compression

    ERIC Educational Resources Information Center

    Stout, Roland

    2011-01-01

    After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…
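
    For readers wanting numbers to compare against such a demonstration, the reversible adiabatic relations for an ideal gas give the expected pressure and temperature rise. The sketch below assumes air behaves as an ideal diatomic gas (gamma = 7/5); the values are illustrative, not data from the article:

```python
# Illustrative companion numbers for the bottle experiment, assuming air
# behaves as an ideal diatomic gas (GAMMA = 7/5) and the compression is
# reversible and adiabatic; values are not taken from the article.

GAMMA = 7.0 / 5.0  # heat-capacity ratio c_p/c_v for a diatomic gas

def adiabatic_state(p1, t1, v1, v2):
    """Pressure and temperature after adiabatic compression from v1 to v2,
    using P * V**GAMMA = const and T * V**(GAMMA - 1) = const."""
    ratio = v1 / v2
    return p1 * ratio ** GAMMA, t1 * ratio ** (GAMMA - 1.0)

# Squeeze air at 101.3 kPa and 298 K to 90% of its initial volume:
p2, t2 = adiabatic_state(101.3e3, 298.0, 1.0, 0.9)
```

    The modest 10% volume change already raises the pressure by roughly 16% and the temperature by about 13 K, which is why pressure oscillations in such an apparatus are readily measurable.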

  10. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  11. Variable compression ratio device for internal combustion engine

    DOEpatents

    Maloney, Ronald P.; Faletti, James J.

    2004-03-23

    An internal combustion engine, particularly suitable for use in a work machine, is provided with a combustion cylinder, a cylinder head at an end of the combustion cylinder and a primary piston reciprocally disposed within the combustion cylinder. The cylinder head includes a secondary cylinder and a secondary piston reciprocally disposed within the secondary cylinder. An actuator is coupled with the secondary piston for controlling the position of the secondary piston dependent upon the position of the primary piston. A communication port establishes fluid flow communication between the combustion cylinder and the secondary cylinder.
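
    The geometric effect of such a secondary cylinder can be sketched numerically: the secondary piston's position adds a controllable clearance volume to the combustion chamber, which lowers the compression ratio as the piston retracts. The volumes below are illustrative assumptions, not figures from the patent:

```python
# Illustrative volumes (not from the patent): the secondary piston adds a
# controllable clearance volume to the combustion chamber, which changes the
# geometric compression ratio.

def compression_ratio(swept_cc, clearance_cc, secondary_cc):
    """Compression ratio = (swept + total clearance) / total clearance."""
    total_clearance = clearance_cc + secondary_cc
    return (swept_cc + total_clearance) / total_clearance

swept = 500.0      # swept (displaced) volume per cylinder, cm^3
clearance = 40.0   # fixed clearance volume at top dead center, cm^3

high = compression_ratio(swept, clearance, secondary_cc=0.0)   # secondary piston fully advanced
low = compression_ratio(swept, clearance, secondary_cc=20.0)   # secondary piston retracted
```

    Moving the secondary piston between these two extremes sweeps the ratio continuously between about 13.5:1 and 9.3:1, which is the knob a variable compression ratio engine exploits for multi-fuel or load-dependent operation.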

  12. Analytical model for ramp compression

    NASA Astrophysics Data System (ADS)

    Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun

    2016-08-01

    An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic region width, material properties, profile of the pressure pulse, and the pressure pulse duration can be reasonably allocated or chosen. To demonstrate and study this model, laser-direct-driven ramp compression experiments and code simulation are performed successively, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can be used as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.

  13. Compressive strength of carbon fibers

    SciTech Connect

    Prandy, J.M.; Hahn, H.T.

    1991-01-01

    Most composites are weaker in compression than in tension, which is due to the poor compressive strength of the load bearing fibers. The present paper discusses the compressive strengths and failure modes of 11 different carbon fibers: PAN-AS1, AS4, IM6, IM7, T700, T300, GY-30, pitch-75, ultra high modulus (UHM), high modulus (HM), and high strength (HS). The compressive strength was determined by embedding a fiber bundle in a transparent epoxy matrix and testing in compression. The resin allows for the containment and observation of failure during and after testing while also providing lateral support to the fibers. Scanning electron microscopy (SEM) was used to determine the global failure modes of the fibers.

  14. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.

  15. Compressive Sensing for Quantum Imaging

    NASA Astrophysics Data System (ADS)

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. 
The technique gives a theoretical speedup of N^2/log N for N-dimensional entanglement over the standard raster-scanning technique.

  16. Combined data encryption and compression using chaos functions

    NASA Astrophysics Data System (ADS)

    Bose, Ranjan; Pathak, Saumitr

    2004-10-01

    Past research in the field of cryptography has not given much consideration to arithmetic coding as a feasible encryption technique, with studies proving compression-specific arithmetic coding to be largely unsuitable for encryption. Nevertheless, adaptive modelling, which offers a huge model, variable in structure, and as completely as possible a function of the entire text that has been transmitted since the time the model was initialised, is a suitable candidate for a possible encryption-compression combine. The focus of the work presented in this paper has been to incorporate recent results of chaos theory, proven to be cryptographically secure, into arithmetic coding, to devise a convenient method to make the structure of the model unpredictable and variable in nature, and yet to retain, as far as is possible, statistical harmony, so that compression is possible. A chaos-based adaptive arithmetic coding-encryption technique has been designed, developed and tested and its implementation has been discussed. For typical text files, the proposed encoder gives compression between 67.5% and 70.5%, the zero-order compression suffering by about 6% due to encryption, and is not susceptible to previously carried out attacks on arithmetic coding algorithms.
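
    The chaotic ingredient of such a scheme can be illustrated with the logistic map. This is a deliberately simplified, hypothetical sketch: the paper couples the chaos into an adaptive arithmetic-coding model, whereas the snippet below only shows how a key-like initial condition yields an unpredictable driving sequence:

```python
# Hypothetical illustration of the chaotic ingredient: the logistic map in its
# chaotic regime turns a key-like initial condition into an unpredictable
# sequence. The paper's coupling into an adaptive arithmetic-coding model is
# omitted here.

def logistic_stream(x0, n, r=3.99, burn_in=100):
    """Iterate x -> r*x*(1-x) in the chaotic regime; quantize the state to n bytes."""
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize state in (0, 1) to a byte
    return out

a = logistic_stream(0.123456789, 16)
b = logistic_stream(0.123456790, 16)    # a one-billionth change in the key
```

    Sensitive dependence on the initial condition makes the model's evolution unpredictable to an attacker while remaining exactly reproducible for the legitimate receiver who holds the key.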

  17. Compression relief engine brake

    SciTech Connect

    Meneely, V.A.

    1987-10-06

    A compression relief brake is described for four cycle internal-combustion engines, comprising: a pressurized oil supply; means for selectively pressurizing a hydraulic circuit with oil from the oil supply; a master piston and cylinder communicating with a slave piston and cylinder via the hydraulic circuit; and an engine exhaust valve mechanically coupled to the engine and timed to open during the exhaust cycle of the engine, the exhaust valve coupled to the slave piston. The exhaust valve is spring-biased in a closed state to contact a valve seat; a sleeve is frictionally and slidably disposed within a cavity defined by the slave piston, which cavity communicates with the hydraulic circuit. When the hydraulic circuit is selectively pressurized and the engine is operating, the sleeve entraps an incompressible volume of oil within the cavity to generate a displacement of the slave piston within the slave cylinder, whereby a first gap is maintained between the exhaust valve and its associated seat; and means are provided for reciprocally activating the master piston for increasing the pressure within the previously pressurized hydraulic circuit during at least a portion of the expansion cycle of the engine, whereby a second gap is reciprocally maintained between the exhaust valve and its associated seat.

  18. Compressive sensing by learning a Gaussian mixture model from measurements.

    PubMed

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2015-01-01

    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.

  19. Advances in compressible turbulent mixing

    SciTech Connect

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, and vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  20. Best compression: Reciprocating or rotary?

    SciTech Connect

    Cahill, C.

    1997-07-01

    A compressor is a device used to increase the pressure of a compressible fluid. The inlet pressure can vary from a deep vacuum to a high positive pressure. The discharge pressure can range from subatmospheric levels to tens of thousands of pounds per square inch. Compressors come in numerous forms, but for oilfield applications there are two primary types, reciprocating and rotary. Both reciprocating and rotary compressors are grouped in the intermittent mode of compression. Intermittent compression is cyclic in nature, in that a specific quantity of gas is ingested by the compressor, acted upon and discharged before the cycle is repeated. Reciprocating compression is the most common form of compression used for oilfield applications. Rotary screw compressors have a long history but are relative newcomers to oilfield applications. The rotary screw compressor, technically a helical rotor compressor, dates back to 1878. That was when the first rotary screw was manufactured for the purpose of compressing air. Today thousands of rotary screw compression packages are being used throughout the world to compress natural gas.

  1. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. 
The method tracks the average number of bits used to encode recent runlengths, and takes the difference between this average
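
    The two code families named above can be sketched as follows. For simplicity the Golomb code is shown in its power-of-two (Golomb-Rice) special case, and the parameters are fixed by hand rather than adapted on the fly as in the article:

```python
# Sketch of the two code families: Golomb-Rice (the power-of-two special case
# of Golomb codes) for nonzero values, exponential-Golomb for run lengths.
# Parameters are fixed by hand; the article adapts them from past statistics.

def rice(n, k):
    """Golomb-Rice code (Golomb code with m = 2**k) for n >= 0:
    unary-coded quotient, then k remainder bits."""
    q, r = divmod(n, 1 << k)
    rem = format(r, "b").zfill(k) if k > 0 else ""
    return "1" * q + "0" + rem

def exp_golomb(n, order=0):
    """Order-k exponential-Golomb code for n >= 0."""
    v = n + (1 << order)
    b = format(v, "b")
    return "0" * (len(b) - order - 1) + b

# Encode one parsed string: a run of 5 zeros terminated by the value 3.
run_code = exp_golomb(5)    # -> '00110'
value_code = rice(3, k=2)   # -> '011'
```

    Both families are prefix-free, so the decoder can recover the run length and value without any delimiters, and each is indexed by the single integer parameter the adaptive rule selects.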

  2. Compression Pylon Reduces Interference Drag

    NASA Technical Reports Server (NTRS)

    Patterson, James C., Jr.; Carlson, John R.

    1989-01-01

    New design reduces total drag by 4 percent. Pylon reduces fuselage/wing/pylon/nacelle-channel compressibility losses without creating additional drag associated with other areas of pylon. Minimum cross-sectional area of channel occurs at trailing edge of wing. Velocity of flow in channel always nearly subsonic, reducing compressibility losses associated with supersonic flow. Flow goes past trailing edge before returning to ambient conditions, resulting in no additional drag to aircraft. Designed to compress flow beneath wing by reducing velocity in this channel, thereby reducing shockwave losses and providing increase in wing lift.

  3. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  4. Partial transparency of compressed wood

    NASA Astrophysics Data System (ADS)

    Sugimoto, Hiroyuki; Sugimori, Masatoshi

    2016-05-01

    We have developed a novel wood composite with optical transparency in an arbitrary region. Pores in wood cells vary greatly in size. These pores expand the light path in the sample, because the refractive indexes differ between the constituents of the cell wall and the air in the lumens. In this study, wood compressed enough to close its lumens showed optical transparency. Because compression of wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  5. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  6. Compressibility effects on dynamic stall

    NASA Astrophysics Data System (ADS)

    Carr, Lawrence W.; Chandrasekhara, M. S.

    1996-12-01

    Dynamic stall delay of flow over airfoils rapidly pitching past the static stall angle has been studied by many scientists. However, the effect of compressibility on this dynamic stall behavior has been less comprehensively studied. This review presents a detailed assessment of research performed on this subject, including a historical review of work performed on both aircraft and helicopters, and offers insight into the impact of compressibility on the complex aerodynamic phenomenon known as dynamic stall. It also documents the major effect that compressibility can have on dynamic stall events, and the complete change of physics of the stall process that can occur as free-stream Mach number is increased.

  7. Television Compression Algorithms And Transmission On Packet Networks

    NASA Astrophysics Data System (ADS)

    Brainard, R. C.; Othmer, J. H.

    1988-10-01

    Wide-band packet transmission is a subject of strong current interest. The transmission of compressed TV signals over such networks is possible at any quality level. There are some specific advantages in using packet networks for TV transmission. Namely, any fixed data rate can be chosen, or a variable data rate can be utilized. However, on the negative side, packet loss must be considered and differential delay in packet arrival must be compensated. The possibility of packet loss has a strong influence on compression algorithm choice. Differential delay of packet arrival is a new problem in codec design. Some issues relevant to mutual design of the transmission networks and compression algorithms will be presented. An assumption is that the packet network will maintain packet sequence integrity. For variable-rate transmission, a reasonable definition of peak data rate is necessary. Rate constraints may be necessary to encourage instituting a variable-rate service on the networks. The charging algorithm for network use will have an effect on selection of compression algorithm. Some values of and procedures for implementing packet priorities are discussed. Packet length has only a second-order effect on packet-TV considerations. Some examples of a range of codecs for differing data rates and picture quality are given. These serve to illustrate sensitivities to the various characteristics of packet networks. Perhaps more important, we talk about what we do not know about the design of such systems.

  8. A Motion-Compensating Image-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Wong, Carol

    1994-01-01

    Chrominance used (in addition to luminance) in estimating motion. Variable-rate digital coding scheme for compression of color-video-image data designed to deliver pictures of good quality at moderate compressed-data rate of 1 to 2 bits per pixel, or of fair quality at rate less than 1 bit per pixel. Scheme, in principle, implemented by use of commercially available application-specific integrated circuits. Incorporates elements of some prior coding schemes, including motion compensation (MC) and discrete cosine transform (DCT).

  9. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    PubMed

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-12-05

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining low computational complexity.
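
    The pipeline the abstract describes, random projections followed by a signal-agnostic uniform quantizer on the measurements, can be sketched as follows. This is a minimal illustration, not the paper's method: the sensing matrix, sizes, and step size below are invented for the example.

```python
import numpy as np

# Toy CS acquisition: a sparse signal is measured through a random
# projection matrix, then the measurements are quantized with a fixed
# uniform step chosen without looking at the image ("universal").
rng = np.random.default_rng(1)
n, m = 256, 64                    # signal length, number of measurements
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.normal(size=5)  # 5-sparse signal

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # sensing matrix
y = Phi @ x                                  # CS acquisition

step = 0.05                                  # a priori quantizer step
y_hat = step * np.round(y / step)            # universal uniform quantization
```

The quantization error per measurement is bounded by half the step size, independent of the image content, which is what makes the quantizer "universal" in the sense above.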

  10. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining low computational complexity. PMID:25490597

  11. [New aspects of compression therapy].

    PubMed

    Partsch, Bernhard; Partsch, Hugo

    2016-06-01

    In this review article the mechanisms of action of compression therapy are summarized and a survey of materials is presented, together with some practical advice on how and when these different devices should be applied. Some new experimental findings regarding the optimal dosage (= compression pressure) for improving venous hemodynamics and reducing oedema are discussed. It is shown that stiff, non-yielding material applied with adequate pressure provides hemodynamically superior effects compared to elastic material, and that relatively low pressures reduce oedema. Compression over the calf is more important for increasing calf pump function than graduated compression. In patients with mixed arterial-venous ulcers and an ABPI over 0.6, inelastic bandages not exceeding a sub-bandage pressure of 40 mmHg may increase arterial flow and improve venous pumping function. PMID:27259340

  12. Efficient Decoding of Compressed Data.

    ERIC Educational Resources Information Center

    Bassiouni, Mostafa A.; Mukherjee, Amar

    1995-01-01

    Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
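
    The multibit decoding idea the abstract refers to can be sketched with a lookup table indexed by fixed-width chunks of the bitstream, so the decoder consumes several bits per table probe instead of one tree step per bit. The toy codebook and chunk width below are invented for illustration.

```python
# Table-driven multibit Huffman decoding (a hedged sketch, not the
# paper's algorithm). Every CHUNK-bit pattern maps to the symbol whose
# code prefixes it, plus that code's true length, so one table lookup
# replaces up to CHUNK tree steps.

CODES = {"a": "0", "b": "10", "c": "110", "d": "111"}  # prefix-free toy code
CHUNK = 3  # table index width = longest code length

TABLE = {}
for sym, code in CODES.items():
    pad = CHUNK - len(code)
    for i in range(2 ** pad):
        idx = code + format(i, f"0{pad}b") if pad else code
        TABLE[idx] = (sym, len(code))

def encode(text):
    return "".join(CODES[ch] for ch in text)

def decode(bits):
    out, pos = [], 0
    while pos < len(bits):
        chunk = bits[pos:pos + CHUNK].ljust(CHUNK, "0")  # pad at stream end
        sym, length = TABLE[chunk]
        out.append(sym)
        pos += length        # advance by the real code length, not CHUNK
    return "".join(out)
```

The table has 2^CHUNK entries; longer maximum code lengths are handled in practice with multi-level tables, which is part of the design space the article surveys.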

  13. Compressed gas fuel storage system

    DOEpatents

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  14. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described, and numerical solutions of test problems are presented. A comparison based on convergence behavior, accuracy, and robustness is given.

  15. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.

  16. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.

  17. Dynamics of Strongly Compressible Turbulence

    NASA Astrophysics Data System (ADS)

    Towery, Colin; Poludnenko, Alexei; Hamlington, Peter

    2015-11-01

    Strongly compressible turbulence, wherein the turbulent velocity fluctuations directly generate compression effects, plays a critical role in many important scientific and engineering problems of interest today, for instance in the processes of stellar formation and also hypersonic vehicle design. This turbulence is very unusual in comparison to "normal," weakly compressible and incompressible turbulence, which is relatively well understood. Strongly compressible turbulence is characterized by large variations in the thermodynamic state of the fluid in space and time, including excited acoustic modes, strong, localized shock and rarefaction structures, and rapid heating due to viscous dissipation. The exact nature of these thermo-fluid dynamics has yet to be discerned, which greatly limits the ability of current computational engineering models to successfully treat these problems. New direct numerical simulation (DNS) results of strongly compressible isotropic turbulence will be presented along with a framework for characterizing and evaluating compressible turbulence dynamics, and a connection will be made between the present diagnostic analysis and the validation of engineering turbulence models.

  18. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  19. Anamorphic transformation and its application to time-bandwidth compression.

    PubMed

    Asghari, Mohammad H; Jalali, Bahram

    2013-09-20

    A general method for compressing the modulation time-bandwidth product of analog signals is introduced. As one of its applications, this physics-based signal grooming, performed in the analog domain, allows a conventional digitizer to sample and digitize the analog signal with variable resolution. The net result is that frequency components that were beyond the digitizer bandwidth can now be captured and, at the same time, the total digital data size is reduced. This compression is lossless and is achieved through a feature selective reshaping of the signal's complex field, performed in the analog domain prior to sampling. Our method is inspired by operation of Fovea centralis in the human eye and by anamorphic transformation in visual arts. The proposed transform can also be performed in the digital domain as a data compression algorithm to alleviate the storage and transmission bottlenecks associated with "big data."

  20. A defect stream function formulation for compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Barnwell, Richard W.; Wahls, Richard A.

    1990-01-01

    Progress to date on the development of a method for turbulent, wall-bounded flow which uses the defect stream function formulation in the outer layer and an analytic law of the wall and wake formulation in the inner region is reviewed. This two-formulation approach avoids the need to computationally resolve the high-gradient inner layer. One of the most appealing recent developments is the transformation of the compressible governing equation for the defect stream function into a linear, second-order differential equation which has analytic solutions for many problems of practical interest. Numerical and analytic results for incompressible and compressible flows are shown to be in excellent agreement with experimental results. In this paper the two-formulation approach is applied to primitive-variable computations. Excellent comparisons with experiment are presented for two compressible flat plate flows.

  1. Fixed-rate compressed floating-point arrays

    SciTech Connect

    Lindstrom, P.

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.
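
    The fixed-rate property described above, in which every block gets the same bit budget so any block can be located and decoded independently, can be illustrated with a toy per-block quantizer. This is an invented sketch of the concept only, not the actual ZFP codec or its C++ array interface.

```python
import numpy as np

# Toy fixed-rate block compression (not ZFP): each block of 4 values is
# encoded with the same bit budget, so blocks are independently
# addressable -- the property that enables random read/write access.

BLOCK, BITS = 4, 8  # 4 values per block, 8 bits per value (invented numbers)

def compress_block(block):
    lo, hi = block.min(), block.max()
    scale = (hi - lo) or 1.0
    q = np.round((block - lo) / scale * (2**BITS - 1)).astype(np.uint16)
    return lo, scale, q  # fixed-size payload per block

def decompress_block(lo, scale, q):
    return lo + q.astype(np.float64) / (2**BITS - 1) * scale

data = np.linspace(0.0, 1.0, 16)
blocks = [compress_block(b) for b in data.reshape(-1, BLOCK)]
restored = np.concatenate([decompress_block(*b) for b in blocks])
```

Because every block occupies the same number of bits, block k starts at a fixed offset k times the block payload size, which is what makes random access cheap; ZFP's variable-rate modes trade this property for accuracy guarantees.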

  2. Fixed-rate compressed floating-point arrays

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.

  3. Advanced application flight experiment breadboard pulse compression radar altimeter program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Design, development and performance of the pulse compression radar altimeter is described. The high resolution breadboard system is designed to operate from an aircraft at 10 Kft above the ocean and to accurately measure altitude, sea wave height and sea reflectivity. The minicomputer controlled Ku band system provides six basic variables and an extensive digital recording capability for experimentation purposes. Signal bandwidths of 360 MHz are obtained using a reflective array compression line. Stretch processing is used to achieve 1000:1 pulse compression. The system range command LSB is 0.62 ns or 9.25 cm. A second order altitude tracker, aided by accelerometer inputs is implemented in the system software. During flight tests the system demonstrated an altitude resolution capability of 2.1 cm and sea wave height estimation accuracy of 10%. The altitude measurement performance exceeds that of the Skylab and GEOS-C predecessors by approximately an order of magnitude.
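
    The pulse-compression principle behind the stretch processing mentioned above can be sketched with a matched filter applied to a linear FM chirp. The sample rate, pulse length, and bandwidth below are invented to keep the sketch small (far below the instrument's 360 MHz), so the time-bandwidth product here is 100 rather than the altimeter's 1000.

```python
import numpy as np

# Matched-filter pulse compression of a linear FM (chirp) pulse:
# correlating the received pulse with a conjugated, time-reversed copy
# collapses a long pulse of length T into a mainlobe of width ~1/B.
fs = 1e6                          # sample rate, Hz (invented)
N = 1000                          # pulse length in samples (T = N/fs = 1 ms)
B = 1e5                           # swept bandwidth, Hz (invented)
t = np.arange(N) / fs
chirp = np.exp(1j * np.pi * (B / (N / fs)) * t**2)   # linear FM pulse

# Matched filter = correlation with the conjugated, time-reversed pulse.
compressed = np.abs(np.convolve(chirp, np.conj(chirp[::-1]), mode="same"))

peak = int(np.argmax(compressed))                         # zero-lag peak
halfwidth = int(np.sum(compressed > compressed[peak] / 2))  # mainlobe width
```

The compression ratio is roughly the time-bandwidth product B*T (100 here); the 1000:1 figure in the record corresponds to a proportionally larger product realized in hardware by the reflective array compression line.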

  4. Anamorphic transformation and its application to time-bandwidth compression.

    PubMed

    Asghari, Mohammad H; Jalali, Bahram

    2013-09-20

    A general method for compressing the modulation time-bandwidth product of analog signals is introduced. As one of its applications, this physics-based signal grooming, performed in the analog domain, allows a conventional digitizer to sample and digitize the analog signal with variable resolution. The net result is that frequency components that were beyond the digitizer bandwidth can now be captured and, at the same time, the total digital data size is reduced. This compression is lossless and is achieved through a feature selective reshaping of the signal's complex field, performed in the analog domain prior to sampling. Our method is inspired by operation of Fovea centralis in the human eye and by anamorphic transformation in visual arts. The proposed transform can also be performed in the digital domain as a data compression algorithm to alleviate the storage and transmission bottlenecks associated with "big data." PMID:24085172

  5. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  6. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  7. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  8. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  9. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  10. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
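
    The mechanism the perceptual quantization matrix plugs into can be sketched as standard DCT quantization of an 8x8 block. The matrix below is a made-up stand-in that merely grows coarser with spatial frequency; the actual IBM/NASA psychophysical formula is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

# DCT transform coding of one 8x8 block: transform, divide by a
# quantization matrix Q, round (this is where bits are saved), then
# invert. A perceptual Q allocates coarser steps where the eye is
# least sensitive; this Q is an invented placeholder with that shape.
rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))

i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
Q = 8.0 + 4.0 * (i + j)          # step size grows with spatial frequency

coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / Q)           # lossy step
restored = idctn(quantized * Q, norm="ortho")
```

Each coefficient's error is bounded by half its quantization step, so shaping Q by a visual-sensitivity model bounds the visibility of the artifacts rather than just their numerical size.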

  11. Logarithmic laws for compressible turbulent boundary layers

    SciTech Connect

    So, R.M.C.; Zhang, H.S.; Gatski, T.B.; Speziale, C.G.

    1994-11-01

    Dimensional similarity arguments proposed by Millikan are used with the Morkovin hypothesis to deduce logarithmic laws for compressible turbulent boundary layers as an alternative to the traditional van Driest analysis. It is shown that an overlap exists between the wall layer and the defect layer, and this leads to logarithmic behavior in the overlap region. The von Karman constant is found to depend parametrically on the Mach number based on the friction velocity, the dimensionless total heat flux, and the specific heat ratio. Even though it remains constant at approximately 0.41 for a freestream Mach number range of 0 to 4.544 with adiabatic wall boundary conditions, it rises sharply as the Mach number increases significantly beyond 4.544. The intercept of the logarithmic law of the wall is found to depend on the Mach number based on the friction velocity, the dimensionless total heat flux, the Prandtl number evaluated at the wall, and the specific heat ratio. On the other hand, the intercept of the logarithmic defect law is parametric in the pressure gradient parameter and all of the aforementioned dimensionless variables except the Prandtl number. A skin friction law is also deduced for compressible boundary layers. The skin friction coefficient is shown to depend on the momentum thickness Reynolds number, the wall temperature ratio, and all of the other parameters already mentioned. 26 refs.
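
    For reference, the incompressible limits of the two laws being generalized are the classical law of the wall and the velocity defect law; the abstract's point is that the constants below (the standard incompressible, adiabatic values, with kappa approximately 0.41) become parametric in the friction-velocity Mach number, total heat flux, and specific heat ratio:

```latex
\frac{u}{u_\tau} = \frac{1}{\kappa}\,\ln\!\left(\frac{y\,u_\tau}{\nu}\right) + B,
\qquad
\frac{U_e - u}{u_\tau} = -\frac{1}{\kappa}\,\ln\!\left(\frac{y}{\delta}\right) + B_1 .
```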

  12. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  13. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  14. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  15. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  16. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  17. Compression of spectral meteorological imagery

    NASA Technical Reports Server (NTRS)

    Miettinen, Kristo

    1993-01-01

    Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems for compression ratios from as little as three to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2D-DCT engine. Spectral relations in the data are exploited through block mean extraction followed by orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.

  18. Isentropic Compression of Multicomponent Mixtures of Fuels and Inert Gases

    NASA Technical Reports Server (NTRS)

    Barragan, Michelle; Julien, Howard L.; Woods, Stephen S.; Wilson, D. Bruce; Saulsberry, Regor L.

    2000-01-01

    In selected aerospace applications of the fuels hydrazine and monomethylhydrazine, conditions can occur which result in the isentropic compression of a multicomponent mixture of fuel and inert gas. One such example is when a driver gas such as helium comes out of solution and mixes with the fuel vapor, which is being compressed. A second example is when product gas from an energetic device mixes with the fuel vapor which is being compressed. Thermodynamic analysis has shown that under isentropic compression, the fuels hydrazine and monomethylhydrazine must be treated as real fluids using appropriate equations of state: the Peng-Robinson equation of state for hydrazine and the Redlich-Kwong-Soave equation of state for monomethylhydrazine. The addition of an inert gas of variable quantity and input temperature and pressure to the fuel compounds the problem for safety design or analysis. This work provides the appropriate thermodynamic analysis of isentropic compression of the two examples cited. In addition to an entropy balance describing the change of state, an enthalpy balance is required. The presence of multiple components in the system requires that appropriate mixing rules are identified and applied to the analysis. This analysis is not currently available.
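
    For reference, the two cubic equations of state named above have the standard pressure-explicit forms below, written for molar volume v, with a and b obtained from the critical properties and alpha(T) an acentric-factor temperature correlation:

```latex
\text{Peng-Robinson:}\quad
p = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v^{2} + 2bv - b^{2}},
\qquad
\text{Redlich-Kwong-Soave:}\quad
p = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v\,(v + b)} .
```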

  19. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that must be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data. PMID:17080858
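
    The prediction-plus-residual structure of lossless floating-point compression can be sketched as follows. This toy uses a last-value predictor and XORs IEEE-754 bit patterns; the paper's data-dependent predictors and entropy coder are more sophisticated, so this is an illustration of the general technique only.

```python
import struct

# Lossless residual coding of floats: predict each value (here, by the
# previous value), XOR the 64-bit patterns, and emit the residual.
# When prediction is good the residuals are zero-heavy, which an
# entropy coder then compresses well. The transform is exactly
# invertible, hence lossless.

def f2b(x):  # float -> 64-bit IEEE-754 pattern
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def b2f(b):  # 64-bit pattern -> float
    return struct.unpack(">d", struct.pack(">Q", b))[0]

def encode(values):
    prev, out = 0, []
    for v in values:
        bits = f2b(v)
        out.append(bits ^ prev)   # residual against the prediction
        prev = bits
    return out

def decode(residuals):
    prev, out = 0, []
    for r in residuals:
        prev ^= r                 # undo the XOR to recover the pattern
        out.append(b2f(prev))
    return out
```

Slowly varying data share sign, exponent, and leading mantissa bits with their predecessors, so the XOR residuals have long runs of zero bits; encoding those runs compactly is where the entropy coder earns the compression.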

  20. Effects of Local Compression on Peroneal Nerve Function in Humans

    NASA Technical Reports Server (NTRS)

    Hargens, Alan R.; Botte, Michael J.; Swenson, Michael R.; Gelberman, Richard H.; Rhoades, Charles E.; Akeson, Wayne H.

    1993-01-01

    A new apparatus was developed to compress the anterior compartment selectively and reproducibly in humans. Thirty-five normal volunteers were studied to determine short-term thresholds of local tissue pressure that produce significant neuromuscular dysfunction. Local tissue fluid pressure adjacent to the deep peroneal nerve was elevated by the compression apparatus and continuously monitored for 2-3 h by the slit catheter technique. Elevation of tissue fluid pressure to within 35-40 mm Hg of diastolic blood pressure (approx. 40 mm Hg of in situ pressure in our subjects) elicited a consistent progression of neuromuscular deterioration including, in order, (a) gradual loss of sensation, as assessed by Semmes-Weinstein monofilaments, (b) subjective complaints, (c) reduced nerve conduction velocity, (d) decreased action potential amplitude of the extensor digitorum brevis muscle, and (e) motor weakness of muscles within the anterior compartment. Generally, higher intracompartmental pressures caused more rapid deterioration of neuromuscular function. In two subjects, when in situ compression levels were 0 and 30 mm Hg, normal neuromuscular function was maintained for 3 h. Threshold pressures for significant dysfunction were not always the same for each functional parameter studied, and the magnitudes of each functional deficit did not always correlate with compression level. This variable tolerance to elevated pressure emphasizes the need to monitor clinical signs and symptoms carefully in the diagnosis of compartment syndromes. The nature of the present studies was short term; longer term compression of myoneural tissues may result in dysfunction at lower pressure thresholds.

  1. Analysis-Driven Lossy Compression of DNA Microarray Images.

    PubMed

    Hernández-Cabronero, Miguel; Blanes, Ian; Pinho, Armando J; Marcellin, Michael W; Serra-Sagristà, Joan

    2016-02-01

    DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on the DNA microarray analysis. This quantizer constrains the maximum relative error introduced into quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1.
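    The bounded-relative-error idea can be illustrated with geometrically growing quantization bins. This is a generic sketch of such a quantizer, not the published RQ design; the bin layout and reconstruction rule are assumptions chosen so the relative-error bound holds exactly:

```python
import math

def rq_index(x, eps):
    """Bin index for a positive pixel value x; bin edges grow
    geometrically as r**k with r = (1 + eps) / (1 - eps)."""
    r = (1 + eps) / (1 - eps)
    return math.floor(math.log(x) / math.log(r))

def rq_reconstruct(k, eps):
    """Dequantized value for bin k, placed so the relative error
    |x_hat - x| / x never exceeds eps anywhere inside the bin."""
    r = (1 + eps) / (1 - eps)
    return (1 + eps) * r ** k
```

With eps = 0.1 every 16-bit intensity maps to one of roughly 55 bins, yet no pixel is perturbed by more than 10% of its own value: bright pixels get coarse absolute steps, while dim (analysis-critical) pixels get fine ones.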

  2. Effects of shock structure on temperature field in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin; Chen, Shiyi

    2014-11-01

    Effects of shock structure on temperature in compressible turbulence were investigated. Small-scale shocklets and large-scale shock waves appeared in the flows driven by solenoidal and compressive forcings (SFT and CFT, respectively). In SFT the temperature had a Kolmogorov spectrum and ramp-cliff structures, while in CFT it obeyed a Burgers spectrum and was dominated by large-scale rarefaction and compression. The power-law exponents for the p.d.f. of large negative dilatation were -2.5 in SFT and -3.5 in CFT, in approximate agreement with model predictions. The isentropic approximation of thermodynamic variables showed that in SFT the deviation from isentropy was reinforced as the turbulent Mach number increased; at similar turbulent Mach number, the variables in CFT were more anisentropic. The transport of temperature was enhanced by the small-scale viscous dissipation and the large-scale pressure-dilatation. The distribution of positive and negative components of pressure-dilatation confirmed the mechanism of negligible pressure-dilatation at small scales. Further, the positive skewness of the p.d.f.s of pressure-dilatation implied that the conversion from kinetic to internal energy by compression was more intense than the opposite process by rarefaction.

  3. Evaluation of nonlinear frequency compression: Clinical outcomes

    PubMed Central

    Glista, Danielle; Scollie, Susan; Bagatto, Marlene; Seewald, Richard; Parsa, Vijay; Johnson, Andrew

    2009-01-01

    This study evaluated prototype multichannel nonlinear frequency compression (NFC) signal processing on listeners with high-frequency hearing loss. This signal processor applies NFC above a cut-off frequency. The participants were hearing-impaired adults (13) and children (11) with sloping, high-frequency hearing loss. Multiple outcome measures were repeated using a modified withdrawal design. These included speech sound detection, speech recognition, and self-reported preference measures. Group level results provide evidence of significant improvement of consonant and plural recognition when NFC was enabled. Vowel recognition did not change significantly. Analysis of individual results allowed for exploration of individual factors contributing to benefit received from NFC processing. Findings suggest that NFC processing can improve high frequency speech detection and speech recognition ability for adult and child listeners. Variability in individual outcomes related to factors such as degree and configuration of hearing loss, age of participant, and type of outcome measure. PMID:19504379

  4. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
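    The core idea — replace a block of samples by a short vector of Chebyshev coefficients — can be sketched with NumPy's Chebyshev routines. The signal and polynomial degree below are illustrative, not taken from the patent:

```python
import numpy as np

# a smooth 1000-sample time series on the Chebyshev interval [-1, 1]
t = np.linspace(-1.0, 1.0, 1000)
y = np.sin(5 * t) + 0.5 * np.cos(13 * t)

# keep 24 Chebyshev coefficients instead of 1000 raw samples
coeffs = np.polynomial.chebyshev.chebfit(t, y, deg=23)
y_hat = np.polynomial.chebyshev.chebval(t, coeffs)

ratio = y.size / coeffs.size              # ~41.7:1 before any entropy coding
max_err = float(np.max(np.abs(y - y_hat)))
```

Smooth telemetry compresses well because the Chebyshev coefficients of smooth functions decay rapidly; the truncation degree directly trades compression ratio against worst-case reconstruction error.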

  5. Compressive behavior of fine sand.

    SciTech Connect

    Martin, Bradley E.; Kabir, Md. E.; Song, Bo; Chen, Wayne

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but is significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic loading and smaller still after dynamic axial loading.

  6. Efficient access of compressed data

    SciTech Connect

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, the repeated use of the technique permits the suppression of a multiple number of different constants. Of particular interest is the application of the constant suppression technique to databases the composite key of which is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme.
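    A minimal sketch of constant suppression with logarithmic random access (the paper's exact header layout is not reproduced here): store only the non-constant runs together with their original offsets, and resolve any index with a binary search over the run table:

```python
import bisect

def compress(seq, const=0):
    """Suppress `const`: keep each maximal non-constant run and
    record (orig_start, comp_start, length) for later binary search."""
    runs, data = [], []
    i = 0
    while i < len(seq):
        if seq[i] == const:
            i += 1
            continue
        j = i
        while j < len(seq) and seq[j] != const:
            j += 1
        runs.append((i, len(data), j - i))
        data.extend(seq[i:j])
        i = j
    return runs, data, const

def access(comp, i):
    """O(log n) random access: locate the last run starting at or before i."""
    runs, data, const = comp
    k = bisect.bisect_right(runs, (i, float('inf'), 0)) - 1
    if k >= 0:
        start, comp_start, length = runs[k]
        if i < start + length:
            return data[comp_start + (i - start)]
    return const                  # index falls in a suppressed stretch
```

Because only run boundaries are stored, clustered constants (the "stable database" case the abstract describes) compress well, and repeating the scheme with a second constant on `data` suppresses multiple constants.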

  7. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.

  8. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and is thus suitable for the fluorescence readout mode. A 2-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  9. Extended testing of compression distillation.

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1972-01-01

    During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering usable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit, developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes, to assure the recovery of sterile potable water from urine and treated urinal flush water.

  10. Data compression in digitized lines

    NASA Technical Reports Server (NTRS)

    Thapa, Khagendra

    1990-01-01

    The problem of data compression is very important in digital photogrammetry, computer assisted cartography, and GIS/LIS. In addition, it is also applicable in many other fields such as computer vision, image processing, pattern recognition, and artificial intelligence. Consequently, there are many algorithms available to solve this problem, but none of them are considered to be satisfactory. In this paper, a new method of finding critical points in a digitized curve is explained. This technique, based on the normalized symmetric scattered matrix, is good for both critical point detection and data compression. In addition, the critical points detected by this algorithm are compared with those detected by zero-crossings.

  11. Compressed sensing for phase retrieval.

    PubMed

    Newton, Marcus C

    2012-05-01

    To date there are several iterative techniques that enjoy moderate success when reconstructing phase information, where only intensity measurements are made. There remains, however, a number of cases in which conventional approaches are unsuccessful. In the last decade, the theory of compressed sensing has emerged and provides a route to solving convex optimisation problems exactly via ℓ1-norm minimization. Here the application of compressed sensing to phase retrieval in a nonconvex setting is reported. An algorithm is presented that applies reweighted ℓ1-norm minimization to yield accurate reconstruction where conventional methods fail.
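    Reweighted ℓ1 minimization can be sketched as a loop of weighted lasso solves. This toy version uses plain iterative soft-thresholding (ISTA) as the inner solver and a standard linear sparse-recovery problem, both assumptions — the paper's phase-retrieval solver and parameters differ:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1(A, b, lam=0.05, outer=4, inner=300, eps=1e-3):
    """Reweighted l1 sketch (after Candes, Wakin & Boyd): repeatedly
    solve a weighted lasso, then set w = 1 / (|x| + eps) so small
    coefficients are pushed harder towards zero on the next round."""
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # ISTA step-size bound
    for _ in range(outer):
        for _ in range(inner):             # inner solver: plain ISTA
            x = soft(x - A.T @ (A @ x - b) / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)
    return x

# toy recovery: 40 measurements of a 5-sparse, 100-dimensional signal
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[5, 20, 40, 60, 90]] = [1.0, -1.0, 1.0, 1.0, -1.0]
x_rec = reweighted_l1(A, A @ x0)
```

The reweighting is what lets the nonconvex (closer to ℓ0) objective outperform a single ℓ1 solve: entries the previous round left large are penalized lightly, entries near zero are penalized heavily.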

  12. Compressing the inert doublet model

    NASA Astrophysics Data System (ADS)

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-01

    The inert doublet model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. This stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. We derive new limits on the compressed inert doublet model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  13. Compressing the Inert Doublet Model

    DOE PAGES

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-16

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. In conclusion, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  14. Management-oriented analysis of sediment yield time compression

    NASA Astrophysics Data System (ADS)

    Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed

    2016-04-01

    The understanding of inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield is often highly dependent on the occurrence of a few large events which produce the majority of sediments, such as in the Mediterranean. This phenomenon is referred to as time compression, and the relevance of its consideration grows with the increase in magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, providing high resolution but demanding analysis (in terms of data amount, required data precision and methods). In order to provide an alternative simplified approach, the monthly and yearly time compressions were evaluated in eight Mediterranean catchments (of the R-OSMed network), representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27100 Mg•km-2•a-1, and the monthly sediment yield between 0 and ~11600 Mg•km-2•month-1. The catchments' sediment yield was unequally distributed at inter- and intra-annual scale, and large differences were observed between the catchments. Two types of time compression were distinguished: (i) inter-annual (based on annual values) and (ii) intra-annual (based on monthly values). Four different rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff, or sediment yield, (ii) low time compression of rainfall and runoff, but high compression of sediment yield, (iii) low compression of rainfall and high compression of runoff and sediment yield, and (iv) low, medium and high compression of rainfall, runoff and sediment yield, respectively. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the two latter were present. This implies that high sediment yields occurred in

  15. Compression fractures of the back

    MedlinePlus

    ... Meirhaeghe J, et al. Efficacy and safety of balloon kyphoplasty compared with non-surgical care for vertebral compression fracture (FREE): a randomised controlled trial. Lancet . 2009;373(9668):1016-24. PMID: 19246088 www.ncbi.nlm.nih.gov/pubmed/19246088 .

  16. Culture: Copying, Compression, and Conventionality

    ERIC Educational Resources Information Center

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, 2008; Smith, Tamariz, & Kirby, 2013). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning…

  17. Compressive passive millimeter wave imager

    SciTech Connect

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C

    2015-01-27

    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.

  18. Teaching Time-Space Compression

    ERIC Educational Resources Information Center

    Warf, Barney

    2011-01-01

    Time-space compression shows students that geographies are plastic, mutable and forever changing. This paper justifies the need to teach this topic, which is rarely found in undergraduate course syllabi. It addresses the impacts of transportation and communications technologies to explicate its dynamics. In summarizing various conceptual…

  19. Compression testing of flammable liquids

    NASA Technical Reports Server (NTRS)

    Briles, O. M.; Hollenbaugh, R. P.

    1979-01-01

    Small cylindrical test chamber determines catalytic effect of given container material on fuel that might contribute to accidental deflagration or detonation below expected temperature under adiabatic compression. Device is useful to producers and users of flammable liquids and to safety specialists.

  20. Perceptually lossy compression of documents

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.; Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.

    1997-06-01

    The main cost of owning a facsimile machine consists of the telephone charges for the communications, thus short transmission times are a key feature for facsimile machines. Similarly, on a packet-routed service such as the Internet, a low number of packets is essential to avoid operator wait times. Concomitantly, the user expectations have increased considerably. In facsimile, the switch from binary to full color increases the data size by a factor of 24. On the Internet, the switch from plain text American Standard Code for Information Interchange (ASCII) encoded files to files marked up in the Hypertext Markup Language (HTML) with ample embedded graphics has increased the size of transactions by several orders of magnitude. A common compression method for raster files in these applications is the Joint Photographic Experts Group (JPEG) method, because efficient implementations are readily available. In this method the implementors design the discrete quantization tables (DQT) and the Huffman tables (HT) to maximize the compression factor while maintaining the introduced artifacts at the threshold of perceptual detectability. Unfortunately the achieved compression rates are unsatisfactory for applications such as color facsimile and World Wide Web (W3) browsing. We present a design methodology for image-independent DQTs that, while producing perceptually lossy data, does not impair the reading performance of users. Combined with a text sharpening algorithm that compensates for scanning device limitations, the methodology presented in this paper allows us to achieve compression ratios near 1:100.
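    The DQT trade-off the authors tune can be illustrated by quantizing an 8×8 DCT block against a scaled copy of the baseline JPEG luminance table (ITU-T T.81, Annex K). The uniform scaling used here is an illustrative stand-in, not the paper's design methodology:

```python
import numpy as np
from scipy.fft import dctn

# baseline JPEG luminance quantization table (ITU-T T.81, Annex K)
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize_block(block, scale=1.0):
    """Coarser tables (scale > 1) leave fewer nonzero coefficients
    for the entropy coder: higher compression, more distortion."""
    coeffs = dctn(block - 128.0, norm='ortho')
    return np.round(coeffs / (Q * scale))

# a smooth ramp block: its energy concentrates in low frequencies
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 8 + 60
nnz_fine = int(np.count_nonzero(quantize_block(block, 1.0)))
nnz_coarse = int(np.count_nonzero(quantize_block(block, 4.0)))
```

Designing the DQT means choosing, per frequency, how large these divisors can be before the discarded detail becomes perceptually detectable.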

  1. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  2. Device Assists Cardiac Chest Compression

    NASA Technical Reports Server (NTRS)

    Eichstadt, Frank T.

    1995-01-01

    Portable device facilitates effective and prolonged cardiac resuscitation by chest compression. Developed originally for use in absence of gravitation, also useful in terrestrial environments and situations (confined spaces, water rescue, medical transport) not conducive to standard manual cardiopulmonary resuscitation (CPR) techniques.

  3. COMPRESSIBLE FLOW, ENTRAINMENT, AND MEGAPLUME

    EPA Science Inventory

    It is generally believed that low Mach number, i.e., low-velocity, flow may be assumed to be incompressible flow. Under steady-state conditions, an exact equation of continuity may then be used to show that such flow is non-divergent. However, a rigorous, compressible fluid-dynam...

  4. Hyperspectral image compressive projection algorithm

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Allen, David W.

    2009-05-01

    We describe a compressive projection algorithm and experimentally assess its performance when used with a Hyperspectral Image Projector (HIP). The HIP is being developed by NIST for system-level performance testing of hyperspectral and multispectral imagers. It projects a two-dimensional image into the unit under test (UUT), whereby each pixel can have an independently programmable arbitrary spectrum. To efficiently project a single frame of dynamic realistic hyperspectral imagery through the collimator into the UUT, a compression algorithm has been developed whereby the series of abundance images and corresponding endmember spectra that comprise the image cube of that frame are first computed using an automated endmember-finding algorithm such as the Sequential Maximum Angle Convex Cone (SMACC) endmember model. Then these endmember spectra are projected sequentially on the HIP spectral engine in sync with the projection of the abundance images on the HIP spatial engine, during the single-frame exposure time of the UUT. The integrated spatial image captured by the UUT is the endmember-weighted sum of the abundance images, which results in the formation of a datacube for that frame. Compressive projection enables a much smaller set of broadband spectra to be projected than monochromatic projection, and thus utilizes the inherent multiplex advantage of the HIP spectral engine. As a result, radiometric brightness and projection frame rate are enhanced. In this paper, we use a visible breadboard HIP to experimentally assess the compressive projection algorithm performance.
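    The endmember decomposition at the heart of compressive projection reduces to a tensor contraction: the image integrated by the UUT is the endmember-weighted sum of abundance images. A toy illustration (sizes and data are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.random((3, 16))       # 3 endmember spectra, 16 spectral bands
A = rng.random((3, 8, 8))     # matching abundance images, 8x8 pixels

# projecting endmember k's spectrum while displaying abundance image k,
# all within one UUT exposure, integrates to the weighted sum:
cube = np.tensordot(A, E, axes=(0, 0))      # (8, 8, 16) datacube

# reference: form the same cube pixel by pixel
cube_ref = np.einsum('kxy,kb->xyb', A, E)
```

Only 3 broadband spectra are projected instead of 16 monochromatic ones, which is the multiplex advantage the abstract describes.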

  5. Advection by polytropic compressible turbulence

    NASA Astrophysics Data System (ADS)

    Ladeinde, F.; O'Brien, E. E.; Cai, X.; Liu, W.

    1995-11-01

    Direct numerical simulation (DNS) is used to examine scalar correlation in low Mach number, polytropic, homogeneous, two-dimensional turbulence (Ms ≤ 0.7) for which the initial conditions, Reynolds, and Mach numbers have been chosen to produce three types of flow suggested by theory: (a) nearly incompressible flow dominated by vorticity, (b) nearly pure acoustic turbulence dominated by compression, and (c) nearly statistical equipartition of vorticity and compression. Turbulent flows typical of each of these cases have been generated and a passive scalar field imbedded in them. The results show that a finite-difference based computer program is capable of producing results that are in reasonable agreement with pseudospectral calculations. Scalar correlations have been calculated from the DNS results and the relative magnitudes of terms in low-order scalar moment equations determined. It is shown that the scalar equation terms with explicit compressibility are negligible on a long time-averaged basis. A physical-space EDQNM model has been adapted to provide another estimate of scalar correlation evolution in these same two-dimensional, compressible cases. The use of the solenoidal component of turbulence energy, rather than total turbulence energy, in the EDQNM model gives results closer to those from DNS in all cases.

  6. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
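    The adapt-by-selection idea can be illustrated with Golomb-Rice codes, the family of split-sample options a Rice coder chooses among. This sketch covers only the per-block code selection, not the full coder or its preprocessor:

```python
def rice_encode(n, k):
    """Code a nonnegative integer: unary quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = '1' * q + '0'
    return bits + format(r, f'0{k}b') if k else bits

def rice_decode(bits, k):
    """Inverse of rice_encode; returns (value, bits consumed)."""
    q = bits.index('0')                       # length of the unary part
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k

def best_k(block, kmax=8):
    """Adaptivity: pick the option whose total coded length is shortest."""
    return min(range(kmax),
               key=lambda k: sum(len(rice_encode(n, k)) for n in block))
```

Small k suits low-entropy blocks (short remainders), large k suits high-entropy blocks (short unary parts); re-selecting k per block is what tracks the varying source entropy.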

  7. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    SciTech Connect

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are: limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which are causing reduced reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers are running down to 50% with the mean at about 80%. The major cause for this large disparity is due to installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are: cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby, causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment with the best performance in the 75% to 80% range. 
The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  8. Physical examination of upper extremity compressive neuropathies.

    PubMed

    Popinchalk, Samuel P; Schaffer, Alyssa A

    2012-10-01

    A thorough history and physical examination are vital to the assessment of upper extremity compressive neuropathies. This article summarizes relevant anatomy and physical examination findings associated with upper extremity compressive neuropathies.

  9. Sensorineural deafness due to compression chamber noise.

    PubMed

    Hughes, K B

    1976-05-01

    A case of unilateral sensorineural deafness following exposure to compression chamber noise is described. A review of the current literature concerning the otological hazards of compression chambers is made. The possible pathological basis is discussed.

  10. Length-Limited Data Transformation and Compression

    SciTech Connect

    Senecal, Joshua G.

    2005-09-01

    Scientific computation is used for the simulation of increasingly complex phenomena, and generates data sets of ever increasing size, often on the order of terabytes. All of this data creates difficulties. Several problems that have been identified are (1) the inability to effectively handle the massive amounts of data created, (2) the inability to get the data off the computer and into storage fast enough, and (3) the inability of a remote user to easily obtain a rendered image of the data resulting from a simulation run. This dissertation presents several techniques that were developed to address these issues. The first is a prototype bin coder based on variable-to-variable length codes. The codes utilized are created through a process of parse tree leaf merging, rather than the common practice of leaf extension. This coder is very fast and its compression efficiency is comparable to that of other state-of-the-art coders. The second contribution is the Piecewise-Linear Haar (PLHaar) transform, a reversible n-bit to n-bit wavelet-like transform. PLHaar is simple to implement, ideal for environments where transform coefficients must be kept the same size as the original data, and is the only n-bit to n-bit transform suitable for both lossy and lossless coding.
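    PLHaar's definition is not reproduced here; as context, the classic reversible integer Haar (S) transform below shows the reversibility mechanism such transforms rely on. Note that, unlike PLHaar, its difference band needs one extra bit, which is exactly the limitation an n-bit to n-bit transform removes:

```python
def s_forward(a, b):
    """Classic S-transform: rounded integer average and difference.
    Exactly invertible, but d spans one more bit than the inputs --
    the drawback PLHaar was designed to avoid."""
    s = (a + b) >> 1
    d = a - b
    return s, d

def s_inverse(s, d):
    """Invert: the floor shift on d matches the forward rounding."""
    b = s - (d >> 1)
    a = b + d
    return a, b
```

Reversibility holds for all integers because `(a + b) >> 1` and `(a - b) >> 1` discard the same parity bit, so the inverse recovers it exactly.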

  11. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.
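    The CCA's two outputs — cluster features and a feature map decodable by table look-up — can be sketched with ordinary k-means; SciPy's `kmeans2` here stands in for the paper's spatially local clustering, which is a simplification, and the toy "image" is invented:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# toy "multispectral image": 1000 pixels x 4 bands from 3 spectral classes
means = np.array([[10.0, 40, 80, 20], [200, 180, 90, 120], [90, 90, 90, 90]])
labels_true = rng.integers(0, 3, 1000)
pixels = means[labels_true] + rng.normal(0.0, 2.0, (1000, 4))

# cluster features + feature map: the CCA's two outputs
centroids, feature_map = kmeans2(pixels, k=3, seed=1, minit='++')

recon = centroids[feature_map]        # look-up-table decoding
rmse = float(np.sqrt(np.mean((recon - pixels) ** 2)))
```

The 1000×4 pixel array is represented by 3 centroid spectra plus 1000 small labels; the feature map is then itself a low-entropy sequence well suited to source coding, as the abstract notes.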

  12. Variable delivery, fixed displacement pump

    DOEpatents

    Sommars, Mark F.

    2001-01-01

    A variable delivery, fixed displacement pump comprises a plurality of pistons reciprocated within corresponding cylinders in a cylinder block. The pistons are reciprocated by rotation of a fixed angle swash plate connected to the pistons. The pistons and cylinders cooperate to define a plurality of fluid compression chambers each have a delivery outlet. A vent port is provided from each fluid compression chamber to vent fluid therefrom during at least a portion of the reciprocal stroke of the piston. Each piston and cylinder combination cooperates to close the associated vent port during another portion of the reciprocal stroke so that fluid is then pumped through the associated delivery outlet. The delivery rate of the pump is varied by adjusting the axial position of the swash plate relative to the cylinder block, which varies the duration of the piston stroke during which the vent port is closed.

  13. Lower body predictors of glenohumeral compressive force in high school baseball pitchers.

    PubMed

    Keeley, David W; Oliver, Gretchen D; Dougherty, Christopher P; Torry, Michael R

    2015-06-01

    The purpose of this study was to better understand how lower body kinematics relate to peak glenohumeral compressive force and to develop a regression model accounting for variability in peak glenohumeral compressive force. Data were collected for 34 pitchers. Average peak glenohumeral compressive force was 172% ± 33% body weight (1334.9 ± 257.5 N). Correlation coefficients revealed 5 kinematic variables correlated to peak glenohumeral compressive force (P < .01, α = .025). Regression models indicated 78.5% of the variance in peak glenohumeral compressive force (R2 = .785, P < .01) was explained by stride length, lateral pelvis flexion at maximum external rotation, and axial pelvis rotation velocity at release. These results indicate peak glenohumeral compressive force increases with a combination of decreased stride length, increased pelvic tilt at maximum external rotation toward the throwing arm side, and increased pelvis axial rotation velocity at release. Thus, it may be possible to decrease peak glenohumeral compressive force by optimizing the movements of the lower body while pitching. Focus should be on both training and conditioning the lower extremity in an effort to increase stride length, increase pelvis tilt toward the glove hand side at maximum external rotation, and decrease pelvis axial rotation at release. PMID:25734579
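
    A multiple regression of the kind reported can be illustrated with ordinary least squares. The sketch below uses synthetic stand-ins for the three predictors (stride length, pelvis flexion, pelvis rotation velocity), with hypothetical coefficient signs matching the reported directions; it is not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 34  # same sample size as the study
# hypothetical standardized predictors (stride length, flexion, rotation velocity)
X = rng.normal(size=(n, 3))
# synthetic response: force decreases with stride length, rises with the others
y = -0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(scale=0.3, size=n)

A = np.column_stack([np.ones(n), X])            # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares fit
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()                  # proportion of variance explained
```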

  15. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data, called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware. PMID:24524158
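
    Referential compression as described, storing only differences against a reference, can be sketched with a naive greedy matcher. FRESCO's actual algorithm is far faster (and the function names below are hypothetical); this toy version only illustrates the encode/decode round trip.

```python
def ref_compress(target, reference, min_match=4):
    """Greedy referential compression sketch: encode target as
    (ref_offset, length) matches against reference, plus literals."""
    out, i = [], 0
    while i < len(target):
        best_len, best_off = 0, -1
        # naive O(n*m) search for the longest reference match at position i
        for off in range(len(reference)):
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_len, best_off = l, off
        if best_len >= min_match:
            out.append(("match", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", target[i]))
            i += 1
    return out

def ref_decompress(entries, reference):
    parts = []
    for e in entries:
        if e[0] == "match":
            _, off, length = e
            parts.append(reference[off:off + length])
        else:
            parts.append(e[1])
    return "".join(parts)

ref = "ACGTACGTGGA"
tgt = "ACGTACCTGGA"   # one substitution relative to the reference
enc = ref_compress(tgt, ref)
```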

  16. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method by analyzing the IQ measurements versus the compression parameters from a number of compressed images. The third step is to compress the given image to the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regressed models. The IQ may be specified as a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified as an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
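
    The three-step scenario can be mimicked end to end with a stand-in codec: uniform quantization replaces JPEG/BPG here (an assumption made purely for self-containment), IQ is measured by PSNR, and a linear regression of PSNR against the log of the quantization step is inverted to hit a target IQ.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)

# Step 1: "compress" at several parameter settings (uniform quantization
# stands in for a real codec) and record the IQ for each.
steps = np.array([2, 4, 8, 16, 32], dtype=float)
iq = np.array([psnr(img, np.round(img / s) * s) for s in steps])

# Step 2: regress IQ against log2(step) -- PSNR falls ~6 dB per doubling.
slope, intercept = np.polyfit(np.log2(steps), iq, 1)

# Step 3: invert the regressed model to pick the parameter for a target IQ.
target_psnr = 40.0
chosen_step = 2 ** ((target_psnr - intercept) / slope)
achieved = psnr(img, np.round(img / chosen_step) * chosen_step)
```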

  17. Multiphase, Multicomponent Compressibility in Geothermal Reservoir Engineering

    SciTech Connect

    Macias-Chapa, L.; Ramey, H.J. Jr.

    1987-01-20

    Coefficients of compressibility below the bubble point were computed with a thermodynamic model for single- and multicomponent systems. Results showed coefficients of compressibility below the bubble point larger than the gas coefficient of compressibility at the same conditions. Two-phase compressibilities computed in the conventional way are underestimated and may lead to errors in reserve estimation and well test analysis. 10 refs., 9 figs.

  19. Apparatus for measuring tensile and compressive properties of solid materials at cryogenic temperatures

    DOEpatents

    Gonczy, John D.; Markley, Finley W.; McCaw, William R.; Niemann, Ralph C.

    1992-01-01

    An apparatus for evaluating the tensile and compressive properties of material samples at very low or cryogenic temperatures employs a stationary frame and a dewar mounted below the frame. A pair of coaxial cylindrical tubes extend downward towards the bottom of the dewar. A compressive or tensile load is generated hydraulically and is transmitted by the inner tube to the material sample. The material sample is located near the bottom of the dewar in a liquid refrigerant bath. The apparatus employs a displacement measuring device, such as a linear variable differential transformer, to measure the deformation of the material sample relative to the amount of compressive or tensile force applied to the sample.

  20. Distributed Relaxation Multigrid and Defect Correction Applied to the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Diskin, B.; Brandt, A.

    1999-01-01

    The distributed-relaxation multigrid and defect-correction methods are applied to the two-dimensional compressible Navier-Stokes equations. The formulation is intended for high Reynolds number applications and several applications are made at a laminar Reynolds number of 10,000. A staggered-grid arrangement of variables is used; the coupled pressure and internal energy equations are solved together with multigrid, requiring a block 2x2 matrix solution. Textbook multigrid efficiencies are attained for incompressible and slightly compressible simulations of the boundary layer on a flat plate. Textbook efficiencies are obtained for compressible simulations up to Mach numbers of 0.7 for a viscous wake simulation.

  1. Optimum number of technical replicates for the measurement of compression of lamb meat.

    PubMed

    Hoban, J M; van de Ven, R J; Hopkins, D L

    2016-05-01

    Up to six (average 4.63) replicate compression values were collected on cooked m. semimembranosus of lambs that had been raised at six sites across southern Australia (n=1817). Measurements on each sample were made with one of two Lloyd Texture analyser machines, with each machine having a 0.63 cm diameter plunger. Based on a log-normal model with common variance on the log scale for within-sample replicate results, estimates of the within-sample variability of compression values were obtained, resulting in a quality control procedure for compression testing based on the coefficient of variation. PMID:26775151
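
    A quality-control statistic of the kind described -- a coefficient of variation derived from a log-normal model with common within-sample variance on the log scale -- might be computed as follows. The replicate values below are hypothetical, not the study's data.

```python
import numpy as np

def replicate_cv(replicates):
    """Within-sample coefficient of variation under a log-normal model:
    pool the within-sample variance of log-values across samples,
    then CV = sqrt(exp(s2) - 1)."""
    logs = [np.log(np.asarray(r, dtype=float)) for r in replicates]
    ss = sum(((l - l.mean()) ** 2).sum() for l in logs)  # pooled sum of squares
    df = sum(len(l) - 1 for l in logs)                   # pooled degrees of freedom
    s2 = ss / df
    return np.sqrt(np.exp(s2) - 1)

# hypothetical compression replicates (N) for three cooked samples
samples = [[21.0, 22.5, 20.8, 23.1],
           [35.2, 33.9, 36.4],
           [28.0, 27.1, 29.5, 28.8, 26.9]]
cv = replicate_cv(samples)
# a QC rule could then flag any sample whose replicate spread exceeds the pooled CV
```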

  2. General-Purpose Compression for Efficient Retrieval.

    ERIC Educational Resources Information Center

    Cannane, Adam; Williams, Hugh E.

    2001-01-01

    Discusses compression of databases that reduces space requirements and retrieval times; considers compression of documents in text databases based on semistatic modeling with words; and proposes a scheme for general purpose compression that can be applied to all types of data stored in large collections. (Author/LRW)

  3. Compressibility of liquid-metallic hydrogen

    NASA Astrophysics Data System (ADS)

    MacDonald, A. H.

    1983-05-01

    An expression for the compressibility κ of liquid-metallic hydrogen, derived within adiabatic and linear screening approximations, is presented. Terms in the expression for κ have been associated with Landau parameters of the two-component Fermi liquid. The compressibility found for the liquid state is much larger than the compressibility which would be expected in the solid state.

  4. Controlling variability.

    PubMed

    Sanger, Terence D

    2010-11-01

    In human motor control, there is uncertainty in both estimation of initial sensory state and prediction of the outcome of motor commands. With practice, increasing precision can often be achieved, but such precision incurs costs in time, effort, and neural resources. Therefore, motor planning must account for variability, uncertainty, and noise, not just at the endpoint of movement but throughout the movement. The author presents a mathematical basis for understanding the time course of uncertainty during movement. He shows that it is possible to achieve accurate control of the endpoint of a movement even with highly inaccurate and variable controllers. The results provide a first step toward a theory of optimal control for variable, uncertain, and noisy systems that must nevertheless accomplish real-world tasks reliably.

  5. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning of new mixtures, and quality assurance. Regression techniques are most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster-regression technique is applied to estimating the compressive strength of concrete, and a novel state-of-the-art method is proposed for predicting concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors when estimating concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength within each cluster. Experiments show that clustering combined with regression gives the smallest errors for predicting the compressive strength of concrete, and that the fuzzy clustering algorithm C-means performs better than the K-means algorithm. PMID:25374939
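
    The two-stage cluster-then-regress idea can be sketched with a small k-means and one least-squares model per cluster. The mix features, coefficients, and data below are synthetic illustrations, not the paper's dataset or its fuzzy C-means variant.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with deterministic (evenly spaced) initialization."""
    C = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        lab = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return C, lab

rng = np.random.default_rng(0)
# hypothetical mix features (cement, water) for two mix families
X = np.vstack([rng.normal([300, 180], 10, (40, 2)),
               rng.normal([450, 150], 10, (40, 2))])
# synthetic strength: rises with cement content, falls with water content
y = 0.1 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 1, 80)

# stage 1: cluster similar mixes; stage 2: one regression per cluster
C, lab = kmeans(X, 2)
models = {}
for j in range(2):
    Xi, yi = X[lab == j], y[lab == j]
    A = np.column_stack([np.ones(len(Xi)), Xi])
    models[j] = np.linalg.lstsq(A, yi, rcond=None)[0]

def predict(x):
    j = np.linalg.norm(C - x, axis=1).argmin()   # route to the nearest cluster
    return models[j] @ np.concatenate([[1.0], x])
```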

  6. Growing concern following compression mammography.

    PubMed

    van Netten, Johannes Pieter; Hoption Cann, Stephen; Thornton, Ian; Finegan, Rory

    2016-01-01

    A patient without clinical symptoms had a mammogram in October 2008. The procedure caused intense persistent pain, swelling and development of a haematoma following mediolateral left breast compression. Three months later, a 9×11 cm mass developed within the same region. Core biopsies showed a necrotizing high-grade ductal carcinoma, with a high mitotic index. Owing to the extensive size of the mass, the patient began chemotherapy, followed by trastuzumab and later radiotherapy, to obtain clear margins for a subsequent mastectomy. The mastectomy in October 2009 revealed an inflammatory carcinoma, with 2 of 3 nodes infiltrated by the tumour. The stage IIIC tumour, oestrogen and progesterone receptor negative, was highly HER2 positive. A recurrence led to further chemotherapy in February 2011. In July 2011, another recurrence was removed from the mastectomy scar. She died of progressive disease in 2012. In this article, we discuss the potential influence of compression on the natural history of the tumour. PMID:27581236

  7. Using autoencoders for mammogram compression.

    PubMed

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

    This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually very large, training of autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and the structural similarity index. The experimental results show that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method.
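
    Training on patches rather than whole images requires only a reversible patch-extraction step. Below is a minimal numpy sketch of that step, assuming non-overlapping square patches (the record does not specify the patching scheme); the flattened rows would be the training vectors for a patch autoencoder.

```python
import numpy as np

def to_patches(img, p):
    """Split a 2-D image into non-overlapping p-by-p patches,
    flattened to rows -- the training vectors for a patch autoencoder."""
    H, W = img.shape
    img = img[:H - H % p, :W - W % p]                    # crop to a multiple of p
    gh, gw = img.shape[0] // p, img.shape[1] // p
    patches = img.reshape(gh, p, gw, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)

def from_patches(rows, shape, p):
    """Inverse of to_patches for a (cropped) image of the given shape."""
    gh, gw = shape[0] // p, shape[1] // p
    return rows.reshape(gh, gw, p, p).transpose(0, 2, 1, 3).reshape(gh * p, gw * p)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
rows = to_patches(img, 8)              # 64 patches of 64 values each
back = from_patches(rows, (64, 64), 8) # lossless reassembly
```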

  8. Frost heave in compressible soils

    NASA Astrophysics Data System (ADS)

    Peppin, Stephen; Majumdar, Apala; Sander, Graham

    2010-05-01

    Recent frost heave experiments on compressible soils find no pore ice in the soil near the ice lenses (no frozen fringe). These results confirm early observations of Beskow that in clays the soil between ice lenses is "soft and unfrozen" but have yet to be explained theoretically. Recently it has been suggested that periodic ice lens formation in the absence of a frozen fringe may be due to a morphological instability of the ice-soil interface. Here we use this concept to develop a mathematical model of frost heave in compressible soils. The theory accounts for heave, overburden effects and soil consolidation. In the limit of a rigid porous medium a relation is obtained between the critical morphological number and the empirical segregation potential. Analytical and numerical solutions are found, and compared with the results of unidirectional solidification experiments.

  9. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support for diagnostic and follow-up procedures. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend toward cloud-computing applications has its own limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, optimal use of the information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of the compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317

  11. Application of wavelet packet transform to compressing Raman spectra data

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    The wavelet transform has joined the Fourier transform as an established data-processing method in analytical fields, with applications in de-noising, compression, variable reduction, and signal suppression. Raman spectroscopy (RS) is characterized by frequency shifts that carry molecular information. Every substance has its own characteristic Raman spectrum, from which the structure, components, concentrations, and other properties of a sample can be analyzed easily, making RS a powerful analytical tool for detection and identification. Many RS databases exist, but Raman spectral data require large storage space and long search times. In this paper, the wavelet packet transform is chosen to compress the Raman spectra of several benzene-series compounds. The results show that the energy retained after compression is as high as 99.9%, while the percentage of zero coefficients is 87.50%. It is concluded that the wavelet packet transform is of practical significance for compressing RS data.
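
    A compression experiment of this kind -- transform, zero out small coefficients, then report energy retained and the fraction of zeros -- can be reproduced with a hand-rolled Haar packet transform. This is a simplification standing in for the paper's wavelet packet, run on a synthetic spectrum rather than real Raman data.

```python
import numpy as np

def haar_step(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass) band
    return a, d

def haar_packet(x, levels):
    """Full packet-style Haar decomposition: split every band at each level."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        bands = [b for band in bands for b in haar_step(band)]
    return np.concatenate(bands)

# synthetic "spectrum": two peaks on a smooth baseline (length 256)
t = np.arange(256)
spec = 5 + np.exp(-((t - 60) ** 2) / 20) + 0.5 * np.exp(-((t - 180) ** 2) / 50)

coeffs = haar_packet(spec, levels=4)
thr = 0.01 * np.abs(coeffs).max()                 # keep coefficients above 1% of max
kept = np.where(np.abs(coeffs) >= thr, coeffs, 0.0)

energy_retained = (kept ** 2).sum() / (coeffs ** 2).sum()
zero_fraction = np.mean(kept == 0)
```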

  12. Digital filtering for data compression in telemetry systems

    NASA Astrophysics Data System (ADS)

    Bell, R. M.

    There are many obstacles to using data compression in a telemetry system. Non-linear quantization is often too lossy, and the data is too highly structured to make variable-length entropy codes practical. This paper describes a lossless telemetry data compression system that was built using digital FIR filters. The method of compression takes advantage of the fact that the optimal Nyquist sampling rate is rarely achievable due to two factors: (1) Sensor/transducers are not bandlimited to the frequencies of interest; and (2) Accurate, high-order analog filters are not available to perform effective band limiting and prevent aliasing. Real-time digital filtering can enhance the performance of a typical sampling system so that it approaches Nyquist sampling rates, effectively compressing the amount of data and reducing transmission bandwidth. The system that was built reduced the sampling rate of 14 high-frequency vibration channels by a factor of two, and reduced the bandwidth of the data link from 1.8 Mbps to 1.28 Mbps. The entire circuit uses seven function-specific, digital-filter DSPs operating in parallel (two 128-tap FIR filters can be implemented on each Motorola DSP56200), one EPROM and a Programmable Logic Device as the controller.

  13. Digital filtering for data compression in telemetry systems

    SciTech Connect

    Bell, R.M.

    1994-08-01

    There are many obstacles to using data compression in a telemetry system. Non-linear quantization is often too lossy, and the data is too highly structured to make variable-length entropy codes practical. This paper describes a lossless telemetry data compression system that was built using digital FIR filters. The method of compression takes advantage of the fact that the optimal Nyquist sampling rate is rarely achievable due to two factors: (1) Sensor/transducers are not bandlimited to the frequencies of interest, and (2) Accurate, high-order analog filters are not available to perform effective band limiting and prevent aliasing. Real-time digital filtering can enhance the performance of a typical sampling system so that it approaches Nyquist sampling rates, effectively compressing the amount of data and reducing transmission bandwidth. The system that was built reduced the sampling rate of 14 high-frequency vibration channels by a factor of two, and reduced the bandwidth of the data link from 1.8 Mbps to 1.28 Mbps. The entire circuit uses seven function-specific, digital-filter DSPs operating in parallel (two 128-tap FIR filters can be implemented on each Motorola DSP56200), one EPROM and a Programmable Logic Device as the controller.
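
    The core operation in both records above -- digitally low-pass filtering a channel so it can be resampled at half the rate without aliasing -- can be sketched with a windowed-sinc FIR filter. The tap count, cutoff, and test signals below are illustrative assumptions, not the parameters of the system described.

```python
import numpy as np

def lowpass_fir(num_taps=101, cutoff=0.25):
    """Windowed-sinc low-pass FIR (cutoff as a fraction of the sample rate)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal low-pass impulse response
    h *= np.hamming(num_taps)                  # window to tame stopband ripple
    return h / h.sum()                         # normalize to unity DC gain

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# in-band 50 Hz vibration signal plus out-of-band 400 Hz "sensor noise"
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

h = lowpass_fir(cutoff=0.2)                    # pass below ~200 Hz
y = np.convolve(x, h, mode="same")[::2]        # filter, then halve the sample rate
```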

  14. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods. PMID:20875970

  15. Antiproton compression and radial measurements

    SciTech Connect

    Andresen, G. B.; Bowe, P. D.; Hangst, J. S.; Bertsche, W.; Butler, E.; Charlton, M.; Humphries, A. J.; Jenkins, M. J.; Joergensen, L. V.; Madsen, N.; Werf, D. P. van der; Bray, C. C.; Chapman, S.; Fajans, J.; Povilus, A.; Wurtele, J. S.; Cesar, C. L.; Lambo, R.; Silveira, D. M.; Fujiwara, M. C.

    2008-08-08

    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  16. Vascular compression of the duodenum.

    PubMed Central

    Moskovich, R; Cheong-Leen, P

    1986-01-01

    Compression of the third or fourth part of the duodenum by the superior mesenteric artery or one of its branches is the anatomic basis for some cases of duodenal obstruction. Two cases of vascular obstruction of the duodenum after surgical correction of scoliosis are presented. The embryologic and pathoanatomic bases for this condition, and the rationale for treatment, are described. PMID:3761291

  17. SNLL materials testing compression facility

    SciTech Connect

    Kawahara, W.A.; Brandon, S.L.; Korellis, J.S.

    1986-04-01

    This report explains software enhancements and fixture modifications which expand the capabilities of a servo-hydraulic test system to include static computer-controlled "constant true strain rate" compression testing on cylindrical specimens. True strains in excess of -1.0 are accessible. Special software features include schemes to correct for system compliance and the ability to perform strain-rate changes; all software for test control and data acquisition/reduction is documented.

  18. Compressed air energy storage system

    DOEpatents

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  19. Compressed air energy storage system

    DOEpatents

    Ahrens, F.W.; Kartsounes, G.T.

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods, utilizing excess power from a power grid to charge air into an air storage reservoir, and as an expander during peak demand periods to feed power into the power grid, utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure, and a low pressure turbine and compressor are also employed for air compression and power generation.

  20. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2003-01-01

    Various artificial compressibility methods for calculating three-dimensional, steady and unsteady, laminar and turbulent, incompressible Navier-Stokes equations are compared in this work. Each method is described in detail along with appropriate physical and numerical boundary conditions. Analysis of well-posedness and numerical solutions to test problems for each method are provided. A comparison based on convergence behavior, accuracy, stability and robustness is used to establish the relative positive and negative characteristics of each method.

  1. Compressibility Effects in Aeronautical Engineering

    NASA Technical Reports Server (NTRS)

    Stack, John

    1941-01-01

    Compressible-flow research, while a relatively new field in aeronautics, is very old, dating back almost to the development of the first firearm. Over the last hundred years, research has been conducted in the ballistics field, but the results have been of practically no use in aeronautical engineering because the phenomena studied have been the more or less steady supersonic condition of flow. Some work done in connection with steam turbines, particularly nozzle studies, has been of value. In general, however, understanding of compressible-flow phenomena has been very incomplete and has permitted no real basis for the solution of aeronautical engineering problems in which the flow is likely to be unsteady because regions of both subsonic and supersonic speeds may occur. In the early phases of the development of the airplane, speeds were so low that the effects of compressibility could be justifiably ignored. During the last war and immediately after, however, propellers exhibited losses in efficiency as the tip speeds approached the speed of sound, and the first experiments of an aeronautical nature were therefore conducted with propellers. Results of these experiments indicated serious losses of efficiency, but aeronautical engineers were not seriously concerned at the time because it was generally possible to design propellers with quite low tip speeds. With the development of new engines having increased power and rotational speeds, however, the problem became of increasing importance.

  2. Modeling of compressible cake filtration

    SciTech Connect

    Abbound, N.M. (Dept. of Civil Engineering); Corapcioglu, M.Y. (Dept. of Civil Engineering)

    1993-10-15

    The transport of suspended solid particles in a liquid through porous media has importance from the viewpoint of engineering practice and industrial applications. Deposition of solid particles on a filter cloth or on a pervious porous medium forms the filter cakes. Following a literature survey, a governing equation for the cake thickness is obtained by considering an instantaneous material balance. In addition to the conservation of mass equations for the liquid, and for suspended and captured solid particles, functional relations among porosity, permeability, and pressure are obtained from literature and solved simultaneously. Later, numerical solutions for cake porosity, pore pressure, cake permeability, velocity of solid particles, concentration of suspended solid particles, and net rate of deposition are obtained. At each instant of time, the porosity decreases throughout the cake from the surface to the filter septum where it has the smallest value. As the cake thickness increases, the trends in pressure variation are similar to data obtained by other researchers. This comparison shows the validity of the theory and the associated solution presented. A sensitivity analysis shows higher pressure values at the filter septum for a less pervious membrane. Finally, a reduction in compressibility parameter provides a thicker cake, causes more particles to be captured inside the cake, and reduces the volumetric filtrate rate. The increase of solid velocity with the reduction in compressibility parameter shows that more rigid cakes compress less.

  3. Which variability?

    PubMed

    Toraldo, Alessio; Luzzatti, Claudio

    2006-02-01

    Drai and Grodzinsky provide a valuable analysis that offers a way of disentangling the effects of Movement and Mood in agrammatic comprehension. However, their mathematical implementation (Beta model) hides theoretically relevant information, i.e., qualitative heterogeneities of performance within the patient sample. This heterogeneity is crucial in the variability debate.

  4. COMPRESSION WAVES AND PHASE PLOTS: SIMULATIONS

    SciTech Connect

    Orlikowski, D; Minich, R

    2011-08-01

    Compression wave analysis started nearly 50 years ago with Fowles. Coperthwaite and Williams gave a method that helps identify simple and steady waves. We have been developing a method that describes the non-isentropic character of compression waves in general. One result of that work is a simple analysis tool. Our method helps clearly identify when a compression wave is a simple wave, when it is a steady wave (shock), and when it is in transition. This affects the analysis of compression wave experiments and the resulting extraction of the high-pressure equation of state.

  5. Video compressive sensing using Gaussian mixture models.

    PubMed

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2014-11-01

    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.

  6. The Compression Pathway of Quartz

    NASA Astrophysics Data System (ADS)

    Dera, P. K.; Thompson, R. M.; Downs, R. T.

    2011-12-01

    The important Earth material quartz may constitute as much as 20% of the upper continental crust. Quartz is composed solely of corner-sharing SiO4 silica tetrahedra, a primary building block of many of the Earth's crustal and mantle minerals, lunar and Martian minerals, and meteoritic minerals. Quartz is therefore an outstanding model material for investigating the response of this fundamental structural unit to changes in P, T, and x. These facts have spawned a vast literature of experimental and theoretical studies of quartz at ambient and non-ambient conditions. Investigations into the behavior of quartz at high pressure have revealed an anomalous distortion in the silicate tetrahedron with pressure not typically seen in other silicates. The tetrahedron assumes a very distinct geometry, becoming more like the Sommerville tetrahedron of O'Keeffe and Hyde (1996) as pressure increases. Traditionally, this distortion has been considered a compression mechanism for quartz, along with Si-O-Si angle-bending and a very small component of bond compression. However, tetrahedral volume decreases by only 1% between 0.59 GPa and 20.25 GPa, while unit cell volume decreases by 21%. Therefore, most of the compression in quartz is happening in tetrahedral voids, not in the silicate tetrahedron, and the distortion of the silicate tetrahedron may not be the direct consequence of decreasing volume in response to increasing pressure. The structure of quartz at high temperature and high pressure, including new structural refinements from synchrotron single-crystal data collected to 20.25 GPa, is compared to the following three hypothetical quartz crystals: (1) Ideal quartz with perfectly regular tetrahedra and the same volume and Si-O-Si angle as its observed equivalent. (2) Model quartz with the same Si-O-Si angle and cell parameters as its observed equivalent, derived from ideal by altering the axial ratio. (3) BCC quartz with a perfectly body-centered cubic arrangement of oxygen anions and

  7. Weighted compression of spectral color information.

    PubMed

    Laamanen, Hannu; Jetsu, Tuija; Jaaskelainen, Timo; Parkkinen, Jussi

    2008-06-01

    Spectral color information is used nowadays in many different applications. Accurate spectral images are usually very large files, but a proper compression method can reduce the required storage space considerably with minimal loss of information. In this paper we introduce a principal component analysis (PCA)-based compression method for spectral color information. In this approach spectral data are weighted with a proper weight function before forming the correlation matrix and calculating the eigenvector basis. First we give a general framework for how to use weight functions in compression of relevant color information. Then we compare the weighted compression method with the traditional PCA compression method by compressing and reconstructing the Munsell data set, consisting of 1,269 reflectance spectra, and the Pantone data set, consisting of 922 reflectance spectra. Two different weight functions are proposed and tested. We show that weighting clearly improves retention of color information in the PCA-based compression process. PMID:18516149
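
The weight-then-project idea can be sketched as follows. This is an illustrative reconstruction, not the authors' exact procedure: the Gaussian weight function, the toy spectra, and all function names are assumptions made for the example.

```python
import numpy as np

def weighted_pca_compress(spectra, weights, n_components):
    """Weighted-PCA compression of reflectance spectra.

    spectra: (n_samples, n_bands); weights: (n_bands,) weight function.
    The weights are applied before the basis is computed, so the retained
    components favor the heavily weighted wavelengths.
    """
    W = np.sqrt(weights)
    X = spectra * W                              # weight each wavelength band
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                    # top principal components
    coeffs = (X - mean) @ basis.T                # compressed representation
    return coeffs, basis, mean, W

def reconstruct(coeffs, basis, mean, W):
    return (coeffs @ basis + mean) / W           # undo the weighting

# Toy spectra over 31 bands (400-700 nm), weighted toward the mid-visible
rng = np.random.default_rng(0)
bands = np.linspace(400.0, 700.0, 31)
spectra = 0.5 + 0.4 * np.sin(np.outer(rng.uniform(1.0, 3.0, 100), bands / 200.0))
weights = np.exp(-(((bands - 550.0) / 100.0) ** 2))  # hypothetical weight fn
coeffs, basis, mean, W = weighted_pca_compress(spectra, weights, 5)
recon = reconstruct(coeffs, basis, mean, W)
```

Each 31-band spectrum is stored as only 5 coefficients; reconstruction error concentrates in the lightly weighted band edges, which is the intended trade-off.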

  8. Compression research on the REINAS Project

    NASA Technical Reports Server (NTRS)

    Rosen, Eric; Macy, William; Montague, Bruce R.; Pi-Sunyer, Carles; Spring, Jim; Kulp, David; Long, Dean; Langdon, Glen, Jr.; Pang, Alex; Wittenbrink, Craig M.

    1995-01-01

    We present approaches to integrating data compression technology into a database system designed to support research of air, sea, and land phenomena of interest to meteorology, oceanography, and earth science. A key element of the Real-Time Environmental Information Network and Analysis System (REINAS) system is the real-time component: to provide data as soon as acquired. Compression approaches being considered for REINAS include compression of raw data on the way into the database, compression of data produced by scientific visualization on the way out of the database, compression of modeling results, and compression of database query results. These compression needs are being incorporated through client-server, API, utility, and application code development.

  9. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent-noise (SDN) sources, such as film-grain and speckle noise, on image compression, using JPEG (Joint Photographic Experts Group) standard image compression. For the improvement of compression ratios noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach an estimator designed specifically for the SDN model is used. In an alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.
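
The "transform SDN into SIN" approach can be illustrated with the Anscombe transform, a standard variance stabilizer for Poisson-like signal-dependent noise; the paper's film-grain and speckle models may use different transforms, so this is a generic sketch only.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson (signal-dependent) noise
    becomes approximately unit-variance, signal-independent noise."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(z):
    return (z / 2.0) ** 2 - 3.0 / 8.0            # simple algebraic inverse

rng = np.random.default_rng(1)
# Two intensity levels: raw Poisson noise std scales like sqrt(mean)...
lo = rng.poisson(25.0, size=200_000).astype(float)
hi = rng.poisson(400.0, size=200_000).astype(float)
raw_stds = (lo.std(), hi.std())                  # roughly 5 and 20
# ...but after stabilization both are ~1, so an estimator designed for
# signal-independent noise can be applied before compression:
stab_stds = (anscombe(lo).std(), anscombe(hi).std())
```

After denoising in the stabilized domain, `inverse_anscombe` maps the estimate back to the intensity domain before the compressor is applied.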

  10. Variable-Pressure Washer

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III; Estrada, Hector

    2005-01-01

    The variable-pressure washer (VPW) is a proposed device that is so named because (1) it would play a role similar to that played by an ordinary washer, except that (2) the clamping pressure applied by it would vary with either circumferential or radial position. In a typical contemplated application, the radially varying clamping pressure would be used to obtain more nearly uniform compression on a pair of concentric seals (e.g., an O-ring or a gasket) in an assembly that experiences larger deformations normal to the sealing surface for locations around the outer diameter of the attachment flange when compared to locations around the inner diameter. The VPW (see figure) would include two interlocking channel rings pushed axially away from each other by compression spring-like components located at two or more radial positions. Each spring would have a different stiffness based on the radial location. Overlapping splits in each interlocking channel ring would allow for the non-uniform deformation in the rings. Each spring would be held in place by retaining cups attached to the inner flat surfaces of the channel rings. A plunger attached to one channel ring on the central axis would be captured in a plunger housing attached to the other channel ring; the capture of the plunger would hold the VPW together. When the VPW was clamped between two flat surfaces, the clamping force would be distributed unevenly across the face of the washer in the radial direction. The different stiffnesses of the springs would be chosen, in conjunction with other design parameters, to obtain a specified radial variation of clamping pressure in the presence of a specified clamping force.

  11. Towards a geometrical interpretation of quantum-information compression

    NASA Astrophysics Data System (ADS)

    Mitchison, Graeme; Jozsa, Richard

    2004-03-01

    Let S be the von Neumann entropy of a finite ensemble E of pure quantum states. We show that S may be naturally viewed as a function of a set of geometrical volumes in Hilbert space defined by the states and that S is monotonically increasing in each of these variables. Since S is the Schumacher compression limit of E, this monotonicity property suggests a geometrical interpretation of the quantum redundancy involved in the compression process. It provides clarification of previous work in which it was shown that S may be increased while increasing the overlap of each pair of states in the ensemble. As a by-product, our mathematical techniques also provide an interpretation of the subentropy of E.
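
The entropy S that sets the Schumacher limit can be computed directly from the ensemble's density matrix; a minimal sketch, where the two-state qubit ensemble is an invented example rather than one from the paper:

```python
import numpy as np

def von_neumann_entropy(states, probs):
    """S(rho) = -Tr(rho log2 rho) for an ensemble of pure states."""
    d = len(states[0])
    rho = np.zeros((d, d), dtype=complex)
    for p, psi in zip(probs, states):
        rho += p * np.outer(psi, np.conj(psi))   # ensemble density matrix
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                 # drop numerical zeros
    return float(-(evals * np.log2(evals)).sum())

# Two equiprobable, non-orthogonal qubit states: S is well below 1 bit,
# so Schumacher compression needs fewer than one qubit per signal state.
psi0 = np.array([1.0, 0.0])
psi1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
S = von_neumann_entropy([psi0, psi1], [0.5, 0.5])
```

Increasing the overlap of the two states shrinks S toward 0; making them orthogonal pushes S up to exactly 1 bit, which matches the monotonicity discussion above.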

  12. Tsunami speed variations in density-stratified compressible global oceans

    NASA Astrophysics Data System (ADS)

    Watada, Shingo

    2013-08-01

    Tsunami speed variations in the deep ocean caused by seawater density stratification are investigated using a newly developed propagator matrix method that is applicable to seawater with depth-variable sound speeds and density gradients. For a 4 km deep ocean, the total tsunami speed reduction is 0.44% compared with incompressible homogeneous seawater; two thirds of the reduction is due to elastic energy stored in the water and one third is due to water density stratification mainly by hydrostatic compression. Tsunami speeds are computed for global ocean density and sound speed profiles, and characteristic structures are discussed. Tsunami speed reductions are proportional to ocean depth with small variations, except in warm Mediterranean seas. The impacts of seawater compressibility and the elasticity effect of the solid earth on tsunami traveltime should be included for precise modeling of transoceanic tsunamis.
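
A back-of-envelope check of why a 0.44% speed change matters, using the classic long-wave speed sqrt(g*h) and the 4 km depth quoted above; the per-1000-km delay is derived here, not a number from the abstract.

```python
import math

g, h = 9.81, 4000.0                    # gravity (m/s^2), ocean depth (m)
c0 = math.sqrt(g * h)                  # long-wave speed, incompressible ocean
c = c0 * (1.0 - 0.0044)                # 0.44% total reduction quoted for 4 km
# Extra traveltime accumulated per 1000 km of propagation (seconds):
delay_per_1000km = 1e6 / c - 1e6 / c0
```

Roughly 20 seconds of delay per 1000 km accumulates to minutes over a trans-Pacific path, which is why the correction is significant for traveltime modeling.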

  13. An automotive suspension strut using compressible magnetorheological fluids

    NASA Astrophysics Data System (ADS)

    Hong, Sung-Ryong; Wang, Gang; Hu, Wei; Wereley, Norman M.; Niemczuk, Jack

    2005-05-01

    An automotive suspension strut is proposed that utilizes compressible magnetorheological (CMR) fluid. A CMR strut consists of a double-ended rod in a hydraulic cylinder and a bypass comprising tubing and an MR valve. The diameters on the two sides of the piston rod are set to be different in order to develop a spring force by compressing the MR fluid hydrostatically. The MR bypass valve is adopted to develop a controllable damping force. A hydro-mechanical model of the CMR strut is derived, and the spring force due to fluid compressibility and the pressure drop in the MR bypass valve are analytically investigated on the basis of the model. Finally, a CMR strut, filled with silicone-oil-based MR fluid, is fabricated and tested. The spring force and variable damping force of the CMR strut are clearly observed in the measured data and compare favorably with the analytical model.

  14. Reading with fixed and variable character pitch.

    PubMed

    Arditi, A; Knoblauch, K; Grunwald, I

    1990-10-01

    We compared the effects of fixed and variable (proportional) spacing on reading speeds and found variable pitch to yield better performance at medium and large character sizes and fixed pitch to be superior for character sizes approaching the acuity limit. The data indicate at least two crowding effects at the smallest sizes: one that interferes with individual character identification and one that interferes with word identification. A control experiment using rapid serial visual presentation suggests that it is the greater horizontal compression and consequently reduced eye-movement requirements of variable pitch that are responsible for its superiority at medium and large character sizes.

  15. Modeling Compressibility Effects in High-Speed Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Sarkar, S.

    2004-01-01

    Man has strived to make objects fly faster, first from subsonic to supersonic and then to hypersonic speeds. Spacecraft and high-speed missiles routinely fly at hypersonic Mach numbers, M greater than 5. In defense applications, aircraft reach hypersonic speeds at high altitude and so may civilian aircraft in the future. Hypersonic flight, while presenting opportunities, has formidable challenges that have spurred vigorous research and development, mainly by NASA and the Air Force in the USA. Although NASP, the premier hypersonic concept of the eighties and early nineties, did not lead to flight demonstration, much basic research and technology development was possible. There is renewed interest in supersonic and hypersonic flight with the HyTech program of the Air Force and the Hyper-X program at NASA being examples of current thrusts in the field. At high-subsonic to supersonic speeds, fluid compressibility becomes increasingly important in the turbulent boundary layers and shear layers associated with the flow around aerospace vehicles. Changes in thermodynamic variables: density, temperature and pressure, interact strongly with the underlying vortical, turbulent flow. The ensuing changes to the flow may be qualitative such as shocks which have no incompressible counterpart, or quantitative such as the reduction of skin friction with Mach number, large heat transfer rates due to viscous heating, and the dramatic reduction of fuel/oxidant mixing at high convective Mach number. The peculiarities of compressible turbulence, so-called compressibility effects, have been reviewed by Fernholz and Finley. Predictions of aerodynamic performance in high-speed applications require accurate computational modeling of these "compressibility effects" on turbulence. During the course of the project we have made fundamental advances in modeling the pressure-strain correlation and developed a code to evaluate alternate turbulence models in the compressible shear layer.

  16. Argon Excluder Foam Compression Data

    SciTech Connect

    Clark, D.; /Fermilab

    1991-07-25

    The argon excluder is designed to reduce the media density of the dead space between the internal modules of the end calorimeters and the concave convex head to less than that of argon. The design of the excluder includes a thin circular stainless steel plate welded to the inner side of the convex pressure vessel head at a radius of 26 and 15/16 inches. It is estimated that this plate will experience a pressure differential of approximately 40 pounds per square inch. An inner foam core is incorporated into the design of the excluder as structural support. This engineering note outlines the compression data for the foam used in the north end calorimeter argon excluder. Four test samples of approximately the same dimensions were cut and machined from large blocks of the poured foam. Two of these test samples were then subjected to varying compression magnitudes until failure. For this test, failure was taken to mean plastic yielding or the point at which deformation increases without a corresponding increase in loading. The third sample was subjected to a constant compressive stress for an extended period of time, to identify any 'creeping' effects. Finally, the fourth sample was cooled to cryogenic temperatures in order to determine the coefficient of thermal expansion. The compression test apparatus consisted of a state of the art INSTROM coupled with a PC workstation. The tests were run at a constant strain rate with discrete data taken at 500 millisecond intervals. The sample data is plotted as a stress-strain diagram in the results. The first test was run on sample number one at a compression rate of 0.833 mills or equivalently a strain rate of 3.245 x 10{sup -4} mil/mills. The corresponding stress was then calculated as the measured force divided by the given initial area. The test was run for thirty minutes until the mode of failure, plastic yielding, was reached. The second test was run as a check of the first using sample number two, and likewise was

  17. Compressive sensing as a paradigm for building physics models

    NASA Astrophysics Data System (ADS)

    Nelson, Lance J.; Hart, Gus L. W.; Zhou, Fei; Ozoliņš, Vidvuds

    2013-01-01

    The widely accepted intuition that the important properties of solids are determined by a few key variables underpins many methods in physics. Though this reductionist paradigm is applicable in many physical problems, its utility can be limited because the intuition for identifying the key variables often does not exist or is difficult to develop. Machine learning algorithms (genetic programming, neural networks, Bayesian methods, etc.) attempt to eliminate the a priori need for such intuition but often do so with increased computational burden and human time. A recently developed technique in the field of signal processing, compressive sensing (CS), provides a simple, general, and efficient way of finding the key descriptive variables. CS is a powerful paradigm for model building; we show that its models are more physical and predict more accurately than current state-of-the-art approaches and can be constructed at a fraction of the computational cost and user effort.
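
The underdetermined sparse-recovery setting described above can be illustrated with iterative soft-thresholding (ISTA), a simple stand-in for whatever CS solver the paper uses; the toy "descriptor" problem, dimensions, and parameters are all assumptions made for the example.

```python
import numpy as np

def ista(A, y, lam, steps):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L            # gradient step on data misfit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Underdetermined "model building": 40 measurements, 100 candidate
# descriptors, only 3 truly active -- l1 recovery finds which ones.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 81]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.005, steps=2000)
support = np.flatnonzero(np.abs(x_hat) > 0.5)
```

The key point of the abstract is exactly this: although there are fewer equations than unknowns, sparsity lets the solver identify the few variables that matter, without a priori intuition about which ones they are.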

  18. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We have studied the effect of value-of-interest transformation and found significant masking of artifacts when window is widened. PMID:25804842

  19. Algorithmic height compression of unordered trees.

    PubMed

    Ben-Naoum, Farah; Godin, Christophe

    2016-01-21

    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus partially limiting the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic as they are strictly nested within each other. We thus introduced a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then proposed an algorithm to detect these patterns and to merge them, thus leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures. PMID:26551155
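
The isomorphic-subtree merge that produces the DAG (the paper's starting point, before the height compression it introduces) can be sketched via hash-consing; the plant-like example tree and function names are invented for the illustration.

```python
def compress_to_dag(tree):
    """Merge isomorphic subtrees of an unordered tree into a DAG.

    tree is (label, [children...]).  Returns (root_id, nodes), where
    nodes maps id -> (label, sorted child-id tuple); a shared id means
    the corresponding subtree is stored once and referenced many times.
    """
    interned = {}   # canonical subtree key -> node id
    nodes = {}      # node id -> (label, child-id tuple)

    def visit(label, children):
        # Sorting child ids makes the key order-independent (unordered tree)
        key = (label, tuple(sorted(visit(*c) for c in children)))
        if key not in interned:
            interned[key] = len(interned)
            nodes[interned[key]] = key
        return interned[key]

    return visit(*tree), nodes

# A plant-like tree with repeated sub-branches: 11 tree nodes collapse
# into 4 distinct subtree classes (leaf, twig, branch, root).  The two
# branches match only up to reordering of their children.
leaf = ('L', [])
twig = ('B', [leaf, leaf])
tree = ('R', [('B', [twig, leaf]), ('B', [leaf, twig])])
root, nodes = compress_to_dag(tree)
```

Note the DAG's height still equals the tree's height; collapsing nearly-but-not-exactly isomorphic nested patterns in the vertical direction is precisely the extension the paper proposes.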

  20. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they consume more storage space and more bandwidth when transferred over the Internet, so the study of image compression technology is necessary. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This paper presents an analysis of the DCT. First, the principle of the DCT is explained, as this widely used transform underlies much practical image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the DCT-based image compression process and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab, and the quality of the compressed images is analyzed. The DCT is not the only algorithm for image compression, and further algorithms can be expected to yield compressed images of even higher quality; image compression technology will be widely used in networks and communications in the future.
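
The transform-discard-invert core of DCT compression can be sketched as follows (in Python rather than the paper's Matlab). This is a minimal illustration that omits JPEG's quantization and Huffman entropy-coding stages; the 8x8 test block and function names are invented for the example.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ x is the 1-D DCT of x."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def compress_block(block, keep):
    """2-D DCT a square block, keep only the `keep` largest coefficients,
    then invert -- the core transform-coding step of DCT compression."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                     # separable 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0        # discard small coefficients
    return C.T @ coeffs @ C                      # inverse 2-D DCT

# A smooth 8x8 block built from three DCT modes: keeping just three
# coefficients reproduces it almost exactly (energy compaction).
C8 = dct_matrix(8)
block = (100.0 * np.outer(C8[0], C8[0])
         + 30.0 * np.outer(C8[1], C8[0])
         + 10.0 * np.outer(C8[0], C8[2]))
approx = compress_block(block, keep=3)
err = float(np.abs(approx - block).max())
```

Real images are not exact DCT modes, so discarding coefficients is lossy; the DCT is used precisely because natural-image energy concentrates in few low-frequency coefficients, keeping that loss small.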

  2. Krylov methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Tidriri, M. D.

    1995-01-01

    We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.

  3. Vapor Compression Distillation Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hutchens, Cindy F.

    2002-01-01

    One of the major requirements associated with operating the International Space Station is the transportation -- space shuttle and Russian Progress spacecraft launches -- necessary to re-supply station crews with food and water. The Vapor Compression Distillation (VCD) Flight Experiment, managed by NASA's Marshall Space Flight Center in Huntsville, Ala., is a full-scale demonstration of technology being developed to recycle crewmember urine and wastewater aboard the International Space Station and thereby reduce the amount of water that must be re-supplied. Based on results of the VCD Flight Experiment, an operational urine processor will be installed in Node 3 of the space station in 2005.

  4. Efficient compression of quantum information

    SciTech Connect

    Plesch, Martin; Buzek, Vladimir

    2010-03-15

    We propose a scheme for an exact efficient transformation of a tensor product state of many identically prepared qubits into a state of a logarithmically small number of qubits. Using a quadratic number of elementary quantum gates we transform N identically prepared qubits into a state, which is nontrivial only on the first [log{sub 2}(N+1)] qubits. This procedure might be useful for quantum memories, as only a small portion of the original qubits has to be stored. Another possible application is in communicating a direction encoded in a set of quantum states, as the compressed state provides a high-effective method for such an encoding.
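
The qubit count follows from dimension counting: N identically prepared qubits occupy only the (N+1)-dimensional symmetric subspace, so ceil(log2(N+1)) qubits suffice to hold the state. A tiny sketch (the function name is an assumption):

```python
import math

def qubits_needed(n):
    """N identically prepared qubits span only the (N+1)-dimensional
    symmetric subspace, so ceil(log2(N+1)) qubits can store the state."""
    return math.ceil(math.log2(n + 1))

# The compression is exponential in the input size:
sizes = {n: qubits_needed(n) for n in (3, 15, 1000)}
```

For example, 1000 identically prepared qubits fit into 10, which is why only a logarithmically small portion of the original register needs to be kept in a quantum memory.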

  5. Overset grids in compressible flow

    NASA Technical Reports Server (NTRS)

    Eberhardt, S.; Baganoff, D.

    1985-01-01

    Numerical experiments have been performed to investigate the importance of boundary data handling with overset grids in computational fluid dynamics. Experience in using embedded grid techniques in compressible flow has shown that shock waves which cross grid boundaries become ill defined and convergence is generally degraded. Numerical boundary schemes were studied to investigate the cause of these problems and a viable solution was generated using the method of characteristics to define a boundary scheme. The model test problem investigated consisted of a detached shock wave on a 2-dimensional Mach 2 blunt, cylindrical body.

  6. Variable Valve Actuation

    SciTech Connect

    Jeffrey Gutterman; A. J. Lasley

    2008-08-31

    Many approaches exist to enable advanced mode, low temperature combustion systems for diesel engines - such as premixed charge compression ignition (PCCI), Homogeneous Charge Compression Ignition (HCCI) or other HCCI-like combustion modes. The fuel properties and the quantity, distribution and temperature profile of air, fuel and residual fraction in the cylinder can have a marked effect on the heat release rate and combustion phasing. Figure 1 shows that a systems approach is required for HCCI-like combustion. While the exact requirements remain unclear (and will vary depending on fuel, engine size and application), some form of substantially variable valve actuation is a likely element in such a system. Variable valve actuation, for both intake and exhaust valve events, is a potent tool for controlling the parameters that are critical to HCCI-like combustion and expanding its operational range. Additionally, VVA can be used to optimize the combustion process as well as exhaust temperatures and impact the after treatment system requirements and its associated cost. Delphi Corporation has major manufacturing and product development and applied R&D expertise in the valve train area. Historical R&D experience includes the development of fully variable electro-hydraulic valve train on research engines as well as several generations of mechanical VVA for gasoline systems. This experience has enabled us to evaluate various implementations and determine the strengths and weaknesses of each. While a fully variable electro-hydraulic valve train system might be the 'ideal' solution technically for maximum flexibility in the timing and control of the valve events, its complexity, associated costs, and high power consumption make its implementation on low cost high volume applications unlikely. Conversely, a simple mechanical system might be a low cost solution but not deliver the flexibility required for HCCI operation. After modeling more than 200 variations of the

  7. Large eddy simulations of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1995-01-01

    An evaluation of existing models for Large Eddy Simulations (LES) of incompressible turbulent flows has been completed. LES is a computation in which the contribution of the large, energy-carrying structures to momentum and energy transfer is computed exactly, and only the effect of the smallest scales of turbulence is modeled. That is, the large eddies are computed and the smaller eddies are modeled. The dynamics of the largest eddies are believed to account for most of sound generation and transport properties in a turbulent flow. LES analysis is based on the observation that pressure, velocity, temperature, and other variables are the sum of their large-scale and small-scale parts. For instance, u(i) (velocity) can be written as the sum of bar-u(i) and u(i)-prime, where bar-u(i) is the large-scale and u(i)-prime is the subgrid-scale (SGS) part. The governing equations for large eddies in compressible flows are obtained after filtering the continuity, momentum, and energy equations, and recasting in terms of Favre averages. The filtering operation maintains only large scales. The effects of the small scales are present in the governing equations through the SGS stress tensor tau(ij) and SGS heat flux q(i). The mathematical formulation of the Favre-averaged equations of motion for LES is complete.
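
The Favre (density-weighted) decomposition the abstract refers to can be written out explicitly; this is the standard textbook form (with sigma_ij denoting the resolved viscous stress), not an equation reproduced from the report itself:

```latex
% Favre average and decomposition:
\tilde{f} = \frac{\overline{\rho f}}{\bar{\rho}},
\qquad f = \tilde{f} + f''
% Filtered momentum equation; the SGS stress tau_ij collects the
% unresolved small-scale transport that must be modeled:
\frac{\partial \bar{\rho}\,\tilde{u}_i}{\partial t}
+ \frac{\partial}{\partial x_j}\bigl(\bar{\rho}\,\tilde{u}_i\tilde{u}_j\bigr)
= -\frac{\partial \bar{p}}{\partial x_i}
+ \frac{\partial \bar{\sigma}_{ij}}{\partial x_j}
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \bar{\rho}\bigl(\widetilde{u_i u_j} - \tilde{u}_i\tilde{u}_j\bigr)
```

Using Favre averages keeps the filtered compressible equations in the same form as their unfiltered counterparts, with all unclosed small-scale effects isolated in tau_ij and the SGS heat flux q_i.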

  8. Algebraic Flux Correction II. Compressible Euler Equations

    NASA Astrophysics Data System (ADS)

    Kuzmin, Dmitri; Möller, Matthias

    Algebraic flux correction schemes of TVD and FCT type are extended to systems of hyperbolic conservation laws. The group finite element formulation is employed for the treatment of the compressible Euler equations. An efficient algorithm is proposed for the edge-by-edge matrix assembly. A generalization of Roe's approximate Riemann solver is derived by rendering all off-diagonal matrix blocks positive semi-definite. Another usable low-order method is constructed by adding scalar artificial viscosity proportional to the spectral radius of the cumulative Roe matrix. The limiting of antidiffusive fluxes is performed using a transformation to the characteristic variables or a suitable synchronization of correction factors for the conservative ones. The outer defect correction loop is equipped with a block-diagonal preconditioner so as to decouple the discretized Euler equations and solve them in a segregated fashion. As an alternative, a strongly coupled solution strategy (global BiCGSTAB method with a block-Gauß-Seidel preconditioner) is introduced for applications which call for the use of large time steps. Various algorithmic aspects including the implementation of characteristic boundary conditions are addressed. Simulation results are presented for inviscid flows in a wide range of Mach numbers.

  9. Inverse lithography source optimization via compressive sensing.

    PubMed

    Song, Zhiyang; Ma, Xu; Gao, Jie; Wang, Jie; Li, Yanqiu; Arce, Gonzalo R

    2014-06-16

    Source optimization (SO) has emerged as a key technique for improving lithographic imaging over a range of process variations. Current SO approaches are pixel-based, where the source pattern is designed by solving a quadratic optimization problem using gradient-based algorithms or by solving a linear programming problem. Most of these methods, however, are either computationally intensive or result in a process window (PW) that could be further extended. This paper applies the rich theory of compressive sensing (CS) to develop an efficient and robust SO method. In order to accelerate the SO design, the source optimization is formulated as an underdetermined linear problem, where the number of equations can be much less than the number of source variables. Assuming the source pattern is sparse on a certain basis, the SO problem is transformed into an l1-norm image reconstruction problem based on CS theory. The linearized Bregman algorithm is applied to synthesize the sparse optimal source pattern on a representation basis, which effectively improves the source manufacturability. It is shown that the proposed linear SO formulation is more effective for improving the contrast of the aerial image than the traditional quadratic formulation. The proposed SO method shows that sparse regularization in inverse lithography can indeed extend the PW of lithography systems. A set of simulations and analysis demonstrates the superiority of the proposed SO method over the traditional approaches.
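    The linearized Bregman iteration named above is simple enough to sketch. The following is a minimal illustration, not the authors' implementation: it recovers a sparse vector from an underdetermined system (min ||u||_1 subject to Au = b), with problem sizes, mu, delta, and the iteration count chosen as assumptions for a toy example.

    ```python
    import numpy as np

    def soft_threshold(v, mu):
        """Elementwise shrinkage operator used in l1-norm minimization."""
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def linearized_bregman(A, b, mu=20.0, delta=0.2, iters=20000):
        """Linearized Bregman iteration for the underdetermined sparse
        problem: min ||u||_1 subject to A u = b."""
        v = np.zeros(A.shape[1])
        u = np.zeros(A.shape[1])
        for _ in range(iters):
            v += A.T @ (b - A @ u)             # Bregman update on the dual variable
            u = delta * soft_threshold(v, mu)  # primal update via shrinkage
        return u

    # Toy problem: recover a 3-sparse vector from 30 of 60 linear samples.
    rng = np.random.default_rng(0)
    n, m, k = 60, 30, 3
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = [1.5, -2.0, 1.0]
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    u = linearized_bregman(A, A @ x)
    print(np.linalg.norm(u - x) / np.linalg.norm(x))  # small relative error
    ```

    The step size delta must stay below 2/||A A^T|| for the iteration to converge; larger mu pushes the limit point closer to the basis pursuit solution at the cost of slower convergence.
    
    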

  10. Visually weighted reconstruction of compressive sensing MRI.

    PubMed

    Oh, Heeseok; Lee, Sanghoon

    2014-04-01

    Compressive sensing (CS) enables the reconstruction of a magnetic resonance (MR) image from undersampled data in k-space with relatively low distortion compared to the original image. In addition, CS allows the scan time to be significantly reduced. Along with a reduction in the computational overhead, we investigate an effective way to improve visual quality through the use of a weighted optimization algorithm for reconstruction after variable density random undersampling in the phase encoding direction over k-space. In contrast to conventional magnetic resonance imaging (MRI) reconstruction methods, the visual weight, in particular in the region of interest (ROI), is investigated here for quality improvement. In addition, we employ a wavelet transform to analyze the reconstructed image in the space domain and fully utilize data sparsity over the spatial and frequency domains. The visual weight is constructed by reflecting the perceptual characteristics of the human visual system (HVS) and then applied to ℓ1 norm minimization, which assigns a priority to each coefficient during the reconstruction process. Using objective quality assessment metrics, it was found that an image reconstructed using the visual weight has higher local and global quality than those processed by conventional methods.
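    A variable-density random undersampling pattern of the kind described (denser near the k-space center, applied along the phase-encoding direction) can be sketched as follows. The density law, acceleration factor, and fully sampled central band are illustrative assumptions, not the paper's parameters:

    ```python
    import numpy as np

    def vd_mask(n_lines=256, accel=4, center_frac=0.08, decay=2.0, seed=0):
        """Variable-density random undersampling of phase-encode lines:
        fully sample a small central band of k-space and pick the rest with
        probability decaying polynomially with distance from the center."""
        rng = np.random.default_rng(seed)
        k = np.abs(np.arange(n_lines) - n_lines // 2) / (n_lines // 2)  # 0 at center
        pdf = (1.0 - k) ** decay              # higher density near the center
        center = k <= center_frac
        pdf[center] = 1.0                     # always keep the low frequencies
        # Rescale the free lines so the expected line count matches the target.
        target = n_lines / accel
        free = ~center
        pdf[free] *= (target - center.sum()) / pdf[free].sum()
        mask = rng.random(n_lines) < pdf
        mask[center] = True
        return mask

    mask = vd_mask()
    print(mask.sum())   # roughly n_lines / accel lines kept
    ```

    Multiplying a fully sampled k-space by this mask (line by line) simulates the accelerated acquisition; reconstruction then proceeds by the weighted ℓ1 minimization the abstract describes.
    
    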

  11. Energy Preserved Sampling for Compressed Sensing MRI

    PubMed Central

    Peterson, Bradley S.; Ji, Genlin; Dong, Zhengchao

    2014-01-01

    The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To improve on these, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brains of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; that the improved cost function achieves better reconstruction quality than the conventional cost function; and that ITA is faster than SISTA and competitive with FISTA in terms of computation time. PMID:24971155

  12. Compression creep of filamentary composites

    NASA Technical Reports Server (NTRS)

    Graesser, D. L.; Tuttle, M. E.

    1988-01-01

    Axial and transverse strain fields induced in composite laminates subjected to compressive creep loading were compared for several types of laminate layups. Unidirectional graphite/epoxy as well as multi-directional graphite/epoxy and graphite/PEEK layups were studied. Specimens with and without holes were tested. The specimens were subjected to compressive creep loading for a 10-hour period. In-plane displacements were measured using moire interferometry. A computer-based data reduction scheme was developed which reduces the whole-field displacement fields obtained using moire to whole-field strain contour maps. Only slight viscoelastic response was observed in matrix-dominated laminates, except for one test in which catastrophic specimen failure occurred after a 16-hour period. In this case the specimen response was a complex combination of both viscoelastic and fracture mechanisms. No viscoelastic effects were observed for fiber-dominated laminates over the 10-hour creep time used. The experimental results for specimens with holes were compared with results obtained using a finite-element analysis. The comparison between experiment and theory was generally good. Overall strain distributions were very well predicted. The finite-element analysis typically predicted slightly higher strain values at the edge of the hole, and slightly lower strain values at positions removed from the hole, than were observed experimentally. It is hypothesized that these discrepancies are due to nonlinear material behavior at the hole edge, which was not accounted for in the finite-element analysis.

  13. Hemifacial Spasm and Neurovascular Compression

    PubMed Central

    Lu, Alex Y.; Yeung, Jacky T.; Gerrard, Jason L.; Michaelides, Elias M.; Sekula, Raymond F.; Bulsara, Ketan R.

    2014-01-01

    Hemifacial spasm (HFS) is characterized by involuntary unilateral contractions of the muscles innervated by the ipsilateral facial nerve, usually starting around the eyes before progressing inferiorly to the cheek, mouth, and neck. Its prevalence is 9.8 per 100,000 persons with an average age of onset of 44 years. The accepted pathophysiology of HFS suggests that it is a disease process of the nerve root entry zone of the facial nerve. HFS can be divided into two types: primary and secondary. Primary HFS is triggered by vascular compression whereas secondary HFS comprises all other causes of facial nerve damage. Clinical examination and imaging modalities such as electromyography (EMG) and magnetic resonance imaging (MRI) are useful to differentiate HFS from other facial movement disorders and for intraoperative planning. The standard medical management for HFS is botulinum neurotoxin (BoNT) injections, which provides low-risk but limited symptomatic relief. The only curative treatment for HFS is microvascular decompression (MVD), a surgical intervention that provides lasting symptomatic relief by reducing compression of the facial nerve root. With a low rate of complications such as hearing loss, MVD remains the treatment of choice for HFS patients as intraoperative technique and monitoring continue to improve. PMID:25405219

  14. Fast spectrophotometry with compressive sensing

    NASA Astrophysics Data System (ADS)

    Starling, David; Storer, Ian

    2015-03-01

    Spectrophotometers and spectrometers have numerous applications in the physical sciences and engineering, resulting in a plethora of designs and requirements. A good spectrophotometer balances the need for high photometric precision, high spectral resolution, high durability and low cost. One way to address these design objectives is to take advantage of modern scanning and detection techniques. A common imaging method that has improved signal acquisition speed and sensitivity in limited signal scenarios is the single pixel camera. Such cameras utilize the sparsity of a signal to sample below the Nyquist rate via a process known as compressive sensing. Here, we show that a single pixel camera using compressive sensing algorithms and a digital micromirror device can replace the common scanning mechanisms found in virtually all spectrophotometers, providing a very low cost solution and improving data acquisition time. We evaluate this single pixel spectrophotometer by studying a variety of samples tested against commercial products. We conclude with an analysis of flame spectra and possible improvements for future designs.
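    The single-pixel measurement-and-recovery idea can be illustrated with a toy sparse spectrum. This sketch uses random ±1 masks and orthogonal matching pursuit as a stand-in reconstruction algorithm; the paper does not specify its solver, and all sizes and line positions here are hypothetical:

    ```python
    import numpy as np

    def omp(Phi, y, max_atoms):
        """Orthogonal matching pursuit: greedily select the pattern column
        most correlated with the residual, then re-fit by least squares."""
        residual = y.astype(float)
        support, x = [], np.zeros(Phi.shape[1])
        coef = np.zeros(0)
        for _ in range(max_atoms):
            if np.linalg.norm(residual) < 1e-12:
                break                          # measurements fully explained
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x[support] = coef
        return x

    # Toy spectrum with three emission lines (hypothetical positions/strengths).
    rng = np.random.default_rng(1)
    n, m = 64, 32                               # spectral bins, DMD patterns
    spectrum = np.zeros(n)
    spectrum[[10, 33, 50]] = [1.0, 0.9, 0.8]
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # mirror masks
    y = Phi @ spectrum                          # single-pixel measurements
    rec = omp(Phi, y, max_atoms=6)
    print(np.linalg.norm(rec - spectrum))       # small: lines recovered
    ```

    Here 32 bucket-detector readings stand in for a 64-point scan, which is the sub-Nyquist sampling advantage the abstract claims; a physical DMD realizes ±1 rows as complementary on/off pattern pairs.
    
    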

  15. Hemifacial spasm and neurovascular compression.

    PubMed

    Lu, Alex Y; Yeung, Jacky T; Gerrard, Jason L; Michaelides, Elias M; Sekula, Raymond F; Bulsara, Ketan R

    2014-01-01

    Hemifacial spasm (HFS) is characterized by involuntary unilateral contractions of the muscles innervated by the ipsilateral facial nerve, usually starting around the eyes before progressing inferiorly to the cheek, mouth, and neck. Its prevalence is 9.8 per 100,000 persons with an average age of onset of 44 years. The accepted pathophysiology of HFS suggests that it is a disease process of the nerve root entry zone of the facial nerve. HFS can be divided into two types: primary and secondary. Primary HFS is triggered by vascular compression whereas secondary HFS comprises all other causes of facial nerve damage. Clinical examination and imaging modalities such as electromyography (EMG) and magnetic resonance imaging (MRI) are useful to differentiate HFS from other facial movement disorders and for intraoperative planning. The standard medical management for HFS is botulinum neurotoxin (BoNT) injections, which provides low-risk but limited symptomatic relief. The only curative treatment for HFS is microvascular decompression (MVD), a surgical intervention that provides lasting symptomatic relief by reducing compression of the facial nerve root. With a low rate of complications such as hearing loss, MVD remains the treatment of choice for HFS patients as intraoperative technique and monitoring continue to improve.

  16. Complexity compression: nurses under fire.

    PubMed

    Krichbaum, Kathleen; Diemert, Carol; Jacox, Lynn; Jones, Ann; Koenig, Patty; Mueller, Christine; Disch, Joanne

    2007-01-01

    It has been documented that up to 40% of the workday of nurses is taken up by meeting the ever-increasing demands of the systems of healthcare delivery in which nurses are employed. These demands include the need for increasing documentation, for learning new and seemingly ever-changing procedures, and for adapting to turnover in management and administration. Attention to these issues also means that 40% of that workday is not available to patients. Believing that these increasing demands are affecting nurses' decisions to remain in nursing or to leave, a group of Minnesota nurses and nurse educators examined the work environments of nurses and the issues related to those environments. The result of this examination was the discovery of a phenomenon affecting all nurses that may be central to the projected shortage of nurses. The phenomenon is complexity compression: what nurses experience when expected to assume additional, unplanned responsibilities while simultaneously conducting their multiple responsibilities in a condensed time frame. The phenomenon was validated by a group of 58 nurses who participated in focus groups that led to the identification of factors influencing the experience of complexity compression. These factors were clustered into six major themes: personal, environmental, practice, systems and technology, administration/management, and autonomy/control. Further validation studies are planned with the population of practicing professional nurses in the state of Minnesota.

  17. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor employs novel use of a high throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable comparable statistics to detection based on uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating the plume detection on compressed hyperspectral images is presented.

  18. Atomic effect algebras with compression bases

    SciTech Connect

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-15

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  19. Research on compressive fusion by multiwavelet transform

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin

    2014-02-01

    A new strategy for image fusion is developed on the basis of block compressed sensing (BCS) and multiwavelet transform (MWT). Since BCS with a structured random matrix requires little memory and enables fast computation, images with large amounts of data can first be compressively sampled into block images for fusion. Secondly, taking full advantage of multiwavelet properties such as symmetry, orthogonality, short support, and a higher number of vanishing moments, the compressive samples of the block images can be better described by the MWT. The compressive measurements are then fused with a linear weighting strategy based on the MWT decomposition. Finally, the fused compressive samples are reconstructed by the smoothed projected Landweber algorithm, with consideration of blocking artifacts. Experimental results show the validity of the proposed method. Moreover, a field test indicates that compressive fusion can give resolution similar to traditional MWT fusion.

  20. Industrial Compressed Air System Energy Efficiency Guidebook.

    SciTech Connect

    United States. Bonneville Power Administration.

    1993-12-01

    Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.
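    The guidebook's consumption-and-savings estimates reduce to simple arithmetic. A hedged sketch, with illustrative numbers that are not taken from the guidebook:

    ```python
    def annual_energy_cost(kw, hours_per_year, dollars_per_kwh, load_factor=1.0):
        """Annual electricity cost of a compressor at a given average load."""
        return kw * load_factor * hours_per_year * dollars_per_kwh

    # Illustrative figures only: a 75 kW compressor running 6000 h/yr
    # at $0.06/kWh and 70% average load.
    base = annual_energy_cost(75, 6000, 0.06, load_factor=0.7)

    # Leaks commonly waste 20-30% of compressed air output; assume a repair
    # program that cuts demand by 10%, which scales the bill down directly.
    savings = base * 0.10
    print(round(base), round(savings))  # 18900 1890
    ```

    The same formula, evaluated before and after a proposed conservation measure, gives the projected savings figure used to justify the measure.
    
    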

  1. Complex synthetic aperture radar data compression

    NASA Astrophysics Data System (ADS)

    Cirillo, Francis R.; Poehler, Paul L.; Schwartz, Debra S.; Rais, Houra

    2002-08-01

    Existing compression algorithms, primarily designed for visible electro-optical (EO) imagery, do not work well for Synthetic Aperture Radar (SAR) data. The best compression ratios achieved to date are less than 10:1 with minimal degradation to the phase data. Previously, phase data were discarded and only magnitude data saved for analysis. Now that the importance of phase has been recognized for Interferometric Synthetic Aperture Radar (IFSAR), Coherent Change Detection (CCD), and polarimetry, requirements exist to preserve, transmit, and archive both components. Bandwidth and storage limitations on existing and future platforms make compression of these data a top priority. This paper presents results obtained using a new compression algorithm designed specifically to compress SAR imagery while preserving both magnitude and phase information at compression ratios of 20:1 and better.

  2. Bringing light into the dark: effects of compression clothing on performance and recovery.

    PubMed

    Born, Dennis-Peter; Sperlich, Billy; Holmberg, Hans-Christer

    2013-01-01

    To assess original research addressing the effect of the application of compression clothing on sport performance and recovery after exercise, a computer-based literature search was performed in July 2011 using the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science. Studies examining the effect of compression clothing on endurance, strength and power, motor control, and physiological, psychological, and biomechanical parameters during or after exercise were included, and means and measures of variability of the outcome measures were recorded to estimate the effect size (Hedges g) and associated 95% confidence intervals for comparisons of experimental (compression) and control trials (noncompression). The characteristics of the compression clothing, participants, and study design were also extracted. The original research from peer-reviewed journals was examined using the Physiotherapy Evidence Database (PEDro) Scale. Results indicated small effect sizes for the application of compression clothing during exercise for short-duration sprints (10-60 m), vertical-jump height, extending time to exhaustion (such as running at VO2max or during incremental tests), and time-trial performance (3-60 min). When compression clothing was applied for recovery purposes after exercise, small to moderate effect sizes were observed in recovery of maximal strength and power, especially vertical-jump exercise; reductions in muscle swelling and perceived muscle pain; blood lactate removal; and increases in body temperature. These results suggest that the application of compression clothing may assist athletic performance and recovery in given situations, with consideration of the magnitude of the effects and their practical relevance.

  3. Compressive Strength, Chloride Permeability, and Freeze-Thaw Resistance of MWNT Concretes under Different Chemical Treatments

    PubMed Central

    Wang, Xingang; Wang, Yao; Xi, Yunping

    2014-01-01

    This study investigated compressive strength, chloride penetration, and freeze-thaw resistance of multiwalled carbon nanotube (MWNT) concrete. More than 100 cylindrical specimens were used to assess test variables during sensitivity observations, including water-cement ratios (0.75, 0.5, and 0.4) and exposure to chemical agents (including gum arabic, propanol, ethanol, sodium polyacrylate, methylcellulose, sodium dodecyl sulfate, and silane). To determine the adequate sonication time for MWNT dispersal in water, the compressive strengths of MWNT concrete cylinders were measured after sonication times ranging from 2 to 24 minutes. The results demonstrated that the addition of MWNT can increase the compressive strength of concrete by up to 108%. However, without chemical treatment, MWNT concretes tend to have poor freeze-thaw resistance. Among the different chemical treatments, MWNT concrete treated with sodium polyacrylate has the best compressive strength, chloride resistance, and freeze-thaw durability. PMID:25140336

  4. Compressive strength, chloride permeability, and freeze-thaw resistance of MWNT concretes under different chemical treatments.

    PubMed

    Wang, Xingang; Rhee, Inkyu; Wang, Yao; Xi, Yunping

    2014-01-01

    This study investigated compressive strength, chloride penetration, and freeze-thaw resistance of multiwalled carbon nanotube (MWNT) concrete. More than 100 cylindrical specimens were used to assess test variables during sensitivity observations, including water-cement ratios (0.75, 0.5, and 0.4) and exposure to chemical agents (including gum arabic, propanol, ethanol, sodium polyacrylate, methylcellulose, sodium dodecyl sulfate, and silane). To determine the adequate sonication time for MWNT dispersal in water, the compressive strengths of MWNT concrete cylinders were measured after sonication times ranging from 2 to 24 minutes. The results demonstrated that the addition of MWNT can increase the compressive strength of concrete by up to 108%. However, without chemical treatment, MWNT concretes tend to have poor freeze-thaw resistance. Among the different chemical treatments, MWNT concrete treated with sodium polyacrylate has the best compressive strength, chloride resistance, and freeze-thaw durability.

  5. Compression Techniques for Improved Algorithm Computational Performance

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Howell, Patricia A.; Winfree, William P.

    2005-01-01

    Analysis of thermal data requires the processing of large amounts of temporal image data. Processing the data for quantitative information can be time intensive, especially out in the field where large areas are inspected, resulting in numerous data sets. By applying a temporal compression technique, improved algorithm performance can be obtained. In this study, analysis techniques are applied to compressed and non-compressed thermal data. A comparison is made based on computational speed and defect signal-to-noise ratio.
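    The abstract does not specify which temporal compression technique was used; one common choice for temporal image stacks is truncated SVD (principal components over time), sketched here on synthetic data as an assumption, not the authors' method:

    ```python
    import numpy as np

    def temporal_compress(frames, rank):
        """Compress a (time, pixels) thermal sequence by truncated SVD:
        keep only the leading temporal components."""
        U, s, Vt = np.linalg.svd(frames, full_matrices=False)
        return U[:, :rank], s[:rank], Vt[:rank]   # compact factors

    def temporal_reconstruct(U, s, Vt):
        return (U * s) @ Vt

    # Synthetic sequence: a smooth cooling decay plus a weak 'defect' signature.
    t = np.linspace(0.1, 5.0, 200)[:, None]       # 200 frames
    pix = np.arange(1024)[None, :]                # 32x32 image, flattened
    frames = 1.0 / np.sqrt(t) + 0.05 * np.exp(-t) * np.sin(pix / 40.0)
    U, s, Vt = temporal_compress(frames, rank=2)
    rec = temporal_reconstruct(U, s, Vt)
    err = np.abs(rec - frames).max()
    print(err)   # tiny: the synthetic data are exactly rank 2
    ```

    Here the 200 x 1024 stack is replaced by two rank-2 factors (about 2 x (200 + 1024) numbers), and any downstream defect-detection algorithm can operate on the small temporal factors instead of the full sequence.
    
    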

  6. Mammography compression force in New Zealand.

    PubMed

    Poletti, J L

    1994-06-01

    Maximum compression forces have been measured in New Zealand on 37 mammography machines, using a simple hydraulic device. The median measured maximum force was 145 N, and the range 58 to 230 N. Much greater attention needs to be paid to the setting of maximum force for compression devices by service personnel, and compression devices must be included in the quality assurance programme. Where the machine indicates the applied force, the accuracy of the indicated force is poor for some machines. PMID:8074619

  7. Compressed data for the movie industry

    NASA Astrophysics Data System (ADS)

    Tice, Bradley S.

    2013-12-01

    The paper will present a compression algorithm that will allow for both random and non-random sequential binary strings of data to be compressed for storage and transmission of media information. The compression system has direct applications to the storage and transmission of digital media such as movies, television, audio signals and other visual and auditory signals needed for engineering practicalities in such industries.

  8. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
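    Subband coding splits a signal into coarse and detail bands, which is also what enables the progressive transmission described above: send the low band first, refinements later. Below is a minimal one-level Haar filter bank as a generic illustration, not the paper's filters:

    ```python
    import numpy as np

    def haar_analysis(x):
        """Split a signal into low/high subbands with the orthonormal Haar pair."""
        x = np.asarray(x, dtype=float)
        s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # lowpass: coarse waveform
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # highpass: detail/refinement
        return s, d

    def haar_synthesis(s, d):
        """Perfectly reconstruct the signal from its two subbands."""
        x = np.empty(2 * len(s))
        x[0::2] = (s + d) / np.sqrt(2.0)
        x[1::2] = (s - d) / np.sqrt(2.0)
        return x

    x = np.sin(np.linspace(0, 6.28, 64)) \
        + 0.1 * np.random.default_rng(0).standard_normal(64)
    s, d = haar_analysis(x)
    print(np.allclose(haar_synthesis(s, d), x))  # True: perfect reconstruction
    ```

    In a progressive scheme, a seismologist would first receive the quantized low band `s` (a half-length coarse waveform) and request the detail band `d` only where refinement is needed.
    
    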

  9. Compression map, functional groups and fossilization: A chemometric approach (Pennsylvanian neuropteroid foliage, Canada)

    USGS Publications Warehouse

    D'Angelo, J. A.; Zodrow, E.L.; Mastalerz, Maria

    2012-01-01

    Nearly all of the spectrochemical studies involving Carboniferous foliage of seed-ferns are based on a limited number of pinnules, mainly compressions. In contrast, in this paper we illustrate working with a larger pinnate segment, i.e., a 22-cm long neuropteroid specimen, compression-preserved with cuticle: the compression map. The objective is to study preservation variability on a larger scale, where observation of transparency/opacity of constituent pinnules is used as a first approximation for assessing the degree of pinnule coalification/fossilization. Spectrochemical methods by Fourier transform infrared spectrometry furnish semi-quantitative data for principal component analysis. The compression map shows a high degree of preservation variability, which ranges from comparatively more coalified pinnules to less coalified pinnules that resemble fossilized cuticles, noting that the pinnule midveins are preserved more like fossilized cuticles. A general overall trend from coalified pinnules towards fossilized cuticles, i.e., variable chemistry, is inferred from the semi-quantitative FTIR data, as higher contents of aromatic compounds occur in the visually more opaque upper part of the compression map. The latter also shows a higher condensation of the aromatic nuclei along with some variation in both ring size and degree of aromatic substitution. From principal component analysis we infer a correspondence between transparency/opacity observations and chemical information, which correlates to varying degrees with fossilization/coalification among pinnules. © 2011 Elsevier B.V.

  10. Compression algorithm for multideterminant wave functions.

    PubMed

    Weerasinghe, Gihan L; Ríos, Pablo López; Needs, Richard J

    2014-02-01

    A compression algorithm is introduced for multideterminant wave functions which can greatly reduce the number of determinants that need to be evaluated in quantum Monte Carlo calculations. We have devised an algorithm with three levels of compression, the least costly of which yields excellent results in polynomial time. We demonstrate the usefulness of the compression algorithm for evaluating multideterminant wave functions in quantum Monte Carlo calculations, whose computational cost is reduced by factors of between about 2 and over 25 for the examples studied. We have found evidence of sublinear scaling of quantum Monte Carlo calculations with the number of determinants when the compression algorithm is used.

  11. Evaluation and Management of Vertebral Compression Fractures

    PubMed Central

    Alexandru, Daniela; So, William

    2012-01-01

    Compression fractures affect many individuals worldwide. An estimated 1.5 million vertebral compression fractures occur every year in the US. They are common in elderly populations, and 25% of postmenopausal women are affected by a compression fracture during their lifetime. Although these fractures rarely require hospital admission, they have the potential to cause significant disability and morbidity, often causing incapacitating back pain for many months. This review provides information on the pathogenesis and pathophysiology of compression fractures, as well as clinical manifestations and treatment options. Among the available treatment options, kyphoplasty and percutaneous vertebroplasty are two minimally invasive techniques to alleviate pain and correct the sagittal imbalance of the spine. PMID:23251117

  12. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.

  13. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  14. [Irreversible image compression in radiology. Current status].

    PubMed

    Pinto dos Santos, D; Jungmann, F; Friese, C; Düber, C; Mildenberger, P

    2013-03-01

    Due to increasing amounts of data in radiology, methods for image compression appear both economically and technically interesting. Irreversible image compression allows a markedly higher reduction of data volume than reversible compression algorithms but is accompanied by a certain amount of mathematical and visual loss of information. Various national and international radiological societies have published recommendations for the use of irreversible image compression. The degree of acceptable compression varies across modalities and regions of interest. The DICOM standard supports JPEG, which achieves compression through tiling, DCT/DWT and quantization. Although mathematical loss occurs due to rounding errors and reduction of high-frequency information, this results in relatively low visual degradation. It is still unclear where to implement irreversible compression in the radiological workflow, as only few studies have analyzed the impact of irreversible compression on specialized image postprocessing. As long as this remains within the limits recommended by the German Radiological Society, irreversible image compression could be implemented directly at the imaging modality, as it would comply with § 28 of the German X-ray Ordinance (RöV). PMID:23456043
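The transform-quantize-dequantize chain that makes such compression irreversible can be shown in miniature. The following is a 1-D, 8-point DCT toy (illustrative only, not the DICOM JPEG pipeline): the rounding step discards information, but the reconstruction error stays bounded by the quantization step.

```python
import math

N = 8

def dct(x):   # orthonormal DCT-II
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            * (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
            for k in range(N)]

def idct(X):  # inverse transform (DCT-III)
    return [sum(X[k] * (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                * math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

q = 16  # quantization step: coarser q gives more compression, more loss
samples = [52, 55, 61, 66, 70, 61, 64, 73]     # one row of pixel values
coeffs = dct(samples)
quantized = [round(c / q) for c in coeffs]     # the irreversible rounding step
restored = idct([c * q for c in quantized])

err = max(abs(a - b) for a, b in zip(samples, restored))
assert err > 0          # some information was lost
assert err < 2 * q      # but the loss is bounded by the quantization step
```

The visual impact is small precisely because the rounding mostly affects high-frequency coefficients, which carry little perceptible detail.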

  15. Single-pixel complementary compressive sampling spectrometer

    NASA Astrophysics Data System (ADS)

    Lan, Ruo-Ming; Liu, Xue-Feng; Yao, Xu-Ri; Yu, Wen-Kai; Zhai, Guang-Jie

    2016-05-01

    A new type of compressive spectroscopy technique employing a complementary sampling strategy is reported. In a single sequence of spectral compressive sampling, positive and negative measurements are performed, in which sensing matrices with a complementary relationship are used. The restricted isometry property condition that compressive sampling theory requires for accurate recovery is satisfied mathematically. Compared with the conventional single-pixel spectroscopy technique, the complementary compressive sampling strategy can achieve spectral recovery of considerably higher quality within a shorter sampling time. We also investigate the influence of the sampling ratio and integration time on the recovery quality.
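The complementary relationship can be illustrated with a toy calculation (hypothetical code; the pattern sizes and names are assumptions, not the paper's setup): the positive arm measures with a binary pattern H, the negative arm with its complement 1 - H, and differencing the two arms yields measurements whose effective sensing matrix has entries ±1.

```python
import random

random.seed(1)
n, m = 16, 16   # scene size and number of pattern pairs (assumed values)
scene = [random.random() for _ in range(n)]

# binary sensing patterns H and their complements 1 - H
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
pos = [sum(h * s for h, s in zip(row, scene)) for row in H]        # positive arm
neg = [sum((1 - h) * s for h, s in zip(row, scene)) for row in H]  # complementary arm

# differencing the arms gives (2H - 1) @ scene, a +/-1 sensing matrix,
# and cancels any background term common to both measurements
diff = [p - q for p, q in zip(pos, neg)]
check = [sum((2 * h - 1) * s for h, s in zip(row, scene)) for row in H]
assert all(abs(a - b) < 1e-9 for a, b in zip(diff, check))
```

With m = n the ±1 system is (almost surely) invertible and the scene can be solved for directly; compressive sampling instead uses m < n together with a sparse-recovery solver, which is where the restricted isometry property matters.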

  16. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
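A minimal sketch of a linear interframe predictor with residual quantization, in the spirit of (but not taken from) the paper's scheme: the encoder mirrors the decoder's reconstruction in its own loop, so the per-frame error stays bounded by half the quantization step while the residuals cluster near zero and compress well.

```python
import math, random

random.seed(0)
# toy 1-D coordinate trajectory: smooth motion plus small thermal jitter
traj = [math.sin(0.01 * t) * 10 + random.gauss(0, 1e-3) for t in range(500)]

step = 1e-2   # quantization step, i.e. a maximum error of step/2

decoded = []
residuals = []
for t, x in enumerate(traj):
    if t < 2:
        pred = decoded[-1] if decoded else 0.0
    else:
        pred = 2 * decoded[-1] - decoded[-2]    # linear extrapolation from decoded frames
    r = round((x - pred) / step)                # quantized residual (what gets entropy-coded)
    residuals.append(r)
    decoded.append(pred + r * step)             # decoder performs exactly this reconstruction

max_err = max(abs(a - b) for a, b in zip(traj, decoded))
assert max_err <= step / 2 + 1e-12
# residuals concentrate near zero, so they code far better than raw coordinates
assert sum(abs(r) <= 3 for r in residuals) / len(residuals) > 0.9
```

Predicting from *decoded* frames rather than original ones is the key design choice: it prevents quantization error from accumulating across frames.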

  17. Spinal cord compression due to ethmoid adenocarcinoma.

    PubMed

    Johns, D R; Sweriduk, S T

    1987-10-15

    Adenocarcinoma of the ethmoid sinus is a rare tumor which has been epidemiologically linked to woodworking in the furniture industry. It has a low propensity to metastasize and has not been previously reported to cause spinal cord compression. A symptomatic epidural spinal cord compression was confirmed on magnetic resonance imaging (MRI) scan in a former furniture worker with widely disseminated metastases. The clinical features of ethmoid sinus adenocarcinoma and neoplastic spinal cord compression, and the comparative value of MRI scanning in the neuroradiologic diagnosis of spinal cord compression are reviewed.

  18. Pulsed spheromak reactor with adiabatic compression

    SciTech Connect

    Fowler, T K

    1999-03-29

    Extrapolating from the Pulsed Spheromak reactor and the LINUS concept, we consider ignition achieved by injecting a conducting liquid into the flux conserver to compress a low temperature spheromak created by gun injection and ohmic heating. The required energy to achieve ignition and high gain by compression is comparable to that required for ohmic ignition and the timescale is similar so that the mechanical power to ignite by compression is comparable to the electrical power to ignite ohmically. Potential advantages and problems are discussed. Like the High Beta scenario achieved by rapid fueling of an ohmically ignited plasma, compression must occur on timescales faster than Taylor relaxation.

  19. A biologically inspired model for signal compression

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Abbott, Derek

    2007-12-01

    A model of a biological sensory neuron stimulated by a noisy analog information source is considered. It is demonstrated that action-potential generation by the neuron model can be described in terms of lossy compression theory. Lossy compression is generally characterized by (i) how much distortion is introduced, on average, due to a loss of information, and (ii) the 'rate,' or the amount of compression. Conventional compression theory is used to measure the performance of the model in terms of both distortion and rate, and the tradeoff between each. The model's applicability to a number of situations relevant to biomedical engineering, including cochlear implants and bio-sensors, is discussed.

  20. Effect of Compression Ratio on Perception of Time Compressed Phonemically Balanced Words in Kannada and Monosyllables

    PubMed Central

    Prabhu, Prashanth; Sujan, Mirale Jagadish; Rakshith, Satish

    2015-01-01

    The present study examined perception of time-compressed speech and the effect of compression ratio for phonemically balanced (PB) word lists in Kannada and monosyllables. The test was administered on 30 normal hearing individuals at compression ratios of 40%, 50%, 60%, 70% and 80% for PB words in Kannada and monosyllables. The results of the study showed that the speech identification scores for time-compressed speech reduced with increase in compression ratio. The scores were better for monosyllables compared to PB words, especially at higher compression ratios. The study provides speech identification scores at different compression ratios for PB words and monosyllables in individuals with normal hearing. The results of the study also showed that the scores did not vary across gender for any of the compression ratios for either stimulus type. The same test material needs to be administered to a clinical population with central auditory processing disorder for clinical validation of the present results. PMID:26557363

  1. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
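Frequency-sensitive competitive learning can be sketched as follows (a toy 2-D version with assumed data and parameters, not the paper's implementation): each codeword's win count scales its effective distance, so frequently winning codewords are penalized and no codeword is starved during codebook training.

```python
import random

random.seed(2)
# training vectors: two well-separated clusters, stand-ins for image blocks
data = ([[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(200)] +
        [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(200)])
random.shuffle(data)

K = 4
codebook = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(K)]
wins = [1] * K   # win counters implement the "frequency-sensitive" bias
lr = 0.1

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

for v in data:
    # fairness term: frequent winners look farther away than they are
    j = min(range(K), key=lambda k: wins[k] * dist2(codebook[k], v))
    wins[j] += 1
    codebook[j] = [c + lr * (x - c) for c, x in zip(codebook[j], v)]  # pull winner toward input

# average quantization error should be small relative to the cluster separation
err = sum(min(dist2(c, v) for c in codebook) for v in data) / len(data)
assert err < 1.0
```

At encode time each input block is replaced by the index of its nearest codeword, which is what makes fixed-length (non-Huffman) codes, and hence the bit-error robustness noted above, possible.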

  2. Magnetic Flux Compression in Plasmas

    NASA Astrophysics Data System (ADS)

    Velikovich, A. L.

    2012-10-01

    Magnetic flux compression (MFC) as a method for producing ultra-high pulsed magnetic fields originated in the 1950s with Sakharov et al. at Arzamas in the USSR (now VNIIEF, Russia) and Fowler et al. at Los Alamos in the US. The highest magnetic field produced by an explosively driven MFC generator, 28 MG, was reported by Boyko et al. of VNIIEF. The idea of using MFC to increase the magnetic field in a magnetically confined plasma to 3-10 MG, relaxing the strict requirements on the plasma density and Lawson time, gave rise to the research area known as MTF in the US and MAGO in Russia. To make a difference in ICF, a magnetic field of ˜100 MG should be generated via MFC by a plasma liner as a part of the capsule compression scenario on a laser or pulsed power facility. This approach was first suggested in the mid-1980s by Liberman and Velikovich in the USSR and Felber in the US. It was not obvious from the start that it could work at all, given that so many mechanisms exist for anomalously fast penetration of magnetic field through plasma. And yet, many experiments stimulated by this proposal since 1986, mostly using pulsed-power drivers, demonstrated reasonably good flux compression up to ˜42 MG, although diagnostics of magnetic fields of such magnitude in HED plasmas is still problematic. New interest in MFC in plasmas has emerged with the advent of new drivers, diagnostic methods and simulation tools. Experiments on MFC in a deuterium plasma filling a cylindrical plastic liner imploded by OMEGA laser beams, led by Knauer, Betti et al. at LLE, produced peak fields of 36 MG. The novel MagLIF approach to low-cost, high-efficiency ICF pursued by Herrmann, Slutz, Vesey et al. at Sandia involves pulsed-power-driven MFC to a peak field of ˜130 MG in a DT plasma. A review of the progress, current status and future prospects of MFC in plasmas is presented.

  3. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme with current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding.

  4. Genetic disorders producing compressive radiculopathy.

    PubMed

    Corey, Joseph M

    2006-11-01

    Back pain is a frequent complaint seen in neurological practice. In evaluating back pain, neurologists are asked to evaluate patients for radiculopathy, determine whether they may benefit from surgery, and help guide management. Although disc herniation is the most common etiology of compressive radiculopathy, there are many other causes, including genetic disorders. This article is a discussion of genetic disorders that cause or contribute to radiculopathies. These genetic disorders include neurofibromatosis, Paget's disease of bone, and ankylosing spondylitis. Numerous genetic disorders can also lead to deformities of the spine, including spinal muscular atrophy, Friedreich's ataxia, Charcot-Marie-Tooth disease, familial dysautonomia, idiopathic torsional dystonia, Marfan's syndrome, and Ehlers-Danlos syndrome. However, the extent of radiculopathy caused by spine deformities is essentially absent from the literature. Finally, recent investigation into the heritability of disc degeneration and lumbar disc herniation suggests a significant genetic component in the etiology of lumbar disc disease. PMID:17048153

  5. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, Richard W.; Hrubesh, Lawrence W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50-800 kg/m.sup.3 (0.05-0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization.

  6. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, R.W.; Hrubesh, L.W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner is disclosed. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50--800 kg/m{sup 3} (0.05--0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization. 4 figs.

  7. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1995-01-01

    Liquid hydrazine (N{sub 2}H{sub 4}) is a propellant used by the Air Force and NASA for aerospace propulsion and power systems. Because the propellant modules that contain the hydrazine can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted in an attempt to investigate the detonability of liquid hydrazine; however, the experimental results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares the results to those obtained in experiment. We also present the methodology of our approach, which includes chemical kinetic experiments, chemical equilibrium calculations, and characterization of the equation of state of liquid hydrazine.

  8. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293

  10. High energy femtosecond pulse compression

    NASA Astrophysics Data System (ADS)

    Lassonde, Philippe; Mironov, Sergey; Fourmaux, Sylvain; Payeur, Stéphane; Khazanov, Efim; Sergeev, Alexander; Kieffer, Jean-Claude; Mourou, Gerard

    2016-07-01

    An original method for retrieving the Kerr nonlinear index was proposed and implemented for TF12 heavy flint glass. Then, a defocusing lens made of this highly nonlinear glass was used to generate an almost constant spectral broadening across a Gaussian beam profile. The lens was designed with spherical curvatures chosen in order to match the laser beam profile, such that the product of the thickness with intensity is constant. This solid-state optic, in combination with chirped mirrors, was used to decrease the pulse duration at the output of a terawatt-class femtosecond laser. We demonstrated compression of a 33 fs pulse to 16 fs with 170 mJ energy.

  11. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Without any compression, transmitting a 10 Mb card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that is better grounded in theory. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
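The wavelet-plus-scalar-quantization chain at the heart of WSQ can be miniaturized with a one-level Haar transform (an illustrative stand-in; WSQ itself uses 9/7 filters and 64 subbands): the lowpass band is kept exactly, the detail band is coarsely quantized, and the reconstruction error is bounded by half the quantization step.

```python
# One-level Haar analysis/synthesis with uniform scalar quantization of the
# detail band: a toy stand-in for WSQ's wavelet transform + scalar quantizer.
signal = [56, 41, 8, 24, 47, 48, 40, 16]

avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]  # lowpass band
det = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]  # highpass band

q = 4
det_q = [round(d / q) for d in det]   # details get few bits; lowpass keeps full precision

# decoder: dequantize the details and invert the transform
det_hat = [dq * q for dq in det_q]
recon = []
for a, d in zip(avg, det_hat):
    recon += [a + d, a - d]

err = max(abs(a - b) for a, b in zip(signal, recon))
assert 0 < err <= q / 2
```

The bit-allocation problem the paper addresses is precisely the choice of a quantization step q per subband: perceptually unimportant subbands tolerate a large q, important ones need a small one.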

  12. Survey of data compression techniques

    SciTech Connect

    Gryder, R.; Hake, K.

    1991-09-01

    PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  14. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  15. Bunch length compression method for free electron lasers to avoid parasitic compressions

    SciTech Connect

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R.sub.56>0), and 3) compensating for aberration by using nonlinear magnets in the compressor beam line.

  16. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... Laboratory, in conjunction with the Hydrogen Storage team of the EERE Fuel Cell Technologies Program, will be.../hydrogenandfuelcells/wkshp_compressedcryo.html . The purpose of the compressed hydrogen workshop on Monday February... Research and Development Strategies for Compressed & Cryo- Compressed Hydrogen Storage Workshops......

  17. The stability of compressible mixing layers in binary gases

    NASA Technical Reports Server (NTRS)

    Kozusko, F.; Lasseigne, D. G.; Grosch, C. E.; Jackson, T. L.

    1996-01-01

    We present the results of a study of the inviscid two-dimensional spatial stability of a parallel compressible mixing layer in a binary gas. The parameters of this study are the Mach number of the fast stream, the ratio of the velocity of the slow stream to that of the fast stream, the ratio of the temperatures, the composition of the gas in the slow stream and in the fast stream, and the frequency of the disturbance wave. The ratio of the molecular weight of the slow stream to that of the fast stream is found to be an important quantity and is used as an independent variable in presenting the stability characteristics of the flow. It is shown that differing molecular weights have a significant effect on the neutral-mode phase speeds, the phase speeds of the unstable modes, the maximum growth rates and the unstable frequency range of the disturbances. The molecular weight ratio is a reasonable predictor of the stability trends. We have further demonstrated that the normalized growth rate as a function of the convective Mach number is relatively insensitive (within approximately 25%) to changes in the composition of the mixing layer. Thus, the normalized growth rate is a key element when considering the stability of compressible mixing layers, since once the basic stability characteristics for a particular combination of gases are known at zero Mach number, the decrease in growth rates due to compressibility effects at the larger convective Mach numbers is somewhat predictable.

  18. Compression through decomposition into browse and residual images

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.; Manohar, M.

    1993-01-01

    Economical archival and retrieval of image data is becoming increasingly important considering the unprecedented data volumes expected from the Earth Observing System (EOS) instruments. For cost-effective browsing of the image data (possibly from a remote site) and retrieval of the original image data from the data archive, we suggest an integrated image browse and data archive system employing incremental transmission. We produce our browse image data with the JPEG/DCT lossy compression approach. Image residual data is then obtained by taking the pixel-by-pixel differences between the original data and the browse image data. We then code the residual data with a form of variable-length coding called diagonal coding. In our experiments, the JPEG/DCT is used at different quality factors (Q) to generate the browse and residual data. The algorithm has been tested on band 4 of two Thematic Mapper (TM) data sets. The best overall compression ratios (of about 1.7) were obtained when a quality factor of Q=50 was used to produce browse data at a compression ratio of 10 to 11. At this quality factor the browse image data has virtually no visible distortions for the images tested.
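The browse/residual decomposition with incremental transmission can be sketched with coarse quantization standing in for the JPEG/DCT browse step (hypothetical pixel values): the lossy browse product is sent first, and the small-range residual later restores the original exactly.

```python
# Browse image: coarse quantization stands in for JPEG/DCT at quality Q.
original = [12, 200, 37, 90, 254, 3, 128, 65]   # one row of pixel values

Q_STEP = 16
browse = [round(p / Q_STEP) * Q_STEP for p in original]   # lossy browse product
residual = [o - b for o, b in zip(original, browse)]      # small-range differences

# incremental transmission: browse first, residual later restores the original
restored = [b + r for b, r in zip(browse, residual)]
assert restored == original
# residual values span a narrow range, so variable-length coding pays off
assert max(map(abs, residual)) <= Q_STEP // 2
```

The overall system is thus lossless end to end even though the browse product alone is lossy, which is why the total compression ratio (about 1.7 in the paper) is far lower than the browse-only ratio of 10 to 11.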

  19. Bioimpedance of soft tissue under compression.

    PubMed

    Dodde, R E; Bull, J L; Shih, A J

    2012-06-01

    In this paper compression-dependent bioimpedance measurements of porcine spleen tissue are presented. Using a Cole-Cole model, nonlinear compositional changes in extracellular and intracellular makeup, related to a loss of fluid from the tissue, are identified during compression. Bioimpedance measurements were made using a custom tetrapolar probe and bioimpedance circuitry. As the tissue is increasingly compressed up to 50%, both intracellular and extracellular resistances increase while bulk membrane capacitance decreases. Increasing compression to 80% results in an increase in intracellular resistance and bulk membrane capacitance while extracellular resistance decreases. Tissues compressed incrementally to 80% show a decreased extracellular resistance of 32%, an increased intracellular resistance of 107%, and an increased bulk membrane capacitance of 64% compared to their uncompressed values. Intracellular resistance exhibits double asymptotic curves when plotted against the peak tissue pressure during compression, possibly indicating two distinct phases of mechanical change in the tissue during compression. Based on these findings, differing theories as to what is happening at a cellular level during high tissue compression are discussed, including the possibility of cell rupture and mass exudation of cellular material.
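The Cole-Cole model used to interpret such measurements has a standard closed form, Z(w) = Rinf + (R0 - Rinf)/(1 + (jw*tau)^alpha). A quick numeric check with illustrative (assumed, not the paper's) parameter values shows the limits that give the extracellular and combined resistances:

```python
# Cole-Cole impedance: Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)
def cole_cole(w, r0, rinf, tau, alpha):
    return rinf + (r0 - rinf) / (1 + (1j * w * tau) ** alpha)

# illustrative tissue-like values (assumptions for the demo)
R0, Rinf, TAU, ALPHA = 120.0, 40.0, 1e-6, 0.8

z_low = cole_cole(1e1, R0, Rinf, TAU, ALPHA)
z_high = cole_cole(1e9, R0, Rinf, TAU, ALPHA)

# at low frequency current is confined to the extracellular space: |Z| -> R0
assert abs(abs(z_low) - R0) < 1.0
# at high frequency membranes conduct and current takes both paths: |Z| -> Rinf
assert abs(abs(z_high) - Rinf) < 1.0
```

Fitting R0, Rinf, tau and alpha to measured spectra at each compression level is what lets the extracellular resistance, intracellular resistance, and bulk membrane capacitance be tracked separately as the tissue is compressed.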

  20. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.

  1. A Comparative Study of Compression Video Technology.

    ERIC Educational Resources Information Center

    Keller, Chris A.; And Others

    The purpose of this study was to provide an overview of compression devices used to increase the cost effectiveness of teleconferences by reducing satellite bandwidth requirements for the transmission of television pictures and accompanying audio signals. The main body of the report describes the comparison study of compression rates and their…

  2. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increases, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than when simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
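
    The core idea, compressing each table column separately rather than gzipping the whole interleaved file, can be sketched with zlib on toy data. The column layout below is invented for illustration and is not the FITS tiled-table format itself.

```python
import zlib

# Toy "binary table": three columns with very different statistics.
rows = 5000
time_col = b"".join((1000 + i).to_bytes(4, "big") for i in range(rows))   # slowly varying
flag_col = bytes(i % 2 for i in range(rows))                              # highly repetitive
flux_col = b"".join(((i * 37) % 251).to_bytes(2, "big") for i in range(rows))

# Whole-file style: interleave the rows, then compress once (like gzip on the FITS file).
interleaved = b"".join(
    time_col[4 * i:4 * i + 4] + flag_col[i:i + 1] + flux_col[2 * i:2 * i + 2]
    for i in range(rows)
)
whole = len(zlib.compress(interleaved))

# Tiled-table style: transpose to columns and compress each column on its own,
# so each compressor sees homogeneous data.
by_column = sum(len(zlib.compress(c)) for c in (time_col, flag_col, flux_col))

# Each compressed column can also be decoded independently of the others.
assert zlib.decompress(zlib.compress(flag_col)) == flag_col
```

Column-wise compression tends to win because values within a column share type and statistics, and it preserves the random access to individual columns described above.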

  3. Sudden Viscous Dissipation of Compressing Turbulence

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-03-11

    Here we report that compression of a turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion.

  4. Aligned genomic data compression via improved modeling.

    PubMed

    Ochoa, Idoia; Hernaez, Mikel; Weissman, Tsachy

    2014-12-01

    With the release of the latest Next-Generation Sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing the whole genome of a human is expected to drop to a mere $1000. This milestone in sequencing history marks the era of affordable sequencing of individuals and opens the doors to personalized medicine. Accordingly, unprecedented volumes of genomic data will require storage for processing. There will be dire need not only of compressing aligned data, but also of generating compressed files that can be fed directly to downstream applications to facilitate the analysis of and inference on the data. Several approaches to this challenge have been proposed in the literature; however, focus thus far has been on the low coverage regime and most of the suggested compressors are not based on effective modeling of the data. We demonstrate the benefit of data modeling for compressing aligned reads. Specifically, we show that, by working with data models designed for the aligned data, we can improve considerably over the best compression ratio achieved by previously proposed algorithms. Our results indicate that the Pareto-optimal barrier for compression rate and speed claimed by Bonfield and Mahoney (2013) [Bonfield JK and Mahoney MV, Compression of FASTQ and SAM format sequencing data, PLOS ONE, 8(3):e59190, 2013.] does not apply for high coverage aligned data. Furthermore, our improved compression ratio is achieved by splitting the data in a manner conducive to operations in the compressed domain by downstream applications.

  6. College Students' Preference for Compressed Speech Lectures.

    ERIC Educational Resources Information Center

    Primrose, Robert A.

    To test student reactions to compressed-speech lectures, tapes for a general education course in oral communication were compressed to 49 to 77 percent of original time. Students were permitted to check them out via a dial access retrieval system. Checkouts and use of tapes were compared with student grades at semester's end. No significant…

  7. Compression and fast retrieval of SNP data

    PubMed Central

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-01-01

    Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564
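
    Idea (i), storing a SNP as its differences from a reference SNP in the same linkage-disequilibrium block, can be sketched as follows. The 0/1/2 genotype coding and the sparse (index, genotype) pair encoding are illustrative assumptions, not the actual SNPack file format.

```python
def encode_vs_reference(reference, genotypes):
    """Encode a SNP (one genotype per subject, coded 0/1/2) as its differences
    from a reference SNP in the same LD block: (subject_index, genotype) pairs
    at the positions where the two vectors disagree."""
    return [(i, g) for i, (r, g) in enumerate(zip(reference, genotypes)) if r != g]

def decode_vs_reference(reference, diffs):
    out = list(reference)
    for i, g in diffs:
        out[i] = g
    return out

ref   = [0, 1, 2, 0, 0, 1, 2, 2, 0, 1]
snp   = [0, 1, 2, 0, 1, 1, 2, 2, 0, 1]   # high LD: differs at a single subject
diffs = encode_vs_reference(ref, snp)
assert diffs == [(4, 1)]
assert decode_vs_reference(ref, diffs) == snp
```

In strong LD the difference list is short, so the per-SNP cost shrinks toward the number of discordant subjects rather than the number of genotyped subjects.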

  8. LOW-VELOCITY COMPRESSIBLE FLOW THEORY

    EPA Science Inventory

    The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...

  9. Compressible turbulent boundary layer interaction experiments

    NASA Technical Reports Server (NTRS)

    Settles, G. S.; Bogdonoff, S. M.

    1981-01-01

    Four phases of research results are reported: (1) experiments on the compressible turbulent boundary layer flow in a streamwise corner; (2) the two dimensional (2D) interaction of incident shock waves with a compressible turbulent boundary layer; (3) three dimensional (3D) shock/boundary layer interactions; and (4) cooperative experiments at Princeton and numerical computations at NASA-Ames.

  10. Recoil Experiments Using a Compressed Air Cannon

    ERIC Educational Resources Information Center

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  11. Hardware compression using common portions of data

    DOEpatents

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.

  12. CRUSH: The NSI data compression utility

    NASA Technical Reports Server (NTRS)

    Seiler, ED

    1991-01-01

    CRUSH is a data compression utility that provides the user with several lossless compression techniques available in a single application. It is intended that the future development of CRUSH will depend upon feedback from the user community to identify new features and capabilities desired by the users. CRUSH provides an extension to the UNIX Compress program and the various VMS implementations of Compress that many users are familiar with. An important capability added by CRUSH is the inclusion of additional compression techniques and the option of automatically determining the best technique for a given data file. The CRUSH software is written in C and is designed to run on both VMS and UNIX systems. VMS files that are compressed will regain their full file characteristics upon decompression. To the extent possible, compressed files can be transferred between VMS and UNIX systems, and thus be decompressed on a different system than the one they were compressed on. Version 1 of CRUSH is currently available. This version is a VAX VMS implementation. Version 2, which has the full range of capabilities for both VMS and UNIX implementations, will be available shortly.

  13. Insertion Profiles of 4 Headless Compression Screws

    PubMed Central

    Hart, Adam; Harvey, Edward J.; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A.

    2013-01-01

    Purpose In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. Methods The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. Results The peak compression occurs at an insertion depth of −3.1 mm, −2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of −2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. Conclusions All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of −2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Clinical relevance Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws

  14. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  15. Compression, cochlear implants, and psychophysical laws

    NASA Astrophysics Data System (ADS)

    Zeng, Fan-Gang

    2001-05-01

    Cochlear compression contributes significantly to sharp frequency tuning and wide dynamic range in audition. The physiological mechanism underlying the compression has been traced to the outer hair cell function. Electric stimulation of the auditory nerve in cochlear implants bypasses this compression function, serving as a research tool to delineate the peripheral and central contributions to auditory functions. In this talk, I will compare psychophysical performance between acoustic and electric hearing in intensity, frequency, and time processing, and pay particular attention to the data that demonstrate the role of cochlear compression. Examples include both the cochlear-implant listeners' extremely narrow dynamic range and poor pitch discrimination and their exquisite sensitivity to changes in amplitude and phase. A unified view on the complementary contributions of cochlear compression and central expansion will be developed to account for Weber's law and Stevens' power law.

  16. Compressed bitmap indices for efficient query processing

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2001-09-30

    Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiencies of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression. During query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by trading some compression effectiveness for improved operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster while using only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
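
    A simplified sketch of the word-aligned run-length idea behind such schemes, using 31-bit groups as in word-aligned hybrid coding. The tuple representation is illustrative only, not the actual bit-level word layout.

```python
def wah_encode(bits):
    """Word-aligned-hybrid-style encoding (simplified sketch, 31-bit groups).

    Groups that are all zeros or all ones are merged into "fill" runs whose
    length is counted in whole groups; mixed groups are kept as literals.
    Bitwise operations for queries can then proceed run-by-run without
    unpacking the bitmap.
    """
    words = []
    for i in range(0, len(bits), 31):
        group = bits[i:i + 31]
        if group == "0" * len(group) or group == "1" * len(group):
            kind = ("fill", group[0])
            if words and words[-1][:2] == kind:
                words[-1] = (*kind, words[-1][2] + 1)   # extend the current run
            else:
                words.append((*kind, 1))
        else:
            words.append(("literal", group))
    return words

bitmap = "0" * 62 + "1011001" + "0" * 24 + "1" * 62
encoded = wah_encode(bitmap)
```

Sparse bitmaps (the common case for bitmap indices) collapse into a handful of fill words, which is what makes logical operations on the compressed form fast.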

  17. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
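
    The normal-shock portion of such a calculator reduces to the standard perfect-gas relations; a minimal sketch:

```python
def normal_shock(m1, gamma=1.4):
    """Normal-shock relations for a calorically perfect gas (requires M1 > 1).

    Returns the downstream Mach number M2 and the static pressure ratio p2/p1.
    """
    if m1 <= 1.0:
        raise ValueError("normal shocks require supersonic upstream flow")
    m2_sq = (1 + 0.5 * (gamma - 1) * m1 ** 2) / (gamma * m1 ** 2 - 0.5 * (gamma - 1))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (m1 ** 2 - 1)
    return m2_sq ** 0.5, p_ratio

m2, p21 = normal_shock(2.0)
# For M1 = 2.0 in air (gamma = 1.4): M2 ≈ 0.577, p2/p1 = 4.5
```

Chaining such relations across each ramp-generated oblique shock and the terminal normal shock is essentially how the inlet design tool described above builds up the total pressure recovery.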

  18. Application of PCA-based data compression in the ANN-supported conceptual cost estimation of residential buildings

    NASA Astrophysics Data System (ADS)

    Juszczyk, Michał

    2016-06-01

    The paper presents concisely some research results on the application of principal component analysis for the data compression and the use of compressed data as the variables describing the model in the issue of conceptual cost estimation of residential buildings. The goal of the research was to investigate the possibility of use of compressed input data of the model in neural modelling - the basic information about residential buildings available in the early stage of design and construction cost. The results for chosen neural networks that were trained with use of the compressed input data are presented in the paper. In the summary the results obtained for the neural networks with PCA-based data compression are compared with the results obtained in the previous stage of the research for the network committees.
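
    A minimal sketch of PCA-based input compression on two hypothetical building descriptors. The closed form below is specific to two features; the study itself used more input variables with neural networks trained on the compressed representation.

```python
import math

def pca_1d(rows):
    """Project two correlated features onto their first principal component.

    Uses the closed form for the 2x2 covariance matrix; returns the compressed
    one-dimensional scores, analogous to feeding a cost model a reduced
    description of each building.
    """
    n = len(rows)
    mx = sum(x for x, _ in rows) / n
    my = sum(y for _, y in rows) / n
    sxx = sum((x - mx) ** 2 for x, _ in rows) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in rows) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in rows) / (n - 1)
    # Leading eigenvalue of [[sxx, sxy], [sxy, syy]] and its eigenvector.
    lam = 0.5 * (sxx + syy) + math.sqrt(0.25 * (sxx - syy) ** 2 + sxy ** 2)
    vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    return [(x - mx) * vx + (y - my) * vy for x, y in rows]

# Hypothetical example: floor area and usable area are nearly redundant,
# so one principal component captures almost all of the variance.
buildings = [(100.0, 90.0), (200.0, 185.0), (300.0, 270.0), (400.0, 370.0)]
scores = pca_1d(buildings)
```

Replacing near-redundant inputs with a few principal-component scores shrinks the network's input layer while losing little descriptive information.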

  19. Hyperelastic Material Properties of Mouse Skin under Compression.

    PubMed

    Wang, Yuxiang; Marshall, Kara L; Baba, Yoshichika; Gerling, Gregory J; Lumpkin, Ellen A

    2013-01-01

    The skin is a dynamic organ whose complex material properties are capable of withstanding continuous mechanical stress while accommodating insults and organism growth. Moreover, synchronized hair cycles, comprising waves of hair growth, regression and rest, are accompanied by dramatic fluctuations in skin thickness in mice. Whether such structural changes alter skin mechanics is unknown. Mouse models are extensively used to study skin biology and pathophysiology, including aging, UV-induced skin damage and somatosensory signaling. As the skin serves a pivotal role in the transfer function from sensory stimuli to neuronal signaling, we sought to define the mechanical properties of mouse skin over a range of normal physiological states. Skin thickness, stiffness and modulus were quantitatively surveyed in adult, female mice (Mus musculus). These measures were analyzed under uniaxial compression, which is relevant for touch reception and compression injuries, rather than tension, which is typically used to analyze skin mechanics. Compression tests were performed with 105 full-thickness, freshly isolated specimens from the hairy skin of the hind limb. Physiological variables included body weight, hair-cycle stage, maturity level, skin site and individual animal differences. Skin thickness and stiffness were dominated by hair-cycle stage at young (6-10 weeks) and intermediate (13-19 weeks) adult ages but by body weight in mature mice (26-34 weeks). Interestingly, stiffness varied inversely with thickness so that hyperelastic modulus was consistent across hair-cycle stages and body weights. By contrast, the mechanics of hairy skin differs markedly with anatomical location. In particular, skin containing fascial structures such as nerves and blood vessels showed significantly greater modulus than adjacent sites. 
Collectively, this systematic survey indicates that, although its structure changes dramatically throughout adult life, mouse skin at a given location maintains a

  20. Determination of friction coefficient in unconfined compression of brain tissue.

    PubMed

    Rashid, Badar; Destrade, Michel; Gilchrist, Michael D

    2012-10-01

    Unconfined compression tests are more convenient to perform on cylindrical samples of brain tissue than tensile tests in order to estimate mechanical properties of the brain tissue because they allow homogeneous deformations. The reliability of these tests depends significantly on the amount of friction generated at the specimen/platen interface. Thus, there is a crucial need to find an approximate value of the friction coefficient in order to predict a possible overestimation of stresses during unconfined compression tests. In this study, a combined experimental-computational approach was adopted to estimate the dynamic friction coefficient μ of porcine brain matter against metal platens in compressive tests. Cylindrical samples of porcine brain tissue were tested up to 30% strain at variable strain rates, both under bonded and lubricated conditions in the same controlled environment. It was established that μ was equal to 0.09±0.03, 0.18±0.04, 0.18±0.04 and 0.20±0.02 at strain rates of 1, 30, 60 and 90/s, respectively. Additional tests were also performed to analyze brain tissue under lubricated and bonded conditions, with and without initial contact of the top platen with the brain tissue, with different specimen aspect ratios and with different lubricants (Phosphate Buffer Saline (PBS), Polytetrafluoroethylene (PTFE) and Silicone). The test conditions (lubricant used, biological tissue, loading velocity) adopted in this study were similar to the studies conducted by other research groups. This study will help to understand the amount of friction generated during unconfined compression of brain tissue for strain rates of up to 90/s.

  1. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.

  2. Compressed bit stream classification using VQ and GMM

    NASA Astrophysics Data System (ADS)

    Chen, Wenhua; Kuo, C.-C. Jay

    1997-10-01

    Algorithms of classifying and segmenting bit streams with different source content (such as speech, text and image, etc.) and different coding methods (such as ADPCM, (mu)-law, tiff, gif and JPEG, etc.) in a communication channel are investigated. In previous work, we focused on the separation of fixed- and variable-length coded bit streams, and the classification of two variable-length coded bit streams by using Fourier analysis and entropy feature. In this work, we consider the classification of multiple (more than two sources) compressed bit streams by using vector quantization (VQ) and Gaussian mixture modeling (GMM). The performance of the VQ and GMM techniques depends on various parameters such as the size of the codebook, the number of mixtures and the test segment length. It is demonstrated with experiments that both VQ and GMM outperform the single entropy feature. It is also shown that GMM generally outperforms VQ.
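
    A toy version of VQ classification with one codeword per class. The byte-histogram feature and the codebook values below are invented stand-ins for the trained features and codebooks in the paper.

```python
def byte_histogram(data, bins=4):
    """Coarse, normalized byte-value histogram: a simple feature for telling
    bit-stream types apart (a stand-in for richer trained features)."""
    counts = [0] * bins
    for b in data:
        counts[b * bins // 256] += 1
    return [c / len(data) for c in counts]

def vq_classify(segment, codebooks):
    """One-codeword-per-class vector quantization: pick the class whose
    codebook vector is nearest (squared Euclidean) to the feature vector."""
    feat = byte_histogram(segment)
    return min(
        codebooks,
        key=lambda lbl: sum((f - c) ** 2 for f, c in zip(feat, codebooks[lbl])),
    )

# Hypothetical codebooks, e.g. averaged features of labeled training segments.
codebooks = {
    "text":       [0.05, 0.85, 0.08, 0.02],   # ASCII letters cluster in 64..127
    "compressed": [0.25, 0.25, 0.25, 0.25],   # well-compressed data looks uniform
}
assert vq_classify(b"the quick brown fox jumps over it", codebooks) == "text"
```

A GMM classifier replaces the single centroid per class with a mixture of Gaussians and picks the class with the highest likelihood, which is why it generally outperforms plain VQ.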

  3. Data structures and compression algorithms for genomic sequence data

    PubMed Central

    Brandon, Marty C.; Wallace, Douglas C.; Baldi, Pierre

    2009-01-01

    Motivation: The continuing exponential accumulation of full genome data, including full diploid human genomes, creates new challenges not only for understanding genomic structure, function and evolution, but also for the storage, navigation and privacy of genomic data. Here, we develop data structures and algorithms for the efficient storage of genomic and other sequence data that may also facilitate querying and protecting the data. Results: The general idea is to encode only the differences between a genome sequence and a reference sequence, using absolute or relative coordinates for the location of the differences. These locations and the corresponding differential variants can be encoded into binary strings using various entropy coding methods, from fixed codes such as Golomb and Elias codes, to variable codes, such as Huffman codes. We demonstrate the approach and various tradeoffs using highly variable human mitochondrial genome sequences as a testbed. With only a partial level of optimization, 3615 genome sequences occupying 56 MB in GenBank are compressed down to only 167 KB, achieving a 345-fold compression rate, using the revised Cambridge Reference Sequence as the reference sequence. Using the consensus sequence as the reference sequence, the data can be stored using only 133 KB, corresponding to a 433-fold level of compression, roughly a 23% improvement. Extensions to nuclear genomes and high-throughput sequencing data are discussed. Availability: Data are publicly available from GenBank, the HapMap web site, and the MITOMAP database. Supplementary materials with additional results, statistics, and software implementations are available from http://mammag.web.uci.edu/bin/view/Mitowiki/ProjectDNACompression. Contact: pfbaldi@ics.uci.edu PMID:19447783
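
    One of the entropy-coding choices mentioned, Elias gamma codes applied to gaps between difference positions, can be sketched as follows. The variant positions shown are illustrative, not taken from the paper's data.

```python
def elias_gamma(n):
    """Elias gamma code for a positive integer: a unary length prefix of
    zeros followed by the binary value. Short codes for small gaps, which
    suits the sparse differences against a close reference sequence."""
    if n < 1:
        raise ValueError("Elias gamma encodes integers >= 1")
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def encode_variant_positions(positions):
    # Store gaps between sorted variant positions, gamma-coded.
    gaps, prev = [], 0
    for p in positions:
        gaps.append(p - prev)
        prev = p
    return "".join(elias_gamma(g) for g in gaps)

# Hypothetical differences of a mitochondrial sequence vs. a reference,
# given as sorted 1-based positions.
bits = encode_variant_positions([73, 263, 750])
```

The closer the reference is to the encoded sequence (e.g. the consensus rather than an arbitrary reference), the smaller the gaps and the shorter the codes, matching the 345-fold versus 433-fold results above.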

  4. Energy requirements for quantum data compression and 1-1 coding

    SciTech Connect

    Rallan, Luke; Vedral, Vlatko

    2003-10-01

    By looking at quantum data compression in the second quantization, we present a model for the efficient generation and use of variable length codes. In this picture, lossless data compression can be seen as the minimum energy required to faithfully represent or transmit classical information contained within a quantum state. In order to represent information, we create quanta in some predefined modes (i.e., frequencies) prepared in one of the two possible internal states (the information carrying degrees of freedom). Data compression is now seen as the selective annihilation of these quanta, the energy of which is effectively dissipated into the environment. As any increase in the energy of the environment is intricately linked to any information loss and is subject to Landauer's erasure principle, we use this principle to distinguish lossless and lossy schemes and to suggest bounds on the efficiency of our lossless compression protocol. In line with the work of Bostroem and Felbinger [Phys. Rev. A 65, 032313 (2002)], we also show that when using variable length codes the classical notions of prefix or uniquely decipherable codes are unnecessarily restrictive given the structure of quantum mechanics and that a 1-1 mapping is sufficient. In the absence of this restraint, we translate existing classical results on 1-1 coding to the quantum domain to derive a new upper bound on the compression of quantum information. Finally, we present a simple quantum circuit to implement our scheme.

  5. Static Compression of Tetramethylammonium Borohydride

    SciTech Connect

    Dalton, Douglas Allen; Somayazulu, M.; Goncharov, Alexander F.; Hemley, Russell J.

    2011-11-15

    Raman spectroscopy and synchrotron X-ray diffraction are used to examine the high-pressure behavior of tetramethylammonium borohydride (TMAB) to 40 GPa at room temperature. The measurements reveal weak pressure-induced structural transitions around 5 and 20 GPa. Rietveld analysis and Le Bail fits of the powder diffraction data based on known structures of tetramethylammonium salts indicate that the transitions are mediated by orientational ordering of the BH₄⁻ tetrahedra followed by tilting of the (CH₃)₄N⁺ groups. X-ray diffraction patterns obtained during pressure release suggest reversibility with a degree of hysteresis. Changes in the Raman spectrum confirm that these transitions are not accompanied by bonding changes between the two ionic species. At ambient conditions, TMAB does not possess dihydrogen bonding, and Raman data confirm that this feature is not activated upon compression. The pressure-volume equation of state obtained from the diffraction data gives a bulk modulus [K₀ = 5.9(6) GPa, K′₀ = 9.6(4)] slightly lower than that observed for ammonia borane. Raman spectra obtained over the entire pressure range (spanning over 40% densification) indicate that the intramolecular vibrational modes are largely coupled.

  6. PHELIX for flux compression studies

    SciTech Connect

    Turchi, Peter J; Rousculp, Christopher L; Reinovsky, Robert E; Reass, William A; Griego, Jeffrey R; Oro, David M; Merrill, Frank E

    2010-06-28

    PHELIX (Precision High Energy-density Liner Implosion eXperiment) is a concept for studying electromagnetic implosions using proton radiography. This approach requires a portable pulsed power and liner implosion apparatus that can be operated in conjunction with an 800 MeV proton beam at the Los Alamos Neutron Science Center. The high resolution (< 100 micron) provided by proton radiography combined with similar precision of liner implosions driven electromagnetically can permit close comparisons of multi-frame experimental data and numerical simulations within a single dynamic event. Achieving a portable implosion system for use at high energy density in a proton laboratory area requires sub-megajoule energies applied to implosions only a few centimeters in radial and axial dimension. The associated inductance changes are therefore relatively modest, so a current step-up transformer arrangement is employed to avoid excessive loss to parasitic inductances that are relatively large for low-energy banks comprising only several capacitors and switches. We describe the design, construction and operation of the PHELIX system and discuss application to liner-driven, magnetic flux compression experiments. For the latter, the ability of strong magnetic fields to deflect the proton beam may offer a novel technique for measurement of field distributions near perturbed surfaces.

  7. Compressive sensing for nuclear security.

    SciTech Connect

    Gestner, Brian Joseph

    2013-12-01

    Special nuclear material (SNM) detection has applications in nuclear material control, treaty verification, and national security. The neutron and gamma-ray radiation signature of SNMs can be indirectly observed in scintillator materials, which fluoresce when exposed to this radiation. A photomultiplier tube (PMT) coupled to the scintillator material is often used to convert this weak fluorescence to an electrical output signal. The fluorescence produced by a neutron interaction event differs from that of a gamma-ray interaction event, leading to a slightly different pulse in the PMT output signal. The ability to distinguish between these pulse types, i.e., pulse shape discrimination (PSD), has enabled applications such as neutron spectroscopy, neutron scatter cameras, and dual-mode neutron/gamma-ray imagers. In this research, we explore the use of compressive sensing to guide the development of novel mixed-signal hardware for PMT output signal acquisition. Effectively, we explore smart digitizers that extract sufficient information for PSD while requiring a considerably lower sample rate than conventional digitizers. Given that we determine the feasibility of realizing these designs in custom low-power analog integrated circuits, this research enables the incorporation of SNM detection into wireless sensor networks.
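The abstract does not say which PSD algorithm the smart digitizers approximate; the classic baseline is charge comparison, in which the fraction of pulse charge in the slow tail separates neutron events from gamma-ray events. A minimal sketch with synthetic two-exponential pulses (all time constants and fractions below are illustrative assumptions, not detector values):

```python
import math

def psd_ratio(pulse, tail_start):
    """Charge-comparison PSD: fraction of total charge in the pulse tail."""
    total = sum(pulse)
    return sum(pulse[tail_start:]) / total if total else 0.0

def synthetic_pulse(fast_frac, n=200, tau_fast=5.0, tau_slow=50.0):
    """Two-exponential scintillation pulse; neutron events carry a larger
    slow component, so their tail fraction is higher."""
    return [fast_frac * math.exp(-t / tau_fast)
            + (1.0 - fast_frac) * math.exp(-t / tau_slow) for t in range(n)]

gamma_like = synthetic_pulse(fast_frac=0.95)    # mostly fast component
neutron_like = synthetic_pulse(fast_frac=0.80)  # larger slow tail
```

A compressive-sensing digitizer only needs to retain enough of the pulse shape to estimate this ratio, rather than the full Nyquist-rate waveform.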

  8. Shock compression profiles in ceramics

    SciTech Connect

    Grady, D.E.; Moody, R.L.

    1996-03-01

An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or by plates of a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al{sub 2}O{sub 3}, AlN, B{sub 4}C, SiC, Si{sub 3}N{sub 4}, TiB{sub 2}, WC and ZrO{sub 2}. This report compiles the VISAR wave profiles and experimental impact parameters within a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation in high-strength, brittle solids.

  9. The compression pathway of quartz

    SciTech Connect

    Thompson, Richard M.; Downs, Robert T.; Dera, Przemyslaw

    2011-11-07

    The structure of quartz over the temperature domain (298 K, 1078 K) and pressure domain (0 GPa, 20.25 GPa) is compared to the following three hypothetical quartz crystals: (1) Ideal {alpha}-quartz with perfectly regular tetrahedra and the same volume and Si-O-Si angle as its observed equivalent (ideal {beta}-quartz has Si-O-Si angle fixed at 155.6{sup o}). (2) Model {alpha}-quartz with the same Si-O-Si angle and cell parameters as its observed equivalent, derived from ideal by altering the axial ratio. (3) BCC quartz with a perfectly body-centered cubic arrangement of oxygen anions and the same volume as its observed equivalent. Comparison of experimental data recorded in the literature for quartz with these hypothetical crystal structures shows that quartz becomes more ideal as temperature increases, more BCC as pressure increases, and that model quartz is a very good representation of observed quartz under all conditions. This is consistent with the hypothesis that quartz compresses through Si-O-Si angle-bending, which is resisted by anion-anion repulsion resulting in increasing distortion of the c/a axial ratio from ideal as temperature decreases and/or pressure increases.

  10. Application of a Reynolds stress turbulence model to the compressible shear layer

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Balakrishnan, L.

    1990-01-01

    Theoretically based turbulence models have had success in predicting many features of incompressible, free shear layers. However, attempts to extend these models to the high-speed, compressible shear layer have been less effective. In the present work, the compressible shear layer was studied with a second-order turbulence closure, which initially used only variable density extensions of incompressible models for the Reynolds stress transport equation and the dissipation rate transport equation. The quasi-incompressible closure was unsuccessful; the predicted effect of the convective Mach number on the shear layer growth rate was significantly smaller than that observed in experiments. Having thus confirmed that compressibility effects have to be explicitly considered, a new model for the compressible dissipation was introduced into the closure. This model is based on a low Mach number, asymptotic analysis of the Navier-Stokes equations, and on direct numerical simulation of compressible, isotropic turbulence. The use of the new model for the compressible dissipation led to good agreement of the computed growth rates with the experimental data. Both the computations and the experiments indicate a dramatic reduction in the growth rate when the convective Mach number is increased. Experimental data on the normalized maximum turbulence intensities and shear stress also show a reduction with increasing Mach number.
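The compressible-dissipation closure referred to above is, as commonly quoted from Sarkar's published work (the abstract itself does not reproduce it), proportional to the solenoidal dissipation times the square of the turbulent Mach number:

```latex
\epsilon_c = \alpha_1 \, M_t^2 \, \epsilon_s , \qquad M_t^2 = \frac{2k}{c^2}
```

where k is the turbulent kinetic energy, c the speed of sound, and alpha{sub 1} an O(1) model constant. This makes the extra dissipation vanish in the incompressible limit and grow with convective Mach number, which is what recovers the observed reduction in shear-layer growth rate.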

  11. Key importance of compression properties in the biophysical characteristics of hyaluronic acid soft-tissue fillers.

    PubMed

    Gavard Molliard, Samuel; Albert, Séverine; Mondon, Karine

    2016-08-01

    Hyaluronic acid (HA) soft-tissue fillers are the most popular degradable injectable products used for correcting skin depressions and restoring facial volume loss. From a rheological perspective, HA fillers are commonly characterised through their viscoelastic properties under shear-stress. However, despite the continuous mechanical pressure that the skin applies on the fillers, compression properties in static and dynamic modes are rarely considered. In this article, three different rheological tests (shear-stress test and compression tests in static and dynamic mode) were carried out on nine CE-marked cross-linked HA fillers. Corresponding shear-stress (G', tanδ) and compression (E', tanδc, normal force FN) parameters were measured. We show here that the tested products behave differently under shear-stress and under compression even though they are used for the same indications. G' showed the expected influence on the tissue volumising capacity, and the same influence was also observed for the compression parameters E'. In conclusion, HA soft-tissue fillers exhibit widely different biophysical characteristics and many variables contribute to their overall performance. The elastic modulus G' is not the only critical parameter to consider amongst the rheological properties: the compression parameters E' and FN also provide key information, which should be taken into account for a better prediction of clinical outcomes, especially for predicting the volumising capacity and probably the ability to stimulate collagen production by fibroblasts. PMID:27093589

  12. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

The unique characteristics of compressed data have important implications to the design of space science data systems, science applications, and data compression techniques. The sequential nature or data dependence between each of the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.
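The error-multiplication effect described above is easy to demonstrate with the simplest data-dependent coder, delta encoding (a stand-in for real predictive compressors; the scheme and values below are purely illustrative):

```python
def delta_encode(samples):
    """Store each value as a difference from the previous one; the first
    sample acts as a reference (re-initialization) point."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

data = [10, 12, 15, 15, 18, 20]
enc = delta_encode(data)
enc[2] += 1                      # a single transmission error in one delta
corrupted = delta_decode(enc)
# every sample from the corrupted delta onward is now wrong; the error
# propagates until the next reference sample resets the chain
```

This is why block size and re-initialization frequency must be matched to the channel error characteristics: each reference point stops the propagation.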

  13. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
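The core idea of the FL algorithm, causal adaptive prediction with lossless residual coding, can be sketched in miniature. The one-tap sign-LMS predictor below is a toy stand-in for the actual multi-band predictor of FL/CCSDS 123; the weight initialization and learning rate are illustrative assumptions.

```python
def _sign(v):
    return (v > 0) - (v < 0)

def residuals(samples, lr=0.01):
    """Adaptive one-tap predictor: transmit the first sample verbatim,
    then only the (typically small) prediction errors."""
    w, prev, res = 1.0, samples[0], [samples[0]]
    for s in samples[1:]:
        pred = round(w * prev)        # integer prediction, as integer codecs do
        res.append(s - pred)
        w += lr * _sign(s - pred) * _sign(prev)  # sign-sign LMS update
        prev = s
    return res

def reconstruct(res, lr=0.01):
    """Decoder mirrors the encoder's updates exactly, so coding is lossless."""
    w, out = 1.0, [res[0]]
    for e in res[1:]:
        s = round(w * out[-1]) + e
        w += lr * _sign(e) * _sign(out[-1])
        out.append(s)
    return out
```

Because the residuals are concentrated near zero for smooth hyperspectral data, an entropy coder can store them in far fewer bits than the raw samples.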

  14. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level of detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
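The pipeline above (sample a signed-distance function, then threshold small wavelet coefficients) can be illustrated in one dimension with the simplest wavelet, the Haar transform. This is a toy sketch, not the paper's multiresolution scheme:

```python
def haar_forward(x):
    """One level of the Haar transform: pairwise averages plus details."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out.extend((a + d, a - d))
    return out

# 1-D signed distance to the point 3.5 sampled on a regular grid
sdf = [i - 3.5 for i in range(8)]

avg, det = haar_forward(sdf)
kept = [d if abs(d) >= 1.0 else 0.0 for d in det]  # threshold small details
approx = haar_inverse(avg, kept)                   # bounded reconstruction error
```

Zeroed detail coefficients cost nothing to store, and the reconstruction error is bounded by the magnitude of the discarded coefficients, which is what makes the representation progressive.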

  15. Method of continuously producing compression molded coal

    SciTech Connect

    Yoshida, H.; Ishihara, N.; Kuwashima, S.

    1986-08-19

    This patent describes a method of producing a continuous cake of compression molded coal for chamber type coke ovens comprising steps of charging raw material coking coal into a molding box, and pressurizing the raw material coking coal with a pressing plate to obtain compression molded coal and to push the compression molded coal out of the molding box through an outlet. The improvement described here includes: the coking coal having a water content of more than 8.5% is charged into a chamber of the molding box at a side opposite the outlet, the pressing plate in the chamber is so advanced in the molding box to compression mold the coking coal into the preceding compression molded coal at a pressure no more than about 100 Kg/cm/sup 2/ so that the molded coal has a bulk density of at least 1.0 wet ton/m/sup 3/, and to push the molded coal in the molding box toward the outlet, the molded coal pressurized by the pressing plate partly remains in the molding box for supporting the coking coal freshly charged for the following cycle of the operation, and the freshly charged coal is pressed by the pressing plate so that the subsequent molded coal is combined with the preceding compression molded coal and the preceding molded coal is pushed out of the molding box whereby by repeating the serial steps of the operation, continuous cake of compression molded coal is produced.

  16. MAFCO: A Compression Tool for MAF Files

    PubMed Central

    Matos, Luís M. O.; Neves, António J. R.; Pratas, Diogo; Pinho, Armando J.

    2015-01-01

In the last decade, the cost of genomic sequencing has been decreasing so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be efficiently stored, because storage cost is not decreasing as fast as the cost of sequencing. In order to overcome this problem, the most popular general-purpose compression tool, gzip, is usually used. However, general-purpose tools were not specifically designed to compress this kind of data, and often fall short when the intention is to reduce the data size as much as possible. There are several compression algorithms available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, which contain alignments between entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from 34% to 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source code and binaries for several operating systems are freely available for non-commercial use at: http://bioinformatics.ua.pt/software/mafco. PMID:25816229
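The gains above are quoted relative to a reference compressor. The usual definition (assumed here; the paper may define it slightly differently) is the relative reduction in compressed size:

```python
def compression_gain(reference_bytes, tool_bytes):
    """Percent size reduction relative to the reference (e.g. gzip) output."""
    return 100.0 * (1.0 - tool_bytes / reference_bytes)
```

For example, a data set that gzip stores in 100 MB and a specialized tool in 66 MB corresponds to a 34% gain under this definition.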

  17. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires revaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the hay day of the desktop and assumed fast low latency file access. Other formats such as JPEG2000 provide for streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple to implement algorithm that enables it to be efficiently accessed using JavaScript. Combing this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  18. Multishock Compression Properties of Warm Dense Argon

    NASA Astrophysics Data System (ADS)

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-10-01

Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to the fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked states under the conditions of different experiments: ηi' increases with pressure in the lower density regime and, conversely, decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon are distributed in the warm dense regime.
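The two ratios used in the abstract are straightforward to compute. The densities below are illustrative stand-ins chosen to match the quoted ranges (the per-shock densities and the initial density rho0 are assumptions, not values from the paper):

```python
rho0 = 0.60                      # assumed initial argon density, g/cm^3
rho = [1.98, 3.40, 4.50, 5.28]   # hypothetical rho_1..rho_4 after each shock

# eta_i = rho_i / rho_0: cumulative compression relative to the initial state
eta = [r / rho0 for r in rho]

# eta'_i = rho_i / rho_(i-1): compression added by each successive shock
eta_rel = [b / a for a, b in zip([rho0] + rho, rho)]
```

With these numbers the cumulative ratio runs from 3.3 to 8.8, as in the abstract, while each successive shock contributes a progressively smaller relative compression.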

  19. Multishock Compression Properties of Warm Dense Argon.

    PubMed

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm(3) from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to the fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked states under the conditions of different experiments: ηi' increases with pressure in the lower density regime and, conversely, decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon are distributed in the warm dense regime. PMID:26515505

  20. Compression of Space for Low Visibility Probes.

    PubMed

    Born, Sabine; Krüger, Hannah M; Zimmermann, Eckart; Cavanagh, Patrick

    2016-01-01

    Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as perisaccadic compression of space (Ross et al., 1997). More recently, we have demonstrated that brief probes are attracted towards a visual reference when followed by a mask, even in the absence of saccades (Zimmermann et al., 2014a). Here, we ask whether spatial compression depends on the transient disruptions of the visual input stream caused by either a mask or a saccade. Both of these degrade the probe visibility but we show that low probe visibility alone causes compression in the absence of any disruption. In a first experiment, we varied the regions of the screen covered by a transient mask, including areas where no stimulus was presented and a condition without masking. In all conditions, we adjusted probe contrast to make the probe equally hard to detect. Compression effects were found in all conditions. To obtain compression without a mask, the probe had to be presented at much lower contrasts than with masking. Comparing mislocalizations at different probe detection rates across masking, saccades and low contrast conditions without mask or saccade, Experiment 2 confirmed this observation and showed a strong influence of probe contrast on compression. Finally, in Experiment 3, we found that compression decreased as probe duration increased both for masks and saccades although here we did find some evidence that factors other than simply visibility as we measured it contribute to compression. Our experiments suggest that compression reflects how the visual system localizes weak targets in the context of highly visible stimuli.

  1. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
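The select-above-threshold idea is independent of the particular transform. The sketch below substitutes a plain DFT for the patent's log Gabor transform (a deliberate simplification) and keeps only (log-magnitude, phase) pairs whose log magnitude clears a threshold:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def compress_spectrum(x, log_mag_threshold):
    """Keep (log-magnitude, phase) only where log-magnitude clears the threshold."""
    kept = {}
    for k, c in enumerate(dft(x)):
        mag = abs(c)
        if mag > 0.0 and math.log(mag) > log_mag_threshold:
            kept[k] = (math.log(mag), cmath.phase(c))
    return kept, len(x)

def expand_spectrum(kept, n):
    """Inverse transform of the transmitted (log-magnitude, phase) pairs."""
    X = [0j] * n
    for k, (logmag, ph) in kept.items():
        X[k] = math.exp(logmag) * cmath.exp(1j * ph)
    return idft(X)
```

For a signal dominated by a few strong spectral lines, only a handful of bins survive the threshold, which is where the compression comes from.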

  2. Compression of a bundle of light rays.

    PubMed

    Marcuse, D

    1971-03-01

    The performance of ray compression devices is discussed on the basis of a phase space treatment using Liouville's theorem. It is concluded that the area in phase space of the input bundle of rays is determined solely by the required compression ratio and possible limitations on the maximum ray angle at the output of the device. The efficiency of tapers and lenses as ray compressors is approximately equal. For linear tapers and lenses the input angle of the useful rays must not exceed the compression ratio. The performance of linear tapers and lenses is compared to a particular ray compressor using a graded refractive index distribution.
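The phase-space statement above can be made explicit for the one-dimensional, small-angle case. Liouville's theorem conserves the phase-space area of the bundle, so narrowing it magnifies the angles:

```latex
x_{\mathrm{out}}\,\theta_{\mathrm{out}} = x_{\mathrm{in}}\,\theta_{\mathrm{in}},
\qquad
x_{\mathrm{out}} = \frac{x_{\mathrm{in}}}{C}
\;\Longrightarrow\;
\theta_{\mathrm{out}} = C\,\theta_{\mathrm{in}}
```

Hence if output angles are capped at some theta{sub max}, only input rays with theta{sub in} <= theta{sub max}/C are useful, which is the abstract's statement (in normalized units) that the useful input angle must not exceed the compression ratio.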

  3. Calculation methods for compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.

    1976-01-01

Calculation procedures for non-reacting compressible two- and three-dimensional turbulent boundary layers were reviewed. Integral, transformation, and correlation methods, as well as finite difference solutions of the complete boundary layer equations, were summarized. Alternative numerical solution procedures were examined, and both mean field and mean turbulence field closure models were considered. Physics and related calculation problems peculiar to compressible turbulent boundary layers are described. A catalog of available solution procedures of the finite difference, finite element, and method of weighted residuals genre is included. Influences of compressibility, low Reynolds number, wall blowing, and pressure gradient upon mean field closure constants are reported.

  4. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  5. Compressive response of Kevlar/epoxy composites

    SciTech Connect

    Yeh, J.R.; Teply, J.L.

    1988-03-01

    A mathematical model is developed from the principle of minimum potential energy to determine the longitudinal compressive response of unidirectional fiber composites. A theoretical study based on this model is conducted to assess the influence of local fiber misalignment and the nonlinear shear deformation of the matrix. Numerical results are compared with experiments to verify this study; it appears that the predicted compressive response coincides well with experimental results. It is also shown that the compressive strength of Kevlar/epoxy is dominated by local shear failure. 12 references.

  6. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  7. Modulation compression for short wavelength harmonic generation

    SciTech Connect

    Qiang, J.

    2010-01-11

A laser modulator is used to seed free-electron lasers. In this paper, we propose a scheme to compress the initial laser modulation in the longitudinal phase space by using two opposite-sign bunch compressors and two opposite-sign energy chirpers. This scheme could potentially reduce the initial modulation wavelength by a factor of C and increase the energy modulation amplitude by a factor of C, where C is the compression factor of the first bunch compressor. Such a compressed energy modulation can be directly used to generate short wavelength current modulation with a large bunching factor.

  8. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
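The DCT-plus-quantization-matrix pipeline in the claim can be sketched in one dimension. The patent's visual-masking derivation of the matrix entries is omitted here; the matrix below is an arbitrary illustrative one that simply quantizes higher frequencies more coarsely.

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a 1-D signal."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[t] * math.cos(math.pi * (t + 0.5) * k / n) for t in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct_1d(X):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(X)
    out = []
    for t in range(n):
        s = 0.0
        for k in range(n):
            scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += scale * X[k] * math.cos(math.pi * (t + 0.5) * k / n)
        out.append(s)
    return out

def quantize(X, q):
    """Divide each coefficient by its quantization-matrix entry and round."""
    return [round(c / qe) for c, qe in zip(X, q)]

def dequantize(Q, q):
    return [v * qe for v, qe in zip(Q, q)]
```

Larger entries in q discard more information in the corresponding frequency band; choosing those entries from a perceptual masking model is exactly what ties the bit rate to a perceptual (rather than numerical) error.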

  9. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Cadwallader, L.C.

    2005-05-15

Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards: a wide variety of electrical power, pressurized air, and cooling water systems in use; crane and hoist loads; working at height; and handling compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  10. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Lee C. Cadwallader

    2004-09-01

Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards: a wide variety of electrical power, pressurized air, and cooling water systems in use; crane and hoist loads; working at height; and handling compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  11. Evolution Of Nonlinear Waves in Compressing Plasma

    SciTech Connect

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size {Delta} during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches {Delta}. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  12. Compressible homogeneous shear: Simulation and modeling

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.

    1992-01-01

    Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.

  13. An efficient compression scheme for bitmap indices

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH-compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH-compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH-compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH-compressed indices is much faster than with BBC-compressed indices, projection indices, and B-tree indices.
In addition, we also verified that the average query response time
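
The word alignment the abstract describes can be sketched as follows. This is a simplified illustration of the WAH idea, not the paper's implementation: assuming 32-bit words, each output word is either a literal holding 31 bitmap bits or a "fill" word encoding a run of identical all-0 or all-1 groups; padding and decoding details are omitted.

```python
# Simplified sketch of word-aligned hybrid (WAH) bitmap compression.
GROUP = 31  # payload bits per 32-bit word

def wah_encode(bits):
    """Compress a sequence of 0/1 bits into a list of 32-bit WAH words."""
    padded = list(bits) + [0] * (-len(bits) % GROUP)
    words = []
    for i in range(0, len(padded), GROUP):
        value = int("".join(map(str, padded[i:i + GROUP])), 2)
        if value in (0, (1 << GROUP) - 1):       # homogeneous group
            fill_bit = 1 if value else 0
            last = words[-1] if words else 0
            if (last >> 31) and ((last >> 30) & 1) == fill_bit:
                words[-1] += 1                   # extend the current run
                continue
            # Fill word: msb = 1, bit 30 = fill value, low 30 bits = count.
            words.append((1 << 31) | (fill_bit << 30) | 1)
        else:
            words.append(value)                  # literal word, msb = 0
    return words
```

Because every code unit is a whole machine word, logical AND/OR between two compressed bitmaps can proceed word-at-a-time, which is why WAH trades a little size for much faster CPU operations than byte-aligned schemes.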

  14. 3M Coban 2 Layer Compression Therapy: Intelligent Compression Dynamics to Suit Different Patient Needs

    PubMed Central

    Bernatchez, Stéphanie F.; Tucker, Joseph; Schnobrich, Ellen; Parks, Patrick J.

    2012-01-01

    Problem: Chronic venous insufficiency can lead to recalcitrant leg ulcers. Compression has been shown to be effective in healing these ulcers, but most products are difficult to apply and uncomfortable for patients, leading to inconsistent/ineffective clinical application and poor compliance. In addition, compression presents risks for patients with an ankle-brachial pressure index (ABPI) <0.8 because of the possibility of further compromising the arterial circulation. The ABPI is the ratio of systolic leg blood pressure (taken at ankle) to systolic arm blood pressure (taken above elbow, at brachial artery). This is measured to assess a patient's lower extremity arterial perfusion before initiating compression therapy.1 Solution: Using materials science, two-layer compression systems with controlled compression and a low profile were developed. These materials allow for a more consistent bandage application with better control of the applied compression, and their low profile is compatible with most footwear, increasing patient acceptance and compliance with therapy. The original 3M™ Coban™ 2 Layer Compression System is suited for patients with an ABPI ≥0.8; 3M™ Coban™ 2 Layer Lite Compression System can be used on patients with ABPI ≥0.5. New Technology: Both compression systems are composed of two layers that combine to create an inelastic sleeve conforming to the limb contour to provide a consistent proper pressure profile to reduce edema. In addition, they slip significantly less than other compression products and improve patient daily living activities and physical symptoms. Indications for Use: Both compression systems are indicated for patients with venous leg ulcers, lymphedema, and other conditions where compression therapy is appropriate. Caution: As with any compression system, caution must be used when mixed venous and arterial disease is present so as not to induce any damage. These products are not indicated when the ABPI is <0.5. PMID:24527315
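
The ABPI screening rule in the abstract can be written as a small helper: ABPI is ankle systolic pressure divided by brachial systolic pressure, and the stated thresholds (≥0.8 for the standard system, ≥0.5 for the Lite system, <0.5 contraindicated) select the therapy. The function name and return format are ours; this is an illustration of the arithmetic, not clinical advice.

```python
def compression_choice(ankle_systolic_mmhg, brachial_systolic_mmhg):
    """Return (ABPI, suggested option) per the thresholds stated above."""
    abpi = ankle_systolic_mmhg / brachial_systolic_mmhg
    if abpi >= 0.8:
        return abpi, "standard compression (e.g. Coban 2 Layer)"
    if abpi >= 0.5:
        return abpi, "reduced compression (e.g. Coban 2 Layer Lite)"
    return abpi, "compression contraindicated"
```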

  15. Slip velocity method for three-dimensional compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Barnwell, Richard W.; Wahls, Richard A.

    1988-01-01

    A slip velocity method for 2-D incompressible turbulent boundary layers was presented in AIAA Paper 88-0137. The inner part of the boundary layer was characterized by a law of the wall and a law of the wake, and the outer part was characterized by an arbitrary eddy viscosity model. In the present study for compressible flows, only a law of the wall is considered. The problem of 2-D compressible flow is treated first; then the extension to 3-D flow is addressed. A formulation for primitive variables is presented.

  16. A stable penalty method for the compressible Navier-Stokes equations. 1: Open boundary conditions

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.; Gottlieb, D.

    1994-01-01

    The purpose of this paper is to present asymptotically stable open boundary conditions for the numerical approximation of the compressible Navier-Stokes equations in three spatial dimensions. The treatment uses the conservation form of the Navier-Stokes equations and utilizes linearization and localization at the boundaries based on these variables. The proposed boundary conditions are applied through a penalty procedure, thus ensuring correct behavior of the scheme as the Reynolds number tends to infinity. The versatility of this method is demonstrated for the problem of a compressible flow past a circular cylinder.

  17. Athena3D: Flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hawley, John; Simon, Jake; Stone, James; Gardiner, Thomas; Teuben, Peter

    2015-05-01

    Written in FORTRAN, Athena3D, based on Athena (ascl:1010.014), is an implementation of a flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics. Features of the Athena3D code include compressible hydrodynamics and ideal MHD in one, two or three spatial dimensions in Cartesian coordinates; adiabatic and isothermal equations of state; 1st, 2nd or 3rd order reconstruction using the characteristic variables; and numerical fluxes computed using the Roe scheme. In addition, it offers the ability to add source terms to the equations and is parallelized based on MPI.
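
A minimal sketch of the flux-conservative (Godunov-type) update that codes like Athena3D generalize to MHD: each cell average changes by the difference of interface fluxes, shown here for 1-D linear advection with a first-order upwind flux and periodic boundaries. Athena3D itself adds characteristic reconstruction and Roe fluxes; this toy version only illustrates the conservative form.

```python
def godunov_step(u, dt, dx, a=1.0):
    """Advance cell averages one step: u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    n = len(u)
    # For a > 0 the upwind interface flux is F_{i+1/2} = a * u_i.
    flux = [a * u[i] for i in range(n)]
    return [u[i] - dt / dx * (flux[i] - flux[(i - 1) % n]) for i in range(n)]
```

Because the same interface flux is subtracted from one cell and added to its neighbor, the scheme conserves sum(u) exactly, which is the defining property of a flux-conservative method.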

  18. Compressibility effects in the Miami isopycnic coordinate ocean model

    NASA Astrophysics Data System (ADS)

    Sun, Shan

    Potential density referenced to sea level pressure (σ0) has shown its usefulness as a vertical coordinate in ocean models in many ways, but there are problems with σ0 coordinates in the deep ocean: σ0(z) can be multivalued, leading to coordinate folding, and σ0 surfaces can deviate from the so-called neutral surfaces, which are the surfaces along which turbulent lateral mixing takes place in a stratified medium. The reason for both of these problems is that most isopycnal models regard seawater as uniformly compressible. However, the effect of water temperature on compressibility cannot be ignored. In this study a two-pronged approach is taken to improve the model accuracy. First, since the effects of compressibility variation are proportional to the difference between the local and the reference pressure, we replace the model's traditional σ0 coordinate by σ2 (potential density referenced to 2000 dbar). This step eliminates many of the coordinate folding problems associated with σ0 and generally reduces the difference between coordinate and neutrally buoyant surfaces. Second, we split the compressibility coefficient into a pressure- and a temperature-dependent part and, recognizing that the former is dynamically passive, retain only the effect of the latter in the governing equations. This is accomplished by introducing a new variable called "active density," the density with the pressure-related compressibility removed. Therefore, σ2 is adopted as the vertical coordinate, but active density is used to express the seawater density within the layers. The above changes are applied in a near-global, 16-layer, 2° x 2° cos(lat.) Miami Isopycnic Coordinate Ocean Model (MICOM). The model is driven by observed atmospheric conditions. MICOM modified in this fashion produces realistic meridional mass and associated heat fluxes in the three major ocean basins. A realistic formation

  19. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.
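
The flag-separation idea can be illustrated in a few lines. This is a sketch of the concept, not the patented implementation: during LZSS-style encoding, the literal/pointer flag bits are accumulated in a separate buffer and appended after the token stream, so a decompressor can read literals and pointers with whole-byte instructions and touch individual bits only in the flag block. The window/match parameters and the 2-byte pointer format are arbitrary choices.

```python
def lzss_flags_compress(data, window=255, min_match=3, max_match=18):
    """Toy LZSS encoder that buffers flag bits separately from tokens."""
    flags, tokens = [], bytearray()
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # naive window search
            l = 0
            while l < max_match and i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= min_match:
            flags.append(1)                      # pointer token
            tokens += bytes([best_off, best_len])
            i += best_len
        else:
            flags.append(0)                      # literal token
            tokens.append(data[i])
            i += 1
    # Pack the flag bits and append them after the token stream.
    flag_bytes = bytearray()
    for k in range(0, len(flags), 8):
        chunk = flags[k:k + 8]
        b = 0
        for bit in chunk:
            b = (b << 1) | bit
        flag_bytes.append((b << (8 - len(chunk))) & 0xFF)
    return bytes(tokens), bytes(flag_bytes), len(flags)
```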

  20. Interactive calculation procedures for mixed compression inlets

    NASA Technical Reports Server (NTRS)

    Reshotko, Eli

    1983-01-01

    The proper design of engine nacelle installations for supersonic aircraft depends on a sophisticated understanding of the interactions between the boundary layers and the bounding external flows. The successful operation of mixed external-internal compression inlets depends significantly on the ability to closely control the operation of the internal compression portion of the inlet. This portion of the inlet is one where compression is achieved by multiple reflection of oblique shock waves and weak compression waves in a converging internal flow passage. However weak these shocks and waves may seem gas-dynamically, they are of sufficient strength to separate a laminar boundary layer and generally even strong enough for separation or incipient separation of the turbulent boundary layers. An understanding was developed of the viscous-inviscid interactions and of the shock wave boundary layer interactions and reflections.

  1. Efficient Quantum Information Processing via Quantum Compressions

    NASA Astrophysics Data System (ADS)

    Deng, Y.; Luo, M. X.; Ma, S. Y.

    2016-01-01

    Our purpose is to improve the quantum transmission efficiency and reduce the resource cost by quantum compressions. The lossless quantum compression is accomplished using invertible quantum transformations and applied to quantum teleportation and simultaneous transmission over quantum butterfly networks. The new schemes can greatly reduce the entanglement cost and partially resolve transmission conflicts over common links. Moreover, the local compression scheme is useful for approximate entanglement creation from pre-shared entanglements. This special task has not been addressed previously because of the quantum no-cloning theorem. Our scheme depends on local quantum compression and bipartite entanglement transfer. Simulations show that the success probability depends strongly on the minimal entanglement coefficient. These results may be useful in general quantum network communication.

  2. Compression asphyxia from a human pyramid.

    PubMed

    Tumram, Nilesh Keshav; Ambade, Vipul Namdeorao; Biyabani, Naushad

    2015-12-01

    In compression asphyxia, respiration is stopped by external forces on the body. It is usually due to an external force compressing the trunk, such as a heavy weight on the chest or abdomen, and is associated with internal injuries. In the present case, the victim was trapped and crushed under persons falling from a human pyramid formed for the "Dahi Handi" festival. There was neither any severe blunt force injury nor any significant pathological natural disease contributing to the cause of death. The victim was unable to remove himself from the situation because his cognitive responses and coordination were impaired due to alcohol intake. The victim died from asphyxia due to compression of his chest and abdomen. Compression asphyxia resulting from the collapse of a human pyramid, and the dynamics of its impact force in these circumstances, is very rare and has not, to the best of our knowledge, been reported previously.

  3. 3D MHD Simulations of Spheromak Compression

    NASA Astrophysics Data System (ADS)

    Stuber, James E.; Woodruff, Simon; O'Bryan, John; Romero-Talamas, Carlos A.; Darpa Spheromak Team

    2015-11-01

    The adiabatic compression of compact tori could lead to a compact and hence low cost fusion energy system. The critical scientific issues in spheromak compression relate both to confinement properties and to the stability of the configuration undergoing compression. We present results from the NIMROD code modified with the addition of magnetic field coils that allow us to examine the role of rotation on the stability and confinement of the spheromak (extending prior work for the FRC). We present results from a scan in initial rotation, from 0 to 100 km/s. We show that strong rotational shear (10 km/s over 1 cm) occurs. We compare the simulation results with analytic scaling relations for adiabatic compression. Work performed under DARPA grant N66001-14-1-4044.

  4. Super high compression of line drawing data

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.

    1976-01-01

    Models are described which can accurately represent the type of line drawings that occur in teleconferencing and transmission for remote classrooms and which permit considerable data compression. The objective was to encode these pictures in binary sequences of shortest length, but such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compressions in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed-font symbols for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be realized by taking a similar approach, or essentially zero-error reproducibility can be achieved but at a lower level of compression.

  5. Compression behavior of unidirectional fibrous composite

    NASA Technical Reports Server (NTRS)

    Sinclair, J. H.; Chamis, C. C.

    1982-01-01

    The longitudinal compression behavior of unidirectional fiber composites is investigated using a modified Celanese test method with thick and thin test specimens. The test data obtained are interpreted using the stress/strain curves from back-to-back strain gages, examination of fracture surfaces by scanning electron microscope, and predictive equations for distinct failure modes including fiber compression failure, Euler buckling, delamination, and flexure. The results show that the longitudinal compression fracture is induced by a combination of delamination, flexure, and fiber tier breaks. No distinct fracture surface characteristics can be associated with unique failure modes. An equation is described which can be used to extract the longitudinal compression strength knowing the longitudinal tensile and flexural strengths of the same composite system.

  6. Seneca Compressed Air Energy Storage (CAES) Project

    SciTech Connect

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  7. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
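
A sketch of the kind of universal noiseless coding that approaches like BARC build on: Rice coding, which writes a nonnegative integer as a unary quotient followed by k remainder bits, after zigzag-mapping signed pixel differences to nonnegative integers. In a block-adaptive coder the parameter k would be chosen per block; this standalone version is illustrative only and not the BARC algorithm itself.

```python
def zigzag(d):
    """Map a signed difference to a nonnegative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (d << 1) if d >= 0 else ((-d) << 1) - 1

def rice_encode(value, k):
    """Rice code of a nonnegative integer as a bit string."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, "0{}b".format(k)) if k else ""
    return "1" * q + "0" + remainder           # unary quotient, then remainder
```

Small k suits low-entropy difference data; the coder stays near the source entropy by adapting k as the statistics change.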

  8. Real time telemetry and data compression

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The generation of telemetry and data compression by the flight program was verified. The adequacy of flight program telemetry control and timing is proven by the analysis of simulation laboratory runs of past programs through the use of telemetry. The telemetry data are correctly issued using specific tags called process input/output (PIO) tags. Verification is accomplished by ensuring that a specific PIO tag and mode register setting identifies the correct parameter and that the data are properly scaled for subsequent ground station reduction. It was checked that the LVDC telemetry correctly adheres to the general requirements specified by the EDD. Data compression specifications are verified using compressed data from nominal flight simulations and from a series of perturbations designed to test data table overflows, data dump rates, and compression of data for occurrence events.

  9. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.

  10. All about compression: A literature review.

    PubMed

    de Carvalho, Magali Rezende; de Andrade, Isabelle Silveira; de Abreu, Alcione Matos; Leite Ribeiro, Andrea Pinto; Peixoto, Bruno Utzeri; de Oliveira, Beatriz Guitton Renaud Baptista

    2016-06-01

    Lower extremity ulcers represent a significant public health problem as they frequently progress to chronicity, significantly impact daily activities and comfort, and represent a huge financial burden to the patient and the health system. The aim of this review was to discuss the best approach for venous leg ulcers (VLUs). Online searches were conducted in Ovid MEDLINE, Ovid EMBASE, EBSCO CINAHL, and reference lists and official guidelines. Keywords considered for this review were VLU, leg ulcer, varicose ulcer, compressive therapy, compression, and stocking. A complete assessment of the patient's overall health should be performed by a trained practitioner, focusing on history of diabetes mellitus, hypertension, dietetic habits, medications, and practice of physical exercises, followed by a thorough assessment of both legs. Compressive therapy is the gold standard treatment for VLUs, and the ankle-brachial index should be measured in all patients before compression application. PMID:27210451

  11. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, C.B.; Hackel, L.A.; George, E.V.; Miller, J.L.; Krupke, W.F.

    1993-11-09

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44). 5 figs.

  12. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, Clifford B.; Hackel, Lloyd A.; George, Edward V.; Miller, John L.; Krupke, William F.

    1993-01-01

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).

  13. Unusual aetiology of malignant spinal cord compression.

    PubMed

    Boland, Jason; Rennick, Adrienne

    2013-06-01

    Malignant spinal cord compression (MSCC) is an oncological emergency requiring rapid diagnosis and treatment to prevent irreversible spinal cord injury and disability. A case is described in a 45-year-old male with renal cell carcinoma in which the presentation of the MSCC was atypical with principally proximal left leg weakness with no evidence of bone metastasis. This was due to an unusual aetiology of the MSCC as the renal carcinoma had metastasised to his left psoas muscle causing a lumbosacral plexopathy and infiltrated through the intervertebral disc spaces, initially causing left lateral cauda equina and upper lumbar cord compression, before complete spinal cord compression. This case illustrates the varied aetiology of MSCC and reinforces the importance of maintaining a high index of suspicion of the possibility of spinal cord compression. PMID:24644568

  14. SUPG Finite Element Simulations of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Kirk, Benjamin S.

    2006-01-01

    Streamline-Upwind Petrov-Galerkin (SUPG) finite element simulations of compressible flows are presented. The topics include: 1) Introduction; 2) SUPG Galerkin Finite Element Methods; 3) Applications; and 4) Bibliography.

  15. Relativistic laser pulse compression in magnetized plasmas

    SciTech Connect

    Liang, Yun; Sang, Hai-Bo; Wan, Feng; Lv, Chong; Xie, Bai-Song

    2015-07-15

    The self-compression of a weakly relativistic Gaussian laser pulse propagating in a magnetized plasma is investigated. The nonlinear Schrödinger equation, which describes the evolution of the laser pulse amplitude, is deduced and solved numerically. Pulse compression is observed for both left- and right-hand circularly polarized lasers. It is found that the compression velocity increases for left-hand circularly polarized laser fields and decreases for right-hand ones, an effect that strengthens as the external magnetic field is enhanced. We find that a 100 fs left-hand circularly polarized laser pulse is compressed in a magnetized (1757 T) plasma medium by more than ten times. These results indicate the possibility of generating particularly intense and short pulses.

  16. Fingerprint Compression Based on Sparse Representation.

    PubMed

    Shao, Guangqi; Wu, Yanping; A, Yong; Liu, Xiao; Guo, Tiande

    2014-02-01

    A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as sparse linear combinations of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new fingerprint image, its patches are represented over the dictionary by computing an l0-minimization, and the representation is then quantized and encoded. In this paper, we consider the effect of various factors on the compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust with respect to minutiae extraction.
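
The sparse-representation step can be sketched with a greedy (l0-style) pursuit: a patch x is approximated as a sparse combination of dictionary atoms, and it is such coefficients that a scheme like the one above would quantize and encode. The dictionary, patch, and sparsity level below are illustrative, and matching pursuit is a stand-in for the paper's l0-minimization.

```python
import numpy as np

def matching_pursuit(D, x, k):
    """Return coefficients a with at most k nonzeros such that x ~ D @ a.

    Assumes the columns (atoms) of D have unit norm.
    """
    x = np.asarray(x, dtype=float)
    residual = x.copy()
    a = np.zeros(D.shape[1])
    for _ in range(k):
        correlations = D.T @ residual          # match atoms against residual
        j = int(np.argmax(np.abs(correlations)))
        a[j] += correlations[j]                # update the best atom's weight
        residual = x - D @ a                   # recompute the residual
    return a
```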

  17. Principles of Digital Dynamic-Range Compression

    PubMed Central

    Kates, James M.

    2005-01-01

    This article provides an overview of dynamic-range compression in digital hearing aids. Digital technology is becoming increasingly common in hearing aids, particularly because of the processing flexibility it offers and the opportunity to create more-effective devices. The focus of the paper is on the algorithms used to build digital compression systems. Of the various approaches that can be used to design a digital hearing aid, this paper considers broadband compression, multi-channel filter banks, a frequency-domain compressor using the FFT, the side-branch design that separates the filtering operation from the frequency analysis, and the frequency-warped version of the side-branch approach that modifies the analysis frequency spacing to more closely match auditory perception. Examples of the compressor frequency resolution, group delay, and compression behavior are provided for the different design approaches. PMID:16012704
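
The static input/output rule at the core of the compression systems surveyed above can be sketched in a few lines: below the compression threshold the gain is linear; above it, every `ratio` dB of input growth yields 1 dB of output growth. Parameter names and values here are illustrative, not a specific hearing-aid design.

```python
def output_level_db(input_db, threshold_db=50.0, ratio=3.0, linear_gain_db=20.0):
    """Static compression curve: output level (dB) for a given input level (dB)."""
    if input_db <= threshold_db:
        return input_db + linear_gain_db       # linear region
    # Compression region: input/output slope flattens from 1 to 1/ratio.
    return threshold_db + linear_gain_db + (input_db - threshold_db) / ratio
```

With these settings a 40 dB input maps to 60 dB while an 80 dB input maps to 80 dB, squeezing a 40 dB input range into 20 dB of output, which is the purpose of dynamic-range compression.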

  18. Pulse power applications of flux compression generators

    SciTech Connect

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.

    1981-01-01

    Characteristics are presented for two different types of explosive driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources.

  19. A study of compressibility and compactibility of directly compressible tableting materials containing tramadol hydrochloride.

    PubMed

    Mužíková, Jitka; Kubíčková, Alena

    2016-09-01

    The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50 % concentrations, the lubricant being magnesium stearate in a 1 % concentration. Compressibility is evaluated by means of the energy profile of the compression process, and compactibility by the tensile strength of tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders.
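
Compactibility above is assessed via tablet tensile strength. For a flat-faced cylindrical tablet broken diametrically, the standard relation (Fell and Newton) is sigma = 2F / (pi * d * t), giving MPa when force is in newtons and diameter and thickness are in millimetres. The numbers below are illustrative, not data from the study.

```python
import math

def tensile_strength_mpa(breaking_force_n, diameter_mm, thickness_mm):
    """Diametral-compression tensile strength of a cylindrical tablet, in MPa."""
    return 2 * breaking_force_n / (math.pi * diameter_mm * thickness_mm)
```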

  20. Progressive compression versus graduated compression for the management of venous insufficiency.

    PubMed

    Shepherd, Jan

    2016-09-01

    Venous leg ulceration (VLU) is a chronic condition associated with chronic venous insufficiency (CVI), whose most frequent complication is recurrence of ulceration after healing. Traditionally, graduated compression therapy has been shown to increase healing rates and to reduce recurrence of VLU. Graduated compression arises because the circumference of the limb is narrower at the ankle, producing a higher pressure there than at the wider calf, where the pressure is lower. This phenomenon is explained by the principle known as Laplace's Law. Recently, the view that compression therapy must provide a graduated pressure gradient has been challenged. However, few studies so far have focused on the potential benefits of progressive compression, where the pressure profile is inverted. This article examines the contemporary concept that progressive compression may be as effective as traditional graduated compression therapy for the management of CVI. PMID:27594309
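
Laplace's Law as applied to bandaging makes the graduated profile concrete: sub-bandage pressure is proportional to bandage tension and number of layers, and inversely proportional to limb circumference and bandage width. The constant 4630 is the commonly quoted conversion factor giving pressure in mmHg when tension is in kgf and circumference and width are in cm; the limb measurements below are illustrative.

```python
def sub_bandage_pressure(tension_kgf, layers, circumference_cm, width_cm):
    """Sub-bandage pressure in mmHg (modified Laplace formula for bandaging)."""
    return tension_kgf * layers * 4630 / (circumference_cm * width_cm)

# Constant application tension yields a graduated profile automatically:
ankle = sub_bandage_pressure(1.0, 2, 22, 10)   # narrower ankle
calf = sub_bandage_pressure(1.0, 2, 36, 10)    # wider calf
```

With the same tension, the narrower ankle sees roughly 42 mmHg versus about 26 mmHg at the calf, the ankle-to-calf gradient that progressive compression deliberately inverts.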

  1. Current density compression of intense ion beams

    NASA Astrophysics Data System (ADS)

    Sefkow, Adam Bennett

    Current density compression of intense ion beams in space and time is required for heavy ion fusion, in order to achieve the necessary intensities to implode an inertial confinement fusion target. Longitudinal compression to high current in a short pulse is achieved by imposing a velocity tilt upon the space-charge-dominated charge bunch, and a variety of means exist for simultaneous transverse focusing to a coincident focal plane. Compression to the desired levels requires sufficient neutralization of the beam by a pre-formed plasma during final transport. The physics of current density compression is studied in scaled experiments relevant for the operating regime of a heavy ion driver, and related theory and advanced particle-in-cell simulations provide valuable insight into the physical and technological limitations involved. A fast Faraday cup measures longitudinal compression ratios greater than 50 with pulse durations less than 5 ns, in excellent agreement with reduced models and sophisticated simulations, which account for many experimental parameters and effects. The detailed physics of achieving current density compression in the laboratory is reviewed. Quantitative examples explore the dependency of longitudinal compression on effects such as the finite-size acceleration gap, voltage waveform accuracy, variation in initial beam temperature, pulse length, intended fractional velocity tilt, and energy uncertainty, as well as aberration within focusing elements and plasma neutralization processes. In addition, plasma evolution in experimental sources responsible for the degree of beam neutralization is studied numerically, since compression stagnation occurs under inadequate neutralization conditions, which may excite nonlinear collective excitations due to beam-plasma interactions. The design of simultaneous focusing experiments using both existing and upgraded hardware is provided, and parametric variations important for compression physics are

  2. Ultralight and highly compressible graphene aerogels.

    PubMed

    Hu, Han; Zhao, Zongbin; Wan, Wubo; Gogotsi, Yury; Qiu, Jieshan

    2013-04-18

    Chemically converted graphene aerogels with ultralight density and high compressibility are prepared by diamine-mediated functionalization and assembly, followed by microwave irradiation. The resulting graphene aerogels with density as low as 3 mg cm⁻³ show excellent resilience and can completely recover after more than 90% compression. The ultralight graphene aerogels possessing high elasticity are promising as compliant and energy-absorbing materials. PMID:23418081

  3. Nonlinear compressions in merging plasma jets

    SciTech Connect

    Messer, S.; Case, A.; Wu, L.; Brockington, S.; Witherspoon, F. D.

    2013-03-15

    We investigate the dynamics of merging supersonic plasma jets using an analytic model. The merging structures exhibit supersonic, nonlinear compressions which may steepen into full shocks. We estimate the distance necessary to form such shocks and the resulting jump conditions. These theoretical models are compared to experimental observations and simulated dynamics. We also use those models to extrapolate behavior of the jet-merging compressions in a Plasma Jet Magneto-Inertial Fusion reactor.
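The "resulting jump conditions" for a fully formed shock are, in the ideal-gas case, the standard Rankine-Hugoniot relations. As a generic illustration (not the authors' model, and with an assumed adiabatic index), the post-shock state follows directly from the upstream Mach number:

```python
# Illustrative sketch: Rankine-Hugoniot jump conditions across a normal
# shock in an ideal gamma-law gas. gamma = 5/3 is an assumed value for a
# monatomic plasma; this is not the paper's specific jet-merging model.

def shock_jumps(mach, gamma=5.0 / 3.0):
    """Return (density ratio, pressure ratio, temperature ratio)
    across a normal shock with upstream Mach number `mach`."""
    m2 = mach * mach
    rho_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    p_ratio = (2.0 * gamma * m2 - (gamma - 1.0)) / (gamma + 1.0)
    t_ratio = p_ratio / rho_ratio  # ideal gas: T2/T1 = (p2/p1)/(rho2/rho1)
    return rho_ratio, p_ratio, t_ratio

# A Mach-3 merger compresses the gas by about a factor of 3 in density.
rho, p, t = shock_jumps(3.0)
```

Note the strong-shock limit: the density ratio saturates at (γ+1)/(γ−1), i.e. 4 for γ = 5/3, while the pressure ratio grows without bound.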

  4. Stability of compressible Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Chow, Chuen-Yen

    1991-01-01

    Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.

  5. The temporal scaling laws of compressible turbulence

    NASA Astrophysics Data System (ADS)

    Sun, Bohua

    2016-08-01

    This paper proposes temporal scaling laws of the density-weighted energy spectrum for compressible turbulence in terms of dissipation rate, frequency and the Mach number. The study adopts the incomplete similarity theory in the scaling analysis of compressible turbulence motion. The investigation shows that the temporal Eulerian and Lagrangian energy spectra approach the −5/3 and −2 power laws as the Mach number M tends to unity and infinity, respectively.
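A power-law exponent such as the −5/3 above is, in practice, estimated as a slope in log-log space. A minimal self-check on synthetic data (illustrative only, not the paper's analysis):

```python
import math

# Recover a spectral exponent by ordinary least squares in log-log space
# from an ideal synthetic spectrum E(omega) ~ omega**(-5/3), the
# near-sonic limit quoted in the abstract. Frequencies are arbitrary.

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    var = sum((a - mx) ** 2 for a in lx)
    return cov / var

omega = [10 ** (0.1 * k) for k in range(1, 31)]   # 30 log-spaced frequencies
spectrum = [w ** (-5.0 / 3.0) for w in omega]
slope = loglog_slope(omega, spectrum)             # recovers -5/3
```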

  6. Lossy compression of weak lensing data

    SciTech Connect

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M.

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ −4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
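The core of a square-root pixel coder can be sketched in a few lines. This is a simplified illustration of the idea, with an assumed scale factor, not the exact published algorithm: store roughly 2·√x per pixel, so the quantization step after decoding stays below the Poisson noise √x.

```python
import math

# Simplified square-root pixel coding (illustration of the idea behind
# Bernstein et al.-style lossy compression; scale factor 2 is an assumed
# choice, not the published parameterization). Quantizing 2*sqrt(x) keeps
# the round-trip error well under the photon shot noise sqrt(x).

def sqrt_encode(pixel):
    return int(round(2.0 * math.sqrt(pixel)))

def sqrt_decode(code):
    return (code / 2.0) ** 2

# Round trip on a bright pixel: the absolute error grows with the signal
# but remains a small fraction of the shot noise sqrt(x).
x = 10007
err = abs(sqrt_decode(sqrt_encode(x)) - x)
```

The encoded values need far fewer bits than the raw 16-bit pixels, which a conventional lossless coder then squeezes further.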

  7. Selfsimilar Spherical Compression Waves in Gas Dynamics

    NASA Astrophysics Data System (ADS)

    Meyer-ter-Vehn, J.; Schalk, C.

    1982-08-01

    A synopsis of different selfsimilar spherical compression waves is given, pointing out their fundamental importance for the gas dynamics of inertial confinement fusion. Strong blast waves, various forms of isentropic compression waves, imploding shock waves and the solution for non-isentropic collapsing hollow spheres are included. A classification is given in terms of six singular points which characterise the different solutions and the relations between them. The presentation closely follows Guderley's original work on imploding shock waves.

  8. Fracture in compression of brittle solids

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fracture of brittle solids in monotonic compression is reviewed from both the mechanistic and phenomenological points of view. The fundamental theoretical developments based on the extension of pre-existing cracks in general multiaxial stress fields are recognized as explaining extrinsic behavior where a single crack is responsible for the final failure. In contrast, shear faulting in compression is recognized to be the result of an evolutionary localization process involving en echelon action of cracks and is termed intrinsic.

  9. An image-data-compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1981-01-01

    Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into concise format for transmission to ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing image. Similar technique could be applied to other types of recorded data to cut costs of transmitting, storing, distributing, and interpreting complex information.
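A toy sketch of the cluster-based idea: quantize pixel values to a small set of cluster centroids and transmit only the centroid table plus per-pixel indices. Everything here (1-D Lloyd's k-means, the parameters) is an illustrative stand-in, not the actual CCA used for Landsat data.

```python
# Toy cluster-based compression in the spirit of a cluster compression
# algorithm: the "compressed" form is a short centroid table plus small
# integer labels. Illustrative only; the real CCA extracts richer
# image features before coding.

def kmeans_1d(values, k, iters=20):
    # Seed with k evenly spaced sorted values, then run Lloyd iterations.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        buckets = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            buckets[i].append(v)
        centroids = [sum(b) / len(b) if b else c
                     for b, c in zip(buckets, centroids)]
    return centroids

def cluster_compress(pixels, k=4):
    centroids = kmeans_1d(pixels, k)
    labels = [min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
              for p in pixels]
    return centroids, labels            # compact representation

def cluster_decompress(centroids, labels):
    return [centroids[i] for i in labels]

pixels = [0, 1, 10, 11, 20, 21, 30, 31]
cents, labels = cluster_compress(pixels, k=4)
recon = cluster_decompress(cents, labels)
```

Each label needs only log2(k) bits, which is where the bandwidth reduction comes from; the price is the quantization error visible in `recon`.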

  10. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M.

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ −4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.

  11. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
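The backward-adaptive principle can be shown in miniature: the coder picks a predictor (temporal or spatial) using only residuals already seen, so the decoder can repeat the choice with no side information. This sketch is far simpler than the paper's coder; the per-frame mode switch and the two predictors are invented for illustration.

```python
# Minimal backward-adaptive lossless predictive coding (illustrative).
# Frames are lists of ints. Each frame is predicted either temporally
# (same pixel, previous frame) or spatially (previous pixel, same frame);
# the mode for the next frame is chosen from the frame just coded, so
# encoder and decoder stay in lockstep without transmitting the mode.

def _costs(frame, prev):
    cost_t = sum(abs(p - prev[i]) for i, p in enumerate(frame))
    cost_s = sum(abs(p - (frame[i - 1] if i else 0)) for i, p in enumerate(frame))
    return cost_t, cost_s

def encode(frames):
    mode, prev, out = "S", None, []
    for f in frames:
        res = []
        for i, p in enumerate(f):
            pred = prev[i] if (mode == "T" and prev is not None) \
                else (f[i - 1] if i else 0)
            res.append(p - pred)
        out.append(res)
        if prev is not None:
            ct, cs = _costs(f, prev)
            mode = "T" if ct <= cs else "S"
        prev = f
    return out

def decode(residuals):
    mode, prev, out = "S", None, []
    for res in residuals:
        f = []
        for i, r in enumerate(res):
            pred = prev[i] if (mode == "T" and prev is not None) \
                else (f[i - 1] if i else 0)
            f.append(r + pred)
        out.append(f)
        if prev is not None:
            ct, cs = _costs(f, prev)
            mode = "T" if ct <= cs else "S"
        prev = f
    return out

frames = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 3, 4, 6]]
restored = decode(encode(frames))     # bit-exact reconstruction
```

In a real coder the small residuals would then go to an entropy coder; the point here is only the zero-side-information adaptation.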

  12. Method and apparatus for signal compression

    DOEpatents

    Carangelo, Robert M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original.
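One way to model the analog stage is that the cubic feedback makes the output satisfy x = y + a·y³, so large inputs are compressed (the cubic term absorbs most of the growth), and digital decompression is the exact forward evaluation. The constant `a` and the Newton solver below are illustrative assumptions, not the patent's circuit values.

```python
# Hedged sketch of cubic compress/expand. Assumed model: the analog
# compressor outputs y solving x = y + a*y**3; the digital expander
# simply evaluates x = y + a*y**3. `a` = 0.5 is arbitrary.

def compress(x, a=0.5, iters=50):
    """Solve x = y + a*y**3 for y by Newton's method (monotone cubic,
    derivative >= 1, so the iteration is well behaved)."""
    y = x
    for _ in range(iters):
        f = y + a * y ** 3 - x
        fp = 1.0 + 3.0 * a * y * y      # never zero
        y -= f / fp
    return y

def decompress(y, a=0.5):
    return y + a * y ** 3

# Large amplitudes shrink; small ones pass almost unchanged.
big = compress(10.0)        # |big| well below 10
small = compress(0.1)       # close to 0.1
```

Round-tripping through `decompress(compress(x))` recovers the signal, which mirrors the digitize-then-digitally-decompress step in the abstract.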

  13. Mediastinal paraganglioma causing spinal cord compression.

    PubMed Central

    Reyes, M G; Fresco, R; Bruetman, M E

    1977-01-01

    An invasive paraganglioma of the posterior mediastinum caused spinal cord compression in a 31-year-old woman. Electron microscopic examination of the paraganglioma invading the epidural space revealed numerous dense-cored granules in the cytoplasm of the tumour cells. We report this case to present the ultrastructure of mediastinal paraganglioma, and to call attention to an unusual cause of spinal cord compression. PMID:886352

  14. Digital breast tomosynthesis with minimal breast compression

    NASA Astrophysics Data System (ADS)

    Scaduto, David A.; Yang, Min; Ripton-Snyder, Jennifer; Fisher, Paul R.; Zhao, Wei

    2015-03-01

    Breast compression is utilized in mammography to improve image quality and reduce radiation dose. Lesion conspicuity is improved by reducing scatter effects on contrast and by reducing the superposition of tissue structures. However, patient discomfort due to breast compression has been cited as a potential cause of noncompliance with recommended screening practices. Further, compression may also occlude blood flow in the breast, complicating imaging with intravenous contrast agents and preventing accurate quantification of contrast enhancement and kinetics. Previous studies have investigated reducing breast compression in planar mammography and digital breast tomosynthesis (DBT), though this typically comes at the expense of degradation in image quality or increase in mean glandular dose (MGD). We propose to optimize the image acquisition technique for reduced compression in DBT without compromising image quality or increasing MGD. A zero-frequency signal-difference-to-noise ratio model is employed to investigate the relationship between tube potential, SDNR and MGD. Phantom and patient images are acquired on a prototype DBT system using the optimized imaging parameters and are assessed for image quality and lesion conspicuity. A preliminary assessment of patient motion during DBT with minimal compression is presented.

  15. Normal and Time-Compressed Speech

    PubMed Central

    Lemke, Ulrike; Kollmeier, Birger; Holube, Inga

    2016-01-01

    Short-term and long-term learning effects were investigated for the German Oldenburg sentence test (OLSA) using original and time-compressed fast speech in noise. Normal-hearing and hearing-impaired participants completed six lists of the OLSA in five sessions. Two groups of normal-hearing listeners (24 and 12 listeners) and two groups of hearing-impaired listeners (9 listeners each) performed the test with original or time-compressed speech. In general, original speech resulted in better speech recognition thresholds than time-compressed speech. Thresholds decreased with repetition for both speech materials. Confirming earlier results, the largest improvements were observed within the first measurements of the first session, indicating a rapid initial adaptation phase. The improvements were larger for time-compressed than for original speech. The novel results on long-term learning effects when using the OLSA indicate a longer phase of ongoing learning, especially for time-compressed speech, which seems to be limited by a floor effect. In addition, for normal-hearing participants, no complete transfer of learning benefits from time-compressed to original speech was observed. These effects should be borne in mind when inviting listeners repeatedly, for example, in research settings.

  16. Mining, compressing and classifying with extensible motifs

    PubMed Central

    Apostolico, Alberto; Comin, Matteo; Parida, Laxmi

    2006-01-01

    Background Motif patterns of maximal saturation emerged originally in contexts of pattern discovery in biomolecular sequences and have recently proven a valuable notion also in the design of data compression schemes. Informally, a motif is a string of intermittently solid and wild characters that recurs more or less frequently in an input sequence or family of sequences. Motif discovery techniques and tools tend to be computationally imposing, however, special classes of "rigid" motifs have been identified of which the discovery is affordable in low polynomial time. Results In the present work, "extensible" motifs are considered such that each sequence of gaps comes endowed with some elasticity, whereby the same pattern may be stretched to fit segments of the source that match all the solid characters but are otherwise of different lengths. A few applications of this notion are then described. In applications of data compression by textual substitution, extensible motifs are seen to bring savings on the size of the codebook, and hence to improve compression. In germane contexts, in which compressibility is used in its dual role as a basis for structural inference and classification, extensible motifs are seen to support unsupervised classification and phylogeny reconstruction. Conclusion Off-line compression based on extensible motifs can be used advantageously to compress and classify biological sequences. PMID:16722593
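The "elastic gap" idea maps naturally onto bounded repetition in regular expressions: solid characters are fixed, and each gap carries a length range, so one pattern stretches to fit segments of different lengths. The motif encoding below is invented for this sketch and is not the paper's notation.

```python
import re

# Illustration of an extensible motif as a regex: literal "solid" blocks
# separated by wildcard gaps with per-gap elasticity (min,max). The
# helper and its argument format are hypothetical.

def motif_to_regex(solids, gaps):
    """solids: literal strings; gaps: (min,max) wildcard run lengths
    between consecutive solid blocks."""
    parts = [re.escape(solids[0])]
    for (lo, hi), s in zip(gaps, solids[1:]):
        parts.append(".{%d,%d}" % (lo, hi))
        parts.append(re.escape(s))
    return "".join(parts)

# Motif: TA, gap of 1-3, GC, gap of 0-2, A  ->  "TA.{1,3}GC.{0,2}A"
pattern = motif_to_regex(["TA", "GC", "A"], [(1, 3), (0, 2)])
hits = [m.start() for m in re.finditer(pattern, "TAXGCATTTAXXXGCGA")]
```

The same pattern matches a 6-character segment at one site and a longer one at another, which is exactly the "stretch to fit" behavior described above; in the compression setting, each such occurrence can be replaced by a reference to the motif.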

  17. Magnetized Plasma Compression for Fusion Energy

    NASA Astrophysics Data System (ADS)

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David

    2013-10-01

    Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to unmagnetized inertial fusion energy (IFE)--to microseconds, centimeters vs nanoseconds, sub-millimeter. MPC greatly reduces the required confinement time relative to MFE--to microseconds vs minutes. Proof of principle can be demonstrated or refuted using high current pulsed power driven compression of magnetized plasmas using magnetic pressure driven implosions of metal shells, known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for unmagnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed power driver, if feasible, would require engineering development for transient, rapidly replaceable transmission lines such as envisioned by Sandia National Laboratories. Supported by DOE-OFES.

  18. Aging and compressibility of municipal solid wastes.

    PubMed

    Chen, Y M; Zhan, Tony L T; Wei, H Y; Ke, H

    2009-01-01

    The expansion of a municipal solid waste (MSW) landfill requires the ability to predict settlement behavior of the existing landfill. The practice of using a single compressibility value when performing a settlement analysis may lead to inaccurate predictions. This paper gives consideration to changes in the mechanical compressibility of MSW as a function of the fill age of MSW as well as the embedding depth of MSW. Borehole samples representative of various fill ages were obtained from five boreholes drilled to the bottom of the Qizhishan landfill in Suzhou, China. Thirty-one borehole samples were used to perform confined compression tests. Waste composition and volume-mass properties (i.e., unit weight, void ratio, and water content) were measured on all the samples. The test results showed that the compressible components of the MSW (i.e., organics, plastics, paper, wood and textiles) decreased with an increase in the fill age. The in situ void ratio of the MSW was shown to decrease with depth into the landfill. The compression index, Cc, was observed to decrease from 1.0 to 0.3 with depth into the landfill. Settlement analyses were performed on the existing landfill, demonstrating that the variation of MSW compressibility with fill age or depth should be taken into account in the settlement prediction. PMID:18430560
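The practical consequence of a depth-varying Cc can be seen with the standard one-dimensional primary compression formula, S = Cc/(1+e0) · H · log10(σ'1/σ'0). The layer thickness, void ratio, and stresses below are invented for illustration; only the Cc range 1.0 to 0.3 comes from the abstract.

```python
import math

# Back-of-the-envelope primary compression settlement using the
# compression index. All inputs except the Cc range are hypothetical.

def primary_settlement(cc, e0, thickness, sigma0, sigma1):
    """1-D settlement (m) of a layer of `thickness` (m) when effective
    vertical stress rises from sigma0 to sigma1 (kPa)."""
    return cc / (1.0 + e0) * thickness * math.log10(sigma1 / sigma0)

# Fresh waste (Cc ~ 1.0) vs old, deep waste (Cc ~ 0.3), same 10 m layer,
# same doubling of effective stress (50 -> 100 kPa), assumed e0 = 2.0.
s_fresh = primary_settlement(1.0, 2.0, 10.0, 50.0, 100.0)
s_old = primary_settlement(0.3, 2.0, 10.0, 50.0, 100.0)
```

The fresh layer settles more than three times as much as the aged one under the identical load step, which is why a single compressibility value misleads whole-landfill settlement predictions.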

  19. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
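The gain from differencing is easy to demonstrate with synthetic data and a conventional lossless coder (zlib here; the data are illustrative bytes, not thoracic x rays): a subject that differs from its registered reference in only a few places yields a mostly-zero difference image that compresses far better, and adding the difference back reconstructs the subject exactly.

```python
import zlib

# Sketch of differencing-before-compression. The "images" are synthetic
# byte strings; real use would difference registered, modeled images.

reference = bytes(range(256)) * 64                 # 16 KiB reference
subject = bytearray(reference)
for pos in (100, 5000, 12000):                     # a few local changes
    subject[pos] = (subject[pos] + 7) % 256
subject = bytes(subject)

# Modulo-256 difference image: almost entirely zeros.
difference = bytes((s - r) % 256 for s, r in zip(subject, reference))

direct = len(zlib.compress(subject, 9))
differenced = len(zlib.compress(difference, 9))    # smaller payload

# Lossless reconstruction at the receiver: reference + difference.
restored = bytes((d + r) % 256 for d, r in
                 zip(zlib.decompress(zlib.compress(difference, 9)), reference))
```

Only the compressed difference needs transmitting or storing; the receiver holds the standard reference and replenishes the rest.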

  20. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.
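The volumetric strain mentioned above is the divergence of the displacement field, div(u) = ∂u_x/∂x + ∂u_y/∂y + ∂u_z/∂z, which can be estimated from a 3D wave-field by finite differences. The grid size, spacing, and linear test field below are arbitrary choices for this sketch (and real MRE data would add the noise problem the abstract highlights).

```python
# Volumetric strain as the divergence of a displacement field sampled on
# an N^3 grid, via central differences on interior points. Illustrative
# parameters; not an MRE reconstruction pipeline.

def divergence(u, h):
    """u[c][i][j][k]: component c of displacement; h: grid spacing.
    Returns div(u) on the (N-2)^3 interior points."""
    n = len(u[0])
    div = [[[0.0] * (n - 2) for _ in range(n - 2)] for _ in range(n - 2)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            for k in range(1, n - 1):
                dux = (u[0][i + 1][j][k] - u[0][i - 1][j][k]) / (2 * h)
                duy = (u[1][i][j + 1][k] - u[1][i][j - 1][k]) / (2 * h)
                duz = (u[2][i][j][k + 1] - u[2][i][j][k - 1]) / (2 * h)
                div[i - 1][j - 1][k - 1] = dux + duy + duz
    return div

# Linear field u = (a*x, b*y, c*z): volumetric strain is a + b + c.
n, h, (a, b, c) = 6, 0.5, (0.01, 0.02, -0.005)
u = [
    [[[a * i * h for k in range(n)] for j in range(n)] for i in range(n)],
    [[[b * j * h for k in range(n)] for j in range(n)] for i in range(n)],
    [[[c * k * h for k in range(n)] for j in range(n)] for i in range(n)],
]
vol = divergence(u, h)     # every interior value equals a + b + c
```

The P-wave modulus reconstruction then needs further derivatives of this field, which is why noise sensitivity dominates in practice.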

  1. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  2. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of registered modeled subject image with a modeled reference image forms a differential image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three dimensional model, which three dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  3. Compressing bitmap indexes for faster search operations

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code(BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
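The word-aligned idea can be sketched compactly: split the bitmap into 31-bit groups, collapse runs of identical all-0 or all-1 groups into one "fill" token, and keep mixed groups as "literal" tokens, so a logical operation can consume a whole run at once. This is an illustration of the principle only; the real WAH packs these tokens into 32-bit machine words.

```python
# Simplified word-aligned run-length coding in the spirit of WAH.
# Tokens: ("fill", bit, ngroups) for runs of identical 31-bit groups,
# ("literal", bits) for mixed groups. Illustrative, not the real layout.

GROUP = 31

def wah_encode(bits):
    words = []
    for g in range(0, len(bits), GROUP):
        group = bits[g:g + GROUP]
        if len(group) < GROUP:
            group = group + [0] * (GROUP - len(group))   # pad last group
        if all(b == group[0] for b in group):
            if words and words[-1][0] == "fill" and words[-1][1] == group[0]:
                words[-1] = ("fill", group[0], words[-1][2] + 1)
            else:
                words.append(("fill", group[0], 1))
        else:
            words.append(("literal", group))
    return words

def wah_decode(words):
    bits = []
    for w in words:
        if w[0] == "fill":
            bits.extend([w[1]] * (GROUP * w[2]))
        else:
            bits.extend(w[1])
    return bits                     # length is padded to a multiple of 31

# A sparse-ish bitmap: long 0-run, a mixed patch, long 1-run.
bitmap = [0] * 200 + [1, 0, 1, 1] + [1] * 300
encoded = wah_encode(bitmap)        # 4 tokens instead of 17 raw groups
```

Because fills carry a group count, an AND or OR over two encoded bitmaps can skip entire runs per token, which is the source of WAH's speed advantage over byte-aligned schemes.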

  4. SPH simulation of high density hydrogen compression

    NASA Astrophysics Data System (ADS)

    Ferrel, R.; Romero, V.

    1998-07-01

    The density dependence of the electronic energy band gap of hydrogen has been studied with respect to the insulator-metal (IM) transition. The valence conduction band gap of solid hydrogen is about 15eV at zero pressure, therefore very high pressures are required to close the gap and achieve metallization. We propose to investigate the degree to which shockless compression of hydrogen can be maintained at low temperature (close to that of a cold isentrope) and to verify whether metallization can be achieved. Multistage compression will be driven by energetic materials in a cylindrical implosion system, in which we expect a slow compression rate that will maintain the low temperature in the isentropic compression. It is hoped that pressures on the order of 100 Mbar can be achieved while maintaining low temperatures. In order to better understand this multistage compression, a smooth particle hydrodynamics (SPH) analysis has been performed. Since the SPH technique does not use a grid structure it is well suited to analyzing spatial deformation processes. This analysis will be used to improve the design of possible multistage compression devices.

  5. SPH Simulation of High Density Hydrogen Compression

    NASA Astrophysics Data System (ADS)

    Ferrel, R.; Romero, Van D.

    1997-07-01

    The density dependence of the electronic energy band gap of hydrogen has been studied with respect to the insulator-metal (IM) transition. The valence conduction band gap of solid hydrogen is about 15eV at zero pressure, therefore very high pressures are required to close the gap and achieve metallization. We are planning to investigate the degree to which shockless compression of hydrogen can be maintained at low temperature (close to that of a cold isentrope) and to explore the possibility of achieving metallization. Multistage compression will be driven by energetic materials in a cylindrical implosion system, in which we expect a slow compression rate that will maintain the low temperature in the isentropic compression. It is hoped that pressures of the order of 100 Mbar can be achieved while maintaining low temperatures. In order to understand this multistage compression better, a smooth particle hydrodynamics (SPH) analysis has been performed. Since the SPH technique uses a gridless structure it is well suited to analyzing spatial deformation processes. This paper presents the analysis which will be used to improve the design of possible multistage compression devices.

  6. Static compression of porous dust aggregates

    NASA Astrophysics Data System (ADS)

    Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji

    2013-07-01

    Understanding the structural evolution of dust aggregates is a key issue in planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they become fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals (Okuzumi et al. 2012, ApJ, 752, 106). Thus, some other compression mechanisms are required to form planetesimals. We investigate the static compression of highly porous aggregates. First, we derive the compressive strength by numerical N-body simulations (Kataoka et al. 2013, A&A, 554, 4). Then, we apply the strength to protoplanetary disks, supposing that the highly porous aggregates can be quasi-statically compressed by ram pressure of the disk gas and the self-gravity. As a result, we find the pathway of the dust structure evolution from dust grains via fluffy aggregates to compact planetesimals. Moreover, we find that the fluffy aggregates overcome the barriers in planetesimal formation, which are radial drift, fragmentation, and bouncing barriers. (The paper is now available on arXiv: http://arxiv.org/abs/1307.7984 )

  7. A dedicated compression device for high resolution X-ray tomography of compressed gas diffusion layers

    SciTech Connect

    Tötzke, C.; Manke, I.; Banhart, J.; Gaiselmann, G.; Schmidt, V.; Bohner, J.; Müller, B. R.; Kupsch, A.; Hentschel, M. P.; Lehnert, W.

    2015-04-15

    We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are exemplarily evaluated for a GDL sample in the uncompressed state and for a compression of 30 vol.%. To mimic the geometry of the flow-field, we employed a compression punch with an integrated channel-rib-profile. It turned out that the GDL material is homogeneously compressed under the ribs, however, much less compressed underneath the channel. GDL fibers extend far into the channel volume where they might interfere with the convective gas transport and the removal of liquid water from the cell.

  8. Compression by indexing: an improvement over MPEG-4 body animation parameter compression

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Siddhartha; Bhandarkar, Suchendra M.; Li, Kang

    2006-01-01

    Body Animation Parameters (BAPs) are used to animate MPEG-4 compliant virtual human-like characters. In order to stream BAPs in real time interactive environments, the BAPs are compressed for low bitrate representation using a standard MPEG-4 compression pipeline. However, the standard MPEG-4 compression is inefficient for streaming to power-constrained devices, since the streamed data requires extra power in terms of CPU cycles for decompression. In this paper, we have proposed and implemented an indexing technique for a BAP data stream, resulting in a compressed representation of the motion data. The resulting compressed representation of the BAPs is superior to the MPEG-4-based BAP compression in terms of both required network throughput and power consumption at the client end to receive the compressed data stream and extract the original BAP data from the compressed representation. Although the resulting motion after de-compression at the client end is lossy, the motion distortion is minimized by intelligent use of the hierarchical structure of the skeletal avatar model. Consequently, the proposed indexing method is ideal for streaming of motion data to power- and network-constrained devices such as PDAs, Pocket PCs and Laptop PCs operating in battery mode and other devices in a mobile network environment.

  9. Hydrodynamic effects in the atmosphere of variable stars

    NASA Technical Reports Server (NTRS)

    Davis, C. G., Jr.; Bunker, S. S.

    1975-01-01

    Numerical models of variable stars are established, using a nonlinear radiative transfer coupled hydrodynamics code. The variable Eddington method of radiative transfer is used. Comparisons are for models of W Virginis, beta Doradus, and eta Aquilae. From these models it appears that shocks are formed in the atmospheres of classical Cepheids as well as W Virginis stars. In classical Cepheids, with periods from 7 to 10 days, the bumps occurring in the light and velocity curves appear as the result of a compression wave that reflects from the star's center. At the head of the outward going compression wave, shocks form in the atmosphere. Comparisons between the hydrodynamic motions in W Virginis and classical Cepheids are made. The strong shocks in W Virginis do not penetrate into the interior as do the compression waves formed in classical Cepheids. The shocks formed in W Virginis stars cause emission lines, while in classical Cepheids the shocks are weaker.

  10. Compression failure of angle-ply laminates

    NASA Technical Reports Server (NTRS)

    Peel, Larry D.; Hyer, Michael W.; Shuart, Mark J.

    1991-01-01

    The present work deals with modes and mechanisms of failure in compression of angle-ply laminates. Experimental results were obtained from 42 angle-ply IM7/8551-7a specimens with a lay-up of (±θ/±θ)_6s, where θ, the off-axis angle, ranged from 0° to 90°. The results showed four failure modes, these modes being a function of off-axis angle. Failure modes include fiber compression, inplane transverse tension, inplane shear, and inplane transverse compression. Excessive interlaminar shear strain was also considered an important mode of failure. At low off-axis angles, experimentally observed values were considerably lower than published strengths. It was determined that laminate imperfections in the form of layer waviness could be a major factor in reducing compression strength. Previously developed linear buckling and geometrically nonlinear theories were used, with modifications and enhancements, to examine the influence of layer waviness on compression response. The wavy layer is described by a wave amplitude and a wave length. Linear elastic stress-strain response is assumed. The geometrically nonlinear theory, in conjunction with the maximum stress failure criterion, was used to predict compression failure and failure modes for the angle-ply laminates. A range of wave lengths and amplitudes was used. It was found that for 0° ≤ θ ≤ 15°, failure was most likely due to fiber compression; for 15° < θ ≤ 35°, to inplane transverse tension; for 35° < θ ≤ 70°, to inplane shear; and for θ > 70°, to inplane transverse compression. The fiber compression and transverse tension failure modes depended more heavily on wave length than on wave amplitude. Thus using a single

  11. An investigation of the compressive strength of Kevlar 49/epoxy composites

    NASA Technical Reports Server (NTRS)

    Kulkarni, S. V.; Rosen, B. W.; Rice, J. S.

    1975-01-01

    Tests were performed to evaluate the effect of a wide range of variables including matrix properties, interface properties, fiber prestressing, secondary reinforcement, and others on the ultimate compressive strength of Kevlar 49/epoxy composites. Scanning electron microscopy is used to assess the resulting failure surfaces. In addition, a theoretical study is conducted to determine the influence of fiber anisotropy and lack of perfect bond between fiber and matrix on the shear mode microbuckling. The experimental evaluation of the effect of various constituent and process characteristics on the behavior of these unidirectional composites in compression did not reveal any substantial increase in strength. However, theoretical evaluations indicate that the high degree of fiber anisotropy results in a significant drop in the predicted stress level for internal instability. Scanning electron microscope data analysis suggests that internal fiber failure and smooth surface debonding could be responsible for the measured low compressive strengths.

  12. Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery

    NASA Technical Reports Server (NTRS)

    Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj

    1994-01-01

    Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
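The DPCM step that precedes the Universal Noiseless (Rice-family) Coder can be illustrated with a previous-sample predictor: smooth imagery yields small residuals whose low first-order entropy is what the entropy coder then exploits. The scan-line values below are toy data (real AVHRR samples are 10-bit, multi-channel):

```python
import math
from collections import Counter

# DPCM: encode each sample as the difference from its predecessor.
# The round trip is exactly lossless; the residuals are what get
# entropy-coded in the operational pipeline.

def dpcm_encode(samples):
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(residuals):
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

def entropy_bits(values):
    """First-order entropy in bits/sample, a lower bound for a
    memoryless entropy coder."""
    counts, n = Counter(values), len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

line = [200, 201, 203, 202, 202, 205, 204, 206]
res = dpcm_encode(line)
assert dpcm_decode(res) == line            # lossless round trip
print(entropy_bits(line), entropy_bits(res))
```

On real 10-bit radiometer data the residual entropy falls to roughly half the raw bit depth, which is consistent with the ~2:1 daytime compression the study reports.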

  13. Sublaminate buckling and compression strength of stitched uniweave graphite/epoxy laminates

    SciTech Connect

    Sharma, S.K.; Sankar, B.V.

    1995-12-31

    Effects of through-the-thickness stitching on the sublaminate buckling and residual compression strength (often referred to as compression-after-impact, or CAI, strength) of graphite/epoxy uniweave laminates are experimentally investigated. Three stitching variables were studied: type of stitch yarn, linear density of stitch yarn, and stitch density. Delaminations were created by implanting teflon inserts during processing. The improvement in the CAI strength of the stitched laminates was up to 400% compared to the unstitched laminates. Stitching was observed to effectively restrict sublaminate buckling failure of the laminates. The CAI strength increases rapidly with stitch density, reaching a peak that is very close to the compression strength of the undamaged material. All the stitch yarns in this study performed very similarly in improving the CAI strength. It appears that any stitch yarn with adequate breaking strength and stiffness successfully restricts sublaminate buckling.

  14. Non-premixed flame-turbulence interaction in compressible turbulent flow

    SciTech Connect

    Livescu, D.; Madnia, C. K.

    2002-01-01

    Nonpremixed turbulent reacting flows are intrinsically difficult to model due to the strong coupling between turbulent motions and reaction. The large amount of heat released by a typical hydrocarbon flame leads to significant modifications of the thermodynamic variables and the molecular transport coefficients, and thus alters the fluid dynamics. Additionally, in nonpremixed combustion the flame has a complex spatial structure. Localized expansions and contractions occur, enhancing the dilatational motions. Therefore, the compressibility of the flow and the heat release are intimately related. However, fundamental studies of the role of compressibility in scalar mixing and reaction are scarce. In this paper we present results concerning fundamental aspects of the interaction between a nonpremixed flame and compressible turbulence.

  15. Flexible, compressible, hydrophobic, floatable, and conductive carbon nanotube-polymer sponge

    NASA Astrophysics Data System (ADS)

    Han, Jin-Woo; Kim, Beomseok; Li, Jing; Meyyappan, M.

    2013-02-01

    A flexible, compressible, hydrophobic, ice-repelling, floatable, and conductive carbon nanotube (CNT)-polydimethylsiloxane (PDMS) sponge is presented. The microporous sponge-like PDMS scaffold, fabricated with a sugar-cube template, is capable of CNT uptake. The CNT-PDMS sponge (CPS) is deformable and compressible up to 90%. The Young's modulus varies from 22 kPa to 200 kPa depending on the applied strain. The conductive pathways through the CNT network increase with compressive strain, similar to a variable resistor or pressure sensor. The softness of the CPS can be utilized in artificial skin to grip sensitive objects. In addition, the contact angle of water droplets on the CPS is 141°, so its hydrophobic nature can be exploited in a floating electrode. Furthermore, the hydrophobicity is maintained below freezing temperature, allowing an ice-repelling electrode.

  16. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    SciTech Connect

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-12-15

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information in predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) to one of different compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed regarding the DICOM header information as independent variables and the pooled radiologists' responses as dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by using receiver-operating-characteristic (ROC) analysis regarding the radiologists' pooled responses as the reference standard and by measuring Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. 
The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
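The MLR model in the study maps DICOM-derived features (compression ratio and section thickness were the selected predictors) to the probability that radiologists can distinguish the compressed image from the original. A hedged sketch of that idea follows, with plain stochastic-gradient logistic regression; the training data, feature values, and hyperparameters are synthetic illustrations, not the paper's data or coefficients:

```python
import math

# Logistic regression sketch: predict the probability that a
# JPEG2000-compressed CT image is distinguishable from the original,
# from (compression ratio, section thickness). Synthetic data only.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    w = [0.0] * (len(X[0]) + 1)          # bias + one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, x):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# Features: (compression ratio, section thickness in mm);
# label 1 = radiologists could distinguish the pair. Higher CR and
# thinner sections make compression loss more visible.
X = [(4, 5), (6, 5), (8, 1), (10, 1), (4, 1), (10, 5)]
y = [0, 0, 1, 1, 0, 1]
w = train(X, y)
p_hi = predict(w, (9, 1))
p_lo = predict(w, (5, 5))
print(round(p_hi, 2), round(p_lo, 2))
```

The fitted model assigns a higher distinguishability probability to a heavily compressed thin-section image than to a mildly compressed thick-section one, mirroring the direction of the effects the study reports.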

  17. Compression of echocardiographic scan line data using wavelet packet transform

    NASA Technical Reports Server (NTRS)

    Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.

    2001-01-01

    An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.
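SPIHT itself is an involved significance-ordering coder, but the wavelet decomposition it builds on can be shown in miniature: one level of a 1-D Haar transform on a toy scan line, where averages carry the coarse signal and differences are near zero for smooth data. The scan-line values are assumptions for illustration:

```python
# One level of a 1-D Haar wavelet transform. Smooth data concentrates
# energy in the averages; the small differences are what SPIHT-style
# coders then order and code by significance. Toy data only.

def haar_forward(x):
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    dif = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, dif

def haar_inverse(avg, dif):
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

scanline = [10, 12, 11, 9, 50, 52, 49, 51]
avg, dif = haar_forward(scanline)
assert haar_inverse(avg, dif) == scanline   # perfect reconstruction
print(avg, dif)
```

A wavelet *packet* transform, as used in the paper, additionally decomposes the difference band, adapting the decomposition to wherever the signal energy sits.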

  18. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  19. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  20. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  1. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  2. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  3. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  4. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  5. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  6. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  7. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  8. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  9. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward...

  10. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  11. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  12. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not...

  13. Variable word length encoder reduces TV bandwidth requirements

    NASA Technical Reports Server (NTRS)

    Sivertson, W. E., Jr.

    1965-01-01

    Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.

  14. 3D Compressible Melt Transport with Mesh Adaptivity

    NASA Astrophysics Data System (ADS)

    Dannberg, J.; Heister, T.

    2015-12-01

    Melt generation and migration have been the subject of numerous investigations. However, their typical time and length scales are vastly different from those of mantle convection, and the material properties are highly spatially variable and make the problem strongly non-linear. These challenges make it difficult to study these processes in a unified framework and in three dimensions. We present our extension of the mantle convection code ASPECT that allows for solving additional equations describing the behavior of melt percolating through and interacting with a viscously deforming host rock. One particular advantage is ASPECT's adaptive mesh refinement: the resolution can be increased in areas where melt is present and viscosity gradients are steep, whereas a lower resolution is sufficient in regions without melt. Our approach includes both melt migration and melt generation, allowing for different melting parametrizations. In contrast to previous formulations, we consider the individual compressibilities of the solid and fluid phases in addition to compaction flow. This ensures self-consistency when linking melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We evaluate the functionality and potential of this method using a series of benchmarks and applications, including solitary waves, magmatic shear bands, and melt generation and transport in a rising mantle plume. We compare results of the compressible and incompressible formulations and find melt volume differences of up to 15%. Moreover, we demonstrate that adaptive mesh refinement has the potential to reduce the runtime of a computation by more than one order of magnitude. Our model of magma dynamics provides a framework for investigating links between the deep mantle and melt generation and migration. This approach could prove particularly useful when applied to modeling the generation of komatiites or other melts originating at greater depths.

  15. Tsunami Speed Variations in Density-stratified Compressible Global Oceans

    NASA Astrophysics Data System (ADS)

    Watada, S.

    2013-12-01

    Recent tsunami observations in the deep ocean have accumulated unequivocal evidence that tsunami traveltime delays, relative to linear long-wave tsunami simulations, occur during tsunami propagation in the deep ocean. The delay is up to 2% of the tsunami traveltime. Watada et al. [2013] investigated the cause of the delay using the normal mode theory of tsunamis and attributed it to the compressibility of seawater, the elasticity of the solid earth, and the gravitational potential change associated with mass motion during the passage of tsunamis. Tsunami speed variations in the deep ocean caused by seawater density stratification are investigated using a newly developed propagator matrix method that is applicable to seawater with depth-variable sound speeds and density gradients. For a 4-km-deep ocean, the total tsunami speed reduction is 0.45% compared with incompressible homogeneous seawater; two thirds of the reduction is due to elastic energy stored in the water and one third is due to water density stratification, mainly by hydrostatic compression. Tsunami speeds are computed for global ocean density and sound speed profiles, and characteristic structures are discussed. Tsunami speed reductions are proportional to ocean depth with small variations, except in warm Mediterranean seas. The impacts of seawater compressibility and the elasticity of the solid earth on tsunami traveltime should be included for precise modeling of trans-oceanic tsunamis. (Figure: locations with vertical ocean profiles deeper than 2500 m in World Ocean Atlas 2009, with the Pacific Ocean as defined in WOA09 shaded; panels show (a) tsunami speed variations for global, Pacific, and Mediterranean data, (b) regression lines of the tsunami velocity reduction for all oceans, and (c) vertical ocean profiles at selected grid points.)

  16. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. 
In the first two cases, a wavelet
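The second approach above (decorrelate the spectral bands, then compress) can be illustrated with the simplest spectral decorrelator, band-to-band differencing; the toy pixel spectra below are assumptions, and a real codec would use a KLT or a wavelet transform along the spectral axis instead:

```python
# Spectral decorrelation by band differencing: adjacent hyperspectral
# bands are highly correlated, so storing differences concentrates the
# signal in the first band and leaves small residuals elsewhere.

def spectral_diff(cube):
    """cube: list of bands, each a list of pixel values."""
    out = [list(cube[0])]
    for prev, cur in zip(cube, cube[1:]):
        out.append([c - p for p, c in zip(prev, cur)])
    return out

def spectral_undiff(diffs):
    out = [list(diffs[0])]
    for d in diffs[1:]:
        out.append([p + x for p, x in zip(out[-1], d)])
    return out

cube = [[100, 102], [101, 103], [103, 106], [102, 104]]
d = spectral_diff(cube)
assert spectral_undiff(d) == cube        # lossless round trip
print(d)   # residual bands are small and cheap to code
```

The high spectral redundancy the abstract mentions is exactly why these residual bands end up near zero, and why spectral decorrelation pays off before the per-band 2-D compression step.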

  17. Shock compression of condensed nonideal plasmas

    NASA Astrophysics Data System (ADS)

    Fortov, Vladimir

    2001-06-01

    The physical properties of hot dense plasmas at megabar pressures are of great interest for astro- and planetary physics, inertial confinement fusion, energetics, technology, and many other applications. The lecture presents recent results of experimental investigations of equations of state, compositions, thermodynamic and transport properties, electrical conductivity, and opacity of strongly coupled plasmas generated by intense shock and rarefaction waves. The experimental methods for generating high energy densities in matter, drivers for shock waves, and fast diagnostic methods are discussed. The application of intense shock waves to solid and porous targets allows us to generate degenerate Fermi-like plasmas with maximum pressures up to 4 Gbar and temperatures of 10^7 K. Compression of plasma by a series of incident and reflected shock waves allows us to decrease irreversible heating effects. As a result, such a multiple-compression process becomes close to an isentropic one, which permits us to reach much higher densities and lower temperatures than single shock compression. On the other hand, to increase the irreversibility effects and to generate high-temperature plasma states, experiments on shock compression of porous samples (fine metal powder, aerogels) were performed. The shock compression of saturated metal vapors and previously compressed noble gases by incident and reflected shocks allows us to reach nonideal plasmas on the Hugoniot. The adiabatic expansion of matter initially compressed by intense shocks up to megabars gives us the chance to investigate the intermediate region between the solid and vapor phases of nonideal plasmas, including the metal-insulator transition and the high-temperature saturation curve with the critical points of metals.

  18. Multishock Compression Properties of Warm Dense Argon

    PubMed Central

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced, from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi’ = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock state under the various experimental conditions: ηi’ increases with pressure in the lower-density regime and decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by interaction effects between particles, which reduce it. A temperature-density plot shows that the present multishock compression states of argon lie in the warm dense regime. PMID:26515505

  19. Image encryption and compression based on kronecker compressed sensing and elementary cellular automata scrambling

    NASA Astrophysics Data System (ADS)

    Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You

    2016-10-01

    Because compressed sensing (CS) performs encryption and compression in a single, simple step, it can be used to encrypt and compress an image. Differences in sparsity levels among blocks of the sparsely transformed image degrade compression performance. In this paper, motivated by these differences in sparsity levels, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize sparsity levels. A simple approximate evaluation method is introduced to test the sparsity uniformity. Owing to its low computational complexity and storage, in the second stage of encryption KCS is adopted to encrypt and compress the scrambled and sparsely transformed image, where the small measurement matrix is constructed from a piecewise linear chaotic map. Theoretical analysis and experimental results show that the proposed ECA-based scrambling method performs well in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method achieves better secrecy, compression performance, and flexibility.
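The ECA scrambling stage can be sketched with a standard elementary-cellular-automaton update (rule 30 here). The CA evolution itself is the textbook definition; deriving a permutation by sorting positions on their CA column history is an illustrative choice of mine, not necessarily the paper's exact construction:

```python
# Elementary cellular automaton (rule 30) used to derive a permutation
# for scrambling coefficient positions. Periodic boundary conditions;
# the state-to-permutation mapping is an illustrative assumption.

def eca_step(state, rule=30):
    n = len(state)
    return [
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

def eca_permutation(n, seed_pos, steps=16, rule=30):
    state = [0] * n
    state[seed_pos % n] = 1
    history = []
    for _ in range(steps):
        state = eca_step(state, rule)
        history.append(state)
    # Rank each position by the bit pattern of its column in the history
    # (index as tie-breaker keeps this a valid permutation).
    keys = [tuple(row[i] for row in history) for i in range(n)]
    return sorted(range(n), key=lambda i: (keys[i], i))

def scramble(data, perm):
    return [data[i] for i in perm]

def unscramble(data, perm):
    out = [0] * len(data)
    for j, i in enumerate(perm):
        out[i] = data[j]
    return out

coeffs = [9, 0, 0, 4, 0, 7, 1, 0]          # toy sparse coefficients
perm = eca_permutation(len(coeffs), seed_pos=3)
s = scramble(coeffs, perm)
assert unscramble(s, perm) == coeffs
print(perm, s)
```

Scrambling by permutation preserves the coefficient values while spreading the nonzeros across blocks, which is exactly the sparsity-uniformizing effect the paper needs before the KCS measurement stage.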

  20. Bringing light into the dark: effects of compression clothing on performance and recovery.

    PubMed

    Born, Dennis-Peter; Sperlich, Billy; Holmberg, Hans-Christer

    2013-01-01

    To assess original research addressing the effect of compression clothing on sport performance and recovery after exercise, a computer-based literature search was performed in July 2011 using the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science. Studies examining the effect of compression clothing on endurance, strength and power, motor control, and physiological, psychological, and biomechanical parameters during or after exercise were included, and means and measures of variability of the outcome measures were recorded to estimate the effect size (Hedges g) and associated 95% confidence intervals for comparisons of experimental (compression) and control (noncompression) trials. The characteristics of the compression clothing, participants, and study design were also extracted. The original research from peer-reviewed journals was examined using the Physiotherapy Evidence Database (PEDro) scale. Results indicated small effect sizes for the application of compression clothing during exercise for short-duration sprints (10-60 m), vertical-jump height, extending time to exhaustion (such as running at VO2max or during incremental tests), and time-trial performance (3-60 min). When compression clothing was applied for recovery purposes after exercise, small to moderate effect sizes were observed in recovery of maximal strength and power, especially in vertical-jump exercise; reductions in muscle swelling and perceived muscle pain; blood lactate removal; and increases in body temperature. These results suggest that the application of compression clothing may assist athletic performance and recovery in given situations, with consideration of the magnitude of the effects and their practical relevance. PMID:23302134