A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
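The abstract does not spell out which transform was used, but the idea of transform coding can be illustrated with a minimal, hedged sketch: block DCT followed by coarse quantization, with the count of surviving coefficients as a rough proxy for the achievable ratio. The block size, quantization step, and use of SciPy's DCT are illustrative assumptions, not the study's actual method.

```python
# Minimal sketch of block-transform image compression (not the paper's exact
# method): 8x8 DCT, coarse quantization, and a count of nonzero coefficients
# as a rough proxy for the achievable compression ratio.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q=20):
    """Quantize the 2-D DCT of an 8x8 block; q trades fidelity for ratio."""
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / q).astype(np.int16)

def decompress_block(qcoeffs, q=20):
    return idctn(qcoeffs * q, norm="ortho")

rng = np.random.default_rng(0)
image = rng.normal(128, 20, size=(64, 64))           # stand-in for camera data
blocks = image.reshape(8, 8, 8, 8).swapaxes(1, 2)     # tile into 8x8 blocks
quantized = np.array([[compress_block(b) for b in row] for row in blocks])
print("kept coefficients:", np.count_nonzero(quantized), "of", image.size)
```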
A Study of the Efficiency of High-strength, Steel, Cellular-core Sandwich Plates in Compression
NASA Technical Reports Server (NTRS)
Johnson, Aldie E., Jr.; Semonian, Joseph W.
1956-01-01
Structural efficiency curves are presented for high-strength, stainless-steel, cellular-core sandwich plates of various proportions subjected to compressive end loads for temperatures of 80 F and 600 F. Optimum proportions of sandwich plates for any value of the compressive loading intensity can be determined from the curves. The efficiency of steel sandwich plates of optimum proportions is compared with the efficiency of solid plates of high-strength steel and aluminum and titanium alloys at the two temperatures.
Audiovisual focus of attention and its application to Ultra High Definition video compression
NASA Astrophysics Data System (ADS)
Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj
2014-02-01
Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around this is to use as much information as possible from the scene. Since most video sequences have an associated audio track, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the gain in compression efficiency is analyzed.
Zeng, Xianglong; Guo, Hairun; Zhou, Binbin; Bache, Morten
2012-11-19
We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency. Numerical results show that compressed pulses with less than three-cycle duration can be achieved even when the compression factor is very large, and in contrast to standard soliton compression, these compressed pulses have minimal pedestal and high quality factor.
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
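The gain behind predictive lossless EEG coders of this kind can be illustrated with a minimal sketch, which is not the authors' MVAR or context-based models: a first-order predictor per channel and an empirical entropy estimate of the residuals. The signal model and all parameters below are illustrative assumptions.

```python
# Minimal sketch of predictive lossless coding for EEG (not the paper's
# MVAR/context models): predict each sample from the previous one and compare
# the empirical entropy of raw samples vs. prediction residuals.
import numpy as np

def empirical_entropy(x):
    """Shannon entropy (bits/sample) of integer-valued data."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
t = np.arange(4096)
eeg = (50 * np.sin(2 * np.pi * 10 * t / 256) + rng.normal(0, 3, t.size)).astype(np.int32)

residual = np.diff(eeg, prepend=eeg[0])     # first-order predictor: x[n-1]
print("raw entropy     :", round(empirical_entropy(eeg), 2), "bits/sample")
print("residual entropy:", round(empirical_entropy(residual), 2), "bits/sample")
# An entropy coder (Huffman/arithmetic) applied to the residuals approaches
# the lower figure, which is where the compression gain comes from.
```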
Fuels for high-compression engines
NASA Technical Reports Server (NTRS)
Sparrow, Stanwood W
1926-01-01
From theoretical considerations one would expect an increase in power and thermal efficiency to result from increasing the compression ratio of an internal combustion engine. In reality it is upon the expansion ratio that the power and thermal efficiency depend, but since in conventional engines this is equal to the compression ratio, it is generally understood that a change in one ratio is accompanied by an equal change in the other. Tests over a wide range of compression ratios (extending to ratios as high as 14.1) have shown that ordinarily an increase in power and thermal efficiency is obtained as expected provided serious detonation or preignition does not result from the increase in ratio.
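The theoretical expectation referred to here can be made concrete with the ideal air-standard Otto-cycle relation, a textbook result not taken from the report itself: thermal efficiency is 1 - r^(1 - gamma) for compression ratio r.

```python
# Ideal air-standard Otto-cycle efficiency as a function of compression ratio,
# eta = 1 - r**(1 - gamma); real engines fall short of these figures, but the
# trend motivates raising the ratio toward 14:1.
gamma = 1.4  # ratio of specific heats for air
for r in (5, 7, 10, 14):
    eta = 1 - r ** (1 - gamma)
    print(f"compression ratio {r:>2}:1  ->  ideal thermal efficiency {eta:.1%}")
```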
Sriraam, N.
2012-01-01
The development of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provides timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transforms and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with a compression efficiency of 67%) is achieved with the proposed scheme with a short encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238
NASA Technical Reports Server (NTRS)
Akkerman, J. W.
1982-01-01
New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.
Adaptive efficient compression of genomes
2012-01-01
Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
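The referential idea of storing only differences against a reference can be sketched with a toy encoder; this is not the paper's algorithm, and the k-mer index, match format, and sequences below are illustrative assumptions.

```python
# Toy referential compression (not the paper's algorithm): encode the target
# as (match offset, length) operations against a reference plus literal
# characters, using a simple k-mer index into the reference.
def referential_encode(reference, target, k=8):
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], i)
    ops, pos = [], 0
    while pos < len(target):
        seed = target[pos:pos + k]
        if len(seed) == k and seed in index:
            start, length = index[seed], k
            while (start + length < len(reference) and pos + length < len(target)
                   and reference[start + length] == target[pos + length]):
                length += 1                       # greedily extend the match
            ops.append(("match", start, length))
            pos += length
        else:
            ops.append(("literal", target[pos]))
            pos += 1
    return ops

ref = "ACGTACGTTTGACCAGT" * 10
tgt = ref[:60] + "A" + ref[60:120]   # one insertion relative to the reference
print(referential_encode(ref, tgt)[:5])
```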
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Benton, Nathanael; Burns, Patrick
Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.
Structural efficiencies of various aluminum, titanium, and steel alloys at elevated temperatures
NASA Technical Reports Server (NTRS)
Heimerl, George J; Hughes, Philip J
1953-01-01
Efficient temperature ranges are indicated for two high-strength aluminum alloys, two titanium alloys, and three steels for some short-time compression-loading applications at elevated temperatures. Only the effects of constant temperatures and short exposure to temperature are considered, and creep is assumed not to be a factor. The structural efficiency analysis is based upon preliminary results of short-time elevated-temperature compressive stress-strain tests of the materials. The analysis covers strength under uniaxial compression, elastic stiffness, column buckling, and the buckling of long plates in compression or in shear.
Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors
Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee
2012-01-01
In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181
2D-RBUC for efficient parallel compression of residuals
NASA Astrophysics Data System (ADS)
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with an efficient SIMD parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height-field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades off a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).
2011-01-01
The aim of this study was to investigate bending stiffness and compression strength perpendicular to the grain of Norway spruce (Picea abies (L.) Karst.) trunkwood with different anatomical and hydraulic properties. Hydraulically less safe mature sapwood had bigger hydraulic lumen diameters and higher specific hydraulic conductivities than hydraulically safer juvenile wood. Bending stiffness (MOE) was higher, whereas radial compression strength was lower in mature than in juvenile wood. A density-based tradeoff between MOE and hydraulic efficiency was apparent in mature wood only. Across cambial age, bending stiffness did not compromise hydraulic efficiency due to variation in latewood percent and because of the structural demands of the tree top (e.g. high flexibility). Radial compression strength did, however, compromise hydraulic efficiency because it was extremely dependent on the characteristics of the “weakest” wood part, the highly conductive earlywood. An increase in conduit wall reinforcement of earlywood tracheids would be too costly for the tree. Increasing radial compression strength by modification of microfibril angles or ray cell number could result in a decrease of MOE, which would negatively affect the trunk’s capability to support the crown. We propose that radial compression strength could be an easily assessable and highly predictive parameter for the resistance against implosion or vulnerability to cavitation across conifer species, which should be the topic of further studies. PMID:22058609
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus both irregular ECG signals and QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
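The beats-to-image preprocessing step can be sketched as follows; this is a simplified illustration, not the authors' QRS detector or period-sorting scheme, and the synthetic signal and segment width are assumptions.

```python
# Sketch of the 2-D ECG preprocessing idea (simplified; no real QRS detector):
# cut the signal at approximate beat locations, equalize segment lengths, and
# stack the beats as rows of an image that a codec such as JPEG2000 can exploit.
import numpy as np

def beats_to_image(ecg, beat_starts, width):
    rows = []
    for s, e in zip(beat_starts[:-1], beat_starts[1:]):
        beat = ecg[s:e]
        rows.append(np.pad(beat, (0, max(0, width - beat.size)))[:width])
    return np.vstack(rows)            # inter-beat correlation is now vertical

fs = 360
t = np.arange(10 * fs)
ecg = np.sin(2 * np.pi * 1.2 * t / fs) ** 15          # crude periodic "beats"
beat_starts = np.arange(0, t.size, int(fs / 1.2))      # pretend QRS locations
img = beats_to_image(ecg, beat_starts, width=int(fs / 1.2) + 10)
print("2-D representation shape:", img.shape)
```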
A pulse-compression-ring circuit for high-efficiency electric propulsion.
Owens, Thomas L
2008-03-01
A highly efficient, highly reliable pulsed-power system has been developed for use in high power, repetitively pulsed inductive plasma thrusters. The pulsed inductive thruster ejects plasma propellant at a high velocity using a Lorentz force developed through inductive coupling to the plasma. Having greatly increased propellant-utilization efficiency compared to chemical rockets, this type of electric propulsion system may one day propel spacecraft on long-duration deep-space missions. High system reliability and electrical efficiency are extremely important for these extended missions. In the prototype pulsed-power system described here, exceptional reliability is achieved using a pulse-compression circuit driven by both active solid-state switching and passive magnetic switching. High efficiency is achieved using a novel ring architecture that recovers unused energy in a pulse-compression system with minimal circuit loss after each impulse. As an added benefit, voltage reversal is eliminated in the ring topology, resulting in long lifetimes for energy-storage capacitors. System tests were performed using an adjustable inductive load at a voltage level of 3.3 kV, a peak current of 20 kA, and a current switching rate of 15 kA/μs.
Efficient image acquisition design for a cancer detection system
NASA Astrophysics Data System (ADS)
Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet
2013-09-01
Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images need huge storage, thereby necessitating the use of a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome to transmit them efficiently. In addition, recent studies raise concerns about the risk of low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than other compression techniques applied after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.
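The compressed-sensing idea alluded to here (acquiring fewer measurements than samples before digitization) can be sketched in a few lines; the sensing matrix, sparsity level, and greedy recovery loop are illustrative assumptions, not the authors' acquisition design.

```python
# Sketch of compressed sensing: measure a sparse signal with far fewer random
# projections than samples, then recover it with a simple greedy solver.
import numpy as np

rng = np.random.default_rng(2)
N, M, k = 256, 64, 5                       # signal length, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(0, 1, k)    # k-sparse signal
Phi = rng.normal(0, 1 / np.sqrt(M), size=(M, N))            # random sensing matrix
y = Phi @ x                                                  # "analog" measurements

# Greedy recovery (orthogonal matching pursuit), sufficient for this toy case.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef
x_hat = np.zeros(N)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```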
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis
2018-03-01
The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge), for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
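As a point of reference only, the ideal-gas approximation shows how the same inputs (suction and discharge pressure and temperature) yield a polytropic efficiency; the paper's rigorous real-gas treatment is not reproduced here, and the numbers in the example are made up.

```python
# Ideal-gas approximation of polytropic compression efficiency from measured
# suction/discharge conditions (the paper uses a rigorous real-gas method,
# which this sketch does not reproduce):
#   eta_p = ((kappa - 1)/kappa) * ln(p2/p1) / ln(T2/T1)
import math

def polytropic_efficiency(p1, T1, p2, T2, kappa=1.4):
    """Pressures in any consistent unit, temperatures in kelvin, kappa = cp/cv."""
    return ((kappa - 1) / kappa) * math.log(p2 / p1) / math.log(T2 / T1)

# Example: air compressed from 1 bar / 293 K to 4 bar / 470 K.
print(f"eta_p = {polytropic_efficiency(1.0, 293.0, 4.0, 470.0):.3f}")
```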
A new efficient method for color image compression based on visual attention mechanism
NASA Astrophysics Data System (ADS)
Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang
2010-11-01
One of the key procedures in color image compression is to extract the regions of interest (ROIs) and assign them different compression ratios. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biologically motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent step is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Furthermore, experimental results and the quantitative and qualitative analysis in the paper show excellent performance compared with traditional color image compression approaches.
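The ROI-weighted JPEG step can be sketched with Pillow; the attention model that finds the ROI is not reproduced, and the ROI box, quality settings, and image are hard-coded assumptions for illustration.

```python
# Sketch of ROI-weighted JPEG coding: encode the ROI crop at high quality and
# the full frame at low quality, as two layers of very different sizes.
from io import BytesIO
from PIL import Image

def roi_jpeg_sizes(img, roi_box, q_roi=90, q_bg=30):
    roi = img.crop(roi_box)
    buf_roi, buf_bg = BytesIO(), BytesIO()
    roi.save(buf_roi, format="JPEG", quality=q_roi)   # high fidelity for the ROI
    img.save(buf_bg, format="JPEG", quality=q_bg)     # coarse background layer
    return buf_roi.tell(), buf_bg.tell()

image = Image.new("RGB", (640, 480), (120, 130, 140))
roi_bytes, bg_bytes = roi_jpeg_sizes(image, roi_box=(200, 150, 440, 330))
print("ROI bytes:", roi_bytes, "background bytes:", bg_bytes)
```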
RF pulse compression for future linear colliders
NASA Astrophysics Data System (ADS)
Wilson, Perry B.
1995-07-01
Future (nonsuperconducting) linear colliders will require very high values of peak rf power per meter of accelerating structure. The role of rf pulse compression in producing this power is examined within the context of overall rf system design for three future colliders at energies of 1.0-1.5 TeV, 5 TeV, and 25 TeV. In order to keep the average AC input power and the length of the accelerator within reasonable limits, a collider in the 1.0-1.5 TeV energy range will probably be built at an x-band rf frequency, and will require a peak power on the order of 150-200 MW per meter of accelerating structure. A 5 TeV collider at 34 GHz with a reasonable length (35 km) and AC input power (225 MW) would require about 550 MW per meter of structure. Two-beam accelerators can achieve peak powers of this order by applying dc pulse compression techniques (induction linac modules) to produce the drive beam. Klystron-driven colliders achieve high peak power by a combination of dc pulse compression (modulators) and rf pulse compression, with about the same overall rf system efficiency (30-40%) as a two-beam collider. A high gain (6.8) three-stage binary pulse compression system with high efficiency (80%) is described, which (compared to a SLED-II system) can be used to reduce the klystron peak power by about a factor of two, or alternatively, to cut the number of klystrons in half for a 1.0-1.5 TeV x-band collider. For a 5 TeV klystron-driven collider, a high gain, high efficiency rf pulse compression system is essential.
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10⁻² Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
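The interframe idea of quantizing coordinates and storing per-frame deltas can be sketched as follows; the quantization step, trajectory, and use of zlib as the back-end coder are illustrative assumptions rather than the paper's predictors.

```python
# Sketch of interframe trajectory compression: quantize coordinates, store the
# first frame plus per-frame deltas, and let a general-purpose coder squeeze
# the (mostly tiny) deltas.  Quantization step and sizes are illustrative.
import numpy as np, zlib

def compress_trajectory(frames, step=1e-3):            # step in angstrom
    q = np.round(frames / step).astype(np.int32)
    deltas = np.diff(q, axis=0)                          # temporal coherence
    payload = q[0].tobytes() + deltas.astype(np.int16).tobytes()
    return zlib.compress(payload, level=6)

rng = np.random.default_rng(3)
n_frames, n_atoms = 200, 1000
traj = np.cumsum(rng.normal(0, 0.005, (n_frames, n_atoms, 3)), axis=0) + 10.0
raw = traj.astype(np.float32).nbytes
comp = len(compress_trajectory(traj))
print(f"raw {raw} bytes -> compressed {comp} bytes ({raw / comp:.1f}x)")
```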
152 W average power Tm-doped fiber CPA system.
Stutzki, Fabian; Gaida, Christian; Gebhardt, Martin; Jansen, Florian; Wienke, Andreas; Zeitner, Uwe; Fuchs, Frank; Jauregui, Cesar; Wandt, Dieter; Kracht, Dietmar; Limpert, Jens; Tünnermann, Andreas
2014-08-15
A high-power thulium (Tm)-doped fiber chirped-pulse amplification system emitting a record compressed average output power of 152 W and 4 MW peak power is demonstrated. This result is enabled by utilizing Tm-doped photonic crystal fibers with mode-field diameters of 35 μm, which mitigate detrimental nonlinearities, exhibit slope efficiencies of more than 50%, and allow for reaching a pump-power-limited average output power of 241 W. The high compression efficiency has been achieved by using multilayer dielectric gratings with diffraction efficiencies higher than 98%.
Layered compression for high-precision depth data.
Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen
2015-12-01
With the development of depth data acquisition technologies, access to high-precision depth with more than 8-b depths has become much easier and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of the high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee the data format of LSBs layer is 8 b after taking the quantization error from MSBs layer. For the LSBs layer, standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated from this scheme can achieve better performance in view synthesis and gesture recognition applications compared with the conventional coding schemes because of the error control algorithm.
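The layering step described above can be sketched in a few lines; the error-control scheme and the actual 8-b codecs applied to each layer are omitted, and the random depth map is only a stand-in.

```python
# Sketch of the MSB/LSB layering step for 16-bit depth maps (the error-control
# algorithm and the 8-bit codecs used for each layer are not shown here).
import numpy as np

depth = np.random.default_rng(4).integers(0, 2**16, size=(480, 640), dtype=np.uint16)
msb = (depth >> 8).astype(np.uint8)    # coarse depth distribution, sharp edges
lsb = (depth & 0xFF).astype(np.uint8)  # fine variation, fed to a standard codec

reconstructed = (msb.astype(np.uint16) << 8) | lsb
assert np.array_equal(reconstructed, depth)
print("layers:", msb.shape, lsb.shape, "lossless round-trip OK")
```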
Highly efficient frequency conversion with bandwidth compression of quantum light
Allgaier, Markus; Ansari, Vahid; Sansoni, Linda; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Harder, Georg; Brecht, Benjamin; Silberhorn, Christine
2017-01-01
Hybrid quantum networks rely on efficient interfacing of dissimilar quantum nodes, as elements based on parametric downconversion sources, quantum dots, colour centres or atoms are fundamentally different in their frequencies and bandwidths. Although pulse manipulation has been demonstrated in very different systems, to date no interface exists that provides both an efficient bandwidth compression and a substantial frequency translation at the same time. Here we demonstrate an engineered sum-frequency-conversion process in lithium niobate that achieves both goals. We convert pure photons at telecom wavelengths to the visible range while compressing the bandwidth by a factor of 7.47 under preservation of non-classical photon-number statistics. We achieve internal conversion efficiencies of 61.5%, significantly outperforming spectral filtering for bandwidth compression. Our system thus makes the connection between previously incompatible quantum systems as a step towards usable quantum networks. PMID:28134242
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Diskin, B.; Brandt, A.
1999-01-01
The distributed-relaxation multigrid and defect-correction methods are applied to the two-dimensional compressible Navier-Stokes equations. The formulation is intended for high Reynolds number applications and several applications are made at a laminar Reynolds number of 10,000. A staggered-grid arrangement of variables is used; the coupled pressure and internal energy equations are solved together with multigrid, requiring a block 2x2 matrix solution. Textbook multigrid efficiencies are attained for incompressible and slightly compressible simulations of the boundary layer on a flat plate. Textbook efficiencies are obtained for compressible simulations up to Mach numbers of 0.7 for a viscous wake simulation.
Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach.
Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab
2018-01-16
Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR = 6 and PRD = 1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.
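The two figures of merit quoted here have standard definitions, sketched below. Note that PRD is sometimes computed against a mean-removed signal; this sketch uses the raw form, and the sample resolution and sizes are illustrative assumptions.

```python
# Standard definitions of the compression ratio (CR) and percentage
# root-mean-square difference (PRD) used as figures of merit above.
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd(x, x_hat):
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

x = np.sin(np.linspace(0, 20 * np.pi, 2000)) * 1.5
x_hat = x + np.random.default_rng(5).normal(0, 0.02, x.size)   # mock reconstruction
print("CR  =", compression_ratio(11 * 2000, 11 * 2000 / 6))     # e.g. a 6:1 coder
print("PRD =", round(prd(x, x_hat), 2), "%")
```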
Compression of a mixed antiproton and electron non-neutral plasma to high densities
NASA Astrophysics Data System (ADS)
Aghion, Stefano; Amsler, Claude; Bonomi, Germano; Brusa, Roberto S.; Caccia, Massimo; Caravita, Ruggero; Castelli, Fabrizio; Cerchiari, Giovanni; Comparat, Daniel; Consolati, Giovanni; Demetrio, Andrea; Di Noto, Lea; Doser, Michael; Evans, Craig; Fanì, Mattia; Ferragut, Rafael; Fesel, Julian; Fontana, Andrea; Gerber, Sebastian; Giammarchi, Marco; Gligorova, Angela; Guatieri, Francesco; Haider, Stefan; Hinterberger, Alexander; Holmestad, Helga; Kellerbauer, Alban; Khalidova, Olga; Krasnický, Daniel; Lagomarsino, Vittorio; Lansonneur, Pierre; Lebrun, Patrice; Malbrunot, Chloé; Mariazzi, Sebastiano; Marton, Johann; Matveev, Victor; Mazzotta, Zeudi; Müller, Simon R.; Nebbia, Giancarlo; Nedelec, Patrick; Oberthaler, Markus; Pacifico, Nicola; Pagano, Davide; Penasa, Luca; Petracek, Vojtech; Prelz, Francesco; Prevedelli, Marco; Rienaecker, Benjamin; Robert, Jacques; Røhne, Ole M.; Rotondi, Alberto; Sandaker, Heidi; Santoro, Romualdo; Smestad, Lillian; Sorrentino, Fiodor; Testera, Gemma; Tietje, Ingmari C.; Widmann, Eberhard; Yzombard, Pauline; Zimmer, Christian; Zmeskal, Johann; Zurlo, Nicola; Antonello, Massimiliano
2018-04-01
We describe a multi-step "rotating wall" compression of a mixed cold antiproton-electron non-neutral plasma in a 4.46 T Penning-Malmberg trap developed in the context of the AEḡIS experiment at CERN. Such traps are routinely used for the preparation of cold antiprotons suitable for antihydrogen production. A tenfold antiproton radius compression has been achieved, with a minimum antiproton radius of only 0.17 mm. We describe the experimental conditions necessary to perform such a compression: minimizing the tails of the electron density distribution is paramount to ensure that the antiproton density distribution follows that of the electrons. Such electron density tails are remnants of rotating wall compression and in many cases can remain unnoticed. We observe that the compression dynamics for a pure electron plasma behaves the same way as that of a mixed antiproton and electron plasma. Thanks to this optimized compression method and the high single shot antiproton catching efficiency, we observe for the first time cold and dense non-neutral antiproton plasmas with particle densities n ≥ 10¹³ m⁻³, which pave the way for an efficient pulsed antihydrogen production in AEḡIS.
Economic and environmental evaluation of compressed-air cars
NASA Astrophysics Data System (ADS)
Creutzig, Felix; Papson, Andrew; Schipper, Lee; Kammen, Daniel M.
2009-10-01
Climate change and energy security require a reduction in travel demand, a modal shift, and technological innovation in the transport sector. Through a series of press releases and demonstrations, a car using energy stored in compressed air produced by a compressor has been suggested as an environmentally friendly vehicle of the future. We analyze the thermodynamic efficiency of a compressed-air car powered by a pneumatic engine and consider the merits of compressed air versus chemical storage of potential energy. Even under highly optimistic assumptions the compressed-air car is significantly less efficient than a battery electric vehicle and produces more greenhouse gas emissions than a conventional gas-powered car with a coal intensive power mix. However, a pneumatic-combustion hybrid is technologically feasible, inexpensive and could eventually compete with hybrid electric vehicles.
Engineering tough, highly compressible, biodegradable hydrogels by tuning the network architecture.
Gu, Dunyin; Tan, Shereen; Xu, Chenglong; O'Connor, Andrea J; Qiao, Greg G
2017-06-20
By precisely tuning the network architecture, tough, highly compressible hydrogels were engineered. The hydrogels were made by interconnecting high-functionality hydrophobic domains through linear tri-block chains, consisting of soft hydrophilic middle blocks, flanked with flexible hydrophobic blocks. In showing their applicability, the efficient encapsulation and prolonged release of hydrophobic drugs were achieved.
A Comparison of LBG and ADPCM Speech Compression Techniques
NASA Astrophysics Data System (ADS)
Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.
Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. In all speech there is a degree of predictability, and speech coding techniques exploit this to reduce bit rates yet still maintain a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. Here, we implemented the methods using MATLAB 7.0. The methods used in this study gave good results and performance in compressing speech, and listening tests showed that efficient, high-quality coding is achieved.
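The LBG (generalized Lloyd) codebook training at the heart of the first method can be sketched as follows; the splitting schedule is simplified, the frame vectors are synthetic stand-ins for speech, and this is not the paper's MATLAB implementation.

```python
# Minimal LBG (generalized Lloyd) codebook training on speech-like vectors;
# the splitting/perturbation schedule is simplified compared with a full coder.
import numpy as np

def lbg(vectors, codebook_size, eps=1e-2, iters=20):
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < codebook_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split
        for _ in range(iters):                                              # Lloyd
            d = np.linalg.norm(vectors[:, None, :] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = vectors[nearest == k]
                if members.size:
                    codebook[k] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(6)
frames = rng.normal(0, 1, (2000, 8))         # stand-in for 8-sample speech frames
cb = lbg(frames, codebook_size=16)
print("codebook shape:", cb.shape)            # each frame is sent as a 4-bit index
```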
Compression of computer generated phase-shifting hologram sequence using AVC and HEVC
NASA Astrophysics Data System (ADS)
Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic
2013-09-01
With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading technique of video coding. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15,000 kbps.
A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Edwards, Jack R.; Mcrae, D. S.
1992-01-01
A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
High efficiency video coding for ultrasound video communication in m-health systems.
Panayides, A; Antoniou, Z; Pattichis, M S; Pattichis, C S; Constantinides, A G
2012-01-01
Emerging high efficiency video compression methods and wider availability of wireless network infrastructure will significantly advance existing m-health applications. For medical video communications, the emerging video compression and network standards support low-delay and high-resolution video transmission, at the clinically acquired resolution and frame rates. Such advances are expected to further promote the adoption of m-health systems for remote diagnosis and emergency incidents in daily clinical practice. This paper compares the performance of the emerging high efficiency video coding (HEVC) standard to the current state-of-the-art H.264/AVC standard. The experimental evaluation, based on five atherosclerotic plaque ultrasound videos encoded at QCIF, CIF, and 4CIF resolutions, demonstrates that a 50% reduction in bitrate requirements is possible for equivalent clinical quality.
Lok, U-Wai; Li, Pai-Chi
2016-03-01
Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
Possible improvements in gasoline engines
NASA Technical Reports Server (NTRS)
Ziembinski, S
1923-01-01
High-compression engines are investigated with the three main objects being elimination of vibration, increase of maximum efficiency, and conservation of this efficiency at the highest possible speeds.
Coding For Compression Of Low-Entropy Data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
Binary video codec for data reduction in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias
2013-02-01
A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons and many more. The energy budget in outdoor applications of WVSN is limited by the batteries, and frequent replacement of batteries is usually not desirable. Therefore, the processing as well as the communication energy consumption of the VSN needs to be optimized so that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing the images and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the information amount in these segmented images. However, the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for designing other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm, which is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduces the information amount in the change frame. However, if the number of objects in the change frames exceeds a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSN. We propose to implement all three compression techniques, i.e. image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the three results. In this way the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
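The selection logic described above can be sketched in a few lines; zlib on packed bits stands in for the actual bi-level and ROI coders, and the synthetic frames and frame size are illustrative assumptions.

```python
# Sketch of the BVC selection logic: code the bi-level frame directly, code
# the XOR change frame, and keep whichever bit stream is smaller.
import numpy as np, zlib

def encode_bilevel(frame):
    return zlib.compress(np.packbits(frame).tobytes())

def bvc_encode(prev_frame, frame):
    image_code = encode_bilevel(frame)
    change_code = encode_bilevel(np.bitwise_xor(prev_frame, frame))
    return ("change", change_code) if len(change_code) < len(image_code) \
           else ("image", image_code)

rng = np.random.default_rng(7)
prev = (rng.random((120, 160)) > 0.9).astype(np.uint8)
curr = prev.copy()
curr[40:60, 50:80] ^= 1                      # a small moving object
mode, payload = bvc_encode(prev, curr)
print("selected mode:", mode, "payload bytes:", len(payload))
```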
Modeling of Single and Dual Reservoir Porous Media Compressed Gas (Air and CO2) Storage Systems
NASA Astrophysics Data System (ADS)
Oldenburg, C. M.; Liu, H.; Borgia, A.; Pan, L.
2017-12-01
Intermittent renewable energy sources are causing increasing demand for energy storage. The deep subsurface offers promising opportunities for energy storage because it can safely contain high-pressure gases. Porous media compressed air energy storage (PM-CAES) is one approach, although the only facilities in operation are in caverns (C-CAES) rather than porous media. Just like in C-CAES, PM-CAES operates generally by injecting working gas (air) through well(s) into the reservoir compressing the cushion gas (existing air in the reservoir). During energy recovery, high-pressure air from the reservoir is mixed with fuel in a combustion turbine to produce electricity, thereby reducing compression costs. Unlike in C-CAES, the storage of energy in PM-CAES occurs variably across pressure gradients in the formation, while the solid grains of the matrix can release/store heat. Because air is the working gas, PM-CAES has fairly low thermal efficiency and low energy storage density. To improve the energy storage density, we have conceived and modeled a closed-loop two-reservoir compressed CO2 energy storage system. One reservoir is the low-pressure reservoir, and the other is the high-pressure reservoir. CO2 is cycled back and forth between reservoirs depending on whether energy needs to be stored or recovered. We have carried out thermodynamic and parametric analyses of the performance of an idealized two-reservoir CO2 energy storage system under supercritical and transcritical conditions for CO2 using a steady-state model. Results show that the transcritical compressed CO2 energy storage system has higher round-trip efficiency and exergy efficiency, and larger energy storage density than the supercritical compressed CO2 energy storage. However, the configuration of supercritical compressed CO2 energy storage is simpler, and the energy storage densities of the two systems are both higher than that of PM-CAES, which is advantageous in terms of storage volume for a given power rating.
Tanner, Timo; Antikainen, Osmo; Ehlers, Henrik; Yliruusi, Jouko
2017-06-30
Modern tableting machines produce large quantities of tablets at high output. Consequently, methods to examine powder compression in a high-velocity setting are in demand. In the present study, a novel gravitation-based method was developed to examine powder compression. A steel bar is dropped on a punch to compress microcrystalline cellulose and starch samples inside the die. The position of the bar is read by a high-accuracy laser displacement sensor, which provides a reliable distance-time plot of the bar movement. The in-die height and density of the compact can be read directly from these data, which can be examined further to obtain information on velocity, acceleration and energy distribution during compression. The energy consumed in compact formation could also be determined. Despite the high vertical compression speed, the method was proven to be cost-efficient, accurate and reproducible. Copyright © 2017 Elsevier B.V. All rights reserved.
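How a distance-time trace yields velocity, acceleration and energy can be sketched numerically; the trace below is simulated free fall rather than real sensor data, and the sampling rate and bar mass are assumptions.

```python
# Sketch of turning a bar-displacement trace into velocity, acceleration and
# impact energy (simulated trace; the real input would come from the laser
# displacement sensor described in the abstract).
import numpy as np

fs = 20_000                                   # assumed sensor sampling rate, Hz
t = np.arange(0, 0.05, 1 / fs)
g, m = 9.81, 5.0                              # gravity; assumed bar mass in kg
distance = 0.5 * g * t**2                     # free fall before impact

velocity = np.gradient(distance, t)
acceleration = np.gradient(velocity, t)
kinetic_energy = 0.5 * m * velocity**2        # energy available at each instant
print(f"impact velocity ~ {velocity[-1]:.2f} m/s, "
      f"energy ~ {kinetic_energy[-1]:.2f} J")
```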
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
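The Golomb-Rice stage used by FELICS-style coders can be sketched as follows; the parameter choice and the absence of context modelling are deliberate simplifications, and the residual values are made up.

```python
# Minimal Golomb-Rice coding of prediction residuals (the FELICS context
# modelling and adaptive parameter selection are greatly simplified here).
import numpy as np

def zigzag(v):                      # map signed residual to non-negative integer
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(values, k):
    bits = []
    for v in values:
        u = zigzag(int(v))
        bits.append("1" * (u >> k) + "0")                     # unary quotient
        if k:
            bits.append(format(u & ((1 << k) - 1), f"0{k}b"))  # k-bit remainder
    return "".join(bits)

residuals = np.array([0, -1, 2, 0, 1, -3, 0, 0, 5, -2])
k = max(0, int(np.ceil(np.log2(np.mean(np.abs(residuals)) + 1))))  # crude k choice
stream = rice_encode(residuals, k)
print(f"k = {k}, {len(stream)} bits for {residuals.size} residuals")
```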
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
Basic concepts for the design of high-efficiency single-junction and multibandgap solar cells
NASA Technical Reports Server (NTRS)
Fan, J. C. C.
1985-01-01
Concepts for obtaining practical solar-cell modules with one-sun efficiencies up to 30 percent at air mass 1 are now well understood. Such high-efficiency modules utilize multibandgap structures. To achieve module efficiencies significantly above 30 percent, it is necessary to employ different concepts such as spectral compression and broad-band detection. A detailed description of concepts for the design of high-efficiency multibandgap solar cells is given.
HEVC for high dynamic range services
NASA Astrophysics Data System (ADS)
Kim, Seung-Hwan; Zhao, Jie; Misra, Kiran; Segall, Andrew
2015-09-01
Displays capable of showing a greater range of luminance values can render content containing high dynamic range information in a way such that the viewers have a more immersive experience. This paper introduces the design aspects of a high dynamic range (HDR) system, and examines the performance of the HDR processing chain in terms of compression efficiency. Specifically, it examines the relation between the recently introduced Society of Motion Picture and Television Engineers (SMPTE) ST 2084 transfer function and the High Efficiency Video Coding (HEVC) standard. SMPTE ST 2084 is designed to cover the full range of an HDR signal from 0 to 10,000 nits; however, in many situations the valid signal range of actual video may be smaller than the range supported by SMPTE ST 2084. This restricted signal range results in a restricted range of code values for the input video data and adversely impacts compression efficiency. In this paper, we propose a code-value remapping method that extends the restricted-range code values to the full range so that existing standards such as HEVC may better compress the video content. The paper also identifies the related non-normative encoder-only changes that are required for the remapping method to allow a fair comparison with the anchor. Results are presented comparing the efficiency of the current approach versus the proposed remapping method for HM-16.2.
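A linear remapping of a restricted code-value range onto the full range, and its inverse after decoding, can be sketched as follows; the range limits, bit depth, and linear form are illustrative assumptions and not necessarily the paper's exact mapping.

```python
# Sketch of linear code-value remapping: stretch the restricted range actually
# used by the content onto the full 10-bit range before encoding, and invert
# the mapping after decoding.  Range limits here are illustrative.
import numpy as np

def remap(code, lo, hi, full=1023):
    return np.round((code.astype(np.float64) - lo) / (hi - lo) * full)

def inverse_remap(code, lo, hi, full=1023):
    return np.round(code.astype(np.float64) / full * (hi - lo) + lo)

frame = np.random.default_rng(8).integers(64, 600, size=(4, 6))   # restricted range
stretched = remap(frame, lo=64, hi=600)
restored = inverse_remap(stretched, lo=64, hi=600)
print("max round-trip error (code values):", int(np.max(np.abs(restored - frame))))
```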
Effect of multilayer high-compression bandaging on ankle range of motion and oxygen cost of walking
Roaldsen, K S; Elfving, B; Stanghelle, J K; Mattsson, E
2012-01-01
Objective: To evaluate the effects of multilayer high-compression bandaging on ankle range of motion, oxygen consumption and subjective walking ability in healthy subjects. Method: A volunteer sample of 22 healthy subjects (10 women and 12 men; aged 67 [63–83] years) was studied. The intervention included treadmill walking at self-selected speed with and without multilayer high-compression bandaging (Profore®), with the order of conditions randomly selected. The primary outcome variables were ankle range of motion, oxygen consumption and subjective walking ability. Results: Total ankle range of motion decreased 4% with compression. No change in the oxygen cost of walking was observed. Less than half the subjects reported that walking-shoe comfort or walking distance was negatively affected. Conclusion: Ankle range of motion decreased with compression but could probably be counteracted with a regular exercise programme. There were no indications that walking with compression was more exhausting than walking without. Appropriate walking shoes seem important to secure gait efficiency when using compression garments. PMID:21810941
A Streaming PCA VLSI Chip for Neural Data Compression.
Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi
2017-12-01
Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 with reconstruction errors as low as 1% and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction errors and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high channel count recorder.
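To make the streaming idea concrete, here is a hedged algorithmic sketch (not the chip's fixed-point design): a classic online PCA update (Oja's rule) processes one multichannel sample at a time and converges toward the leading principal direction that a batch SVD would find. The synthetic data, channel count and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 5000
# Synthetic multichannel data with one dominant shared component (hypothetical).
latent = rng.standard_normal(n_samples)
mixing = rng.standard_normal(n_channels)
X = np.outer(mixing, latent) + 0.1 * rng.standard_normal((n_channels, n_samples))

w = rng.standard_normal(n_channels)
w /= np.linalg.norm(w)
eta = 1e-3                       # learning rate
for t in range(n_samples):       # one update per incoming sample (streaming)
    x = X[:, t]
    y = w @ x                    # 1-D compressed representation of this sample
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with implicit normalization

# Compare against the batch principal direction.
u, _, _ = np.linalg.svd(X, full_matrices=False)
print("alignment with batch PC1:", abs(w @ u[:, 0]) / np.linalg.norm(w))
```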
Wavelet data compression for archiving high-resolution icosahedral model data
NASA Astrophysics Data System (ADS)
Wang, N.; Bao, J.; Lee, J.
2011-12-01
With the increase in the resolution of global circulation models, it becomes ever more important to develop highly effective solutions to archive the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it can only achieve a limited reduction in data size. Wavelet-transform-based data compression offers significant potential for data size reduction, and it has been shown to be very effective in transmitting data for remote visualizations. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments, for both summer and winter seasons. An icosahedral-grid weather model and a highly efficient wavelet data compression software package were used for this study. Initial conditions were compressed and input to the model, which was then run out to 10 days. The forecast results were compared to those from the model run with the original, uncompressed initial conditions. Several visual comparisons, as well as statistics from the numerical comparisons, are presented. These results indicate that, with specified minimum accuracy losses, wavelet data compression achieves significant data size reduction while maintaining minimal numerical impact on the datasets. In addition, some issues are discussed for increasing archive efficiency while retaining a complete set of metadata for each archived file.
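For readers who want to see the basic mechanism, the hedged sketch below (using the PyWavelets package) thresholds the wavelet coefficients of a synthetic one-dimensional field and reports how many coefficients survive together with the resulting maximum error. The field, wavelet choice and per-level threshold rule are assumptions and do not reproduce the icosahedral-grid software or the compression tool used in the study.

```python
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 4096)
field = np.sin(8 * np.pi * x) + 0.3 * np.sin(34 * np.pi * x)   # smooth synthetic 1-D field

coeffs = pywt.wavedec(field, 'db4', level=5)
max_abs_error = 0.05                                  # user-specified accuracy budget (assumed)
thresh = max_abs_error / len(coeffs)                  # crude per-level threshold (assumption)
compressed = [pywt.threshold(c, thresh, mode='hard') for c in coeffs]

restored = pywt.waverec(compressed, 'db4')[:field.size]
kept = sum(np.count_nonzero(c) for c in compressed)
total = sum(c.size for c in coeffs)
print(f"kept {kept}/{total} coefficients, max error {np.abs(restored - field).max():.4g}")
```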
High-power picosecond pulses by SPM-induced spectral compression in a fiber amplifier
NASA Astrophysics Data System (ADS)
Schreiber, T.; Liem, A.; Roeser, F.; Zellmer, H.; Tuennermann, A.; Limpert, J.; Deguil-Robin, N.; Manek-Honninger, I.; Salin, F.; Courjaud, A.; Honninger, C.; Mottay, E.
2005-04-01
The fiber based generation of nearly transform-limited 10-ps pulses with 200 kW peak power (97 W average power) based on SPM-induced spectral compression is reported. Efficient second harmonic generation applying this source is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdelaziz, Omar; Qu, Ming; Sun, Xiao-Guang
Separate sensible and latent cooling systems offer superior energy efficiency compared to conventional vapor compression air conditioning systems. In this paper we describe an innovative non-vapor-compression system that uses an electrochemical compressor (ECC) to pump hydrogen between two metal hydride reservoirs to provide the sensible cooling effect. The heat rejected during this process is used to regenerate the ionic liquid (IL) used for desiccant dehumidification. The overall system design is illustrated. The Xergy version 4C electrochemical compressor, while not designed as a high-pressure system, develops in excess of 2 MPa (300 psia) and pressure ratios > 30. The projected base efficiency improvement of the electrochemical compressor is expected to be ~20%, with higher efficiency in low-capacity mode because the compressor can be throttled to lower capacity with improved efficiency. The IL was tailored to maximize the absorption/desorption rate of water vapor at moderate regeneration temperature. This IL, namely [EMIm].OAc, is a hydrophilic IL with a working concentration range of 28.98% when operating between 25 and 75 °C. The ECC metal hydride system is expected to show superior performance to typical vapor compression systems. As such, the combined efficiency gains from the use of the ECC and separate sensible and latent cooling would offer significant potential savings over existing vapor compression cooling technology. A high-efficiency window air conditioner system based on this novel configuration is described, and its schematic is provided. Models compared well with actual operating data obtained by running the prototype system. Finally, a model of a LiCl desiccant system in conjunction with the ECC-based metal hydride heat exchangers is provided.
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of the invariant moments used as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and that the reconstructed fingerprint image yields proper classification.
The effect of changes in compression ratio upon engine performance
NASA Technical Reports Server (NTRS)
Sparrow, Stanwood W
1925-01-01
This report is based upon engine tests made at the Bureau of Standards during 1920, 1921, 1922, and 1923. The majority of these tests were of aviation engines and were made in the Altitude Laboratory. For a small portion of the work a single cylinder experimental engine was used. This, however, was operated only at sea-level pressures. The report shows that an increase in brake horsepower and a decrease in the pounds of fuel used per brake horsepower hour usually result from an increase in compression ratio. This holds true at least up to the highest ratio investigated, 14 to 1, provided there is no serious preignition or detonation at any ratio. To avoid preignition and detonation when employing high-compression ratios, it is often necessary to use some fuel other than gasoline. It has been found that the consumption of some of these fuels in pounds per brake horsepower hour is so much greater than the consumption of gasoline that it offsets the decrease derived from the use of the high-compression ratio. The changes in indicated thermal efficiency with changes in compression ratio are in close agreement with what would be anticipated from a consideration of the air cycle efficiencies at the various ratios. In so far as these tests are concerned there is no evidence that a change in compression ratio produces an appreciable, consistent change in friction horsepower, volumetric efficiency, or in the range of fuel-air ratios over which the engine can operate. The ratio between the heat loss to the jacket water and the heat converted into brake horsepower or indicated horsepower decreases with increase in compression ratio. (author)
Canova, Frederico; Clady, Raphael; Chambaret, Jean-Paul; Flury, Manuel; Tonchev, Svetlen; Fechner, Renate; Parriaux, Olivier
2007-11-12
High-efficiency, broad-band TE-polarization diffraction over a wavelength range centered at 800 nm is obtained with high-index gratings placed on a non-corrugated mirror. Wide-band, top-hat diffraction efficiency spectra exceeding 96%, as well as a damage threshold of more than 1 J/cm² under 50 fs pulses, are demonstrated experimentally. This opens the way to high-efficiency Chirped Pulse Amplification for high average power laser machining by means of all-dielectric structures, as well as for ultra-short high-energy pulses by means of metal-dielectric structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Liulin; Garimella, Sandilya V. B.; Hamid, Ahmed M.
We report on the implementation of a traveling wave (TW) based compression ratio ion mobility programming (CRIMP) approach within Structures for Lossless Ion Manipulations (SLIM) that enables both greatly enlarged trapped ion charge capacities and also their subsequent efficient compression for use in ion mobility (IM) separations. Ion accumulation is conducted in a long serpentine path TW SLIM region, after which CRIMP allows the large ion populations to be 'squeezed'. The compression process occurs at an interface between two SLIM regions, one operating conventionally and the second having an intermittently pausing or 'stuttering' TW, allowing the contents of multiple bins of ions from the first region to be merged into a single bin in the second region. In this initial work stationary voltages in the second region were used to block ions from exiting the first (trapping) region, and the resumption of TWs in the second region allows ions to exit, and the population to also be compressed if CRIMP is applied. In our initial evaluation we show that the number of charges trapped for a 40 s accumulation period was ~5×10⁹, more than two orders of magnitude greater than the previously reported charge capacity using an ion funnel trap. We also show that over 1×10⁹ ions can be accumulated with high efficiency in the present device, and that the extent of subsequent compression is only limited by the space charge capacity of the trapping region. Lower compression ratios allow increased IM peak heights without significant loss of signal, while excessively large compression ratios can lead to ion losses and other artifacts. Importantly, we show that extended ion accumulation in conjunction with CRIMP and multiple passes provides the basis for a highly desirable combination of ultra-high sensitivity and ultra-high resolution IM separations using SLIM.
High Order Filter Methods for the Non-ideal Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.
Divergence Free High Order Filter Methods for the Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
Structural Efficiency of Composite Struts for Aerospace Applications
NASA Technical Reports Server (NTRS)
Jegley, Dawn C.; Wu, K. Chauncey; McKenney, Martin J.; Oremont, Leonard
2011-01-01
The structural efficiency of carbon-epoxy tapered struts is considered through trade studies, detailed analysis, manufacturing and experimentation. Since some of the lunar lander struts are more highly loaded than struts used in applications such as satellites and telescopes, the primary focus of the effort is on these highly loaded struts. Lunar lander requirements include that the strut has to be tapered on both ends, complicating the design and limiting the manufacturing process. Optimal stacking sequences, geometries, and materials are determined and the sensitivity of the strut weight to each parameter is evaluated. The trade study results indicate that the most efficient carbon-epoxy struts are 30 percent lighter than the most efficient aluminum-lithium struts. Structurally efficient, highly loaded struts were fabricated and loaded in tension and compression to determine if they met the design requirements and to verify the accuracy of the analyses. Experimental evaluation of some of these struts demonstrated that they could meet the greatest Altair loading requirements in both tension and compression. These results could be applied to other vehicles requiring struts with high loading and light weight.
Effect of compressibility on the hypervelocity penetration
NASA Astrophysics Data System (ADS)
Song, W. J.; Chen, X. W.; Chen, P.
2018-02-01
We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.
Symmetry compression method for discovering network motifs.
Wang, Jianxin; Huang, Yuannan; Wu, Fang-Xiang; Pan, Yi
2012-01-01
Discovering network motifs could provide significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named SCMD, eliminates most redundant calculations caused by the widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. Results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD has almost no effect on the quality of the sampling results. For highly symmetric networks, we find that SCMD used in both exact and sampling algorithms yields a remarkable speedup. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible.
NASA Astrophysics Data System (ADS)
Teng, J.; Gu, Y. Q.; Zhu, B.; Hong, W.; Zhao, Z. Q.; Zhou, W. M.; Cao, L. F.
2013-11-01
This paper presents a new method of laser-produced proton beam collimation and spectrum compression using a combination of a solenoid field and an RF cavity. The solenoid collects laser-driven protons efficiently within an angle smaller than 12 degrees because it is mounted a few millimeters from the target, and it collimates protons with energies around 2.3 MeV. The collimated proton beam then passes through an RF cavity to allow compression of the spectrum. Particle-in-cell (PIC) simulations demonstrate the proton beam transport in the solenoid and RF electric fields. Excellent energy compression and collection efficiency of protons are presented. This method for proton beam optimization is suitable for high repetition-rate laser-accelerated proton beams, which could be used as an injector for a conventional proton accelerator.
Compression of next-generation sequencing reads aided by highly efficient de novo assembly
Jones, Daniel C.; Ruzzo, Walter L.; Peng, Xinxia
2012-01-01
We present Quip, a lossless compression algorithm for next-generation sequencing data in the FASTQ and SAM/BAM formats. In addition to implementing reference-based compression, we have developed, to our knowledge, the first assembly-based compressor, using a novel de novo assembly algorithm. A probabilistic data structure is used to dramatically reduce the memory required by traditional de Bruijn graph assemblers, allowing millions of reads to be assembled very efficiently. Read sequences are then stored as positions within the assembled contigs. This is combined with statistical compression of read identifiers, quality scores, alignment information and sequences, effectively collapsing very large data sets to <15% of their original size with no loss of information. Availability: Quip is freely available under the 3-clause BSD license from http://cs.washington.edu/homes/dcjones/quip. PMID:22904078
NASA Astrophysics Data System (ADS)
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged to offer low-cost sensing with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost: huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (above Nyquist) uniform sampling and storage of the entire target signal, followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis performed on the compressed data remains accurate.
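The sketch below illustrates the general compressed-sensing pipeline described above on a toy signal: a random measurement matrix compresses the signal at acquisition time, and a simple iterative soft-thresholding (ISTA) solver recovers it from far fewer measurements by exploiting sparsity in a DCT basis. The signal, matrix sizes and solver settings are assumptions, not the Narada deployment.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 512, 128                                    # signal length, number of measurements

# Orthonormal DCT-II basis (rows are basis vectors), used as the sparsifying basis.
i = np.arange(n)
Psi = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
Psi[0] *= np.sqrt(1.0 / n)
Psi[1:] *= np.sqrt(2.0 / n)

s_true = np.zeros(n)
s_true[[5, 23, 57]] = [1.0, -0.7, 0.4]             # synthetic sparse spectrum (assumption)
x = Psi.T @ s_true                                 # acceleration-like time signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random measurement matrix at the sensor
y = Phi @ x                                        # compressed measurements (4x fewer samples)

# Recovery at the base station: ISTA on min ||y - Phi Psi^T s||^2 + lam ||s||_1
A = Phi @ Psi.T
L = np.linalg.norm(A, 2) ** 2                      # step size from the largest singular value
s = np.zeros(n)
lam = 0.01
for _ in range(500):
    s = s + A.T @ (y - A @ s) / L
    s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)   # soft threshold

x_hat = Psi.T @ s
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```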
A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of conventional JPEG-LS.
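For context, the prediction stage that both conventional and block-based JPEG-LS rely on is the well-known median edge detector (MED); the small function below reproduces that standard predictor. The paper's RESET adaptation itself is not reproduced here, and the sample pixel values are illustrative.

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: a = left, b = above, c = above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)      # likely edge: predict the smaller neighbour
    if c <= min(a, b):
        return max(a, b)      # likely edge: predict the larger neighbour
    return a + b - c          # smooth region: planar prediction

print(med_predict(100, 102, 101))   # -> 101 (planar prediction)
print(med_predict(50, 200, 210))    # -> 50  (edge detected above-left)
```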
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R. [Albuquerque, NM]
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images
NASA Astrophysics Data System (ADS)
Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.
2014-03-01
Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
NASA Astrophysics Data System (ADS)
Chen, Xiaotao; Song, Jie; Liang, Lixiao; Si, Yang; Wang, Le; Xue, Xiaodai
2017-10-01
Large-scale energy storage systems (ESS) play an important role in the planning and operation of the smart grid and the energy internet. Compressed air energy storage (CAES) is one of the promising large-scale energy storage techniques. However, the high cost of compressed air storage and the low capacity remain to be solved. This paper proposes a novel non-supplementary fired compressed air energy storage (NSF-CAES) system based on salt cavern air storage to address the issues of air storage and CAES efficiency. The operating mechanisms of the proposed NSF-CAES are analysed on the basis of thermodynamic principles. Key factors that affect the system storage efficiency are thoroughly explored. The energy storage efficiency of the proposed NSF-CAES system can be improved by reducing the maximum working pressure of the salt cavern and improving the inlet air pressure of the turbine. Simulation results show that the electric-to-electric conversion efficiency of the proposed NSF-CAES can reach 63.29% with a maximum salt cavern working pressure of 9.5 MPa and a 9 MPa turbine inlet air pressure, which is higher than that of current commercial CAES plants.
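To make the round-trip efficiency figure easier to interpret, the following first-order, single-stage, ideal-gas estimate computes compression work, stored heat and turbine work for a generic adiabatic CAES cycle. Every number below is an illustrative assumption; this is not the NSF-CAES model and is not expected to reproduce the 63.29% result.

```python
cp, gamma = 1005.0, 1.4             # air as an ideal gas, J/(kg K)
T_in = 293.0                        # compressor inlet temperature, K (assumed)
pr_c, pr_t = 95.0, 90.0             # compressor / turbine pressure ratios (assumed)
eta_c, eta_t = 0.82, 0.85           # isentropic machine efficiencies (assumed)
eta_heat = 0.80                     # fraction of compression heat returned to the air (assumed)

e = (gamma - 1.0) / gamma
w_comp = cp * T_in * (pr_c ** e - 1.0) / eta_c             # specific compression work, J/kg
T_comp_out = T_in * (1.0 + (pr_c ** e - 1.0) / eta_c)      # compressor outlet temperature, K
T_turb_in = T_in + eta_heat * (T_comp_out - T_in)          # air reheated from thermal storage
w_turb = eta_t * cp * T_turb_in * (1.0 - pr_t ** (-e))     # specific turbine work, J/kg

print(f"compression work {w_comp/1e3:.0f} kJ/kg, turbine work {w_turb/1e3:.0f} kJ/kg, "
      f"round-trip estimate {w_turb / w_comp:.1%}")
```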
NASA Astrophysics Data System (ADS)
Dreißigacker, Volker
2018-04-01
The development of new technologies for large-scale electricity storage is a key element of future flexible electricity transmission systems. Electricity storage in adiabatic compressed air energy storage (A-CAES) power plants offers the prospect of making a substantial contribution toward this goal. The concept allows efficient, local, zero-emission electricity storage on the basis of compressed air in underground caverns. The compression and expansion of air in turbomachinery help to balance power generation peaks that are not demand-driven on the one hand and consumption-induced load peaks on the other. For further improvements in cost efficiency and flexibility, system modifications are necessary. Therefore, a novel concept involving the integration of an electrical heating component is investigated. This modification increases power plant flexibility and decreases component sizes, owing to the high-temperature heat generated, while decreasing the total round-trip efficiency. For an exemplary A-CAES case, simulation studies of the electrical heating power and the thermal energy storage size were conducted to identify the potential cost reductions for the central power plant components and the associated loss in round-trip efficiency.
Compression of regions in the global advanced very high resolution radiometer 1-km data set
NASA Technical Reports Server (NTRS)
Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.
1994-01-01
The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.
Predefined Redundant Dictionary for Effective Depth Maps Representation
NASA Astrophysics Data System (ADS)
Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi
2016-01-01
The multi-view video plus depth (MVD) video format consists of two components, texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the positions of object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness of the approach at producing sparse representations, and its competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to offer an advantage over the ongoing 3D High Efficiency Video Coding compression standard, particularly at medium and high bitrates.
Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon
2014-01-01
We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals
Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.
2018-03-20
A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role in both the quality and the power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or a high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices, which result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× relative to data-driven real-valued embedding.
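The appeal of a Boolean sampling matrix is that each measurement reduces to a sum of selected samples instead of full multiply-accumulates. The sketch below contrasts a random Bernoulli(0.5) Boolean matrix with a Gaussian one on a toy sparse signal; it does not reproduce the paper's data-driven optimization, and the sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 64                                     # signal length, number of measurements
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)   # sparse test signal

Phi_bool = (rng.random((m, n)) < 0.5).astype(np.int8)          # random Boolean (0/1) matrix
Phi_gauss = rng.standard_normal((m, n)) / np.sqrt(m)           # real-valued Gaussian matrix

y_bool = Phi_bool @ x      # each row: add the selected samples, no multiplications needed
y_gauss = Phi_gauss @ x    # each row: n multiply-accumulate operations

print("average additions per Boolean measurement:", int(Phi_bool.sum() / m))
print("operations per Gaussian measurement:", n, "multiplies +", n - 1, "adds")
```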
Development of Carbon Dioxide Hermetic Compressor
NASA Astrophysics Data System (ADS)
Imai, Satoshi; Oda, Atsushi; Ebara, Toshiyuki
Because of global environmental problems, existing refrigerants are to be replaced with natural refrigerants. CO2 is a natural refrigerant that is environmentally safe, non-flammable and non-toxic. Therefore, high-efficiency compressors that can operate with natural refrigerants, especially CO2, need to be developed. We developed a prototype CO2 hermetic compressor that can be used in practical carbon dioxide refrigerating systems. The compressor has two rolling pistons, which leads to low vibration and low noise. In addition, two-stage compression with two cylinders is adopted, because the pressure difference is too large to compress in one stage. The inner pressure of the shell case is held at the intermediate pressure to minimize gas leakage between the compression chambers and the inner space of the shell case. The intermediate-pressure design made the compressor smaller in size and lighter in weight. As a result, the compressor achieves high efficiency and high reliability through these technologies. We plan to study heat pump water heaters, cup vending machines and various other applications of the CO2 compressor.
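As a worked illustration of why holding the shell at an intermediate pressure fits a two-stage design: for an idealized two-stage compressor with perfect intercooling, total work is minimized when the intermediate pressure equals the geometric mean of suction and discharge pressures. The CO2 pressures and the ideal-gas treatment below are illustrative assumptions, not the prototype's operating points.

```python
import math

p_suction, p_discharge = 3.5e6, 10.0e6      # Pa, assumed transcritical CO2 operating points

def relative_two_stage_work(p1, pm, p2, gamma=1.3):
    """Relative isentropic work of two stages with ideal intercooling back to inlet temperature."""
    e = (gamma - 1.0) / gamma
    return ((pm / p1) ** e - 1.0) + ((p2 / pm) ** e - 1.0)

p_opt = math.sqrt(p_suction * p_discharge)   # geometric-mean intermediate pressure
for pm in (4.5e6, p_opt, 8.0e6):
    w = relative_two_stage_work(p_suction, pm, p_discharge)
    print(f"intermediate pressure {pm/1e6:.1f} MPa -> relative work {w:.4f}")
```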
A compressible multiphase framework for simulating supersonic atomization
NASA Astrophysics Data System (ADS)
Regele, Jonathan D.; Garrick, Daniel P.; Hosseinzadeh-Nik, Zahra; Aslani, Mohamad; Owkes, Mark
2016-11-01
The study of atomization in supersonic combustors is critical in designing efficient and high performance scramjets. Numerical methods incorporating surface tension effects have largely focused on the incompressible regime as most atomization applications occur at low Mach numbers. Simulating surface tension effects in high speed compressible flow requires robust numerical methods that can handle discontinuities caused by both material interfaces and shocks. A shock capturing/diffused interface method is developed to simulate high-speed compressible gas-liquid flows with surface tension effects using the five-equation model. This includes developments that account for the interfacial pressure jump that occurs in the presence of surface tension. A simple and efficient method for computing local interface curvature is developed and an acoustic non-dimensional scaling for the surface tension force is proposed. The method successfully captures a variety of droplet breakup modes over a range of Weber numbers and demonstrates the impact of surface tension in countering droplet deformation in both subsonic and supersonic cross flows.
Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods
NASA Astrophysics Data System (ADS)
Diosady, Laslo T.; Murman, Scott M.
2017-02-01
A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
Tensor-Product Preconditioners for Higher-Order Space-Time Discontinuous Galerkin Methods
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2016-01-01
A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
Compressed air production with waste heat utilization in industry
NASA Astrophysics Data System (ADS)
Nolting, E.
1984-06-01
The centralized power-heat coupling (PHC) technique using block heating power stations is presented. Compressed air production with the PHC technique and an internal combustion engine drive achieves a high degree of primary energy utilization. Cost savings of 50% are reached compared to conventional production. The simultaneous utilization of compressed air and heat is especially interesting. A speed-regulated drive via an internal combustion motor gives a further saving of 10% to 20% compared to intermittent operation. The high fuel utilization efficiency (approximately 80%) leads to payback after two years for operation times of 3000 hr.
A seismic data compression system using subband coding
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical realtime solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
NASA Technical Reports Server (NTRS)
Stack, John; Draley, Eugene C; Delano, James B; Feldman, Lewis
1950-01-01
As part of a general investigation of propellers at high forward speeds, tests of two 2-blade propellers having the NACA 4-(3)(8)-03 and NACA 4-(3)(8)-45 blade designs have been made in the Langley 8-foot high-speed tunnel through a range of blade angles from 20 degrees to 60 degrees for forward Mach numbers from 0.165 to 0.725 to establish in detail the changes in propeller characteristics due to compressibility effects. These propellers differed primarily only in blade solidity, one propeller having 50 percent more solidity than the other. Serious losses in propeller efficiency were found as the propeller tip Mach number exceeded 0.91, irrespective of forward speed or blade angle. The magnitude of the efficiency losses varied from 9 percent to 22 percent per 0.1 increase in tip Mach number above the critical value. The range of advance ratio for peak efficiency decreased markedly with increase of forward speed. The general form of the changes in thrust and power coefficients was found to be similar to the changes in airfoil lift coefficient with changes in Mach number. Efficiency losses due to compressibility effects decreased with increase of blade width. The results indicated that the high level of propeller efficiency obtained at low speeds could be maintained to forward sea-level speeds exceeding 500 miles per hour.
Video Compression Study: h.265 vs h.264
NASA Technical Reports Server (NTRS)
Pryor, Jonathan
2016-01-01
H.265 video compression (also known as High Efficiency Video Coding (HEVC)) promises to provide double the video quality at the same bandwidth, or the same quality at half the bandwidth, of h.264 video compression [1]. This study uses a Tektronix PQA500 to determine the video quality gains from using h.265 encoding. This study also compares two video encoders to see how different implementations of h.264 and h.265 affect video quality at various bandwidths.
Gas turbine power plant with supersonic shock compression ramps
Lawlor, Shawn P. [Bellevue, WA]; Novaresi, Mark A. [San Diego, CA]; Cornelius, Charles C. [Kirkland, WA]
2008-10-14
A gas turbine engine. The engine is based on the use of a gas turbine driven rotor having a compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp) which compresses inlet gas against a stationary sidewall. The supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the rim of the rotor, the strakes, and a stationary external housing. Part load efficiency is enhanced by use of a lean pre-mix system, a pre-swirl compressor, and a bypass stream to bleed a portion of the gas after passing through the pre-swirl compressor to the combustion gas outlet. Use of a stationary low NOx combustor provides excellent emissions results.
A compression scheme for radio data in high performance computing
NASA Astrophysics Data System (ADS)
Masui, K.; Amiri, M.; Connor, L.; Deng, M.; Fandino, M.; Höfer, C.; Halpern, M.; Hanna, D.; Hincks, A. D.; Hinshaw, G.; Parra, J. M.; Newburgh, L. B.; Shaw, J. R.; Vanderlinde, K.
2015-09-01
We present a procedure for efficiently compressing astronomical radio data for high performance applications. Integrated, post-correlation data are first passed through a nearly lossless rounding step which compares the precision of the data to a generalized and calibration-independent form of the radiometer equation. This allows the precision of the data to be reduced in a way that has an insignificant impact on the data. The newly developed Bitshuffle lossless compression algorithm is subsequently applied. When the algorithm is used in conjunction with the HDF5 library and data format, data produced by the CHIME Pathfinder telescope is compressed to 28% of its original size and decompression throughputs in excess of 1 GB/s are obtained on a single core.
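A hedged sketch of the rounding idea described above: quantize each sample on a grid that is a small fraction of its thermal-noise level predicted by the radiometer equation, so the added error is negligible relative to the noise and the data become far more compressible. The system temperature, bandwidth, integration time and granularity factor below are illustrative assumptions, not CHIME parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
T_sys, bandwidth, t_int = 50.0, 390e3, 10.0        # K, Hz, s (assumed values)
sigma = T_sys / np.sqrt(bandwidth * t_int)         # radiometer-equation noise estimate, K

samples = 5.0 + sigma * rng.standard_normal(100_000)   # synthetic noisy intensity samples, K
granularity = sigma / 8.0                               # keep quantization well below the noise
rounded = np.round(samples / granularity) * granularity

added_rms = np.sqrt(np.mean((rounded - samples) ** 2))
print(f"noise sigma = {sigma:.4f} K, added quantization rms = {added_rms:.5f} K "
      f"({added_rms / sigma:.1%} of the noise)")
```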
Fast depth decision for HEVC inter prediction based on spatial and temporal correlation
NASA Astrophysics Data System (ADS)
Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi
2016-07-01
High Efficiency Video Coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance compression accuracy, HEVC supports partition sizes ranging from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision for HEVC inter prediction based on spatial and temporal correlation. Spatial correlation exploits the splitting information of neighboring coding tree units (CTUs), and temporal correlation exploits the CTU indicated by the motion vector predictor in inter prediction, to determine the maximum depth to search in each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
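The decision logic can be summarized by a small rule of the kind sketched below: cap the CU depth search of the current CTU using the depths chosen by already-coded spatial neighbours and by the temporally co-located CTU pointed to by the motion vector predictor. The specific rule and thresholds here are assumptions for illustration, not the paper's exact algorithm.

```python
def max_depth_to_search(left_depth, above_depth, temporal_depth, full_depth=3):
    """Return the maximum CU depth to evaluate for the current CTU (0 = 64x64, 3 = 8x8)."""
    context = [d for d in (left_depth, above_depth, temporal_depth) if d is not None]
    if not context:
        return full_depth                    # no context available: fall back to the full search
    cap = max(context)
    if cap < full_depth and min(context) == cap:
        return cap                           # all reference CTUs agree: stop splitting early
    return min(cap + 1, full_depth)          # otherwise allow one extra split level

print(max_depth_to_search(1, 1, 1))          # -> 1 (homogeneous area, skip deep partitions)
print(max_depth_to_search(2, 1, 3))          # -> 3 (complex area, keep the full search)
```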
Fast lossless compression via cascading Bloom filters
2014-01-01
Background: Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results: We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions: Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly. PMID:25252952
Fast lossless compression via cascading Bloom filters.
Rozov, Roye; Shamir, Ron; Halperin, Eran
2014-01-01
Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly.
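A minimal sketch of the core encode/decode idea (not the BARCODE implementation): reads are inserted into a Bloom filter, and decoding slides a read-length window along the reference and keeps the windows the filter reports as present. The filter size, hash construction and toy sequences are simplified assumptions, and the cascading used to remove false positives is omitted.

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.blake2b(item.encode(), salt=bytes([i])).digest()
            yield int.from_bytes(h[:8], "little") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

reference = "ACGTACGTTTGACCAGTACGGTTACGATCCGA"      # toy reference genome
read_len = 8
reads = [reference[3:11], reference[20:28]]          # toy "sequenced reads"

bf = BloomFilter()
for r in reads:
    bf.add(r)                                        # encoding: store membership only

decoded = {reference[i:i + read_len]                 # decoding: query every reference window
           for i in range(len(reference) - read_len + 1)
           if reference[i:i + read_len] in bf}
print(sorted(decoded))
```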
Lawlor, Shawn P. [Bellevue, WA]; Novaresi, Mark A. [San Diego, CA]; Cornelius, Charles C. [Kirkland, WA]
2008-02-26
A gas compressor based on the use of a driven rotor having an axially oriented compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp), which forms a supersonic shockwave axially, between adjacent strakes. In using this method to compress inlet gas, the supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the gas compression ramp on a strake and the shock capture lip on the adjacent strake, and captures the resultant pressure within the stationary external housing while providing a diffuser downstream of the compression ramp.
LES of Temporally Evolving Mixing Layers by an Eighth-Order Filter Scheme
NASA Technical Reports Server (NTRS)
Hadjadj, A; Yee, H. C.; Sjogreen, B.
2011-01-01
An eighth-order filter method for a wide range of compressible flow speeds (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is employed for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) and Reynolds numbers. The high order filter method is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The values of Mc considered span the TML range from the quasi-incompressible regime to the highly compressible supersonic regime. The three main characteristics of compressible TML (the self-similarity property, compressibility effects, and the presence of large-scale structures with shocklets at high Mc) are considered for the LES study. The LES results, using the same scheme parameters for all studied cases, agree well with the experimental results of Barone et al. (2006) and with the published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002).
Comparison of lossless compression techniques for prepress color images
NASA Astrophysics Data System (ADS)
Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.
1998-12-01
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
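A minimal sketch of the factored, truncated representation described above, assuming the multivariate image is a (rows, cols, channels) array and using a plain SVD in place of whatever factorization the patented method employs; spectral_compress and spectral_decompress are illustrative names.

```python
# Keep only the top-k spectral factors (scores x loadings); subsequent analysis
# can then operate on the small factored representation instead of the full cube.
import numpy as np

def spectral_compress(cube, k):
    rows, cols, channels = cube.shape
    X = cube.reshape(rows * cols, channels).astype(np.float64)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U[:, :k] * s[:k]          # spatial factors, one column per component
    loadings = Vt[:k]                  # spectral factors
    return scores, loadings, mean

def spectral_decompress(scores, loadings, mean, shape):
    return (scores @ loadings + mean).reshape(shape)

if __name__ == "__main__":
    cube = np.random.rand(64, 64, 100)              # toy multivariate image
    scores, loadings, mean = spectral_compress(cube, k=10)
    approx = spectral_decompress(scores, loadings, mean, cube.shape)
    print("relative error:", np.linalg.norm(cube - approx) / np.linalg.norm(cube))
```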
Mangal, Sharad; Meiser, Felix; Morton, David; Larson, Ian
2015-01-01
Tablets represent the preferred and most commonly dispensed pharmaceutical dosage form for administering active pharmaceutical ingredients (APIs). Minimizing the cost of goods and improving manufacturing output efficiency have motivated companies to use direct compression as a preferred method of tablet manufacturing. Excipients dictate the success of direct compression, notably by optimizing powder formulation compactability and flow; thus, there has been a surge in creating excipients specifically designed to meet these needs for direct compression. Greater scientific understanding of tablet manufacturing, coupled with effective application of the principles of material science and particle engineering, has resulted in a number of improved direct compression excipients. Despite this, significant practical disadvantages of direct compression remain relative to granulation, and this is partly due to the limitations of direct compression excipients. For instance, in formulating high-dose APIs, a much higher level of excipient is required relative to wet or dry granulation, and so tablets are much bigger. Creating excipients to enable direct compression of high-dose APIs requires knowledge of the relationship between fundamental material properties and excipient functionalities. In this paper, we review the current understanding of the relationship between fundamental material properties and excipient functionality for direct compression.
Bitshuffle: Filter for improving compression of typed binary data
NASA Astrophysics Data System (ADS)
Masui, Kiyoshi
2017-12-01
Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits occupy one row, and so on. This transposition is performed within blocks of data roughly 8 kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
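A rough Python/NumPy sketch of the bit-level transposition described above; the real Bitshuffle is an optimized C implementation with SIMD support. The function name bit_transpose is illustrative, and the inverse transform is omitted.

```python
# Each element's bits become a column of a matrix; transposing the matrix puts
# all same-significance bits together, which typically compresses better.
import numpy as np
import zlib

def bit_transpose(block: np.ndarray) -> bytes:
    # block: 1-D array of an unsigned integer dtype (Bitshuffle works on ~8 kB blocks)
    as_bytes = block.view(np.uint8).reshape(block.size, block.dtype.itemsize)
    bits = np.unpackbits(as_bytes, axis=1)   # rows = elements, columns = bits
    return np.packbits(bits.T).tobytes()     # transpose: bit planes become contiguous

if __name__ == "__main__":
    data = np.arange(4096, dtype=np.uint16) % 7       # low-entropy typed data
    print(len(zlib.compress(data.tobytes())), "bytes without shuffle")
    print(len(zlib.compress(bit_transpose(data))), "bytes with bit transposition")
```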
NASA Technical Reports Server (NTRS)
Tserng, Hua-Quen; Ketterson, Andrew; Saunier, Paul; McCarty, Larry; Davis, Steve
1998-01-01
The design, fabrication, and performance of K-band high-efficiency, linear power pHEMT amplifiers implemented in an Embedded Transmission Line (ETL) MMIC configuration with an unthinned GaAs substrate and topside grounding are reported. A three-stage amplifier achieved a power-added efficiency of 40.5% with 264 mW output at 20.2 GHz. The linear gain is 28.5 dB with a 1-dB gain compression output power of 200 mW and 31% power-added efficiency. The carrier-to-third-order intermodulation ratio is approximately 20 dBc at the 1-dB compression point. An RF functional yield of more than 90% has been achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. The most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with well-known compression methods such as LZ77 and the Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
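A much-simplified sketch of the word-aligned idea behind WAH, assuming 32-bit words, 31-bit groups, and fill words carrying a 30-bit run length; this is an illustration only, not the FastBit implementation, and it omits the compressed-domain bitwise operations that give WAH its query speed.

```python
# The bitmap is cut into 31-bit groups; runs of identical all-0 or all-1 groups
# collapse into a single "fill" word, and mixed groups become "literal" words.
def wah_encode(bits):
    bits = list(bits)
    bits += [0] * (-len(bits) % 31)                     # pad to a multiple of 31
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):                   # uniform group: try a fill run
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            # fill word: MSB=1, next bit = fill value, low 30 bits = run length
            words.append((1 << 31) | (g[0] << 30) | run)
            i += run
        else:                                           # literal word: MSB=0, 31 payload bits
            value = 0
            for b in g:
                value = (value << 1) | b
            words.append(value)
            i += 1
    return words

if __name__ == "__main__":
    bitmap = [0] * 1000 + [1] * 62 + [0, 1, 0, 1] * 10
    print([hex(w) for w in wah_encode(bitmap)])
```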
An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression have been done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme. This insertion into the coarse mesh yields high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. The experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.
NASA Astrophysics Data System (ADS)
Pyszczek, R.; Mazuro, P.; Teodorczyk, A.
2016-09-01
This paper is focused on CAI combustion control in a turbocharged 2-stroke Opposed-Piston (OP) engine. The barrel-type OP engine arrangement is of particular interest to the authors because of its robust design, high mechanical efficiency and relatively easy incorporation of a Variable Compression Ratio (VCR). The other advantage of such a design is that the combustion chamber is formed between two moving pistons - there is no additional cylinder head to be cooled, which directly results in an increased thermal efficiency. Furthermore, engine operation in a Controlled Auto-Ignition (CAI) mode at high compression ratios (CR) raises the possibility of reaching even higher efficiencies and very low emissions. In order to control CAI combustion, measures such as VCR and water injection were considered for indirect ignition timing control. Numerical simulations of the scavenging and combustion processes were performed with the 3D CFD multipurpose AVL Fire solver. Numerous cases were calculated with different engine compression ratios and different amounts of directly and indirectly injected water. The influence of the VCR and water injection on the ignition timing and engine performance was determined, and their application in the real engine was discussed.
Energy-efficient sensing in wireless sensor networks using compressed sensing.
Razzaque, Mohammad Abdur; Dobson, Simon
2014-02-12
Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios and sensor motes. In light of the importance of sensing level energy costs, especially for power hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.
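A minimal compressed-sensing sketch of the sensing-side saving discussed above: a node transmits m random projections of an n-sample window instead of the window itself, and the sink reconstructs the (assumed sparse) signal. The sparsity level, measurement matrix and use of scikit-learn's orthogonal matching pursuit are illustrative assumptions, not the algorithms evaluated in the paper.

```python
# Compression is a single matrix-vector product at the node; recovery happens at the sink.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                           # window length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse field

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                     # what the node actually transmits (m << n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```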
Nano-electro-mechanical pump: Giant pumping of water in carbon nanotubes
Farimani, Amir Barati; Heiranian, Mohammad; Aluru, Narayana R.
2016-01-01
A fully controllable nano-electro-mechanical device that can pump fluids at nanoscale is proposed. Using molecular dynamics simulations, we show that an applied electric field to an ion@C60 inside a water-filled carbon nanotube can pump water with excellent efficiency. The key physical mechanism governing the fluid pumping is the conversion of electrical energy into hydrodynamic flow with efficiencies as high as 64%. Our results show that water can be compressed up to 7% higher than its bulk value by applying electric fields. High flux of water (up to 13,000 molecules/ns) is obtained by the electro-mechanical, piston-cylinder-like moving mechanism of the ion@C60 in the CNT. This large flux results from the piston-like mechanism, compressibility of water (increase in density of water due to molecular ordering), orienting dipole along the electric field and efficient electrical to mechanical energy conversion. Our findings can pave the way towards efficient energy conversion, pumping of fluids at nanoscale, and drug delivery. PMID:27193507
High-order ENO schemes applied to two- and three-dimensional compressible flow
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley
1991-01-01
High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.
Tseng, Yun-Hua; Lu, Chih-Wen
2017-01-01
Compressed sensing (CS) is a promising approach to the compression and reconstruction of electrocardiogram (ECG) signals. It has been shown that following reconstruction, most of the changes between the original and reconstructed signals are distributed in the Q, R, and S waves (QRS) region. Furthermore, any increase in the compression ratio tends to increase the magnitude of the change. This paper presents a novel approach integrating the near-precise compressed (NPC) and CS algorithms. The simulation results presented notable improvements in signal-to-noise ratio (SNR) and compression ratio (CR). The efficacy of this approach was verified by fabricating a highly efficient low-cost chip using the Taiwan Semiconductor Manufacturing Company’s (TSMC) 0.18-μm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The proposed core has an operating frequency of 60 MHz and gate counts of 2.69 K. PMID:28991216
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
The manual feature extraction of traditional methods for vehicle license plates lacks robustness to diverse variations. Moreover, the high dimension of the features extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, which is a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensionality of the extracted features. Finally, a Support Vector Machine (SVM) is used to train and recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with omitting the compressive sensing step, the proposed method has a lower feature dimension, which increases efficiency.
NASA Technical Reports Server (NTRS)
Browning, L. H.; Argenbright, L. A.
1983-01-01
A thermokinetic SI engine simulation was used to study the effects of simple nitrogen oxide control techniques on performance and emissions of a methanol fueled engine. As part of this simulation, a ring crevice storage model was formulated to predict UBF emissions. The study included spark retard, two methods of compression ratio increase and EGR. The study concludes that use of EGR in high turbulence, high compression engines will both maximize power and thermal efficiency while minimizing harmful exhaust pollutants.
SEMG signal compression based on two-dimensional techniques.
de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino
2016-04-18
Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The encoder was modified in order to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference versus compression factor figures, for low and high compression factors, respectively. Regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors, the combination of SbS and HEVC proved to be competitive for high compression factors, and JPEG2000 combined with PDS provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference versus compression factor. The proposed schemes are effective; in particular, the modified MMP algorithm can be considered an interesting alternative to traditional SEMG encoders for isometric signals. Moreover, the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems may already provide such encoders in the underlying hardware/software architecture.
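A rough sketch of the generic two-dimensional idea above: the 1-D SEMG record is cut into fixed-length segments stacked as image rows, so an off-the-shelf image codec can exploit intra- and inter-segment correlation. PNG (via Pillow) stands in here for the JPEG2000/H.264/HEVC encoders actually studied, and the segment length and 8-bit scaling are illustrative assumptions.

```python
# Reassemble a 1-D signal as a 2-D "image" and hand it to a standard image codec.
import io
import numpy as np
from PIL import Image

def semg_to_image(signal: np.ndarray, seg_len: int) -> np.ndarray:
    n_seg = len(signal) // seg_len
    mat = signal[:n_seg * seg_len].reshape(n_seg, seg_len)   # one segment per row
    lo, hi = mat.min(), mat.max()
    return ((mat - lo) / (hi - lo) * 255).astype(np.uint8)   # crude 8-bit scaling

if __name__ == "__main__":
    t = np.arange(200_000) / 2000.0
    semg = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)  # toy record
    img = semg_to_image(semg, seg_len=500)
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="PNG")
    print(f"{semg.nbytes} raw bytes -> {buf.getbuffer().nbytes} bytes as PNG")
```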
Efficient biprediction decision scheme for fast high efficiency video coding encoding
NASA Astrophysics Data System (ADS)
Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won
2016-11-01
An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether the prediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Experimental results showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.
Roofbolters with compressed-air rotators
NASA Astrophysics Data System (ADS)
Lantsevich, MA; Repin, AA; Klishin, VI; Kokoulin, DI
2018-03-01
The specifications of the most popular roofbolters of domestic and foreign manufacture currently in operation in coal mines are discussed. Compressed-air roofbolters SAP and SAP2, designed at the Institute of Mining, are capable of drilling in hard rocks. The authors describe the compressed-air rotator of the SAP2 roofbolter with alternate-motion rotors. The comparative analysis of the characteristics of the SAP and SAP2 roofbolters shows that the combination of high-frequency axial and rotary impacts on the drilling tool in SAP2 ensures efficient drilling in rocks with strengths up to 160 MPa.
Energy efficient solvent regeneration process for carbon dioxide capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shaojun; Meyer, Howard S.; Li, Shiguang
A process for removing carbon dioxide from a carbon dioxide-loaded solvent uses two stages of flash apparatus. Carbon dioxide is flashed from the solvent at a higher temperature and pressure in the first stage, and a lower temperature and pressure in the second stage, and is fed to a multi-stage compression train for high pressure liquefaction. Because some of the carbon dioxide fed to the compression train is already under pressure, less energy is required to further compress the carbon dioxide to a liquid state, compared to conventional processes.
Compression-RSA technique: A more efficient encryption-decryption procedure
NASA Astrophysics Data System (ADS)
Mandangan, Arif; Mei, Loh Chai; Hung, Chang Ee; Che Hussin, Che Haziqah
2014-06-01
The efficiency of encryption-decryption procedures has become a major concern in asymmetric cryptography. The Compression-RSA technique is developed to overcome this efficiency problem by compressing k plaintexts, where k ∈ Z+ and k > 2, into only 2 plaintexts. That means, no matter how many plaintexts there are, they will be compressed to only 2 plaintexts. The encryption-decryption procedures are expected to be more efficient since they only receive 2 inputs to be processed instead of k inputs. However, it is observed that as the number of original plaintexts increases, the size of the new plaintexts becomes larger. As a consequence, this will probably affect the efficiency of the encryption-decryption procedures, especially for the RSA cryptosystem, since both of its encryption-decryption procedures involve exponential operations. In this paper, we evaluated the relationship between the number of original plaintexts and the size of the new plaintexts. In addition, we conducted several experiments to show that the RSA cryptosystem with the embedded Compression-RSA technique is more efficient than the ordinary RSA cryptosystem.
Mohammed, Monzoorul Haque; Dutta, Anirban; Bose, Tungadri; Chadaram, Sudha; Mande, Sharmila S
2012-10-01
An unprecedented quantity of genome sequence data is currently being generated using next-generation sequencing platforms. This has necessitated the development of novel bioinformatics approaches and algorithms that not only facilitate a meaningful analysis of these data but also aid in efficient compression, storage, retrieval and transmission of huge volumes of the generated data. We present a novel compression algorithm (DELIMINATE) that can rapidly compress genomic sequence data in a loss-less fashion. Validation results indicate relatively higher compression efficiency of DELIMINATE when compared with popular general purpose compression algorithms, namely, gzip, bzip2 and lzma. Linux, Windows and Mac implementations (both 32 and 64-bit) of DELIMINATE are freely available for download at: http://metagenomics.atc.tcs.com/compression/DELIMINATE. sharmila@atc.tcs.com Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.
2018-05-01
A major experimental research area in material equation-of-state today involves the use of off-Hugoniot measurements rather than shock experiments that give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures of ~2 × 10^11 Pa without any complex target designs.
Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D
2013-02-01
Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. Especially, the reconstruction does not destroy the interdependence relation among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with much fewer nonzero entries to compress recordings. Particularly, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce code execution in CPU in the data compression stage.
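A minimal sketch of the compression stage described above, assuming a sparse binary sensing matrix with exactly two nonzero entries per column; on the sensor node the product then reduces to a handful of additions per measurement. The block size and compression ratio are illustrative, and BSBL reconstruction at the receiver is not reproduced here.

```python
# Build a sparse binary sensing matrix (two 1s per column) and compress one block.
import numpy as np

def sparse_binary_matrix(m, n, nnz_per_col=2, seed=0):
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        Phi[rng.choice(m, nnz_per_col, replace=False), j] = 1.0
    return Phi

if __name__ == "__main__":
    n, m = 512, 256                       # samples per block, compressed length
    Phi = sparse_binary_matrix(m, n)
    x = np.random.randn(n)                # one block of a raw FECG channel
    y = Phi @ x                           # compressed block sent over the body-area link
    print(Phi.shape, "matrix with", int(Phi.sum()), "nonzeros; compressed length", y.size)
```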
High Compression Ratio Turbo Gasoline Engine Operation Using Alcohol Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heywood, John; Jo, Young Suk; Lewis, Raymond
The overall objective of this project was to quantify the potential for improving the performance and efficiency of gasoline engine technology by use of alcohols to suppress knock. Knock-free operation is obtained by direct injection of a second “anti-knock” fuel such as ethanol, which suppresses knock when, with gasoline fuel, knock would occur. Suppressing knock enables increased turbocharging, engine downsizing, and use of higher compression ratios throughout the engine’s operating map. This project combined engine testing and simulation to define knock onset conditions, with different mixtures of gasoline and alcohol, and with this information quantify the potential for improving the efficiency of turbocharged gasoline spark-ignition engines, and the on-vehicle fuel consumption reductions that could then be realized. The more focused objectives of this project were therefore to: determine engine efficiency with aggressive turbocharging and downsizing and high compression ratio (up to a compression ratio of 13.5:1) over the engine’s operating range; determine the knock limits of a turbocharged and downsized engine as a function of engine speed and load; determine the amount of the knock-suppressing alcohol fuel consumed, through the use of various alcohol-gasoline and alcohol-water-gasoline blends, for different driving cycles, relative to the gasoline consumed; and determine implications of using alcohol-boosted engines, with their higher efficiency operation, in both light-duty and medium-duty vehicle sectors.
Fast heating of ultrahigh-density plasma as a step towards laser fusion ignition.
Kodama, R; Norreys, P A; Mima, K; Dangor, A E; Evans, R G; Fujita, H; Kitagawa, Y; Krushelnick, K; Miyakoshi, T; Miyanaga, N; Norimatsu, T; Rose, S J; Shozaki, T; Shigemori, K; Sunahara, A; Tampo, M; Tanaka, K A; Toyama, Y; Yamanaka, T; Zepf, M
2001-08-23
Modern high-power lasers can generate extreme states of matter that are relevant to astrophysics, equation-of-state studies and fusion energy research. Laser-driven implosions of spherical polymer shells have, for example, achieved an increase in density of 1,000 times relative to the solid state. These densities are large enough to enable controlled fusion, but to achieve energy gain a small volume of compressed fuel (known as the 'spark') must be heated to temperatures of about 10^8 K (corresponding to thermal energies in excess of 10 keV). In the conventional approach to controlled fusion, the spark is both produced and heated by accurately timed shock waves, but this process requires both precise implosion symmetry and a very large drive energy. In principle, these requirements can be significantly relaxed by performing the compression and fast heating separately; however, this 'fast ignitor' approach also suffers drawbacks, such as propagation losses and deflection of the ultra-intense laser pulse by the plasma surrounding the compressed fuel. Here we employ a new compression geometry that eliminates these problems; we combine production of compressed matter in a laser-driven implosion with picosecond-fast heating by a laser pulse timed to coincide with the peak compression. Our approach therefore permits efficient compression and heating to be carried out simultaneously, providing a route to efficient fusion energy production.
Experimental investigation of the ecological hybrid refrigeration cycle
NASA Astrophysics Data System (ADS)
Cyklis, Piotr; Kantor, Ryszard; Ryncarz, Tomasz; Górski, Bogusław; Duda, Roman
2014-09-01
The requirements for environmentally friendly refrigerants promote the application of CO2 and water as working fluids. However, there are two problems related to that, namely the high temperature limit for CO2 in the condenser due to its low critical temperature, and the low temperature limit for water resulting from its high triple point temperature. This can be avoided by applying a hybrid adsorption-compression system, where water is the working fluid in the high temperature adsorption cycle used to cool down the condenser of the CO2 compression cycle. The adsorption process is powered by a low temperature renewable heat source such as solar collectors or another waste heat source. The refrigeration system integrating the adsorption and compression cycles has been designed and constructed in the Laboratory of Thermodynamics and Thermal Machine Measurements of Cracow University of Technology. The heat source for the adsorption system consists of 16 tubular collectors. The CO2 compression low temperature cycle is based on two parallel compressors with a frequency inverter. The energy efficiency and TEWI of this hybrid system are quite promising in comparison with compression-only systems.
H.264/AVC Video Compression on Smartphones
NASA Astrophysics Data System (ADS)
Sharabayko, M. P.; Markov, N. G.
2017-01-01
In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already approaching the compression efficiency limit of H.264/AVC.
NASA Astrophysics Data System (ADS)
Sivaganesan, S.; Chandrasekaran, M.; Ruban, M.
2017-03-01
The present experimental investigation evaluates the effects of using blends of diesel fuel with a 20% concentration of methyl ester of Jatropha biodiesel at various compression ratios. Both the diesel fuel and the biodiesel blend were injected at 23° BTDC into the combustion chamber. The experiment was carried out at three different compression ratios. Biodiesel was extracted from Jatropha oil; the 20% (B20) concentration was found to be the best blend ratio in an earlier experimental study. The engine was operated at compression ratios of 17.5, 16.5 and 15.5, respectively. The main objective is to obtain minimum specific fuel consumption, better efficiency and lower emissions at different compression ratios. The results show that at full load there is an increase in efficiency compared with diesel; the highest efficiency is obtained with B20MEOJBA at a compression ratio of 17.5. It is noted that there is an increase in thermal efficiency as the blend ratio increases. The biodiesel blend has performance closer to diesel, but emissions are reduced in all B20MEOJBA blends compared to diesel. Thus this work focuses on the best compression ratio and the suitability of biodiesel blends as an alternative fuel in diesel engines.
Ma, Longtao; Chen, Shengmei; Pei, Zengxia; Huang, Yan; Liang, Guojin; Mo, Funian; Yang, Qi; Su, Jun; Gao, Yihua; Zapien, Juan Antonio; Zhi, Chunyi
2018-02-27
The development of a highly efficient, low-cost, and stable non-noble-metal-based catalyst that is simultaneously active for the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER), as an air electrode material for a rechargeable zinc-air battery, is crucially important. Meanwhile, the compressible flexibility of a battery is a prerequisite for wearable and/or portable electronics. Herein, we present a strategy via single-site dispersion of an Fe-N x species on a two-dimensional (2D) highly graphitic porous nitrogen-doped carbon layer to implement superior catalytic activity toward ORR/OER (with a half-wave potential of 0.86 V for ORR and an overpotential of 390 mV at 10 mA·cm^-2 for OER) in an alkaline medium. Furthermore, an elastic polyacrylamide hydrogel based electrolyte with the capability to retain great elasticity even in a highly corrosive alkaline environment is utilized to develop a solid-state compressible and rechargeable zinc-air battery. The newly developed battery has a low charge-discharge voltage gap (0.78 V at 5 mA·cm^-2) and large power density (118 mW·cm^-2). It could be compressed up to 54% strain and bent up to 90° without charge/discharge performance or output power degradation. Our results reveal that single-site dispersion of catalytic active sites on a porous support for a bifunctional oxygen catalyst as cathode, integrated with a specially designed elastic electrolyte, is a feasible strategy for fabricating efficient compressible and rechargeable zinc-air batteries, which could enlighten the design and development of other functional electronic devices.
JP3D compressed-domain watermarking of volumetric medical data sets
NASA Astrophysics Data System (ADS)
Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian
2010-01-01
Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) or telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images have shown that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.
Spatial domain entertainment audio decompression/compression
NASA Astrophysics Data System (ADS)
Chan, Y. K.; Tam, Ka Him K.
2014-02-01
The ARMv7 NEON processor with a 128-bit SIMD hardware accelerator requires a peak performance of 13.99 mega cycles per second for MP3 stereo entertainment-quality decoding. For a similar compression bit rate, OGG and AAC are preferred over MP3. The Patent Cooperation Treaty Application dated 28/August/2012 describes an audio decompression scheme producing a sequence of interleaving "min to Max" and "Max to min" rising and falling segments. The number of interior audio samples bound by "min to Max" or "Max to min" can be {0|1|…|N} audio samples. The magnitudes of samples, including the bounding min and Max, are distributed as normalized constants between 0 and 1 of the bounding magnitudes. The decompressed audio is then a "sequence of static segments" on a frame-by-frame basis. Some of these frames need to be post-processed to elevate high frequencies. The post-processing is compression-efficiency neutral, and the additional decoding complexity is only a small fraction of the overall decoding complexity without the need for extra hardware. Compression efficiency can be speculated to be very high, as the source audio has been decimated and converted to a set of data with only "segment length and corresponding segment magnitude" attributes. The PCT application describes how these two attributes are efficiently coded by its innovative coding scheme. The decoding efficiency is very high and the decoding latency is essentially zero. Both the hardware requirement and the run time are at least an order of magnitude better than MP3 variants. A side benefit is ultra-low power consumption on mobile devices. The acid test of whether such a simplistic waveform representation can indeed reproduce authentic decompressed quality is benchmarked against OGG (aoTuv Beta 6.03) using three pairs of stereo audio frames and one broadcast-like voice audio frame, each frame consisting of 2,028 samples at a 44,100 Hz sampling frequency.
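A speculative sketch of the decode step outlined above, reducing the bitstream to (segment length, extremum magnitude) attributes. Placing the interior samples by linear spacing between the bounding min/Max values is purely an assumption for illustration; the PCT application defines its own normalized constants, and decode_segments is an illustrative name.

```python
# Rebuild a waveform from alternating extrema and per-segment interior sample counts.
import numpy as np

def decode_segments(extrema, lengths):
    # extrema: alternating min/Max sample values; lengths: interior samples per segment
    out = [extrema[0]]
    for a, b, n in zip(extrema[:-1], extrema[1:], lengths):
        # n interior samples plus the closing extremum of this segment
        out.extend(np.linspace(a, b, n + 2)[1:])
    return np.asarray(out)

if __name__ == "__main__":
    extrema = [-0.8, 0.9, -0.5, 0.7]     # min, Max, min, Max ...
    lengths = [3, 0, 5]                  # interior sample counts per segment
    print(decode_segments(extrema, lengths))
```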
Ren, Xiuyan; Huang, Chang; Duan, Lijie; Liu, Baijun; Bu, Lvjun; Guan, Shuang; Hou, Jiliang; Zhang, Huixuan; Gao, Guanghui
2017-05-14
Toughness, stretchability and compressibility of hydrogels must ordinarily be balanced for their use as mechanically responsive materials. For example, macromolecular microsphere composite hydrogels with chemical crosslinking exhibited excellent compression strength and stretchability, but poor tensile stress. Here, a novel strategy for the preparation of a super-tough, ultra-stretchable and strongly compressive hydrogel was proposed by introducing core-shell latex particles (LPs) as crosslinking centers for inducing efficient aggregation of hydrophobic chains. The core-shell LPs always maintained a spherical shape, due to the presence of a hard core, even under an external force, and the soft shell could interact with hydrophobic chains through hydrophobic interactions. As a result, the hydrogels reinforced by core-shell LPs exhibited not only a high tensile strength of 1.8 MPa and dramatic elongation of over 20 times, but also an excellent compressive performance of 13.5 MPa at a strain of 90%. The Mullins effect was verified for the validity of core-shell LP-reinforced hydrogels by inducing aggregation of hydrophobic chains. The novel strategy strives to provide a better avenue for designing and developing a new generation of hydrophobic association tough hydrogels with excellent mechanical properties.
Oil-free centrifugal hydrogen compression technology demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heshmat, Hooshang
2014-05-31
One of the key elements in realizing a mature market for hydrogen vehicles is the deployment of a safe and efficient hydrogen production and delivery infrastructure on a scale that can compete economically with current fuels. The challenge, however, is that hydrogen, being the lightest and smallest of gases with a lower viscosity and density than natural gas, readily migrates through small spaces and is difficult to compress efficiently. While efficient and cost effective compression technology is crucial to effective pipeline delivery of hydrogen, the compression methods used currently rely on oil lubricated positive displacement (PD) machines. PD compression technology is very costly, has poor reliability and durability, especially for components subjected to wear (e.g., valves, rider bands and piston rings), and contaminates hydrogen with lubricating fluid. Even so-called “oil-free” machines use oil lubricants that migrate into and contaminate the gas path. Due to the poor reliability of PD compressors, current hydrogen producers often install duplicate units in order to maintain on-line times of 98-99%. Such machine redundancy adds substantially to system capital costs. As such, DOE deemed that low capital cost, reliable, efficient and oil-free advanced compressor technologies are needed. MiTi’s solution is a completely oil-free, multi-stage, high-speed, centrifugal compressor designed for a flow capacity of 500,000 kg/day with a discharge pressure of 1200 psig. The design employs oil-free compliant foil bearings and seals to allow for very high operating speeds, totally contamination-free operation, long life and reliability. This design meets the DOE’s performance targets, achieves an extremely aggressive specific power metric of 0.48 kW-hr/kg, and provides significant improvements in reliability/durability, energy efficiency, sealing and freedom from contamination. The multi-stage compressor system concept has been validated through full scale performance testing of a single stage with helium similitude gas at full speed in accordance with ASME PTC-10. The experimental results indicated that aerodynamic performance, with respect to compressor discharge pressure, flow, power and efficiency, exceeded theoretical prediction. Dynamic testing of a simulated multistage centrifugal compressor was also completed under a parallel program to validate the integrity and viability of the system concept. The results give strong confidence in the feasibility of the multi-stage design for use in hydrogen gas transportation and delivery from production locations to point of use.
Word aligned bitmap compression method, data structure, and apparatus
Wu, Kesheng; Shoshani, Arie; Otoo, Ekow
2004-12-14
The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
Optimization of plasma amplifiers
NASA Astrophysics Data System (ADS)
Sadler, James D.; Trines, Raoul M. G. M.; Tabak, Max; Haberberger, Dan; Froula, Dustin H.; Davies, Andrew S.; Bucht, Sara; Silva, Luís O.; Alves, E. Paulo; Fiúza, Frederico; Ceurvorst, Luke; Ratan, Naren; Kasim, Muhammad F.; Bingham, Robert; Norreys, Peter A.
2017-05-01
Plasma amplifiers offer a route to side-step limitations on chirped pulse amplification and generate laser pulses at the power frontier. They compress long pulses by transferring energy to a shorter pulse via the Raman or Brillouin instabilities. We present an extensive kinetic numerical study of the three-dimensional parameter space for the Raman case. Further particle-in-cell simulations find the optimal seed pulse parameters for experimentally relevant constraints. The high-efficiency self-similar behavior is observed only for seeds shorter than the linear Raman growth time. A test case similar to an upcoming experiment at the Laboratory for Laser Energetics is found to maintain good transverse coherence and high-energy efficiency. Effective compression of a 10 kJ, nanosecond-long driver pulse is also demonstrated in a 15-cm-long amplifier.
Exploring compression techniques for ROOT IO
NASA Astrophysics Data System (ADS)
Zhang, Z.; Bockelman, B.
2017-10-01
ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high "compression level" in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
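A small sketch of the kind of tradeoff measurement discussed above, on a generic byte payload rather than ROOT files: it compares DEFLATE (zlib) levels and LZMA on compressed size versus decompression time, the two axes that matter for analysis-heavy reads. The payload and timing harness are illustrative.

```python
# Compare compressed size and decompression time for a few codec settings.
import time
import zlib
import lzma
import numpy as np

payload = np.random.default_rng(0).integers(0, 50, 2_000_000, dtype=np.uint8).tobytes()

def measure(name, compress, decompress):
    blob = compress(payload)
    t0 = time.perf_counter()
    decompress(blob)
    dt = time.perf_counter() - t0
    print(f"{name:>8}: {len(blob):>9} bytes, decompress {dt * 1e3:7.1f} ms")

for level in (1, 6, 9):
    measure(f"zlib -{level}", lambda d, l=level: zlib.compress(d, l), zlib.decompress)
measure("lzma", lzma.compress, lzma.decompress)
```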
Piezoresistance and solar cell efficiency
NASA Technical Reports Server (NTRS)
Weizer, Victor G.
1987-01-01
Diffusion-induced stresses in silicon are shown to result in large localized changes in the minority-carrier mobility which in turn can have a significant effect on cell output. Evidence is given that both compressive and tensile stresses can be generated in either the emitter or the base region. Tensile stresses in the base appear to be much more effective in altering cell performance than do compressive stresses. While most stress-related effects appear to degrade cell efficiency, this is not always the case. Evidence is presented showing that arsenic-induced stresses can result in emitter characteristics comparable to those found in the MINP cell without requiring a high degree of surface passivation.
An incremental block-line-Gauss-Seidel method for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Napolitano, M.; Walters, R. W.
1985-01-01
A block-line-Gauss-Seidel (LGS) method is developed for solving the incompressible and compressible Navier-Stokes equations in two dimensions. The method requires only one block-tridiagonal solution process per iteration and is consequently faster per step than the linearized block-ADI methods. Results are presented for both incompressible and compressible separated flows: in all cases the proposed block-LGS method is more efficient than the block-ADI methods. Furthermore, for high Reynolds number weakly separated incompressible flow in a channel, which proved to be an impossible task for a block-ADI method, solutions have been obtained very efficiently by the new scheme.
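A minimal sketch of the line-Gauss-Seidel idea on a 2-D Poisson model problem rather than the Navier-Stokes equations: each sweep performs one tridiagonal (Thomas algorithm) solve per grid line, using the latest values from the neighbouring lines, which is the scalar analog of the "one block-tridiagonal solution process per iteration" structure described above. The grid size, boundary conditions, and sweep count are illustrative.

```python
# Line Gauss-Seidel for u_xx + u_yy = f on the unit square with zero boundaries.
import numpy as np

def thomas(a, b, c, d):
    # Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal, d: rhs.
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def line_gauss_seidel(u, f, h, sweeps=50):
    ny, nx = u.shape
    a = np.full(nx - 2, 1.0); b = np.full(nx - 2, -4.0); c = np.full(nx - 2, 1.0)
    a[0] = 0.0; c[-1] = 0.0
    for _ in range(sweeps):
        for j in range(1, ny - 1):            # one tridiagonal solve per grid line
            rhs = h * h * f[j, 1:-1] - u[j - 1, 1:-1] - u[j + 1, 1:-1]
            rhs[0] -= u[j, 0]; rhs[-1] -= u[j, -1]
            u[j, 1:-1] = thomas(a, b, c, rhs)
    return u

if __name__ == "__main__":
    n = 33; h = 1.0 / (n - 1)
    u = line_gauss_seidel(np.zeros((n, n)), np.ones((n, n)), h)
    print("max |u|:", np.abs(u).max())
```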
Symposium on Turbulence (13th) Held at Rolla, Missouri on September 21- 23, 1992
1992-09-01
This article is part of a project aimed at increasing the role of computational fluid dynamics (CFD) in the process of developing more efficient gas... techniques in and fluid physics of high-speed compressible or reacting flows undergoing significant changes of indices of refraction. Possible topics... in experimental fluid mechanics; homogeneous turbulence, including closures and statistical properties; turbulence in compressible fluids; fine scale
A Study on Homogeneous Charge Compression Ignition Gasoline Engines
NASA Astrophysics Data System (ADS)
Kaneko, Makoto; Morikawa, Koji; Itoh, Jin; Saishu, Youhei
A new engine concept consisting of HCCI combustion for low and midrange loads and spark ignition combustion for high loads was introduced. The timing of the intake valve closing was adjusted to alter the negative valve overlap and effective compression ratio to provide suitable HCCI conditions. The effect of mixture formation on auto-ignition was also investigated using a direct injection engine. As a result, HCCI combustion was achieved with a relatively low compression ratio when the intake air was heated by internal EGR. The resulting combustion was at a high thermal efficiency, comparable to that of modern diesel engines, and produced almost no NOx emissions or smoke. The mixture stratification increased the local A/F concentration, resulting in higher reactivity. A wide range of combustible A/F ratios was used to control the compression ignition timing. Photographs showed that the flame filled the entire chamber during combustion, reducing both emissions and fuel consumption.
NASA Technical Reports Server (NTRS)
Soreide, David; Bogue, Rodney K.; Ehernberger, L. J.; Seidel, Jonathan
1997-01-01
Inlet unstart causes a disturbance akin to severe turbulence for a supersonic commercial airplane. Consequently, the current goal for the frequency of unstarts is a few times per fleet lifetime. For a mixed-compression inlet, there is a tradeoff between propulsion system efficiency and unstart margin. As the unstart margin decreases, propulsion system efficiency increases, but so does the unstart rate. This paper intends, first, to quantify that tradeoff for the High Speed Civil Transport (HSCT) and, second, to examine the benefits of using a sensor to detect turbulence ahead of the airplane. When the presence of turbulence is known with sufficient lead time to allow the propulsion system to adjust the unstart margin, then inlet unstarts can be minimized while overall efficiency is maximized. The NASA Airborne Coherent Lidar for Advanced In-Flight Measurements program is developing a lidar system to serve as a prototype of the forward-looking sensor. This paper reports on the progress of this development program and its application to the prevention of inlet unstart in a mixed-compression supersonic inlet. Quantified benefits include significantly reduced takeoff gross weight (TOGW), which could increase payload, reduce direct operating costs, or increase range for the HSCT.
Chattoraj, Sayantan; Sun, Changquan Calvin
2018-04-01
Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational footprint, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Operations and maintenance in the glass container industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbieri, D.; Jacobson, D.
1999-07-01
Compressed air is a significant electrical end-use at most manufacturing facilities, and few industries utilize compressed air to the extent of the glass container industry. Unfortunately, compressed air is often a significant source of wasted energy because many customers view it as a low-maintenance system. In the case of the glass container industry, compressed air is a mission-critical system used for driving production machinery, blowing glass, cooling plungers and product, and packaging. Leakage totaling 10% of total compressed air capacity is not uncommon, and leakage rates upwards of 40% have been observed. Even though energy savings from repairing compressed air leaks can be substantial, regular maintenance procedures are often not in place for compressed air systems. In order to achieve future savings in the compressed air end-use, O and M programs must make a special effort to educate customers on the significant energy impacts of regular compressed air system maintenance. This paper will focus on the glass industry, its reliance on compressed air, and the unique savings potential in the glass container industry. Through a technical review of the glass production process, this paper will identify compressed air as a highly significant electrical consumer in these facilities and present ideas on how to produce and deliver compressed air in a more efficient manner. It will also examine a glass container manufacturer with extremely high savings potential in compressed air systems, but little initiative to establish and perform compressed air maintenance due to an "if it works, don't mess with it" maintenance philosophy. Finally, this paper will address the economic benefit of compressed air maintenance in this and other manufacturing industries.
Nonpainful wide-area compression inhibits experimental pain.
Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena
2016-09-01
Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to examine whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.
Heterogeneous Compression of Large Collections of Evolutionary Trees.
Matthews, Suzanne J
2015-01-01
Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists that can compress large heterogeneous tree collections or enable their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
Laser and acoustic lens for lithotripsy
Visuri, Steven R.; Makarewicz, Anthony J.; London, Richard A.; Benett, William J.; Krulevitch, Peter; Da Silva, Luiz B.
2002-01-01
An acoustic focusing device whose acoustic waves are generated by laser radiation through an optical fiber. The acoustic energy is capable of efficient destruction of renal and biliary calculi and deliverable to the site of the calculi via an endoscopic procedure. The device includes a transducer tip attached to the distal end of an optical fiber through which laser energy is directed. The transducer tip encapsulates an exogenous absorbing dye. Under proper irradiation conditions (high absorbed energy density, short pulse duration) a stress wave is produced via thermoelastic expansion of the absorber for the destruction of the calculi. The transducer tip can be configured into an acoustic lens such that the transmitted acoustic wave is shaped or focused. Also, compressive stress waves can be reflected off a high density/low density interface to invert the compressive wave into a tensile stress wave, and tensile stresses may be more effective in some instances in disrupting material as most materials are weaker in tension than compression. Estimations indicate that stress amplitudes provided by this device can be magnified more than 100 times, greatly improving the efficiency of optical energy for targeted material destruction.
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367
The importance of robust error control in data compression applications
NASA Technical Reports Server (NTRS)
Woolley, S. I.
1993-01-01
Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1 the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behavior. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
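The resynchronisation point made above can be illustrated with a toy prefix code. The sketch below is illustrative only (a hypothetical four-symbol code, not taken from the paper) and shows how a single flipped channel bit changes several decoded symbols before the decoder happens to fall back into step.

```python
# Illustrative sketch: how a single channel bit error propagates through a
# prefix (Huffman-style) code before resynchronisation. Toy code, not from the paper.
code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}   # a complete prefix code
inverse = {v: k for k, v in code.items()}

def encode(msg):
    return ''.join(code[s] for s in msg)

def decode(bits):
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:          # prefix property: first match is a symbol
            out.append(inverse[buf])
            buf = ''
    return ''.join(out)

msg = 'abacadabad'
clean = encode(msg)
corrupt = clean[:5] + ('1' if clean[5] == '0' else '0') + clean[6:]  # flip one bit

print(decode(clean))    # original message
print(decode(corrupt))  # several symbols differ until the decoder resynchronises
```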
Perceka, Wisena; Liao, Wen-Cheng; Wang, Yo-de
2016-01-01
Addition of steel fibers to high strength concrete (HSC) improves its post-peak behavior and energy absorbing capability, which can be described well in terms of toughness. This paper attempts to obtain both analytically and experimentally the efficiency of steel fibers in HSC columns with hybrid confinement of transverse reinforcement and steel fibers. A toughness ratio (TR) to quantify the confinement efficiency of HSC columns with hybrid confinement is proposed through a regression analysis involving sixty-nine TRs of HSC without steel fibers and twenty-seven TRs of HSC with hybrid confinement of transverse reinforcement and steel fibers. The proposed TR equation was further verified by compression tests of seventeen HSC columns conducted in this study, where twelve specimens were reinforced by high strength rebars in longitudinal and transverse directions. The results show that the efficiency of steel fibers in concrete depends on the transverse reinforcement spacing, where the steel fibers are more effective if the transverse reinforcement spacing becomes larger, in the range of 0.25–1 times the effective depth of the column section. Furthermore, the axial load–strain curves were developed by employing finite element software (OpenSees) for simulating the response of the structural system. Comparisons between numerical and experimental axial load–strain curves were carried out. PMID:28773391
Van Blarigan, Peter
2001-01-01
A combustion system which can utilize high compression ratios, short burn durations, and homogeneous fuel/air mixtures in conjunction with low equivalence ratios. In particular, a free-piston, two-stroke autoignition internal combustion engine including an electrical generator having a linear alternator with a double-ended free piston that oscillates inside a closed cylinder is provided. Fuel and air are introduced in a two-stroke cycle fashion on each end, where the cylinder charge is compressed to the point of autoignition without spark plugs. The piston is driven in an oscillating motion as combustion occurs successively on each end. This leads to rapid combustion at almost constant volume for any fuel/air equivalence ratio mixture at very high compression ratios. The engine is characterized by high thermal efficiency and low NOx emissions. The engine is particularly suited for generating electrical current in a hybrid automobile.
Image processing using Gallium Arsenide (GaAs) technology
NASA Technical Reports Server (NTRS)
Miller, Warner H.
1989-01-01
The need to increase the information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to the increased efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state-of-the-art of onboard processing was recognized and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit slice general processor. The reason for choosing the compression technique for the Multi-spectral Linear Array (MLA) instrument is described. Also a description is given of the GaAs integrated circuit chip set which will demonstrate that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
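As an illustration of the DPCM-plus-non-uniform-quantizer idea described above, the following minimal one-dimensional sketch predicts each sample from the previous reconstruction, quantizes the residual with a table that is fine near zero and coarse for large errors, and tracks the decoder state so encoder and decoder stay matched. The quantizer table and predictor are assumptions for illustration, not the MLA/GaAs flight implementation.

```python
import numpy as np

# Minimal 1-D DPCM sketch with a non-uniform quantizer (illustrative only).
# Reconstruction levels are fine near zero and coarse for large residuals.
levels = np.array([-48, -24, -12, -5, -1, 1, 5, 12, 24, 48], dtype=float)

def quantize(residual):
    """Return the index of the nearest reconstruction level."""
    return int(np.argmin(np.abs(levels - residual)))

def dpcm_encode(samples):
    indices, pred = [], 0.0
    for x in samples:
        idx = quantize(x - pred)      # quantize the prediction residual
        indices.append(idx)
        pred = pred + levels[idx]     # track the decoder's reconstruction
    return indices

def dpcm_decode(indices):
    out, pred = [], 0.0
    for idx in indices:
        pred = pred + levels[idx]
        out.append(pred)
    return np.array(out)

samples = np.cumsum(np.random.randn(64) * 3) + 100   # smooth synthetic scan line
recon = dpcm_decode(dpcm_encode(samples))
print("mean abs error:", np.mean(np.abs(samples - recon)))
```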
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
Focused plenoptic capturing is one of the light field capturing techniques. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image is usually of a large resolution. A coding scheme that removes the redundancy before coding can be of advantage for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and with over 20 percent compared with a High Efficiency Video Coding block copying mode.
Evaluation and analysis on the coupling performance of a high-speed turboexpander compressor
NASA Astrophysics Data System (ADS)
Chen, Shuangtao; Fan, Yufeng; Yang, Shanju; Chen, Xingya; Hou, Yu
2017-12-01
A high-speed turboexpander compressor (TEC) for a small reverse Brayton air refrigerator is tested and analyzed in the present work. A TEC consists of an expander and a compressor, which are coupled together and interact with each other directly. Meanwhile, the expander and compressor have different effects on the refrigerator. A TEC overall efficiency, which contains the effects of the expander's expansion, the compressor's pre-compression, and the pressure drop between them, is proposed. It unifies the influences of both compression and expansion processes on the COP of the refrigerator and can be used to evaluate the TEC overall performance. Then, the coupling parameters were analyzed, which shows that for a TEC, the expander efficiency should be fully utilized first, followed by the compressor pressure ratio. Experiments were carried out to test the TEC coupling performances. The results indicated that the TEC overall efficiency could reach 67.2%, and meanwhile 22.3% of the energy output was recycled.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
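A minimal sketch of the approximate 2-D pattern matching principle follows: for each target block, search a previously coded "database" region for the best match and emit a pointer when the distortion is under a maximum level. The block size, distortion threshold, and exhaustive search are assumptions for illustration and do not reproduce the 2D-PMC data structures (k-d trees, growing database, adaptive distortion levels) described above.

```python
import numpy as np

# Illustrative sketch of approximate 2-D pattern matching against a database region.
rng = np.random.default_rng(1)
B, D_MAX = 8, 100.0                     # block size and max allowed distortion (assumed)

database = rng.integers(0, 256, size=(64, 64)).astype(float)   # previously coded region
target = database[8:16, 24:32] + rng.normal(0, 2, (B, B))       # block similar to the DB

best, best_pos = np.inf, None
for r in range(database.shape[0] - B + 1):
    for c in range(database.shape[1] - B + 1):
        d = np.mean((database[r:r + B, c:c + B] - target) ** 2)  # MSE distortion
        if d < best:
            best, best_pos = d, (r, c)

if best <= D_MAX:
    print("code block as pointer", best_pos, "MSE =", round(best, 2))
else:
    print("no acceptable match; code the block explicitly")
```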
NASA Technical Reports Server (NTRS)
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four blends of triptane and one blend of xylidines with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
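For context, the reported economy gains can be compared against the ideal air-standard Otto cycle, whose efficiency depends only on the compression ratio r and the specific heat ratio γ. The numbers below assume γ = 1.4 and are a back-of-envelope check, not figures from the report; the measured 17 to 17.5 percent reduction also reflects the change to the maximum-economy spark setting.

```latex
% Ideal air-standard Otto-cycle efficiency (assumption: \gamma = 1.4)
\eta_{\mathrm{Otto}} = 1 - r^{\,1-\gamma},\qquad
\eta(6.9) \approx 1 - 6.9^{-0.4} \approx 0.54,\qquad
\eta(10.0) \approx 1 - 10.0^{-0.4} \approx 0.60
% Ideal reduction in specific fuel consumption:
1 - \frac{\eta(6.9)}{\eta(10.0)} \approx 0.11
```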
Ding, Yichun; Yang, Jack; Tolle, Charles R; Zhu, Zhengtao
2018-05-09
Flexible and wearable pressure sensors may offer convenient, timely, and portable solutions to human motion detection, yet it is a challenge to develop cost-effective materials for pressure sensors with high compressibility and sensitivity. Herein, a cost-efficient and scalable approach is reported to prepare a highly flexible and compressible conductive sponge for a piezoresistive pressure sensor. The conductive sponge, poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS)@melamine sponge (MS), is prepared by one-step dip coating of the commercial melamine sponge in an aqueous dispersion of PEDOT:PSS. Due to the interconnected porous structure of MS, the conductive PEDOT:PSS@MS has a high compressibility and a stable piezoresistive response at compressive strains up to 80%, as well as good reproducibility over 1000 cycles. Thereafter, versatile pressure sensors fabricated using the conductive PEDOT:PSS@MS sponges are attached to different parts of the human body; the capabilities of these devices to detect a variety of human motions including speaking, finger bending, elbow bending, and walking are evaluated. Furthermore, a prototype tactile sensory array based on these pressure sensors is demonstrated.
Loaded delay lines for future RF pulse compression systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.M.; Wilson, P.B.; Kroll, N.M.
1995-05-01
The peak power delivered by the klystrons in the NLCTA (Next Linear Collider Test Accelerator) now under construction at SLAC is enhanced by a factor of four in a SLED-II type of R.F. pulse compression system (pulse width compression ratio of six). To achieve the desired output pulse duration of 250 ns, a delay line constructed from a 36 m length of circular waveguide is used. Future colliders, however, will require even higher peak power and larger compression factors, which favors a more efficient binary pulse compression approach. Binary pulse compression, however, requires a line whose delay time is approximately proportional to the compression factor. To reduce the length of these lines to manageable proportions, periodically loaded delay lines are being analyzed using a generalized scattering matrix approach. One issue under study is the possibility of propagating two TE0 modes, one with a high group velocity and one with a group velocity of the order 0.05c, for use in a single-line binary pulse compression system. Particular attention is paid to time domain pulse degradation and to Ohmic losses.
Kruse, Lyle W.
1985-01-01
A portal radiation monitor combines 0.1% FAR with high sensitivity to special nuclear material. The monitor utilizes pulse shape discrimination, dynamic compression of the photomultiplier output and scintillators sized to maintain efficiency over the entire portal area.
Kruse, L.W.
1982-03-23
A portal radiation monitor combines 0.1% FAR with high sensitivity to special nuclear material. The monitor utilizes pulse shape discrimination, dynamic compression of the photomultiplier output and scintillators sized to maintain efficiency over the entire portal area.
A new wind energy conversion system
NASA Technical Reports Server (NTRS)
Smetana, F. O.
1975-01-01
It is presupposed that vertical axis wind energy machines will be superior to horizontal axis machines on a power output/cost basis, and the design of a new wind energy machine is presented. The design employs cones with sharp lips and smooth surfaces to promote maximum drag and minimize skin friction. The cones are mounted on a vertical axis in such a way as to assist torque development. Storing wind energy as compressed air is thought to be optimal, and the reasons are: (1) the efficiency of compression is fairly high compared to the conversion of mechanical energy to electrical energy in storage batteries; (2) the release of stored energy through an air motor has high efficiency; and (3) design, construction, and maintenance of an all-mechanical system is usually simpler than for a mechanical to electrical conversion system.
Improving transmission efficiency of large sequence alignment/map (SAM) files.
Sakib, Muhammad Nazmus; Tang, Jijun; Zheng, W Jim; Huang, Chin-Tser
2011-01-01
Research in bioinformatics primarily involves collection and analysis of a large volume of genomic data. Naturally, it demands efficient storage and transfer of this huge amount of data. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data. One way to improve the transmission time of large files is to apply a maximum lossless compression on them. In this paper, we present SAMZIP, a specialized encoding scheme, for sequence alignment data in SAM (Sequence Alignment/Map) format, which improves the compression ratio of existing compression tools available. In order to achieve this, we exploit the prior knowledge of the file format and specifications. Our experimental results show that our encoding scheme improves compression ratio, thereby reducing overall transmission time significantly.
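The format-aware idea can be illustrated with one SAM column: in a coordinate-sorted file the POS field is non-decreasing, so delta-encoding it before a general-purpose compressor already helps considerably. The sketch below is a generic illustration with synthetic values, not the SAMZIP encoder itself.

```python
import zlib

# Illustrative only: exploiting SAM format knowledge (here, the sorted POS column)
# before general-purpose compression. This is not the SAMZIP encoder.
positions = [10_000 + i * 7 for i in range(50_000)]        # synthetic sorted POS values

raw = "\n".join(str(p) for p in positions).encode()
deltas = [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]
delta_raw = "\n".join(str(d) for d in deltas).encode()

print("plain  :", len(zlib.compress(raw)))
print("deltas :", len(zlib.compress(delta_raw)))   # typically much smaller
```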
Economic efficiency of application of innovative materials and structures in high-rise construction
NASA Astrophysics Data System (ADS)
Golov, Roman; Dikareva, Varvara; Gorshkov, Roman; Agarkov, Anatoly
2018-03-01
The article is devoted to the analysis of the technical and economic efficiency of the application of tube confined concrete structures in high-rise construction. A study of the comparative costs of materials with the use of different supporting columns was carried out. The main design, operational, technological and economic advantages of the tube confined concrete technology were evaluated, and conclusions were drawn about the high strength and deformation properties under axial compression of steel tubes filled with high-strength concrete. The efficiency of the tube confined concrete use is substantiated; it depends mainly on the scale factor and the percentage of reinforcement affecting its load-bearing capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizondo-Decanini, Juan M.; Coleman, Phillip D.; Moorman, Matthew W.
Low- and high-voltage soliton waves were produced and used to demonstrate collision and compression using diode-based nonlinear transmission lines. Experiments demonstrate soliton addition and compression using homogeneous nonlinear lines. We built the nonlinear lines using commercially available diodes. These diodes are chosen after their capacitance versus voltage dependence is used in a model and the line design characteristics are calculated and simulated. Nonlinear ceramic capacitors are then used to demonstrate high-voltage pulse amplification and compression. The line is designed such that a simple capacitor-discharge input signal develops soliton trains in as few as 12 stages. We also demonstrated output voltages in excess of 40 kV using Y5V-based commercial capacitors. The results show some key features that determine efficient production of trains of solitons in the kilovolt range.
Reconfigurable Hardware for Compressing Hyperspectral Image Data
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua
2010-01-01
High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry- standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of offchip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of the FPGAs makes it possible to effectively alter the design to some extent to satisfy different requirements without adding hardware. The implementation could be easily propagated to future FPGA generations and/or to custom application-specific integrated circuits.
Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids
NASA Astrophysics Data System (ADS)
Ma, Xinrong; Duan, Zhijian
2018-04-01
High-order discontinuous Galerkin finite element methods (DGFEM) are known to be good methods for solving the Euler equations and Navier-Stokes equations on unstructured grids, but they are computationally expensive. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme was used in order to improve the computational efficiency of DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. In order to keep each processor load balanced, the domain decomposition method was employed. Numerical experiments were performed for inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicate that our parallel algorithm improves speedup and efficiency significantly, which makes it suitable for calculating complex flows.
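The three-stage, third-order TVD Runge-Kutta scheme mentioned above is commonly written in the Shu-Osher form shown below, where L(u) denotes the DG spatial residual; this is the standard textbook form, assumed here rather than taken from the paper.

```latex
% Three-stage, third-order TVD (SSP) Runge--Kutta, Shu--Osher form
u^{(1)} = u^{n} + \Delta t\, L(u^{n})
u^{(2)} = \tfrac{3}{4}\,u^{n} + \tfrac{1}{4}\left(u^{(1)} + \Delta t\, L(u^{(1)})\right)
u^{n+1} = \tfrac{1}{3}\,u^{n} + \tfrac{2}{3}\left(u^{(2)} + \Delta t\, L(u^{(2)})\right)
```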
Structural efficiency studies of corrugated compression panels with curved caps and beaded webs
NASA Technical Reports Server (NTRS)
Davis, R. C.; Mills, C. T.; Prabhakaran, R.; Jackson, L. R.
1984-01-01
Curved cross-sectional elements are employed in structural concepts for minimum-mass compression panels. Corrugated panel concepts with curved caps and beaded webs are optimized by using a nonlinear mathematical programming procedure and a rigorous buckling analysis. These panel geometries are shown to have superior structural efficiencies compared with known concepts published in the literature. Fabrication of these efficient corrugation concepts became possible by advances made in the art of superplastic forming of metals. Results of the mass optimization studies of the concepts are presented as structural efficiency charts for axial compression.
A very efficient RCS data compression and reconstruction technique, volume 4
NASA Technical Reports Server (NTRS)
Tseng, N. Y.; Burnside, W. D.
1992-01-01
A very efficient compression and reconstruction scheme for RCS measurement data was developed. The compression is done by isolating the scattering mechanisms on the target and recording their individual responses in the frequency and azimuth scans, respectively. The reconstruction, which is an inverse process of the compression, is guaranteed by the sampling theorem. Two sets of data, the corner reflectors and the F-117 fighter model, were processed and the results were shown to be convincing. The compression ratio can be as large as several hundred, depending on the target's geometry and scattering characteristics.
NASA Astrophysics Data System (ADS)
Zhu, Zhenyu; Wang, Jianyu
1996-11-01
In this paper, two compression schemes are presented to meet the urgent needs of compressing the huge volume and high data rate of imaging spectrometer images. According to the multidimensional feature of the images and the high fidelity requirement of the reconstruction, both schemes were devised to exploit the high redundancy in both the spatial and spectral dimensions based on the mature wavelet transform technology. Wavelet transform was applied here in two ways: First, with the spatial wavelet transform and the spectral DPCM decorrelation, a ratio up to 84.3 with a PSNR > 48 dB near-lossless result was attained. This is based on the fact that the edge structures among all the spectral bands are similar, while WT has higher resolution in high frequency components. Secondly, with the wavelet's high efficiency in processing 'wideband transient' signals, it was used to transform the raw nonstationary signals in the spectral dimension. A good result was also attained.
NASA Astrophysics Data System (ADS)
Arteev, M. S.; Vaulin, V. A.; Slinko, V. N.; Chumerin, P. Yu; Yushkov, Yu G.
1992-06-01
An analysis is made of the possibility of using a commercial microsecond microwave oscillator, supplemented by a device for time compression of microwave pulses, in pumping of industrial lasers with a high efficiency of conversion of the pump source energy into laser radiation. The results are reported of preliminary experiments on the commissioning of an excimer XeCl laser.
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
Organizing Compression of Hyperspectral Imagery to Allow Efficient Parallel Decompression
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Kiely, Aaron B.
2014-01-01
A family of schemes has been devised for organizing the output of an algorithm for predictive data compression of hyperspectral imagery so as to allow efficient parallelization in both the compressor and decompressor. In these schemes, the compressor performs a number of iterations, during each of which a portion of the data is compressed via parallel threads operating on independent portions of the data. The general idea is that for each iteration it is predetermined how much compressed data will be produced from each thread.
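The organizing principle, fixing in advance how much compressed output each thread contributes so that the decompressor can also split the work, can be illustrated with a generic chunk-plus-size-table layout. The sketch below is a simplification using zlib on byte chunks and is not the NASA hyperspectral scheme itself.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Generic illustration of organizing compressed output so a decompressor can
# split work across threads: independent chunks plus a per-chunk size table.
def pack(data: bytes, n_chunks: int = 4) -> bytes:
    size = (len(data) + n_chunks - 1) // n_chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]
    with ThreadPoolExecutor() as pool:
        blobs = list(pool.map(zlib.compress, chunks))
    header = b"".join(len(b).to_bytes(4, "big") for b in blobs)   # per-chunk sizes
    return len(blobs).to_bytes(4, "big") + header + b"".join(blobs)

def unpack(packed: bytes) -> bytes:
    n = int.from_bytes(packed[:4], "big")
    sizes = [int.from_bytes(packed[4 + 4 * i:8 + 4 * i], "big") for i in range(n)]
    body, offsets, pos = packed[4 + 4 * n:], [], 0
    for s in sizes:
        offsets.append((pos, pos + s))
        pos += s
    with ThreadPoolExecutor() as pool:            # each chunk decodes independently
        parts = list(pool.map(lambda se: zlib.decompress(body[se[0]:se[1]]), offsets))
    return b"".join(parts)

data = bytes(range(256)) * 4000
assert unpack(pack(data)) == data
```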
An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security
NASA Astrophysics Data System (ADS)
Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia
2018-01-01
In data transmission such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logs in a large prime modulus. ElGamal belongs to the class of asymmetric key algorithms and results in enlargement of the file size, therefore data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, then the result of the compression was encrypted by using the ElGamal algorithm. Primality testing was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data with MSE and PSNR values of 0 and infinity. The Elias Delta Code method generated compression ratio and space savings with average values of 62.49% and 37.51%, respectively.
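A minimal sketch of the Elias delta code used for the compression step is given below (positive integers only; the ElGamal encryption and the image-specific handling from the paper are not shown).

```python
# Minimal sketch of Elias delta coding for positive integers (compression half only).
def elias_delta_encode(n: int) -> str:
    assert n >= 1
    nbits = n.bit_length()                 # L = floor(log2 n) + 1
    lbits = nbits.bit_length()             # bits needed for L itself
    gamma_of_len = "0" * (lbits - 1) + format(nbits, "b")   # Elias gamma code of L
    return gamma_of_len + format(n, "b")[1:]                # n without its leading 1

def elias_delta_decode(bits: str) -> int:
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    nbits = int(bits[zeros:zeros + zeros + 1], 2)           # read L (zeros+1 bits)
    rest = bits[2 * zeros + 1:2 * zeros + 1 + nbits - 1]    # last L-1 bits of n
    return (1 << (nbits - 1)) | (int(rest, 2) if rest else 0)

for n in [1, 2, 10, 255, 1000]:
    assert elias_delta_decode(elias_delta_encode(n)) == n
```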
NASA Astrophysics Data System (ADS)
Asilah Khairi, Nor; Bahari Jambek, Asral
2017-11-01
An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary source of power consumption, some researchers have proposed several compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in a table form. From the analysis, the MAS compression algorithm was selected as a project prototype due to its high potential for meeting the project requirements. Besides that, it also produced better performance in terms of energy saving, memory usage, and data transmission efficiency. This method is also suitable to be implemented in wireless sensor networks (WSNs). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.
A transcutaneous energy transmission system for artificial heart adapting to changing impedance.
Fu, Yang; Hu, Liang; Ruan, Xiaodong; Fu, Xin
2015-04-01
This article presents a coil-coupling-based transcutaneous energy transmission system (TETS) for wirelessly powering an implanted artificial heart. Maintaining high efficiency is especially important for a TETS, but it is usually difficult because the transmission impedance changes in practice, commonly caused by power requirement variation for different body movements and coil-couple malposition accompanying skin peristalsis. The TETS introduced in this article is designed based on a class-E power amplifier (E-PA), whose efficiency is over 95% when its load is kept in a certain range. A coupled network combining resonance matching and impedance compressing functions, based on parallel-series capacitors, is proposed in the design to enhance the energy transmission efficiency and capacity of the coil-couple through resonating, and meanwhile compress the changing range of the transmission impedance to meet the load requirements of the E-PA and thus keep the high efficiency of the TETS. An analytical model of the designed TETS is built to analyze the effect of the network and also provide a basis for the subsequent parameter determination. Then, corresponding algorithms are provided to determine the optimal parameters required in the TETS for good performance both in resonance matching and impedance compressing. The design is tested by a series of experiments, which validate that the TETS can transmit a wide range of power with a total efficiency of at least 70% and commonly beyond 80%, even when the coil-couple is seriously malpositioned. The design methodology proposed in this article can be applied to any existing TETS based on an E-PA to improve its performance in actual applications. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
An efficient compression scheme for bitmap indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie
2004-04-13
When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time is proportional to the index size. This indicates that the compressed bitmap indices are efficient for very large datasets.
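A simplified sketch of the WAH word layout follows: the bitmap is cut into 31-bit groups, mixed groups become literal words, and runs of all-zero or all-one groups are merged into "fill" words. This shows only the encoding layout and is not the actual implementation, which also provides fast logical operations directly on the compressed words.

```python
# Simplified sketch of word-aligned hybrid (WAH) run-length encoding of a bitmap.
#   literal word: MSB = 0, low 31 bits hold one 31-bit group verbatim
#   fill word   : MSB = 1, next bit = fill value, low 30 bits = number of groups
GROUP = 31

def wah_encode(bits):
    bits = bits + [0] * (-len(bits) % GROUP)                   # pad to multiple of 31
    groups = [bits[i:i + GROUP] for i in range(0, len(bits), GROUP)]
    words = []
    for g in groups:
        if all(b == 0 for b in g) or all(b == 1 for b in g):
            fill = g[0]
            if words and (words[-1] >> 31) == 1 and ((words[-1] >> 30) & 1) == fill:
                words[-1] += 1                                  # extend previous fill word
            else:
                words.append((1 << 31) | (fill << 30) | 1)      # new fill word, count = 1
        else:
            words.append(int("".join(map(str, g)), 2))          # literal word (MSB stays 0)
    return words

# Example: long runs compress into a handful of words.
bitmap = [0] * 10_000 + [1, 0, 1, 1] + [1] * 5_000
print(len(wah_encode(bitmap)), "words for", len(bitmap), "bits")
```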
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables using standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible using AVC and HEVC codecs.
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM] are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for QCAC-CS and -SRBM encoders with reduced area and total power consumption.
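The encoder-cost argument can be illustrated directly: with a sparse binary measurement matrix each measurement is the sum of only a few samples, so the number of additions drops roughly by the sparsity factor. The sizes and the column sparsity d in the sketch below are assumptions for illustration; this is not the QCAC or SRBM construction from the paper.

```python
import numpy as np

# Illustrative sketch: compressed-sensing measurement y = Phi @ x with a sparse
# binary Phi (d nonzeros per column) versus a dense random binary Phi.
rng = np.random.default_rng(0)
n, m, d = 256, 64, 3                      # signal length, measurements, nonzeros/column

phi_sparse = np.zeros((m, n), dtype=np.int8)
for col in range(n):
    rows = rng.choice(m, size=d, replace=False)
    phi_sparse[rows, col] = 1             # each sample contributes to only d measurements

phi_dense = rng.integers(0, 2, size=(m, n)).astype(np.int8)

x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)   # sparse signal
y_sparse, y_dense = phi_sparse @ x, phi_dense @ x

# Encoder cost proxy: number of additions = number of nonzeros in Phi.
print("adds, sparse Phi:", int(phi_sparse.sum()), " adds, dense Phi:", int(phi_dense.sum()))
```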
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating block artifacts, which lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.
ERGC: an efficient referential genome compression algorithm
Saha, Subrata; Rajasekaran, Sanguthevar
2015-01-01
Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
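The reference-based idea can be reduced to its simplest form: store only the positions where the target genome differs from the reference. The sketch below illustrates that principle for substitutions between equal-length sequences only; it is not the ERGC algorithm, which also handles insertions, deletions and segmentation.

```python
# Minimal illustration of reference-based compression: store only positions where
# the target differs from the reference. General idea only, not the ERGC algorithm.
def encode_vs_reference(reference: str, target: str):
    assert len(reference) == len(target)          # sketch assumes equal length, SNPs only
    return [(i, t) for i, (r, t) in enumerate(zip(reference, target)) if r != t]

def decode_vs_reference(reference: str, diffs):
    seq = list(reference)
    for i, base in diffs:
        seq[i] = base
    return "".join(seq)

ref = "ACGT" * 1000
tgt = list(ref); tgt[10] = "A"; tgt[2047] = "C"; tgt = "".join(tgt)
diffs = encode_vs_reference(ref, tgt)
assert decode_vs_reference(ref, diffs) == tgt
print(len(diffs), "differences stored instead of", len(tgt), "bases")
```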
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houssainy, Sammy; Janbozorgi, Mohammad; Kavehpour, Pirouz
Compressed Air Energy Storage (CAES) can potentially allow renewable energy sources to meet electricity demands as reliably as coal-fired power plants. However, conventional CAES systems rely on the combustion of natural gas, require large storage volumes, and operate at high pressures, which possess inherent problems such as high costs, strict geological locations, and the production of greenhouse gas emissions. A novel and patented hybrid thermal-compressed air energy storage (HT-CAES) design is presented which allows a portion of the available energy, from the grid or renewable sources, to operate a compressor and the remainder to be converted and stored in the form of heat, through joule heating in a sensible thermal storage medium. The HT-CAES design includes a turbocharger unit that provides supplementary mass flow rate alongside the air storage. The hybrid design and the addition of a turbocharger have the beneficial effect of mitigating the shortcomings of conventional CAES systems and its derivatives by eliminating combustion emissions and reducing storage volumes, operating pressures, and costs. Storage efficiency and cost are the two key factors, which upon integration with renewable energies would allow the sources to operate as independent forms of sustainable energy. The potential of the HT-CAES design is illustrated through a thermodynamic optimization study, which outlines key variables that have a major impact on the performance and economics of the storage system. The optimization analysis quantifies the required distribution of energy between thermal and compressed air energy storage, for maximum efficiency, and for minimum cost. This study provides a roundtrip energy and exergy efficiency map of the storage system and illustrates a trade-off that exists between its capital cost and performance.
Micro-optical fabrication by ultraprecision diamond machining and precision molding
NASA Astrophysics Data System (ADS)
Li, Hui; Li, Likai; Naples, Neil J.; Roblee, Jeffrey W.; Yi, Allen Y.
2017-06-01
Ultraprecision diamond machining and high-volume molding for affordable, high-precision, high-performance optical elements are becoming a viable process in the optical industry for low-cost, high-quality micro-optical component manufacturing. In this process, high-precision micro-optical molds are first fabricated using ultraprecision single point diamond machining, followed by high-volume production methods such as compression or injection molding. In the last two decades, there have been steady improvements in ultraprecision machine design and performance, particularly with the introduction of both slow tool and fast tool servo. Today optical molds, including freeform surfaces and microlens arrays, are routinely diamond machined to final finish without post-machining polishing. For consumers, compression molding or injection molding provides efficient and high-quality optics at extremely low cost. In this paper, ultraprecision machine design and machining processes such as slow tool and fast tool servo are first described, then both compression molding and injection molding of polymer optics are discussed. To implement precision optical manufacturing by molding, numerical modeling can be included in the future as a critical part of the manufacturing process to ensure high product quality.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
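One standard closed form for a linear decoder trained under the MMSE criterion is shown below; it is given as an assumed illustration of the idea, since the paper's exact training procedure may differ.

```latex
% Linear MMSE decoder for measurements y = \Phi x + n (x, n zero-mean, independent)
\hat{x} = P\,y,\qquad
P = \arg\min_{W}\ \mathbb{E}\left\lVert x - W y \right\rVert_2^{2}
  = R_{xy} R_{yy}^{-1}
  = R_{x}\Phi^{\mathsf{T}}\!\left(\Phi R_{x}\Phi^{\mathsf{T}} + R_{n}\right)^{-1}
```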
Upper-Thermospheric Observations and Neutral-Gas Dynamics at High Latitudes During Solar Maximum.
1987-01-01
...quickly, allowing the higher-latitude lines to spring back in towards the Earth (Vallance-Jones, 1974). This also compresses and heats the plasma at high latitudes. [Fragmentary text; recoverable references: Torr, M. R., P. G. Richards, and D. G. Torr, A new determination of the ultraviolet heating efficiency of the thermosphere, J. Geophys. Res., 85, 6819-6826, 1980b; Torr, M. R., D. G. Torr, and P. G. Richards, The solar ultraviolet heating efficiency of the midlatitude thermosphere, Geophys. Res. Lett., 7, 373-376.]
Comparison of compression efficiency between HEVC/H.265 and VP9 based on subjective assessments
NASA Astrophysics Data System (ADS)
Řeřábek, Martin; Ebrahimi, Touradj
2014-09-01
The current increasing effort of broadcast providers to transmit UHD (Ultra High Definition) content is likely to increase demand for ultra high definition televisions (UHDTVs). To compress UHDTV content, several alternative encoding mechanisms exist. In addition to internationally recognized standards, open access proprietary options, such as the VP9 video encoding scheme, have recently appeared and are gaining popularity. One of the main goals of these encoders is to efficiently compress video sequences beyond HDTV resolution for various scenarios, such as broadcasting or internet streaming. In this paper, a broadcast scenario rate-distortion performance analysis and mutual comparison of one of the latest video coding standards, H.265/HEVC, with the recently released proprietary video coding scheme VP9 is presented. Also, H.264/AVC, currently one of the most popular and widely deployed encoders, has been included in the evaluation to serve as a comparison baseline. The comparison is performed by means of subjective evaluations showing actual differences between encoding algorithms in terms of perceived quality. The results indicate a general dominance of the HEVC based encoding algorithm in comparison to the other alternatives, while VP9 and AVC show similar performance.
A Basic Behavior of CNG DI Combustion in a Spark-Ignited Rapid Compression Machine
NASA Astrophysics Data System (ADS)
Huang, Zuohua; Shiga, Seiichi; Ueda, Takamasa; Jingu, Nobuhisa; Nakamura, Hisao; Ishima, Tsuneaki; Obokata, Tomio; Tsue, Mitsuhiro; Kono, Michikata
The basic characteristics of compressed natural gas direct-injection (CNG DI) combustion were studied by using a rapid compression machine. Results show that, compared with a homogeneous mixture, CNG DI has a short combustion duration, a high pressure rise due to combustion, and a high rate of heat release, which are considered to come from the charge stratification and the gas flow generated by the fuel injection. CNG DI can realize extremely lean combustion, reaching an equivalence ratio φ of 0.03. Combustion duration, maximum pressure rise due to combustion and combustion efficiency are found to be insensitive to the injection modes. Unburned methane showed almost the same level as that of homogeneous mixture combustion. CO increased steeply with the increase in φ when φ was greater than 0.8 due to the excessive stratification, and the NOx peak value shifted to the region of lower φ. Combustion inefficiency remains less than 0.08 in the range of φ from 0.1 to 0.9 and increases at very low φ due to bulk quenching and at higher φ due to excessive stratification. The combustion efficiency estimated from combustion products shows good agreement with that of the heat release analysis.
NASA Technical Reports Server (NTRS)
Novik, Dmitry A.; Tilton, James C.
1993-01-01
The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.
Wang, Gang; Zhao, Zhikai; Ning, Yongjie
2018-05-28
With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps generate increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking gas data in the mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To exploit the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method, Total Variation Sparsity based on Multi-Hop (TVS-MH), is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
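The core idea — recovering a gradient-sparse gas-concentration signal from a small number of random linear measurements — can be illustrated with a minimal sketch. This is not the TVS-MH algorithm from the paper; it is a generic total-variation-regularized least-squares reconstruction using a smoothed TV term, and all sizes and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n, m = 200, 60                      # signal length and number of measurements (assumed)
t = np.linspace(0, 1, n)
x_true = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7],
                      [0.5, 1.5, 0.8])          # piecewise-constant "gas" signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x_true                                # compressive measurements

lam, eps = 0.05, 1e-6                           # TV weight and smoothing constant (assumed)

def objective(x):
    resid = Phi @ x - y
    tv = np.sum(np.sqrt(np.diff(x) ** 2 + eps))  # smoothed total variation of the signal
    return 0.5 * resid @ resid + lam * tv

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
x_hat = res.x
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```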
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
Comparing NetCDF and SciDB on managing and querying 5D hydrologic dataset
NASA Astrophysics Data System (ADS)
Liu, Haicheng; Xiao, Xiao
2016-11-01
Efficiently extracting information from high-dimensional hydro-meteorological modelling datasets requires smart solutions. Traditional methods are mostly file based; files can be edited and accessed handily, but they suffer from efficiency problems due to their contiguous storage structure. Others propose databases as an alternative, with advantages such as native functionality for manipulating multidimensional (MD) arrays, smart caching strategies, and scalability. In this research, NetCDF file-based solutions and the multidimensional array database management system (DBMS) SciDB, which applies a chunked storage structure, are benchmarked to determine the best solution for storing and querying a large 5D hydrologic modelling dataset. The effect of data storage configurations, including chunk size, dimension order, and compression, on query performance is explored. Results indicate that the dimension order used to organize storage of the 5D data has a significant influence on query performance if the chunk size is very large, but the effect becomes insignificant when the chunk size is properly set. Compression in SciDB mostly has a negative influence on query performance. Caching is an advantage but may be influenced by the execution of different query processes. On the whole, the NetCDF solution without compression is in general more efficient than the SciDB DBMS.
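For readers unfamiliar with how the storage knobs studied here are set on the file side, a minimal sketch using the Python netCDF4 package is shown below; the dimension names, sizes, and chunk shape are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
from netCDF4 import Dataset

# Create a small 5D variable (ensemble, time, level, lat, lon) with explicit
# chunking and optional zlib compression -- the two knobs examined in the paper.
with Dataset("hydro_5d.nc", "w") as ds:
    dims = {"ensemble": 4, "time": 24, "level": 10, "lat": 50, "lon": 50}
    for name, size in dims.items():
        ds.createDimension(name, size)

    var = ds.createVariable(
        "discharge", "f4", tuple(dims),
        chunksizes=(1, 24, 1, 50, 50),   # chunk shape: full time series per grid cell (assumed)
        zlib=False,                      # the study found compression often hurt query times
    )
    var[:] = np.random.rand(*dims.values()).astype("f4")
```

Choosing the chunk shape to match the dominant query pattern (here, time-series extraction) is exactly the kind of configuration decision whose impact the paper benchmarks.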
Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves
Bayındır, Cihan
2016-01-01
In this paper, an approach for decreasing the computational effort required for spectral simulations of fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and relies on using a smaller number of spectral components than the classical spectral method. After performing the time integration with a smaller number of spectral components and applying the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with significantly better efficiency than the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method, it is shown that the proposed methodology can simulate linear and fully nonlinear ocean waves with negligible difference in accuracy and with great efficiency, reducing the computation time significantly, especially for long time evolutions. PMID:26911357
Matched metal die compression molded structural random fiber sheet molding compound flywheel
Kulkarni, Satish V.; Christensen, Richard M.; Toland, Richard H.
1985-01-01
A flywheel (10) is described that is useful for energy storage in a hybrid vehicle automotive power system or in some stationary applications. The flywheel (10) has a body of essentially planar isotropic high strength structural random fiber sheet molding compound (SMC-R). The flywheel (10) may be economically produced by a matched metal die compression molding process. The flywheel (10) makes energy intensive efficient use of a fiber/resin composite while having a shape designed by theory assuming planar isotropy.
Kulkarni, S.V.; Christensen, R.M.; Toland, R.H.
1980-09-24
A flywheel is described that is useful for energy storage in a hybrid vehicle automotive power system or in some stationary applications. The flywheel has a body of essentially planar isotropic high strength structural random fiber sheet molding compound (SMC-R). The flywheel may be economically produced by a matched metal die compression molding process. The flywheel makes energy intensive efficient use of a fiber/resin composite while having a shape designed by theory assuming planar isotropy.
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
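As a rough illustration of the idea behind delta-style predictive coding, the sketch below computes first and second differences of a synthetic scanline and measures how much more compressible the residuals are; it is a generic toy, not the coding system of the abstract, and the use of zlib as a stand-in entropy coder is an assumption.

```python
import numpy as np
import zlib

rng = np.random.default_rng(1)

# Toy "scanline": a slowly drifting signal, mimicking correlated adjacent pixels.
line = (np.cumsum(rng.integers(-2, 3, size=4096)) + 128).astype(np.int16)

delta = np.diff(line, prepend=line[:1])          # first differences (delta coding)
ddelta = np.diff(delta, prepend=delta[:1])       # second differences (double delta coding)

def packed_size(a: np.ndarray) -> int:
    """Approximate entropy-coded size using zlib as a stand-in entropy coder."""
    return len(zlib.compress(a.astype(np.int16).tobytes(), 9))

print("raw   :", packed_size(line), "bytes")
print("delta :", packed_size(delta), "bytes")
print("ddelta:", packed_size(ddelta), "bytes")
```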
Soliton production with nonlinear homogeneous lines
Elizondo-Decanini, Juan M.; Coleman, Phillip D.; Moorman, Matthew W.; ...
2015-11-24
Low- and high-voltage soliton waves were produced and used to demonstrate collision and compression using diode-based nonlinear transmission lines. Experiments demonstrate soliton addition and compression using homogeneous nonlinear lines. We built the nonlinear lines using commercially available diodes. These diodes are chosen after their capacitance versus voltage dependence is used in a model and the line design characteristics are calculated and simulated. Nonlinear ceramic capacitors are then used to demonstrate high-voltage pulse amplification and compression. The line is designed such that a simple capacitor discharge, input signal, develops soliton trains in as few as 12 stages. We also demonstrated output voltages in excess of 40 kV using Y5V-based commercial capacitors. The results show some key features that determine efficient production of trains of solitons in the kilovolt range.
Quantum autoencoders for efficient compression of quantum data
NASA Astrophysics Data System (ADS)
Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan
2017-12-01
Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; discarding them using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
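A minimal sketch of the general idea — rank dimensions by an importance score, keep the top ones, and binarize — is shown below. The variance-based score and the dimension counts are illustrative assumptions; the paper's actual importance-sorting algorithm is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "FV/VLAD" features: a few informative dimensions buried in noise.
n, d, d_keep = 500, 1024, 128
X = rng.standard_normal((n, d)) * 0.1
X[:, :32] += rng.standard_normal((n, 32))        # informative dimensions (assumed)

# Unsupervised importance score: per-dimension variance (a simple stand-in).
importance = X.var(axis=0)
keep = np.argsort(importance)[::-1][:d_keep]     # indices of the selected dimensions

# Feature selection followed by 1-bit quantization (sign binarization).
X_sel = X[:, keep]
X_bin = (X_sel > 0).astype(np.uint8)             # 1 bit per kept dimension

print("original:", X.nbytes, "bytes  ->  selected + 1-bit:",
      np.packbits(X_bin, axis=1).nbytes, "bytes")
```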
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotlin, J.J.; Dunteman, N.R.; Scott, D.I.
1983-01-01
The current Electro-Motive Division 645 Series turbocharged engines are the Model FB and EC. The FB engine combines the highest thermal efficiency with the highest specific output of any EMD engine to date. The FB Series incorporates a 16:1 compression ratio with a fire ring piston and an improved turbocharger design. Engine components included in the FB engine provide very high output levels with exceptional reliability. This paper also describes the performance of the lower-rated Model EC engine series, which features high thermal efficiency and utilizes many engine components well proven in service and basic to the Model FB Series.
NASA Technical Reports Server (NTRS)
Schuette, Evan H
1945-01-01
Design charts are developed for 24s-t aluminum-alloy flat compression panels with longitudinal z-section stiffeners. These charts make possible the design of the lightest panels of this type for a wide range of design requirements. Examples of the use of the charts are given and it is pointed out on the basis of these examples that, over a wide range of design conditions, the maintenance of buckle-free surfaces does not conflict with the achievement of high structural efficiency. The achievement of the maximum possible structural efficiency with 24s-t aluminum-alloy panels, however, requires closer stiffener spacings than those now in common use.
CoGI: Towards Compressing Genomes as an Image.
Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong
2015-01-01
Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress the data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences into a two-dimensional binary image (or bitmap) and then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
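The transformation step — mapping a nucleotide string onto a two-dimensional binary image — can be sketched in a few lines. The 2-bit-per-base encoding and the image width below are illustrative assumptions; CoGI's actual bitmap layout and the rectangular partition coder are not reproduced here.

```python
import numpy as np

# Map each base to two bits, then reshape the bit stream into a 2D binary image.
BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}  # assumed encoding

def genome_to_bitmap(seq: str, width: int = 64) -> np.ndarray:
    bits = [b for base in seq for b in BASE_BITS.get(base, (0, 0))]
    pad = (-len(bits)) % width                     # pad the last row with zeros
    bits.extend([0] * pad)
    return np.array(bits, dtype=np.uint8).reshape(-1, width)

rng = np.random.default_rng(3)
seq = "".join(rng.choice(list("ACGT"), size=2048))
bitmap = genome_to_bitmap(seq)
print("bitmap shape:", bitmap.shape, " ones:", int(bitmap.sum()))
```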
An O(Nm²) Plane Solver for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.
1999-01-01
A hierarchical multigrid algorithm for efficient steady solutions to the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect correction implicit equation. Multigrid analyses which include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line-implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles to attain convergence is dependent on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous flow simulations on highly stretched meshes.
Fahmy, Gamal; Black, John; Panchanathan, Sethuraman
2006-06-01
Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.
Compressing DNA sequence databases with coil.
White, W Timothy J; Hendy, Michael D
2008-05-20
Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
Compressing DNA sequence databases with coil
White, W Timothy J; Hendy, Michael D
2008-01-01
Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794
ERGC: an efficient referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2015-11-01
Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu.
SCALCE: boosting sequence compression algorithms using locally consistent encoding.
Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk
2012-12-01
The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, using the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to a 3.34-fold improvement in the compression rate and a 1.26-fold improvement in running time. Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary data are available at Bioinformatics online.
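The boosting idea — reorder reads so that similar reads end up adjacent before handing them to a generic compressor — can be illustrated with a deliberately simplified sketch: reads are sorted by their lexicographically smallest k-mer (a crude stand-in for Locally Consistent Parsing core strings) and the gzip sizes before and after reordering are compared. All parameters and the simulated reads are illustrative assumptions.

```python
import zlib
import numpy as np

rng = np.random.default_rng(4)

# Simulate overlapping reads sampled from a single reference sequence.
ref = "".join(rng.choice(list("ACGT"), size=20000))
reads = [ref[p:p + 100] for p in rng.integers(0, len(ref) - 100, size=2000)]

def min_kmer(read: str, k: int = 12) -> str:
    """Lexicographically smallest k-mer, used as a crude reordering key."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def gzip_size(reads_list) -> int:
    return len(zlib.compress("\n".join(reads_list).encode(), 9))

reordered = sorted(reads, key=min_kmer)
print("gzip, original order :", gzip_size(reads), "bytes")
print("gzip, reordered reads:", gzip_size(reordered), "bytes")
```

Because the reordering places reads sharing substrings next to each other, the downstream general-purpose compressor finds longer matches, which is the effect SCALCE exploits at scale.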
An ROI multi-resolution compression method for 3D-HEVC
NASA Astrophysics Data System (ADS)
Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan
2017-09-01
3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of the video resolution, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and by compressing multi-resolution preprocessed video as alternative data according to the network conditions. First, the semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined using the contour neighborhood along with the face region and the foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.
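The non-ROI substitution step can be sketched with plain numpy: a low-resolution frame is upsampled (here by nearest-neighbour repetition rather than the Laplace-based upsampling of the paper) and used to replace pixels outside a binary ROI mask. The frame sizes and the mask are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

H, W, scale = 240, 320, 4
hi_res = rng.integers(0, 256, size=(H, W), dtype=np.uint8)   # stand-in high-resolution frame
lo_res = hi_res[::scale, ::scale].copy()                      # stand-in low-resolution version

# Upsample the low-resolution frame (nearest-neighbour as a simple stand-in).
up = np.repeat(np.repeat(lo_res, scale, axis=0), scale, axis=1)

# Binary ROI mask: keep full detail only inside a central rectangle (assumed ROI).
roi = np.zeros((H, W), dtype=bool)
roi[60:180, 80:240] = True

# Replace non-ROI pixels with the upsampled low-resolution content before re-encoding.
mixed = np.where(roi, hi_res, up)
print("pixels kept at full resolution:", int(roi.sum()), "of", H * W)
```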
Flow-through compression cell for small-angle and ultra-small-angle neutron scattering measurements
NASA Astrophysics Data System (ADS)
Hjelm, Rex P.; Taylor, Mark A.; Frash, Luke P.; Hawley, Marilyn E.; Ding, Mei; Xu, Hongwu; Barker, John; Olds, Daniel; Heath, Jason; Dewers, Thomas
2018-05-01
In situ measurements of geological materials under compression and with hydrostatic fluid pressure are important in understanding their behavior under field conditions, which in turn provides critical information for application-driven research. In particular, understanding the role of nano- to micro-scale porosity in the subsurface liquid and gas flow is critical for the high-fidelity characterization of the transport and more efficient extraction of the associated energy resources. In other applications, where parts are produced by the consolidation of powders by compression, the resulting porosity and crystallite orientation (texture) may affect its in-use characteristics. Small-angle neutron scattering (SANS) and ultra SANS are ideal probes for characterization of these porous structures over the nano to micro length scales. Here we show the design, realization, and performance of a novel neutron scattering sample environment, a specially designed compression cell, which provides compressive stress and hydrostatic pressures with effective stress up to 60 MPa, using the neutron beam to probe the effects of stress vectors parallel to the neutron beam. We demonstrate that the neutron optics is suitable for the experimental objectives and that the system is highly stable to the stress and pressure conditions of the measurements.
Mpeg2 codec HD improvements with medical and robotic imaging benefits
NASA Astrophysics Data System (ADS)
Picard, Wayne F. J.
2010-02-01
In this report, we propose an efficient scheme to use High Definition Television (HDTV) in a console or notebook format as a computer terminal in addition to its role as a TV display unit. In the proposed scheme, we assume that the main computer is situated at a remote location. The computer raster in the remote server is compressed using an HD E->Mpeg2 encoder and transmitted to the terminal at home. The built-in E->Mpeg2 decoder in the terminal decompresses the compressed bit stream and displays the raster. The terminal will be fitted with a mouse and keyboard, through which the interaction with the remote computer server can be performed via a communications back channel. The terminal in a notebook format can thus be used as a high-resolution computer and multimedia device. We will consider developments such as the required HD enhanced Mpeg2 resolution (E->Mpeg2) and its medical ramifications due to improvements in compressed image quality, with 2D-to-3D conversion (Mpeg3) and using the compressed Discrete Cosine Transform coefficients in the reality compression of vision and control of medical robotic surgeons.
NASA Astrophysics Data System (ADS)
Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir
2015-11-01
The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly acquired from the newly allowed prediction schemes, including variable block modes. However, these schemes require high complexity to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast intermode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the high profile, and results show that the proposed source-coding algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in the total bit rate.
Tomographic Image Compression Using Multidimensional Transforms.
ERIC Educational Resources Information Center
Villasenor, John D.
1994-01-01
Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)
Safiuddin, Md.; Raman, Sudharshan N.; Abdus Salam, Md.; Jumaat, Mohd. Zamin
2016-01-01
Modeling is a very useful method for the performance prediction of concrete. Most of the models available in literature are related to the compressive strength because it is a major mechanical property used in concrete design. Many attempts were taken to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study has used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model has been developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model. The remaining 30% of the data were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) and the percentage of good patterns was 0.001 and ≈100%, respectively. The predicted compressive strength values obtained from the trained ANN model were much closer to the experimental values of compressive strength. The coefficient of determination (R2) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the higher degree of accuracy of the network pattern. Furthermore, the predicted compressive strength was found very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low with a mean value of 1.74 MPa and 3.13%, respectively, which indicated that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN. PMID:28773520
Safiuddin, Md; Raman, Sudharshan N; Abdus Salam, Md; Jumaat, Mohd Zamin
2016-05-20
Modeling is a very useful method for the performance prediction of concrete. Most of the models available in literature are related to the compressive strength because it is a major mechanical property used in concrete design. Many attempts were taken to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study has used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model has been developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model. The remaining 30% of the data were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) and the percentage of good patterns was 0.001 and ≈100%, respectively. The predicted compressive strength values obtained from the trained ANN model were much closer to the experimental values of compressive strength. The coefficient of determination (R²) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the higher degree of accuracy of the network pattern. Furthermore, the predicted compressive strength was found very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low with a mean value of 1.74 MPa and 3.13%, respectively, which indicated that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN.
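A minimal sketch of training a small neural network to map mix proportions to compressive strength is shown below, using scikit-learn's MLPRegressor on synthetic placeholder data with the same 70/30 train/test split described above. The feature set, network size, and all data values are illustrative assumptions and do not reproduce the 20-mix dataset or the network architecture of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)

# Synthetic placeholder mixes: [cement, POFA, water/binder, superplasticizer, aggregate] (assumed features).
X = rng.uniform([300, 0, 0.25, 2, 800], [550, 150, 0.40, 8, 1100], size=(200, 5))
# Hypothetical strength relation used only to generate toy targets (MPa).
y = 0.12 * X[:, 0] + 0.05 * X[:, 1] - 80 * X[:, 2] + 2 * X[:, 3] + rng.normal(0, 2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out mixes:", round(r2_score(y_te, model.predict(X_te)), 3))
```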
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here, the author modified the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream that is stored or transmitted. It was applied to the compression of multichannel ECG data. A specific procedure based on the modified algorithm is also presented for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding compression of multichannel ECG data. Furthermore, in order to compress a single signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
NASA Astrophysics Data System (ADS)
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
Simpson, Jared
2018-01-24
Wellcome Trust Sanger Institute's Jared Simpson on Memory efficient sequence analysis using compressed data structures at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... DEPARTMENT OF ENERGY Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops AGENCY: Fuel Cell Technologies Program, Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of meeting. SUMMARY: The Systems Integration group of...
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
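The "lossy plus residual coding" principle with a guaranteed maximum absolute error can be illustrated with a small sketch: a truncated-SVD approximation of the channel-by-time matrix serves as a stand-in lossy layer, and the residual is uniformly quantized so that the reconstruction error never exceeds a chosen bound. The decomposition, rank, and error bound are illustrative assumptions, not the models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

channels, samples, rank, max_err = 16, 1000, 4, 0.5   # assumed sizes and error bound

# Toy multichannel EEG: a few shared latent sources plus noise (spatially correlated).
sources = rng.standard_normal((rank, samples))
mixing = rng.standard_normal((channels, rank))
X = mixing @ sources + 0.1 * rng.standard_normal((channels, samples))

# Lossy layer: rank-limited SVD approximation (stand-in for the paper's decompositions).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_lossy = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

# Residual layer: uniform quantization with step 2*max_err guarantees |error| <= max_err.
residual = X - X_lossy
q = np.round(residual / (2 * max_err)).astype(np.int32)   # integers for the entropy coder
X_rec = X_lossy + q * (2 * max_err)

print("max abs reconstruction error:", float(np.abs(X - X_rec).max()), "<=", max_err)
```

Setting max_err to zero turns the same pipeline into a lossless coder, which mirrors the near-lossless/lossless trade-off described in the abstract.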
NASA Technical Reports Server (NTRS)
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management and communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To more efficiently transmit the data obtained by the Rice algorithm, it is required to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach for data compression using dynamic programming.
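Since the Rice algorithm is central to this passage, a minimal Golomb-Rice encoder/decoder for non-negative residuals is sketched below; the parameter choice and the assumption that signed prediction residuals have already been mapped to non-negative integers are illustrative, and this is not NASA's extended Rice implementation.

```python
def rice_encode(values, k):
    """Golomb-Rice code each non-negative integer with parameter k (divisor 2**k)."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                               # unary quotient, zero-terminated
        bits.extend((r >> i) & 1 for i in reversed(range(k)))    # k-bit remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:                                      # read unary quotient
            q, i = q + 1, i + 1
        i += 1                                                   # skip the terminating zero
        r = 0
        for _ in range(k):                                       # read k-bit remainder
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

residuals = [0, 3, 1, 7, 2, 12, 0, 5]                            # toy non-negative prediction residuals
encoded = rice_encode(residuals, k=2)
assert rice_decode(encoded, k=2, count=len(residuals)) == residuals
print(len(encoded), "bits for", len(residuals), "values")
```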
Hybrid-drive implosion system for ICF targets
Mark, James W.
1988-08-02
Hybrid-drive implosion systems (20,40) for ICF targets (10,22,42) are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator (12) surroundingly disposed around fusion fuel (14). The ablator is first compressed to higher density by a laser system (24), or by an ion beam system (44), that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system (30,48) that is optimized for this second phase of operation of the target. The fusion fuel (14) is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion.
Hybrid-drive implosion system for ICF targets
Mark, James W.
1988-01-01
Hybrid-drive implosion systems (20,40) for ICF targets (10,22,42) are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator (12) surroundingly disposed around fusion fuel (14). The ablator is first compressed to higher density by a laser system (24), or by an ion beam system (44), that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system (30,48) that is optimized for this second phase of operation of the target. The fusion fuel (14) is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion.
Hybrid-drive implosion system for ICF targets
Mark, J.W.K.
1987-10-14
Hybrid-drive implosion systems for ICF targets are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator surroundingly disposed around fusion fuel. The ablator is first compressed to higher density by a laser system, or by an ion beam system, that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system that is optimized for this second phase of operation of the target. The fusion fuel is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion. 3 figs.
López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín
2008-01-01
This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and other three contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts of TIFF format images were compared with the other three groups. Overall, differences in the count of the images increased with the percentage of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
Lone-pair interactions and photodissociation of compressed nitrogen trifluoride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurzydłowski, D., E-mail: dkurzydlowski@uw.edu.pl; Department of Biogeochemistry, Max Planck Institute for Chemistry, 55128 Mainz; Wang, H. B.
2014-08-14
High-pressure behavior of nitrogen trifluoride (NF3) was investigated by Raman and IR spectroscopy at pressures up to 55 GPa and room temperature, as well as by periodic calculations up to 100 GPa. Experimentally, we find three solid-solid phase transitions at 9, 18, and 39.5 GPa. Vibrational spectroscopy indicates that in all observed phases NF3 remains in the molecular form, in contrast to the behavior of compressed ammonia. This finding is confirmed by density functional theory calculations, which also indicate that the phase transitions of compressed NF3 are governed by the interplay between lone-pair interactions and efficient molecule packing. Although nitrogen trifluoride is molecular in the whole pressure range studied, we show that it can be photodissociated by mid-IR laser radiation. This finding paves the way for the use of NF3 as an oxidizing and fluorinating agent in high-pressure reactions.
Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2018-02-01
This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
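A minimal sketch of a region-adaptive least-squares pixel predictor is given below: for each region, the weights of a three-neighbour (left, top, top-left) linear predictor are fit by ordinary least squares, and the prediction residuals are what an entropy coder would then compress. The neighbour set, the quadrant "regions", and the toy image are illustrative assumptions, not the paper's exact predictor design.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "medical" image: smooth gradient plus mild noise.
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
img = (2.0 * yy + 1.5 * xx + rng.normal(0, 1.0, (H, W))).astype(np.float64)

def ls_predictor_residuals(region: np.ndarray) -> np.ndarray:
    """Fit a least-squares predictor from (left, top, top-left) neighbours for one region."""
    left = region[1:, :-1].ravel()
    top = region[:-1, 1:].ravel()
    topleft = region[:-1, :-1].ravel()
    target = region[1:, 1:].ravel()
    A = np.column_stack([left, top, topleft])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)       # per-region predictor weights
    return target - A @ w                                 # residuals to be entropy coded

# Treat quadtree-style quadrants as the "regions" for this sketch.
for name, region in {"top-left": img[:32, :32], "bottom-right": img[32:, 32:]}.items():
    res = ls_predictor_residuals(region)
    print(name, "residual std:", round(res.std(), 3), "vs raw std:", round(region.std(), 3))
```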
Buléon, Clément; Delaunay, Julie; Parienti, Jean-Jacques; Halbout, Laurent; Arrot, Xavier; Gérard, Jean-Louis; Hanouz, Jean-Luc
2016-09-01
Chest compressions require physical effort, leading to increased fatigue and rapid degradation in the quality of cardiopulmonary resuscitation over time. Despite the harmful effect of interrupting chest compressions, current guidelines recommend that rescuers switch every 2 minutes. The impact on the quality of chest compressions during extended cardiopulmonary resuscitation has yet to be assessed. We conducted a randomized crossover study on a manikin (ResusciAnne; Laerdal). After randomization, 60 professional emergency rescuers performed 2 × 10 minutes of continuous chest compressions with and without a feedback device (CPRmeter). Efficient compression rate (primary outcome) was defined as the frequency target reached along with depth and leaning at the same time (recorded continuously). The 10-minute mean efficient compression rate was significantly better in the feedback group: 42% vs 21% (P < .001). There was no significant difference between the first (43%) and the tenth minute (36%; P = .068) with feedback. Conversely, a significant difference was evident from the second minute without feedback (35% initially vs 27%; P < .001). The difference in efficient compression rate with and without feedback was significant every minute from the second minute onwards. CPRmeter feedback significantly improved chest compression depth from the first minute, leaning from the second minute, and rate from the third minute. A real-time feedback device delivers longer effective, steadier chest compressions over time. An extrapolation of these results from simulation may allow rescuer switches to be carried out beyond the currently recommended 2 minutes when a feedback device is used.
NASA Technical Reports Server (NTRS)
Ellison, D. C.; Jones, F. C.; Eichler, D.
1983-01-01
Both hydrodynamic calculations (Drury and Volk, 1981, and Axford et al., 1982) and kinetic simulations imply the existence of thermal subshocks in high-Mach-number cosmic-ray-mediated shocks. The injection efficiency of particles from the thermal background into the diffusive shock-acceleration process is determined in part by the sharpness and compression ratio of these subshocks. Results are reported for a Monte Carlo simulation that includes both the back reaction of accelerated particles on the inflowing plasma, producing a smoothing of the shock transition, and the free escape of particles allowing arbitrarily large overall compression ratios in high-Mach-number steady-state shocks. Energy spectra and estimates of the proportion of thermal ions accelerated to high energy are obtained.
Data compression strategies for ptychographic diffraction imaging
NASA Astrophysics Data System (ADS)
Loetgering, Lars; Rose, Max; Treffer, David; Vartanyants, Ivan A.; Rosenhahn, Axel; Wilhein, Thomas
2017-12-01
Ptychography is a computational imaging method for solving inverse scattering problems. To date, the high amount of redundancy present in ptychographic data sets requires computer memory that is orders of magnitude larger than the retrieved information. Here, we propose and compare data compression strategies that significantly reduce the amount of data required for wavefield inversion. Information metrics are used to measure the amount of data redundancy present in ptychographic data. Experimental results demonstrate the technique to be memory efficient and stable in the presence of systematic errors such as partial coherence and noise.
Oxygen-enriched air for MHD power plants
NASA Technical Reports Server (NTRS)
Ebeling, R. W., Jr.; Cutting, J. C.; Burkhart, J. A.
1979-01-01
Cryogenic air-separation process cycle variations and compression schemes are examined. They are designed to minimize net system power required to supply pressurized, oxygen-enriched air to the combustor of an MHD power plant with a coal input of 2000 MWt. Power requirements and capital costs for oxygen production and enriched air compression for enrichment levels from 13 to 50% are determined. The results are presented as curves from which total compression power requirements can be estimated for any desired enrichment level at any delivery pressure. It is found that oxygen enrichment and recuperative heating of MHD combustor air to 1400 F yields near-term power plant efficiencies in excess of 45%. A minimum power compression system requires 167 MW to supply 330 lb of oxygen per second and costs roughly 100 million dollars. Preliminary studies show MHD/steam power plants to be competitive with plants using high-temperature air preheaters burning gas.
High efficient optical remote sensing images acquisition for nano-satellite-framework
NASA Astrophysics Data System (ADS)
Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi
2017-09-01
It is more difficult and challenging to implement nano-satellite (NanoSat) based optical Earth observation missions than conventional satellite missions because of limitations on volume, weight, and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and disk space, since it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat-based optical Earth observation applications. The entire image acquisition and compression process can be integrated into the photodetector array chip, so the output data of the chip are already compressed. An extra image compression unit is therefore no longer needed, and the power, volume, and weight consumed by a conventional onboard image compression unit can be largely saved. The advantages of the proposed framework are: image acquisition and image compression are combined into a single step; it can easily be built in a CMOS architecture; a quick view can be provided without reconstruction; and, for a given compression ratio, the reconstructed image quality is much better than that of CS-based methods. The framework holds promise for wide use in the future.
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions has been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
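A rough numerical illustration of the idea (assuming the signal is sparse in the DCT basis and using a plain orthogonal matching pursuit as the reconstruction step; the paper's packetization, interleaving, and solver are not reproduced): packet loss keeps only a random subset of samples, which act as the CS measurements.

```python
import numpy as np
from scipy.fftpack import idct

def omp(A, y, k):
    """Basic orthogonal matching pursuit: find a k-sparse x with y ~ A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

n, k = 256, 8
rng = np.random.default_rng(1)
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Psi = idct(np.eye(n), norm='ortho', axis=0)      # columns: DCT basis vectors
signal = Psi @ coeffs                            # transmitted sparse signal

received = np.sort(rng.choice(n, int(0.6 * n), replace=False))  # ~40% packet loss
A = Psi[received, :]                             # effective sensing matrix
rec = Psi @ omp(A, signal[received], k)          # reconstruct at the receiver
print("max reconstruction error:", np.max(np.abs(rec - signal)))
```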
Mechanical Properties and Eco-Efficiency of Steel Fiber Reinforced Alkali-Activated Slag Concrete.
Kim, Sun-Woo; Jang, Seok-Joon; Kang, Dae-Hyun; Ahn, Kyung-Lim; Yun, Hyun-Do
2015-10-30
Conventional concrete production that uses ordinary Portland cement (OPC) as a binder seems unsustainable due to its high energy consumption, natural resource exhaustion and huge carbon dioxide (CO₂) emissions. To transform the conventional process of concrete production into a more sustainable process, the replacement of energy-intensive OPC with new binders such as fly ash and alkali-activated slag (AAS) from available industrial by-products has been recognized as an alternative. This paper investigates the effect of curing conditions and steel fiber inclusion on the compressive and flexural performance of AAS concrete with a specified compressive strength of 40 MPa to evaluate the feasibility of AAS concrete as an alternative to normal concrete for CO₂ emission reduction in the concrete industry. The performance is compared with reference concrete produced using OPC. The eco-efficiency of AAS use for concrete production was also evaluated by binder intensity and CO₂ intensity based on the test results and literature data. Test results show that it is possible to produce AAS concrete with compressive and flexural performance comparable to conventional concrete. Wet curing and steel fiber inclusion improve the mechanical performance of AAS concrete. Also, the utilization of AAS as a sustainable binder can lead to significant CO₂ emission reductions and resource and energy conservation in the concrete industry.
Compressive Sensing Image Sensors-Hardware Implementation
Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram
2013-01-01
The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123
Information content exploitation of imaging spectrometer's images for lossless compression
NASA Astrophysics Data System (ADS)
Wang, Jianyu; Zhu, Zhenyu; Lin, Kan
1996-11-01
Imaging spectrometers such as MAIS produce a tremendous volume of image data, with a raw data rate of up to 5.12 Mbps, which urgently needs a real-time, efficient, and reversible compression implementation. Between a lossy scheme with a high compression ratio and a lossless scheme with high fidelity, the choice must be based on an analysis of the particular information content of each imaging spectrometer's image data. In this paper, we present a careful analysis of information-preserving compression for the imaging spectrometer MAIS, with an entropy and autocorrelation study of the hyperspectral images. First, the statistical information in an actual MAIS image, captured at Marble Bar, Australia, is measured by its entropy, conditional entropy, mutual information, and autocorrelation coefficients in both the spatial dimensions and the spectral dimension. These analyses show that there is high redundancy in the spatial dimensions, but the correlation in the spectral dimension of the raw images is smaller than expected. The main reason for the nonstationarity in the spectral dimension is attributed to the instrument's discrepancies in detector response and channel amplification across spectral bands. To restore the natural correlation, the signal is preprocessed in advance. There are two methods to accomplish this: onboard radiation calibration and normalization, with the former giving the better result. After preprocessing, the spectral correlation increases so much that it contributes substantial redundancy in addition to the spatial correlation. Finally, an onboard hardware implementation of the lossless compression is presented, with an ideal result.
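The spatial/spectral redundancy analysis described here boils down to first-order entropy and lag-1 autocorrelation estimates; the sketch below shows one simple way to compute them on a hyperspectral cube (synthetic data, not MAIS values).

```python
import numpy as np

def entropy_bits(band):
    """First-order entropy (bits per pixel) of one spectral band of integer counts."""
    counts = np.bincount(band.ravel().astype(np.int64))
    p = counts[counts > 0] / band.size
    return float(-(p * np.log2(p)).sum())

def lag1_autocorr(cube, axis):
    """Lag-1 autocorrelation along an axis (0: spectral, 1-2: spatial)."""
    a = cube.astype(np.float64) - cube.mean()
    x = np.take(a, np.arange(a.shape[axis] - 1), axis=axis)
    y = np.take(a, np.arange(1, a.shape[axis]), axis=axis)
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

rng = np.random.default_rng(0)
cube = rng.integers(0, 1024, size=(32, 64, 64))   # (bands, rows, cols)
print(entropy_bits(cube[0]),
      lag1_autocorr(cube, axis=1),                # spatial redundancy
      lag1_autocorr(cube, axis=0))                # spectral redundancy
```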
Indexing and retrieval of MPEG compressed video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.
1998-04-01
To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
Compressed Natural Gas Technology for Alternative Fuel Power Plants
NASA Astrophysics Data System (ADS)
Pujotomo, Isworo
2018-02-01
Gas has great potential to be converted into electrical energy. Indonesia has natural gas reserves for up to 50 years into the future, but the use of this gas for electricity generation is limited and unable to compete with coal. Gas converted into electricity has low electrical efficiency (25%), and the raw material is more expensive than coal. A large amount of steam from gas turbines is wasted, hence the need to utilize the exhaust gas from gas turbine units. Combined cycle technology (gas and steam power plants) is a solution to improve the efficiency of electricity generation. Among thermal units, the combined cycle power plant has a high electrical efficiency (45%). A weakness of current gas and steam power plants is that peak loads are still covered using fuel oil. Compressed Natural Gas (CNG) technology may be used to store the gas with little land use. CNG is stored at high pressure, up to 250 bar, in contrast to gas converted directly into electricity in a power plant at only 27 bar. The stored CNG is then used as a fuel to cover peak loads. The CNG conversion system at the power plant generally uses only compressed gas at higher pressure and requires little land.
NASA Astrophysics Data System (ADS)
Yoshida, Tomonori; Muto, Daiki; Tamai, Tomoya; Suzuki, Shinsuke
2018-04-01
Porous aluminum alloy with aligned unidirectional pores was fabricated by dipping A1050 tubes into A6061 semi-solid slurry. The porous aluminum alloy was processed through Equal-channel Angular Extrusion (ECAE) while preventing cracking and maintaining both the pore size and porosity by setting the insert material and loading back pressure. The specific compressive yield strength of the sample aged after 13 passes of ECAE was approximately 2.5 times higher than that of the solid-solutionized sample without ECAE. Both the energy absorption E_V and energy absorption efficiency η_V after four passes of ECAE were approximately 1.2 times higher than that of the solid-solutionized sample without ECAE. The specific yield strength was improved via work hardening and precipitation following dynamic aging during ECAE. E_V was improved by the application of high compressive stress at the beginning of the compression owing to work hardening via ECAE. η_V was improved by a steep increase of stress at low compressive strain and by a gradual increase of stress in the range up to 50 pct of compressive strain. The gradual increase of stress was caused by continuous shear fracture in the metallic part, which was due to the high dislocation density and existence of unidirectional pores parallel to the compressive direction in the structure.
Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.
Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume
2015-09-14
Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method to compress data issued from high throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring k-mer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing pertinent information for downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq or metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole genome sequencing dataset, LEON divided the original file size by more than 20. LEON is an open source software, distributed under the GNU Affero GPL license, available for download at http://gatb.inria.fr/software/leon/.
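The two ingredients named in this abstract, a k-mer membership structure held in a Bloom filter and reads encoded as an anchor k-mer plus a list of bifurcation choices, can be sketched as follows (a conceptual toy, not LEON's implementation; the hash scheme, k value, and traversal details are all placeholders).

```python
import hashlib

class Bloom:
    """Tiny Bloom filter over k-mers (illustrative, not LEON's data structure)."""
    def __init__(self, size_bits=1 << 20, hashes=3):
        self.size, self.hashes = size_bits, hashes
        self.bits = bytearray(size_bits // 8)
    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

K = 5
reads = ["ACGTACGTGA", "CGTACGTGAT"]
bloom = Bloom()
for r in reads:                          # de Bruijn graph nodes stored probabilistically
    for i in range(len(r) - K + 1):
        bloom.add(r[i:i + K])

def encode(read):
    """Encode a read as (anchor k-mer, bifurcation choices) by walking the graph."""
    anchor, choices = read[:K], []
    node = anchor
    for nxt in read[K:]:
        candidates = [b for b in "ACGT" if node[1:] + b in bloom]
        if len(candidates) > 1:          # bifurcation: record which branch was taken
            choices.append(candidates.index(nxt))
        node = node[1:] + nxt
    return anchor, choices

print(encode(reads[0]))
```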
SCALCE: boosting sequence compression algorithms using locally consistent encoding
Hach, Faraz; Numanagić, Ibrahim; Sahinalp, S Cenk
2012-01-01
Motivation: The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, using the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to 3.34 times improvement in the compression rate and 1.26 times improvement in running time. Availability: Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. Contact: fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047557
Design and evaluation of a bolted joint for a discrete carbon-epoxy rod-reinforced hat section
NASA Technical Reports Server (NTRS)
Rousseau, Carl Q.; Baker, Donald J.
1996-01-01
The use of prefabricated pultruded carbon-epoxy rods has reduced the manufacturing complexity and costs of stiffened composite panels while increasing the damage tolerance of the panels. However, repairability of these highly efficient discrete stiffeners has been a concern. Design, analysis, and test results are presented in this paper for a bolted-joint repair for the pultruded rod concept that is capable of efficiently transferring axial loads in a hat-section stiffener on the upper skin segment of a heavily loaded aircraft wing component. A tension and a compression joint design were evaluated. The tension joint design achieved approximately 1.0% strain in the carbon-epoxy rod-reinforced hat-section and failed in a metal fitting at 166% of the design ultimate load. The compression joint design failed in the carbon-epoxy rod-reinforced hat-section test specimen area at approximately 0.7% strain and at 110% of the design ultimate load. This strain level of 0.7% in compression is similar to the failure strain observed in previously reported carbon-epoxy rod-reinforced hat-section column tests.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2016-12-01
To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, a novel near-lossless compression algorithm based on adaptive spatial prediction is proposed in this paper for medical sequence images intended for possible diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain and a Lossless Hadamard Transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion during encryption when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simultaneously simplifies key distribution as a nonlinear encryption system. Simulation results verify the validity and reliability of the proposed algorithm with acceptable compression and security performance.
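A toy version of the measure-then-shift idea (using Gaussian measurement matrices and a plain logistic map as a stand-in for the paper's hyper-chaotic system; reconstruction of the image from the measurements is a separate CS solver step not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                        # image size and measurements per dimension
X = rng.integers(0, 256, size=(n, n)).astype(float)

Phi1 = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrices (keys)
Phi2 = rng.standard_normal((m, n)) / np.sqrt(m)
Y = Phi1 @ X @ Phi2.T                # 2D measurement: compression + first encryption

def logistic_sequence(x0, count, mu=3.99):
    """Toy chaotic sequence (stand-in for the paper's hyper-chaotic system)."""
    seq, x = [], x0
    for _ in range(count):
        x = mu * x * (1 - x)
        seq.append(x)
    return np.array(seq)

shifts = (logistic_sequence(0.37, Y.shape[0]) * Y.shape[1]).astype(int)
C = np.array([np.roll(row, s) for row, s in zip(Y, shifts)])   # cycle-shift re-encryption

# Decryption reverses the shifts with the same key-derived sequence.
Y_back = np.array([np.roll(row, -s) for row, s in zip(C, shifts)])
assert np.allclose(Y_back, Y)
```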
NASA Astrophysics Data System (ADS)
Shimada, M.; Yokoya, K.; Suwada, T.; Enomoto, A.
2007-06-01
The lattice and beam optics of the arc section of the KEK-ERL test facility, having an energy of 200 MeV, were optimized to efficiently suppress emittance growth based on a simulation using a particle-tracking method taking coherent synchrotron radiation effects into account. The lattice optimization in the arc section was performed under two conditions: a high-current mode with a bunch charge of 76.9 pC without bunch compression, and a short-bunch mode with bunch compression, producing a final bunch length of around 0.1 ps. The simulation results showed that, in the high-current mode, emittance growth was efficiently suppressed by keeping a root-mean-square (rms) bunch length of 1 ps at a bunch charge of 76.9 pC, and in the short-bunch mode, emittance growth was kept within permissible limits with a maximum allowable bunch charge of 23.1 pC at an rms bunch length of 0.1 ps.
Ultralight, scalable, and high-temperature-resilient ceramic nanofiber sponges.
Wang, Haolun; Zhang, Xuan; Wang, Ning; Li, Yan; Feng, Xue; Huang, Ya; Zhao, Chunsong; Liu, Zhenglian; Fang, Minghao; Ou, Gang; Gao, Huajian; Li, Xiaoyan; Wu, Hui
2017-06-01
Ultralight and resilient porous nanostructures have been fabricated in various material forms, including carbon, polymers, and metals. However, the development of ultralight and high-temperature resilient structures still remains extremely challenging. Ceramics exhibit good mechanical and chemical stability at high temperatures, but their brittleness and sensitivity to flaws significantly complicate the fabrication of resilient porous ceramic nanostructures. We report the manufacturing of large-scale, lightweight, high-temperature resilient, three-dimensional sponges based on a variety of oxide ceramic (for example, TiO2, ZrO2, yttria-stabilized ZrO2, and BaTiO3) nanofibers through an efficient solution blow-spinning process. The ceramic sponges consist of numerous tangled ceramic nanofibers, with densities varying from 8 to 40 mg/cm3. In situ uniaxial compression in a scanning electron microscope showed that the TiO2 nanofiber sponge exhibits high energy absorption (for example, dissipation of up to 29.6 mJ/cm3 in energy density at 50% strain) and recovers rapidly after compression in excess of 20% strain at both room temperature and 400°C. The sponge exhibits excellent resilience with residual strains of only ~1% at 800°C after 10 cycles of 10% compression strain and maintains good recoverability after compression at ~1300°C. We show that ceramic nanofiber sponges can serve multiple functions, such as elasticity-dependent electrical resistance, photocatalytic activity, and thermal insulation.
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
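A minimal generic block-transform sketch in the spirit of this description (8x8 DCT blocks with a single uniform quantization step; the JPEG standard's quantization tables, zig-zag scan, and entropy coding are omitted):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(block): return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def quantize_blocks(img, q_step=16, bs=8):
    """Transform 8x8 blocks, quantize coefficients with a uniform step, and
    reconstruct. A generic block-transform model, not the JPEG tables."""
    h, w = (d - d % bs for d in img.shape)
    out = np.empty((h, w))
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            coeffs = dct2(img[r:r+bs, c:c+bs].astype(float))
            q = np.round(coeffs / q_step)            # quantization: where the entropy drops
            out[r:r+bs, c:c+bs] = idct2(q * q_step)  # dequantize + inverse transform
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
rec = quantize_blocks(img, q_step=32)
print("distortion (MSE):", float(np.mean((img - rec) ** 2)))
```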
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
Distillation with Vapour Compression. An Undergraduate Experimental Facility.
ERIC Educational Resources Information Center
Pritchard, Colin
1986-01-01
Discusses the need to design distillation columns that are more energy efficient. Describes a "design and build" project completed by two college students aimed at demonstrating the principles of vapour compression distillation in a more energy efficient way. General design specifications are given, along with suggestions for teaching…
Divergence-Free SPH for Incompressible and Viscous Fluids.
Bender, Jan; Koschier, Dan
2017-03-01
In this paper we present a novel Smoothed Particle Hydrodynamics (SPH) method for the efficient and stable simulation of incompressible fluids. The most efficient SPH-based approaches enforce incompressibility at either the position or the velocity level. However, the continuity equation for incompressible flow demands maintaining a constant density and a divergence-free velocity field. We propose a combination of two novel implicit pressure solvers enforcing both a low volume compression as well as a divergence-free velocity field. While a compression-free fluid is essential for realistic physical behavior, a divergence-free velocity field drastically reduces the number of required solver iterations and increases the stability of the simulation significantly. Thanks to the improved stability, our method can handle larger time steps than previous approaches. This results in a substantial performance gain since the computationally expensive neighborhood search has to be performed less frequently. Moreover, we introduce a third optional implicit solver to simulate highly viscous fluids which seamlessly integrates into our solver framework. Our implicit viscosity solver produces realistic results while introducing almost no numerical damping. We demonstrate the efficiency, robustness and scalability of our method in a variety of complex simulations including scenarios with millions of turbulent particles or highly viscous materials.
Roll Compaction and Tableting of High Loaded Metformin Formulations Using Efficient Binders.
Arndt, Oscar-Rupert; Kleinebudde, Peter
2018-04-23
Metformin has poor tabletability and flowability. Therefore, metformin is typically wet granulated with a binder before tableting. To save production costs, it would be desirable to implement a roll compaction/dry granulation (RCDG) process for metformin instead of using wet granulation. In order to implement RCDG, the efficiency of dry binders is crucial to ensure a high drug load and suitable properties of dry granules and tablets. This study evaluates dry granules manufactured by RCDG and the subsequent tableting of high-metformin-content formulations (≥ 87.5%). Based on previous results, fine particle grades of hydroxypropylcellulose and copovidone in different fractions were compared as dry binders. The formulations are suitable for RCDG and tableting. Furthermore, the results can be connected to in-die and out-of-die compressibility analysis. The addition of 7% dry binder is a good compromise to generate sufficient mechanical properties on the one hand, and to save resources and ensure a high metformin content on the other hand. Hydroxypropylcellulose was more efficient in terms of granule size, tensile strength and friability. Three percent croscarmellose was added to reach the specifications of the US Pharmacopeia regarding dissolution. The final formulation has a metformin content of 87.5%. A loss in tabletability does not occur for granules compressed at different specific compaction forces, which indicates a robust tensile strength of tablets independent of the granulation process.
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N^2 different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be very easily implemented with VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curran, Scott; Briggs, Thomas E; Cho, Kukwon
2011-01-01
In-cylinder blending of gasoline and diesel to achieve Reactivity Controlled Compression Ignition (RCCI) has been shown to reduce NOx and PM emissions while maintaining or improving brake thermal efficiency as compared to conventional diesel combustion (CDC). The RCCI concept has an advantage over many advanced combustion strategies in that by varying both the percent of premixed gasoline and EGR rate, stable combustion can be extended over more of the light-duty drive cycle load range. Changing the percent premixed gasoline changes the fuel reactivity stratification in the cylinder, providing further control of combustion phasing and pressure rise rate than the use of EGR alone. This paper examines the combustion and emissions performance of a light-duty diesel engine using direct injected diesel fuel and port injected gasoline to carry out RCCI for steady-state engine conditions which are consistent with a light-duty drive cycle. A GM 1.9L four-cylinder engine with the stock compression ratio of 17.5:1, common rail diesel injection system, high-pressure EGR system and variable geometry turbocharger was modified to allow for port fuel injection with gasoline. Engine-out emissions, engine performance and combustion behavior for RCCI operation are compared against both CDC and a premixed charge compression ignition (PCCI) strategy which relies on high levels of EGR dilution. The effect of percent of premixed gasoline, EGR rate, boost level, intake mixture temperature, combustion phasing and pressure rise rate is investigated for RCCI combustion for the light-duty modal points. Engine-out emissions of NOx and PM were found to be considerably lower for RCCI operation as compared to CDC and PCCI, while HC and CO emissions were higher. Brake thermal efficiency was similar or higher for many of the modal conditions for RCCI operation. The emissions results are used to estimate hot-start FTP-75 emissions levels with RCCI and are compared against CDC and PCCI modes.
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data are grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
Efficient generation of ultra-intense few-cycle radially polarized laser pulses.
Carbajo, Sergio; Granados, Eduardo; Schimpf, Damian; Sell, Alexander; Hong, Kyung-Han; Moses, Jeffrey; Kärtner, Franz X
2014-04-15
We report on efficient generation of millijoule-level, kilohertz-repetition-rate few-cycle laser pulses with radial polarization by combining a gas-filled hollow-waveguide compression technique with a suitable polarization mode converter. Peak power levels >85 GW are routinely achieved, capable of reaching relativistic intensities >10^19 W/cm^2 with carrier-envelope-phase control, by employing readily accessible ultrafast high-energy laser technology.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias
2012-06-01
Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to traditional wireless sensor networks, which operate on one-dimensional data such as temperature or pressure values, WVSNs operate on two-dimensional data (images), which require higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence will be effective in reducing communication cost in WVSNs. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images with a computational complexity suitable for the platforms used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in WVSNs.
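As one trivial baseline among bi-level coders (run-length encoding of the flattened bitmap; an illustration only, not necessarily one of the six methods compared above), the sketch below shows how strongly a sparse bi-level image collapses into runs:

```python
import numpy as np

def rle_encode(bits):
    """Run-length encode a flattened bi-level image as (value, run length) pairs."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.05).astype(np.uint8)   # sparse bi-level image
runs = rle_encode(img.ravel().tolist())
print(len(runs), "runs instead of", img.size, "pixels")
```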
JPEG2000 Image Compression on Solar EUV Images
NASA Astrophysics Data System (ADS)
Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke
2017-01-01
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
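For reference, the two metrics mentioned can be computed as below; the SSIM shown is a single-window simplification of the mean structural similarity (MSSIM) index used in such studies, and PSNR follows its standard definition.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(ref, test, peak=255.0):
    """Single-window SSIM (a simplification of the windowed mean SSIM)."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(128, 128))
test = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)   # mildly distorted copy
print(psnr(ref, test), ssim_global(ref, test))
```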
New algorithms for processing time-series big EEG data within mobile health monitoring systems.
Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani; Harous, Saad; Navaz, Alramzana Nujum
2017-10-01
Recent advances in miniature biomedical sensors, mobile smartphones, wireless communications, and distributed computing technologies provide promising techniques for developing mobile health systems. Such systems are capable of reliably monitoring epileptic seizures, which are classified as a chronic disease. Three challenging issues are raised in this context with regard to the transformation, compression, storage, and visualization of the big data that results from continuous recording of epileptic seizures using mobile devices. In this paper, we address the above challenges by developing three new algorithms to process and analyze big electroencephalography data in a rigorous and efficient manner. The first algorithm is responsible for transforming the standard European Data Format (EDF) into the standard JavaScript Object Notation (JSON) and compressing the transformed JSON data to decrease the size and transfer time and to increase the network transfer rate. The second algorithm focuses on collecting and storing the compressed files generated by the transformation and compression algorithm. The collection process is performed on the fly after decompressing the files. The third algorithm provides relevant real-time interaction with signal data by prospective users. It particularly features the following capabilities: visualization of single or multiple signal channels on a smartphone device and querying of data segments. We tested and evaluated the effectiveness of our approach through a software architecture model implementing a mobile health system to monitor epileptic seizures. The experimental findings from 45 experiments are promising and efficiently satisfy the approach's objectives at the price of linearity. Moreover, the size of compressed JSON files and transfer times are reduced by 10% and 20%, respectively, while the average total time is remarkably reduced by 67% across all performed experiments. Our approach successfully develops efficient algorithms in terms of processing time, memory usage, and energy consumption while maintaining high scalability of the proposed solution. Our approach efficiently supports data partitioning and parallelism relying on the MapReduce platform, which can help in monitoring and automatic detection of epileptic seizures. Copyright © 2017 Elsevier B.V. All rights reserved.
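A minimal sketch of the first algorithm's idea, packaging channel samples as JSON and gzip-compressing the result (the authors' actual EDF parsing, field layout, and compression scheme are not reproduced; the channel labels and sampling rate here are made up):

```python
import gzip, json
import numpy as np

def channels_to_compressed_json(signals, labels, fs):
    """Package EEG channels as a JSON record and gzip the encoded bytes.
    Illustrative only; not the paper's exact transformation pipeline."""
    record = {
        "sampling_rate_hz": fs,
        "channels": [{"label": lab, "samples": sig.tolist()}
                     for lab, sig in zip(labels, signals)],
    }
    raw = json.dumps(record).encode("utf-8")
    return gzip.compress(raw), len(raw)

rng = np.random.default_rng(0)
signals = [np.round(rng.normal(0, 30, 5000)).astype(int) for _ in range(4)]
blob, raw_size = channels_to_compressed_json(signals, ["Fp1", "Fp2", "C3", "C4"], 256)
print(f"raw JSON: {raw_size} bytes, gzipped: {len(blob)} bytes")
```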
Computational fluid dynamics research
NASA Technical Reports Server (NTRS)
Chandra, Suresh; Jones, Kenneth; Hassan, Hassan; Mcrae, David Scott
1992-01-01
The focus of research in the computational fluid dynamics (CFD) area is two fold: (1) to develop new approaches for turbulence modeling so that high speed compressible flows can be studied for applications to entry and re-entry flows; and (2) to perform research to improve CFD algorithm accuracy and efficiency for high speed flows. Research activities, faculty and student participation, publications, and financial information are outlined.
NASA Astrophysics Data System (ADS)
Chen, Feng; Xu, Ai-Guo; Zhang, Guang-Cai; Gan, Yan-Biao; Cheng, Tao; Li, Ying-Jun
2009-10-01
We present a highly efficient lattice Boltzmann model for simulating compressible flows. This model is based on the combination of an appropriate finite difference scheme, a 16-discrete-velocity model [Kataoka and Tsutahara, Phys. Rev. E 69 (2004) 035701(R)] and reasonable dispersion and dissipation terms. The dispersion term effectively reduces the oscillation at discontinuities and enhances numerical precision. The dissipation term makes the new model more easily satisfy the von Neumann stability condition. This model works for both high-speed and low-speed flows with arbitrary specific-heat ratio. With the new model, simulation results for well-known benchmark problems show high accuracy compared with analytic or experimental ones. The benchmark tests used include (i) shock tubes such as the Sod, Lax, Sjogreen, and Colella explosion wave problems and the collision of two strong shocks, (ii) regular and Mach shock reflections, and (iii) shock wave reaction on cylindrical bubble problems. With a more realistic equation of state or free-energy functional, the new model has the potential to study the complex process of shock wave reaction on porous materials.
Dempsey, Adam B.; Curran, Scott J.; Wagner, Robert M.
2016-01-14
Many research studies have shown that low temperature combustion in compression ignition engines has the ability to yield ultra-low NOx and soot emissions while maintaining high thermal efficiency. To achieve low temperature combustion, sufficient mixing time between the fuel and air in a globally dilute environment is required, thereby avoiding fuel-rich regions and reducing peak combustion temperatures, which significantly reduces soot and NOx formation, respectively. It has been demonstrated that achieving low temperature combustion with diesel fuel over a wide range of conditions is difficult because of its properties, namely, low volatility and high chemical reactivity. On the contrary, gasoline has a high volatility and low chemical reactivity, meaning it is easier to achieve the amount of premixing time required prior to autoignition to achieve low temperature combustion. In order to achieve low temperature combustion while meeting other constraints, such as low pressure rise rates and maintaining control over the timing of combustion, in-cylinder fuel stratification has been widely investigated for gasoline low temperature combustion engines. The level of fuel stratification is, in reality, a continuum ranging from fully premixed (i.e. homogeneous charge of fuel and air) to heavily stratified, heterogeneous operation, such as diesel combustion. However, to illustrate the impact of fuel stratification on gasoline compression ignition, the authors have identified three representative operating strategies: partial, moderate, and heavy fuel stratification. Thus, this article provides an overview and perspective of the current research efforts to develop engine operating strategies for achieving gasoline low temperature combustion in a compression ignition engine via fuel stratification. In this paper, computational fluid dynamics modeling of the in-cylinder processes during the closed valve portion of the cycle was used to illustrate the opportunities and challenges associated with the various fuel stratification levels.
Compression of the Global Land 1-km AVHRR dataset
Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.
1996-01-01
Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions gives a marked improvement over these popular methods.
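The quadtree idea used for the masked (non-land) regions can be sketched as follows: uniform square regions collapse to a single symbol and only mixed regions are split (an illustration of the principle, not the article's exact encoding).

```python
def quadtree_encode(mask, r0, c0, size):
    """Recursively encode a square binary mask region:
    '0' = all zeros, '1' = all ones, 'S' = split into four quadrants."""
    block = [mask[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)]
    if all(v == 0 for v in block):
        return "0"
    if all(v == 1 for v in block):
        return "1"
    half = size // 2
    return "S" + "".join(quadtree_encode(mask, r0 + dr, c0 + dc, half)
                         for dr in (0, half) for dc in (0, half))

# An 8x8 land/water mask: large uniform areas collapse to single symbols.
mask = [[1 if r < 4 else 0 for c in range(8)] for r in range(8)]
code = quadtree_encode(mask, 0, 0, 8)
print(code, "-", len(code), "symbols for", 8 * 8, "pixels")
```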
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, to support fast implementation on embedded processors with small memory models.
Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa
NASA Astrophysics Data System (ADS)
Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.
2017-06-01
Multiple reverberation compression can achieve higher pressure and higher temperature at lower entropy. It provides an important validation for elaborate, wide-ranging planetary models and simulates the inertial confinement fusion capsule implosion process. In this work, we have investigated the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initially dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium was measured in the pressure-density-temperature (P-ρ-T) range of 1-150 GPa, 0.1-1.1 g/cm³, and 4600-24,000 K. The optical radiation emanating from the WDM helium was recorded, and the particle velocity profiles detected at the sample/window interface were obtained successfully for up to 10 compressions. The optical radiation results imply that dense He becomes rather opaque after the 2nd compression, at a density of about 0.3 g/cm³ and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed using the particle velocity measurements. The multiple compression technique efficiently enhances the density and the compressibility, and our multiple compression ratios (η_i = ρ_i/ρ_0, i = 1-10) of helium are greatly improved, ranging from 3.5 to 43 relative to the initial precompressed density (ρ_0). The relative compression ratio (η_i' = ρ_i/ρ_{i-1}) increases with pressure in the lower-density regime and decreases in the higher-density regime, with a turning point at the 3rd and 4th compression states under the different loading conditions. This nonmonotonic evolution of the compression is controlled by two factors: the excitation of internal degrees of freedom increases the compressibility, while the repulsive interactions between particles decrease the compressibility at the onset of electron excitation and ionization. In the P-ρ-T diagram combining the experiments and the calculations, our multiple compression states from insulating to semiconducting fluid (from transparent to opaque fluid) are illustrated. Our results provide an elaborate validation of EOS models and have applications for planetary and stellar opaque atmospheres.
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a definite amount of variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
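A minimal sketch of the PCA step with a retained-energy (error-bound) criterion is shown below; the synthetic data, the error bound, and the omission of the clustering and cluster-head selection stages are all simplifications of the model described above.

```python
# Keep just enough principal components so the discarded energy stays
# within the error bound; the cluster head would transmit scores + basis.
import numpy as np

def pca_compress(X, max_rel_error=0.05):
    """Compress rows of X (sensors x samples), bounding relative error energy."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cum, 1.0 - max_rel_error) + 1)
    scores = U[:, :k] * s[:k]            # compressed representation
    return scores, Vt[:k], mean

def pca_decompress(scores, components, mean):
    return scores @ components + mean

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100)).cumsum(axis=1)     # correlated sensor traces
scores, comps, mean = pca_compress(X, max_rel_error=0.05)
Xhat = pca_decompress(scores, comps, mean)
print(scores.size + comps.size, "values stored vs", X.size,
      "; MSE =", np.mean((X - Xhat) ** 2))
```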
Disk-based compression of data from genome sequencing.
Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz
2015-05-01
High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More promising solutions to this problem are disk based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose ORCOM (overlapping reads compression with minimizers), a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.
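A toy sketch of the minimizer idea follows, under the simplifying assumption that a read's minimizer is its lexicographically smallest k-mer; the k value, reads, and bucketing are illustrative and do not reproduce ORCOM's actual parameters or encoder.

```python
# Reads sharing a minimizer are very likely to overlap, so grouping them
# puts redundant reads next to each other where they compress well.
from collections import defaultdict

def minimizer(read, k=8):
    """Return the lexicographically smallest k-mer of the read."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

reads = [
    "ACGTACGTGGATCCTTAGC",
    "CGTACGTGGATCCTTAGCA",   # overlaps the first read
    "TTTTGGGGCCCCAAAATTT",
]

buckets = defaultdict(list)
for r in reads:
    buckets[minimizer(r)].append(r)

for m, group in buckets.items():
    print(m, "->", len(group), "read(s)")
```

Because buckets are written to disk and processed independently, the whole collection never has to reside in main memory, which is the point made above.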
Improved heat recovery and high-temperature clean-up for coal-gas fired combustion turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barthelemy, N.M.; Lynn, S.
1991-07-01
This study investigates the performance of an Improved Heat Recovery Method (IHRM) applied to a coal-gas fired power-generating system using high-temperature clean-up. The heat recovery process has been described by Higdon and Lynn (1990). The IHRM is an integrated heat-recovery network that significantly increases the thermal efficiency of a gas turbine in the generation of electric power. Its main feature is to recover both low- and high-temperature heat reclaimed from various gas streams by evaporating heated water into the combustion air in an air saturation unit. This unit is a packed column in which compressed air flows countercurrently to the heated water prior to being sent to the combustor, where it is mixed with coal-gas and burned. The high water content of the air stream thus obtained reduces the amount of excess air required to control the firing temperature of the combustor, which in turn lowers the total work of compression and results in a high thermal efficiency. Three designs of the IHRM were developed to accommodate three different gasifying processes. The performances of those designs were evaluated and compared using computer simulations. The efficiencies obtained with the IHRM are substantially higher than those yielded by other heat-recovery technologies using the same gasifying processes. The study also revealed that the IHRM compares advantageously to most advanced power-generation technologies currently available or tested commercially. 13 refs., 34 figs., 10 tabs.
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for the random variables through linear mappings such that the representation of the quantity of interest in the new basis functions, associated with the new random variables, is sparser. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
Lucachick, Glenn; Curran, Scott; Storey, John Morse; ...
2016-03-10
Our work explores the volatility of particles produced from two diesel low temperature combustion (LTC) modes proposed for high-efficiency compression ignition engines. It also explores mechanisms of particulate formation and growth upon dilution in the near-tailpipe environment. The number distributions of exhaust particles from low- and mid-load dual-fuel reactivity controlled compression ignition (RCCI) and single-fuel premixed charge compression ignition (PPCI) modes were experimentally studied over a gradient of dilution temperature. Particle volatility of select particle diameters was investigated using volatility tandem differential mobility analysis (V-TDMA). Evaporation rates for exhaust particles were compared with V-TDMA results for candidate pure n-alkanes to identify species with similar volatility characteristics. The results show that LTC particles are mostly comprised of material with volatility similar to engine oil alkanes. V-TDMA results were used as inputs to an aerosol condensation and evaporation model to support the finding that smaller particles in the distribution are comprised of lower volatility material than large particles under primary dilution conditions. Although the results show that saturation levels are high enough to drive condensation of alkanes onto existing particles under the dilution conditions investigated, they are not high enough to drive nucleation of new particles. We conclude that observed particles from LTC operation must grow from low concentrations of highly non-volatile compounds present in the exhaust.
NASA Astrophysics Data System (ADS)
Weike, Pang; Wenju, Lin; Qilin, Pan; Wenye, Lin; Qunte, Dai; Luwei, Yang; Zhentao, Zhang
2014-01-01
In this paper, a heat pump based on mechanical vapor recompression (MVR), driven by a centrifugal fan, is tested, and it shows some special characteristics when it works together with a falling film evaporator. First, an analysis of the fan's suction and discharge parameters at steady state, such as pressure and temperature, indicates that wet compression probably occurs during vapor compression. As a result, the superheat that would appear after saturated vapor is compressed is eliminated, which reduces the discharge temperature of the system; the entrained droplets boil away during compression and absorb the superheat as latent heat. Meanwhile, the droplets in the suction vapor add to the compressed vapor, which increases the heat delivered by the MVR heat pump. Second, auxiliary electric heating can adjust and stabilize the operating pressure and temperature of the MVR heat pump; as the evaporation temperature rises, the heat balance is broken and the supplementary heat must increase. Third, the performance of an MVR heat pump is affected by the balance between falling film flow and evaporation, which influences heat transfer. Two parameters representing the performance, electric power consumption and produced water capacity, were measured under practical operating conditions. From the theoretical compression work in the ideal case, obtained by calculation, and the fan's input power measured during operation, the adiabatic efficiency (η_ad) of the centrifugal fan applied in the MVR heat pump is calculated, and the practical SMER and COP of the MVR heat pump are found to correlate with it. Finally, based on the water production in theory and in practice, the displacement efficiency (η_v) of the centrifugal fan during vapor compression is obtained, providing a reference for matching a fan to an MVR heat pump. These results are helpful for the research and development of MVR heat pumps, and for checking electric power consumption in practical operation in light of the electric motor efficiency (η_e) and η_ad.
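The adiabatic-efficiency estimate described above can be illustrated with a minimal sketch based on the standard isentropic-work relation; the steam properties and operating values below are assumptions for illustration only, not measurements from this study.

```python
# eta_ad = ideal (isentropic) compression power / measured fan input power.
def isentropic_work(m_dot, T_suction, p_ratio, cp=1.89e3, gamma=1.33):
    """Ideal compression power for vapor treated as an ideal gas [W]."""
    return m_dot * cp * T_suction * (p_ratio ** ((gamma - 1.0) / gamma) - 1.0)

m_dot = 0.05        # kg/s of vapor (assumed)
T_suction = 373.0   # K, saturated vapor near 1 bar (assumed)
p_ratio = 1.2       # discharge/suction pressure ratio of the fan (assumed)
P_input = 2.5e3     # W, measured electrical input to the fan (assumed)

eta_ad = isentropic_work(m_dot, T_suction, p_ratio) / P_input
print(f"adiabatic efficiency ~ {eta_ad:.2f}")
```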
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available on the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate widespread adoption of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
Lossless compression algorithm for REBL direct-write e-beam lithography system
NASA Astrophysics Data System (ADS)
Cramer, George; Liu, Hsin-I.; Zakhor, Avideh
2010-03-01
Future lithography systems must produce microchips with smaller feature sizes, while maintaining throughputs comparable to those of today's optical lithography systems. This places stringent constraints on the effective data throughput of any maskless lithography system. In recent years, we have developed a datapath architecture for direct-write lithography systems, and have shown that compression plays a key role in reducing the throughput requirements of such systems. Our approach integrates a low-complexity hardware-based decoder with the writers, in order to decompress a compressed data layer in real time on the fly. In doing so, we have developed a spectrum of lossless compression algorithms for integrated circuit layout data that provide a tradeoff between compression efficiency and hardware complexity, the latest of which is Block Golomb Context Copy Coding (Block GC3). In this paper, we present a modified version of Block GC3 called Block RGC3, specifically tailored to the REBL direct-write E-beam lithography system. Two characteristic features of the REBL system are a rotary stage resulting in arbitrarily rotated layout imagery, and E-beam corrections applied prior to writing the data, both of which present significant challenges to lossless compression algorithms. Together, these effects reduce the effectiveness of both the copy and predict compression methods within Block GC3. Similar to Block GC3, our newly proposed technique, Block RGC3, divides the image into a grid of two-dimensional "blocks" of pixels, each of which copies from a specified location in a history buffer of recently decoded pixels. However, in Block RGC3 the number of possible copy locations is significantly increased, so as to allow repetition to be discovered along any angle of orientation, rather than only horizontally or vertically. Also, by copying smaller groups of pixels at a time, repetition in layout patterns is easier to find and exploit. As a side effect, this increases the total number of copy locations to transmit; this is combated with an extra region-growing step, which enforces spatial coherence among neighboring copy locations, thereby improving compression efficiency. We characterize the performance of Block RGC3 in terms of compression efficiency and encoding complexity on a number of rotated Metal 1, Poly, and Via layouts at various angles, and show that Block RGC3 provides higher compression efficiency than existing lossless compression algorithms, including JPEG-LS, ZIP, BZIP2, and Block GC3.
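A toy sketch of the block-copy search at the heart of this family of codecs is given below: each block is matched against candidate positions in a history buffer of already-decoded pixels. The candidate set, block size, and error metric are illustrative assumptions and do not reproduce Block RGC3's actual copy-region encoding or region-growing step.

```python
# Pick the copy location whose history block disagrees with the current
# block in the fewest pixels (those residual pixels would be corrected).
import numpy as np

def best_copy(block, history, candidates):
    h, w = block.shape
    best, best_err = None, None
    for dy, dx in candidates:
        ref = history[dy:dy + h, dx:dx + w]
        if ref.shape != block.shape:
            continue
        err = np.count_nonzero(ref != block)
        if best_err is None or err < best_err:
            best, best_err = (dy, dx), err
    return best, best_err

rng = np.random.default_rng(1)
history = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
block = history[10:14, 20:24].copy()            # a repeated 4x4 pattern
candidates = [(y, x) for y in range(0, 60, 2) for x in range(0, 60, 2)]
print(best_copy(block, history, candidates))    # expect an exact match (error 0)
```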
Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi
2008-03-31
With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences together with their secondary structures. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) to present a robust and effective way for RNA structural data compression; (2) to design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as GenCompress, WinRAR and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. A universal algorithm for the compression of RNA secondary structure, as well as the evaluation of its informational complexity, is discussed in this paper. We have developed RNACompress as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measurement of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules.
Digital holographic image fusion for a larger size object using compressive sensing
NASA Astrophysics Data System (ADS)
Tian, Qiuhong; Yan, Liping; Chen, Benyong; Yao, Jiabao; Zhang, Shihua
2017-05-01
Digital holographic image fusion for a larger-size object using compressive sensing is proposed. In this method, the high frequency component of the digital hologram under the discrete wavelet transform is represented sparsely by using compressive sensing so that the data redundancy of digital holographic recording can be resolved validly, the low frequency component is retained totally to ensure the image quality, and multiple reconstructed images with different clear parts corresponding to a laser spot size are fused to realize a high quality reconstructed image of a larger-size object. In addition, a filter combining high-pass and low-pass filters is designed to remove the zero-order term from a digital hologram effectively. A digital holographic experimental setup based on off-axis Fresnel digital holography was constructed, and feasibility and comparison experiments were carried out. The fused image was evaluated by using the Tamura texture features. The experimental results demonstrated that the proposed method can improve the processing efficiency and visual characteristics of the fused image and enlarge the size of the measured object effectively.
Compressive light field imaging
NASA Astrophysics Data System (ADS)
Ashok, Amit; Neifeld, Mark A.
2010-04-01
Light field imagers such as the plenoptic and the integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and therefore suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time than a conventional light field imager in order to achieve an equivalent light field reconstruction quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciatti, Stephen A.
The history, present and future of the compression ignition engine is a fascinating story that spans over 100 years, from the time of Rudolf Diesel to the highly regulated and computerized engines of the 21st Century. The development of these engines provided inexpensive, reliable and high power density machines to allow transportation, construction and farming to be more productive with less human effort than in any previous period of human history. The concept that fuels could be consumed efficiently and effectively with only the ignition of pressurized and heated air was a significant departure from the previous coal-burning architecture of the 1800s. Today, the compression ignition engine is undergoing yet another revolution. The equipment that provides transport, builds roads and infrastructure, and harvests the food we eat needs to meet more stringent requirements than ever before. How successfully 21st Century engineers are able to make compression ignition engine technology meet these demands will be of major influence in assisting developing nations (with over 50% of the world's population) achieve the economic and environmental goals they seek.
Research on compression performance of ultrahigh-definition videos
NASA Astrophysics Data System (ADS)
Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di
2017-11-01
With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the data volume; storage and transmission cannot be properly addressed simply by expanding hard-disk capacity and upgrading transmission devices. Making full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and of I-frames. Then, using this approach together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, with super-resolution reconstruction technology, the reconstructed video quality is further improved. Experiments show that with the proposed compression method for a single image (I-frame) and for video sequences, the performance is superior to that of HEVC in a low bit rate environment.
Lossless compression techniques for maskless lithography data
NASA Astrophysics Data System (ADS)
Dai, Vito; Zakhor, Avideh
2002-07-01
Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
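As background for the decoder design discussed above, the following is a minimal sketch of an LZ77-style decoding loop, the kind of low-complexity operation a decoder-writer chip must perform; the token format here is simplified for illustration and is not the actual ZIP/DEFLATE bitstream the paper targets.

```python
# Each token is either a literal byte or a (distance, length) back-reference
# into the already-decoded output; copies are done byte-by-byte so that
# overlapping references (distance < length) work correctly.
def lz77_decode(tokens):
    out = bytearray()
    for t in tokens:
        if t[0] == 'lit':
            out.append(t[1])
        else:
            _, dist, length = t
            for _ in range(length):
                out.append(out[-dist])
    return bytes(out)

tokens = [('lit', ord('a')), ('lit', ord('b')), ('copy', 2, 6), ('lit', ord('!'))]
print(lz77_decode(tokens))   # b'abababab!'
```

The decoder's working state is just the output history buffer, which is why an LZ77-based scheme maps well onto a small on-chip memory.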
NASA Astrophysics Data System (ADS)
Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping
2014-10-01
Air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution in defect detection of advanced composites used in aerospace and aviation industries. However, the giant mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low, and leads to a poor signal-to-noise ratio (SNR) of the received signal. Signal-processing techniques are therefore highly valuable in non-destructive testing. This paper presents a wavelet filtering and phase-coded pulse compression hybrid method to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and the pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For the purpose of reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to acquire a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength. The proposed method appears to be a very promising tool for evaluating the integrity of composite materials with high ultrasound attenuation using ACUT.
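A small sketch of the pulse-compression step follows: cross-correlating a noisy trace with a Barker-coded reference. The 13-bit Barker code itself is standard; the noise level, echo amplitude, and echo position are illustrative, and the wavelet filtering stage is omitted.

```python
# Matched filtering / pulse compression: correlate the received trace with
# the known Barker-coded excitation to concentrate the echo energy.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

rng = np.random.default_rng(0)
trace = rng.normal(scale=1.0, size=400)          # noisy received signal
trace[150:150 + 13] += 2.0 * barker13            # buried coded echo (assumed)

compressed = np.correlate(trace, barker13, mode="valid")
peak = int(np.argmax(np.abs(compressed)))
print("echo located near sample", peak)          # expected near 150
```

The correlation gain grows with code length, which is why longer Barker codes improve the main-to-side-lobe ratio discussed above.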
Mechanical Properties and Eco-Efficiency of Steel Fiber Reinforced Alkali-Activated Slag Concrete
Kim, Sun-Woo; Jang, Seok-Joon; Kang, Dae-Hyun; Ahn, Kyung-Lim; Yun, Hyun-Do
2015-01-01
Conventional concrete production that uses ordinary Portland cement (OPC) as a binder seems unsustainable due to its high energy consumption, natural resource exhaustion and huge carbon dioxide (CO2) emissions. To transform the conventional process of concrete production into a more sustainable process, the replacement of energy-intensive OPC with new binders such as fly ash and alkali-activated slag (AAS) from available industrial by-products has been recognized as an alternative. This paper investigates the effect of curing conditions and steel fiber inclusion on the compressive and flexural performance of AAS concrete with a specified compressive strength of 40 MPa, to evaluate the feasibility of AAS concrete as an alternative to normal concrete for CO2 emission reduction in the concrete industry. Their performances are compared with those of reference concrete produced using OPC. The eco-efficiency of AAS use for concrete production was also evaluated by binder intensity and CO2 intensity based on the test results and literature data. Test results show that it is possible to produce AAS concrete with compressive and flexural performances comparable to conventional concrete. Wet-curing and steel fiber inclusion improve the mechanical performance of AAS concrete. Also, the utilization of AAS as a sustainable binder can lead to significant CO2 emissions reduction and resource and energy conservation in the concrete industry. PMID:28793639
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
NASA Astrophysics Data System (ADS)
Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen
2018-04-01
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, the required processing power is unlikely to be achieved simply by aggregating emerging heterogeneous many-core platforms consisting of CPU, field-programmable gate array (FPGA), and graphics processor cores, which are themselves constrained by power and performance. To address these challenges, we performed a synthetic aperture radar case study for automatic target recognition (ATR) using deep neural networks (DNNs). These DNN models are typically trained on GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations; as a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. The proposed compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
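A minimal sketch of the two compression techniques named above, magnitude pruning and uniform 8-bit weight quantization, applied to a toy weight matrix; the layer size, sparsity target, and quantization scheme are assumptions for illustration and do not reproduce the paper's framework.

```python
# Prune the smallest-magnitude weights, then quantize the survivors to int8.
import numpy as np

def prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
    thresh = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < thresh, 0.0, weights)

def quantize_int8(weights):
    """Symmetric uniform quantization to int8 plus a single float scale."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256))
W_pruned = prune(W, sparsity=0.8)
W_q, scale = quantize_int8(W_pruned)
W_rec = W_q.astype(np.float32) * scale
print("fraction of nonzero weights:", np.count_nonzero(W_pruned) / W.size)
print("max abs quantization error:", np.max(np.abs(W_pruned - W_rec)))
```

Sparse int8 weights shrink the memory footprint and map onto the integer arithmetic units common on low-power processors, which is the motivation stated above.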
A finite element approach for solution of the 3D Euler equations
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.
1986-01-01
Prediction of thermal deformations and stresses is of prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate the development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended to three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.
Lasche, G.P.
1983-09-29
The invention is a laser or particle-beam-driven fusion reactor system which takes maximum advantage of both the very short pulsed nature of the energy release of inertial confinement fusion (ICF) and the very small volumes within which the thermonuclear burn takes place. The pulsed nature of ICF permits dynamic direct energy conversion schemes such as magnetohydrodynamic (MHD) generation and magnetic flux compression; the small volumes permit very compact blanket geometries. By fully exploiting these characteristics of ICF, it is possible to design a fusion reactor with exceptionally high power density, high net electric efficiency, and low neutron-induced radioactivity. The invention includes a compact blanket design and method and apparatus for obtaining energy utilizing the compact blanket.
NASA Astrophysics Data System (ADS)
Ku, Nai-Lun; Chen, Yi-Yung; Hsieh, Wei-Che; Whang, Allen Jong-Woei
2012-02-01
Due to the energy crisis, the principle of green energy has gained popularity, leading to increasing interest in renewable energy such as solar energy. Collecting sunlight for indoor illumination has therefore become our ultimate target. With environmental awareness increasing, we use natural light as the light source and have devoted our efforts to developing a solar collecting system. The natural light guiding system includes three parts: collecting, transmitting and lighting. The idea behind our solar collecting system design is to combine buildings with a combination of collecting modules, so it can be used anywhere sunlight impinges directly on buildings equipped with collecting elements. While collecting the sunlight with high efficiency, we can transmit it indoors over a short distance through a light pipe to where the light is needed. We propose a novel design including a disk-type collective lens module. With this design, the incident and exit light are kept parallel and the beam is compressed; every output beam in the proposed optical structure is compressed. In this way, we increase the light-compression ratio, obtain better efficiency, and make the energy distribution more uniform for indoor illumination. Defining a key performance index (KPI) for light density as lm/mm², simulation results show that the proposed concentrator reaches 40,000,000 KPI, much better than the 800,000 KPI measured for traditional concentrators.
Parallel design patterns for a low-power, software-defined compressed video encoder
NASA Astrophysics Data System (ADS)
Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar
2011-06-01
Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications, such as 10-bit sample depth or 4:2:2 chroma format, often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients be less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for representing the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated, the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
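A simplified sketch of the coding steps described above is given below, assuming the PyWavelets (pywt) package is available: a DWT, thresholding each coefficient band by an energy-packing target, and a binary significance map compressed with run-length encoding. The synthetic signal, wavelet choice, and energy-packing target are illustrative; the paper's preprocessing and exact bit allocation are omitted.

```python
import numpy as np
import pywt

def threshold_by_epe(coeffs, epe=0.99):
    """Keep the largest-magnitude coefficients that hold `epe` of the band energy."""
    c = np.sort(np.abs(coeffs))[::-1]
    cum = np.cumsum(c ** 2) / np.sum(c ** 2)
    t = c[np.searchsorted(cum, epe)]
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

def run_lengths(bits):
    """Run-length encode a binary significance map as (value, count) pairs."""
    runs, count, prev = [], 0, bits[0]
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0, 30 * np.pi, 2048)) + 0.05 * rng.normal(size=2048)
bands = pywt.wavedec(ecg, "db4", level=5)
kept = [threshold_by_epe(b, epe=0.99) for b in bands]
sig_map = np.concatenate([(b != 0).astype(np.uint8) for b in kept])
recon = pywt.waverec(kept, "db4")[:ecg.size]
print("significant coefficients:", int(sig_map.sum()), "of", sig_map.size)
print("RLE runs in significance map:", len(run_lengths(sig_map)))
```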
GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.
Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun
2017-12-28
The dramatic development of DNA sequencing technology is generating real big data, demanding more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and more cheaply, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage. This paper presents GTZ, a compression and transmission tool optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool to be used in the cloud computing era, it is capable of saving compressed data locally or transmitting data directly to the cloud by choice. We evaluated the performance of GTZ on some diverse FASTQ benchmarks. Results show that in most cases, it outperforms many other tools in terms of compression ratio, speed and stability. GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission to the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at: https://github.com/Genetalks/gtz .
NASA Astrophysics Data System (ADS)
Fujioka, Shinsuke; Arikawa, Yasunobu; Kojima, Sadaoki; Johzaki, Tomoyuki; Nagatomo, Hideo; Sawada, Hiroshi; Lee, Seung Ho; Shiroto, Takashi; Ohnishi, Naofumi; Morace, Alessio; Vaisseau, Xavier; Sakata, Shohei; Abe, Yuki; Matsuo, Kazuki; Farley Law, King Fai; Tosaki, Shota; Yogo, Akifumi; Shigemori, Keisuke; Hironaka, Yoichiro; Zhang, Zhe; Sunahara, Atsushi; Ozaki, Tetsuo; Sakagami, Hitoshi; Mima, Kunioki; Fujimoto, Yasushi; Yamanoi, Kohei; Norimatsu, Takayoshi; Tokita, Shigeki; Nakata, Yoshiki; Kawanaka, Junji; Jitsuno, Takahisa; Miyanaga, Noriaki; Nakai, Mitsuo; Nishimura, Hiroaki; Shiraga, Hiroyuki; Kondo, Kotaro; Bailly-Grandvaux, Mathieu; Bellei, Claudio; Santos, João Jorge; Azechi, Hiroshi
2016-05-01
A petawatt laser for fast ignition experiments (LFEX) laser system [N. Miyanaga et al., J. Phys. IV France 133, 81 (2006)], which is currently capable of delivering 2 kJ in a 1.5 ps pulse using 4 laser beams, has been constructed beside the GEKKO-XII laser facility for demonstrating efficient fast heating of a dense plasma up to the ignition temperature under the auspices of the Fast Ignition Realization EXperiment (FIREX) project [H. Azechi et al., Nucl. Fusion 49, 104024 (2009)]. In the FIREX experiment, a cone is attached to a spherical target containing a fuel to prevent a corona plasma from entering the path of the intense heating LFEX laser beams. The LFEX laser beams are focused at the tip of the cone to generate a relativistic electron beam (REB), which heats a dense fuel core generated by compression of a spherical deuterized plastic target induced by the GEKKO-XII laser beams. Recent studies indicate that the current heating efficiency is only 0.4%, and three requirements to achieve higher efficiency of the fast ignition (FI) scheme with the current GEKKO and LFEX systems have been identified: (i) reduction of the high energy tail of the REB; (ii) formation of a fuel core with high areal density using a limited number (twelve) of GEKKO-XII laser beams as well as a limited energy (4 kJ of 0.53-μm light in a 1.3 ns pulse); (iii) guiding and focusing of the REB to the fuel core. Laser-plasma interactions in a long-scale plasma generate electrons that are too energetic to efficiently heat the fuel core. Three actions were taken to meet the first requirement. First, the intensity contrast of the foot pulses to the main pulses of the LFEX was improved to >10⁹. Second, a 5.5-mm-long cone was introduced to reduce pre-heating of the inner cone wall caused by illumination of the unconverted 1.053-μm light of the implosion beams (GEKKO-XII). Third, the outside of the cone wall was coated with a 40-μm plastic layer to protect it from the pressure caused by the imploding plasma. Following the above improvements, conversion of 13% of the LFEX laser energy to a low energy portion of the REB, whose slope temperature is 0.7 MeV, close to the ponderomotive scaling value, was achieved. To meet the second requirement, the compression of a solid spherical ball with a diameter of 200 μm to form a dense core with an areal density of ~0.07 g/cm² was induced by a laser-driven spherically converging shock wave. Converging shock compression is more hydrodynamically stable compared to shell implosion, although a hot spot cannot be generated with a solid ball target. Solid ball compression is preferable also for compressing an external magnetic field to collimate the REB to the fuel core, due to the relatively small magnetic Reynolds number of the shock compressed region. To meet the third requirement, we have generated a strong kilo-tesla magnetic field using a laser-driven capacitor-coil target. The strength and time history of the magnetic field were characterized with proton deflectometry and a B-dot probe. Guidance of the REB using a 0.6-kT field in a planar geometry has been demonstrated at the LULI 2000 laser facility. In a realistic FI scenario, a magnetic mirror is formed between the REB generation point and the fuel core. The effects of the strong magnetic field on not only REB transport but also plasma compression were studied using numerical simulations.
According to the transport calculations, the heating efficiency can be improved from 0.4% to 4% with the GEKKO and LFEX laser system by meeting the three requirements described above, and is scalable to a heating efficiency of 10% by increasing the areal density of the fuel core.
Leone, Thomas G; Anderson, James E; Davis, Richard S; Iqbal, Asim; Reese, Ronald A; Shelby, Michael H; Studzinski, William M
2015-09-15
Light-duty vehicles (LDVs) in the United States and elsewhere are required to meet increasingly challenging regulations on fuel economy and greenhouse gas (GHG) emissions as well as criteria pollutant emissions. New vehicle trends to improve efficiency include higher compression ratio, downsizing, turbocharging, downspeeding, and hybridization, each involving greater operation of spark-ignited (SI) engines under higher-load, knock-limited conditions. Higher octane ratings for regular-grade gasoline (with greater knock resistance) are an enabler for these technologies. This literature review discusses both fuel and engine factors affecting knock resistance and their contribution to higher engine efficiency and lower tailpipe CO2 emissions. Increasing compression ratios for future SI engines would be the primary response to a significant increase in fuel octane ratings. Existing LDVs would see more advanced spark timing and more efficient combustion phasing. Higher ethanol content is one available option for increasing the octane ratings of gasoline and would provide additional engine efficiency benefits for part and full load operation. An empirical calculation method is provided that allows estimation of expected vehicle efficiency, volumetric fuel economy, and CO2 emission benefits for future LDVs through higher compression ratios for different assumptions on fuel properties and engine types. Accurate "tank-to-wheel" estimates of this type are necessary for "well-to-wheel" analyses of increased gasoline octane ratings in the context of light duty vehicle transportation.
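The paper's own empirical estimation method is not reproduced here; as a rough illustration of the underlying trend it exploits, the ideal Otto-cycle relation links thermal efficiency to compression ratio (the ratio of specific heats, γ = 1.35, is an assumed representative value, and real engine gains are considerably smaller than this ideal bound).

```python
# Ideal Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma).
def otto_efficiency(r, gamma=1.35):
    return 1.0 - r ** (1.0 - gamma)

for r in (10.0, 11.0, 12.0, 13.0):
    print(f"compression ratio {r:4.1f}: ideal efficiency {otto_efficiency(r):.3f}")
```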
Mobile satellite communications technology - A summary of NASA activities
NASA Technical Reports Server (NTRS)
Dutzi, E. J.; Knouse, G. H.
1986-01-01
Studies in recent years indicate that future high-capacity mobile satellite systems are viable only if certain high-risk enabling technologies are developed. Accordingly, NASA has structured an advanced technology development program aimed at efficient utilization of orbit, spectrum, and power. Over the last two years, studies have concentrated on developing concepts and identifying cost drivers and other issues associated with the major technical areas of emphasis: vehicle antennas, speech compression, bandwidth-efficient digital modems, network architecture, mobile satellite channel characterization, and selected space segment technology. The program is now entering the next phase - breadboarding, development, and field experimentation.
On the Suitability of Suffix Arrays for Lempel-Ziv Data Compression
NASA Astrophysics Data System (ADS)
Ferreira, Artur J.; Oliveira, Arlindo L.; Figueiredo, Mário A. T.
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
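A toy sketch of the underlying idea, building a suffix array over the already-seen text and using it to find the longest previous match for LZ encoding; the naive O(n² log n) construction and the linear scan over candidates are for clarity only and do not reproduce the search algorithm proposed in the paper (a real encoder would binary-search the sorted suffixes).

```python
# The suffix array is just the list of suffix start positions in sorted order;
# its memory footprint is fixed (one integer per text position).
def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])

def longest_match(s, sa, pos):
    """Longest prefix of s[pos:] that also occurs entirely before position pos."""
    best_off, best_len = -1, 0
    for start in sa:                 # naive scan over candidate suffixes
        if start >= pos:
            continue
        l = 0
        while pos + l < len(s) and start + l < pos and s[start + l] == s[pos + l]:
            l += 1
        if l > best_len:
            best_off, best_len = start, l
    return best_off, best_len

text = "abracadabra abracadabra"
sa = suffix_array(text)
print(longest_match(text, sa, 12))   # the second occurrence matches the first
```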
Issues with Strong Compression of Plasma Target by Stabilized Imploding Liner
NASA Astrophysics Data System (ADS)
Turchi, Peter; Frese, Sherry; Frese, Michael
2017-10-01
Calculations of strong compression (10:1 in radius) of an FRC by imploding liquid metal liners, stabilized against Rayleigh-Taylor modes, using loss scalings based on Bohm vs. 100× classical diffusion rates, predict useful compression with implosion times of half the initial energy lifetime. The elongation (length-to-diameter ratio) near peak compression needed to satisfy the empirical stability criterion and also retain alpha particles is about ten. The present paper extends these considerations to issues of the initial FRC, including stability conditions (S*/E) and allowable angular speeds. Furthermore, efficient recovery of the implosion energy and alpha-particle work, in order to reduce the nuclear gain necessary for an economical power reactor, is seen as an important element of the stabilized liner implosion concept for fusion. We describe recent progress in the design and construction of the high energy-density prototype of a Stabilized Liner Compressor (SLC), leading to repetitive laboratory experiments to develop the plasma target. Supported by ARPA-E ALPHA Program.
Jeon, Joonryong
2017-01-01
In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) has been applied to it to implement the diverse logic needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was proven capable of acquiring valid signals in a compressed size. PMID:28704945
Heo, Gwanghee; Jeon, Joonryong
2017-07-12
In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) has been applied to it to implement the diverse logic needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was proven capable of acquiring valid signals in a compressed size.
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
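A minimal NumPy sketch of the MSB/LSB mapping described above; the 8-bit codecs themselves, the compression-parameter selection, and the rate allocation studied in the paper are outside its scope, and the array names are illustrative.

```python
import numpy as np

def split_msb_lsb(img16):
    """Split a uint16 image into MSB and LSB uint8 planes, as in the byte-plane mapping above."""
    img16 = np.asarray(img16, dtype=np.uint16)
    msb = (img16 >> 8).astype(np.uint8)     # most significant bytes
    lsb = (img16 & 0xFF).astype(np.uint8)   # least significant bytes
    return msb, lsb

def merge_msb_lsb(msb, lsb):
    """Inverse mapping applied after the two 8-bit streams are decoded."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

frame = np.random.randint(0, 2**16, size=(4, 4), dtype=np.uint16)   # stand-in for an IR frame
msb, lsb = split_msb_lsb(frame)
assert np.array_equal(merge_msb_lsb(msb, lsb), frame)                # lossless round trip
```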
NASA Astrophysics Data System (ADS)
García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto
2014-05-01
Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. Therefore, it is mandatory to provide hardware implementations for this type of algorithm in order to achieve the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on the one hand, introducing the whole C-language description in CatapultC and, on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for an SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All of this serves to demonstrate that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.
Nonlinear vibration analysis of the high-efficiency compressive-mode piezoelectric energy harvester
NASA Astrophysics Data System (ADS)
Yang, Zhengbao; Zu, Jean
2015-04-01
The power source is critical to achieving independent and autonomous operation of electronic mobile devices. Vibration-based energy harvesting has been extensively studied recently and is recognized as a promising technology for realizing an inexhaustible power supply for small-scale electronics. Among various approaches, piezoelectric energy harvesting has gained the most attention due to its high conversion efficiency and simple configurations. However, most piezoelectric energy harvesters (PEHs) to date are based on bending-beam structures and can only generate limited power over a narrow working bandwidth. The insufficient electric output has greatly impeded their practical application. In this paper, we present an innovative lead zirconate titanate (PZT) energy harvester, named the high-efficiency compressive-mode piezoelectric energy harvester (HC-PEH), to enhance the performance of energy harvesters. A theoretical model was developed analytically and solved numerically to study the nonlinear characteristics of the HC-PEH. The results estimated by the developed model agree well with the experimental data from the fabricated prototype. The HC-PEH shows strong nonlinear responses, a favorable working bandwidth and superior power output. Under a weak excitation of 0.3 g (g = 9.8 m/s^2), a maximum power output of 30 mW is generated at 22 Hz, which is about ten times better than current energy harvesters. The HC-PEH demonstrates the capability of generating enough power for most wireless sensors.
Trajectory NG: portable, compressed, general molecular dynamics trajectories.
Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David
2011-10-01
We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied.
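As a hedged sketch of the core idea only (quantize coordinates to a chosen precision, difference them in time, then entropy-code the residuals), the toy example below uses NumPy with zlib's Deflate in place of the paper's block-sorting and Huffman stages; the precision, trajectory shape, and function names are illustrative assumptions.

```python
import numpy as np, zlib

def compress_frames(frames, precision=1e-3):
    """Quantize coordinates, delta-encode consecutive frames, then deflate.
    `frames` has shape (n_frames, n_atoms, 3); returns a compressed byte string."""
    q = np.round(np.asarray(frames) / precision).astype(np.int32)
    deltas = np.concatenate([q[:1], np.diff(q, axis=0)])   # first frame + time differences
    return zlib.compress(deltas.tobytes(), level=9)

def decompress_frames(blob, shape, precision=1e-3):
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return np.cumsum(deltas, axis=0) * precision            # undo the deltas and quantization

traj = np.cumsum(np.random.normal(scale=0.01, size=(100, 50, 3)), axis=0)  # toy trajectory
blob = compress_frames(traj)
print(len(blob) / traj.astype(np.float32).nbytes)            # ratio vs single-precision storage
```

Because neighbouring frames move little, the deltas are small integers and compress far better than the raw coordinates, which is the observation the paper builds on.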
NASA Astrophysics Data System (ADS)
Wibowo; Fadillah, Y.
2018-03-01
Efficiency in construction work is very important. Concrete that is easy to work and reaches its service strength quickly largely determines the level of efficiency. In this research, we studied the optimization of accelerator usage for achieving compressive strength of concrete as a function of time. Adding accelerator at 0.3%-2.3% of the cement weight has a positive impact on the rapid strength gain of the hardened concrete; however, the faster strength development over time also increases the filling-ability parameter values of the self-compacting concrete. An accelerator dosage aligned with the standard range of filling-ability parameters for HSSCC will be useful guidance for producers in the ready-mix concrete industry.
Design of a Variational Multiscale Method for Turbulent Compressible Flows
NASA Technical Reports Server (NTRS)
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
Mach 6.5 air induction system design for the Beta 2 two-stage-to-orbit booster vehicle
NASA Technical Reports Server (NTRS)
Midea, Anthony C.
1991-01-01
A preliminary, two-dimensional, mixed compression air induction system is designed for the Beta II Two Stage to Orbit booster vehicle to minimize installation losses and efficiently deliver the required airflow. Design concepts, such as an external isentropic compression ramp and a bypass system were developed and evaluated for performance benefits. The design was optimized by maximizing installed propulsion/vehicle system performance. The resulting system design operating characteristics and performance are presented. The air induction system design has significantly lower transonic drag than similar designs and only requires about 1/3 of the bleed extraction. In addition, the design efficiently provides the integrated system required airflow, while maintaining adequate levels of total pressure recovery. The excellent performance of this highly integrated air induction system is essential for the successful completion of the Beta II booster vehicle mission.
Consider the DME alternative for diesel engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleisch, T.H.; Meurer, P.C.
1996-07-01
Engine tests demonstrate that dimethyl ether (DME, CH3OCH3) can provide an alternative approach toward efficient, ultra-clean and quiet compression ignition (CI) engines. From a combustion point of view, DME is an attractive alternative fuel for CI engines, primarily for commercial applications in urban areas, where ultra-low emissions will be required in the future. DME can resolve the classical diesel emission problem of smoke emissions, which are completely eliminated. With a properly developed DME injection and combustion system, NOx emissions can be reduced to 40% of Euro II or U.S. 1998 limits, and can meet the future ULEV standards of California. Simultaneously, the combustion noise is reduced by as much as 15 dB(A) below diesel levels. In addition, the classical diesel advantages such as high thermal efficiency, compression ignition, engine robustness, etc., are retained.
Region-Based Prediction for Image Compression in the Cloud.
Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine
2018-04-01
Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as High Efficiency Video Coding.
NASA Astrophysics Data System (ADS)
Li, M.; Yuan, T.; Xu, Y. X.; Luo, S. N.
2018-05-01
When an intense picosecond laser pulse impinges on a dense plasma, a high energy density plasma bunch, comprising an electron bunch and an ion bunch, can be generated in the target. We simulate this process through one-dimensional particle-in-cell simulation and find that the electron bunch generation is mainly due to a local high energy density electron sphere originating in the plasma skin layer. Once generated, the sphere rapidly expands, compressing the surrounding electrons and inducing a high density electron layer; at the same time, hot electrons are efficiently triggered in the local sphere and travel through the whole target. Under the combined compression of the light pressure and the forward- and backward-running hot electrons, a high energy density electron bunch is generated. The bunch energy density is of order TJ/m3 under our conditions, which is significant for laser-driven dynamic high pressure generation and may find applications in high energy density physics.
NASA Technical Reports Server (NTRS)
Torrence, M. G.
1975-01-01
An investigation of a fixed-geometry, swept external-internal compression inlet was conducted at a Mach number of 6.0 and a test-section Reynolds number of 1.55 x 10^7 per meter. The test conditions were constant for all runs, with stagnation pressure and temperature at 20 atmospheres and 500 K, respectively. Tests were made at angles of attack of -5 deg, 0 deg, 3 deg, and 5 deg. Measurements consisted of pitot- and static-pressure surveys in the inlet throat, wall static pressures, and surface temperatures. Boundary-layer bleed was provided on the centerbody and on the cowl internal surface. The inlet performance was consistently high over the range of angles of attack tested, with an overall average total pressure recovery of 78 percent and a corresponding adiabatic kinetic-energy efficiency of 99 percent. The inlet throat flow distribution was uniform, and the Mach number and pressure level were of the correct magnitude for efficient combustor design. The utilization of a swept compression field to meet the starting requirements of a fixed-geometry inlet produced neither flow instability nor a tendency to unstart.
Methods for pore water extraction from unsaturated zone tuff, Yucca Mountain, Nevada
Scofield, K.M.
2006-01-01
Assessing the performance of the proposed high-level radioactive waste repository at Yucca Mountain, Nevada, requires an understanding of the chemistry of the water that moves through the host rock. The uniaxial compression method used to extract pore water from samples of tuffaceous borehole core was successful only for nonwelded tuff. An ultracentrifugation method was adopted to extract pore water from samples of the densely welded tuff of the proposed repository horizon. Tests were performed using both methods to determine the efficiency of pore water extraction and the potential effects on pore water chemistry. Test results indicate that uniaxial compression is most efficient for extracting pore water from nonwelded tuff, while ultracentrifugation is more successful in extracting pore water from densely welded tuff. Pore water splits collected from a single nonwelded tuff core during uniaxial compression tests have shown changes in pore water chemistry with increasing pressure for calcium, chloride, sulfate, and nitrate. Pore water samples collected from the intermediate pressure ranges should prevent the influence of re-dissolved, evaporative salts and the addition of ion-deficient water from clays and zeolites. Chemistry of pore water splits from welded and nonwelded tuffs using ultracentrifugation indicates that there is no substantial fractionation of solutes.
Design and Evaluation of a Bolted Joint for a Discrete Carbon-Epoxy Rod-Reinforced Hat Section
NASA Technical Reports Server (NTRS)
Baker, Donald J.; Rousseau, Carl Q.
1996-01-01
The use of pre-fabricated pultruded carbon-epoxy rods has reduced the manufacturing complexity and costs of stiffened composite panels while increasing the damage tolerance of the panels. However, repairability of these highly efficient discrete stiffeners has been a concern. Design, analysis, and test results are presented in this paper for a bolted-joint repair for the pultruded rod concept that is capable of efficiently transferring axial loads in a hat-section stiffener on the upper skin segment of a heavily loaded aircraft wing component. A tension and a compression joint design were evaluated. The tension joint design achieved approximately 1.0 percent strain in the carbon-epoxy rod-reinforced hat-section and failed in a metal fitting at 166 percent of the design ultimate load. The compression joint design failed in the carbon-epoxy rod-reinforced hat-section test specimen area at approximately 0.7 percent strain and at 110 percent of the design ultimate load. This strain level of 0.7 percent in compression is similar to the failure strain observed in previously reported carbon-epoxy rod-reinforced hat-section column tests.
Han, Hyeon; Kim, Donghoon; Chu, Kanghyun; Park, Jucheol; Nam, Sang Yeol; Heo, Seungyang; Yang, Chan-Ho; Jang, Hyun Myung
2018-01-17
Ferroelectric photovoltaics (FPVs) are being extensively investigated by virtue of switchable photovoltaic responses and anomalously high photovoltages of ~10^4 V. However, FPVs suffer from extremely low photocurrents due to their wide band gaps (Eg). Here, we present a promising FPV based on a hexagonal YbFeO3 (h-YbFO) thin-film heterostructure by exploiting its narrow Eg. More importantly, we demonstrate enhanced FPV effects by suitably exploiting the substrate-induced film strain in these h-YbFO-based photovoltaics. A compressive-strained h-YbFO/Pt/MgO heterojunction device shows ~3 times higher photovoltaic efficiency than that of a tensile-strained h-YbFO/Pt/Al2O3 device. We have shown that the enhanced photovoltaic efficiency mainly stems from the enhanced photon absorption over a wide range of the photon energy, coupled with the enhanced polarization under a compressive strain. Density functional theory studies indicate that the compressive strain reduces Eg substantially and enhances the strength of d-d transitions. This study will set a new standard for determining substrates toward thin-film photovoltaics and optoelectronic devices.
Efficient burst image compression using H.265/HEVC
NASA Astrophysics Data System (ADS)
Roodaki-Lavasani, Hoda; Lainema, Jani
2014-02-01
New imaging use cases are emerging as more powerful camera hardware is entering consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of a camera operation allows e.g. selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality and the fact that these kind of image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.
An efficient and extensible approach for compressing phylogenetic trees
2011-01-01
Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. PMID:22165819
An efficient and extensible approach for compressing phylogenetic trees.
Matthews, Suzanne J; Williams, Tiffani L
2011-10-18
Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.
Test of superplastically formed corrugated aluminum compression specimens with beaded webs
NASA Technical Reports Server (NTRS)
Davis, Randall C.; Royster, Dick M.; Bales, Thomas T.; James, William F.; Shinn, Joseph M., Jr.
1991-01-01
Corrugated wall sections provide a highly efficient structure for carrying compressive loads in aircraft and spacecraft fuselages. The superplastic forming (SPF) process offers a means to produce complex shells and panels with corrugated wall shapes. A study was made to investigate the feasibility of superplastically forming 7475-T6 aluminum sheet into a corrugated wall configuration and to demonstrate the structural integrity of the construction by testing. The corrugated configuration selected has beaded web segments separating curved-cap segments. Eight test specimens were fabricated. Two specimens were simply a single sheet of aluminum superplastically formed to a beaded-web, curved-cap corrugation configuration. Six specimens were single-sheet corrugations modified by adhesive bonding additional sheet material to selectively reinforce the curved-cap portion of the corrugation. The specimens were tested to failure by crippling in end compression at room temperature.
Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.
2015-01-01
Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling significant reduction to the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000 performs better than AVC, the state of the art video compression standard.
Martensitic and magnetic transformation in Ni-Mn-Ga-Co ferromagnetic shape memory alloys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cong, D. Y.; Wang, S.; Wang, Y. D.
2008-01-01
The effect of Co addition on the crystal structure, martensitic transformation, Curie temperature and compressive properties of Ni53-xMn25Ga22Cox alloys with Co content up to 14 at.% was investigated. An abrupt decrease of the martensitic transformation temperature was observed when the Co content exceeded 6 at.%, which can be attributed to the atomic disorder resulting from the Co addition. Substitution of Co for Ni proved efficient in increasing the Curie temperature. Compression experiments showed that the substitution of 4 at.% Co for Ni did not change the fracture strain, but led to an increase in the compressive strength and a decrease in the yield stress. This study may offer experimental data for developing high performance ferromagnetic shape memory alloys.
High reliability outdoor sonar prototype based on efficient signal coding.
Alvarez, Fernando J; Ureña, Jesús; Mazo, Manuel; Hernández, Alvaro; García, Juan J; de Marziani, Carlos
2006-10-01
Many mobile robots and autonomous vehicles designed for outdoor operation have incorporated ultrasonic sensors in their navigation systems, whose function is mainly to avoid possible collisions with very close obstacles. The use of these systems in more precise tasks requires signal encoding and the incorporation of pulse compression techniques that have already been used with success in the design of high-performance indoor sonars. However, the transmission of ultrasonic encoded signals outdoors entails a new challenge because of the effects of atmospheric turbulence. This phenomenon causes random fluctuations in the phase and amplitude of traveling acoustic waves, a fact that can make the encoded signal completely unrecognizable by its matched receiver. Atmospheric turbulence is investigated in this work, with the aim of determining the conditions under which it is possible to assure the reliable outdoor operation of an ultrasonic pulse compression system. As a result of this analysis, a novel sonar prototype based on complementary sequences coding is developed and experimentally tested. This encoding scheme provides the system with very useful additional features, namely, high robustness to noise, multi-mode operation capability (simultaneous emissions with minimum cross talk interference), and the possibility of applying an efficient detection algorithm that notably decreases the hardware resource requirements.
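The abstract names complementary-sequence coding; the sketch below shows only the standard recursive construction of a binary Golay complementary pair and the sidelobe-cancellation property that makes such codes attractive for pulse-compression receivers. It is not the authors' transmission scheme, and the sequence length is an illustrative choice.

```python
import numpy as np

def golay_pair(n_iter):
    """Standard recursive construction of a binary Golay complementary pair of length 2**n_iter."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_iter):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)                        # length-64 complementary pair
acorr = np.correlate(a, a, "full") + np.correlate(b, b, "full")
# The sum of the two autocorrelations is an ideal spike: 2N at zero lag, 0 elsewhere,
# which is what gives the matched-filter output its clean compressed pulse.
print(np.max(np.abs(acorr[:len(a) - 1])))   # sidelobes of the summed autocorrelation -> 0.0
print(acorr[len(a) - 1])                    # main lobe -> 128.0
```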
Magnetic compression laser driving circuit
Ball, D.G.; Birx, D.; Cook, E.G.
1993-01-05
A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.
Magnetic compression laser driving circuit
Ball, Don G.; Birx, Dan; Cook, Edward G.
1993-01-01
A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.
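The quoted pulse parameters imply the following compression figures; this is a simple arithmetic check, with the peak-power scaling added here only as a lossless idealization not stated in the abstract:

\[
\frac{\tau_{\mathrm{in}}}{\tau_{\mathrm{out}}} = \frac{1.5\ \mu\mathrm{s}}{40\ \mathrm{ns}} \approx 37.5,
\qquad
\frac{V_{\mathrm{out}}}{V_{\mathrm{in}}} = \frac{60\ \mathrm{kV}}{20\ \mathrm{kV}} = 3,
\qquad
\frac{P_{\mathrm{out}}}{P_{\mathrm{in}}} \lesssim \frac{\tau_{\mathrm{in}}}{\tau_{\mathrm{out}}} \approx 37.5 .
\]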
NASA Astrophysics Data System (ADS)
Nehlig, F.; Vollmer, B.; Braine, J.
2016-03-01
The cluster environment can affect galaxy evolution in different ways: via ram pressure stripping or by gravitational perturbations caused by galactic encounters. Both kinds of interactions can lead to the compression of the interstellar medium (ISM) and its associated magnetic fields, causing an increase in the gas surface density and the appearance of asymmetric ridges of polarized radio continuum emission. New IRAM 30m HERA CO(2-1) data of NGC 4501, a Virgo spiral galaxy currently experiencing ram pressure stripping, and NGC 4567/68, an interacting pair of galaxies in the Virgo cluster, are presented. We find an increase in the molecular fraction where the ISM is compressed. The gas is close to self-gravitation in compressed regions. This leads to an increase in gas pressure and a decrease in the ratio between the molecular fraction and total ISM pressure. The overall Kennicutt-Schmidt relation based on a pixel-by-pixel analysis at ~1.5 kpc resolution is not significantly modified by compression. However, we detected continuous regions of low molecular star formation efficiencies in the compressed parts of the galactic gas disks. The data suggest that a relation between the molecular star formation efficiency SFE_H2 = SFR/M(H2) and gas self-gravitation (R_mol/P_tot and the Toomre Q parameter) exists. Both systems show spatial variations in the star formation efficiency with respect to the molecular gas that can be related to environmental compression of the ISM. An analytical model was used to investigate the dependence of SFE_H2 on self-gravitation. The model correctly reproduces the correlations between R_mol/P_tot, SFE_H2, and Q if different global turbulent velocity dispersions are assumed for the three galaxies. We found that variations in the N(H2)/I_CO conversion factor can mask most of the correlation between SFE_H2 and the Toomre Q parameter. Dynamical simulations were used to compare the effects of ram pressure and tidal ISM compression. These models give direct access to the volume density. We conclude that a gravitationally induced ISM compression has the same consequences as ram pressure compression: (I) an increasing gas surface density; (II) an increasing molecular fraction; and (III) a decreasing R_mol/P_tot in the compressed region due to the presence of nearly self-gravitating gas. The response of SFE_H2 to compression is more complex. While in the violent ISM-ISM collisions (e.g., Taffy galaxies and NGC 4438) the interaction makes star formation drop by an order of magnitude, we only detect an SFE_H2 variation of ~50% in the compressed regions of the three galaxies. We suggest that the decrease in star formation depends on the ratio between the compression timescale and the turbulent dissipation timescale. In NGC 4501 and NGC 4567/68 the compression timescale is comparable to the turbulent dissipation timescale and only leads to minor changes in the molecular star formation efficiency.
HCCI Combustion Engines Final Report CRADA No. TC02032.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aceves, S.; Lyford-Pike, E.
This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL) and Cummins Engine Company (Cummins), to advance the state of the art on Homogeneous-Charge Compression-Ignition (HCCI) engines, resulting in a clean, high-efficiency alternative to diesel engines.
Unified approach for incompressible flows
NASA Astrophysics Data System (ADS)
Chang, Tyne-Hsien
1993-12-01
A unified approach for solving both compressible and incompressible flows was investigated in this study. The difference in CFD code development between incompressible and compressible flows is due to their mathematical characteristics. However, if one modifies the continuity equation for incompressible flows by introducing pseudo-compressibility, the governing equations for incompressible flows have the same mathematical character as those for compressible flows. The application of a compressible flow code to solve incompressible flows then becomes feasible. Among numerical algorithms developed for compressible flows, the Centered Total Variation Diminishing (CTVD) schemes possess better mathematical properties to damp out spurious oscillations while providing high-order accuracy for high speed flows. This leads us to believe that CTVD schemes can equally well solve incompressible flows. In this study, the governing equations for incompressible flows include the continuity equation and the momentum equations. The continuity equation is modified by adding a time-derivative of the pressure term containing the artificial compressibility. The modified continuity equation together with the unsteady momentum equations forms a hyperbolic-parabolic type of time-dependent system of equations, so that the CTVD schemes can be implemented. In addition, the boundary conditions, including physical and numerical boundary conditions, must be properly specified to obtain an accurate solution. The CFD code for this research is currently in progress. Flow past a circular cylinder will be used for numerical experiments to determine the accuracy and efficiency of the code before applying this code to more specific applications.
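For reference, the pseudo-compressibility modification referred to above takes, in its standard (Chorin-type) form, the following shape; this is a reconstruction from the literature, not an equation quoted from the abstract:

\[
\frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \nabla\cdot\mathbf{u} = 0,
\qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\nabla p + \nu\,\nabla^{2}\mathbf{u},
\]

where beta is the artificial compressibility parameter and tau a pseudo-time; as the pseudo-time solution converges, the pressure derivative vanishes and the divergence-free constraint of the incompressible equations is recovered.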
NASA Astrophysics Data System (ADS)
Liu, Y. B.; Zhuge, W. L.; Zhang, Y. J.; Zhang, S. Y.
2016-05-01
To reach the goals of energy conservation and emission reduction, high intake pressure is needed to meet the demands of high power density and high EGR rate in internal combustion engines. The power density of present diesel engines has reached 90 kW/L, and the required intake pressure ratio is over 5. A two-stage turbocharging system is an effective way to realize such a high compression ratio. Because the turbocharging compression work derives from exhaust gas energy, the efficiency of exhaust gas energy use, which is influenced by the design and matching of the turbine system, is important to the performance of a highly supercharged engine. A conventional turbine system is assembled from single-stage turbocharger turbines, and turbine matching is based on turbine maps measured on a test rig; the flow between the turbines is assumed uniform and the outlet quantities of each turbine are taken to be equal to ambient values. However, as several studies have demonstrated, three-dimensional flow field distortion and changes in the outlet quantities influence the performance of the turbine system. For an engine equipped with a two-stage turbocharging system, optimization of the turbine system design will increase the efficiency of exhaust gas energy use and thereby increase engine power density; however, the flow interaction within the turbine system changes the flow in each turbine and influences turbine performance. To characterize the interaction between the high pressure and low pressure turbines, the flow in the turbine system is modeled and simulated numerically. The calculation results suggest that the static pressure field at the inlet to the low pressure turbine increases the back pressure of the high pressure turbine, although the efficiency of the high pressure turbine changes little; the distorted velocity field at the outlet of the high pressure turbine produces swirl at the inlet to the low pressure turbine. Clockwise swirl results in a large negative angle of attack at the rotor inlet, which causes flow loss in the turbine impeller passages and decreases turbine efficiency. The negative angle of attack decreases when the inlet swirl is anti-clockwise, and the efficiency of the low pressure turbine can then be increased by 3% compared to the clockwise-swirl inlet condition. Consequently, flow simulation and analysis help clarify the interaction mechanism of the turbine system and optimize the turbine system design.
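As a hedged, textbook aside (not taken from the paper): with ideal equal-efficiency stages and perfect intercooling, total compression work is minimized when the overall pressure ratio is split evenly, so an overall ratio just above 5 needs only modest per-stage ratios:

\[
\pi_{\mathrm{LP}} = \pi_{\mathrm{HP}} = \sqrt{\pi_{\mathrm{total}}} \approx \sqrt{5} \approx 2.24 .
\]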
Comparative data compression techniques and multi-compression results
NASA Astrophysics Data System (ADS)
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.
2013-12-01
Data compression is necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data size, the better the transmission speed and the more time is saved. In such communication, we always want to transmit data efficiently and without noise. This paper presents several techniques for lossless compression of text-type data and compares the results of multiple versus single compression, which helps identify the better compression output and guides the development of compression algorithms.
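The specific techniques compared in the paper are not listed in the abstract; the sketch below merely illustrates, with Python's standard lossless codecs, how single compression can be compared against re-compressing an already-compressed output ("multi-compression"). The input data, codec choices, and second-pass codec are illustrative assumptions.

```python
import bz2, lzma, zlib

def ratio(original, compressed):
    return len(compressed) / len(original)

text = open(__file__, "rb").read() * 50            # any repetitive text-type data will do
single = {name: fn(text) for name, fn in
          [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]}
double = {name: zlib.compress(blob) for name, blob in single.items()}   # second pass

for name in single:
    print(name, f"single {ratio(text, single[name]):.3f}",
          f"re-compressed {ratio(text, double[name]):.3f}")
# Typically the second pass gains little or even adds overhead: the first pass
# has already removed most of the statistical redundancy.
```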
Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm
NASA Astrophysics Data System (ADS)
Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan
2017-12-01
Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
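A minimal sketch of one possible DWT-then-DCT arrangement, assuming the pywt and scipy packages are available; the paper's zero-padding substitute for thresholding/quantization and its satellite-specific tuning are not reproduced here, and the wavelet, retention fraction, and test image are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_forward(img, wavelet="haar", keep=0.1):
    """One DWT level, then a DCT on the approximation band; small DCT coefficients are zeroed."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    coeffs = dctn(cA, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)   # keep only the largest `keep` fraction
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return coeffs, (cH, cV, cD)

def hybrid_inverse(coeffs, details, wavelet="haar"):
    cA = idctn(coeffs, norm="ortho")
    return pywt.idwt2((cA, details), wavelet)

img = np.random.rand(64, 64)                           # stand-in for a satellite image band
rec = hybrid_inverse(*hybrid_forward(img))
print(np.sqrt(np.mean((img - rec) ** 2)))              # reconstruction RMSE
```

The zeroed coefficient mask is what an entropy coder would exploit; the design trade-off is how much of the approximation band's energy to keep versus the extra DWT levels the hybrid avoids.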
Vasudeva, Mohit; Sharma, Sumeet; Mohapatra, S K; Kundu, Krishnendu
2016-01-01
As a substitute for petroleum-derived diesel, biodiesel has high potential as a renewable and environmentally friendly energy source. For petroleum-importing countries, the choice of feedstock for biodiesel production within the geographical region is a major influential factor. Crude rice bran oil is found to be a good and viable feedstock for biodiesel production. A two-step esterification is carried out for the high free fatty acid crude rice bran oil. Blends of 10, 20 and 40% by vol. crude rice bran biodiesel are tested in a variable compression ratio diesel engine at compression ratios of 15, 16, 17 and 18. Engine performance and exhaust emission parameters are examined. Cylinder pressure-crank angle variation is also plotted. The increase in compression ratio from 15 to 18 resulted in an 18.6% decrease in brake specific fuel consumption and a 14.66% increase in brake thermal efficiency on average. Cylinder pressure increases by 15% when the compression ratio is increased. Carbon monoxide emissions decreased by 22.27%, hydrocarbons decreased by 38.4%, carbon dioxide increased by 17.43% and oxides of nitrogen (NOx) increased by 22.76% on average when the compression ratio was increased from 15 to 18. The blends of crude rice bran biodiesel show better results than diesel as the compression ratio increases.
Oscillating-Linear-Drive Vacuum Compressor for CO2
NASA Technical Reports Server (NTRS)
Izenson, Michael G.; Shimko, Martin
2005-01-01
A vacuum compressor has been designed to compress CO2 from approximately equal to 1 psia (approximately equal to 6.9 kPa absolute pressure) to approximately equal to 75 psia (approximately equal to 0.52 MPa), to be insensitive to moisture, to have a long operational life, and to be lightweight, compact, and efficient. The compressor consists mainly of (1) a compression head that includes hydraulic diaphragms, a gas-compression diaphragm, and check valves; and (2) oscillating linear drive that includes a linear motor and a drive spring, through which compression force is applied to the hydraulic diaphragms. The motor is driven at the resonance vibrational frequency of the motor/spring/compression-head system, the compression head acting as a damper that takes energy out of the oscillation. The net effect of the oscillation is to cause cyclic expansion and contraction of the gas-compression diaphragm, and, hence, of the volume bounded by this diaphragm. One-way check valves admit gas into this volume from the low-pressure side during expansion and allow the gas to flow out to the high-pressure side during contraction. Fatigue data and the results of diaphragm stress calculations have been interpreted as signifying that the compressor can be expected to have an operational life of greater than 30 years with a confidence level of 99.9 percent.
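The drive is operated at the resonance of the motor/spring/compression-head system; for orientation only (the symbols are generic and no values are taken from the article), the usual mass-spring resonance relation is

\[
f_{0} = \frac{1}{2\pi}\sqrt{\frac{k}{m}},
\]

with k the effective drive-spring stiffness and m the moving mass; the compression head then acts as the damper that extracts energy from the oscillation on every cycle.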
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curran, Scott; Hanson, Reed M; Wagner, Robert M
2012-01-01
This paper investigates the effect of E85 on load expansion and FTP modal point emissions indices under reactivity controlled compression ignition (RCCI) operation on a light-duty multi-cylinder diesel engine. A General Motors (GM) 1.9L four-cylinder diesel engine with the stock compression ratio of 17.5:1, common rail diesel injection system, high-pressure exhaust gas recirculation (EGR) system and variable geometry turbocharger was modified to allow for port fuel injection with gasoline or E85. Controlling the fuel reactivity in-cylinder by the adjustment of the ratio of premixed low-reactivity fuel (gasoline or E85) to direct injected high reactivity fuel (diesel fuel) has been shown to extend the operating range of high-efficiency clean combustion (HECC) compared to the use of a single fuel alone as in homogeneous charge compression ignition (HCCI) or premixed charge compression ignition (PCCI). The effect of E85 on the Ad-hoc federal test procedure (FTP) modal points is explored along with the effect of load expansion through the light-duty diesel speed operating range. The Ad-hoc FTP modal points of 1500 rpm, 1.0 bar brake mean effective pressure (BMEP); 1500 rpm, 2.6 bar BMEP; 2000 rpm, 2.0 bar BMEP; 2300 rpm, 4.2 bar BMEP; and 2600 rpm, 8.8 bar BMEP were explored. Previous results with 96 RON unleaded test gasoline (UTG-96) and ultra-low sulfur diesel (ULSD) showed that with stock hardware, the 2600 rpm, 8.8 bar BMEP modal point was not obtainable due to excessive cylinder pressure rise rate and unstable combustion both with and without the use of EGR. Brake thermal efficiency and emissions performance of RCCI operation with E85 and ULSD is explored and compared against conventional diesel combustion (CDC) and RCCI operation with UTG-96 and ULSD.
Compressed Air/Vacuum Transportation Techniques
NASA Astrophysics Data System (ADS)
Guha, Shyamal
2011-03-01
The general theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near vacuum pressure. Four versions of such transportation are feasible. In all versions, a "c-shaped" plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In types II-IV, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on a rail track). The proposed transportation system has the following merits: it is virtually accident free, highly energy efficient, and pollution free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.
NASA Astrophysics Data System (ADS)
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
The high resolution of modern satellite imagery comes with a fundamental problem: a large amount of telemetry data must be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps that follow acquisition, file sizes grow even further, making the data harder to store and more time-consuming to transmit from one location to another; hence, compressing the raw data and the various levels of processed data is a necessity for archiving stations to save more space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open-source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images of Airbus Defence & Space's SPOT 6 & 7 satellites with 1.5 m GSD, which were acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms considered are Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain Algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe their compression performance over sample datasets in terms of how much the image data can be compressed while ensuring lossless compression.
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
Primary Energy Efficiency Analysis of Different Separate Sensible and Latent Cooling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdelaziz, Omar
2015-01-01
Separate Sensible and Latent Cooling (SSLC) has been discussed in the open literature as a means to improve air conditioning system efficiency. The main benefit of SSLC is that it enables heat source optimization for the different forms of loads, sensible vs. latent, and as such maximizes the cycle efficiency. In this paper I use a thermodynamic analysis tool in order to analyse the performance of various SSLC technologies including: a multi-evaporator two-stage compression system, a vapour compression system with heat activated desiccant dehumidification, and integrated vapour compression with desiccant dehumidification. A primary coefficient of performance is defined and used to judge the performance of the different SSLC technologies at the design conditions. Results showed the trade-off in performance for different sensible heat factors and regeneration temperatures.
Classification Techniques for Digital Map Compression
1989-03-01
classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng
2013-03-01
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical life or death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured in a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
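A hedged pseudocode-style sketch of the keyframe-selection idea described above, not the authors' implementation; the GOP length, detection inputs, and function names are made up for illustration.

```python
def choose_iframes(detection_frames, n_frames, gop=250):
    """Frames to encode as I-frames: one per vehicle detection at the trigger
    position, plus a sparse periodic fallback every `gop` frames."""
    iframes = set(range(0, n_frames, gop))                  # periodic fallback I-frames
    iframes.update(f for f in detection_frames if 0 <= f < n_frames)
    return sorted(iframes)

# A later vehicle search only decodes these frames instead of the whole compressed stream.
print(choose_iframes(detection_frames=[120, 3131, 7042], n_frames=9000))
```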
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)
1998-01-01
The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
Unified approach for incompressible flows
NASA Astrophysics Data System (ADS)
Chang, Tyne-Hsien
1995-07-01
A unified approach for solving incompressible flows has been investigated in this study. The numerical CTVD (Centered Total Variation Diminishing) scheme used in this study was successfully developed by Sanders and Li for compressible flows, especially high-speed flows. The CTVD scheme possesses good mathematical properties to damp out spurious oscillations while providing high-order accuracy for high-speed flows. This leads us to believe that the CTVD scheme can be applied equally well to incompressible flows. Because of the mathematical difference between the governing equations for incompressible and compressible flows, the scheme cannot be applied directly to incompressible flows. However, if one modifies the continuity equation for incompressible flows by introducing pseudo-compressibility, the governing equations for incompressible flows have the same mathematical character as those for compressible flows. The application of the algorithm to incompressible flows thus becomes feasible. In this study, the governing equations for incompressible flows comprise the continuity equation and the momentum equations. The continuity equation is modified by adding a time derivative of the pressure term containing the artificial compressibility. The modified continuity equation together with the unsteady momentum equations forms a hyperbolic-parabolic type of time-dependent system of equations. Thus, the CTVD schemes can be implemented. In addition, the physical and numerical boundary conditions are properly implemented through characteristic boundary conditions. Accordingly, a CFD code has been developed for this research and is currently under testing. Flow past a circular cylinder was chosen for numerical experiments to determine the accuracy and efficiency of the code. The code has shown some promising results.
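The pseudo-compressibility modification described above is commonly written in Chorin's artificial compressibility form, with β the artificial compressibility parameter and p the pressure divided by density:

```latex
\frac{1}{\beta}\,\frac{\partial p}{\partial t} + \nabla\!\cdot\mathbf{u} = 0,
\qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\nabla p + \nu\,\nabla^{2}\mathbf{u},
```

so the pressure time derivative vanishes at steady state and the divergence-free constraint of incompressible flow is recovered, while the transient system remains hyperbolic-parabolic and amenable to compressible-flow schemes such as CTVD.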
[Neurovascular compression of the medulla oblongata: a rare cause of secondary hypertension].
Nádas, Judit; Czirják, Sándor; Igaz, Péter; Vörös, Erika; Jermendy, György; Rácz, Károly; Tóth, Miklós
2014-05-25
Compression of the rostral ventrolateral medulla oblongata is one of the rarely identified causes of refractory hypertension. In patients with severe, intractable hypertension caused by neurovascular compression, neurosurgical decompression should be considered. The authors present the history of a 20-year-old man with severe hypertension. After excluding other possible causes of secondary hypertension, the underlying cause of his high blood pressure was identified by the demonstration of neurovascular compression on magnetic resonance angiography and of increased sympathetic activity (sinus tachycardia) during the high blood pressure episodes. Due to frequent episodes of hypertensive crises, surgical decompression was recommended, which was performed with the placement of an isograft between the brainstem and the left vertebral artery. In the first six months after the operation, the patient's blood pressure could be kept in the normal range with significantly reduced doses of antihypertensive medication. Repeat magnetic resonance angiography confirmed the cessation of brainstem compression. After six months, increased blood pressure returned periodically, but to a smaller extent and less frequently. Based on the result of magnetic resonance angiography performed 22 months after surgery, re-operation was considered. According to previous literature data, long-term success can only be achieved in one third of patients after surgical decompression. In the majority of patients, surgery results in a significant decrease of blood pressure, an increased efficiency of antihypertensive therapy as well as a decrease in the frequency of highly increased blood pressure episodes. Thus, a significant improvement of the patient's quality of life can be achieved. The case of this patient is an example of the latter scenario.
Visual pattern image sequence coding
NASA Technical Reports Server (NTRS)
Silsbee, Peter; Bovik, Alan C.; Chen, Dapang
1990-01-01
The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and are 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of static VPIC while also reducing information along an additional, temporal dimension, to achieve unprecedented image sequence coding performance.
Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Considering that sensors in wireless sensor networks are energy-limited and that wireless channel conditions are harsh, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing approaches, from theory to practice, to distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
2012-01-01
Background As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to their enormous size. This is and will be a frequent problem encountered every day by researchers who are working on genetic data. There are some options available for compressing and storing such data, such as general-purpose compression software, PBAT/PLINK binary format, etc. However, these currently available methods either do not offer sufficient compression rates, or require a great amount of CPU time for decompression and loading every time the data is accessed. Results Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundreds, which potentially allows SNP data of hundreds of gigabytes to be stored in hundreds of megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than commonly used compression methods, and the data-loading process takes less time. Also, the C++ library provides direct-data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions The SpeedGene algorithm enables the storage and the analysis of next generation sequencing data in current hardware environments, making system upgrades unnecessary. PMID:22591016
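To make the order of magnitude plausible: a biallelic SNP genotype takes one of four values (0, 1, 2 or missing) and therefore fits in two bits, while a text representation typically spends a byte or more per genotype. The sketch below is only an illustration, not the SpeedGene format; it also shows how a packed array can be queried directly without decompressing it first.

```python
import numpy as np

def pack_genotypes(genotypes):
    """Pack biallelic SNP genotypes (0, 1, 2; 3 = missing) into 2 bits each,
    roughly a 16x reduction versus one character plus separator per genotype."""
    g = np.asarray(genotypes, dtype=np.uint8)
    padded = np.resize(g, ((len(g) + 3) // 4) * 4)
    padded[len(g):] = 0                                   # zero the padding
    b = padded.reshape(-1, 4)
    return (b[:, 0] | (b[:, 1] << 2) | (b[:, 2] << 4) | (b[:, 3] << 6)).astype(np.uint8)

def get_genotype(packed, i):
    """Random access to genotype i without decompressing the whole array."""
    return (packed[i // 4] >> (2 * (i % 4))) & 0b11

packed = pack_genotypes([0, 1, 2, 0, 3, 2])
assert get_genotype(packed, 4) == 3
```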
High-performance compression of astronomical images
NASA Technical Reports Server (NTRS)
White, Richard L.
1993-01-01
Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
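The H-transform mentioned above is a Haar-like transform built from 2x2 block sums and differences, computable in integer arithmetic and exactly invertible. A minimal single-level sketch (assuming even image dimensions) follows; the actual coder applies the transform recursively and then quantizes and codes the coefficients.

```python
import numpy as np

def h_transform_level(img):
    """One level of the 2-D H-transform: each 2x2 block is replaced by its
    sum and three differences, using integer arithmetic only."""
    a = img[0::2, 0::2].astype(np.int64)
    b = img[0::2, 1::2].astype(np.int64)
    c = img[1::2, 0::2].astype(np.int64)
    d = img[1::2, 1::2].astype(np.int64)
    h0 = a + b + c + d      # smoothed (low-pass) image at half resolution
    hx = a + b - c - d      # vertical difference
    hy = a - b + c - d      # horizontal difference
    hc = a - b - c + d      # diagonal (cross) difference
    return h0, hx, hy, hc

def inverse_h_level(h0, hx, hy, hc):
    """Exact integer inverse of one H-transform level (lossless mode)."""
    a = (h0 + hx + hy + hc) // 4
    b = (h0 + hx - hy - hc) // 4
    c = (h0 - hx + hy - hc) // 4
    d = (h0 - hx - hy + hc) // 4
    out = np.empty((2 * h0.shape[0], 2 * h0.shape[1]), dtype=np.int64)
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out
```

In lossy mode, coefficients below a noise-derived threshold are set to zero before coding, which is what discards the noise rather than the signal.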
Evaluation of H.264 and H.265 full motion video encoding for small UAS platforms
NASA Astrophysics Data System (ADS)
McGuinness, Christopher D.; Walker, David; Taylor, Clark; Hill, Kerry; Hoffman, Marc
2016-05-01
Of all the steps in the image acquisition and formation pipeline, compression is the only process that degrades image quality. A selected compression algorithm succeeds or fails to provide sufficient quality at the requested compression rate depending on how well the algorithm is suited to the input data. Applying an algorithm designed for one type of data to a different type often results in poor compression performance. This is mostly the case when comparing the performance of H.264, designed for standard definition data, to HEVC (High Efficiency Video Coding), which the Joint Collaborative Team on Video Coding (JCT-VC) designed for high-definition data. This study focuses on evaluating how HEVC compares to H.264 when compressing data from small UAS platforms. To compare the standards directly, we assess two open-source traditional software solutions: x264 and x265. These software-only comparisons allow us to establish a baseline of how much improvement can generally be expected of HEVC over H.264. Then, specific solutions leveraging different types of hardware are selected to understand the limitations of commercial-off-the-shelf (COTS) options. Algorithmically, regardless of the implementation, HEVC is found to provide similar quality video as H.264 at 40% lower data rates for video resolutions greater than 1280x720, roughly 1 Megapixel (MPx). For resolutions less than 1MPx, H.264 is an adequate solution though a small (roughly 20%) compression boost is earned by employing HEVC. New low cost, size, weight, and power (CSWAP) HEVC implementations are being developed and will be ideal for small UAS systems.
Current interruption in inductive storage systems with inertial current source
NASA Astrophysics Data System (ADS)
Vitkovitsky, I. M.; Conte, D.; Ford, R. D.; Lupton, W. H.
1980-03-01
Utilization of inertial current source inductive storage with high power output requires a switch with a short opening time. This switch must operate as a circuit breaker, i.e., be capable of carrying the current for a time period characteristic of inertial systems, such as homopolar generators. For reasonable efficiency, its opening time must be fast to minimize the energy dissipated in downstream fuse stages required for any additional pulse compression. A switch that satisfies these criteria, as well as other requirements such as that for high voltage operation associated with high power output, is an explosively driven switch consisting of a large number of gaps arranged in series. The performance of this switch in limiting and/or interrupting currents produced by large generators has been studied. Single switch modules were designed and tested for limiting the commutating current output of a 1 MW, 60 Hz generator and 500 kJ capacitor banks. Current limiting and commutation were evaluated, using these sources, for currents ranging up to 0.4 MA. The explosive opening of the switch was found to provide an effective first stage for further pulse compression. It opens in tens of microseconds, commutates current at high efficiency (approximately 90%) and recovers very rapidly over a wide range of operating conditions.
NASA Astrophysics Data System (ADS)
Polskoy, Petr; Mailyan, Dmitry; Georgiev, Sergey; Muradyan, Viktor
2018-03-01
The increase in high-rise construction («High-Rise Construction») volume requires the use of high-strength concrete, which leads to a reduction in the section size of structures and a decrease in material consumption. First of all, this refers to compressed elements: when their transverse dimensions are reduced, their flexibility and deformation increase while the load-bearing capacity decreases. Growth in construction also leads to an increase in repair and restoration works and in the strengthening of structures. The most effective method of strengthening structures in high-rise buildings is the use of composite materials, which reduces the weight of reinforcement elements and the labour costs of the works. In this article the results of experimental research on the strength and deformation of short compressed reinforced concrete structures, strengthened with external carbon fiber reinforcement, are presented. Their flexibility is λh = 10, and the cross-section dimension ratio b/h is 2, which is 1.5 times more than recommended by standards in Russia. The research was carried out for three kinds of stress-strain states with different variants of composite reinforcement. The results of the experiment proved the real efficiency of composite reinforcement of compressed elements with a side ratio equal to 2, increasing the bearing capacity of the columns by up to 1.5 times. These results can be used for designing buildings of various numbers of storeys.
NASA Technical Reports Server (NTRS)
Stanitz, John D; Sheldrake, Leonard J
1953-01-01
A technique is developed for the application of a channel design method to the design of high-solidity cascades with prescribed velocity distributions as a function of arc length along the blade-element profile. The technique is applied to both incompressible and subsonic compressible, nonviscous, irrotational fluid motion. For compressible flow, the ratio of specific heats is assumed equal to -1.0. An impulse cascade with 90 degree turning was designed for incompressible flow and was tested at the design angle of attack over a range of downstream Mach numbers from 0.2 to choke flow. To achieve good efficiency, the cascade was designed for prescribed velocities and maximum blade loading according to limitations imposed by considerations of boundary-layer separation.
Less strained and more efficient GaN light-emitting diodes with embedded silica hollow nanospheres
Kim, Jonghak; Woo, Heeje; Joo, Kisu; Tae, Sungwon; Park, Jinsub; Moon, Daeyoung; Park, Sung Hyun; Jang, Junghwan; Cho, Yigil; Park, Jucheol; Yuh, Hwankuk; Lee, Gun-Do; Choi, In-Suk; Nanishi, Yasushi; Han, Heung Nam; Char, Kookheon; Yoon, Euijoon
2013-01-01
Light-emitting diodes (LEDs) have become an attractive alternative to conventional light sources due to their high efficiency and long lifetime. However, the different material properties of GaN and sapphire cause several problems such as high defect density in GaN, serious wafer bowing, particularly in large-area wafers, and poor light extraction of GaN-based LEDs. Here, we suggest a new growth strategy for high efficiency LEDs by incorporating silica hollow nanospheres (S-HNS). In this strategy, S-HNSs were introduced as a monolayer on a sapphire substrate, and the subsequent growth of GaN by metalorganic chemical vapor deposition results in improved crystal quality due to nano-scale lateral epitaxial overgrowth. Moreover, well-defined voids embedded at the GaN/sapphire interface help scatter light effectively for improved light extraction, and reduce wafer bowing due to partial alleviation of compressive stress in GaN. The incorporation of S-HNS into LEDs is thus quite advantageous in achieving high efficiency LEDs for solid-state lighting. PMID:24220259
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with a 26.3x speedup over the original CPU code.
Resource efficient data compression algorithms for demanding, WSN based biomedical applications.
Antonopoulos, Christos P; Voros, Nikolaos S
2016-02-01
During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, increasingly utilize wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms pose a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as newly proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, thus the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the relative negative effect on compression latency as opposed to the increased compression rate. The proposed schemes offer a considerable advantage, especially in achieving the optimum trade-off between compression rate and latency. Specifically, the proposed algorithm combines a highly competitive level of compression with minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the compression rate is concerned. Copyright © 2015 Elsevier Inc. All rights reserved.
Discontinuous Galerkin Methods and High-Speed Turbulent Flows
NASA Astrophysics Data System (ADS)
Atak, Muhammed; Larsson, Johan; Munz, Claus-Dieter
2014-11-01
Discontinuous Galerkin methods are gaining increasing importance within the CFD community as they combine arbitrarily high order of accuracy in complex geometries with parallel efficiency. Particularly the discontinuous Galerkin spectral element method (DGSEM) is a promising candidate for both the direct numerical simulation (DNS) and large eddy simulation (LES) of turbulent flows due to its excellent scaling attributes. In this talk, we present a DNS of a compressible turbulent boundary layer along a flat plate at a free-stream Mach number of M = 2.67 and assess the computational efficiency of the DGSEM at performing high-fidelity simulations of both transitional and turbulent boundary layers. We compare the accuracy of the results as well as the computational performance to results using a high order finite difference method.
Alpöz, A. Riza; Ertuḡrul, Fahinur; Cogulu, Dilsah; Ak, Asli Topaloḡlu; Tanoḡlu, Metin; Kaya, Elçin
2008-01-01
Objectives The aim of this study was to investigate the microhardness and compressive strength of a composite resin (Tetric-Ceram, Ivoclar Vivadent), a compomer (Compoglass, Ivoclar Vivadent), and a resin-modified glass ionomer cement (Fuji II LC, GC Corp) polymerized using halogen light (Optilux 501, Demetron, Kerr) and LED (Bluephase C5, Ivoclar Vivadent) for different curing times. Methods Samples were placed in disc-shaped plastic molds of uniform size (5 mm diameter, 2 mm thickness) for the surface microhardness test and in Teflon cylinders of 4 mm diameter and 2 mm length for the compressive strength test. For each subgroup, 20 samples for microhardness (n=180) and 5 samples for compressive strength were prepared (n=45). In group 1, samples were polymerized using the halogen light source for 40 seconds; in groups 2 and 3 samples were polymerized using the LED light source for 20 seconds and 40 seconds, respectively. All data were analyzed by two-way ANOVA and Tukey's post-hoc tests. Results The same exposure time of 40 seconds with a low-intensity LED was found to be similar to or more efficient than a high-intensity halogen light unit (P>.05); however, application of the LED for 20 seconds was found less efficient than a 40-second curing time (P=.03). Conclusions It is important to increase the light curing time and use appropriate light curing devices to polymerize resin composite in deep cavities to maximize the hardness and compressive strength of restorative materials. PMID:19212507
Applications of just-noticeable depth difference model in joint multiview video plus depth coding
NASA Astrophysics Data System (ADS)
Liu, Chao; An, Ping; Zuo, Yifan; Zhang, Zhaoyang
2014-10-01
A new multiview just-noticeable-depth-difference (MJNDD) model is presented and applied to compress joint multiview video plus depth. Many video coding algorithms remove spatial, temporal and statistical redundancies, but they are not capable of removing perceptual redundancy. Since the final receptor of video is the human eye, perceptual redundancy can be removed to gain higher compression efficiency according to the properties of the human visual system (HVS). The traditional just-noticeable-distortion (JND) model in the pixel domain contains luminance contrast and spatial-temporal masking effects, which describe the perceptual redundancy quantitatively. Because the HVS is very sensitive to depth information, the new MJNDD model is proposed by combining the traditional JND model with a just-noticeable-depth-difference (JNDD) model. The texture video is divided into background and foreground areas using depth information. Then different JND threshold values are assigned to these two parts. Later the MJNDD model is utilized to encode the texture video on JMVC. When encoding the depth video, the JNDD model is applied to remove block artifacts and protect the edges. Then we use VSRS3.5 (View Synthesis Reference Software) to generate the intermediate views. Experimental results show that our model can endure more noise, and the compression efficiency is improved by 25.29 percent on average and by 54.06 percent at most compared to JMVC while maintaining the subjective quality. Hence it can achieve a high compression ratio and a low bit rate.
Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging
Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.
2017-01-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
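As a point of reference for the discussion above, the L+S baseline is often written as the optimization below (the notation here is generic and not necessarily that of the cited works); LASSI then replaces the fixed sparsifying transform acting on S with patch-wise sparse coding in a learned, adaptive dictionary:

```latex
\min_{L,\,S}\;
\tfrac{1}{2}\,\bigl\lVert \mathcal{A}(L+S) - y \bigr\rVert_2^2
\;+\; \lambda_L \lVert L \rVert_{*}
\;+\; \lambda_S \lVert T S \rVert_{1},
```

where $y$ are the undersampled k-t space measurements, $\mathcal{A}$ the measurement operator, $\lVert\cdot\rVert_{*}$ the nuclear norm promoting a low-rank component, $T$ a sparsifying transform, and $\lambda_L$, $\lambda_S$ regularization weights.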
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
NASA Astrophysics Data System (ADS)
Li, Baihong; Dong, Ruifang; Zhou, Conghua; Xiang, Xiao; Li, Yongfang; Zhang, Shougang
2018-05-01
Selective two-photon microscopy and high-precision nonlinear spectroscopy rely on efficient spectral compression at the desired frequency. Previously, a Fresnel-inspired binary phase shaping (FIBPS) method was theoretically proposed for spectral compression of two-photon absorption and second-harmonic generation (SHG) with a square-chirped pulse. Here, we theoretically show that the FIBPS can introduce a negative quadratic frequency phase (negative chirp) by analogy with the spatial-domain phase function of a Fresnel zone plate. Thus, the previous theoretical model can be extended to the case where the pulse is transform-limited and of any symmetrical spectral shape. As an example, we experimentally demonstrate spectral compression in SHG by FIBPS for a Gaussian transform-limited pulse and show good agreement with theory. Given the fundamental pulse bandwidth, a narrower SHG bandwidth with relatively high intensity can be obtained by simply increasing the number of binary phases. The experimental results also verify that our method is superior to that proposed in [Phys. Rev. A 46, 2749 (1992), 10.1103/PhysRevA.46.2749]. This method will significantly facilitate the applications of selective two-photon microscopy and spectroscopy. Moreover, as it can introduce negative dispersion, it can also be generalized to other applications in the field of dispersion compensation.
QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †
Ni, Yang
2018-01-01
In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903
Heat-pump cool storage in a clathrate of freon
NASA Astrophysics Data System (ADS)
Tomlinson, J. J.
Presented are the analytical description and assessment of a unique heat pump/storage system in which the conventional evaporator of the vapor compression cycle is replaced by a highly efficient direct contact crystallizer. The thermal storage technique requires the formation of a refrigerant gas hydrate (a clathrate) and exploits an enthalpy of reaction comparable to the heat of fusion of ice. Additional system operational benefits include cool storage at the favorable temperatures of 4 to 7 C (40 to 45 F), and highly efficient heat transfer rates afforded by the direct contact mechanism. In addition, the experimental approach underway at ORNL to study such a system is discussed.
An Efficient, Lossless Database for Storing and Transmitting Medical Images
NASA Technical Reports Server (NTRS)
Fenstermacher, Marc J.
1998-01-01
This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC compression methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
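A minimal sketch of the general set-redundancy idea follows: a reference image is derived from the set and each member is stored as a residual against it, so the shared information is stored only once. This is only an illustration of the concept, not the MARS, MAZE or MaxGBA algorithms themselves; zlib stands in for the entropy coder.

```python
import numpy as np
import zlib

def compress_set(images):
    """Store a pixel-wise median reference plus the residual of each image,
    then entropy-code the residuals. Similar images give small residuals,
    which compress far better than the original pixel values."""
    stack = np.stack(images).astype(np.int16)
    reference = np.median(stack, axis=0).astype(np.int16)
    residuals = stack - reference
    payload = zlib.compress(residuals.tobytes())
    return reference, payload, stack.shape

def decompress_set(reference, payload, shape):
    """Lossless reconstruction of the whole set."""
    residuals = np.frombuffer(zlib.decompress(payload), dtype=np.int16).reshape(shape)
    return residuals + reference
```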
Compressor ported shroud for foil bearing cooling
Elpern, David G [Los Angeles, CA; McCabe, Niall [Torrance, CA; Gee, Mark [South Pasadena, CA
2011-08-02
A compressor ported shroud takes compressed air from the shroud of the compressor before it is completely compressed and delivers it to foil bearings. The compressed air has a lower pressure and temperature than compressed outlet air. The lower temperature of the air means that less air needs to be bled off from the compressor to cool the foil bearings. This increases the overall system efficiency due to the reduced mass flow requirements of the lower temperature air. By taking the air at a lower pressure, less work is lost compressing the bearing cooling air.
A Vortex Particle-Mesh method for subsonic compressible flows
NASA Astrophysics Data System (ADS)
Parmentier, Philippe; Winckelmans, Grégoire; Chatelain, Philippe
2018-02-01
This paper presents the implementation and validation of a remeshed Vortex Particle-Mesh (VPM) method capable of simulating complex compressible and viscous flows. It is supplemented with a radiation boundary condition in order for the method to accommodate the radiating quantities of the flow. The efficiency of the methodology relies on the use of an underlying grid; it allows the use of an FFT-based Poisson solver to calculate the velocity field, and the use of high-order isotropic finite differences to evaluate the non-advective terms in the Lagrangian form of the conservation equations. The Möhring analogy is then also used to further obtain the far-field sound produced by two co-rotating Gaussian vortices. It is demonstrated that the method is in excellent quantitative agreement with reference results that were obtained using a high-order Eulerian method and a high-order remeshed Vortex Particle (VP) method.
Resolution enhancement of low-quality videos using a high-resolution frame
NASA Astrophysics Data System (ADS)
Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer
2006-01-01
This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structure vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results of the DCT-domain SR synthesis approach.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
System design of an optical interferometer based on compressive sensing
NASA Astrophysics Data System (ADS)
Liu, Gang; Wen, De-Sheng; Song, Zong-Xi
2018-07-01
In this paper, we develop a new optical interferometric telescope architecture based on compressive sensing (CS) theory. Traditional optical telescopes with large apertures must be large in size, heavy and have high power consumption, which limits the development of space-based telescopes. A turning point has occurred with the advent of imaging technology that utilizes Fourier-domain interferometry. This technology can reduce the system size, weight and power consumption by an order of magnitude compared to traditional optical telescopes at the same resolution. CS theory demonstrates that incomplete and noisy Fourier measurements may suffice for the exact reconstruction of sparse or compressible signals. Our proposed architecture combines advantages from the two frameworks, and the performance is evaluated through simulations. The results indicate the ability to efficiently sample spatial frequencies, while being lightweight and compact in size. Another attractive property of our architecture is its strong denoising ability for Gaussian noise.
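The reconstruction guarantee that CS theory provides is usually stated as a basis pursuit denoising problem; in the notation below (ours, for illustration), $y$ collects the incomplete, noisy Fourier-domain interferometric measurements:

```latex
\hat{x} \;=\; \arg\min_{x}\ \lVert \Psi x \rVert_{1}
\quad \text{subject to} \quad \lVert \Phi x - y \rVert_{2} \le \varepsilon ,
```

where $\Phi$ is the measurement operator, $\Psi$ a sparsifying transform for the scene, and $\varepsilon$ a bound on the measurement noise.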
Turbulence statistics with quantified uncertainty in cold-wall supersonic channel flow
NASA Astrophysics Data System (ADS)
Ulerich, Rhys; Moser, Robert D.
2012-11-01
To investigate compressibility effects in wall-bounded turbulence, a series of direct numerical simulations of compressible channel flow with isothermal (cold) walls have been conducted. All combinations of Re = {3000, 5000} and Ma = {0.1, 0.5, 1.5, 3.0} have been simulated, where the Reynolds and Mach numbers are based on bulk velocity and the sound speed at the wall temperature. Turbulence statistics with precisely quantified uncertainties computed from these simulations will be presented and are being made available in a public database at http://turbulence.ices.utexas.edu/. The simulations were performed using a new pseudo-spectral code called Suzerain, which was designed to efficiently produce high quality data on compressible, wall-bounded turbulent flows using a semi-implicit Fourier/B-spline numerical formulation. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belte, D.; Stratton, M.V.
1982-08-01
The United States Army Aviation Engineering Flight Activity conducted level flight performance tests of the OH-58C helicopter at Edwards AFB, California from 22 September to 20 November 1981, and at St. Paul, Minnesota, from 12 January to 9 February 1982. Nondimensional methods were used to identify effects of compressibility and blade stall on performance, and increased referred rotor speeds were used to supplement the range of currently available level flight data. Maximum differences in nondimensional power required attributed to compressibility effects varied from 6.5 to 11%. However, high actual rotor speed at a given condition can result in less power required than at low rotor speed even with the compressibility penalty. The power required characteristics determined by these tests can be combined with engine performance to determine the most fuel efficient operating conditions.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. Research on this technique has concentrated on the uncompressed spatial domain, and data hiding is considered more challenging to perform in compressed domains such as vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy was used to search for the optimal solution of the bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and higher hiding efficiency than the other four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved a bit rate as low as the original BTC algorithm.
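For context, BTC replaces each image block with a bitmap and two reconstruction levels; the hiding scheme above then embeds secret bits into such per-block values. Below is a minimal sketch of the compression step only, using the common AMBTC variant; the embedding itself is not shown and this is not the authors' exact scheme.

```python
import numpy as np

def ambtc_encode_block(block):
    """Absolute-moment BTC of one block: keep a bitmap plus a low and a
    high reconstruction level derived from the block mean."""
    m = block.mean()
    bitmap = block >= m
    high = block[bitmap].mean() if bitmap.any() else m
    low = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap, int(round(low)), int(round(high))

def ambtc_decode_block(bitmap, low, high):
    """Reconstruct the block from its bitmap and two levels."""
    return np.where(bitmap, high, low)

block = np.array([[10, 12, 200, 198],
                  [11, 13, 201, 199],
                  [10, 12, 202, 200],
                  [11, 14, 203, 197]])
bitmap, low, high = ambtc_encode_block(block)
recon = ambtc_decode_block(bitmap, low, high)
```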
Design and Testing of CO 2 Compression Using Supersonic Shock Wave Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koopman, Aaron
This report summarizes work performed by Ramgen and subcontractors in pursuit of the design and construction of a 10 MW supersonic CO2 compressor and supporting facility. The compressor will demonstrate application of Ramgen's supersonic compression technology at an industrial scale using CO2 in a closed loop. The report includes details of early feasibility studies, CFD validation and comparison to experimental data, static test experimental results, compressor and facility design and analyses, and development of aerodynamic tools. A summary of Ramgen's ISC Engine program activity is also included. This program will demonstrate the adaptation of Ramgen's supersonic compression and advanced vortex combustion technology to result in a highly efficient and cost effective alternative to traditional gas turbine engines. The build-out of a 1.5 MW test facility to support the engine and associated subcomponent test program is summarized.
Health and efficiency in trimix versus air breathing in compressed air workers.
Van Rees Vellinga, T P; Verhoeven, A C; Van Dijk, F J H; Sterk, W
2006-01-01
The Western Scheldt Tunneling Project in the Netherlands provided a unique opportunity to evaluate the effects of trimix usage on the health of compressed air workers and the efficiency of the project. Data analysis addressed 318 exposures to compressed air at 3.9-4.4 bar gauge and 52 exposures to trimix (25% oxygen, 25% helium, and 50% nitrogen) at 4.6-4.8 bar gauge. Results revealed three incidents of decompression sickness all of which involved the use of compressed air. During exposure to compressed air, the effects of nitrogen narcosis were manifested in operational errors and increased fatigue among the workers. When using trimix, less effort was required for breathing, and mandatory decompression times for stays of a specific duration and maximum depth were considerably shorter. We conclude that it might be rational--for both medical and operational reasons--to use breathing gases with lower nitrogen fractions (e.g., trimix) for deep-caisson work at pressures exceeding 3 bar gauge, although definitive studies are needed.
LES of Temporally Evolving Mixing Layers by Three High Order Schemes
NASA Astrophysics Data System (ADS)
Yee, H.; Sjögreen, B.; Hadjadj, A.
2011-10-01
The performance of three high order shock-capturing schemes is compared for large eddy simulations (LES) of temporally evolving mixing layers at different convective Mach numbers (Mc) ranging from the quasi-incompressible regime to the highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7), and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (Yee & Sjögreen 2009) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameters agree well with experimental results of Barone et al. (2006) and published direct numerical simulations (DNS) by Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with experimental data and DNS computations.
Bandwidth compression of multispectral satellite imagery
NASA Technical Reports Server (NTRS)
Habibi, A.
1978-01-01
The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.
Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor)
2015-01-01
A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.
Efficient Decoding of Compressed Data.
ERIC Educational Resources Information Center
Bassiouni, Mostafa A.; Mukherjee, Amar
1995-01-01
Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
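To ground the discussion, a Huffman decoding tree turns the prefix-free code table into a binary tree walked bit by bit; the multibit decoding mentioned above replaces this walk with table lookups over several bits at a time. A minimal bit-by-bit sketch, for illustration only:

```python
def build_decoding_tree(codes):
    """Build a binary decoding tree from a code table {symbol: bit string}."""
    tree = {}
    for symbol, bits in codes.items():
        node = tree
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = symbol
    return tree

def decode(bitstring, tree):
    """Decode by walking the tree one bit at a time, emitting at each leaf."""
    out, node = [], tree
    for b in bitstring:
        node = node[b]
        if not isinstance(node, dict):   # reached a leaf
            out.append(node)
            node = tree
    return out

# example with a prefix-free (Huffman-like) code
codes = {"a": "0", "b": "10", "c": "11"}
assert decode("010110", build_decoding_tree(codes)) == ["a", "b", "c", "a"]
```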
Bunch compression efficiency of the femtosecond electron source at Chiang Mai University
NASA Astrophysics Data System (ADS)
Thongbai, C.; Kusoljariyakul, K.; Saisut, J.
2011-07-01
A femtosecond electron source has been developed at the Plasma and Beam Physics Research Facility (PBP), Chiang Mai University (CMU), Thailand. Ultra-short electron bunches can be produced with a bunch compression system consisting of a thermionic cathode RF-gun, an alpha-magnet as a magnetic bunch compressor, and a linear accelerator as a post acceleration section. To obtain effective bunch compression, it is crucial to provide a proper longitudinal phase-space distribution at the gun exit matched to the subsequent beam transport system. Via beam dynamics calculations and experiments, we investigate the bunch compression efficiency for various RF-gun fields. The particle distribution at the RF-gun exit will be tracked numerically through the alpha-magnet and beam transport. Details of the study and results leading to an optimum condition for our system will be presented.
NASA Astrophysics Data System (ADS)
Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico
2018-04-01
Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the parallel computation capabilities of GPUs to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
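The key property being exploited is that multiplication by a circulant matrix is a circular convolution, so it can be carried out with FFTs and the full sensing matrix never needs to be stored. A minimal NumPy sketch of that building block (an illustration, not the paper's GPU implementation):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix defined by its first column c by x
    in O(n log n) via the FFT, instead of forming the n x n matrix."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# check against the explicitly constructed circulant matrix
n = 8
c = np.random.rand(n)
x = np.random.rand(n)
C = np.array([np.roll(c, k) for k in range(n)]).T   # column k is c rolled by k
assert np.allclose(C @ x, circulant_matvec(c, x))
```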
A hybrid data compression approach for online backup service
NASA Astrophysics Data System (ADS)
Wang, Hua; Zhou, Ke; Qin, MingKang
2009-08-01
With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Due to the numerous backup users, how to reduce the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects. For example, data stream compression can only realize intra-file compression and de-duplication only eliminates inter-file redundant data, so the compression efficiency of a single method cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time. The adaptability of using different algorithms in certain situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
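A minimal sketch of how the two levels could fit together, with fingerprint-based de-duplication as the global level and a stream compressor (zlib here, purely as an example) as the block level; the names and the in-memory store are illustrative, not the paper's implementation.

```python
import hashlib
import zlib

def hybrid_backup(chunks, global_store):
    """Two-level compression for a backup service: chunk-level
    de-duplication across users (global level) followed by stream
    compression of chunks not seen before (block level). global_store
    maps chunk fingerprints to compressed payloads and stands in for
    the backup server's index."""
    manifest = []
    for chunk in chunks:
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in global_store:                   # inter-file / inter-user redundancy
            global_store[fp] = zlib.compress(chunk)  # intra-file redundancy
        manifest.append(fp)
    return manifest   # enough, with global_store, to restore the file

store = {}
manifest = hybrid_backup([b"same block", b"same block", b"unique block"], store)
assert len(store) == 2 and len(manifest) == 3   # duplicate chunk stored once
```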
Umeda, Akira; Iwata, Yasushi; Okada, Yasumasa; Shimada, Megumi; Baba, Akiyasu; Minatogawa, Yasuyuki; Yamada, Takayasu; Chino, Masao; Watanabe, Takafumi; Akaishi, Makoto
2004-12-01
The high cost of digital echocardiographs and the large size of data files hinder the adoption of remote diagnosis of digitized echocardiography data. We have developed a low-cost digital filing system for echocardiography data. In this system, data from a conventional analog echocardiograph are captured using a personal computer (PC) equipped with an analog-to-digital converter board. Motion picture data are promptly compressed using a Moving Picture Experts Group (MPEG) 4 codec. The digitized data with preliminary reports obtained in a rural hospital are then sent to cardiologists at distant urban general hospitals via the internet. The cardiologists can evaluate the data using widely available movie-viewing software (Windows Media Player). The diagnostic accuracy of this double-check system was confirmed by comparison with ordinary super-VHS videotapes. We have demonstrated that digitization of echocardiography data from a conventional analog echocardiograph and MPEG 4 compression can be performed using an ordinary PC-based system, and that this system enables highly efficient digital storage and remote diagnosis at low cost.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio lower than 1.72 bits/base.
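For orientation, the baseline that any DNA bit-coding scheme competes against is a fixed 2-bit code per base (exactly 2.0 bits/base); DNABIT Compress goes below this by assigning shorter codes to repeated fragments. The sketch below shows only the fixed 2-bit packing baseline, not the paper's variable bit-code table.

```python
# Fixed 2-bit code per base (baseline only; not the DNABIT Compress code assignment).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    """Pack a DNA string into bytes, 4 bases per byte (the final partial group is left unpadded)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

print(len(pack("ACGTACGTACGTACGT")))  # 16 bases -> 4 bytes, i.e. 2.0 bits/base
```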
NASA Astrophysics Data System (ADS)
Cintra, Renato J.; Bayer, Fábio M.
2017-12-01
In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in the literature in terms of additive complexity. We could not verify the above results and we offer corrections for their work.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.
NASA Astrophysics Data System (ADS)
Dassekpo, Jean-Baptiste Mawulé; Zha, Xiaoxiong; Zhan, Jiapeng; Ning, Jiaqian
Geopolymer is an energy-efficient and sustainable material that is currently used in the construction industry as an alternative to Portland cement. As it is a new material, a specific mix design method is essential, and efforts have been made to develop a mix design procedure focused on achieving better compressive strength and economy. In this paper, the sequential addition of synthesis parameters such as fly ash-sand, alkaline liquids, plasticizer and additional water at well-defined time intervals was investigated. A total of four mix procedures were used to study the compressive performance of fly ash-based geopolymer mortar, and the results of each method were analyzed and discussed. Experimental results show that the sequential addition of sodium hydroxide (NaOH), sodium silicate (Na2SiO3) and plasticizer (PL), followed by adding water (WA), considerably increases the compressive strength of the geopolymer-based mortar. These results clearly demonstrate the highly significant influence of the sequential addition of synthesis parameters on the compressive properties of geopolymer materials, and also provide a new mixing method for the preparation of geopolymer paste, mortar and concrete.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucachick, Glenn; Curran, Scott; Storey, John Morse
Our work explores the volatility of particles produced from two diesel low temperature combustion (LTC) modes proposed for high-efficiency compression ignition engines. It also explores mechanisms of particulate formation and growth upon dilution in the near-tailpipe environment. The number distribution of exhaust particles from low- and mid-load dual-fuel reactivity controlled compression ignition (RCCI) and single-fuel premixed charge compression ignition (PPCI) modes was experimentally studied over a gradient of dilution temperature. Particle volatility of select particle diameters was investigated using volatility tandem differential mobility analysis (V-TDMA). Evaporation rates for exhaust particles were compared with V-TDMA results for candidate pure n-alkanes to identify species with similar volatility characteristics. The results show that LTC particles are mostly comprised of material with volatility similar to engine oil alkanes. V-TDMA results were used as inputs to an aerosol condensation and evaporation model to support the finding that smaller particles in the distribution are comprised of lower volatility material than large particles under primary dilution conditions. Although the results show that saturation levels are high enough to drive condensation of alkanes onto existing particles under the dilution conditions investigated, they are not high enough to account for the formation of new particles. We conclude that observed particles from LTC operation must grow from low concentrations of highly non-volatile compounds present in the exhaust.
Novel modes and adaptive block scanning order for intra prediction in AV1
NASA Astrophysics Data System (ADS)
Hadar, Ofer; Shleifer, Ariel; Mukherjee, Debargha; Joshi, Urvang; Mazar, Itai; Yuzvinsky, Michael; Tavor, Nitzan; Itzhak, Nati; Birman, Raz
2017-09-01
The demand for streaming video content is on the rise and growing exponentially. Network bandwidth is very costly and therefore there is a constant effort to improve video compression rates and enable the sending of reduced data volumes while retaining quality of experience (QoE). One basic feature that utilizes the spatial correlation of pixels for video compression is Intra-Prediction, which determines the codec's compression efficiency. Intra prediction enables significant reduction of the Intra-Frame (I frame) size and, therefore, contributes to efficient exploitation of bandwidth. In this presentation, we propose new Intra-Prediction algorithms that improve the AV1 prediction model and provide better compression ratios. Two types of methods are considered: (1) a new scanning order method that maximizes spatial correlation in order to reduce prediction error; and (2) new Intra-Prediction modes implemented in AV1. Modern video coding standards, including the AV1 codec, utilize fixed scan orders in processing blocks during intra coding. The fixed scan orders typically result in residual blocks with high prediction error, mainly in blocks with edges. This means that the fixed scan orders cannot fully exploit the content-adaptive spatial correlations between adjacent blocks, thus the bitrate after compression tends to be large. To reduce the bitrate induced by inaccurate intra prediction, the proposed approach adaptively chooses the scanning order of blocks according to the criterion of first predicting blocks with the maximum number of surrounding, already Inter-Predicted blocks. Using the modified scanning order method and the new modes has reduced the MSE by up to five times when compared to the conventional TM mode / Raster scan and up to two times when compared to the conventional CALIC mode / Raster scan, depending on the image characteristics (which determine the percentage of blocks predicted with Inter-Prediction, which in turn impacts the efficiency of the new scanning method). For the same cases, the PSNR was shown to improve by up to 7.4 dB and up to 4 dB, respectively. The new modes have yielded a 5% improvement in BD-Rate over traditionally used modes when run on K-Frames, which is expected to yield a 1% overall improvement.
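A minimal sketch of the scanning criterion described above, under the assumption of a simple greedy rule: at each step the encoder picks the not-yet-processed block with the largest number of already available (for example, inter-predicted) 8-connected neighbors, breaking ties in raster order. Grid sizes and the tie-breaking rule are illustrative assumptions.

```python
import numpy as np

def adaptive_scan_order(rows: int, cols: int) -> list:
    """Greedy scan order: always visit the block with the most already-visited 8-connected neighbors."""
    coded = np.zeros((rows, cols), dtype=bool)
    order = []
    for _ in range(rows * cols):
        best, best_score = None, -1
        for r in range(rows):
            for c in range(cols):
                if coded[r, c]:
                    continue
                neigh = coded[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                score = int(neigh.sum())           # number of already-coded surrounding blocks
                if score > best_score:
                    best, best_score = (r, c), score
        coded[best] = True
        order.append(best)
    return order

print(adaptive_scan_order(3, 3)[:4])
```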
Liborg: a lidar-based robot for efficient 3D mapping
NASA Astrophysics Data System (ADS)
Vlaminck, Michiel; Luong, Hiep; Philips, Wilfried
2017-09-01
In this work we present Liborg, a spatial mapping and localization system that is able to acquire 3D models on the fly using data originating from lidar sensors. The novelty of this work lies in the highly efficient way we deal with the tremendous amount of data to guarantee fast execution times while preserving sufficiently high accuracy. The proposed solution relies on a multi-resolution technique built on octrees. The paper discusses and evaluates the main benefits of our approach, including its efficiency regarding building and updating the map and its compactness regarding compressing the map. In addition, the paper presents a working prototype consisting of a robot equipped with a Velodyne Lidar Puck (VLP-16) and controlled by a Raspberry Pi serving as an independent acquisition platform.
GLIDES – Efficient Energy Storage from ORNL
Momen, Ayyoub M.; Abu-Heiba, Ahmad; Odukomaiya, Wale; Akinina, Alla
2018-06-25
The research shown in this video features the GLIDES (Ground-Level Integrated Diverse Energy Storage) project, which has been under development at Oak Ridge National Laboratory (ORNL) since 2013. GLIDES can store energy via combined inputs of electricity and heat, and deliver dispatchable electricity. Supported by ORNL's Laboratory Director's Research and Development (LDRD) fund, this energy storage system is low-cost, and hybridizes compressed air and pumped-hydro approaches to allow for storage of intermittent renewable energy at high efficiency. A U.S. patent application for this novel energy storage concept has been submitted, and research findings suggest it has the potential to be a flexible, low-cost, scalable, high-efficiency option for energy storage, especially useful in residential and commercial buildings.
CFD Analysis and Design Optimization Using Parallel Computers
NASA Technical Reports Server (NTRS)
Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James
1997-01-01
A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.
GLIDES – Efficient Energy Storage from ORNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Momen, Ayyoub M.; Abu-Heiba, Ahmad; Odukomaiya, Wale
2016-03-01
The research shown in this video features the GLIDES (Ground-Level Integrated Diverse Energy Storage) project, which has been under development at Oak Ridge National Laboratory (ORNL) since 2013. GLIDES can store energy via combined inputs of electricity and heat, and deliver dispatchable electricity. Supported by ORNL’s Laboratory Director’s Research and Development (LDRD) fund, this energy storage system is low-cost, and hybridizes compressed air and pumped-hydro approaches to allow for storage of intermittent renewable energy at high efficiency. A U.S. patent application for this novel energy storage concept has been submitted, and research findings suggest it has the potential to be a flexible, low-cost, scalable, high-efficiency option for energy storage, especially useful in residential and commercial buildings.
Fuzzy Relational Compression Applied on Feature Vectors for Infant Cry Recognition
NASA Astrophysics Data System (ADS)
Reyes-Galaviz, Orion Fausto; Reyes-García, Carlos Alberto
Data compression is always advisable when it comes to handling and processing information quickly and efficiently. There are two main problems that need to be solved when handling data: storing information in smaller spaces and processing it in the shortest possible time. In infant cry analysis (ICA), there is always the need to construct large sound repositories from crying babies; these samples have to be analyzed and used to train and test pattern recognition algorithms, making this a time-consuming task when working with uncompressed feature vectors. In this work, we show a simple but efficient method that uses the Fuzzy Relational Product (FRP) to compress the information inside a feature vector, building a compressed matrix that helps us recognize two kinds of pathologies in infants: asphyxia and deafness. We describe the sound analysis, which consists of the extraction of Mel Frequency Cepstral Coefficients that generate vectors which are later compressed using FRP. There is also a description of the infant cry database used in this work, along with the training and testing of a Time Delay Neural Network with the compressed features, which shows a performance of 96.44% with our proposed feature vector compression.
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multi-channel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based bio-signal lossless data compressor. PMID:25237900
A closed-loop compressive-sensing-based neural recording system.
Zhang, Jie; Mitra, Srinjoy; Suo, Yuanming; Cheng, Andrew; Xiong, Tao; Michon, Frederic; Welkenhuysen, Marleen; Kloosterman, Fabian; Chin, Peter S; Hsiao, Steven; Tran, Trac D; Yazicioglu, Firat; Etienne-Cummings, Ralph
2015-06-01
This paper describes a low power closed-loop compressive sensing (CS) based neural recording system. This system provides an efficient method to reduce data transmission bandwidth for implantable neural recording devices. By doing so, this technique reduces a majority of the system power consumption, which is dissipated at the data readout interface. The design of the system is scalable and is a viable option for large scale integration of electrodes or recording sites onto a single device. The entire system consists of an application-specific integrated circuit (ASIC) with 4 recording readout channels with CS circuits, a real-time off-chip CS recovery block, and a recovery quality evaluation block that provides closed-loop feedback to adaptively adjust the compression rate. Since CS performance is strongly signal dependent, the ASIC has been tested in vivo and with standard public neural databases. Implemented using efficient digital circuits, this system is able to achieve >10 times data compression on the entire neural spike band (500 Hz-6 kHz) while consuming only 0.83 uW (0.53 V voltage supply) additional digital power per electrode. When only the spikes are desired, the system is able to further compress the detected spikes by around 16 times. Unlike other similar systems, the characteristic spikes and inter-spike data can both be recovered, which guarantees a >95% spike classification success rate. The compression circuit occupies 0.11 mm(2)/electrode in a 180 nm CMOS process. The complete signal processing circuit consumes <16 uW/electrode. The power and area efficiency demonstrated by the system make it an ideal candidate for integration into large recording arrays containing thousands of electrodes. Closed-loop recording and reconstruction performance evaluation further improves the robustness of the compression method, thus making the system more practical for long term recording.
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown, in many cases, reductions as high as 90% in both storage space and inter-frame delay.
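The encoding pipeline described above can be pictured with a small sketch: quantize each volume, then store a block at a time step only if it differs from the corresponding block at the previous time step. The block size, quantization step, and dictionary layout below are assumptions for illustration; the paper's actual octree merging of similar neighboring voxels is not reproduced.

```python
import numpy as np

BLOCK = 8     # block edge length (assumption)
STEP = 0.05   # quantization step (assumption)

def quantize(vol: np.ndarray) -> np.ndarray:
    return np.round(vol / STEP).astype(np.int16)

def encode_sequence(volumes):
    """Yield, per time step, a dict mapping block origin -> block data, keeping only blocks that changed.

    Assumes all volumes in the sequence share the same shape.
    """
    prev = None
    for vol in volumes:
        q = quantize(vol)
        frame = {}
        for z in range(0, q.shape[0], BLOCK):
            for y in range(0, q.shape[1], BLOCK):
                for x in range(0, q.shape[2], BLOCK):
                    blk = q[z:z+BLOCK, y:y+BLOCK, x:x+BLOCK]
                    if prev is None or not np.array_equal(blk, prev[z:z+BLOCK, y:y+BLOCK, x:x+BLOCK]):
                        frame[(z, y, x)] = blk      # changed block: store it for this time step
        yield frame
        prev = q
```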
Brady, Timothy F; Konkle, Talia; Alvarez, George A
2009-11-01
The information that individuals can hold in working memory is quite limited, but researchers have typically studied this capacity using simple objects or letter strings with no associations between them. However, in the real world there are strong associations and regularities in the input. In an information theoretic sense, regularities introduce redundancies that make the input more compressible. The current study shows that observers can take advantage of these redundancies, enabling them to remember more items in working memory. In 2 experiments, covariance was introduced between colors in a display so that over trials some color pairs were more likely to appear than other color pairs. Observers remembered more items from these displays than from displays where the colors were paired randomly. The improved memory performance cannot be explained by simply guessing the high-probability color pair, suggesting that observers formed more efficient representations to remember more items. Further, as observers learned the regularities, their working memory performance improved in a way that is quantitatively predicted by a Bayesian learning model and optimal encoding scheme. These results suggest that the underlying capacity of the individuals' working memory is unchanged, but the information they have to remember can be encoded in a more compressed fashion. Copyright 2009 APA
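The information-theoretic intuition can be made concrete with a toy calculation (the numbers are assumptions, not the paper's stimuli): when some color pairs are much more likely than others, the entropy per pair drops, so a fixed memory budget measured in bits accommodates more pairs.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

n_colors = 8
uniform = np.full(n_colors**2, 1 / n_colors**2)          # colors paired randomly
biased = np.full(n_colors**2, 0.2 / (n_colors**2 - 8))   # regularities present:
biased[:8] = 0.8 / 8                                      # 8 favored pairs carry 80% of the probability mass

budget_bits = 12.0   # hypothetical fixed memory capacity in bits
for name, dist in [("random pairing", uniform), ("with regularities", biased)]:
    bits_per_pair = entropy(dist)
    print(f"{name}: {bits_per_pair:.2f} bits/pair -> {budget_bits / bits_per_pair:.1f} pairs fit in the budget")
```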
Efficiency loss of thin film Cu(InxGa1-x)Se(S) solar panels by lamination process
NASA Astrophysics Data System (ADS)
Xu, Li
2017-04-01
Efficiency loss of thin film Cu(InxGa1-x)Se(S) (CIGS) solar panels caused by the lamination process has been compromising the final output power in commercial solar modules, but few reports have been published on this issue, as the majority of the investigation is focused on the efficiency at the circuit level, i.e., before the lamination process. In this paper, we studied the effect of the lamination process on the efficiency loss of thin film CIGS solar panels. It was observed that fill factor degradation dominated the efficiency loss, with only small changes in Voc and Jsc. Experiments showed that neither the temperature nor the pressure, nor the two combined in the lamination process, is the root cause of the efficiency loss; instead, the ethylene vinyl acetate (EVA) layer, the encapsulation material which directly contacts the solar cell devices, was the major factor responsible for the efficiency loss. It was found that the gel content of the cured EVA film after lamination was highly correlated to the efficiency loss: the higher the gel content, the higher the efficiency loss. The mismatch of the coefficient of thermal expansion between the EVA film and the CIGS thin film resulted in compressive stress in the device layer after the lamination process. The compressive stress is speculated to affect the lattice defects, but this needs to be confirmed with measurements of capacitance voltage (CV) and drive level capacitance profiling (DLCP). A three-day sun soak was then carried out, and it was observed that the fill factor recovered significantly and so did the efficiency. Experiments also showed that there was no impact of chemical erosion on the front electrode of transparent conductive oxide (TCO) films by chemicals released from the EVA films during lamination.
High efficiency and broadband acoustic diodes
NASA Astrophysics Data System (ADS)
Fu, Congyi; Wang, Bohan; Zhao, Tianfei; Chen, C. Q.
2018-01-01
Energy transmission efficiency and working bandwidth are the two major factors limiting the application of current acoustic diodes (ADs). This letter presents a design of high efficiency and broadband acoustic diodes composed of a nonlinear frequency converter and a linear wave filter. The converter consists of two masses connected by a bilinear spring with asymmetric tension and compression stiffness. The wave filter is a linear mass-spring lattice (sonic crystal). Both numerical simulation and experiment show that the energy transmission efficiency of the acoustic diode can be improved by as much as two orders of magnitude, reaching about 61%. Moreover, the primary working bandwidth of the AD is about twice the cut-off frequency of the sonic crystal filter. The cut-off frequency dependent working band of the AD implies that the developed AD can be scaled up or down from macro-scale to micro- and nano-scale.
About a method for compressing x-ray computed microtomography data
NASA Astrophysics Data System (ADS)
Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš
2018-04-01
The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. Such a technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); images acquired from various types of samples are studied. This study covers parallel beam geometry, but it could be easily extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears as a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
Magnetic Flux Compression Concept for Aerospace Propulsion and Power
NASA Technical Reports Server (NTRS)
Litchford, Ron J.; Robertson, Tony; Hawk, Clark W.; Turner, Matt; Koelfgen, Syri
2000-01-01
The objective of this research is to investigate system level performance and design issues associated with magnetic flux compression devices for aerospace power generation and propulsion. The proposed concept incorporates the principles of magnetic flux compression for direct conversion of nuclear/chemical detonation energy into electrical power. Specifically a magnetic field is compressed between an expanding detonation driven diamagnetic plasma and a stator structure formed from a high temperature superconductor (HTSC). The expanding plasma cloud is entirely confined by the compressed magnetic field at the expense of internal kinetic energy. Electrical power is inductively extracted, and the detonation products are collimated and expelled through a magnetic nozzle. The long-term development of this highly integrated generator/propulsion system opens up revolutionary NASA Mission scenarios for future interplanetary and interstellar spacecraft. The unique features of this concept with respect to future space travel opportunities are as follows: ability to implement high energy density chemical detonations or ICF microfusion bursts as the impulsive diamagnetic plasma source; high power density system characteristics constrain the size, weight, and cost of the vehicle architecture; provides inductive storage pulse power with a very short pulse rise time; multimegajoule energy bursts/terawatt power bursts; compact pulse power driver for low-impedance dense plasma devices; utilization of low cost HTSC material and casting technology to increase magnetic flux conservation and inductive energy storage; improvement in chemical/nuclear-to-electric energy conversion efficiency and the ability to generate significant levels of thrust with very high specific impulse; potential for developing a small, lightweight, low cost, self-excited integrated propulsion and power system suitable for space stations, planetary bases, and interplanetary and interstellar space travel; potential for attaining specific impulses approaching 10 (exp 6) seconds, which would enable missions to the outer planets within ten years and missions at interstellar distances within fifty years.
Zhu, Xuehua; Wang, Yulei; Lu, Zhiwei; Zhang, Hengkang
2015-09-07
A new technique for generating high energy sub-400 picosecond laser pulses is presented in this paper. Temporally super-Gaussian-shaped laser pulses are used as the light source. When the forward pump is reflected by the rear window of the SBS cell, the frequency component that fulfills the Brillouin frequency shift in its sideband spectrum works as a seed and excites SBS, which results in efficient compression of the incident pump pulse. First, the pulse compression characteristics of 20th-order super-Gaussian temporally shaped pulses with 5 ns duration are analyzed theoretically. Then an experiment is carried out with a narrow-band high power Nd:glass laser system at the doubled frequency, with a wavelength of 527 nm, which delivers 5 ns super-Gaussian temporally shaped pulses with single pulse energy over 10 J. FC-40 is used as the active SBS medium for its brief phonon lifetime and high power capacity. In the experiment, the results agree well with the numerical calculations. With a pump energy of 5.36 J, compression of the pulse duration from 5 ns to 360 ps is obtained. The output energy is 3.02 J and the peak power is magnified 8.3 times. Moreover, the compressed pulse shows high stability because it is initiated by the feedback of the rear window rather than by the thermal noise distributed inside the medium. This technique of generating high energy hundred-picosecond laser pulses has a simple structure, is easy to operate, and can be scaled to higher energy pulse compression in the future. Meanwhile, it should also be taken into consideration that in such a nonfocusing scheme, the noise-initiated SBS would increase the distortion of the wavefront of the Stokes beam to some extent, and the pump energy should be controlled below the threshold of noise-initiated SBS.
Image Segmentation, Registration, Compression, and Matching
NASA Technical Reports Server (NTRS)
Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina
2011-01-01
A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel, volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at the rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.
Kinetic efficiency of polar monolithic capillary columns in high-pressure gas chromatography.
Kurganov, A A; Korolev, A A; Shiryaeva, V E; Popova, T P; Kanateva, A Yu
2013-11-08
Poppe plots were used to analyze the kinetic efficiency of monolithic sorbents synthesized in quartz capillaries for use in high-pressure gas chromatography. Values of the theoretical plate time and the maximum number of theoretical plates were found to depend significantly on synthesis parameters such as the relative amount of monomer in the initial polymerization mixture, the temperature and the polymerization time. Poppe plots let one find synthesis conditions suitable either for high-speed separations or for maximal efficiency. It is shown that the construction of kinetic Poppe curves from potential Van Deemter data requires the compressibility of the mobile phase to be taken into consideration in the case of gas chromatography. A model mixture of light hydrocarbons (C1 to C4) was then used to investigate the influence of the carrier gas nature on the kinetic efficiency of polymeric monolithic columns. Minimal values of theoretical plate times were found for CO2 and N2O carrier gases. Copyright © 2013 Elsevier B.V. All rights reserved.
Magnetic-Flux-Compression Cooling Using Superconductors
NASA Technical Reports Server (NTRS)
Strayer, Donald M.; Israelsson, Ulf E.; Elleman, Daniel D.
1989-01-01
Proposed magnetic-flux-compression refrigeration system produces final-stage temperatures below 4.2 K. More efficient than mechanical and sorption refrigerators at temperatures in this range. Weighs less than comparable liquid-helium-cooled superconducting magnetic refrigeration systems operating below 4.2 K. Magnetic-flux-compression cooling stage combines advantages of newly discovered superconductors with those of cooling by magnetization and demagnetization of paramagnetic salts.
Simple numerical method for predicting steady compressible flows
NASA Technical Reports Server (NTRS)
Vonlavante, Ernst; Nelson, N. Duane
1986-01-01
A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. High quality steady state results were obtained in most of the test cases; these required only short computational times due to the relative efficiency of the basic method.
NASA Astrophysics Data System (ADS)
Fan, Shuwei; Bai, Liang; Chen, Nana
2016-08-01
As one of the key elements of high-power laser systems, the pulse-compression multilayer dielectric grating is required to offer a broader bandwidth, higher diffraction efficiency and a higher damage threshold. In this paper, the multilayer dielectric film and the multilayer dielectric gratings (MDG) were designed by the eigen-matrix method and optimized with the help of a genetic algorithm and the rigorous coupled-wave method. The reflectivity was close to 100% and the bandwidth was over 250 nm, twice that of the unoptimized film structure. Simulation software for the standing-wave field distribution within the MDG was developed, the electric field of the MDG was calculated, and the key parameters affecting the electric field distribution were also studied.
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has a significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered as an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data with a lower bit rate without direct communication among sensor nodes. To ensure reliable and high throughput transmission of the aggregated data, we proposed in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed 1/2 RSC codes with the Viterbi algorithm for distributed source coding are able to guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio can be achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated. There is no waste of energy in the transmission; even if there is no correlation among the data, the energy consumed is at the same level as for raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high throughput and low energy consumption data collection for wireless sensor networks.
Hang, X; Greenberg, N L; Shiota, T; Firstenberg, M S; Thomas, J D
2000-01-01
Real-time three-dimensional echocardiography has been introduced to provide improved quantification and description of cardiac function. Data compression is desired to allow efficient storage and improve data transmission. Previous work has suggested improved results utilizing wavelet transforms in the compression of medical data including 2D echocardiogram. Set partitioning in hierarchical trees (SPIHT) was extended to compress volumetric echocardiographic data by modifying the algorithm based on the three-dimensional wavelet packet transform. A compression ratio of at least 40:1 resulted in preserved image quality.
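A rough illustration of why wavelet-based coders such as SPIHT work well on volumetric data: after a 3-D wavelet transform most of the energy sits in a few coefficients, so hard thresholding leaves a sparse representation. The sketch uses a single-level separable Haar-like transform on synthetic data; it is not the paper's 3-D wavelet packet transform or the SPIHT coder itself.

```python
import numpy as np

def haar3d(vol):
    """One level of a separable 3-D Haar-like transform: averages and differences along each axis (even dims assumed)."""
    out = vol.astype(float)
    for axis in range(3):
        a = np.take(out, np.arange(0, out.shape[axis], 2), axis=axis)
        b = np.take(out, np.arange(1, out.shape[axis], 2), axis=axis)
        out = np.concatenate(((a + b) / 2.0, (a - b) / 2.0), axis=axis)
    return out

def fraction_kept(vol, thresh):
    """Fraction of transform coefficients whose magnitude exceeds the threshold."""
    w = haar3d(vol)
    return np.count_nonzero(np.abs(w) > thresh) / w.size

z, y, x = np.mgrid[0:32, 0:32, 0:32]
vol = np.sin(z / 6.0) * np.cos(y / 5.0)          # smooth synthetic "volume"
print(f"coefficients above threshold: {fraction_kept(vol, 0.05):.1%}")
```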
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, Lonnie J.; Mell, Ellen
2015-02-01
AeroValve's innovative pneumatic valve technology recycles compressed air through the valve body with each cycle of the valve, and was reported to reduce compressed air requirements by an average of 25%-30%. This technology collaboration project between ORNL and AeroValve confirms the energy efficiency of the valve's performance. Measuring air consumption per unit of work completed, the AeroValve was as much as 85% better than the commercial Festo valve.
Performance evaluation of the intra compression in the video coding standards
NASA Astrophysics Data System (ADS)
Abramowski, Andrzej
2015-09-01
The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using the BD-PSNR and BD-RATE metrics with the H.265/HEVC results as an anchor. Tests are performed on a set of video sequences composed of sequences gathered by the Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by the Ultra Video Group. According to the results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between efficiency and required encoding time.
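The BD-RATE metric referenced above can be sketched as follows: fit log(bitrate) as a cubic function of PSNR for the anchor and the test codec, integrate both fits over the overlapping PSNR range, and convert the mean log-rate gap into a percentage bit-rate difference. The rate-distortion points in the example are made up.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjøntegaard-delta rate: average bit-rate difference (%) of the test codec vs the anchor."""
    la, lt = np.log10(rate_anchor), np.log10(rate_test)
    pa = np.polyfit(psnr_anchor, la, 3)                  # cubic fit of log-rate vs PSNR (anchor)
    pt = np.polyfit(psnr_test, lt, 3)                    # cubic fit of log-rate vs PSNR (test)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_diff = (it - ia) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0                  # negative -> the test codec saves bit rate

# Hypothetical rate-distortion points (kbps, dB) for an anchor and a test encoder.
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5],
              [ 900, 1800, 3600, 7200], [34.2, 36.8, 39.3, 41.8]))
```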
High-resolution three-dimensional imaging with compress sensing
NASA Astrophysics Data System (ADS)
Wang, Jingyi; Ke, Jun
2016-10-01
LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires an extremely fast data acquisition speed, which makes the manufacture of detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly decrease the data acquisition burden and relax the requirements on the detection device. To use the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source, and a photodetector is used to receive the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM by half a pixel each time, so the position illuminated by the modulated light changes accordingly. We repeat the SLM movement in four different directions and obtain four low-resolution depth maps with different details of the same planar scene. If we use all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene. A linear minimum-mean-square-error algorithm is used for the reconstruction. By combining compressive sensing and multiframe image restoration technology, we reduce the burden of data analysis and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.
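The resolution-doubling step can be illustrated with the simplest possible combination of the four half-pixel-shifted measurements: interleaving them onto a grid with twice the sampling density. This is only a stand-in for the linear minimum-mean-square-error reconstruction mentioned in the abstract; the offsets and array shapes are assumptions.

```python
import numpy as np

def interleave(d00, d01, d10, d11):
    """Combine four HxW depth maps, whose sampling grids are offset by (0,0), (0,.5), (.5,0), (.5,.5)
    pixels, into a single 2H x 2W map by interleaving."""
    h, w = d00.shape
    hi = np.zeros((2 * h, 2 * w), dtype=d00.dtype)
    hi[0::2, 0::2] = d00
    hi[0::2, 1::2] = d01
    hi[1::2, 0::2] = d10
    hi[1::2, 1::2] = d11
    return hi

lows = [np.full((2, 3), k, dtype=float) for k in range(4)]   # four toy low-resolution maps
print(interleave(*lows))
```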
NASA Astrophysics Data System (ADS)
Lin, Yung-Hsu
The goal of this dissertation is to study high pressure streamers in air and apply them to diesel engine technologies. Nanosecond-scale pulsed high voltage discharges in air/fuel mixtures can generate radicals, which in turn have been shown to improve combustion efficiency in gasoline-fueled internal combustion engines. We are exploring the possibility of extending such transient plasma generation, and the expected radical species generation, to the range of pressures encountered in compression-ignition (diesel) engines having compression ratios of ˜20:1, thereby improving lean burning efficiency and extending the range of lean combustion. At the beginning of this dissertation, research into streamer discharges is reviewed. We then conducted experiments on streamer propagation at high pressures, calculated the streamer velocity based on both optical and electrical measurements, and checked the similarity law by analyzing the streamer velocity as a function of the reduced electric field, E/P. Our results showed that the similarity law is invalid, and an empirical scaling factor, E/√P, was obtained and verified by dimensional analysis. The equation derived from the dimensional analysis will be beneficial to proper electrode and pulse generator design for transient plasma assisted internal engine experiments. Along with the high pressure study, we applied this technique to a diesel engine to improve fuel efficiency and exhaust treatment. We observed a small effect of transient plasma on peak pressure, which implied that transient plasma has the capability to improve fuel consumption. In addition, NO can be reduced effectively by the same technique, and the energy cost is 30 eV per NO molecule.
Ahsendorf, Tobias; Müller, Franz-Josef; Topkar, Ved; Gunawardena, Jeremy; Eils, Roland
2017-01-01
The DNA microstates that regulate transcription include sequence-specific transcription factors (TFs), coregulatory complexes, nucleosomes, histone modifications, DNA methylation, and parts of the three-dimensional architecture of genomes, which could create an enormous combinatorial complexity across the genome. However, many proteins and epigenetic marks are known to colocalize, suggesting that the information content encoded in these marks can be compressed. It has so far proved difficult to understand this compression in a systematic and quantitative manner. Here, we show that simple linear models can reliably predict the data generated by the ENCODE and Roadmap Epigenomics consortia. Further, we demonstrate that a small number of marks can predict all other marks with high average correlation across the genome, systematically revealing the substantial information compression that is present in different cell lines. We find that the linear models for activating marks are typically cell line-independent, while those for silencing marks are predominantly cell line-specific. Of particular note, a nuclear receptor corepressor, transducin beta-like 1 X-linked receptor 1 (TBLR1), was highly predictive of other marks in two hematopoietic cell lines. The methodology presented here shows how the potentially vast complexity of TFs, coregulators, and epigenetic marks at eukaryotic genes is highly redundant and that the information present can be compressed onto a much smaller subset of marks. These findings could be used to efficiently characterize cell lines and tissues based on a small number of diagnostic marks and suggest how the DNA microstates, which regulate the expression of individual genes, can be specified. PMID:29216191
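A minimal sketch, on synthetic data rather than ENCODE/Roadmap tracks, of the kind of model the paper relies on: predict one mark's binned genome-wide signal from a handful of other marks by ordinary least squares and report the genome-wide correlation of predicted versus observed signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_predictors = 10_000, 5
X = rng.standard_normal((n_bins, n_predictors))           # signals of the predictor marks per genomic bin
true_w = np.array([0.8, 0.3, -0.5, 0.0, 0.1])             # hypothetical linear relationship
y = X @ true_w + 0.2 * rng.standard_normal(n_bins)        # the mark to be predicted, plus noise

Xb = np.column_stack([X, np.ones(n_bins)])                # add an intercept term
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)                # ordinary least squares fit
r = np.corrcoef(Xb @ w, y)[0, 1]
print(f"genome-wide correlation of predicted vs observed mark: {r:.3f}")
```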
Hydrolysis kinetics of tulip tree xylan in hot compressed water.
Yoon, Junho; Lee, Hun Wook; Sim, Seungjae; Myint, Aye Aye; Park, Hee Jeong; Lee, Youn-Woo
2016-08-01
Lignocellulosic biomass, a promising renewable resource, can be converted into numerous valuable chemicals after enzymatic saccharification. However, the efficacy of enzymatic saccharification of lignocellulosic biomass is low; therefore, pretreatment is necessary to improve the efficiency. Here, a kinetic analysis was carried out on xylan hydrolysis after hot compressed water pretreatment of the lignocellulosic biomass conducted at 180-220°C for 5-30 min, and on the subsequent xylooligosaccharide hydrolysis. The weight ratio of fast-reacting xylan to slow-reacting xylan was 5.25 in tulip tree. Our kinetic results were applied to three different reaction systems to improve the pretreatment efficiency, and we found that a semi-continuous reactor is promising. Lower reaction temperatures and shorter space times in the semi-continuous reactor are recommended for improving xylan conversion and xylooligosaccharide yield. In the theoretical calculation, 95% xylooligosaccharide yield and xylan conversion were achieved simultaneously with a high selectivity (desired product/undesired product) of 100 or more. Copyright © 2016. Published by Elsevier Ltd.
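A parallel first-order scheme is a common way to formalize the fast/slow xylan description above; the sketch below uses that structure with invented rate constants (only the 5.25 fast-to-slow weight ratio is taken from the abstract), so it illustrates the form of the model rather than the paper's fitted kinetics.

```python
# Parallel first-order model: fast and slow xylan fractions hydrolyze to xylooligosaccharides,
# which in turn degrade to undesired products. Rate constants are assumptions.
k_fast, k_slow, k_deg = 0.30, 0.03, 0.05      # 1/min (illustrative values)
xf0, xs0 = 5.25 / 6.25, 1.0 / 6.25            # fast:slow weight ratio of 5.25, total xylan normalized to 1

def simulate(t_end=30.0, dt=0.01):
    xf, xs, xo = xf0, xs0, 0.0
    for _ in range(int(t_end / dt)):          # simple forward-Euler integration
        r_f, r_s, r_d = k_fast * xf, k_slow * xs, k_deg * xo
        xf -= r_f * dt
        xs -= r_s * dt
        xo += (r_f + r_s - r_d) * dt
    conversion = 1.0 - (xf + xs)
    return conversion, xo

conv, xos = simulate()
print(f"xylan conversion {conv:.2f}, xylooligosaccharide yield {xos:.2f}")
```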
A Formal Messaging Notation for Alaskan Aviation Data
NASA Technical Reports Server (NTRS)
Rios, Joseph L.
2015-01-01
Data exchange is an increasingly important aspect of the National Airspace System. While many data communication channels have become more capable of sending and receiving data at higher throughput rates, there is still a need to use communication channels efficiently with limited throughput. The limitation can be based on technological issues, financial considerations, or both. This paper provides a complete description of several important aviation weather data in Abstract Syntax Notation format. By doing so, data providers can take advantage of Abstract Syntax Notation's ability to encode data in a highly compressed format. When data such as pilot weather reports, surface weather observations, and various weather predictions are compressed in such a manner, it allows for the efficient use of throughput-limited communication channels. This paper provides details on the Abstract Syntax Notation One (ASN.1) implementation for Alaskan aviation data, and demonstrates its use on real-world aviation weather data samples as Alaska has sparse terrestrial data infrastructure and data are often sent via relatively costly satellite channels.
Recent Efforts and Experiments in the Construction of Aviation Engines
NASA Technical Reports Server (NTRS)
SCHWAGER
1920-01-01
It became evident during World War I that ever-increasing demands were being placed on the mean power of aircraft engines as a result of the increased on board equipment and the demands of aerial combat. The need was for increased climbing efficiency and climbing speed. The response to these demands has been in terms of lightweight construction and the adaptation of the aircraft engine to the requirements of its use. Discussed here are specific efforts to increase flying efficiency, such as reduction of the number of revolutions of the propeller from 1400 to about 900 r.p.m. through the use of a reduction gear, increasing piston velocity, locating two crankshafts in one gear box, and using the two-cycle stroke. Also discussed are improvements in the transformation of fuel energy into engine power, the raising of compression ratios, the use of super-compression with carburetors constructed for high altitudes, the use of turbo-compressors, rotary engines, and the use of variable pitch propellers.
Quantization selection in the high-throughput H.264/AVC encoder based on the RD
NASA Astrophysics Data System (ADS)
Pastuszak, Grzegorz
2013-10-01
In a hardware video encoder, the quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in terms of compression efficiency are achievable for Intra coding.
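The selection rule described above reduces to minimizing a Lagrangian cost J = D + λR over the candidate quantization parameters. The sketch below shows that rule on hypothetical per-macroblock measurements; the candidate QPs, their distortion/rate values, and the λ value are assumptions, not the H.264/AVC reference model.

```python
def select_qp(candidates, lam):
    """candidates: iterable of (qp, distortion, rate_bits); return the QP minimizing J = D + lam * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

# Hypothetical measurements for one macroblock: (QP, distortion, rate in bits).
cands = [(22, 120.0, 950), (26, 310.0, 610), (30, 720.0, 380), (34, 1500.0, 240)]
print(select_qp(cands, lam=1.2))   # picks the QP with the lowest Lagrangian cost
```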
NASA Astrophysics Data System (ADS)
Chen, Xianhe; Xia, Zhixun; Huang, Liya; Hu, Jianxin
2017-05-01
The working cycle of a novel underwater propulsion system based on aluminium combustion with water is researched in order to evaluate its best performance. The system exploits the exothermic reaction between aluminium and water, which produces a high-temperature, high-pressure steam and hydrogen mixture that can be used to drive a turbine to generate power. Several new system configurations corresponding to different working cycles are investigated, and their performance parameters in terms of net power, energy density and global efficiency are discussed. The results of the system simulation show that, using the recirculated steam rather than hydrogen as the carrier gas, the net power, energy density and efficiency of the system are greatly increased, while the performance is similar whether adiabatic or isothermal compression is used. If an evaporator component is added to the system in order to make full use of the heat of the solid products, the system performance is further improved.
NASA Astrophysics Data System (ADS)
Yoshikawa, Choiku; Hattori, Kazuhiro; Jeong, Jongsoo; Saito, Kiyoshi; Kawai, Sunao
An ejector can transform the expansion energy of the driving flow into pressure build-up energy of the suction flow. Therefore, by utilizing an ejector instead of the expansion valve in the vapor compression cycle, the performance of the cycle can be greatly improved. Until now, the performance of the vapor compression cycle with an ejector has not been examined sufficiently. Therefore, this paper constructs a simulation model of the vapor compression cycle with the ejector and investigates the performance of that cycle by simulation. The working fluids are ammonia and CO2. As a result, for an ejector efficiency of 90%, the COP of the vapor compression cycle using ammonia with the ejector is 5% higher than that of the conventional cycle, and the COP using CO2 with the ejector is 22% higher than that of the conventional cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, B.B.
The object of the study reported in this paper was to investigate the possibility of using blends of kerosene with petrol in a gasoline engine without much loss in performance. The authors carried out experiments on a four-stroke-cycle Briggs and Stratton S.I. engine using five blends of kerosene with petrol at compression ratios of 5.3 and 7.47 to 1, with and without surge chambers, at a constant engine speed of 1500 rev/min, with the following conclusions: 1. At part load and the lower compression ratio, the brake thermal efficiency improves with the percentage increase of kerosene, but at the higher compression ratio it improves only up to a 50% kerosene blend with petrol. 2. The knock-free maximum bhp is reduced with (a) the percentage increase of kerosene and (b) the increase of compression ratio. 3. Use of a surge chamber increases the knock-free maximum bhp and reduces the brake thermal efficiency.
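As a side note on why the higher compression ratio is expected to help thermal efficiency in the knock-free range, the ideal air-standard Otto-cycle relation η = 1 − r^(1−γ) can be evaluated at the two ratios used in the study (γ = 1.4 assumed; the real engine's measured efficiencies will of course differ).

```python
# Ideal air-standard Otto-cycle efficiency at the two compression ratios from the study.
gamma = 1.4   # ratio of specific heats for air (assumption)
for r in (5.3, 7.47):
    eta = 1.0 - r ** (1.0 - gamma)
    print(f"compression ratio {r}: ideal Otto efficiency {eta:.1%}")
```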
Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine
NASA Astrophysics Data System (ADS)
Moura, A. F.; Wheatley, V.; Jahn, I.
2018-07-01
The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further increasing combustion efficiency.
Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography.
Huy, Tran Quang; Tue, Huynh Huu; Long, Ton That; Duc-Tan, Tran
2017-05-25
A well-known diagnostic imaging modality, termed ultrasound tomography, was quickly developed for the detection of very small tumors whose sizes are smaller than the wavelength of the incident pressure wave, without the ionizing radiation of the current gold standard, X-ray mammography. Based on the inverse scattering technique, ultrasound tomography uses material properties such as sound contrast or attenuation to detect small targets. The Distorted Born Iterative Method (DBIM), based on the first-order Born approximation, is an efficient diffraction tomography approach. One of the challenges for a high-quality reconstruction is to obtain many measurements from the available transmitters and receivers. Given that biomedical images are often sparse, the compressed sensing (CS) technique can therefore be applied effectively to ultrasound tomography, reducing the number of transmitters and receivers while maintaining a high quality of image reconstruction. Several existing works on CS place the measurement locations randomly. However, this random configuration is relatively difficult to implement in practice. Instead, we should adopt a methodology that determines the locations of the measurement devices in a deterministic way. For this, we develop the novel DCS-DBIM algorithm, which is highly applicable in practice. It is inspired by the deterministic compressed sensing (DCS) technique introduced by the authors a few years ago, with the image reconstruction process implemented using l1 regularization. Simulation results of the proposed approach demonstrate its high performance: the normalized error is reduced by approximately 90% compared to the conventional approach, and the new approach can halve the number of measurements while using only two iterations. The universal image quality index is also evaluated in order to demonstrate the efficiency of the proposed approach. Numerical simulation results indicate that the CS and DCS techniques offer equivalent image reconstruction quality, with DCS allowing simpler practical implementation. It would be a very promising approach in practical applications of modern biomedical imaging technology.
NASA Technical Reports Server (NTRS)
Bardina, J. E.
1994-01-01
A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high-speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure of primitive variables. This method combines the best features of data management and computational efficiency of space-marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise-separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock-capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results of adiabatic and cooled flat-plate flows, compression-corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency at least one order of magnitude better than that of the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.
Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y
2018-05-01
A novel fabrication method by combining high-speed single-point diamond milling and precision compression molding processes for fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions based on the combination of rotational and translational motions of both the high-speed spindle and linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization arithmetic based on minimum-area fabrication was introduced to the machining process to further increase the machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven to be accurate in detecting an infrared wavefront by both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economic and precision optical fabrication technology for high-volume production of infrared optics.
An improvement analysis on video compression using file segmentation
NASA Astrophysics Data System (ADS)
Sharma, Shubhankar; Singh, K. John; Priya, M.
2017-11-01
Over the past two decades the extreme evolution of the Internet has led to a massive rise in video technology and, in particular, in video consumption over the Internet, which now constitutes the bulk of data traffic in general. Because video accounts for so much data on the World Wide Web, reducing the bandwidth it consumes lightens the burden on the Internet and lets users access video data more easily. For this, many video codecs have been developed, such as HEVC/H.265 and VP9, yet even with such codecs there remains the dilemma of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression and in video applications, e.g., ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques through subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents the experimental idea of dividing a video file into several segments for compression and reassembling them afterwards, to improve the efficiency of video compression on the web as well as in offline mode.
System considerations for efficient communication and storage of MSTI image data
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1994-01-01
The Ballistic Missile Defense Organization has been developing the capability to evaluate one or more high-rate sensor/hardware combinations by incorporating them as payloads on a series of Miniature Seeker Technology Insertion (MSTI) flights. This publication represents the final report of a 1993 study to analyze the potential impact of data compression and of related communication system technologies on post-MSTI 3 flights. Lossless compression is considered alone and in conjunction with various spatial editing modes. Additionally, JPEG and fractal algorithms are examined in order to bound the potential gains from the use of lossy compression, but lossless compression is clearly shown to better fit the goals of the MSTI investigations. Lossless compression factors of between 2:1 and 6:1 would provide significant benefits to both on-board mass memory and the downlink. For on-board mass memory, the savings could range from $5 million to $9 million. Such benefits should be possible by direct application of recently developed NASA VLSI microcircuits. It is shown that further downlink enhancements of 2:1 to 3:1 should be feasible through practical modifications to the existing modulation system and incorporation of Reed-Solomon channel coding. The latter enhancement could also be achieved by applying recently developed VLSI microcircuits.
NASA Astrophysics Data System (ADS)
Korobeinikov, Igor V.; Morozova, Natalia V.; Lukyanova, Lidia N.; Usov, Oleg A.; Kulbachinskii, Vladimir A.; Shchennikov, Vladimir V.; Ovsyannikov, Sergey V.
2018-01-01
We propose a model of a thermoelectric module in which the performance parameters can be controlled by applied tuneable stress. This model includes a miniature high-pressure anvil-type cell and a specially designed thermoelectric module that is compressed between two opposite anvils. High thermally conductive high-pressure anvils that can be made, for instance, of sintered technical diamonds with enhanced thermal conductivity, would enable efficient heat absorption or rejection from a thermoelectric module. Using a high-pressure cell as a prototype of a stress-controlled thermoelectric converter, we investigated the effect of applied high pressure on the power factors of several single-crystalline thermoelectrics, including binary p-type Bi2Te3, and multi-component (Bi,Sb)2Te3 and Bi2(Te,Se,S)3 solid solutions. We found that a moderate applied pressure of a few GPa significantly enhances the power factors of some of these thermoelectrics. Thus, they might be more efficiently utilized in stress-controlled thermoelectric modules. In the example of one of these thermoelectrics crystallizing in the same rhombohedral structure, we examined the crystal lattice stability under moderate high pressures. We uncovered an abnormal compression of the rhombohedral lattice of (Bi0.25,Sb0.75)2Te3 along the c-axis in a hexagonal unit cell, and detected two phase transitions to the C2/m and C2/c monoclinic structures above 9.5 and 18 GPa, respectively.
A privacy-preserving solution for compressed storage and selective retrieval of genomic data.
Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre
2016-12-01
In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.
Observation sequences and onboard data processing of Planet-C
NASA Astrophysics Data System (ADS)
Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.
Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras in the UV-IR region to investigate the atmospheric dynamics of Venus: IR1 (IR 1-micrometer camera), IR2 (IR 2-micrometer camera), UVI (UV Imager), LIR (long-IR camera), and LAC (Lightning and Airglow Camera). During the 30 hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition with the 4 cameras IR1, IR2, UVI, and LIR (20 min in 2 hrs); (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be reconciled with command, telemetry downlink, and thermal/power conditions. To maximize the science data downlink, the data must be well compressed, and the compression efficiency and image quality are therefore of significant scientific importance in the VCO program. Images of the 4 cameras (IR1, IR2, and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K is selected because it is (a) free of block noise, (b) efficient, (c) both reversible and irreversible, (d) patent/royalty free, and (e) already implemented as academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 to 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is at the concept design stage. It is concluded that the J2K data compression logic circuits using space
A privacy-preserving solution for compressed storage and selective retrieval of genomic data
Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S.; Molyneaux, Adam; Xu, Zhenyu; Hubaux, Jean-Pierre
2016-01-01
In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients’ complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. PMID:27789525
Wang, Xingfu; Peng, Wenbo; Yu, Ruomeng; Zou, Haiyang; Dai, Yejing; Zi, Yunlong; Wu, Changsheng; Li, Shuti; Wang, Zhong Lin
2017-06-14
The achievement of p-n homojunction GaN enabled the birth of III-nitride light emitters. Owing to the wurtzite structure of GaN, piezoelectric polarization charges present at the interface can effectively control/tune the optoelectronic behavior of local charge carriers (i.e., the piezo-phototronic effect). Here, we demonstrate significantly enhanced light-output efficiency and suppressed efficiency droop in a GaN microwire (MW)-based p-n junction ultraviolet light-emitting diode (UV LED) via the piezo-phototronic effect. By applying a -0.12% static compressive strain perpendicular to the p-n junction interface, the relative external quantum efficiency of the LED is enhanced by over 600%. Furthermore, efficiency droop is markedly reduced from 46.6% to 7.5%, and the corresponding droop onset current density shifts from 10 to 26.7 A cm -2 . Enhanced electron confinement and improved hole injection efficiency by the piezo-phototronic effect are revealed and theoretically confirmed as the physical mechanisms. This study offers an unconventional path to develop high-efficiency, high-brightness, and high-power III-nitride light sources.
Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min
2016-04-13
In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD through decomposing principal components instead of original grid-wise time series to speed up computation of MEEMD. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression ratio of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
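The PCA/EOF step behind the fast MEEMD is, at heart, a truncated singular value decomposition of the space-time data matrix. The sketch below is not the authors' code; the array sizes, the number of retained modes, and the random stand-in field are illustrative assumptions. On a genuinely coherent climate field, unlike the noise used here, a few dozen modes typically retain most of the variance, which is what makes the compression lossy yet useful.

```python
# Illustrative sketch (not the authors' code): lossy compression of a
# spatio-temporal field by truncating its principal components (EOFs),
# as in the fast-MEEMD strategy.  Sizes and mode count are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_space = 600, 4000                  # e.g. monthly samples x grid points
X = rng.standard_normal((n_time, n_space))   # stand-in for a climate anomaly field

# Centre in time and take the truncated SVD: X ~ U_k S_k V_k^T
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 40                                       # retained EOFs/PCs (assumed)
pcs, eofs = U[:, :k] * s[:k], Vt[:k]         # principal components, spatial patterns

# "Compressed" representation: k PCs + k EOFs + the time mean
stored = pcs.size + eofs.size + n_space
ratio = X.size / stored
X_rec = pcs @ eofs + X.mean(axis=0)          # lossy reconstruction
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(f"compression ratio ~ {ratio:.1f}, relative error ~ {err:.3f}")
```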
Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows
NASA Technical Reports Server (NTRS)
Baker, A. J.; Freels, J. D.
1989-01-01
A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.
Thermal and Structural Analysis of Micro-Fabricated Involute Regenerators
NASA Astrophysics Data System (ADS)
Qiu, Songgang; Augenblick, Jack E.
2005-02-01
Long-life, high-efficiency power generators based on free-piston Stirling engines are an energy conversion solution for future space power generation and commercial applications. As part of the efforts to further improve Stirling engine efficiency and reliability, a micro-fabricated, involute regenerator structure is proposed by a Cleveland State University-led regenerator research team. This paper reports on thermal and structural analyses of the involute regenerator to demonstrate the feasibility of the proposed regenerator. The results indicate that the involute regenerator has extremely high axial stiffness to sustain reasonable axial compression forces with negligible lateral deformation. The relatively low radial stiffness may impose some challenges to the appropriate installation of the involute regenerators.
The Role of Efficient XML Interchange (EXI) in Navy Wide-Area Network (WAN) Optimization
2015-03-01
compress, and re-encrypt data to continue providing optimization through compression; however, that capability requires careful consideration of... optimization of encrypted data requires a careful analysis and comparison of performance improvements and IA vulnerabilities. It is important... Contained EXI capitalizes on multiple techniques to improve compression, and they vary depending on a set of EXI options passed to the codec
Wan, Ying
2018-05-01
The aim of the study was to observe and analyze the clinical effect of phellodendron wet compress in treating phlebitis caused by infusion. The research subjects were 600 cases of phlebitis caused by infusion, all of which were treated in our hospital from June 2013 to June 2016. All patients were entitled to the right to be informed. They were randomly divided into a research group and a control group. Patients in the control group were treated with a magnesium sulfate solution wet compress, while patients in the research group were treated with a phellodendron wet compress. The effects in the two groups were observed and compared. Compared with the control group, the research group had a better overall treatment efficiency (p<0.05), a shorter average onset of action (p<0.05), and less time to relieve redness, swelling, and pain (p<0.05). Phellodendron wet compress shows a beneficial effect in treating phlebitis caused by infusion. It not only markedly shortens the onset of action, but also raises the overall treatment efficiency, which helps patients recover.
Cluster compression algorithm: A joint clustering/data compression concept
NASA Technical Reports Server (NTRS)
Hilbert, E. E.
1977-01-01
The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.
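As a rough illustration of the cluster-compression idea (clustering to extract features, a feature map of cluster indices per pixel, then source coding of that map), here is a minimal sketch. It is not the original CCA; the image size, band count, cluster count, and the plain k-means clustering are all assumptions made for the example.

```python
# Illustrative sketch of the cluster-compression idea (not the original CCA):
# cluster spectral vectors, replace each pixel by its cluster index (the
# "feature map"), and estimate the coded size of that map.
import numpy as np

rng = np.random.default_rng(1)
H, W, B, K = 64, 64, 4, 8                  # rows, cols, spectral bands, clusters
img = rng.integers(0, 256, size=(H, W, B)).astype(float)
pixels = img.reshape(-1, B)

# Plain k-means (Lloyd's algorithm) as a stand-in for the spatially local
# clustering used by CCA.
centers = pixels[rng.choice(len(pixels), K, replace=False)]
for _ in range(20):
    d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    for k in range(K):
        if np.any(labels == k):
            centers[k] = pixels[labels == k].mean(axis=0)

# Feature map = cluster index per pixel; its zero-order entropy is a proxy
# for what a source coder (e.g. Huffman) could achieve.
counts = np.bincount(labels, minlength=K)
p = counts[counts > 0] / labels.size
entropy_bits = -(p * np.log2(p)).sum()
raw_bits = 8.0 * B                          # original bits per pixel
print(f"~{entropy_bits:.2f} bits/pixel for the feature map vs {raw_bits:.0f} raw")
```

The zero-order entropy of the feature map is only a proxy for what a real source coder would achieve, but it shows where the rate reduction comes from.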
Evaluation of porous ceramic as microbial carrier of biofilter to remove toluene vapor.
Lim, J S; Park, S J; Koo, J K; Park, H
2001-01-01
Three kinds of porous ceramic microbe media are fabricated from fly ash, diatomite, and a mixture of fly ash and diatomite powders. The water holding capacity, density, porosity, pore size and distribution, compressive strength, and micro-structure of each of the fabricated media are measured and compared. The fly ash and diatomite mixture ceramic is evaluated as the best biofilter medium among the three media because of its high compressive strength. It is selected as the experimental biofilter medium and inoculated with thickened activated sludge. The laboratory-scale biofilter was operated for 42 days under various experimental conditions, varying the inlet toluene concentration and the flow rate of the contaminated air stream. The experimental results show that the removal efficiency reaches up to 96.6% after 4 days from start-up. Nutrient limitation is considered a major factor limiting biofilter efficiency. Biofilter efficiency decreases substantially at the build-up of backpressure, which is largely due to the accumulation of excess VSS within the media. Periodic backwashing of the biofilter is necessary to remove excess biomass and attain stable long-term high removal efficiency. The bed needs to be backwashed when the overall pressure drop becomes greater than 460.6 Pa at a space velocity of 100 h-1. A maximum toluene elimination rate of 444.85 g m-3 hr-1 was achieved by the mixture ceramic biofilter, which is higher than previously reported values. This indicates that the fly ash and diatomite mixture ceramic biofilter can be effectively applied for removing toluene vapor.
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node
Cai, Zhipeng; Zou, Fumin; Zhang, Xiangyu
2018-01-01
Energy efficiency is still the main obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with the challenge in wireless ECG application. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node under the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption. PMID:29599945
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.
Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing
2018-01-01
Energy efficiency is still the main obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with the challenge in wireless ECG application. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node under the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
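To make the sampling-plus-recovery idea concrete, the following sketch compresses a synthetic signal with a sparse binary matrix and recovers it over a DCT basis using plain ISTA (l1) iterations. It is not the authors' BSBL pipeline; the frame length, number of measurements, regularization weight, and the sinusoidal stand-in for an ECG frame are assumptions for illustration only.

```python
# Minimal compressed-sensing sketch in the spirit of the ECG node (not the
# authors' BSBL pipeline): sparse binary sampling + DCT-basis l1 recovery.
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(2)
n, m = 256, 128                        # frame length and number of measurements
t = np.arange(n) / n
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)   # ECG stand-in

# Sparse binary sensing matrix: a few ones per row
Phi = np.zeros((m, n))
for i in range(m):
    Phi[i, rng.choice(n, 4, replace=False)] = 1.0
y = Phi @ x                            # what the node would transmit

# Recover DCT coefficients theta with ISTA, where x = idct(theta)
A = Phi @ idct(np.eye(n), norm="ortho", axis=0)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam, theta = 0.1, np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ theta - y)
    z = theta - grad / L
    theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
x_rec = idct(theta, norm="ortho")
prd = 100 * np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"PRD = {prd:.2f}%")
```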
Compressive strength of delaminated aerospace composites.
Butler, Richard; Rhead, Andrew T; Liu, Wenli; Kontis, Nikolaos
2012-04-28
An efficient analytical model is described which predicts the value of compressive strain below which buckle-driven propagation of delaminations in aerospace composites will not occur. An extension of this efficient strip model which accounts for propagation transverse to the direction of applied compression is derived. In order to provide validation for the strip model a number of laminates were artificially delaminated producing a range of thin anisotropic sub-laminates made up of 0°, ±45° and 90° plies that displayed varied buckling and delamination propagation phenomena. These laminates were subsequently subject to experimental compression testing and nonlinear finite element analysis (FEA) using cohesive elements. Comparison of strip model results with those from experiments indicates that the model can conservatively predict the strain at which propagation occurs to within 10 per cent of experimental values provided (i) the thin-film assumption made in the modelling methodology holds and (ii) full elastic coupling effects do not play a significant role in the post-buckling of the sub-laminate. With such provision, the model was more accurate and produced fewer non-conservative results than FEA. The accuracy and efficiency of the model make it well suited to application in optimum ply-stacking algorithms to maximize laminate strength.
NASA Astrophysics Data System (ADS)
Vuilleumier, David Malcolm
The detailed study of chemical kinetics in engines has become required to further advance engine efficiency while simultaneously lowering engine emissions. This push for higher efficiency engines is not caused by a lack of oil, but by efforts to reduce anthropogenic carbon dioxide emissions, that cause global warming. To operate in more efficient manners while reducing traditional pollutant emissions, modern internal combustion piston engines are forced to operate in regimes in which combustion is no longer fully transport limited, and instead is at least partially governed by chemical kinetics of combusting mixtures. Kinetically-controlled combustion allows the operation of piston engines at high compression ratios, with partially-premixed dilute charges; these operating conditions simultaneously provide high thermodynamic efficiency and low pollutant formation. The investigations presented in this dissertation study the effect of ethanol addition on the low-temperature chemistry of gasoline type fuels in engines. These investigations are carried out both in a simplified, fundamental engine experiment, named Homogeneous Charge Compression Ignition, as well as in more applied engine systems, named Gasoline Compression Ignition engines and Partial Fuel Stratification engines. These experimental investigations, and the accompanying modeling work, show that ethanol is an effective scavenger of radicals at low temperatures, and this inhibits the low temperature pathways of gasoline oxidation. Further, the investigations measure the sensitivity of gasoline auto-ignition to system pressure at conditions that are relevant to modern engines. It is shown that at pressures above 40 bar and temperatures below 850 Kelvin, gasoline begins to exhibit Low-Temperature Heat Release. However, the addition of 20% ethanol raises the pressure requirement to 60 bar, while the temperature requirement remains unchanged. These findings have major implications for a range of modern engines. Low-Temperature Heat Release significantly enhances the auto-ignition process, which limits the conditions under which advanced combustion strategies may operate. As these advanced combustion strategies are required to meet emissions and fuel-economy regulations, the findings of this dissertation may benefit and be incorporated into future engine design toolkits, such as detailed chemical kinetic mechanisms.
Understanding turbulence in compressing plasmas and its exploitation or prevention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidovits, Seth
Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic, compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a lower bound on the growth of turbulence in molecular clouds. This bound raises questions about the level of dissipation in existing molecular cloud models. Finally, the observations originally motivating the thesis, Z-pinch measurements suggesting dominant turbulent energy, are reexamined by self-consistently accounting for the impact of the turbulence on the spectroscopic analysis. This is found to strengthen the evidence that the multiple observations describe a highly turbulent plasma state.
Understanding Turbulence in Compressing Plasmas and Its Exploitation or Prevention
NASA Astrophysics Data System (ADS)
Davidovits, Seth
Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic, compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a lower bound on the growth of turbulence in molecular clouds. This bound raises questions about the level of dissipation in existing molecular cloud models. Finally, the observations originally motivating the thesis, Z-pinch measurements suggesting dominant turbulent energy, are reexamined by self-consistently accounting for the impact of the turbulence on the spectroscopic analysis. This is found to strengthen the evidence that the multiple observations describe a highly turbulent plasma state.
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is assessing the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop or loss of performance of the near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compressions are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core algorithm baseline in airborne mine and minefield detection on different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance results between compressed and uncompressed imagery for various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
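For context, the RX detector named above is essentially a Mahalanobis-distance test of each pixel against the background statistics. The sketch below is a generic global-RX illustration on synthetic data, not NVESD's implementation; the cube size and the planted anomaly are arbitrary.

```python
# Illustrative global RX (Reed-Xiaoli) anomaly detector: score each pixel by
# its Mahalanobis distance from the scene-wide background statistics.
import numpy as np

rng = np.random.default_rng(3)
H, W, B = 100, 100, 6                        # rows, cols, spectral bands
cube = rng.normal(size=(H, W, B))
cube[50, 50] += 5.0                          # plant one anomalous pixel

X = cube.reshape(-1, B)
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(B))   # regularize for stability

diff = X - mu
rx_score = np.einsum("ij,jk,ik->i", diff, cov_inv, diff).reshape(H, W)
print("most anomalous pixel:", np.unravel_index(rx_score.argmax(), rx_score.shape))
```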
Experiences in solar cooling systems
NASA Astrophysics Data System (ADS)
Ward, D. S.
The results of performance evaluations for nine solar cooling systems are presented, and reasons for low or high net energy balances are discussed. Six of the nine systems are noted to have performed unfavorably compared to standard cooling systems due to thermal storage losses, excessive system electrical demands, inappropriate control strategies, poor system-to-load matching, and poor chiller performance. A reduction in heat losses in one residential unit increased the total system efficiency by 2.5%, while eliminating heat losses to the building interior increased the efficiency by 3.3%. The best system incorporated a lithium bromide absorption chiller and a Rankine-cycle compression unit for a commercial application. Improvements in the cooling tower and fan configurations to increase the solar cooling system efficiency are indicated. Best performances are expected to occur in climates inducing high annual cooling loads.
Development of self-compressing BLSOM for comprehensive analysis of big sequence data.
Kikuchi, Akihito; Ikemura, Toshimichi; Abe, Takashi
2015-01-01
With the remarkable increase in genomic sequence data from various organisms, novel tools are needed for comprehensive analyses of available big sequence data. We previously developed a Batch-Learning Self-Organizing Map (BLSOM), which can cluster genomic fragment sequences according to phylotype solely dependent on oligonucleotide composition and applied to genome and metagenomic studies. BLSOM is suitable for high-performance parallel-computing and can analyze big data simultaneously, but a large-scale BLSOM needs a large computational resource. We have developed Self-Compressing BLSOM (SC-BLSOM) for reduction of computation time, which allows us to carry out comprehensive analysis of big sequence data without the use of high-performance supercomputers. The strategy of SC-BLSOM is to hierarchically construct BLSOMs according to data class, such as phylotype. The first-layer BLSOM was constructed with each of the divided input data pieces that represents the data subclass, such as phylotype division, resulting in compression of the number of data pieces. The second BLSOM was constructed with a total of weight vectors obtained in the first-layer BLSOMs. We compared SC-BLSOM with the conventional BLSOM by analyzing bacterial genome sequences. SC-BLSOM could be constructed faster than BLSOM and cluster the sequences according to phylotype with high accuracy, showing the method's suitability for efficient knowledge discovery from big sequence data.
An Efficient Image Compressor for Charge Coupled Devices Camera
Li, Jin; Xing, Fei; You, Zheng
2014-01-01
Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the later coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a pair of posttransform bases is applied to the DWT coefficients. The pair of bases comprises the DCT basis and the Hadamard basis, which can be used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, its performance is comparable to that of JPEG2000 at low bit rates, and it does not have the excessively high implementation complexity of JPEG2000. PMID:25114977
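As background for the transform stage such coders build on, the sketch below applies a one-level 2-D Haar DWT and keeps only the significant coefficients. It is deliberately simpler than the PT-CS pipeline described above (no post-transform, no sensing matrix); the test image and threshold are arbitrary.

```python
# Simple one-level 2-D Haar DWT followed by hard thresholding, as a toy
# illustration of a transform-coding front end (not the PT-CS pipeline).
import numpy as np

def haar2d(a):
    """One-level 2-D Haar transform of an array with even dimensions."""
    # transform rows
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    a = np.hstack([lo, hi])
    # transform columns
    lo = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)
    hi = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])

rng = np.random.default_rng(4)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.sin(6 * x) * np.cos(4 * y) + 0.05 * rng.standard_normal((64, 64))

coeffs = haar2d(img)
kept = np.abs(coeffs) > 0.1            # keep only significant coefficients
print(f"fraction of coefficients kept: {kept.mean():.2%}")
```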
Real-time transmission of digital video using variable-length coding
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1993-01-01
Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
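A minimal sketch of the idea follows: differentially code a slowly varying sample stream, build a Huffman code from the symbol frequencies, and compare the average code length against the raw word size. This is a generic software illustration, not the hardware design described in the paper; the synthetic "video line" and its statistics are assumptions.

```python
# Minimal Huffman-coding sketch (not the NASA hardware design): build a code
# for differentially coded samples and report the average code length.
import heapq
from collections import Counter
import numpy as np

def huffman_code(freqs):
    """Return {symbol: bitstring} for a {symbol: count} table."""
    heap = [[count, i, [sym, ""]] for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # extend codes in the lighter subtree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], next_id, *lo[2:], *hi[2:]])
        next_id += 1
    return dict(heap[0][2:])

rng = np.random.default_rng(5)
line = np.cumsum(rng.integers(-2, 3, size=10000)) % 256   # slowly varying "video line"
diffs = np.diff(line.astype(int))                          # differential coding
code = huffman_code(Counter(diffs.tolist()))
avg_bits = sum(len(code[int(d)]) for d in diffs) / len(diffs)
print(f"average code length: {avg_bits:.2f} bits vs 8 bits raw")
```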
THE TURBULENT DYNAMO IN HIGHLY COMPRESSIBLE SUPERSONIC PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federrath, Christoph; Schober, Jennifer; Bovino, Stefano
The turbulent dynamo may explain the origin of cosmic magnetism. While the exponential amplification of magnetic fields has been studied for incompressible gases, little is known about dynamo action in highly compressible, supersonic plasmas, such as the interstellar medium of galaxies and the early universe. Here we perform the first quantitative comparison of theoretical models of the dynamo growth rate and saturation level with three-dimensional magnetohydrodynamical simulations of supersonic turbulence with grid resolutions of up to 1024^3 cells. We obtain numerical convergence and find that dynamo action occurs for both low and high magnetic Prandtl numbers Pm = ν/η = 0.1-10 (the ratio of viscous to magnetic dissipation), which had so far only been seen for Pm ≥ 1 in supersonic turbulence. We measure the critical magnetic Reynolds number, Rm_crit = 129 (+43/−31), showing that the compressible dynamo is almost as efficient as in incompressible gas. Considering the physical conditions of the present and early universe, we conclude that magnetic fields need to be taken into account during structure formation from the early to the present cosmic ages, because they suppress gas fragmentation and drive powerful jets and outflows, both greatly affecting the initial mass function of stars.
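For reference, the dimensionless numbers quoted above follow the standard definitions (the notation here is the conventional one, stated as an assumption rather than quoted from the paper):

```latex
% Standard definitions (conventional notation, assumed here):
%   U, L : characteristic velocity and length scale
%   \nu  : kinematic viscosity,  \eta : magnetic diffusivity
\mathrm{Re} = \frac{U L}{\nu}, \qquad
\mathrm{Rm} = \frac{U L}{\eta}, \qquad
\mathrm{Pm} = \frac{\mathrm{Rm}}{\mathrm{Re}} = \frac{\nu}{\eta}.
```

Dynamo action sets in once Rm exceeds the critical value Rm_crit reported above.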
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
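For scale, the baseline against which bits/base figures like 1.58 are judged is the trivial fixed 2-bit packing of the four bases. The sketch below shows only that baseline; it is not DNABIT Compress, which additionally assigns special bit codes to exact and reverse repeats.

```python
# Toy sketch of fixed 2-bit packing of DNA bases (A, C, G, T) -- a baseline
# for the bits/base figures quoted above, not the DNABIT Compress scheme.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack an ACGT string into a byte array, 4 bases per byte."""
    vals = [CODE[b] for b in seq]
    out = bytearray()
    for i in range(0, len(vals), 4):
        chunk, byte = vals[i:i + 4], 0
        for j, v in enumerate(chunk):
            byte |= v << (2 * j)
        out.append(byte)
    return bytes(out)

seq = "ACGTACGTGGCCTTAA" * 1000
packed = pack(seq)
print(f"{8 * len(packed) / len(seq):.2f} bits/base vs 8 bits/base as ASCII")
```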
The effect of compressive viscosity and thermal conduction on the longitudinal MHD waves
NASA Astrophysics Data System (ADS)
Bahari, K.; Shahhosaini, N.
2018-05-01
Longitudinal magnetohydrodynamic (MHD) oscillations have been studied in a slowly cooling coronal loop, in the presence of thermal conduction and compressive viscosity, in the linear MHD approximation. The WKB method has been used to solve the governing equations. In the leading-order approximation the dispersion relation has been obtained, and using the first-order approximation the time-dependent amplitude has been determined. Cooling causes the oscillations to amplify and damping mechanisms are more efficient in hot loops. In cool loops the oscillation amplitude increases with time but in hot loops the oscillation amplitude decreases with time. Our conclusion is that in hot loops the efficiency of the compressive viscosity in damping longitudinal waves is comparable to that of the thermal conduction.
The effect of compressive viscosity and thermal conduction on the longitudinal MHD waves
NASA Astrophysics Data System (ADS)
Bahari, K.; Shahhosaini, N.
2018-07-01
Longitudinal magnetohydrodynamic (MHD) oscillations have been studied in a slowly cooling coronal loop, in the presence of thermal conduction and compressive viscosity, in the linear MHD approximation. The WKB method has been used to solve the governing equations. In the leading order approximation the dispersion relation has been obtained, and using the first-order approximation the time-dependent amplitude has been determined. Cooling causes the oscillations to amplify and damping mechanisms are more efficient in hot loops. In cool loops the oscillation amplitude increases with time but in hot loops the oscillation amplitude decreases with time. Our conclusion is that in hot loops the efficiency of the compressive viscosity in damping longitudinal waves is comparable to that of the thermal conduction.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may exceed the available storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has a better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
Development of the manufacture of billets based on high-strength aluminum alloys
NASA Astrophysics Data System (ADS)
Korostelev, V. F.; Denisov, M. S.; Bol'shakov, A. E.; Van Khieu, Chan
2017-09-01
When pressure is applied during casting as an external impact on the melt, the problems related mainly to the filling of molds are solved; however, some casting defects cannot be avoided. The experimental results demonstrate that complete compensation of shrinkage under pressure can be achieved by compressing the casting by 8-10% prior to the beginning of solidification and by 2-3% during the transition of the metal from the liquid to the solid state. It is noted that the procedure based on compressing a liquid metal can be efficiently applied to the manufacture of high-strength aluminum alloy castings. The selection of engineering parameters is substantiated. Examples of castings made of V95 alloy according to the developed procedure are given. In addition, the article discusses the problems related to the design of engineering and special-purpose equipment, software, and control automation.
High throughput dual-wavelength temperature distribution imaging via compressive imaging
NASA Astrophysics Data System (ADS)
Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie
2018-03-01
Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, which limits the need for pixel registration and fine adjustments. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampling recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this structure.
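The measurement model behind such single-pixel systems is simply a sequence of inner products between DMD masks and the scene, with the bucket detector supplying one number per mask. The toy sketch below illustrates that model with a synthetic scene and a least-squares recovery; it is not the authors' compressive reconstruction (which works from under-sampled data) nor the dual-wavelength optics, and all sizes are arbitrary.

```python
# Toy single-pixel-camera sketch: y_i = <mask_i, scene>, then recover the
# scene from the bucket readings.  Fully determined here for simplicity;
# the compressive case uses fewer masks plus a sparsity-exploiting solver.
import numpy as np

rng = np.random.default_rng(6)
n = 16                                    # image is n x n
scene = np.zeros((n, n))
scene[4:9, 5:12] = 1.0                    # a warm rectangular "object"

M = n * n                                 # number of DMD patterns
masks = rng.integers(0, 2, size=(M, n * n)).astype(float)   # binary DMD masks
y = masks @ scene.ravel()                 # bucket-detector readings

recon, *_ = np.linalg.lstsq(masks, y, rcond=None)
err = np.linalg.norm(recon - scene.ravel()) / np.linalg.norm(scene)
print(f"relative reconstruction error: {err:.2e}")
```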
Adsorbed Natural Gas Storage in Optimized High Surface Area Microporous Carbon
NASA Astrophysics Data System (ADS)
Romanos, Jimmy; Rash, Tyler; Nordwald, Erik; Shocklee, Joshua Shawn; Wexler, Carlos; Pfeifer, Peter
2011-03-01
Adsorbed natural gas (ANG) is an attractive alternative technology to compressed natural gas (CNG) or liquefied natural gas (LNG) for the efficient storage of natural gas, in particular for vehicular applications. In adsorbents engineered to have pores of a few molecular diameters, a strong van der Waals force allows reversible physisorption of methane at low pressures and room temperature. Activated carbons were optimized for storage by varying the KOH:C ratio and activation temperature. We also consider the effect of mechanical compression of powders to further enhance the volumetric storage capacity. We will present standard porous-material characterization (BET surface area and pore-size distribution from subcritical N2 adsorption) and methane isotherms up to 250 bar at 293 K. At sufficiently high pressure, the specific surface area, methane binding energy, and film density can be extracted from supercritical methane adsorption isotherms. Research supported by the California Energy Commission (500-08-022).
Compact Encoding of Robot-Generated 3D Maps for Efficient Wireless Transmission
2003-01-01
Lempel-Ziv-Welch (LZW) and Ziv-Lempel (LZ77), respectively. Image-based compression can also be based on dictionaries... compression of the data, without actually displaying a 3D model, printing statistical results for comparison of the different algorithms... compression algorithms, and wavelet algorithms tuned to the specific nature of the raw laser data. For most such applications, the usage of lossless
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
NASA Technical Reports Server (NTRS)
Sidilkover, David
1997-01-01
Some important advances took place during the last several years in the development of genuinely multidimensional upwind schemes for the compressible Euler equations. In particular, a robust, high-resolution genuinely multidimensional scheme that can be used for computations in any of the flow regimes was constructed. This paper summarizes these developments briefly and outlines the fundamental advantages of this approach.
Effect of intake pipe on the volumetric efficiency of an internal combustion engine
NASA Technical Reports Server (NTRS)
Capetti, Antonio
1929-01-01
The writer discusses the phenomena of expansion and compression which alternately take place in the cylinders of four-stroke engines during the induction process at a high mean piston speed due to the inertia and elasticity of the mixture in the intake pipe. The present paper is intended to demonstrate theoretically the existence of a most favorable pipe length for charging.
Poe, Donald P
2005-06-17
A general theory for efficiency of nonuniform columns with compressible mobile phase fluids is applied to the elution of an unretained solute in packed-column supercritical fluid chromatography (pSFC). The theoretical apparent plate height under isothermal conditions is given by the Knox equation multiplied by a compressibility correction factor f1, which is equal to the ratio of the temporal-to-spatial average densities of the mobile phase. If isothermal conditions are maintained, large pressure drops in pSFC should not result in excessive efficiency losses for elution of unretained solutes.
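Written out, the relation described above takes the form below; this is only a sketch assembled from the abstract's wording, with the standard Knox form assumed for the plate-height term:

```latex
H_{\mathrm{app}} = f_1\,H_{\mathrm{Knox}},\qquad
f_1 = \frac{\langle\rho\rangle_t}{\langle\rho\rangle_z},\qquad
\frac{H_{\mathrm{Knox}}}{d_p} = A\,\nu^{1/3} + \frac{B}{\nu} + C\,\nu ,
```

where ⟨ρ⟩_t and ⟨ρ⟩_z are the temporal and spatial average densities of the mobile phase, d_p is the particle diameter, and ν is the reduced velocity.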
Detonation duct gas generator demonstration program
NASA Technical Reports Server (NTRS)
Wortman, Andrew; Brinlee, Gayl A.; Othmer, Peter; Whelan, Michael A.
1991-01-01
The feasibility of generating detonation waves that move periodically across a high-speed channel flow is experimentally demonstrated. Such waves are central to the concept of gas dynamic compression in a duct, the objective being to reduce conventional compressor requirements and to increase the engine thermodynamic efficiency through isochoric energy addition. By generating transient transverse waves, rather than standing waves, shock wave losses are reduced by an order of magnitude. The ultimate objective is to use such detonation ducts downstream of a low-pressure gas turbine compressor to produce a high overall pressure ratio thermodynamic cycle. A 4-foot-long detonation duct with a 1 inch x 12 inch cross section was operated in a blow-down mode using compressed air reservoirs. Liquid or vapor propane was injected through injectors or solenoid valves located in the plenum or in the duct itself. Detonation waves were generated when the mixture was ignited by a row of spark plugs in the duct wall. Problems with fuel injection and mixing limited the air speeds to about Mach 0.5, the frequencies to below 10 Hz, and the measured pressure ratios to about 5 to 6. The feasibility of gas dynamic compression was demonstrated and the critical problem areas were identified.
Core-pumped mode-locked ytterbium-doped fiber laser operating around 980 nm
NASA Astrophysics Data System (ADS)
Zhou, Yue; Dai, Yitang; Li, Jianqiang; Yin, Feifei; Dai, Jian; Zhang, Tian; Xu, Kun
2018-07-01
In this letter, we demonstrate for the first time a core-pumped passively mode-locked all-normal-dispersion ytterbium-doped fiber oscillator based on nonlinear polarization evolution operating around 980 nm. The dissipative-soliton pulse can be compressed down to 250 fs with 1 nJ pulse energy, and the slope efficiency of the oscillator can be as high as 19%. To improve the smoothness of the dissipative-soliton output spectrum, we replace the birefringent-plate-based intracavity filter with a diffraction-grating-based filter. The output pulse duration can then be further compressed down to 180 fs with improved spectral smoothness. These schemes have potential applications in seeding cryogenic Yb:YLF amplifiers and underwater exploration of marine resources.
NASA Technical Reports Server (NTRS)
Gabb, Timothy P.; Danetti, Andrew; Draper, Susan L.; Locci, Ivan E.; Telesman, Jack
2016-01-01
The fatigue lives of disk superalloys can be increased by shot peening their surfaces, to induce compressive residual stresses near the surface that impede cracking there. As disk application temperatures increase for improved efficiency, the persistence of these beneficial stresses could be impaired, especially with continued fatigue cycling. The objective of this work was to study the retention of residual stresses introduced by shot peening, when subjected to fatigue and high temperatures. Fatigue specimens of powder metallurgy processed nickel-base disk superalloy ME3 were prepared with consistent processing and heat treatment. They were then shot peened using varied conditions. Strain-controlled fatigue cycles were run at room temperature and 704 C, to allow re-assessment of residual stresses.
[Cutaneous cicatrix: natural course, anomalies and prevention].
Bardot, J
1994-09-01
Improving scar quality has become a major concern for surgeons. Although good skin suturing is of primordial importance, the healing process varies greatly from one patient to another and the risk of hypertrophic or keloid scar evolution is currently unpredictable. Local massage and, above all, post-operative compression using compressive garments and silicone sheets are efficient methods of counteracting the proliferative phase which occurs during the first few months. In severe cases, particularly in burn patients, high-pressure springwater hydrotherapy to reduce scar contracture has proved to be effective. The current trend is to decrease the risk of bad scars in the immediate post-traumatic, post-operative stage in order to obtain the best possible scar initially and thus avoid revision surgery.
NASA Astrophysics Data System (ADS)
Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.
2016-02-01
The double random phase encryption (DRPE) method is a well-known all-optical architecture which has many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to the real and imaginary components of the DRPE output plane. The compression and encryption technique consists of an innovative randomized arithmetic coder (RAC) that can effectively compress the DRPE output planes and at the same time enhance the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique is capable of processing video content and is standard compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: the cryptographic properties of DRPE are enhanced while a compression rate of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.
Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.
Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao
2016-06-01
Video encryption schemes mostly employ selective encryption to encrypt parts of the important and sensitive video information, aiming to ensure real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to its high computational overhead. In this paper, we propose an encryption selection control module that encrypts video syntax elements dynamically under the control of a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
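A minimal sketch of the selective-encryption idea is shown below, with a plain logistic map and threshold binarization standing in for the paper's spatiotemporal chaos system; the chosen syntax-element bytes, map parameters and function names are all illustrative assumptions.

```python
import numpy as np

def logistic_keystream(x0, r, n_bits, burn_in=1000):
    """Binarize a logistic-map orbit into a pseudo-random bit stream
    (a stand-in for the paper's spatiotemporal chaos system)."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def xor_selected(syntax_bytes, key_bits):
    """XOR the chosen syntax-element bytes with the keystream (selective encryption)."""
    ks = np.packbits(key_bits[:8 * len(syntax_bytes)])
    return bytes(int(b) ^ int(k) for b, k in zip(syntax_bytes, ks))

payload = bytes([0x1F, 0xA0, 0x5C, 0x07])          # illustrative sensitive syntax elements
bits = logistic_keystream(x0=0.3671, r=3.99, n_bits=64)
cipher = xor_selected(payload, bits)
plain = xor_selected(cipher, bits)                  # XOR with the same keystream decrypts
print(cipher.hex(), plain == payload)
```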
Efficient electrochemical refrigeration power plant using natural gas with ∼100% CO2 capture
NASA Astrophysics Data System (ADS)
Al-musleh, Easa I.; Mallapragada, Dharik S.; Agrawal, Rakesh
2015-01-01
We propose an efficient Natural Gas (NG) based Solid Oxide Fuel Cell (SOFC) power plant equipped with ∼100% CO2 capture. The power plant uses a unique refrigeration-based process to capture and liquefy CO2 from the SOFC exhaust. The capture of CO2 is carried out via condensation and purification using two rectifying columns operating at different pressures. The uncondensed gas mixture, comprising relatively high-purity unconverted fuel, is recycled to the SOFC and found to boost the power generation of the SOFC by 22% when compared to a stand-alone SOFC. If Liquefied Natural Gas (LNG) is available at the plant gate, then the refrigeration available from its evaporation is used for CO2 Capture and Liquefaction (CO2CL). If NG is utilized, then a Mixed Refrigerant (MR) vapor compression cycle is utilized for CO2CL. Alternatively, the necessary refrigeration can be supplied by evaporating the captured liquid CO2 at a lower pressure, which is then compressed to supercritical pressures for pipeline transportation. From rigorous simulations, the power generation efficiency of the proposed processes is found to be 70-76% based on lower heating value (LHV). The benefit of the proposed processes is evident when the efficiency of 73% for a conventional SOFC-gas turbine power plant without CO2 capture is compared with an equivalent efficiency of 71.2% for the proposed process with CO2CL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Blarigan, P.
A hydrogen fueled engine is being developed specifically for the auxiliary power unit (APU) in a series type hybrid vehicle. Hydrogen is different from other internal combustion (IC) engine fuels, and hybrid vehicle IC engine requirements are different from those of other IC vehicle engines. Together these differences will allow a new engine design based on first principles that will maximize thermal efficiency while minimizing principal emissions. The experimental program is proceeding in four steps: (1) Demonstration of the emissions and the indicated thermal efficiency capability of a standard CLR research engine modified for higher compression ratios and hydrogen fueled operation. (2) Design and test a new combustion chamber geometry for an existing single cylinder research engine, in an attempt to improve on the baseline indicated thermal efficiency of the CLR engine. (3) Design and build, in conjunction with an industrial collaborator, a new full scale research engine designed to maximize brake thermal efficiency. Include a full complement of combustion diagnostics. (4) Incorporate all of the knowledge thus obtained in the design and fabrication, by an industrial collaborator, of the hydrogen fueled engine for the hybrid vehicle power train illustrator. Results of the CLR baseline engine testing are presented, as well as preliminary data from the new combustion chamber engine. The CLR data confirm the low NOx produced by lean operation. The preliminary indicated thermal efficiency data from the new combustion chamber design engine show an improvement relative to the CLR engine. Comparison with previous high compression engine results shows reasonable agreement.
SVD compression for magnetic resonance fingerprinting in the time domain.
McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A
2014-12-01
Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, one desires a more efficient method to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition, which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
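A minimal numpy sketch of the compression and matching steps is given below; a synthetic family of exponential decays stands in for the Bloch-simulated dictionary, and the rank, sizes and noise level are illustrative, so the matched parameter should only land close to the true one.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, k = 1000, 10                        # time points and retained SVD rank (illustrative)
t = np.arange(n_time)
T = np.linspace(20.0, 2000.0, 2000)         # stand-in relaxation parameters, one per entry

# Synthetic stand-in for the Bloch-simulated dictionary: one decay curve per entry.
D = np.exp(-t[None, :] / T[:, None])
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Low-rank time-domain basis from the SVD of the dictionary.
_, s, Vt = np.linalg.svd(D, full_matrices=False)
Vk = Vt[:k].T                                # n_time x k projection matrix
D_small = D @ Vk                             # compressed dictionary

# Noisy acquisition of entry 777, matched by inner product in the compressed domain.
y = D[777] + 0.01 * rng.standard_normal(n_time)
best = int(np.argmax(D_small @ (Vk.T @ y)))
print(f"energy kept: {np.sum(s[:k]**2) / np.sum(s**2):.4f}, "
      f"true T = {T[777]:.0f}, matched T = {T[best]:.0f}")
```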
Thermo-electrochemical production of compressed hydrogen from methane with near-zero energy loss
NASA Astrophysics Data System (ADS)
Malerød-Fjeld, Harald; Clark, Daniel; Yuste-Tirados, Irene; Zanón, Raquel; Catalán-Martinez, David; Beeaff, Dustin; Morejudo, Selene H.; Vestre, Per K.; Norby, Truls; Haugsrud, Reidar; Serra, José M.; Kjølseth, Christian
2017-11-01
Conventional production of hydrogen requires large industrial plants to minimize energy losses and capital costs associated with steam reforming, water-gas shift, product separation and compression. Here we present a protonic membrane reformer (PMR) that produces high-purity hydrogen from steam methane reforming in a single-stage process with near-zero energy loss. We use a BaZrO3-based proton-conducting electrolyte deposited as a dense film on a porous Ni composite electrode with dual function as a reforming catalyst. At 800 °C, we achieve full methane conversion by removing 99% of the formed hydrogen, which is simultaneously compressed electrochemically up to 50 bar. A thermally balanced operation regime is achieved by coupling several thermo-chemical processes. Modelling of a small-scale (10 kg H2 per day) hydrogen plant reveals an overall energy efficiency of >87%. The results suggest that future declining electricity prices could make PMRs a competitive alternative for industrial-scale hydrogen plants integrating CO2 capture.
Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN
NASA Astrophysics Data System (ADS)
Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri
2017-03-01
Triangular surface meshes are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the mesh geometry is often very dense, it is necessary to remesh the object to reduce its complexity and to improve the mesh quality (connectivity regularity). In this paper, we review the main semi-regular remeshing methods of the state of the art, given that semi-regular remeshing is mainly relevant for wavelet-based compression. We then present our remeshing method based on trust-region spherical geometry images, which provides a good 3D mesh compression scheme used to deform 3D meshes based on a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression with a minimal set of features for a good 3D deformation scheme.
Further Investigations of Hypersonic Engine Seals
NASA Technical Reports Server (NTRS)
Dunlap, Patrick H., Jr.; Steinetz, Bruce M.; DeMange, Jeffrey J.
2004-01-01
Durable, flexible sliding seals are required in advanced hypersonic engines to seal the perimeters of movable engine ramps for efficient, safe operation in high heat flux environments at temperatures of 2000 to 2500 F. Current seal designs do not meet the demanding requirements for future engines, so NASA's Glenn Research Center is developing advanced seals and preloading devices to overcome these shortfalls. An advanced ceramic wafer seal design and two silicon nitride compression spring designs were evaluated in a series of compression, scrub, and flow tests. Silicon nitride wafer seals survived 2000 in. (50.8 m) of scrubbing at 2000 F against a silicon carbide rub surface with no chips or signs of damage. Flow rates measured for the wafers before and after scrubbing were almost identical and were up to 32 times lower than those recorded for the best braided rope seal flow blockers. Silicon nitride compression springs showed promise conceptually as potential seal preload devices to help maintain seal resiliency.
Farruggia, Andrea; Gagie, Travis; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni
2018-05-01
Suffix trees are one of the most versatile data structures in stringology, with many applications in bioinformatics. Their main drawback is their size, which can be tens of times larger than the input sequence. Much effort has been put into reducing the space usage, leading ultimately to compressed suffix trees. These compressed data structures can efficiently simulate the suffix tree, while using space proportional to a compressed representation of the sequence. In this work, we take a new approach to compressed suffix trees for repetitive sequence collections, such as collections of individual genomes. We compress the suffix trees of individual sequences relative to the suffix tree of a reference sequence. These relative data structures provide competitive time/space trade-offs, being almost as small as the smallest compressed suffix trees for repetitive collections, and competitive in time with the largest and fastest compressed suffix trees.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
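As a toy illustration of the general idea above (indices whose values are uncertain by one unit can be nudged so that their parity carries hidden bits), the sketch below embeds and recovers a payload while changing each index by at most one unit; it is not the patented method itself, and the coefficient values are made up.

```python
import numpy as np

def embed_bits(indices, bits):
    """Nudge quantization indices by at most one unit so their parity encodes the payload."""
    out = indices.copy()
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            out[i] += 1 if out[i] <= 0 else -1     # move toward zero to keep distortion small
    return out

def extract_bits(indices, n):
    return [int(v % 2) for v in indices[:n]]

quantized = np.array([12, -3, 7, 0, 5, -8, 9, 14])     # illustrative quantized coefficients
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(quantized, payload)
print(extract_bits(stego, len(payload)) == payload)    # payload recovered
print(int(np.max(np.abs(stego - quantized))))          # per-index distortion is at most 1
```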
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
Comparative Study of Three High Order Schemes for LES of Temporally Evolving Mixing Layers
NASA Technical Reports Server (NTRS)
Yee, Helen M. C.; Sjogreen, Biorn Axel; Hadjadj, C.
2012-01-01
Three high order shock-capturing schemes are compared for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) ranging from the quasi-incompressible regime to highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7) and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameter agree well with experimental results of Barone et al. (2006), and published direct numerical simulations (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with experimental data and DNS computations.
Electroforming of optical tooling in high-strength Ni-Co alloy
NASA Astrophysics Data System (ADS)
Stein, Berl
2003-05-01
Plastic optics are often mass produced by injection, compression or injection-compression molding. Optical quality molds can be directly machined in appropriate materials (tool steels, electroless nickel, aluminum, etc.), but much greater cost efficiency can be achieved with electroformed mold inserts. Traditionally, electroforming of optical quality mold inserts has been carried out in nickel, a material much softer than tool steels, which, when hardened to 45-50 HRc, usually exhibit high wear resistance and long service life (hundreds of thousands of impressions per mold). Because of their low hardness (< 20 HRc), nickel molds can produce only tens of thousands of parts before they are scrapped due to wear or accidental damage. This drawback prevented their wider usage in general plastic and optical mold making. Recently, NiCoForm has developed a proprietary Ni-Co electroforming bath combining the high strength and wear resistance of the alloy with the low stress and high replication fidelity typical of pure nickel electroforming. This paper will outline the approach to electroforming of optical quality tooling in low-stress, high-strength Ni-Co alloy and present several examples of electroformed NiColoy mold inserts.
Ito, Yoichiro; Ma, Xiaofeng; Clary, Robert
2016-01-01
A simple tool is introduced which can modify the shape of tubing to enhance the partition efficiency in high-speed countercurrent chromatography. It consists of a pair of interlocking identical gears, each coaxially holding a pressing wheel to intermittently compress plastic tubing in 0 – 10 mm length at every 1 cm interval. The performance of the processed tubing is examined in protein separation with 1.6 mm ID PTFE tubing intermittently pressed in 3 mm and 10 mm width both at 10 mm intervals at various flow rates and revolution speeds. A series of experiments was performed with a polymer phase system composed of polyethylene glycol and dibasic potassium phosphate each at 12.5% (w/w) in deionized water using three protein samples. Overall results clearly demonstrate that the compressed tubing can yield substantially higher peak resolution than the non-processed tubing. The simple tubing modifier is very useful for separation of proteins with high-speed countercurrent chromatography. PMID:27818942
Spitzer Operations: Scheduling the Out Years
NASA Technical Reports Server (NTRS)
Mahoney, William A.; Effertz, Mark J.; Fisher, Mark E.; Garcia, Lisa J.; Hunt, Joseph C. Jr.; Mannings, Vincent; McElroy, Douglas B.; Scire, Elena
2012-01-01
Spitzer Warm Mission operations have remained robust and exceptionally efficient since the cryogenic mission ended in mid-2009. The distance to the observatory now exceeds 1 AU, making telecommunications increasingly difficult; however, analysis has shown that two-way communication could be maintained through at least 2017 with minimal loss in observing efficiency. The science program continues to emphasize the characterization of exoplanets, time domain studies, and deep surveys, all of which can impose interesting scheduling constraints. Recent changes have significantly improved on-board data compression, which both enables certain high-volume observations and reduces Spitzer's demand for competitive Deep Space Network resources.
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjogreen, B.; Sandham, N. D.; Hadjadj, A.; Kwak, Dochan (Technical Monitor)
2000-01-01
In a series of papers, Olsson (1994, 1995), Olsson & Oliger (1994), Strand (1994), Gerritsen & Olsson (1996), Yee et al. (1999a,b, 2000) and Sandham & Yee (2000), the issue of nonlinear stability of the compressible Euler and Navier-Stokes equations, including physical boundaries, and the corresponding development of the discrete analogue of nonlinearly stable high order schemes, including boundary schemes, were developed, extended and evaluated for various fluid flows. High order here refers to spatial schemes that are essentially fourth-order or higher away from shock and shear regions. The objective of this paper is to give an overview of the progress of the low-dissipative high order shock-capturing schemes proposed by Yee et al. (1999a,b, 2000). This class of schemes consists of simple non-dissipative high order compact or non-compact central spatial differencings and adaptive nonlinear numerical dissipation operators to minimize the use of numerical dissipation. The amount of numerical dissipation is further minimized by applying the scheme to the entropy splitting form of the inviscid flux derivatives, and by rewriting the viscous terms to minimize odd-even decoupling before the application of the central scheme (Sandham & Yee). The efficiency and accuracy of these schemes are compared with spectral, TVD and fifth-order WENO schemes. A new approach of Sjogreen & Yee (2000) utilizing non-orthogonal multi-resolution wavelet basis functions as sensors to dynamically determine the appropriate amount of numerical dissipation to be added to the non-dissipative high order spatial scheme at each grid point will be discussed. Numerical experiments of long time integration of smooth flows, shock-turbulence interactions, direct numerical simulations of a 3-D compressible turbulent plane channel flow, and various mixing layer problems indicate that these schemes are especially suitable for practical complex problems in nonlinear aeroacoustics, rotorcraft dynamics, direct numerical simulation or large eddy simulation of compressible turbulent flows at various speeds including high-speed shock-turbulence interactions, and general long time wave propagation problems. These schemes, including entropy splitting, have also been extended to freestream-preserving schemes on curvilinear moving grids for a thermally perfect gas (Vinokur & Yee 2000).
Hyperspectral IASI L1C Data Compression.
García-Sobrino, Joaquín; Serra-Sagristà, Joan; Bartrina-Rapesta, Joan
2017-06-16
The Infrared Atmospheric Sounding Interferometer (IASI), implemented on the MetOp satellite series, represents a significant step forward in atmospheric forecasting and weather understanding. The instrument provides infrared soundings of unprecedented accuracy and spectral resolution to derive humidity and atmospheric temperature profiles, as well as some of the chemical components playing a key role in climate monitoring. IASI collects rich spectral information, which results in large amounts of data (about 16 gigabytes per day). Efficient compression techniques are required for both transmission and storage of such huge data volumes. This study reviews the performance of several state-of-the-art coding standards and techniques for IASI L1C data compression. The discussion embraces lossless, near-lossless and lossy compression. Several spectral transforms, essential to achieve improved coding performance due to the high spectral redundancy inherent to IASI products, are also discussed. Illustrative results are reported for a set of 96 IASI L1C orbits acquired over a full year (4 orbits per month for each of IASI-A and IASI-B from July 2013 to June 2014). Further, this survey provides organized data and facts to assist future research and the atmospheric scientific community.
Compressibility effects on turbulent mixing
NASA Astrophysics Data System (ADS)
Panickacheril John, John; Donzis, Diego
2016-11-01
We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence, with a focus on the fundamental mechanisms responsible for such effects, using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6, and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We found that, like shear layers, mixing is reduced as the Mach number increases. However, the data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions, we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
Effects of compression and individual variability on face recognition performance
NASA Astrophysics Data System (ADS)
McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.
2004-08-01
The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both for automated FR systems and human inspectors. Working within the M1-Biometrics Technical Committee of the InterNational Committee for Information Technology Standards (INCITS) organization, a standard face image format will be tested and submitted to organizations such as ICAO.
Preparation of flexible TiO2 photoelectrodes for dye-sensitized solar cells
NASA Astrophysics Data System (ADS)
Li, Wen-Ren; Wang, Hsiu-Hsuan; Lin, Chia-Feng; Su, Chaochin
2014-09-01
Dye-sensitized solar cells (DSSCs) based on nanocrystalline TiO2 photoelectrodes on indium tin oxide (ITO) coated polymer substrates have drawn great attention due to their light weight, flexibility and advantages in commercial applications. However, the thermal instability of polymer substrates limits the process temperature to below 150 °C. In order to ensure a strong and firm interparticle connection between TiO2 nanocrystals (TiO2-NC) and polymer substrates, post-treatment of flexible TiO2 photoelectrodes (F-TiO2-PE) by mechanical compression was employed. In this work, Degussa P25 TiO2-NC was mixed with tert-butyl alcohol and DI water to form a TiO2 paste. F-TiO2-PE was then prepared by coating the TiO2 paste onto an ITO-coated polyethylene terephthalate (PET) substrate using a doctor blade, followed by low-temperature sintering at 120 °C for 2 hours. To study the effect of mechanical compression, we applied 50 and 100 kg/cm2 pressure on TiO2/PET to complete the fabrication of the F-TiO2-PE. The surface morphology of the F-TiO2-PE was characterized using scanning electron microscopy. The resultant F-TiO2-PE sample exhibited a smooth, crack-free structure, indicating a great improvement in the interparticle connection of the TiO2-NC. Increasing the compression pressure could lead to an increase in DSSC photoconversion efficiency. The best photoconversion efficiency of 4.19% (open-circuit voltage (Voc) = 0.79 V, short-circuit photocurrent density (Jsc) = 7.75 mA/cm2, fill factor (FF) = 0.68) was obtained for the F-TiO2-PE device, which showed great enhancement compared with the F-TiO2-PE cell without compression treatment. The effect of compression on DSSC performance was verified by electrochemical impedance spectroscopy measurements.
Research on lossless compression of true color RGB image with low time and space complexity
NASA Astrophysics Data System (ADS)
Pan, ShuLin; Xie, ChengJun; Xu, Lin
2008-12-01
Correlated redundancy in space and energy is eliminated by using a DWT lifting scheme, and the complexity of the image is reduced by using an algebraic transform among the RGB components. This paper proposes an improved Rice coding algorithm that incorporates an enumerating DWT lifting scheme which fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking. It supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method can achieve high image compression. Compared with Lossless-JPG, PNG(Microsoft), PNG(Rene), PNG(Photoshop), PNG(Anix PicViewer), PNG(ACDSee), PNG(Ulead Photo Explorer), JPEG2000, PNG(KoDa Inc), SPIHT and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. Running from main memory on a Pentium IV CPU at 2.20 GHz with 256 MB RAM, the coding speed of the proposed coder is about 21 times that of SPIHT, an efficiency gain of roughly 166%, and its decoding speed is about 17 times that of SPIHT, an efficiency gain of roughly 128%.
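For reference, the sketch below performs one level of the standard reversible 5/3 lifting wavelet transform on a 1-D signal (with periodic boundary handling for brevity); it illustrates the lifting scheme in general and is not the paper's enumerating scheme or its image renormalization.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the integer 5/3 lifting wavelet transform (even-length 1-D signal)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left even + right even) / 2), periodic at the boundary.
    odd -= (even + np.roll(even, -1)) >> 1
    # Update: approx = even + floor((left detail + right detail + 2) / 4).
    even += (odd + np.roll(odd, 1) + 2) >> 2
    return even, odd            # low-pass (approximation) and high-pass (detail) bands

def lift_53_inverse(lo, hi):
    even = lo - ((hi + np.roll(hi, 1) + 2) >> 2)      # undo the update step
    odd = hi + ((even + np.roll(even, -1)) >> 1)      # undo the predict step
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([10, 12, 15, 20, 18, 17, 16, 16])
lo, hi = lift_53_forward(x)
print(lo, hi, np.array_equal(lift_53_inverse(lo, hi), x))   # True: perfect reconstruction
```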
Area and power efficient DCT architecture for image compression
NASA Astrophysics Data System (ADS)
Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan
2014-12-01
The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones and requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves a performance in image compression that is comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.
Video coding for 3D-HEVC based on saliency information
NASA Astrophysics Data System (ADS)
Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan
2016-11-01
As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new-generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, which is an important part of the human visual system (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture and its corresponding depth map are divided into three regions: salient area, middle area and non-salient area. Afterwards, different quantization parameters are assigned to the different regions to conduct low-complexity coding. Finally, the compressed video generates new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to a 38% reduction in encoding time without subjective quality loss in compression or rendering.
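A minimal sketch of the region-dependent quantization described above: per-LCU saliency scores are thresholded into the three regions and mapped to QP values; the thresholds and QP offsets here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def assign_qp(saliency, base_qp=32, offsets=(-2, 0, 4), thresholds=(0.66, 0.33)):
    """Map per-LCU saliency scores in [0, 1] to QP values for three regions.
    offsets = (salient, middle, non-salient); thresholds and offsets are illustrative."""
    qp = np.full(saliency.shape, base_qp + offsets[2])       # non-salient area
    qp[saliency >= thresholds[1]] = base_qp + offsets[1]     # middle area
    qp[saliency >= thresholds[0]] = base_qp + offsets[0]     # salient area
    return qp

saliency = np.array([[0.90, 0.50, 0.10],
                     [0.70, 0.20, 0.05]])
print(assign_qp(saliency))
```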
Efficient transmission of compressed data for remote volume visualization.
Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S
2006-09-01
One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.
NASA Technical Reports Server (NTRS)
Schey, Oscar W; Wilson, Ernest E
1929-01-01
This report presents the results of an analytical investigation on the practicability of using mechanically operated discharge valves in conjunction with a manually operated intake control for improving the performance of N. A. C. A. Roots type superchargers. These valves, which may be either of the oscillating or rotating type, are placed in the discharge opening of the supercharger and are so shaped and synchronized with the supercharger impellers that they do not open until the air has been compressed to the delivery pressure. The intake control limits the quantity of air compressed to engine requirements by permitting the excess air to escape from the compression chamber before compression begins. The percentage power saving and the actual horsepower saved were computed for altitudes from 0 to 20,000 feet. These computations are based on the pressure-volume cards for the conventional and the modified roots type superchargers and on the results of laboratory tests of the conventional type. The use of discharge valves shows a power saving of approximately 26 per cent at a critical altitude of 20,000 feet. In addition, these valves reduce the amplitude of the discharge pulsations and increase the volumetric efficiency. With slow-speed roots blowers operating at high-pressure differences even better results would be expected. For aircraft engine superchargers operating at high speeds these discharge valves increase the performance as above, but have the disadvantages of increasing the weight and of adding a high-speed mechanism to a simple machine. (author)
Kumar, Ranjeet; Kumar, A; Singh, G K
2016-06-01
In the biomedical field, it is necessary to reduce data quantity due to the limited storage of real-time ambulatory and telemedicine systems. Research has long been underway to develop an efficient and simple technique with long-term benefit. This paper presents an algorithm based on singular value decomposition (SVD) and embedded zerotree wavelet (EZW) techniques for ECG signal compression, which deals with the large data volumes of ambulatory systems. The proposed method utilizes a low-rank matrix for initial compression of the two-dimensional (2-D) ECG data array using SVD, and then EZW is applied for final compression. Construction of the 2-D array is a key pre-processing issue for the proposed technique. Three different beat-segmentation approaches are exploited for 2-D array construction, using segmented beat alignment to exploit beat correlation. The proposed algorithm has been tested on the MIT-BIH arrhythmia records, and it was found to be very efficient in compressing different types of ECG signal with low signal distortion under different fidelity assessments. The evaluation results illustrate that the proposed algorithm achieves a compression ratio of 24.25:1 with excellent quality of signal reconstruction, a percentage root-mean-square difference (PRD) of 1.89% for ECG signal Rec. 100, and consumes only 162 bps instead of 3960 bps of uncompressed data. The proposed method is efficient and flexible for compressing different types of ECG signal, and it controls the quality of reconstruction. Simulation results clearly illustrate that the proposed method can play a significant role in saving memory space in health data centres as well as bandwidth in telemedicine-based healthcare systems.
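For reference, the sketch below computes the two figures of merit quoted above under their usual definitions (PRD and compression ratio); the signal is a synthetic stand-in rather than an MIT-BIH record, and the bit counts are simply those quoted in the abstract.

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference between original and reconstructed signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

def compression_ratio(uncompressed_bits, compressed_bits):
    return uncompressed_bits / compressed_bits

# Synthetic example: a noisy sinusoid as the "ECG" and a smoothed copy standing in
# for the reconstruction after SVD + EZW coding.
t = np.linspace(0.0, 1.0, 3600)
ecg = np.sin(2 * np.pi * 5 * t) + 0.02 * np.random.default_rng(0).standard_normal(t.size)
recon = np.convolve(ecg, np.ones(5) / 5, mode="same")
print(f"PRD = {prd(ecg, recon):.2f}%   CR = {compression_ratio(3960, 162):.2f}:1")
```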