GPU Lossless Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.
2014-01-01
Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA®. The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
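A minimal sketch of the kind of band-to-band adaptive prediction the abstract describes, not the actual FL/CCSDS 123 algorithm: the single sign-sign LMS weight and step size `mu` are illustrative assumptions, and the real standard adds local sums, multi-band prediction, and Golomb-family entropy coding of the mapped residuals.

```python
import numpy as np

def predict_residuals(cube, mu=0.01):
    """cube: (bands, rows, cols) integer hyperspectral cube.
    Predict each band from the previous one with one adaptive weight;
    for correlated bands the residuals are small and entropy-code well."""
    residuals = cube.astype(np.int64).copy()   # band 0 is sent as-is
    w = 1.0                                    # single adaptive weight
    for z in range(1, cube.shape[0]):
        for i in range(cube.shape[1]):
            for j in range(cube.shape[2]):
                pred = int(round(w * cube[z - 1, i, j]))
                err = int(cube[z, i, j]) - pred
                residuals[z, i, j] = err
                # sign-sign LMS update keeps the predictor stable and adaptive
                w += mu * np.sign(err) * np.sign(cube[z - 1, i, j])
    return residuals

cube = np.random.randint(0, 4096, size=(4, 16, 16))
res = predict_residuals(cube)   # entropy-code res (e.g., Golomb codes) downstream
```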
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies to achieve high data compression is presented. An existing approach to along-curve compression based on cubic spline approximation is taken and extended by investigating the additional compressibility achievable by exploiting curve-to-curve structure. One of the models under investigation is reported on.
Layered compression for high-precision depth data.
Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen
2015-12-01
With the development of depth data acquisition technologies, access to high-precision depth with more than 8 bits per sample has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the LSBs layer remains in 8-b format after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes because of the error control algorithm.
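A hedged sketch of the two-layer idea above, assuming a plain bit split between layers; the paper's MSBs layer additionally receives error-controlled pixel-domain coding, which is omitted here.

```python
import numpy as np

def split_depth(depth16):
    """depth16: uint16 depth map -> (msb, lsb) uint8 layers,
    each of which fits a standard 8-bit image/video codec."""
    msb = (depth16 >> 8).astype(np.uint8)    # rough depth distribution
    lsb = (depth16 & 0xFF).astype(np.uint8)  # fine depth variation
    return msb, lsb

def merge_depth(msb, lsb):
    return (msb.astype(np.uint16) << 8) | lsb

depth = np.random.randint(0, 2**16, (4, 4), dtype=np.uint16)
msb, lsb = split_depth(depth)
assert np.array_equal(merge_depth(msb, lsb), depth)  # lossless round trip
```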
High-quality lossy compression: current and future trends
NASA Astrophysics Data System (ADS)
McLaughlin, Steven W.
1995-01-01
This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We place all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell-shape, region-shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.
NASA Astrophysics Data System (ADS)
Wan, Tat C.; Kabuka, Mansur R.
1994-05-01
With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user-specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.
High Order Filter Methods for the Non-ideal Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.
Divergence Free High Order Filter Methods for the Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.
Zhang, Yi; Huang, Yi; Zhang, Tengfei; Chang, Huicong; Xiao, Peishuang; Chen, Honghui; Huang, Zhiyu; Chen, Yongsheng
2015-03-25
The broadband and tunable high-performance microwave absorption properties of an ultralight and highly compressible graphene foam (GF) are investigated. Simply via physical compression, the microwave absorption performance can be tuned. A qualified bandwidth coverage of 93.8% (60.5 GHz/64.5 GHz) is achieved for the GF under 90% compressive strain (1.0 mm thickness). This is mainly because of the 3D conductive network. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high-fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10⁻² Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps (typically used in fine-grained water diffusion experiments), we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
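To make the interframe idea concrete, here is a minimal sketch of a quantized linear-extrapolation predictor; the quantization step and the toy data are assumptions, not the paper's parameters, and a real codec would entropy-code the integer residuals.

```python
import numpy as np

def compress_frames(frames, step=0.01):
    """frames: (T, N, 3) float coordinates. Predict each frame by linear
    extrapolation from the two previous *reconstructed* frames, then
    quantize the residual; max reconstruction error is step / 2."""
    recon = np.empty_like(frames)
    codes = []
    for t, frame in enumerate(frames):
        if t == 0:
            pred = np.zeros_like(frame)
        elif t == 1:
            pred = recon[0]
        else:
            pred = 2 * recon[t - 1] - recon[t - 2]   # linear extrapolation
        q = np.round((frame - pred) / step).astype(np.int32)
        codes.append(q)                  # entropy-code these in practice
        recon[t] = pred + q * step       # mirror the decoder's state
    return codes

frames = np.cumsum(0.02 * np.random.randn(10, 100, 3), axis=0)  # toy trajectory
codes = compress_frames(frames)
```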
COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation
NASA Technical Reports Server (NTRS)
Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos
2015-01-01
The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
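A minimal compressive-sensing sketch of the pipeline described above: on-board compression is a single matrix multiply, and ground-side reconstruction solves a sparsity-regularized inverse problem. The ISTA loop, the sizes, and the random sensing matrix are assumptions for illustration; the project's actual reconstruction method is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256                      # scene pixels vs. measurements (4:1)
x = np.zeros(n)
x[rng.choice(n, 20, replace=False)] = rng.normal(size=20)   # sparse scene
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                  # sensing matrix
y = Phi @ x                           # the only computation done "on board"

# Ground-side recovery: ISTA (gradient step + soft threshold) for the
# l1-regularized least-squares problem.
t = 1.0 / np.linalg.norm(Phi, 2) ** 2                       # safe step size
x_hat = np.zeros(n)
for _ in range(500):
    x_hat = x_hat + t * (Phi.T @ (y - Phi @ x_hat))
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - t * 0.01, 0.0)
```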
Zeng, Xianglong; Guo, Hairun; Zhou, Binbin; Bache, Morten
2012-11-19
We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency. Numerical results show that compressed pulses with less than three-cycle duration can be achieved even when the compression factor is very large, and in contrast to standard soliton compression, these compressed pulses have minimal pedestal and high quality factor.
High-speed real-time image compression based on all-optical discrete cosine transformation
NASA Astrophysics Data System (ADS)
Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong
2017-02-01
In this paper, we present a high-speed single-pixel imaging (SPI) system based on all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) which is three orders of magnitude higher than the switching rate of a digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, suggesting broad applications in industrial quality control and biomedical imaging.
Engineering tough, highly compressible, biodegradable hydrogels by tuning the network architecture.
Gu, Dunyin; Tan, Shereen; Xu, Chenglong; O'Connor, Andrea J; Qiao, Greg G
2017-06-20
By precisely tuning the network architecture, tough, highly compressible hydrogels were engineered. The hydrogels were made by interconnecting high-functionality hydrophobic domains through linear tri-block chains consisting of soft hydrophilic middle blocks flanked by flexible hydrophobic blocks. Demonstrating their applicability, efficient encapsulation and prolonged release of hydrophobic drugs were achieved.
Dynamic Experiments and Constitutive Model Performance for Polycarbonate
2014-07-01
[Report excerpt; figure-list fragments] Positive stress is tensile and negative is compressive. Figure 23 shows parameter sensitivity as numerical contours of axial stress; for the no-alpha and no-beta cases at 40 s, an increase in radial compression is observed. The excerpt also references the traditional Taylor cylinder impact experiment, which achieves large strain and high-strain-rate deformation, but under hydrostatic compression.
NASA Astrophysics Data System (ADS)
Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong
2018-03-01
We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift (SSFS) and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage SSFS and (2N − 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.
Fast lossless compression via cascading Bloom filters.
Rozov, Roye; Shamir, Ron; Halperin, Eran
2014-01-01
Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly.
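A toy sketch of the encode/decode idea above, assuming a single Bloom filter: reads are hashed into the filter, and the decoder re-discovers them by sliding a read-length window over the reference and querying membership. BARCODE's cascade of filters for resolving false positives, and its handling of reads absent from the reference, are omitted.

```python
import hashlib

class Bloom:
    """Tiny Bloom filter; the real method cascades several of these."""
    def __init__(self, bits=1 << 20, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.arr = bytearray(bits // 8)
    def _positions(self, s):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{s}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits
    def add(self, s):
        for p in self._positions(s):
            self.arr[p // 8] |= 1 << (p % 8)
    def __contains__(self, s):
        return all(self.arr[p // 8] >> (p % 8) & 1 for p in self._positions(s))

def decode(bloom, reference, read_len):
    """Recover candidate reads by querying every reference window."""
    return {reference[p:p + read_len]
            for p in range(len(reference) - read_len + 1)
            if reference[p:p + read_len] in bloom}

reference = "ACGTACGGTACGTTAGCACGT"
reads = [reference[2:10], reference[5:13], reference[9:17]]
bf = Bloom()
for r in reads:
    bf.add(r)
assert set(reads) <= decode(bf, reference, 8)
```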
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
NASA Astrophysics Data System (ADS)
Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.
2018-01-01
Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a roller element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep-net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
Compression of helium to high pressures and temperatures using a ballistic piston apparatus
NASA Technical Reports Server (NTRS)
Roman, B. P.; Rovel, G. P.; Lewis, M. J.
1971-01-01
Some preliminary experiments are described which were carried out in a high enthalpy laboratory to investigate the compression of helium, a typical shock-tube driver gas, to very high pressures and temperatures by means of a ballistic piston. The purpose of these measurements was to identify any problem areas in the compression process, to determine the importance of real gas effects during this process, and to establish the feasibility of using a ballistic piston apparatus to achieve temperatures in helium in excess of 10,000 K.
RF pulse compression for future linear colliders
NASA Astrophysics Data System (ADS)
Wilson, Perry B.
1995-07-01
Future (nonsuperconducting) linear colliders will require very high values of peak rf power per meter of accelerating structure. The role of rf pulse compression in producing this power is examined within the context of overall rf system design for three future colliders at energies of 1.0-1.5 TeV, 5 TeV, and 25 TeV. In order to keep the average AC input power and the length of the accelerator within reasonable limits, a collider in the 1.0-1.5 TeV energy range will probably be built at an x-band rf frequency, and will require a peak power on the order of 150-200 MW per meter of accelerating structure. A 5 TeV collider at 34 GHz with a reasonable length (35 km) and AC input power (225 MW) would require about 550 MW per meter of structure. Two-beam accelerators can achieve peak powers of this order by applying dc pulse compression techniques (induction linac modules) to produce the drive beam. Klystron-driven colliders achieve high peak power by a combination of dc pulse compression (modulators) and rf pulse compression, with about the same overall rf system efficiency (30-40%) as a two-beam collider. A high gain (6.8) three-stage binary pulse compression system with high efficiency (80%) is described, which (compared to a SLED-II system) can be used to reduce the klystron peak power by about a factor of two, or alternatively, to cut the number of klystrons in half for a 1.0-1.5 TeV x-band collider. For a 5 TeV klystron-driven collider, a high gain, high efficiency rf pulse compression system is essential.
Alaska SAR Facility (ASF5) SAR Communications (SARCOM) Data Compression System
NASA Technical Reports Server (NTRS)
Mango, Stephen A.
1989-01-01
Described are the real-time operational requirements for SARCOM, their translation into a high-speed image data handler and processor that achieves the desired compression ratios, and the selection of a suitable image data compression technique that has the lowest possible fidelity (information) losses and can be implemented with a relatively low arithmetic load on the system.
NASA Astrophysics Data System (ADS)
Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei
2017-12-01
Superelastic graphene aerogel with ultra-high compressibility shows promising potential for compression-tolerant supercapacitor electrodes. However, its specific capacitance is too low for practical applications. Herein, we deposited polyaniline (PANI) into the superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. The graphene/PANI aerogel with an optimized PANI mass content of 63 wt% shows an improved specific capacitance of 713 F g⁻¹ in the three-electrode system. The graphene/PANI aerogel presents a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. All-solid-state supercapacitors were assembled to demonstrate the compression-tolerant ability of the graphene/PANI electrodes. The gravimetric capacitance of the graphene/PANI electrodes reaches 424 F g⁻¹ and retains 96% even at 90% compressive strain. A volumetric capacitance of 65.5 F cm⁻³ is achieved, which is much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to enhance the overall output voltage, suggesting their potential for practical applications.
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
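For contrast with the 1.58 bits/base figure quoted above, here is the baseline fixed 2-bit packing of DNA bases; DNABIT Compress's variable bit codes for exact and reverse repeats are not reproduced here, so this sketch only shows the 2 bits/base starting point it improves on.

```python
# Fixed 2-bit code per base: the naive baseline for DNA compression.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):          # 4 bases per byte
        b = 0
        chunk = seq[i:i + 4]
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(chunk))           # left-align a short final byte
        out.append(b)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(b >> shift) & 0b11])
    return "".join(bases[:n])

assert unpack(pack("ACGTGGA"), 7) == "ACGTGGA"
```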
Lv, Peng; Wang, Yaru; Ji, Chenglong; Yuan, Jiajiao
2017-01-01
Ultra-compressible electrodes with high electrochemical performance, reversible compressibility and extreme durability are in high demand in compression-tolerant energy storage devices. Herein, an ultra-compressible ternary composite was synthesized by successively electrodepositing poly(3,4-ethylenedioxythiophene) (PEDOT) and MnO2 into a superelastic graphene aerogel (SEGA). In the SEGA/PEDOT/MnO2 ternary composite, SEGA provides the compressible backbone and conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; the middle PEDOT layer not only reduces the interface resistance between MnO2 and graphene, but also further reinforces the strength of the graphene cellular walls. The synergistic effect of the three components in the ternary composite electrode leads to high electrochemical performance and good compression-tolerant ability. The gravimetric capacitance of the compressible ternary composite electrodes reaches 343 F g⁻¹ and can retain 97% even at 95% compressive strain. A volumetric capacitance of 147.4 F cm⁻³ is achieved, which is much higher than that of other graphene-based compressible electrodes. This volumetric capacitance is preserved at 80% after 3500 charge/discharge cycles under various compression strains, indicating extreme durability.
Wavelet data compression for archiving high-resolution icosahedral model data
NASA Astrophysics Data System (ADS)
Wang, N.; Bao, J.; Lee, J.
2011-12-01
With the increase of the resolution of global circulation models, it becomes ever more important to develop highly effective solutions to archive the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it can only achieve a limited reduction in data size. Wavelet-transform-based data compression offers significant potential for data size reduction, and it has been shown to be very effective in transmitting data for remote visualization. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on the datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments, for both summer and winter seasons. An icosahedral-grid weather model and highly efficient wavelet data compression software were used for this study. Initial conditions were compressed and input to the model, which was run out to 10 days. The forecast results were then compared to the forecast results from the model run with the original uncompressed initial conditions. Several visual comparisons, as well as statistics from numerical comparisons, are presented. These results indicate that, with specified minimum accuracy losses, wavelet data compression achieves significant data size reduction while maintaining minimal numerical impact on the datasets. In addition, some issues are discussed for increasing archive efficiency while retaining a complete set of metadata for each archived file.
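A minimal illustration of why thresholded wavelet coefficients compress well while keeping a bounded error, using a one-level Haar transform; the study's actual multi-level compressor and its tolerance settings are much more sophisticated than this toy, and the tolerance value here is an assumption.

```python
import numpy as np

def haar_compress(field, tol=1e-3):
    """field: 1-D array of even length -> (approx, sparse detail).
    Zeroing small detail coefficients makes the field compressible
    while bounding the reconstruction error by tol."""
    a = (field[0::2] + field[1::2]) / 2.0    # approximation coefficients
    d = (field[0::2] - field[1::2]) / 2.0    # detail coefficients
    d[np.abs(d) < tol] = 0.0                 # lossy step: drop tiny detail
    return a, d

def haar_reconstruct(a, d):
    out = np.empty(2 * a.size)
    out[0::2], out[1::2] = a + d, a - d
    return out

x = np.sin(np.linspace(0.0, 6.28, 64)) + 1e-4 * np.random.randn(64)
a, d = haar_compress(x, tol=1e-3)
assert np.max(np.abs(haar_reconstruct(a, d) - x)) <= 1e-3
```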
Dempsey, Adam B.; Curran, Scott J.; Wagner, Robert M.
2016-01-14
Many research studies have shown that low temperature combustion in compression ignition engines has the ability to yield ultra-low NOx and soot emissions while maintaining high thermal efficiency. To achieve low temperature combustion, sufficient mixing time between the fuel and air in a globally dilute environment is required, thereby avoiding fuel-rich regions and reducing peak combustion temperatures, which significantly reduces soot and NOx formation, respectively. It has been demonstrated that achieving low temperature combustion with diesel fuel over a wide range of conditions is difficult because of its properties, namely, low volatility and high chemical reactivity. On the contrary, gasoline has a high volatility and low chemical reactivity, meaning it is easier to achieve the amount of premixing time required prior to autoignition to achieve low temperature combustion. In order to achieve low temperature combustion while meeting other constraints, such as low pressure rise rates and maintaining control over the timing of combustion, in-cylinder fuel stratification has been widely investigated for gasoline low temperature combustion engines. The level of fuel stratification is, in reality, a continuum ranging from fully premixed (i.e. homogeneous charge of fuel and air) to heavily stratified, heterogeneous operation, such as diesel combustion. However, to illustrate the impact of fuel stratification on gasoline compression ignition, the authors have identified three representative operating strategies: partial, moderate, and heavy fuel stratification. Thus, this article provides an overview and perspective of the current research efforts to develop engine operating strategies for achieving gasoline low temperature combustion in a compression ignition engine via fuel stratification. In this paper, computational fluid dynamics modeling of the in-cylinder processes during the closed valve portion of the cycle was used to illustrate the opportunities and challenges associated with the various fuel stratification levels.
Method for obtaining large levitation pressure in superconducting magnetic bearings
Hull, John R.
1997-01-01
A method and apparatus for compressing magnetic flux to achieve high levitation pressures. Magnetic flux produced by a magnetic flux source travels through a gap between two high temperature superconducting material structures. The gap has a varying cross-sectional area to compress the magnetic flux, providing an increased magnetic field and correspondingly increased levitation force in the gap.
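A back-of-the-envelope relation, illustrative and not taken from the patent text, for why a narrowing gap raises levitation pressure: conserving flux through a shrinking cross-section increases the field, and magnetic pressure grows as the square of the field.

```latex
% Idealized flux compression through a gap narrowing from A_1 to A_2:
\Phi = B_1 A_1 = B_2 A_2 \quad\Rightarrow\quad B_2 = B_1 \,\frac{A_1}{A_2}
% Magnetic (levitation) pressure scales with the square of the field:
P = \frac{B^2}{2\mu_0} \quad\Rightarrow\quad \frac{P_2}{P_1} = \left(\frac{A_1}{A_2}\right)^{2}
```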
Method for obtaining large levitation pressure in superconducting magnetic bearings
Hull, John R.
1996-01-01
A method and apparatus for compressing magnetic flux to achieve high levitation pressures. Magnetic flux produced by a magnetic flux source travels through a gap between two high temperature superconducting material structures. The gap has a varying cross-sectional area to compress the magnetic flux, providing an increased magnetic field and correspondingly increased levitation force in the gap.
FRESCO: Referential compression of highly similar sequences.
Wandelt, Sebastian; Leser, Ulf
2013-01-01
In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitudes faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios way beyond state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise. Throwing them away using feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
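A sketch of the select-then-binarize pipeline described above; the variance-based importance score is a stand-in for the paper's supervised/unsupervised importance sorting, and the dimensions are toy values.

```python
import numpy as np

def select_and_binarize(X, k):
    """X: (num_images, dim) feature matrix -> (kept indices, packed bits).
    Rank dimensions by an importance score, keep the top k, then
    1-bit quantize (sign binarize) the surviving dimensions."""
    score = X.var(axis=0)                    # assumed importance proxy
    idx = np.argsort(score)[::-1][:k]        # k most "informative" dims
    bits = (X[:, idx] > 0).astype(np.uint8)  # 1-bit quantization
    return idx, np.packbits(bits, axis=1)    # k bits per image

X = np.random.randn(1000, 8192)              # stand-in for FV/VLAD features
idx, codes = select_and_binarize(X, k=2048)
# 2048 bits (256 bytes) per image instead of 8192 float32s (32 KB): 128x smaller.
```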
Compressing DNA sequence databases with coil.
White, W Timothy J; Hendy, Michael D
2008-05-20
Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.
Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram
2018-01-01
This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity (high correlation coefficient), while achieving compression ratio (CR) values as high as ~5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity, leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.
Method for obtaining large levitation pressure in superconducting magnetic bearings
Hull, J.R.
1997-08-05
A method and apparatus are disclosed for compressing magnetic flux to achieve high levitation pressures. Magnetic flux produced by a magnetic flux source travels through a gap between two high temperature superconducting material structures. The gap has a varying cross-sectional area to compress the magnetic flux, providing an increased magnetic field and correspondingly increased levitation force in the gap. 4 figs.
Method for obtaining large levitation pressure in superconducting magnetic bearings
Hull, J.R.
1996-10-08
A method and apparatus are disclosed for compressing magnetic flux to achieve high levitation pressures. Magnetic flux produced by a magnetic flux source travels through a gap between two high temperature superconducting material structures. The gap has a varying cross-sectional area to compress the magnetic flux, providing an increased magnetic field and correspondingly increased levitation force in the gap. 4 figs.
A novel ECG data compression method based on adaptive Fourier decomposition
NASA Astrophysics Data System (ADS)
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings from the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times, with a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
A zero-error operational video data compression system
NASA Technical Reports Server (NTRS)
Kutz, R. L.
1973-01-01
A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.
Compressive sensing in medical imaging
Graff, Christian G.; Sidky, Emil Y.
2015-01-01
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), via the addition of encryption in the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
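A minimal integer-to-integer lifting pair (the S-transform), the kind of reversible building block behind an IWT; this is background illustration, not the paper's specific transform. Integer inputs map to integer coefficients and invert exactly, which is what permits lossless coding.

```python
def s_transform_pair(x0, x1):
    """Forward lifting step: integer detail and floor-average approximation."""
    d = x0 - x1              # integer detail coefficient
    s = x1 + (d >> 1)        # integer approximation (floor of the mean)
    return s, d

def inverse_pair(s, d):
    """Exact integer inverse of s_transform_pair."""
    x1 = s - (d >> 1)
    x0 = d + x1
    return x0, x1

# Perfect reconstruction over a range of signed integer inputs.
for x0 in range(-4, 5):
    for x1 in range(-4, 5):
        assert inverse_pair(*s_transform_pair(x0, x1)) == (x0, x1)
```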
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.
Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images
NASA Astrophysics Data System (ADS)
Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.
2014-03-01
Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
Data-dependent bucketing improves reference-free compression of sequencing reads.
Patro, Rob; Kingsford, Carl
2015-09-01
The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reduce storage and transmission requirements is to compress this sequencing data. We present a novel technique to boost the compression of sequencing reads that is based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes. Mince is written in C++11, is open source and has been made available under the GPLv3 license. It is available at http://www.cs.cmu.edu/∼ckingsf/software/mince. carlk@cs.cmu.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
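A toy version of read bucketing, using the lexicographically smallest k-mer as the bucket key; Mince's actual data-dependent bucket construction and encodings differ, so treat the key choice and k as assumptions. Grouping similar reads adjacently is what lets a downstream general-purpose compressor exploit their redundancy.

```python
def bucket_key(read, k=8):
    """Lexicographically smallest k-mer: similar reads share keys."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def bucket_reads(reads, k=8):
    buckets = {}
    for r in reads:
        buckets.setdefault(bucket_key(r, k), []).append(r)
    # Concatenating bucket by bucket puts similar reads next to each other.
    return [r for key in sorted(buckets) for r in sorted(buckets[key])]

reads = ["ACGTACGTACGT", "CGTACGTACGTA", "TTTTGGGGCCCC"]
print(bucket_reads(reads))   # overlapping reads end up adjacent
```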
NASA Astrophysics Data System (ADS)
Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun
2018-07-01
Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by the method of spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, notably, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
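A minimal sketch of the spectrum-cutting compression step alone (the hyper-chaotic DFrRT encryption stage is omitted, and the retained block size is an assumption) can be written with SciPy's DCT routines:

```python
# Sketch of DCT "spectrum cutting": keep only the low-frequency corner of
# the 2D DCT spectrum, then invert. Encryption is not shown.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
img = rng.random((64, 64))          # stand-in for a normalized image

spec = dctn(img, norm="ortho")
keep = 16                           # retain a 16x16 block: (16/64)^2 = 6.25% of coefficients
cut = np.zeros_like(spec)
cut[:keep, :keep] = spec[:keep, :keep]

recon = idctn(cut, norm="ortho")
print("RMSE:", np.sqrt(np.mean((img - recon) ** 2)))
```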
NASA Astrophysics Data System (ADS)
Sheikh Khalid, Faisal; Bazilah Azmi, Nurul; Natasya Mazenan, Puteri; Shahidan, Shahiron; Ali, Noorwirdawati
2018-03-01
This research focuses on the performance of composite sand cement brick containing recycled concrete aggregate and waste polyethylene terephthalate. The study aims to determine the mechanical properties, such as compressive strength and water absorption, of composite brick containing recycled concrete aggregate (RCA) and polyethylene terephthalate (PET) waste. The brick specimens were prepared using 100% natural sand; the natural sand was then replaced by RCA at 25%, 50% and 75%, with PET proportions of 0.5%, 1.0% and 1.5% by weight of natural sand. Based on the compressive strength results, only RCA 25% with 0.5% PET achieved lower strength than normal bricks, while the others showed higher strength. All design mixes reached strengths greater than 7 N/mm2, as expected. The mix design that achieved the highest compressive strength was 75% RCA with 0.5% PET.
Nyland, Mark A; Lanting, Brent A; Nikolov, Hristo N; Somerville, Lyndsay E; Teeter, Matthew G; Howard, James L
2016-12-01
It is common practice to burr custom holes in revision porous metal cups for screw insertion. The objective of this study was to determine how different hole types affect a surgeon's sense of screw fixation. Porous revision cups were prepared with pre-drilled and custom burred holes. Cups were held in place adjacent to synthetic bone material of varying density. Surgeons inserted screws through the different holes and materials. Surgeon subjective rating, compression, and torque were recorded. The torque achieved was greater (p = 0.002) for screws through custom holes than pre-fabricated holes in low and medium density material, with no difference for high density. Peak compression was greater (p = 0.026) through the pre-fabricated holes only in high density material. Use of burred holes affects the torque generated, and may decrease the amount of cup-acetabulum compression achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robey, H. F.; Smalyuk, V. A.; Milovich, J. L.
A series of indirectly driven capsule implosions has been performed on the National Ignition Facility to assess the relative contributions of ablation-front instability growth vs. fuel compression on implosion performance. Laser pulse shapes for both low- and high-foot pulses were modified to vary ablation-front growth and fuel adiabat, separately and controllably. Three principal conclusions are drawn from this study: (1) It is shown that reducing ablation-front instability growth in low-foot implosions results in a substantial (3-10X) increase in neutron yield with no loss of fuel compression. (2) It is shown that reducing the fuel adiabat in high-foot implosions results in a significant (36%) increase in fuel compression together with a small (10%) increase in neutron yield. (3) Increased electron preheat at higher laser power in high-foot implosions, however, appears to offset the gain in compression achieved by adiabat-shaping at lower power. These results taken collectively bridge the space between the higher compression low-foot results and the higher yield high-foot results.
Compressed air production with waste heat utilization in industry
NASA Astrophysics Data System (ADS)
Nolting, E.
1984-06-01
The centralized power-heat coupling (PHC) technique, using block heating power stations, is presented. Compressed air production with the PHC technique and an internal combustion engine drive achieves a high degree of primary energy utilization. Cost savings of 50% are reached compared to conventional production. The simultaneous utilization of compressed air and heat is especially attractive. A speed-regulated drive via an internal combustion motor gives a further saving of 10% to 20% compared to intermittent operation. The high fuel utilization efficiency (about 80%) leads to payback after two years for operation times of 3000 hr.
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Klimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, making this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
Highly efficient frequency conversion with bandwidth compression of quantum light
Allgaier, Markus; Ansari, Vahid; Sansoni, Linda; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Harder, Georg; Brecht, Benjamin; Silberhorn, Christine
2017-01-01
Hybrid quantum networks rely on efficient interfacing of dissimilar quantum nodes, as elements based on parametric downconversion sources, quantum dots, colour centres or atoms are fundamentally different in their frequencies and bandwidths. Although pulse manipulation has been demonstrated in very different systems, to date no interface exists that provides both an efficient bandwidth compression and a substantial frequency translation at the same time. Here we demonstrate an engineered sum-frequency-conversion process in lithium niobate that achieves both goals. We convert pure photons at telecom wavelengths to the visible range, compressing the bandwidth by a factor of 7.47 and preserving non-classical photon-number statistics. We achieve internal conversion efficiencies of 61.5%, significantly outperforming spectral filtering for bandwidth compression. Our system thus makes the connection between previously incompatible quantum systems as a step towards usable quantum networks. PMID:28134242
A pulse-compression-ring circuit for high-efficiency electric propulsion.
Owens, Thomas L
2008-03-01
A highly efficient, highly reliable pulsed-power system has been developed for use in high power, repetitively pulsed inductive plasma thrusters. The pulsed inductive thruster ejects plasma propellant at a high velocity using a Lorentz force developed through inductive coupling to the plasma. Having greatly increased propellant-utilization efficiency compared to chemical rockets, this type of electric propulsion system may one day propel spacecraft on long-duration deep-space missions. High system reliability and electrical efficiency are extremely important for these extended missions. In the prototype pulsed-power system described here, exceptional reliability is achieved using a pulse-compression circuit driven by both active solid-state switching and passive magnetic switching. High efficiency is achieved using a novel ring architecture that recovers unused energy in a pulse-compression system with minimal circuit loss after each impulse. As an added benefit, voltage reversal is eliminated in the ring topology, resulting in long lifetimes for energy-storage capacitors. System tests were performed using an adjustable inductive load at a voltage level of 3.3 kV, a peak current of 20 kA, and a current switching rate of 15 kA/μs.
NASA Astrophysics Data System (ADS)
Lv, Peng; Tang, Xun; Yuan, Jiajiao; Ji, Chenglong
2017-11-01
Highly compressible electrodes are in high demand in volume-restricted energy storage devices. Superelastic reduced graphene oxide (rGO) aerogels with attractive characteristics are a promising skeleton for compressible electrodes. Herein, a ternary aerogel was prepared by successively electrodepositing polypyrrole (PPy) and MnO2 into a superelastic rGO aerogel. In the rGO/PPy/MnO2 aerogel, the rGO aerogel provides the continuous conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; and the middle PPy layer not only reduces the interface resistance between rGO and MnO2, but also further enhances the mechanical strength of the rGO backbone. The synergistic effect of the three components leads to excellent performance, including high specific capacitance, reversible compressibility, and extreme durability. The gravimetric capacitance of the compressible rGO/PPy/MnO2 aerogel electrodes reaches 366 F g-1 and retains 95.3% of that value even under 95% compressive strain. A volumetric capacitance of 138 F cm-3 is achieved, which is much higher than that of other rGO-based compressible electrodes. This volumetric capacitance is preserved at 85% after 3500 charge/discharge cycles under various compression conditions. This work will pave the way for advanced applications in the area of compressible energy-storage devices meeting the requirements of limited space.
Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine
NASA Astrophysics Data System (ADS)
Moura, A. F.; Wheatley, V.; Jahn, I.
2018-07-01
The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further increasing combustion efficiency.
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
In order to effectively improve the subjective and objective quality of degraded images at low sampling rates, while saving storage space and reducing computational complexity, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, to compressed sensing theory. A small amount of sparse high-frequency information is first obtained in the frequency domain; the TwIST algorithm based on compressed sensing theory is then used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique
NASA Technical Reports Server (NTRS)
Li, Lihua; Coon, Michael; McLinden, Matthew
2013-01-01
Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted, rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short pulsed radars, such as no need of high-power RF circuitry, no need of high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected by range sidelobes may mask the returns from a nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. The traditional tube-based radars use high-voltage power supply/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique to achieve ultra-low range sidelobes so that this technique can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements. By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression technique could bring significant impact on future radar development. The novel feature of this innovation is the non-linear FM (NLFM) waveform design. Traditional linear FM has a limit (-20 log(BT) - 3 dB) for achieving ultra-low range sidelobes in pulse compression. For this study, different combinations of 20- or 40-microsecond chirp pulse width and 2- or 4-MHz chirp bandwidth were used. These are typical operational parameters for airborne or spaceborne weather radars. The NLFM waveform design was then implemented on an FPGA board to generate a real chirp signal, which was then sent to the radar transceiver simulator. The final results have shown significant improvement on sidelobe performance compared to that obtained using a traditional linear FM chirp.
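For readers unfamiliar with pulse compression, the following Python sketch shows plain linear-FM matched filtering using the study's 20-μs / 2-MHz parameters; the NLFM waveform design itself is not reproduced, and plain LFM exhibits the roughly -13 dB peak sidelobe level that motivates it:

```python
# Rough sketch of linear-FM pulse compression: generate a chirp,
# matched-filter it against itself, and report the peak sidelobe level.
import numpy as np

fs = 20e6            # sample rate (assumed)
T = 20e-6            # 20-microsecond pulse, as in the study
B = 2e6              # 2-MHz chirp bandwidth, as in the study
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)

# Matched filtering = correlation with the transmitted waveform.
compressed = np.abs(np.correlate(chirp, chirp, mode="full"))
compressed /= compressed.max()
peak = compressed.argmax()

# First null of the compressed mainlobe is ~fs/B samples from the peak.
main = int(fs / B)
sidelobes = np.concatenate([compressed[:peak - main],
                            compressed[peak + main + 1:]])
print("peak sidelobe:", 20 * np.log10(sidelobes.max()), "dB")
```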
NASA Astrophysics Data System (ADS)
Jinesh, Mathew; MacPherson, William N.; Hand, Duncan P.; Maier, Robert R. J.
2016-05-01
A smart metal component having the potential for high-temperature strain sensing capability is reported. The stainless steel (SS316) structure is made by selective laser melting (SLM). A fiber Bragg grating (FBG) is embedded into a 3D-printed U-groove by high-temperature brazing using a silver-based alloy, achieving an axial FBG compression of 13 millistrain at room temperature. Initial results show that the test component can be used up to 700°C for sensing applications.
Lawlor, Shawn P [Bellevue, WA; Novaresi, Mark A [San Diego, CA; Cornelius, Charles C [Kirkland, WA
2008-02-26
A gas compressor based on the use of a driven rotor having an axially oriented compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp), which forms a supersonic shockwave axially between adjacent strakes. In using this method to compress inlet gas, the supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the gas compression ramp on a strake and the shock capture lip on the adjacent strake, and captures the resultant pressure within the stationary external housing while providing a diffuser downstream of the compression ramp.
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma
2016-11-01
To solve the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is largely reduced because of the combination of compressive sensing and FFT, and the security level of ghost imaging is improved, as assessed against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milovich, J. L., E-mail: milovich1@llnl.gov; Robey, H. F.; Clark, D. S.
Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm², but with significantly lower total neutron yields (between 1.5 × 10¹⁴ and 5.5 × 10¹⁴) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the “high-foot” experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to a deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3-10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm². Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.
Comparison of lossless compression techniques for prepress color images
NASA Astrophysics Data System (ADS)
Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.
1998-12-01
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
Three dimensional range geometry and texture data compression with space-filling curves.
Chen, Xia; Zhang, Song
2017-10-16
This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
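The core encoding step can be sketched as follows, assuming the standard Hilbert d2xy mapping (the full phase normalization and codec pipeline of the paper are omitted): a normalized phase value indexes a position along a 256×256 Hilbert curve, and the resulting (x, y) coordinates become the two 8-bit color channels.

```python
# Sketch of mapping a normalized phase to two 8-bit channels via the
# standard Hilbert curve d2xy algorithm.
def d2xy(order: int, d: int):
    """Convert distance d along a Hilbert curve of side 2**order to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/reflect quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def encode_phase(phase: float):
    """Map a normalized phase in [0, 1) to two 8-bit channel values."""
    d = int(phase * (256 * 256 - 1))
    return d2xy(8, d)

print(encode_phase(0.0), encode_phase(0.5), encode_phase(0.999))
```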
Super high compression of line drawing data
NASA Technical Reports Server (NTRS)
Cooper, D. B.
1976-01-01
Models which can be used to accurately represent the type of line drawings that occur in teleconferencing and transmission for remote classrooms, and which permit considerable data compression, were described. The objective was to encode these pictures in binary sequences of shortest length, such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compressions in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed-font symbols for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be realized by taking a similar approach, or essentially zero-error reproducibility can be achieved but at a lower level of compression.
A review in high early strength concrete and local materials potential
NASA Astrophysics Data System (ADS)
Yasin, A. K.; Bayuaji, R.; Susanto, T. E.
2017-11-01
High early strength concrete is one type of high-performance concrete. A high early strength concrete means that the compressive strength of the concrete during the first 24 hours after site-pouring can achieve structural concrete quality (compressive strength > 21 MPa). There are four important factors that must be considered in the making process: portland cement type, cement content, water-to-cement ratio, and admixture. Consistent with its high performance, the production cost is estimated to be 25 to 30% higher than conventional concrete. One effort to cut the production cost is to utilize local materials. This paper also describes local materials which are abundantly available, cheap, and located in the strategic coastal area of East Java Province, namely the cities of Gresik, Tuban and Bojonegoro. In addition, the application of this study is not limited to large building projects; it also suits small-scale buildings of one to three stories. The performance of this concrete achieved a compressive strength of 27 MPa at the age of 24 hours, which qualifies it for structural use.
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With the increasing demand from users for the extraction of remote sensing image information, it is urgent to significantly enhance the whole system's imaging quality and imaging ability by using an integrated design to achieve a compact structure, low mass and higher attitude maneuverability. At the present stage, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed in different devices. The volume, weight and power consumption of these two units are relatively large, which cannot meet the requirements of a highly mobile remote sensing camera. According to the technical requirements of the high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on a variety of technologies, such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology and image compression technology based on special-purpose chips. This circuit lays a solid foundation for the research of high-mobility remote sensing cameras.
High speed fluorescence imaging with compressed ultrafast photography
NASA Astrophysics Data System (ADS)
Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.
2017-02-01
Fluorescence lifetime imaging is an optical technique that facilitates imaging molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition-rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescence lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to perform single-shot fluorescence lifetime imaging of cells and microspheres.
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.;
2015-01-01
Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1, however future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
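As a conceptual illustration of why such count data compresses well under a wavelet transform, the following Python sketch applies a toy single-level 2D Haar DWT to Poisson-distributed counts (this is not the CCSDS DWT/BPE implemented by the flight ASIC; the count statistics are an assumption):

```python
# Toy single-level 2D Haar DWT on Poisson "count" data: most detail
# coefficients are near zero, which is what a bit-plane encoder exploits.
import numpy as np

rng = np.random.default_rng(4)
counts = rng.poisson(lam=3.0, size=(64, 64)).astype(float)

def haar2d(a: np.ndarray):
    # Average/detail along rows, then along columns.
    lo = (a[:, ::2] + a[:, 1::2]) / 2
    hi = (a[:, ::2] - a[:, 1::2]) / 2
    ll = (lo[::2] + lo[1::2]) / 2   # coarse approximation
    lh = (lo[::2] - lo[1::2]) / 2   # detail subbands
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

ll, lh, hl, hh = haar2d(counts)
detail = np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])
print("detail coefficients within +/-0.5:", np.mean(np.abs(detail) <= 0.5))
```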
Design of light-small high-speed image data processing system
NASA Astrophysics Data System (ADS)
Yang, Jinbao; Feng, Xue; Li, Fei
2015-10-01
A light, small, high-speed image data processing system was designed to meet the demands of image data processing in aerospace. The system was constructed from an FPGA, a DSP and an MCU (micro-controller), implementing video compression of 3-megapixel images at 15 frames per second and real-time return of the compressed images to the upper system. The programmability of the FPGA, a high-performance image compression IC and a configurable MCU were exploited to improve integration. In addition, a rigid-flex board design was introduced and the PCB layout was optimized. As a result, the system achieved miniaturization, light weight and fast heat dissipation. Experiments show that the system's functions were designed correctly and worked stably. In conclusion, the system can be widely used in the area of light-small imaging.
Nelson, Matthew; Moorhead, Amy; Yost, Dana; Whorton, Adrian
2012-01-01
We present a case of successful resuscitation from cardiac arrest after 25 minutes of ventricular fibrillation (VF) secondary to peripartum cardiomyopathy. This case highlights a rare disease, but also, more importantly, the successful use of the five links of survival: early access to 9-1-1, early cardiopulmonary resuscitation (CPR), early defibrillation, early advanced life support, and postresuscitative care. We also demonstrate the importance of high-quality resuscitation practices in order to achieve a successful outcome. Manual compressions can be performed at a guidelines-compliant rate. With training, users are able to achieve high compression fractions. Pre/post shock delays can be minimized to further increase compression fraction. Nationally, CPR interruptions are often long. We recommend closer attention to uninterrupted 2-minute cycles of CPR, minimizing delays in CPR through training, and a focus on a closely choreographed approach. User review of transthoracic impedance feedback data should play a vital role in a cardiac arrest quality-improvement program.
Methods for compressible fluid simulation on GPUs using high-order finite differences
NASA Astrophysics Data System (ADS)
Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer
2017-08-01
We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
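The stencil arithmetic itself is straightforward; a CPU reference sketch of a sixth-order central first derivative on a periodic 1D grid (the paper's solver is a CUDA kernel operating on 3D grids, not reproduced here) looks like this:

```python
# Reference sketch of the sixth-order central difference used by such
# solvers, on a periodic 1D grid.
import numpy as np

def d_dx_6th(f: np.ndarray, dx: float) -> np.ndarray:
    """Sixth-order accurate first derivative with periodic boundaries."""
    return (45.0 * (np.roll(f, -1) - np.roll(f, 1))
            - 9.0 * (np.roll(f, -2) - np.roll(f, 2))
            + 1.0 * (np.roll(f, -3) - np.roll(f, 3))) / (60.0 * dx)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
# Verify against d/dx sin(x) = cos(x); error should be tiny at this resolution.
err = np.max(np.abs(d_dx_6th(np.sin(x), dx) - np.cos(x)))
print("max error:", err)
```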
Data compression using Chebyshev transform
NASA Technical Reports Server (NTRS)
Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)
2007-01-01
The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
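A minimal sketch of the underlying idea, using NumPy's Chebyshev utilities rather than the patented implementation (the segment length and series order are illustrative assumptions):

```python
# Hedged sketch: fit a smooth time-series segment with a low-order
# Chebyshev series and keep only the coefficients.
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1, 1, 256)
signal = np.sin(3 * t) + 0.2 * np.cos(9 * t)   # smooth stand-in for telemetry

order = 15                                      # 256 samples -> 16 coefficients
coeffs = C.chebfit(t, signal, order)
recon = C.chebval(t, coeffs)

print("compression ratio: %.1f:1" % (t.size / coeffs.size))
print("max abs error: %.2e" % np.max(np.abs(signal - recon)))
```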
A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.
Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi
2015-01-01
In this work we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). This strategy is based on 3D Cartesian sampling with random switching of the frequency encoding direction with other k-space directions. Two 3D sampling strategies are presented. In the first strategy, the frequency encoding direction is randomly switched with one of the two phase encoding directions. In the second strategy, the frequency encoding direction is randomly chosen among all the directions of k-space. These strategies can lower the coherence of the acquisition, in order to produce reduced aliasing artifacts and to achieve a better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the typical smoothing of CS due to the limited sampling of high-frequency locations. We demonstrated by means of simulations that the proposed acquisition strategies outperformed the standard Compressive Sensing acquisition. This results in a better quality of the reconstructed images and in a greater achievable acceleration.
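For contrast, a toy 2D Cartesian undersampling mask of the conventional kind (not the authors' 3D direction-switching scheme; the variable-density weighting is an assumption) can be generated as follows:

```python
# Toy CS-MRI undersampling: fully sample along the frequency-encode axis,
# randomly pick phase-encode lines biased toward the k-space centre.
import numpy as np

rng = np.random.default_rng(2)
n_pe, accel = 256, 4                       # phase-encode lines, acceleration

centre = np.abs(np.arange(n_pe) - n_pe / 2)
prob = np.exp(-centre / 40.0)              # variable-density weighting (assumed)
prob /= prob.sum()
picked = rng.choice(n_pe, size=n_pe // accel, replace=False, p=prob)

mask = np.zeros((n_pe, n_pe), dtype=bool)
mask[picked, :] = True                     # each chosen line fully read out
print("sampling fraction:", mask.mean())
```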
Phononic glass: a robust acoustic-absorption material.
Jiang, Heng; Wang, Yuren
2012-08-01
In order to achieve strong wide-band acoustic absorption under high hydrostatic pressure, an interpenetrating network structure is introduced into a locally resonant phononic crystal to fabricate a type of phononic composite material called "phononic glass." Underwater acoustic absorption coefficient measurements show that the material exhibits high underwater sound absorption coefficients, over 0.9 at 12-30 kHz. Moreover, the quasi-static compressive behavior shows that the phononic glass has a compressive strength over 5 MPa, which is crucial for underwater applications.
NASA Astrophysics Data System (ADS)
Tawfik, Walid
2015-06-01
In this work, we experimentally achieved the generation of white-light laser pulses of few-cycle fs duration using a neon-filled hollow-core fiber. The observed pulses reached 6 fs at a repetition rate of 1 kHz using 2.5 mJ, 31-fs input pulses. Pulse compression is achieved through the supercontinuum produced in the static neon-filled hollow fiber, while dispersion compensation is achieved by five pairs of chirped mirrors. We showed that the gas pressure can be used to continuously vary the bandwidth from 350 nm to 900 nm. Furthermore, the applied technique allows for straightforward tuning of the pulse duration via the gas pressure whilst maintaining near-transform-limited pulses with constant output energy, thereby reducing the complications introduced by chirped pulses. Through measurements of the transmission through the fiber as a function of gas pressure, a high throughput exceeding 60% was achieved. Adaptive pulse compression is achieved by using the spectral phase obtained from a spectral phase interferometry for direct electric-field reconstruction (SPIDER) measurement as feedback for a liquid-crystal spatial light modulator (SLM). The spectral phase of these supercontinua is found to be extremely stable over several hours. This allowed us to demonstrate successful compression to pulses as short as 5.2 fs with a controlled wide spectral bandwidth, which could be used to excite different states in complicated molecules at once.
Study of radar pulse compression for high resolution satellite altimetry
NASA Technical Reports Server (NTRS)
Dooley, R. P.; Nathanson, F. E.; Brooks, L. W.
1974-01-01
Pulse compression techniques are studied which are applicable to a satellite altimeter having a topographic resolution of ±10 cm. A systematic design procedure is used to determine the system parameters. The performance of an optimum, maximum likelihood processor is analysed, which provides the basis for modifying the standard split-gate tracker to achieve improved performance. Bandwidth considerations lead to the recommendation of a full deramp STRETCH pulse compression technique followed by an analog filter bank to separate range returns. The implementation of the recommended technique is examined.
High-speed and high-ratio referential genome compression.
Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan
2017-11-01
The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 217 to 82 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust to deal with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genome Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source codes of our algorithm are freely available for academic and non-commercial use. They can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
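The 2-bit encoding scheme that such tools build on can be sketched in a few lines (the greedy hash-table matching stage, which does the real referential work, is not shown):

```python
# Minimal sketch of 2-bit nucleotide packing: four bases per byte.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

print(pack("ACGTACGT").hex())   # 8 bases -> 2 bytes: '1b1b'
```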
Cooper, G J; Townend, D J; Cater, S R; Pearce, B P
1991-01-01
Materials have been applied to the thoracic wall of anaesthetised experimental animals exposed to blast overpressure to investigate the coupling of direct stress waves into the thorax and the relative contribution of compressive stress waves and gross thoracic compression to lung injury. The ultimate purpose of the work is to develop effective personal protection from the primary effects of blast overpressure--efficient protection can only be achieved if the injury mechanism is identified and characterized. Foam materials acted as acoustic couplers and resulted in a significant augmentation of the visceral injury; decoupling and elimination of injury were achieved by application of a high acoustic impedance layer on top of the foam. In vitro experiments studying stress wave transmission from air through various layers into an anechoic water chamber showed a significant increase in power transmitted by the foams, principally at high frequencies. Material such as copper or resin bonded Kevlar incorporated as a facing upon the foam achieved substantial decoupling at high frequencies--low frequency transmission was largely unaffected. An acoustic transmission model replicated the coupling of the blast waves into the anechoic water chamber. The studies suggest that direct transmission of stress waves plays a dominant role in lung parenchymal injury from blast loading and that gross thoracic compression is not the primary injury mechanism. Acoustic decoupling principles may therefore be employed to reduce the direct stress coupled into the body and thus reduce the severity of lung injury--the most simple decoupler is a high acoustic impedance material as a facing upon a foam, but decoupling layers may be optimized using acoustic transmission models. Conventional impacts producing high body wall velocities will also lead to stress wave generation and transmission--stress wave effects may dominate the visceral response to the impact with direct compression and shear contributing little to the aetiology of the injury.
A Fiber-Optic System Generating Pulses of High Spectral Density
NASA Astrophysics Data System (ADS)
Abramov, A. S.; Zolotovskii, I. O.; Korobko, D. A.; Fotiadi, A. A.
2018-03-01
A cascade fiber-optic system that generates pulses of high spectral density by using the effect of nonlinear spectral compression is proposed. It is demonstrated that the shape of the pulse envelope substantially influences the degree of compression of its spectrum. In so doing, maximum compression is achieved for parabolic pulses. The cascade system includes an optical fiber exhibiting normal dispersion that decreases along the fiber length, thereby ensuring that the pulse envelope evolves toward a parabolic shape, along with diffraction gratings and a fiber spectral compressor. Based on computer simulation, we determined parameters of cascade elements leading to maximum spectral density of radiation originating from a subpicosecond laser pulse of medium energy.
Heavy flavours production in quark-gluon plasma formed in high energy nuclear reactions
NASA Technical Reports Server (NTRS)
Kloskinski, J.
1985-01-01
Results on the compression and temperatures of nuclear fireballs and on the relative yields of strange and charmed hadrons are given. The results show that temperatures above 300 MeV and large compressions are unlikely to be achieved in an average heavy-ion collision. In consequence, thermal production of charm is low. Strange particle production is, however, substantial and indicates a clear temperature-threshold behavior.
Fast heating of ultrahigh-density plasma as a step towards laser fusion ignition.
Kodama, R; Norreys, P A; Mima, K; Dangor, A E; Evans, R G; Fujita, H; Kitagawa, Y; Krushelnick, K; Miyakoshi, T; Miyanaga, N; Norimatsu, T; Rose, S J; Shozaki, T; Shigemori, K; Sunahara, A; Tampo, M; Tanaka, K A; Toyama, Y; Yamanaka, T; Zepf, M
2001-08-23
Modern high-power lasers can generate extreme states of matter that are relevant to astrophysics, equation-of-state studies and fusion energy research. Laser-driven implosions of spherical polymer shells have, for example, achieved an increase in density of 1,000 times relative to the solid state. These densities are large enough to enable controlled fusion, but to achieve energy gain a small volume of compressed fuel (known as the 'spark') must be heated to temperatures of about 10⁸ K (corresponding to thermal energies in excess of 10 keV). In the conventional approach to controlled fusion, the spark is both produced and heated by accurately timed shock waves, but this process requires both precise implosion symmetry and a very large drive energy. In principle, these requirements can be significantly relaxed by performing the compression and fast heating separately; however, this 'fast ignitor' approach also suffers drawbacks, such as propagation losses and deflection of the ultra-intense laser pulse by the plasma surrounding the compressed fuel. Here we employ a new compression geometry that eliminates these problems; we combine production of compressed matter in a laser-driven implosion with picosecond-fast heating by a laser pulse timed to coincide with the peak compression. Our approach therefore permits efficient compression and heating to be carried out simultaneously, providing a route to efficient fusion energy production.
Real-time compression of raw computed tomography data: technology, architecture, and benefits
NASA Astrophysics Data System (ADS)
Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert
2009-02-01
Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT(TM) achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamp/sec using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits above lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
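Golomb-Rice coding, one of the baseline coders in the comparison, is simple to sketch; the snippet below (with a fixed Rice parameter k, whereas practical coders adapt it per block) codes signed prediction residuals:

```python
# Sketch of Golomb-Rice coding of signed residuals: zigzag-map to unsigned,
# then emit a unary quotient and a k-bit binary remainder.
def rice_encode(value: int, k: int) -> str:
    u = 2 * value if value >= 0 else -2 * value - 1   # zigzag mapping
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

residuals = [0, -1, 3, 2, -4]
bits = "".join(rice_encode(v, k=2) for v in residuals)
print(bits, "->", len(bits), "bits for", len(residuals), "samples")
```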
Compressed sensing for high-resolution nonlipid suppressed ¹H FID MRSI of the human brain at 9.4T.
Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke
2018-04-29
The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE ¹H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed ¹H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data, rather a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
Merging-compression formation of high temperature tokamak plasma
NASA Astrophysics Data System (ADS)
Gryaznevich, M. P.; Sykes, A.
2017-07-01
Merging-compression is a solenoid-free plasma formation method used in spherical tokamaks (STs). Two plasma rings are formed and merged via magnetic reconnection into one plasma ring, which is then radially compressed to form the ST configuration. Plasma currents of several hundred kA and plasma temperatures in the keV range have been produced using this method; however, until recently there was no full understanding of the merging-compression formation physics. In this paper we explain in detail, for the first time, all stages of the merging-compression plasma formation. This method will be used to create ST plasmas in the compact (R ~ 0.4-0.6 m) high-field, high-current (3 T/2 MA) ST40 tokamak. Moderate extrapolation from the available experimental data suggests the possibility of achieving plasma current ~2 MA, and 10 keV range temperatures at densities ~1-5 × 1020 m-3, bringing ST40 plasmas into burning-plasma-relevant (alpha-particle heating) conditions directly from the plasma formation. Issues connected with this approach for ST40 and future ST reactors are discussed.
A gigawatt level repetitive rate adjustable magnetic pulse compressor.
Li, Song; Gao, Jing-Ming; Yang, Han-Wu; Qian, Bao-Liang; Li, Ze-Xin
2015-08-01
In this paper, a gigawatt-level repetitive-rate adjustable magnetic pulse compressor is investigated both numerically and experimentally. The device has the advantages of high power level, high repetition-rate capability, and long-lifetime reliability. Importantly, dominant parameters including the saturation time, the peak voltage, and even the compression ratio can potentially be adjusted continuously and reliably, which significantly expands the applicable area of the device and of generators based on it. Specifically, a two-stage adjustable magnetic pulse compressor, utilized for charging the pulse forming network of a high power pulse generator, is designed with different compression ratios of 25 and 18 through an optimized design process. Equivalent circuit analysis shows that the modification of the compression ratio can be achieved by simply changing the number of turns of the winding. At the same time, increasing the inductance of the grounded inductor will decrease the peak voltage and delay the charging process. Based on these analyses, an adjustable compressor was built and studied experimentally in both single-shot and repetitive-rate modes. Pulses with a peak voltage of 60 kV and an energy per pulse of 360 J were obtained in the experiment. The rise times of the pulses were compressed from 25 μs to 1 μs and from 18 μs to 1 μs, respectively, at a repetition rate of 20 Hz with good repeatability. Experimental results show reasonable agreement with the analyses.
Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.
Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin
2011-08-21
Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, many difficulties remain because sufficient protein structural and functional information is lacking. Methods that predict PPIs from amino acid sequences alone are therefore highly desirable. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with redundancy in the sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict PPIs in the yeast Saccharomyces cerevisiae from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space, taking the sparsity of the original signal into account. What makes compressed sensing particularly attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than are usually considered necessary under traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional discrete protein models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.
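For illustration, the encoding half of such a scheme can be as simple as a random Gaussian projection of the sequence features. The sketch below is a generic compressed sensing encoder under that assumption; the measurement count m and all names are illustrative, and the decoding side would recover the signal with a sparse solver such as OMP or l1 minimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress_features(X, m):
    """Project n-dimensional sequence feature vectors onto m << n
    random measurements, the standard compressed sensing encoder.

    X : (n_samples, n_features) matrix of protein sequence descriptors
    Returns the compressed features and the measurement matrix Phi."""
    n_features = X.shape[1]
    phi = rng.normal(size=(m, n_features)) / np.sqrt(m)  # i.i.d. Gaussian rows
    return X @ phi.T, phi
```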
[Neurovascular compression of the medulla oblongata: a rare cause of secondary hypertension].
Nádas, Judit; Czirják, Sándor; Igaz, Péter; Vörös, Erika; Jermendy, György; Rácz, Károly; Tóth, Miklós
2014-05-25
Compression of the rostral ventrolateral medulla oblongata is one of the rarely identified causes of refractory hypertension. In patients with severe, intractable hypertension caused by neurovascular compression, neurosurgical decompression should be considered. The authors present the history of a 20-year-old man with severe hypertension. After excluding other possible causes of secondary hypertension, the underlying cause of his high blood pressure was identified from the demonstration of neurovascular compression by magnetic resonance angiography and from increased sympathetic activity (sinus tachycardia) during the high blood pressure episodes. Due to frequent hypertensive crises, surgical decompression was recommended, and it was performed with the placement of an isograft between the brainstem and the left vertebral artery. In the first six months after the operation, the patient's blood pressure could be kept in the normal range with significantly reduced doses of antihypertensive medication. Repeat magnetic resonance angiography confirmed the cessation of brainstem compression. After six months, increased blood pressure returned periodically, but to a smaller extent and less frequently. Based on the result of magnetic resonance angiography performed 22 months after surgery, re-operation was considered. According to previous literature data, long-term success can be achieved in only one-third of patients after surgical decompression. In the majority of patients, surgery results in a significant decrease in blood pressure, increased efficiency of antihypertensive therapy, and a decrease in the frequency of episodes of highly increased blood pressure; thus, a significant improvement in the patient's quality of life can be achieved. The case of this patient is an example of the latter scenario.
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects, as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content, and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all previous studies, numerical metrics were used to measure the compression error and, through it, the coding quality. Digital hologram reconstructions are highly speckled, and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, at low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases it is of high interest to know how much lossy compression can be achieved while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with the Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless, although physically the compression is lossy. It was found that 4- to 7.5-fold compression can be obtained with the above methods without any perceptible change in the appearance of the video sequences.
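The staircase procedure mentioned above can be sketched as a simple 1-up/1-down adaptive loop; the step factor, starting ratio, and reversal count below are illustrative assumptions, not the parameters used in the study.

```python
def staircase_threshold(perceives_difference, start_ratio=2.0, step=1.25,
                        n_reversals=8, max_ratio=100.0):
    """1-up/1-down staircase estimate of the visually lossless threshold.

    perceives_difference(ratio) -> bool runs one observer trial at the
    given compression ratio (True if a difference is visible). The
    threshold is estimated as the mean ratio over the reversal points."""
    ratio, going_up, reversals = start_ratio, True, []
    while len(reversals) < n_reversals and ratio < max_ratio:
        seen = perceives_difference(ratio)
        direction = not seen                # keep raising while invisible
        if direction != going_up:           # direction change = reversal
            reversals.append(ratio)
            going_up = direction
        ratio = ratio * step if direction else ratio / step
    return sum(reversals) / max(len(reversals), 1)
```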
Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery
NASA Technical Reports Server (NTRS)
Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj
1994-01-01
Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. The analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
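As a sketch of a DPCM front end of the kind described above, the code below predicts each pixel from its left neighbour and maps the signed residuals to non-negative integers ready for a Rice/Golomb-style universal noiseless coder; the one-neighbour predictor and the mapping are common textbook choices assumed here, not necessarily those of the paper.

```python
import numpy as np

def dpcm_residuals(band):
    # Horizontal DPCM: predict each pixel from its left neighbour.
    pred = np.zeros_like(band, dtype=np.int32)
    pred[:, 1:] = band[:, :-1]
    return band.astype(np.int32) - pred

def map_to_nonnegative(residuals):
    # Zig-zag mapping of signed residuals (0, -1, 1, -2, ...) to
    # non-negative integers, as used before Rice/Golomb coding.
    return np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)
```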
A Comparison of LBG and ADPCM Speech Compression Techniques
NASA Astrophysics Data System (ADS)
Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.
Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. All speech has a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. The methods were implemented in MATLAB 7.0. They gave good results and performance in compressing the speech, and listening tests showed that efficient and high-quality coding is achieved.
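A minimal sketch of LBG codebook training by codeword splitting is given below (in Python rather than the MATLAB used in the paper); the split perturbation, Lloyd iteration count, and codebook size are illustrative assumptions.

```python
import numpy as np

def lbg_codebook(vectors, size, eps=1e-3, n_lloyd=20):
    """Train a vector quantization codebook by LBG codeword splitting.

    vectors : (n, d) training vectors (e.g. frames of speech samples)
    size    : target codebook size (power of two)"""
    book = vectors.mean(axis=0, keepdims=True)
    while len(book) < size:
        # Split every codeword into a slightly perturbed pair.
        book = np.vstack([book * (1 + eps), book * (1 - eps)])
        for _ in range(n_lloyd):            # Lloyd refinement iterations
            d = ((vectors[:, None, :] - book[None]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            book = np.array([vectors[nearest == k].mean(axis=0)
                             if (nearest == k).any() else book[k]
                             for k in range(len(book))])
    return book
```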
100J Pulsed Laser Shock Driver for Dynamic Compression Research
NASA Astrophysics Data System (ADS)
Wang, X.; Sethian, J.; Bromage, J.; Fochs, S.; Broege, D.; Zuegel, J.; Roides, R.; Cuffney, R.; Brent, G.; Zweiback, J.; Currier, Z.; D'Amico, K.; Hawreliak, J.; Zhang, J.; Rigg, P. A.; Gupta, Y. M.
2017-06-01
Logos Technologies and the Laboratory for Laser Energetics (LLE, University of Rochester), in partnership with Washington State University, have designed, built, and deployed a one-of-a-kind 100J pulsed UV (351 nm) laser system to perform real-time x-ray diffraction and imaging experiments in laser-driven compression experiments at the Dynamic Compression Sector (DCS) at the Advanced Photon Source, Argonne National Laboratory. The laser complements the other dynamic compression drivers at DCS. The laser system features beam smoothing for 2-D spatially uniform loading of samples and four highly reproducible temporal profiles (total pulse duration: 5-15 ns) to accommodate a wide variety of scientific needs. Other pulse shapes can be achieved as the experimental needs evolve. Timing of the laser pulse is highly precise (<200 ps) to allow accurate synchronization of the x-rays with the dynamic compression event. Details of the laser system, its operating parameters, and representative results will be presented. Work supported by DOE/NNSA.
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression that better compresses irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves several steps, including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods. Thus both irregular ECG signals and QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, the algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
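The 2-D conversion step can be sketched as follows, assuming R-peak positions are already available from a QRS detector; the beat width and the use of linear interpolation for length equalization are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def ecg_to_image(signal, r_peaks, width=256):
    """Cut an ECG at detected R peaks, resample every beat to a common
    length (length equalization), and sort beats by original period so
    adjacent rows are similar, yielding a smoother, easier-to-compress
    2-D image."""
    beats, periods = [], []
    for a, b in zip(r_peaks[:-1], r_peaks[1:]):
        beat = signal[a:b]
        periods.append(len(beat))
        # Length equalization by linear interpolation.
        beats.append(np.interp(np.linspace(0, len(beat) - 1, width),
                               np.arange(len(beat)), beat))
    order = np.argsort(periods)             # period sorting
    return np.asarray(beats)[order], order
```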
Feasibility of spatial frequency-domain imaging for monitoring palpable breast lesions
NASA Astrophysics Data System (ADS)
Robbins, Constance M.; Raghavan, Guruprasad; Antaki, James F.; Kainerstorfer, Jana M.
2017-12-01
In breast cancer diagnosis and therapy monitoring, there is a need for frequent, noninvasive evaluation of disease progression. Breast tumors differ from healthy tissue in mechanical stiffness as well as optical properties, which allows optical methods to detect and monitor breast lesions noninvasively. Spatial frequency-domain imaging (SFDI) is a reflectance-based diffuse optical method that can yield two-dimensional images of absolute optical properties of tissue with an inexpensive and portable system, although depth penetration is limited. Since the absorption coefficient of breast tissue is relatively low and the tissue is quite flexible, there is an opportunity to compress the tissue to bring stiff, palpable breast lesions within the detection range of SFDI. Sixteen breast tissue-mimicking phantoms were fabricated containing stiffer, more highly absorbing tumor-mimicking inclusions of varying absorption contrast and depth. These phantoms were imaged with an SFDI system at five levels of compression. An increase in absorption contrast was observed with compression, and reliable detection of each inclusion was achieved when compression was sufficient to bring the inclusion center within ~12 mm of the phantom surface. At the highest compression level, contrasts achieved with this system were comparable to those measured with single source-detector near-infrared spectroscopy.
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene, from one or multiple sensors, to improve image resolution and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images whose conventional acquisition involves large amounts of redundant data and ignores the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.
Compression of a mixed antiproton and electron non-neutral plasma to high densities
NASA Astrophysics Data System (ADS)
Aghion, Stefano; Amsler, Claude; Bonomi, Germano; Brusa, Roberto S.; Caccia, Massimo; Caravita, Ruggero; Castelli, Fabrizio; Cerchiari, Giovanni; Comparat, Daniel; Consolati, Giovanni; Demetrio, Andrea; Di Noto, Lea; Doser, Michael; Evans, Craig; Fanì, Mattia; Ferragut, Rafael; Fesel, Julian; Fontana, Andrea; Gerber, Sebastian; Giammarchi, Marco; Gligorova, Angela; Guatieri, Francesco; Haider, Stefan; Hinterberger, Alexander; Holmestad, Helga; Kellerbauer, Alban; Khalidova, Olga; Krasnický, Daniel; Lagomarsino, Vittorio; Lansonneur, Pierre; Lebrun, Patrice; Malbrunot, Chloé; Mariazzi, Sebastiano; Marton, Johann; Matveev, Victor; Mazzotta, Zeudi; Müller, Simon R.; Nebbia, Giancarlo; Nedelec, Patrick; Oberthaler, Markus; Pacifico, Nicola; Pagano, Davide; Penasa, Luca; Petracek, Vojtech; Prelz, Francesco; Prevedelli, Marco; Rienaecker, Benjamin; Robert, Jacques; Røhne, Ole M.; Rotondi, Alberto; Sandaker, Heidi; Santoro, Romualdo; Smestad, Lillian; Sorrentino, Fiodor; Testera, Gemma; Tietje, Ingmari C.; Widmann, Eberhard; Yzombard, Pauline; Zimmer, Christian; Zmeskal, Johann; Zurlo, Nicola; Antonello, Massimiliano
2018-04-01
We describe a multi-step "rotating wall" compression of a mixed cold antiproton-electron non-neutral plasma in a 4.46 T Penning-Malmberg trap developed in the context of the AEḡIS experiment at CERN. Such traps are routinely used for the preparation of cold antiprotons suitable for antihydrogen production. A tenfold antiproton radius compression has been achieved, with a minimum antiproton radius of only 0.17 mm. We describe the experimental conditions necessary to perform such a compression: minimizing the tails of the electron density distribution is paramount to ensure that the antiproton density distribution follows that of the electrons. Such electron density tails are remnants of rotating wall compression and in many cases can remain unnoticed. We observe that the compression dynamics of a pure electron plasma behaves the same way as that of a mixed antiproton and electron plasma. Thanks to this optimized compression method and the high single-shot antiproton catching efficiency, we observe for the first time cold and dense non-neutral antiproton plasmas with particle densities n ≥ 10¹³ m⁻³, which pave the way for efficient pulsed antihydrogen production in AEḡIS.
Resource efficient data compression algorithms for demanding, WSN based biomedical applications.
Antonopoulos, Christos P; Voros, Nikolaos S
2016-02-01
During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network (WSN) technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms are a significant shortcoming for such demanding application scenarios. Although data compression can mitigate these deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as newly proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, thus the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect on compression latency relative to the increased compression rate. The proposed schemes offer a considerable advantage, especially in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm combines a highly competitive level of compression with minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as compression rate is concerned. Copyright © 2015 Elsevier Inc. All rights reserved.
High-power ultrashort fiber laser for solar cells micromachining
NASA Astrophysics Data System (ADS)
Lecourt, J.-B.; Duterte, C.; Liegeois, F.; Lekime, D.; Hernandez, Y.; Giannone, D.
2012-02-01
We report on a high-power ultra-short fiber laser for thin-film solar cell micromachining. The laser is based on a chirped pulse amplification (CPA) scheme. The pulses are stretched to hundreds of picoseconds prior to amplification and can be compressed down to a picosecond at high energy. The repetition rate is adjustable from 100 kHz to 1 MHz and the optical average output power is close to 13 W (before compression). The whole setup is fully fibred, except for the compressor, realized with bulk gratings, resulting in a compact and reliable solution for cold ablation.
Real-Time Data Filtering and Compression in Wide Area Simulation Networks
1992-10-02
[Only fragments of this report's text were recovered.] Achieving the real-time linkage among multiple, geographically-distant local area networks that support distributed... The hardware is programmable, easily adaptable, and yields a high compression rate. A prototype 2-micron VLSI chip...
Compression of computer generated phase-shifting hologram sequence using AVC and HEVC
NASA Astrophysics Data System (ADS)
Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic
2013-09-01
With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and a stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15000 kbps.
Pulse compression using a tapered microstructure optical fiber.
Hu, Jonathan; Marks, Brian S; Menyuk, Curtis R; Kim, Jinchae; Carruthers, Thomas F; Wright, Barbara M; Taunay, Thierry F; Friebele, E J
2006-05-01
We calculate the pulse compression in a tapered microstructure optical fiber with four layers of holes. We show that the primary limitation on pulse compression is the loss due to mode leakage. As a fiber's diameter decreases due to the tapering, so does the air-hole diameter, and at a sufficiently small diameter the guided-mode loss becomes unacceptably high. For the four-layer geometry we considered, a compression factor of 10 can be achieved for a pulse with an initial FWHM duration of 3 ps in a tapered fiber that is 28 m long. We find that there is little difference in the pulse compression between a linear taper profile and a Gaussian taper profile. More layers of air holes allow the pitch to decrease considerably before losses become unacceptable, but only a moderate increase in the degree of pulse compression is obtained.
Geostationary Imaging FTS (GIFTS) Data Processing: Measurement Simulation and Compression
NASA Technical Reports Server (NTRS)
Huang, Hung-Lung; Revercomb, H. E.; Thom, J.; Antonelli, P. B.; Osborne, B.; Tobin, D.; Knuteson, R.; Garcia, R.; Dutcher, S.; Li, J.
2001-01-01
GIFTS (Geostationary Imaging Fourier Transform Spectrometer), a forerunner of next-generation geostationary satellite weather observing systems, will be built to fly on the NASA EO-3 geostationary orbit mission in 2004 to demonstrate the use of large-area detector arrays and readouts. Timely, high-spatial-resolution images and quantitative soundings of clouds, water vapor, temperature, and pollutants of the atmosphere for weather prediction and air quality monitoring will be achieved. GIFTS is novel in providing many scientific returns that traditionally could only be achieved by separate advanced imaging and sounding systems. GIFTS' ability to obtain half-hourly, high-vertical-density winds over the full earth disk is revolutionary. However, these new technologies bring many challenges for data transmission, archiving, and geophysical data processing. In this paper, we focus on data volume and downlink issues by conducting a GIFTS data compression experiment. We discuss the scenario of using principal component analysis as a foundation for atmospheric data retrieval and for compression of uncalibrated and un-normalized interferograms. The effects of compression on signal degradation and noise reduction in the interferogram and spectral domains are highlighted. A simulation system developed to model the GIFTS instrument measurements is described in detail.
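The principal-component compression mentioned above can be sketched as a truncated projection of the interferogram stack; the SVD-based basis computation and the component count here are illustrative, not the GIFTS processing chain.

```python
import numpy as np

def pca_compress(interferograms, n_components):
    """Compress a stack of interferograms by projecting onto the
    leading principal components of the mean-removed data.

    interferograms : (n_spectra, n_points) array"""
    mean = interferograms.mean(axis=0)
    centered = interferograms - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]               # leading components
    coeffs = centered @ basis.T             # low-dimensional representation
    return mean, basis, coeffs

def pca_reconstruct(mean, basis, coeffs):
    # Approximate inverse of pca_compress.
    return mean + coeffs @ basis
```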
Continuous manufacturing of extended release tablets via powder mixing and direct compression.
Ervasti, Tuomas; Simonaho, Simo-Pekka; Ketolainen, Jarkko; Forsberg, Peter; Fransson, Magnus; Wikström, Håkan; Folestad, Staffan; Lakio, Satu; Tajarobi, Pirjo; Abrahmsén-Alami, Susanna
2015-11-10
The aim of the current work was to explore continuous dry powder mixing and direct compression for the manufacture of extended release (ER) matrix tablets. The study was carried out with a challenging formulation design comprising ibuprofen compositions of varying particle size and a relatively low amount of the matrix former hydroxypropyl methylcellulose (HPMC). Standard-grade HPMC (CR) was compared to a recently developed direct-compressible grade (DC2). The work demonstrates that ER tablets with the desired quality attributes could be manufactured via integrated continuous mixing and direct compression. The most robust tablet quality (weight, assay, tensile strength) was obtained using a high mixer speed together with large-particle-size ibuprofen and HPMC DC2, owing to good powder flow. At low mixer speed it was more difficult to achieve high-quality low-dose tablets. Notably, with HPMC DC2 the processing conditions had a significant effect on drug release. Longer processing times and/or faster mixer speeds were needed to achieve robust release with compositions containing DC2 compared with those containing CR. This work confirms the importance of balancing process parameters and material properties to achieve consistent product quality. It also shows that adaptive control is a pivotal means of controlling continuous manufacturing systems. Copyright © 2015 Elsevier B.V. All rights reserved.
Reconstructing high-dimensional two-photon entangled states via compressive sensing
Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan
2014-01-01
Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems.
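A minimal sketch of the singular-value-thresholding idea is shown below: a proximal gradient loop fits the measured expectation values while the SVT step drives the estimate toward a low-rank density matrix. The operator stack, step sizes, and the renormalization to unit trace are illustrative assumptions (a full implementation would also enforce positivity).

```python
import numpy as np

def svt(mat, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    u, s, vh = np.linalg.svd(mat, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vh

def reconstruct_state(ops, y, dim, tau=0.02, lr=0.5, n_iter=300):
    """Estimate rho from measurements y_k = Tr(A_k rho).

    ops : (m, dim, dim) stack of Hermitian measurement operators A_k
    y   : (m,) measured expectation values"""
    rho = np.eye(dim, dtype=complex) / dim          # maximally mixed start
    for _ in range(n_iter):
        pred = np.einsum('kij,ji->k', ops, rho).real
        grad = np.einsum('k,kij->ij', pred - y, ops.conj())
        rho = svt(rho - lr * grad / len(y), tau)    # low-rank promoting step
        rho = (rho + rho.conj().T) / 2              # keep Hermitian
        rho /= max(np.trace(rho).real, 1e-12)       # renormalize to Tr = 1
    return rho
```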
NASA Astrophysics Data System (ADS)
Mojica, Edson; Pertuz, Said; Arguello, Henry
2017-12-01
One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.
A Study on Homogeneous Charge Compression Ignition Gasoline Engines
NASA Astrophysics Data System (ADS)
Kaneko, Makoto; Morikawa, Koji; Itoh, Jin; Saishu, Youhei
A new engine concept consisting of HCCI combustion for low and midrange loads and spark ignition combustion for high loads was introduced. The timing of the intake valve closing was adjusted to alter the negative valve overlap and effective compression ratio to provide suitable HCCI conditions. The effect of mixture formation on auto-ignition was also investigated using a direct injection engine. As a result, HCCI combustion was achieved with a relatively low compression ratio when the intake air was heated by internal EGR. The resulting combustion was at a high thermal efficiency, comparable to that of modern diesel engines, and produced almost no NOx emissions or smoke. The mixture stratification increased the local A/F concentration, resulting in higher reactivity. A wide range of combustible A/F ratios was used to control the compression ignition timing. Photographs showed that the flame filled the entire chamber during combustion, reducing both emissions and fuel consumption.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Compression in wearable sensor nodes: impacts of node topology.
Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther
2014-04-01
Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online and low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission/on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can highly benefit from the use of compressive sensing and now can achieve power consumptions comparable to, or better than, the use of local memory.
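On the encoding side, a compressive sensing sensor node only has to compute one matrix-vector product per signal frame. The sketch below assumes a sparse binary measurement matrix, which keeps the multiply-accumulate count cheap on a small MCU; the density, frame length, and seed handling are illustrative, not those of the MSP430 implementation.

```python
import numpy as np

def cs_encode(frame, m, seed=1, density=0.1):
    """On-node compressive sensing encoder: y = Phi @ x.

    A sparse binary Phi keeps the per-frame MAC count (and hence MCU
    energy) low; the receiver regenerates Phi from the shared seed and
    reconstructs x off-node with a sparse solver."""
    rng = np.random.default_rng(seed)
    phi = (rng.random((m, len(frame))) < density).astype(np.int8)
    return phi @ np.asarray(frame)
```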
NASA Astrophysics Data System (ADS)
Wibowo; Fadillah, Y.
2018-03-01
Efficiency in construction work is very important. Concrete that is easy to work and that rapidly achieves its service strength largely determines the level of efficiency. In this research, we studied the optimization of accelerator usage in achieving compressive strength of concrete as a function of time. Adding 0.3%-2.3% of accelerator by weight of cement has the positive effect of faster hardening of the concrete; however, the speed of strength gain over time also influences the filling-ability parameters of self-compacting concrete. An accelerator composition matched to the standard range of the filling-ability parameters of HSSCC will provide useful guidance for producers in the ready-mix concrete industry.
JPEG XS call for proposals subjective evaluations
NASA Astrophysics Data System (ADS)
McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit
2017-09-01
In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was denominated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup, the evaluation process and summarizes the obtained results which were achieved in the context of the JPEG XS standardization process.
Compressed air-assisted solvent extraction (CASX) for metal removal.
Li, Chi-Wang; Chen, Yi-Ming; Hsiao, Shin-Tien
2008-03-01
A novel process, compressed air-assisted solvent extraction (CASX), was developed to generate micro-sized solvent-coated air bubbles (MSAB) for metal extraction. Through pressurization of solvent with compressed air followed by release of the air-oversaturated solvent into metal-containing wastewater, MSAB are generated instantaneously. The enormous surface area of the MSAB makes the extraction process extremely fast and achieves a very high aqueous/solvent weight ratio (A/S ratio). The CASX process completely removed Cr(VI) from acidic electroplating wastewater at an A/S ratio of 115 and an extraction time of less than 10 s. When synthetic wastewater containing 50 mg l⁻¹ of Cd(II) was treated, A/S ratios higher than 714 and 1190 could be achieved using solvent with extractant/diluent weight ratios of 1:1 and 5:1, respectively. Also, MSAB have very different physical properties, such as size and density, compared to emulsified solvent droplets, making separation and recovery of solvent from the treated effluent very easy.
152 W average power Tm-doped fiber CPA system.
Stutzki, Fabian; Gaida, Christian; Gebhardt, Martin; Jansen, Florian; Wienke, Andreas; Zeitner, Uwe; Fuchs, Frank; Jauregui, Cesar; Wandt, Dieter; Kracht, Dietmar; Limpert, Jens; Tünnermann, Andreas
2014-08-15
A high-power thulium (Tm)-doped fiber chirped-pulse amplification system emitting a record compressed average output power of 152 W and a 4 MW peak power is demonstrated. This result is enabled by utilizing Tm-doped photonic crystal fibers with mode-field diameters of 35 μm, which mitigate detrimental nonlinearities, exhibit slope efficiencies of more than 50%, and allow a pump-power-limited average output power of 241 W to be reached. The high compression efficiency was achieved by using multilayer dielectric gratings with diffraction efficiencies higher than 98%.
Time-resolved compression of a capsule with a cone to high density for fast-ignition laser fusion.
Theobald, W; Solodov, A A; Stoeckl, C; Anderson, K S; Beg, F N; Epstein, R; Fiksel, G; Giraldez, E M; Glebov, V Yu; Habara, H; Ivancic, S; Jarrott, L C; Marshall, F J; McKiernan, G; McLean, H S; Mileham, C; Nilson, P M; Patel, P K; Pérez, F; Sangster, T C; Santos, J J; Sawada, H; Shvydky, A; Stephens, R B; Wei, M S
2014-12-12
The advent of high-intensity lasers enables us to recreate and study the behaviour of matter under the extreme densities and pressures that exist in many astrophysical objects. It may also enable us to develop a power source based on laser-driven nuclear fusion. Achieving such conditions usually requires a target that is highly uniform and spherically symmetric. Here we show that it is possible to generate high densities in a so-called fast-ignition target that consists of a thin shell whose spherical symmetry is interrupted by the inclusion of a metal cone. Using picosecond-time-resolved X-ray radiography, we show that we can achieve areal densities in excess of 300 mg cm(-2) with a nanosecond-duration compression pulse--the highest areal density ever reported for a cone-in-shell target. Such densities are high enough to stop MeV electrons, which is necessary for igniting the fuel with a subsequent picosecond pulse focused into the resulting plasma.
A Wireless Headstage for Combined Optogenetics and Multichannel Electrophysiological Recording.
Gagnon-Turcotte, Gabriel; LeChasseur, Yoan; Bories, Cyril; Messaddeq, Younes; De Koninck, Yves; Gosselin, Benoit
2017-02-01
This paper presents a wireless headstage with real-time spike detection and data compression for combined optogenetics and multichannel electrophysiological recording. The proposed headstage, which is intended to perform both optical stimulation and electrophysiological recordings simultaneously in freely moving transgenic rodents, is entirely built with commercial off-the-shelf components and includes 32 recording channels and 32 optical stimulation channels. It can detect, compress and transmit full action potential waveforms over 32 channels in parallel and in real time using an embedded digital signal processor based on a low-power field-programmable gate array and a MicroBlaze microprocessor softcore. This processor implements a complete digital spike detector featuring a novel adaptive threshold based on a sigma-delta control loop, and a wavelet data compression module using a new dynamic coefficient re-quantization technique that achieves large compression ratios with higher signal quality. Simultaneous optical stimulation and recording were performed in vivo using an optrode featuring 8 microelectrodes and 1 implantable fiber coupled to a 465-nm LED, in the somatosensory cortex and the hippocampus of a transgenic mouse expressing channelrhodopsin (Thy1::ChR2-YFP line 4) under anesthetized conditions. Experimental results show that the proposed headstage can trigger neuron activity while collecting, detecting and compressing single-cell, microvolt-amplitude activity from multiple channels in parallel, achieving overall compression ratios above 500. This is the first reported high-channel-count wireless optogenetic device providing simultaneous optical stimulation and recording. Measurements show that the proposed headstage can achieve a true positive detection rate of up to 100% for signal-to-noise ratios (SNR) down to 15 dB, and up to 97.28% at an SNR as low as 5 dB. The implemented prototype features a lifespan of up to 105 minutes and uses a lightweight (2.8 g) and compact rigid-flex printed circuit board.
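The adaptive threshold described above can be illustrated with a first-order sigma-delta loop: the noise estimate is nudged by a fixed step toward the rectified signal level, and a spike is flagged when the signal exceeds a multiple of that estimate. The step size and threshold multiplier below are illustrative assumptions, not the headstage's fixed-point parameters.

```python
def detect_spikes(samples, k=4.0, step=0.01):
    """Spike detection with a sigma-delta style adaptive threshold."""
    noise, spikes = 0.0, []
    for i, s in enumerate(samples):
        mag = abs(s)
        noise += step if mag > noise else -step   # 1-bit feedback update
        if mag > k * max(noise, 1e-9):            # adaptive threshold test
            spikes.append(i)
    return spikes
```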
Operations and maintenance in the glass container industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbieri, D.; Jacobson, D.
1999-07-01
Compressed air is a significant electrical end-use at most manufacturing facilities, and few industries utilize compressed air to the extent of the glass container industry. Unfortunately, compressed air is often a significant source of wasted energy because many customers view it as a low-maintenance system. In the case of the glass container industry, compressed air is a mission-critical system used for driving production machinery, blowing glass, cooling plungers and product, and packaging. Leakage totaling 10% of total compressed air capacity is not uncommon, and leakage rates upwards of 40% have been observed. Even though energy savings from repairing compressed air leaks can be substantial, regular maintenance procedures are often not in place for compressed air systems. In order to achieve future savings in the compressed air end-use, O and M programs must make a special effort to educate customers on the significant energy impacts of regular compressed air system maintenance. This paper will focus on the glass industry, its reliance on compressed air, and the unique savings potential in the glass container industry. Through a technical review of the glass production process, this paper will identify compressed air as a highly significant electrical consumer in these facilities and present ideas on how to produce and deliver compressed air in a more efficient manner. It will also examine a glass container manufacturer with extremely high savings potential in compressed air systems, but little initiative to establish and perform compressed air maintenance due to an "if it works, don't mess with it" maintenance philosophy. Finally, this paper will address the economic benefit of compressed air maintenance in this and other manufacturing industries.
New Algorithms and Lower Bounds for Sequential-Access Data Compression
NASA Astrophysics Data System (ADS)
Gagie, Travis
2009-02-01
This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows us a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for each region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
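The entropy-driven segmentation can be sketched as follows: compute a first-order entropy per block and bin it against thresholds, after which a rule base assigns each class the remapping and coding technique expected to compress it best. The block size and thresholds below are illustrative assumptions.

```python
import numpy as np

def local_entropy(block):
    # First-order entropy in bits/pixel of an 8-bit image block.
    counts = np.bincount(block.ravel(), minlength=256)
    p = counts[counts > 0] / block.size
    return float(-(p * np.log2(p)).sum())

def segment_by_entropy(image, block=32, thresholds=(2.0, 4.0)):
    """Label each block as low/medium/high entropy (0/1/2); a rule base
    then selects a coding technique per class."""
    rows, cols = image.shape[0] // block, image.shape[1] // block
    labels = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            tile = image[i*block:(i+1)*block, j*block:(j+1)*block]
            labels[i, j] = int(np.searchsorted(thresholds, local_entropy(tile)))
    return labels
```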
Main drive optimization of a high-foot pulse shape in inertial confinement fusion implosions
NASA Astrophysics Data System (ADS)
Wang, L. F.; Ye, W. H.; Wu, J. F.; Liu, Jie; Zhang, W. Y.; He, X. T.
2016-12-01
While progress towards hot-spot ignition has been made by achieving an alpha-heating-dominated state in high-foot implosion experiments [Hurricane et al., Nat. Phys. 12, 800 (2016)] on the National Ignition Facility, improvements are needed to increase the fuel compression and thereby enhance the neutron yield. A strategy is proposed to improve the fuel compression through the recompression of a shock/compression wave generated at the end of the main drive portion of a high-foot pulse shape. Two methods for the peak-pulse recompression, namely the decompression-and-recompression (DR) scheme and a simple recompression scheme, are investigated and compared. Radiation hydrodynamic simulations confirm that peak-pulse recompression can clearly improve fuel compression without significantly compromising implosion stability. In particular, when the convergent DR shock is tuned to encounter the divergent shock from the capsule center at a suitable position, not only the neutron yield but also the stability of the stagnating hot spot can be noticeably improved, compared to conventional high-foot implosions [Hurricane et al., Phys. Plasmas 21, 056314 (2014)].
Compressed Sensing for Resolution Enhancement of Hyperpolarized 13C Flyback 3D-MRSI
Hu, Simon; Lustig, Michael; Chen, Albert P.; Crane, Jason; Kerr, Adam; Kelley, Douglas A.C.; Hurd, Ralph; Kurhanewicz, John; Nelson, Sarah J.; Pauly, John M.; Vigneron, Daniel B.
2008-01-01
High polarization of nuclear spins in the liquid state through dynamic nuclear polarization has enabled the direct monitoring of 13C metabolites in vivo at very high signal-to-noise ratio, allowing for rapid assessment of tissue metabolism. The abundant SNR afforded by this hyperpolarization technique makes high-resolution 13C 3D-MRSI feasible. However, the number of phase encodes that can be fit into the short acquisition time for hyperpolarized imaging limits spatial coverage and resolution. To take advantage of the high SNR available from hyperpolarization, we have applied compressed sensing to achieve a factor of 2 enhancement in spatial resolution without increasing acquisition time or decreasing coverage. In this paper, the design and testing of compressed sensing suited for a flyback 13C 3D-MRSI sequence are presented. The key to this design was the undersampling of spectral k-space using a novel blipped scheme, thus taking advantage of the considerable sparsity in typical hyperpolarized 13C spectra. Phantom tests validated the accuracy of the compressed sensing approach and initial mouse experiments demonstrated in vivo feasibility.
Lessons Learned in the High-Speed Aerodynamic Research Programs of the NACA/NASA
NASA Technical Reports Server (NTRS)
Spearman, M. Leroy
2004-01-01
The achievement of flight with manned, powered, heavier-than-air aircraft in 1903 marked the beginning of a new era in transportation. A special advantage of aircraft was speed. However, when an aircraft penetrates the air at very high speeds, the disturbed air is compressed and there are changes in the density, pressure, and temperature of the air. These compressibility effects change the aerodynamic characteristics of an aircraft and introduce problems in drag, stability, and control. Many aircraft designed in the post-World War II era were plagued by the effects of compressibility. Accordingly, the study of the aerodynamic behavior of aircraft, spacecraft, and missiles at high speed became a major part of the research activity of the NACA/NASA. The intent of the research was to determine the causes of, and provide some solutions for, the aerodynamic problems resulting from the effects of compressibility. The purpose of this paper is to review some of the high-speed aerodynamic research work conducted at the Langley Research Center from the viewpoint of the author, who has been active in much of the effort.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
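The core matching step of such a scheme can be sketched as a search of the already-coded region for a block within the allowed distortion. The brute-force scan below is only for clarity (the paper uses k-d trees and related structures to make this fast), and the block size and distortion budget are illustrative.

```python
import numpy as np

def find_match(image, coded_rows, y, x, size=8, max_dist=12):
    """Search the growing database (pixels above row `coded_rows`) for a
    size x size block approximating image[y:y+size, x:x+size] within a
    per-pixel distortion budget; return a copy pointer or None."""
    target = image[y:y+size, x:x+size].astype(np.int32)
    for yy in range(max(coded_rows - size, 0)):
        for xx in range(image.shape[1] - size + 1):
            cand = image[yy:yy+size, xx:xx+size].astype(np.int32)
            if np.abs(cand - target).max() <= max_dist:
                return yy, xx          # emit an (offset) copy token
    return None                        # fall back to literal coding
```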
Evaluation on Compressive Characteristics of Medical Stents Applied by Mesh Structures
NASA Astrophysics Data System (ADS)
Hirayama, Kazuki; He, Jianmei
2017-11-01
There are concerns about strength reduction and fatigue fracture due to stress concentration in currently used medical stents. To address these problems, meshed stents, i.e., stents applied with mesh structures, are of interest for achieving long life and high strength in medical stents. The purpose of this study is to design basic mesh shapes and obtain three-dimensional (3D) meshed stent models for mechanical property evaluation. The influence of the introduced design variables on the compressive characteristics of the meshed stent models is evaluated through finite element analysis using the ANSYS Workbench code. The analytical results show that compressive stiffness changes periodically with compression direction, so average results are introduced as the mean compressive stiffness of the meshed stents. Second, the compressive flexibility of meshed stents can be improved by increasing the angle in proportion to the arm length of the basic mesh shape. Increasing the number of basic mesh shapes arranged in the stent's circumferential direction tends to increase the compressive rigidity of the meshed stent. Finally, reducing the mesh line width is found to be effective in improving the compressive flexibility of meshed stents.
Loaded delay lines for future RF pulse compression systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.M.; Wilson, P.B.; Kroll, N.M.
1995-05-01
The peak power delivered by the klystrons in the NLCTA (Next Linear Collider Test Accelerator) now under construction at SLAC is enhanced by a factor of four in a SLED-II type of RF pulse compression system (pulse width compression ratio of six). To achieve the desired output pulse duration of 250 ns, a delay line constructed from a 36 m length of circular waveguide is used. Future colliders, however, will require even higher peak power and larger compression factors, which favors a more efficient binary pulse compression approach. Binary pulse compression, however, requires a line whose delay time is approximately proportional to the compression factor. To reduce the length of these lines to manageable proportions, periodically loaded delay lines are being analyzed using a generalized scattering matrix approach. One issue under study is the possibility of propagating two TE0 modes, one with a high group velocity and one with a group velocity of the order of 0.05c, for use in a single-line binary pulse compression system. Particular attention is paid to time-domain pulse degradation and to Ohmic losses.
A Computer Based Program to Improve Reading and Mathematics Scores for High School Students.
ERIC Educational Resources Information Center
Bond, Carole L.; And Others
A study examined the effects on reading achievement, mathematics achievement, and ACT scores when computer-based instruction (CBI) was compressed into a 6-week period of time. In addition, the effects of learning style and receptive language deficits on these scores were studied. Computer-based instruction is a primary source of instruction that…
NASA Technical Reports Server (NTRS)
Tserng, Hua-Quen; Ketterson, Andrew; Saunier, Paul; McCarty, Larry; Davis, Steve
1998-01-01
The design, fabrication, and performance of K-band high-efficiency, linear power pHEMT amplifiers implemented in an Embedded Transmission Line (ETL) MMIC configuration with unthinned GaAs substrate and topside grounding are reported. A three-stage amplifier achieved a power-added efficiency of 40.5% with 264 mW output at 20.2 GHz. The linear gain is 28.5 dB with a 1-dB gain compression output power of 200 mW and 31% power-added efficiency. The carrier-to-third-order intermodulation ratio is approx. 20 dBc at the 1-dB compression point. An RF functional yield of more than 90% has been achieved.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality.
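The variable-block-size partition can be sketched as a recursive quadtree split driven by a local complexity measure; block standard deviation is used below as a stand-in for the paper's local fractal dimension, and the sizes and threshold are illustrative (a square, power-of-two subband is assumed).

```python
import numpy as np

def quadtree_blocks(band, y=0, x=0, size=None, min_size=4, thresh=5.0):
    """Split a wavelet subband into variable-size blocks: complex
    regions get small blocks, smooth regions large ones."""
    if size is None:
        size = band.shape[0]                 # assume square subband
    block = band[y:y+size, x:x+size]
    if size <= min_size or block.std() < thresh:
        return [(y, x, size)]                # leaf: one VQ block
    half = size // 2
    out = []
    for dy in (0, half):
        for dx in (0, half):
            out += quadtree_blocks(band, y + dy, x + dx, half,
                                   min_size, thresh)
    return out
```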
Choong, Chwee-Lin; Shim, Mun-Bo; Lee, Byoung-Sun; Jeon, Sanghun; Ko, Dong-Su; Kang, Tae-Hyung; Bae, Jihyun; Lee, Sung Hoon; Byun, Kyung-Eun; Im, Jungkyun; Jeong, Yong Jin; Park, Chan Eon; Park, Jong-Jin; Chung, U-In
2014-06-04
A stretchable resistive pressure sensor is achieved by coating a compressible substrate with a highly stretchable electrode. The substrate contains an array of microscale pyramidal features, and the electrode comprises a polymer composite. When the pressure-induced geometrical change experienced by the electrode is maximized at 40% elongation, a sensitivity of 10.3 kPa^-1 is achieved. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sriraam, N.
2012-01-01
Developments of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, the Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with less encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238
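A minimal sketch of the predict-then-entropy-code pipeline, under stand-in assumptions: a fixed second-order linear predictor replaces the paper's neural-network predictors, zlib replaces the combined Lempel-Ziv/arithmetic coder, and the synthetic signal is illustrative only:

```python
# Prediction + residual entropy coding sketch for a 1-D signal (illustrative).
import numpy as np, zlib

rng = np.random.default_rng(1)
t = np.arange(10_000)
eeg = (50 * np.sin(2 * np.pi * t / 180) + rng.normal(0, 3, t.size)).astype(np.int16)

# Second-order predictor x_hat[n] = 2*x[n-1] - x[n-2]; residuals are small ints.
pred = 2 * eeg[1:-1].astype(np.int32) - eeg[:-2].astype(np.int32)
resid = (eeg[2:].astype(np.int32) - pred).astype(np.int16)

raw_size = eeg.nbytes
coded_size = len(zlib.compress(resid.tobytes(), 9)) + 4   # + 2 seed samples
print(f"lossless CR ~ {raw_size / coded_size:.2f}")
```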
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has a significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high-throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding are able to guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio is achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated. There is no waste of energy in the transmission; even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput and low-energy-consumption data collection for wireless sensor networks.
High-speed reconstruction of compressed images
NASA Astrophysics Data System (ADS)
Cox, Jerome R., Jr.; Moore, Stephen M.
1990-07-01
A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.
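One way such an exactly invertible 8-bit representation can work (our guess at the flavor of the scheme, not the paper's pipeline): if a 10-bit image uses at most 256 distinct values, coding LUT indices is error-free:

```python
# Exactly invertible 8-bit representation of >8-bit data (illustrative guess):
# code each pixel as an index into a per-image lookup table of 10-bit values.
import numpy as np

rng = np.random.default_rng(2)
levels = np.sort(rng.choice(1024, size=200, replace=False))   # 10-bit values in use
img10 = rng.choice(levels, size=(512, 512)).astype(np.uint16)

lut, inv = np.unique(img10, return_inverse=True)              # LUT + 8-bit indices
img8 = inv.reshape(img10.shape).astype(np.uint8)

recon = lut[img8]                                             # pipeline-style expand
assert np.array_equal(recon, img10)                           # error-free
```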
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
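A minimal sketch of the block-matching step the second paper parallelizes (exhaustive search with a sum-of-absolute-differences criterion; block size, search range, and test frames are our assumptions):

```python
# Exhaustive block-matching sketch: find the displacement of one block between
# frames by minimum sum of absolute differences (illustrative parameters).
import numpy as np

def match_block(prev, curr, y, x, B=16, R=8):
    """Best (dy, dx) in [-R, R] for the BxB block of `curr` at (y, x)."""
    block = curr[y:y+B, x:x+B].astype(np.int32)
    best, best_d = None, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy+B <= prev.shape[0] and xx+B <= prev.shape[1]:
                sad = np.abs(prev[yy:yy+B, xx:xx+B].astype(np.int32) - block).sum()
                if best is None or sad < best:
                    best, best_d = sad, (dy, dx)
    return best_d

rng = np.random.default_rng(3)
f0 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
f1 = np.roll(f0, (2, -3), axis=(0, 1))   # content shifted down 2, left 3
print(match_block(f0, f1, 24, 24))       # expect (-2, 3): source lies up/right
```

In a parallel implementation, each (dy, dx) candidate (or each block) maps naturally to an independent processing element, which is the structure the paper exploits.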
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives, including high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reusemore » framework could acquire tens of times acceleration in the resource-limited environment compared to on-the-fly particle tracing, and keep controllable information loss. Moreover, our method could provide fast integral curve retrieval for more complex data, such as unstructured mesh data.« less
Magnetized Target Fusion At General Fusion: An Overview
NASA Astrophysics Data System (ADS)
Laberge, Michel; O'Shea, Peter; Donaldson, Mike; Delage, Michael; Fusion Team, General
2017-10-01
Magnetized Target Fusion (MTF) involves compressing an initial magnetically confined plasma on a timescale faster than the thermal confinement time of the plasma. If near-adiabatic compression is achieved, a volumetric compression of 350× or more of a 500 eV target plasma would achieve a final plasma temperature exceeding 10 keV. Interesting fusion gains could be achieved provided the compressed plasma has sufficient density and dwell time. General Fusion (GF) is developing a compression system using pneumatic pistons to collapse a cavity formed in liquid metal containing a magnetized plasma target. A low-cost driver, straightforward heat extraction, a good tritium breeding ratio and excellent neutron protection could lead to a practical power plant. GF (65 employees) has an active plasma R&D program including both full-scale and reduced-scale plasma experiments and simulation of both. Although pneumatically driven compression of full-scale plasmas is the end goal, present compression studies use reduced-scale plasmas and chemically accelerated aluminum liners. We will review results from our plasma target development, motivate and review the results of dynamic compression field tests and briefly describe the work to date on the pneumatic driver front.
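As a rough consistency check on these numbers (our own back-of-envelope, assuming ideal adiabatic compression of a monatomic plasma with $\gamma = 5/3$, which the abstract does not state):

$$ T_f = T_i \, C^{\gamma-1} = 500\ \text{eV} \times 350^{2/3} \approx 25\ \text{keV}, $$

comfortably above the 10 keV goal; real compressions fall short of the ideal adiabat, which is why the near-adiabatic requirement is emphasized.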
NASA Astrophysics Data System (ADS)
Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David
2012-04-01
The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event compressed classification finds an event of interest, ℓ1-norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
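A small sketch of why compressed-domain statistics can drive low-power classification (our illustration, not the paper's method): a random Gaussian projection approximately preserves signal energy, so a simple energy detector can operate on far fewer samples:

```python
# Compressed-domain detection sketch: a random Gaussian projection roughly
# preserves signal energy, so an AE "event" detector can run on the
# compressed measurements y = Phi @ x (rates and sizes are illustrative).
import numpy as np

rng = np.random.default_rng(4)
N, M = 4096, 256                               # 16x fewer measurements
Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))    # scaled so E||Phi x||^2 = ||x||^2

x_quiet = rng.normal(0, 0.01, N)               # background noise
x_event = x_quiet.copy()
x_event[1000:1032] += 5 * np.hanning(32)       # short AE-like burst (sparse)

for name, x in [("quiet", x_quiet), ("event", x_event)]:
    y = Phi @ x
    print(f"{name}: ||x||^2={x @ x:9.2f}  ||y||^2={y @ y:9.2f}")
```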
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remote sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, having area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remote sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remote sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remote sensed hyperspectral image. The main advantage of the proposed methodology for compressing remote sensed hyperspectral images is that the compression process, which must be performed on-board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained in the simulations corroborate the benefits of the proposed methodology.
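A minimal numerical sketch of the proposed two-branch on-board degradation (cube size, decimation factor, and band grouping are our assumptions; the ground-side fusion step is omitted):

```python
# Two-branch on-board degradation sketch (sizes illustrative): branch 1
# spatially decimates the hyperspectral cube, branch 2 spectrally averages
# bands into a multispectral image; fusion happens on the ground.
import numpy as np

rng = np.random.default_rng(5)
hsi = rng.random((128, 128, 200))                  # rows x cols x bands

d = 4                                              # spatial decimation factor
lr_hsi = hsi.reshape(128 // d, d, 128 // d, d, 200).mean(axis=(1, 3))

g = 25                                             # bands averaged per MS channel
hr_msi = hsi.reshape(128, 128, 200 // g, g).mean(axis=3)

sent = lr_hsi.size + hr_msi.size
print(f"fixed compression ratio ~ {hsi.size / sent:.1f}:1")
```

Because d and g are chosen before acquisition, the compression ratio is fixed in advance, which matches the advantage claimed in the abstract.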
Wavelet compression of noisy tomographic images
NASA Astrophysics Data System (ADS)
Kappeler, Christian; Mueller, Stefan P.
1995-09-01
3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we will refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
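A minimal sketch of the transform-and-threshold stage using PyWavelets (the decomposition level and threshold rule are our assumptions; Poisson noise stands in for PET photon noise):

```python
# Wavelet-thresholding sketch in the spirit of the study (parameters such as
# the threshold are illustrative, not the paper's).
import numpy as np
import pywt

rng = np.random.default_rng(6)
img = rng.poisson(20, (128, 128)).astype(float)     # photon-noise-like image

coeffs = pywt.wavedec2(img, 'db12', level=2)        # Daubechies-12 wavelets
arr, slices = pywt.coeffs_to_array(coeffs)

thr = 3 * np.median(np.abs(arr))                    # illustrative threshold
kept = np.abs(arr) > thr
arr_t = np.where(kept, arr, 0.0)

recon = pywt.waverec2(
    pywt.array_to_coeffs(arr_t, slices, output_format='wavedec2'), 'db12')
print(f"coefficients kept: {kept.mean():.1%} (higher sparsity -> more compression)")
```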
A Streaming PCA VLSI Chip for Neural Data Compression.
Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi
2017-12-01
Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction errors and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction errors and 3.05-µW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
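A streaming-PCA sketch in the spirit of sample-by-sample feature extraction (Oja's rule for the leading component; this is a generic illustration, not the chip's algorithm):

```python
# Streaming PCA sketch via Oja's rule: the leading component is updated one
# sample at a time, a chip-friendly alternative to batch PCA (illustrative).
import numpy as np

rng = np.random.default_rng(7)
true_dir = np.array([0.8, 0.6])
samples = rng.normal(size=(5000, 1)) * true_dir + rng.normal(0, 0.1, (5000, 2))

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for x in samples:                        # one multiply-accumulate pass per sample
    y = w @ x                            # projection = the "compressed" feature
    w += eta * y * (x - y * w)           # Oja update keeps ||w|| ~ 1

print(f"learned direction: {w / np.linalg.norm(w)}")   # ~ +/-(0.8, 0.6)
```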
A comparison of interface pressures of three compression bandage systems.
Hanna, Richard; Bohbot, Serge; Connolly, Nicki
To measure and compare the interface pressures achieved with two compression bandage systems - a four-layer system (4LB) and a two-layer short-stretch system (SSB) - with a new two-layer system (2LB), which uses an etalonnage (performance indicator) to help achieve the correct therapeutic pressure for healing venous leg ulcers - recommended as 40 mmHg. 32 nurses with experience of using compression bandages applied each of the three systems to a healthy female volunteer in a sitting position. The interface pressures and time taken to apply the systems were measured. A questionnaire regarding the concept of the new system and its application in comparison to the existing two systems was then completed by the nurses. The interface pressures achieved show that many nurses applied very high pressures with the 4LB (25% achieving pressures > 50 mmHg) whereas the majority of the nurses (75%) achieved a pressure of < 30 mmHg when using the SSB. A pressure of 30-50 mmHg was achieved with the new 2LB. The SSB took the least time to be applied (mean: 1 minute 50 seconds) with the 4LB the slowest (mean: 3 minutes 46 seconds). A mean time of 2 minutes 35 seconds was taken to apply the 2LB. Over 63% of the nurses felt the 2LB was very easy to apply. These results suggest that the 2LB achieves the required therapeutic pressure necessary for the management of venous leg ulcers, is easy to apply and may provide a suitable alternative to other multi-layer bandage systems.
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts which lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.
Learning random networks for compression of still and moving images
NASA Technical Reports Server (NTRS)
Gelenbe, Erol; Sungur, Mert; Cramer, Christopher
1994-01-01
Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
Super-elastic and fatigue resistant carbon material with lamellar multi-arch microstructure
NASA Astrophysics Data System (ADS)
Gao, Huai-Ling; Zhu, Yin-Bo; Mao, Li-Bo; Wang, Feng-Chao; Luo, Xi-Sheng; Liu, Yang-Yi; Lu, Yang; Pan, Zhao; Ge, Jin; Shen, Wei; Zheng, Ya-Rong; Xu, Liang; Wang, Lin-Jun; Xu, Wei-Hong; Wu, Heng-An; Yu, Shu-Hong
2016-09-01
Low-density compressible materials enable various applications but are often hindered by structure-derived fatigue failure, weak elasticity with slow recovery speed and large energy dissipation. Here we demonstrate a carbon material with microstructure-derived super-elasticity and high fatigue resistance achieved by designing a hierarchical lamellar architecture composed of thousands of microscale arches that serve as elastic units. The obtained monolithic carbon material can rebound a steel ball in spring-like fashion with fast recovery speed (~580 mm s^-1), and demonstrates complete recovery and small energy dissipation (~0.2) in each compress-release cycle, even under 90% strain. Particularly, the material can maintain structural integrity after more than 10^6 cycles at 20% strain and 2.5 × 10^5 cycles at 50% strain. This structural material, although constructed using an intrinsically brittle carbon constituent, is simultaneously super-elastic, highly compressible and fatigue resistant to a degree even greater than that of previously reported compressible foams mainly made from more robust constituents.
Information theory in econophysics: stock market and retirement funds
NASA Astrophysics Data System (ADS)
Vogel, Eugenio; Saravia, G.; Astete, J.; Díaz, J.; Erribarren, R.; Riadi, F.
2013-03-01
Information theory can help to recognize magnetic phase transitions, which can be seen as a way to recognize different regimes. This is achieved by means of zippers specifically designed to compact data in a meaningful way, as is the case for the compressor wlzip. In the present contribution we first apply wlzip to the Chilean stock market, interpreting the compression rates for the files storing the minute-by-minute variation of the IPSA indicator. Agitated days yield poor compression rates while calm days yield high compressibility. We then correlate this behavior to the value of the five retirement funds related to the Chilean economy. It is found that the covariance between the profitability of the retirement funds and the compressibility of the previous day's IPSA values is high for those funds investing in risky stocks. Surprisingly, there seems to be no great difference among the three riskier funds, contrary to what could be expected from the limitations on portfolio composition established by the laws that regulate this market.
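A toy version of the compressibility-as-regime-indicator idea (zlib stands in for wlzip, and the series are synthetic random walks, not IPSA data):

```python
# Regime detection via compressibility: calmer series compress better.
import numpy as np, zlib

def compressibility(series, digits=2):
    text = ",".join(f"{v:.{digits}f}" for v in series).encode()
    return len(zlib.compress(text, 9)) / len(text)   # lower = more compressible

rng = np.random.default_rng(8)
calm = np.cumsum(rng.normal(0, 0.01, 2000)) + 100    # low-volatility walk
agitated = np.cumsum(rng.normal(0, 1.0, 2000)) + 100 # high-volatility walk

print(f"calm day:     {compressibility(calm):.2f}")
print(f"agitated day: {compressibility(agitated):.2f}")
```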
Rapid-Rate Compression Testing of Sheet Materials at High Temperatures
NASA Technical Reports Server (NTRS)
Bernett, E. C.; Gerberich, W. W.
1961-01-01
This report describes the test equipment that was developed and the procedures that were used to evaluate structural sheet-material compression properties at preselected constant strain rates and/or loads. Electrical self-resistance heating was used to achieve a rapid heating rate of 200 F/sec. Four materials were tested at maximum temperatures which ranged from 600 F for the aluminum alloy to 2000 F for the Ni-Cr-Co iron-base alloy. Tests at 0.1, 0.001, and 0.00001 in./in./sec showed that strain rate has a major effect on the measured strength, especially at the high temperatures. The tests, under conditions of constant temperature and constant compression stress, showed that creep deformation can be a critical factor even when the time involved is on the order of a few seconds or less. The theoretical and practical aspects of rapid-rate compression testing are presented, and suggestions are made regarding possible modifications of the equipment which would improve the over-all capabilities.
Squish: Near-Optimal Compression for Archival of Relational Datasets
Gao, Yihan; Parameswaran, Aditya
2017-01-01
Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028
Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto
2015-06-01
To describe the quality of chest compressions performed by secondary-school students trained with a real-time audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years to achieve quality chest compressions on a mannequin.
A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.
Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong
2017-04-01
This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG), the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT/BIH database. The power reduction is demonstrated using a Bluetooth transceiver, with transmit power found to be reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
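A minimal sketch of the lossy-plus-residual structure (coarse quantization as the lossy stage and zlib as the entropy coder are our stand-ins; the synthetic waveform is illustrative):

```python
# Hybrid lossy + residual coding sketch: transmit the lossy stream normally,
# keep the entropy-coded residual for lossless restoration on demand.
import numpy as np, zlib

rng = np.random.default_rng(9)
t = np.arange(20_000)
ecg = (300 * np.sin(2 * np.pi * t / 250) ** 21
       + rng.normal(0, 2, t.size)).astype(np.int16)   # spiky ECG-like signal

q = 16
lossy = (ecg // q).astype(np.int8)                    # lossy stream, high CR
resid = (ecg - lossy.astype(np.int16) * q).astype(np.int8)  # bounded residual

lossy_bytes = len(zlib.compress(lossy.tobytes(), 9))
resid_bytes = len(zlib.compress(resid.tobytes(), 9))
print(f"lossy CR {ecg.nbytes / lossy_bytes:.1f}x, "
      f"lossless CR {ecg.nbytes / (lossy_bytes + resid_bytes):.1f}x")

# Lossless restoration when required: ecg == lossy*q + resid, exactly.
assert np.array_equal(ecg, lossy.astype(np.int16) * q + resid)
```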
A source-specific model for lossless compression of global Earth data
NASA Astrophysics Data System (ADS)
Kess, Barbara Lynne
A Source Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-Km Advanced Very High Resolution Radiometer (AVHRR) data, which contains 662 Megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters for the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows of data enables SSM-GED to improve the compression rate by increasing the number of parameters and maintaining a small model cost for each subwindow of data. This lossless compression method is applicable to other large volumes of image data such as video.
An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1
1992-11-01
[Fragmentary record; only table-of-contents entries and text snippets are recoverable:] Lempel-Ziv-Welch (LZW) Compression ... Lossless Compression Tests Results ... Exact ... since IBM holds the patent for this technique. The LZW compression is related to two compression techniques known as ... compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a ...
Thermo-electrochemical production of compressed hydrogen from methane with near-zero energy loss
NASA Astrophysics Data System (ADS)
Malerød-Fjeld, Harald; Clark, Daniel; Yuste-Tirados, Irene; Zanón, Raquel; Catalán-Martinez, David; Beeaff, Dustin; Morejudo, Selene H.; Vestre, Per K.; Norby, Truls; Haugsrud, Reidar; Serra, José M.; Kjølseth, Christian
2017-11-01
Conventional production of hydrogen requires large industrial plants to minimize energy losses and capital costs associated with steam reforming, water-gas shift, product separation and compression. Here we present a protonic membrane reformer (PMR) that produces high-purity hydrogen from steam methane reforming in a single-stage process with near-zero energy loss. We use a BaZrO3-based proton-conducting electrolyte deposited as a dense film on a porous Ni composite electrode with dual function as a reforming catalyst. At 800 °C, we achieve full methane conversion by removing 99% of the formed hydrogen, which is simultaneously compressed electrochemically up to 50 bar. A thermally balanced operation regime is achieved by coupling several thermo-chemical processes. Modelling of a small-scale (10 kg H2 day^-1) hydrogen plant reveals an overall energy efficiency of >87%. The results suggest that future declining electricity prices could make PMRs a competitive alternative for industrial-scale hydrogen plants integrating CO2 capture.
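For a sense of the electrochemical compression step, the ideal Nernst voltage needed to pump H2 from 1 bar to 50 bar at 800 °C (1073 K) is, as a back-of-envelope estimate assuming ideal behavior (not a figure from the paper):

$$ E_{\min} = \frac{RT}{2F}\ln\frac{p_2}{p_1} = \frac{8.314 \times 1073}{2 \times 96485}\,\ln 50 \approx 0.18\ \text{V}. $$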
Size dependent compressibility of nano-ceria: Minimum near 33 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodenbough, Philip P.; Chemistry Department, Columbia University, New York, New York 10027; Song, Junhua
2015-04-20
We report the crystallite-size dependency of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends indicating an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of bulk modulus at an intermediate crystallite size.
NASA Technical Reports Server (NTRS)
1975-01-01
Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard-transformed components of this last vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-coded Gaussian channel. It was shown that substantial gains can be achieved by the combination of video source and channel coding.
Compression and information recovery in ptychography
NASA Astrophysics Data System (ADS)
Loetgering, L.; Treffer, D.; Wilhein, T.
2018-04-01
Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.
Fentem, P H; Goddard, M; Gooden, B A; Yeung, C K
1976-01-01
A study was performed to determine whether the pressures routinely produced by bandaging for compression sclerotherapy of varicose veins are adequate to maintain the superficial veins almost empty of blood. The results suggest that well-applied bandages can provide sufficient support to combat the high distending pressures found in varicose veins. The large variation among different surgeons, however, indicates that any clinical assessment of compression sclerotherapy should include measurement of the pressure at which the bandages are applied. PMID:974569
Data Compression Techniques for Maps
1989-01-01
[Fragmentary record:] Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image ...
NASA Technical Reports Server (NTRS)
Schuette, Evan H
1945-01-01
Design charts are developed for 24S-T aluminum-alloy flat compression panels with longitudinal Z-section stiffeners. These charts make possible the design of the lightest panels of this type for a wide range of design requirements. Examples of the use of the charts are given, and it is pointed out on the basis of these examples that, over a wide range of design conditions, the maintenance of buckle-free surfaces does not conflict with the achievement of high structural efficiency. The achievement of the maximum possible structural efficiency with 24S-T aluminum-alloy panels, however, requires closer stiffener spacings than those now in common use.
Application of content-based image compression to telepathology
NASA Astrophysics Data System (ADS)
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
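A toy sketch of the content-based idea (the "diagnostic" region mask, codec choice, and quantization depth are our assumptions; zlib stands in for the wavelet codecs discussed):

```python
# Content-based compression sketch: keep a diagnostically important region
# lossless while coarsely quantizing the rest (the mask is synthetic, not a
# real diagnostic segmentation).
import numpy as np, zlib

rng = np.random.default_rng(10)
x = np.linspace(0, 3, 512)
img = (100 * np.outer(np.sin(x), np.cos(x)) + 128
       + rng.normal(0, 4, (512, 512))).clip(0, 255).astype(np.uint8)

roi = np.zeros_like(img, bool)
roi[200:260, 180:300] = True                       # pretend segmentation output

roi_stream = zlib.compress(img[roi].tobytes(), 9)  # lossless ROI
bg = img & 0xF0                                    # drop 4 LSBs outside the ROI
bg[roi] = 0
bg_stream = zlib.compress(bg.tobytes(), 9)

total = len(roi_stream) + len(bg_stream)
print(f"content-based CR ~ {img.nbytes / total:.1f}:1")
```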
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
Guo, Dan; Cai, Jun; Zhang, Shengfei; Zhang, Liang; Feng, Xinmin
2017-01-01
Osteoporotic vertebral compression fractures with intraosseous vacuum phenomena can cause persistent back pain in patients, even after receiving conservative treatment. The aim of this study was to evaluate the efficacy of using high-viscosity bone cement via bilateral percutaneous vertebroplasty in treating patients who have osteoporotic vertebral compression fractures with intraosseous vacuum phenomena. Twenty osteoporotic vertebral compression fracture patients with intraosseous vacuum phenomena, who received at least 2 months of conservative treatment, were further treated by injecting high-viscosity bone cement via bilateral percutaneous vertebroplasty due to failure of conservative treatment. Treatment efficacy was evaluated by determining the anterior vertebral compression rates, visual analog scale (VAS) scores, and Oswestry disability index (ODI) scores at 1 day before the operation, on the first day of postoperation, at 1-month postoperation, and at 1-year postoperation. Three of 20 patients had asymptomatic bone cement leakage when treated via percutaneous vertebroplasty; however, no serious complications related to these treatments were observed during the 1-year follow-up period. A statistically significant improvement in the anterior vertebral compression rates, VAS scores, and ODI scores was achieved after percutaneous vertebroplasty. However, differences in the anterior vertebral compression rate, VAS score, and ODI score at the different time points during the 1-year follow-up period were not statistically significant (P > 0.05). Within the limitations of this study, the injection of high-viscosity bone cement via bilateral percutaneous vertebroplasty for patients who have osteoporotic vertebral compression fractures with intraosseous vacuum phenomena significantly relieved their back pain and improved their daily life activities shortly after the operation, thereby improving their quality of life. In this study, the use of high-viscosity bone cement reduced the leakage rate and contributed to successful treatment, as observed in patients during the 1-year follow-up period. PMID:28383423
Zhang, Li; Athavale, Prashant; Pop, Mihaela; Wright, Graham A
2017-08-01
To enable robust reconstruction for highly accelerated three-dimensional multicontrast late enhancement imaging to provide improved MR characterization of myocardial infarction with isotropic high spatial resolution. A new method using compressed sensing with low rank and spatially varying edge-preserving constraints (CS-LASER) is proposed to improve the reconstruction of fine image details from highly undersampled data. CS-LASER leverages the low rank feature of the multicontrast volume series in MR relaxation and integrates spatially varying edge preservation into the explicit low rank constrained compressed sensing framework using weighted total variation. With an orthogonal temporal basis pre-estimated, a multiscale iterative reconstruction framework is proposed to enable the practice of CS-LASER with spatially varying weights of appropriate accuracy. In in vivo pig studies with both retrospective and prospective undersamplings, CS-LASER preserved fine image details better and presented tissue characteristics with a higher degree of consistency with histopathology, particularly in the peri-infarct region, than an alternative technique for different acceleration rates. An isotropic resolution of 1.5 mm was achieved in vivo within a single breath-hold using the proposed techniques. Accelerated three-dimensional multicontrast late enhancement with CS-LASER can achieve improved MR characterization of myocardial infarction with high spatial resolution. Magn Reson Med 78:598-610, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Technical Reports Server (NTRS)
Sterritt, D. E.; Lalos, G. T.; Schneider, R. T.
1976-01-01
A computer simulation study concerning a compressed fissioning UF6 gas is presented. The compression is to be achieved by a ballistic piston compressor. Data on UF6 obtained with this compressor were incorporated in the simulation study. As a neutron source to create the fission events in the compressed gas, a fast burst reactor was considered. The conclusion is that it takes a neutron flux in excess of 10^15 n/(cm^2·s) to produce measurable increases in pressure and temperature, while a flux in excess of 10^19 n/(cm^2·s) would probably damage the compressor.
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1992-01-01
In the future, NASA expects to gather over a tera-byte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation where data rates of over 1 billion bits per second are achievable with current technology.
SCALCE: boosting sequence compression algorithms using locally consistent encoding.
Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk
2012-12-01
The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, using the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to 3.34 times improvement in the compression rate and 1.26 times improvement in running time. Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected, and the pigz binary is available. It is available at http://scalce.sourceforge.net. fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary data are available at Bioinformatics online.
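A toy demonstration of why read reordering boosts a generic compressor (plain lexicographic sorting stands in for SCALCE's Locally Consistent Parsing; the reads are synthetic):

```python
# Reordering boosts a generic LZ-style compressor: clustering similar reads
# puts their shared substrings within the compressor's match window.
import zlib, random

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(50_000))
reads = []
for _ in range(5_000):                       # overlapping 100-mers, shuffled
    i = random.randrange(len(genome) - 100)
    reads.append(genome[i:i + 100])
random.shuffle(reads)

unordered = zlib.compress("".join(reads).encode(), 9)
reordered = zlib.compress("".join(sorted(reads)).encode(), 9)
print(f"gzip-like size, unordered: {len(unordered)}, reordered: {len(reordered)}")
```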
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
NASA Astrophysics Data System (ADS)
Liao, P. H.; Peng, K. P.; Lin, H. C.; George, T.; Li, P. W.
2018-05-01
We report channel and strain engineering of self-organized, gate-stacking heterostructures comprising Ge-nanosphere gate/SiO2/SiGe-channels. An exquisitely-controlled dynamic balance between the concentrations of oxygen, Si, and Ge interstitials was effectively exploited to simultaneously create these heterostructures in a single oxidation step. Process-controlled tunability of the channel length (5-95 nm diameters for the Ge-nanospheres), gate oxide thickness (2.5-4.8 nm), as well as crystal orientation, chemical composition and strain engineering of the SiGe-channel was achieved. Single-crystalline (100) Si1-xGex shells with Ge content as high as x = 0.85 and with a compressive strain of 3%, as well as (110) Si1-xGex shells with Ge content of x = 0.35 and corresponding compressive strain of 1.5% were achieved. For each crystal orientation, our high Ge-content, highly-stressed SiGe shells feature a high degree of crystallinity and thus provide a core 'building block' required for the fabrication of Ge-based MOS devices.
Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P
2017-04-01
Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI), promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared with a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
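A minimal ISTA sketch for the ℓ1-regularized least squares problem min_x ½||Ax − y||² + λ||x||₁ (toy dimensions and a dense Gaussian A; not the paper's parallelized, scanner-integrated solver):

```python
# Iterative soft-thresholding (ISTA): gradient step on the least-squares term,
# then the soft-threshold proximal step for the l1 penalty.
import numpy as np

rng = np.random.default_rng(11)
M, N, K = 128, 512, 10
A = rng.normal(0, 1 / np.sqrt(M), (M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(0, 3, K)
y = A @ x_true

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(300):
    grad = A.T @ (A @ x - y)
    z = x - grad / L                           # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)   # soft threshold

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```

Each iteration is dominated by two matrix-vector products, which is what makes the method amenable to the parallel execution the paper targets.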
NASA Astrophysics Data System (ADS)
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged to offer low-cost sensing with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost: huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal, followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and processing of the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often relegating evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis of the compressed data remains accurate.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
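The "lossy plus residual coding" principle that gives the guaranteed maximum absolute error can be illustrated independently of the matrix/tensor coder: the residual between the original and the lossy layer is uniformly quantized with step 2ε+1 (for integer data), so the reconstruction error never exceeds ε. A minimal sketch, with a synthetic stand-in for the lossy layer:

```python
import numpy as np

def near_lossless_residual(x, x_lossy, eps):
    """Quantize the residual so the reconstruction error is at most eps."""
    r = x - x_lossy
    q = np.round(r / (2 * eps + 1)).astype(np.int64)   # indices to entropy-code
    x_rec = x_lossy + q * (2 * eps + 1)
    assert np.max(np.abs(x - x_rec)) <= eps            # the near-lossless guarantee
    return q, x_rec

rng = np.random.default_rng(1)
x = rng.integers(-500, 500, size=1000)                  # stand-in for one EEG channel
x_lossy = x + rng.integers(-20, 21, size=x.size)        # stand-in for the lossy layer
q, x_rec = near_lossless_residual(x, x_lossy, eps=4)
print("max abs error:", np.max(np.abs(x - x_rec)))
```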
Compressed Secret Key Agreement: Maximizing Multivariate Mutual Information per Bit
NASA Astrophysics Data System (ADS)
Chan, Chung
2017-10-01
The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components for generating a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion, by offering a more structured achieving scheme and some simpler conjectures to prove.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Reilly, Michael K., E-mail: moreilly1@mater.ie; Ryan, David; Sugrue, Gavin
Purpose: Transradial pneumatic compression devices can be used to achieve haemostasis following radial artery puncture. This article describes a novel technique for acquiring haemostasis of arterio-venous haemodialysis fistula access sites without the need for suture placement, using one such compression device. Materials and Methods: A retrospective review of fistulograms with or without angioplasty/thrombectomy in a single institution was performed. 20 procedures performed on 12 patients who underwent percutaneous intervention of failing or thrombosed arterio-venous fistulas (AVF) had 27 puncture sites. Haemostasis was achieved using a pneumatic compression device at all access sites. Procedure details including size of access sheath, heparin administration and complications were recorded. Results: Two diagnostic fistulograms, 14 fistulograms and angioplasties and four thrombectomies were performed via access sheaths with an average size (±SD) of 6 Fr (±1.12). IV unfractionated heparin was administered in 11 of 20 procedures. Haemostasis was achieved in 26 of 27 access sites following 15–20 min of compression using the pneumatic compression device. One case experienced limited bleeding from an inflow access site that was successfully treated with reinflation of the device for a further 5 min. No other complication was recorded. Conclusions: Haemostasis of arterio-venous haemodialysis fistula access sites can be safely and effectively achieved using a pneumatic compression device. This is a technically simple, safe and sutureless technique for acquiring haemostasis after AVF intervention.
Delivery of compression therapy for venous leg ulcers.
Zarchi, Kian; Jemec, Gregor B E
2014-07-01
Despite the documented effect of compression therapy in clinical studies and its widespread prescription, treatment of venous leg ulcers is often prolonged and recurrence rates are high. Data on the compression therapy actually provided are limited. To assess whether home care nurses achieve adequate subbandage pressure when treating patients with venous leg ulcers, and the factors that predict the ability to achieve optimal pressure, we performed a cross-sectional study from March 1, 2011, through March 31, 2012, in home care centers in 2 Danish municipalities. Sixty-eight home care nurses who managed wounds in their everyday practice were included. We obtained participant-masked measurements of subbandage pressure achieved with an elastic, long-stretch, single-component bandage; an inelastic, short-stretch, single-component bandage; and a multilayer, 2-component bandage, as well as the association between achievement of optimal pressure and years in the profession, attendance at wound care educational programs, previous work experience, and confidence in bandaging ability. A substantial variation in the exerted pressure was found: subbandage pressures ranged from 11 mm Hg exerted by an inelastic bandage to 80 mm Hg exerted by a 2-component bandage. The optimal subbandage pressure range, defined as 30 to 50 mm Hg, was achieved by 39 of 62 nurses (63%) applying the 2-component bandage, 28 of 68 nurses (41%) applying the elastic bandage, and 27 of 68 nurses (40%) applying the inelastic bandage. More than half the nurses applying the inelastic (38 [56%]) and elastic (36 [53%]) bandages obtained pressures less than 30 mm Hg. At best, only 17 of 62 nurses (27%) using the 2-component bandage achieved subbandage pressure within the range they aimed for. In this study, none of the investigated factors was associated with the ability to apply a bandage with optimal pressure. This study demonstrates the difficulty of achieving the desired subbandage pressure and indicates that a substantial proportion of patients with venous leg ulcers do not receive adequate compression therapy. Training programs that focus on practical bandaging skills should be implemented to improve management of venous leg ulcers.
Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon
2014-01-01
We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
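The decision problem described above (which nodes compress, and with which algorithm) is small enough to illustrate by exhaustive search over the binary choices. The sketch below is not the paper's integer-program formulation; the energy, byte and latency numbers are invented for illustration.

```python
import itertools

# Per-node options: (label, cpu_energy_mJ, bytes_sent, latency_ms) -- illustrative.
OPTIONS = [("raw", 0.0, 1000, 0.0),
           ("lz-lite", 2.0, 550, 8.0),     # cheap CPU, moderate compression
           ("lz-heavy", 6.0, 320, 25.0)]   # costly CPU, strong compression
TX_MJ_PER_BYTE = 0.01                       # radio energy per transmitted byte
N_NODES, LATENCY_BUDGET_MS = 5, 60.0

def energy(choice):
    _, cpu, nbytes, _ = OPTIONS[choice]
    return cpu + TX_MJ_PER_BYTE * nbytes

best = None
for assign in itertools.product(range(len(OPTIONS)), repeat=N_NODES):
    latency = sum(OPTIONS[c][3] for c in assign)
    if latency > LATENCY_BUDGET_MS:          # enforce the latency constraint
        continue
    total = sum(energy(c) for c in assign)
    if best is None or total < best[0]:
        best = (total, assign)

total, assign = best
print("energy (mJ): %.1f ->" % total, [OPTIONS[c][0] for c in assign])
```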
LFQC: a lossless compression algorithm for FASTQ files
Nicolae, Marius; Pathak, Sudipta; Rajasekaran, Sanguthevar
2015-01-01
Motivation: Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques. Results: We introduce a new lossless non-reference-based FASTQ compression algorithm named Lossless FASTQ Compressor. We have compared our algorithm with other state-of-the-art big data compression algorithms, namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012) and DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip. Contact: rajasek@engr.uconn.edu PMID:26093148
Lossless compression of AVIRIS data: Comparison of methods and instrument constraints
NASA Technical Reports Server (NTRS)
Roger, R. E.; Arnold, J. F.; Cavenor, M. C.; Richards, J. A.
1992-01-01
A family of lossless compression methods, all allowing exact image reconstruction, is evaluated for compressing Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image data. The methods are based on Differential Pulse Code Modulation (DPCM). The compressed data have an entropy of order 6 bits/pixel. A theoretical model indicates that significantly better lossless compression is unlikely to be achieved because of limits caused by the noise in the AVIRIS channels. AVIRIS data differ from data produced by other visible/near-infrared sensors, such as LANDSAT-TM or SPOT, in several ways. Firstly, the data are recorded at a greater resolution (12 bits, though packed into 16-bit words). Secondly, the spectral channels are relatively narrow and provide continuous coverage of the spectrum, so that the data in adjacent channels are generally highly correlated. Thirdly, the noise characteristics of the AVIRIS are defined by the channels' Noise Equivalent Radiances (NERs), and these NERs show that, at some wavelengths, the least significant 5 or 6 bits of data are essentially noise.
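DPCM of the kind evaluated above predicts each sample from an already-coded neighbor and entropy-codes the prediction residual; the first-order entropy of that residual indicates the achievable lossless rate. A minimal sketch using the adjacent spectral band as the predictor, on a synthetic stand-in for AVIRIS data (the paper's exact predictors are not reproduced):

```python
import numpy as np

def entropy_bits(x):
    """First-order entropy in bits/sample of an integer array."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / x.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
# Two synthetic 12-bit bands with strong inter-band correlation.
band0 = rng.integers(0, 4096, size=(64, 64))
band1 = band0 + rng.integers(-8, 9, size=band0.shape)   # correlated neighbor band

residual = band1 - band0            # inter-band DPCM: predict band1 from band0
print("raw band entropy : %.2f bits/pixel" % entropy_bits(band1.ravel()))
print("residual entropy : %.2f bits/pixel" % entropy_bits(residual.ravel()))
```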
Strength development of pervious concrete containing engineered biomass aggregate
NASA Astrophysics Data System (ADS)
Sharif, A. A. M.; Shahidan, S.; Koh, H. B.; Kandash, A.; Zuki, S. S. Mohd
2017-11-01
Pervious concrete, with its high porosity, good permeability and low mechanical strength, is commonly used in storm water management. It differs from normal concrete in that it contains only a single size of coarse aggregate and has a lower density. This study focused on the effect of Engineered Biomass Aggregate (EBA) on the compressive strength, void ratio and water permeability of pervious concrete. EBA was prepared by coating the biomass aggregate with epoxy resin and was used to replace natural coarse aggregate at levels ranging from 0% to 25%. 150 mm cube specimens were prepared to study the compressive strength, void ratio and water permeability. Compressive strength was tested at 7, 14 and 28 days, while void ratio and permeability tests were carried out at 28 days. The experimental results showed that pervious concrete containing EBA attained lower compressive strength, which decreased gradually with increasing percentage of EBA. Overall, pervious concrete containing EBA achieved a higher void ratio and permeability.
Control of traumatic wound bleeding by compression with a compact elastic adhesive dressing.
Naimer, Sody Abby; Tanami, Menachem; Malichi, Avishai; Moryosef, David
2006-07-01
Compression dressing has been assumed effective, but never formally examined in the field. A prospective interventional trial examined the efficacy and feasibility of an elastic adhesive dressing compression device in the arena of the traumatic incident. The primary variable examined was the bleeding rate from wounds, compared before and after dressing. Sixty-two consecutive bleeding wounds resulting from penetrating trauma were treated. Bleeding intensity was profuse in 58%, moderate in 23%, and mild in 19%. Full control of bleeding was achieved in 87%, a significantly diminished rate in 11%, and, in 1 case, the technique had no influence on the bleeding rate. The Wilcoxon test comparing bleeding rates before and after the procedure showed a significant difference (Z = -6.9, p < 0.01). No significant complications were observed. Caregivers were highly satisfied in 90% of cases. The elastic adhesive dressing was found to be an effective and reliable technique, demonstrating a high rate of success without complications.
Predefined Redundant Dictionary for Effective Depth Maps Representation
NASA Astrophysics Data System (ADS)
Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi
2016-01-01
The multi-view video plus depth (MVD) video format consists of two components: texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are perfectly efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the position of object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness at producing sparse representations, and the competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to represent an advantage over the ongoing 3D high efficiency video coding compression standard, particularly at medium and high bitrates.
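Greedy selection over a redundant dictionary, as used above, is typified by orthogonal matching pursuit (OMP). The compact sketch below uses a random overcomplete dictionary rather than the paper's mixed dictionary; all sizes are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to approximate y."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the current support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 200))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
x_true = np.zeros(200)
x_true[[5, 50, 150]] = [1.0, -2.0, 0.5]
x_hat = omp(D, D @ x_true, k=3)
print("selected atoms:", np.flatnonzero(x_hat))
```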
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the differing correlation between different spectral bands, and it still works well when the band number is not a power of 2. Non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform are used to eliminate spectral redundancy, CDF(2,2) DWT is employed to eliminate spatial redundancy, and SPIHT+CABAC is used for compression coding; the experiments show that a satisfactory lossless compression result can be achieved. Using the hyperspectral image Canal from the American JPL laboratory as the test data set, when the band number is not a power of 2, the lossless compression results of this algorithm are much better than those of JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree and Near Minimum Spanning Tree; on average, the compression ratio of this algorithm exceeds those of the above algorithms by 41%, 37%, 35%, 29%, 16%, 10% and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal, taking 8, 16 and 32 as the group sizes and considering factors such as compression storage complexity, the type of wave band and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm has advantages in operation speed and convenience of hardware implementation.
Using phase contrast imaging to measure the properties of shock compressed aerogel
NASA Astrophysics Data System (ADS)
Hawreliak, James; Erskine, Dave; Schropp, Andres; Galtier, Eric C.; Heimann, Phil
2017-01-01
The Hugoniot states of low-density materials, such as silica aerogel, are used in high energy density physics research because they can achieve a range of high temperature and pressure states through shock compression. The shock properties of 100 mg/cc silica aerogel were studied at the Materials in Extreme Conditions end station using x-ray phase contrast imaging of spherically expanding shock waves. The shock waves were generated by focusing a high-power 532 nm laser to a 50 μm focal spot on a thin aluminum ablator. The shock speed was measured in separate experiments using line-VISAR measurements from the reflecting shock front. The relative timing between the x-ray probe and the optical laser pump was varied so that x-ray PCI images were taken at pressures between 10 GPa and 30 GPa. Modeling the compression of the foam in the strong-shock limit requires a Gruneisen parameter of 0.49 to fit the data, rather than the value of 0.66 that would correspond to a plasma state.
Development of Carbon Dioxide Hermetic Compressor
NASA Astrophysics Data System (ADS)
Imai, Satoshi; Oda, Atsushi; Ebara, Toshiyuki
Because of global environmental problems, existing refrigerants are to be replaced with natural refrigerants. CO2 is one of the natural refrigerants: it is environmentally safe, non-flammable and non-toxic. Therefore, a high-efficiency compressor that can operate with natural refrigerants, especially CO2, needs to be developed. We developed a prototype CO2 hermetic compressor that can be used in practical carbon dioxide refrigerating systems. The compressor has two rolling pistons, which leads to low vibration and low noise. In addition, two-stage compression with two cylinders is adopted, because the pressure difference is too large to compress in one stage. The inner pressure of the shell case is kept at the intermediate pressure to minimize gas leakage between the compression chambers and the inner space of the shell case. The intermediate-pressure design made it possible to make the compressor smaller and lighter. As a result, the compressor achieved high efficiency and high reliability through these technologies. We plan to study heat-pump water heaters, cup vending machines and various other applications of the CO2 compressor.
Compression techniques in tele-radiology
NASA Astrophysics Data System (ADS)
Lu, Tianyu; Xiong, Zixiang; Yun, David Y.
1999-10-01
This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression methods on the volume data were conducted. Favorable results were obtained, showing that a substantial compression ratio is achievable within distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, a much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
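The 30 dB acceptability bound quoted above refers to the standard peak signal-to-noise ratio. A minimal sketch of the computation for 8-bit data follows; the test images here are synthetic, not from the study.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images or volumes."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
slice_ = rng.integers(0, 256, size=(512, 512)).astype(np.uint8)   # stand-in CT slice
degraded = np.clip(slice_ + rng.normal(0, 5, slice_.shape), 0, 255).astype(np.uint8)
print("PSNR: %.1f dB (acceptable if >= ~30 dB per the study above)"
      % psnr(slice_, degraded))
```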
Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2018-02-01
This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
A Bunch Compression Method for Free Electron Lasers that Avoids Parasitic Compressions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benson, Stephen V.; Douglas, David R.; Tennant, Christopher D.
2015-09-01
Virtually all existing high energy (>few MeV) linac-driven FELs compress the electron bunch length through the use of off-crest acceleration on the rising side of the RF waveform followed by transport through a magnetic chicane. This approach has at least three flaws: 1) it is difficult to correct aberrations--particularly RF curvature, 2) rising-side acceleration exacerbates space charge-induced distortion of the longitudinal phase space, and 3) all achromatic "negative compaction" compressors create parasitic compression during the final compression process, increasing the CSR-induced emittance growth. One can avoid these deficiencies by using acceleration on the falling side of the RF waveform and a compressor with M56>0. This approach offers multiple advantages: 1) it is readily achieved in beam lines supporting simple schemes for aberration compensation, 2) longitudinal space charge (LSC)-induced phase space distortion tends, on the falling side of the RF waveform, to enhance the chirp, and 3) compressors with M56>0 can be configured to avoid spurious over-compression. We discuss this bunch compression scheme in detail and give results of a successful beam test in April 2012 using the JLab UV Demo FEL.
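To first order, the interplay between RF chirp and momentum compaction described above follows from the standard linear transport relation. The sketch below uses one common sign convention and is not taken from the paper.

```latex
% A particle at longitudinal position z_i with correlated momentum deviation
% \delta = h z_i (chirp h imparted by off-crest RF) leaves a compressor of
% momentum compaction M_{56} at
\[
  z_f = z_i + M_{56}\,\delta = \bigl(1 + h\,M_{56}\bigr)\,z_i,
  \qquad
  \sigma_{z,f} = \lvert 1 + h\,M_{56} \rvert\,\sigma_{z,i}.
\]
% Full compression requires h M_{56} \approx -1: a chicane (M_{56} < 0)
% pairs with a rising-side chirp (h > 0), while the M_{56} > 0 compressor
% advocated above pairs with a falling-side chirp (h < 0).
```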
Wei, Chung-Kai; Ding, Shinn-Jyh
2016-09-01
To achieve excellent mechanical properties in biodegradable materials used for cortical bone graft substitutes and fracture fixation devices remains a challenge. To this end, biomimetic calcium silicate/gelatin/chitosan oligosaccharide composite implants were developed, with the aim of achieving high strength, controlled degradation, and superior osteogenic activity. The work focused on the effect of gelatin on the mechanical properties of the composites under four different kinds of mechanical stress: compression, tension, bending, and impact. In vitro degradability and fatigue were also evaluated in simulated body fluid (SBF) at pH 7.4 and 5.0, where the pH 5.0 condition simulated clinical conditions caused by bacteria-induced local metabolic acidosis or tissue inflammation. In addition, human mesenchymal stem cells (hMSCs) were used to examine osteogenic activity. Experimental results showed that an appropriate amount of gelatin positively contributed to failure enhancement in the compressive and impact modes. The 10 wt% gelatin-containing composite exhibits the maximum compressive strength (166.1 MPa), which is within the reported range for cortical bone. The stability of the bone implants was apparently affected by the in vitro fatigue, but not by the initial pH environment (7.4 or 5.0). The gelatin not only greatly enhanced the degradation of the composite when soaked in the dynamic SBF solution, but also effectively promoted attachment, proliferation, differentiation, and mineralization of hMSCs. The 10 wt%-gelatin composite with high initial strength may be a potential implant candidate for cortical bone repair and fracture fixation applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lossy compression for Animated Web Visualisation
NASA Astrophysics Data System (ADS)
Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.
2017-12-01
This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high definition videos. This enabled us to achieve high rates of compression while remaining compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
Compressive sensing for single-shot two-dimensional coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, E.; Spencer, A.; Spokoyny, B.
2017-02-01
In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encode the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection with single-shot, phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE), eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response and signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications in nonlinear spectroscopy using SPARSE with broadband femtosecond light sources in so-far unexplored regions of the electromagnetic spectrum.
A seismic data compression system using subband coding
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings
Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng
2016-01-01
Traditional approaches to condition monitoring of roller bearings almost always operate under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse, and it is difficult to achieve sparsity using conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform, based on decomposing the analyzed signals into transient impact components and high-oscillation components, is utilized in this work. The former become sparser than the raw signals, with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with frequencies of interest are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases show that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples. PMID:27657063
Demonstration of Isothermal Compressed Air Energy Storage to Support Renewable Energy Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bollinger, Benjamin
This project develops and demonstrates a megawatt (MW)-scale energy storage system that employs compressed air as the storage medium. An isothermal compressed air energy storage (ICAES™) system rated for 1 MW or more will be demonstrated in a full-scale prototype unit. Breakthrough cost-effectiveness will be achieved through the use of proprietary methods for isothermal gas cycling and staged gas expansion implemented using industrially mature, readily available components. The ICAES approach uses an electrically driven mechanical system to raise air to high pressure for storage in low-cost pressure vessels, pipeline, or lined-rock cavern (LRC). This air is later expanded through the same mechanical system to drive the electric motor as a generator. The approach incorporates two key efficiency-enhancing innovations: (1) isothermal (constant temperature) gas cycling, which is achieved by mixing liquid with air (via spray or foam) to exchange heat with air undergoing compression or expansion; and (2) a novel, staged gas-expansion scheme that allows the drivetrain to operate at constant power while still allowing the stored gas to work over its entire pressure range. The ICAES system will be scalable, non-toxic, and cost-effective, making it suitable for firming renewables and for other grid applications.
[Ambulant compression therapy for crural ulcers; an effective treatment when applied skilfully].
de Boer, Edith M; Geerkens, Maud; Mooij, Michael C
2015-01-01
The incidence of crural ulcers is high. They considerably reduce quality of life and create a burden on the healthcare budget. The key treatment is ambulant compression therapy (ACT). We describe two patients with crural ulcers whose ambulant compression treatment was suboptimal and did not result in healing. When the bandages were applied correctly, healing was achieved. Correctly applied ACT should provide sufficient pressure to eliminate oedema, whilst taking local circumstances such as bony structures and arterial quality into consideration. Providing pressure-to-measure requires regular practical training, skills and regular quality checks. Knowledge of the properties of bandages and the proper use of materials for padding under the bandage enables good personalised ACT. In trained hands, adequate compression using simple bandages and dressings provides good care for patients suffering from crural ulcers, in contrast to inadequate ACT using the same materials.
Gas turbine power plant with supersonic shock compression ramps
Lawlor, Shawn P [Bellevue, WA; Novaresi, Mark A [San Diego, CA; Cornelius, Charles C [Kirkland, WA
2008-10-14
A gas turbine engine. The engine is based on the use of a gas turbine driven rotor having a compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp) which compresses inlet gas against a stationary sidewall. The supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the rim of the rotor, the strakes, and a stationary external housing. Part load efficiency is enhanced by use of a lean pre-mix system, a pre-swirl compressor, and a bypass stream to bleed a portion of the gas after passing through the pre-swirl compressor to the combustion gas outlet. Use of a stationary low NOx combustor provides excellent emissions results.
Exploratory Research on Bearing Characteristics of Confined Stabilized Soil
NASA Astrophysics Data System (ADS)
Wu, Shuai Shuai; Gao, Zheng Guo; Li, Shi Yang; Cui, Wen Bo; Huang, Xin
2018-06-01
The performance of a new kind of confined stabilized soil (CSS) was investigated. The CSS was constructed by filling stabilized soil, made by mixing soil with a binder containing a high content of expansive component, into an engineering plastic pipe. The cube compressive strength of the stabilized soil formed under constraint and the axial compression performance of stabilized soil cylinders confined by the constraint pipe were measured. The results indicated that combining the constraint pipe with the binder containing an expansive component achieved the following effects: a higher production of expansive hydrates could be exploited to fill more voids in the stabilized soil and improve its strength, while at the same time a compressive prestress built up in the core stabilized soil, because the hoop constraint provided an effective radial compressive force on it. These effects gave the CSS a plastic failure mode and more than twice the bearing capacity of ordinary stabilized soil with the same binder content.
GMZ: A GML Compression Model for WebGIS
NASA Astrophysics Data System (ADS)
Khandelwal, A.; Rajan, K. S.
2017-09-01
Geography markup language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience to define custom feature profiles in GML for specific needs, as seen in the widely popular CityGML, simple features profile, coverage, etc. Simple features profile (SFP) is a simpler subset of the GML profile with support for point, line and polygon geometries. SFP has been constructed to make sure it covers the most commonly used GML geometries. Web Feature Service (WFS) serves query results in SFP by default, but SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which leaves immense scope for compression. GMZ is a lossless compression model developed to work on SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.
NASA Astrophysics Data System (ADS)
McWilliams, R. S.
2013-12-01
Laboratory studies of volatiles at high pressure are constantly challenged to achieve conditions directly relevant to planets. While dynamic compression experiments are confined to adiabatic pathways that frequently exceed relevant temperatures due to the low densities and bulk moduli of volatile samples, static compression experiments are often complicated by sample reactivity and mobility before reaching relevant temperatures. By combining the speed of dynamic compression with the flexibility of experimental path afforded by static compression, optical spectroscopy measurements in volatiles such as H, N, and Ar have been demonstrated at previously-unexplored planetary temperature (up to 11,000 K) and pressure (up to 150 GPa). These optical data characterize the electronic properties of extreme states and have implications for bonding, transport, and mixing behavior in volatiles within planets. This work was conducted in collaboration with D.A. Dalton and A.F. Goncharov (Carnegie Institution of Washington) and M.F. Mahmood (Howard University).
Advanced application flight experiment breadboard pulse compression radar altimeter program
NASA Technical Reports Server (NTRS)
1976-01-01
Design, development and performance of the pulse compression radar altimeter are described. The high-resolution breadboard system is designed to operate from an aircraft at 10 kft above the ocean and to accurately measure altitude, sea wave height and sea reflectivity. The minicomputer-controlled Ku-band system provides six basic variables and an extensive digital recording capability for experimentation purposes. Signal bandwidths of 360 MHz are obtained using a reflective array compression line. Stretch processing is used to achieve 1000:1 pulse compression. The system range command LSB is 0.62 ns, or 9.25 cm. A second-order altitude tracker, aided by accelerometer inputs, is implemented in the system software. During flight tests the system demonstrated an altitude resolution capability of 2.1 cm and a sea wave height estimation accuracy of 10%. The altitude measurement performance exceeds that of the Skylab and GEOS-C predecessors by approximately an order of magnitude.
Shock compression experiments on Lithium Deuteride (LiD) single crystals
Knudson, M. D.; Desjarlais, M. P.; Lemke, R. W.
2016-12-21
Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride (LiD) single crystals. This study utilized the high-velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ~200-600 GPa along the Principal Hugoniot - the locus of end states achievable through compression by large-amplitude shock waves - as well as pressure and density of re-shock states up to ~900 GPa. Lastly, the experimental measurements are compared with recent density functional theory calculations as well as a new tabular equation of state developed at Los Alamos National Labs.
File compression and encryption based on LLS and arithmetic coding
NASA Astrophysics Data System (ADS)
Yu, Changzhi; Li, Hengjian; Wang, Xiyu
2018-03-01
We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability interval. To achieve encryption, we modify the upper and lower limits of all symbol probabilities when encoding each symbol. Experimental results show that the proposed model achieves the purpose of data encryption while achieving almost the same compression efficiency as arithmetic coding.
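The chaotic driver can be sketched separately from the arithmetic coder. Assuming LLS refers to the usual additive logistic-sine combination map (the paper's exact map and its coupling to the probability intervals are not reproduced here), a keystream generator looks like:

```python
import math

def lls_sequence(x0, r, n):
    """Combined logistic-sine chaotic sequence (assumed form: additive mix mod 1)."""
    x, out = x0, []
    for _ in range(n):
        # Logistic term plus sine term, folded back into [0, 1).
        x = (r * x * (1.0 - x) + (4.0 - r) * math.sin(math.pi * x) / 4.0) % 1.0
        out.append(x)
    return out

# Keystream that would jitter the coder's probability-interval endpoints.
key_x0, key_r = 0.3456, 3.99          # the secret key (illustrative values)
stream = lls_sequence(key_x0, key_r, 8)
print(["%.4f" % v for v in stream])
```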
NASA Astrophysics Data System (ADS)
Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe
2017-12-01
A Godunov-type unstructured finite-volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to warrant numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks, vortical structures and complex geometries.
Information content exploitation of imaging spectrometer's images for lossless compression
NASA Astrophysics Data System (ADS)
Wang, Jianyu; Zhu, Zhenyu; Lin, Kan
1996-11-01
Imaging spectrometers, such as MAIS, produce a tremendous volume of image data, with raw data rates of up to 5.12 Mbps, urgently requiring a real-time, efficient and reversible compression implementation. Between a lossy scheme with high compression ratio and a lossless scheme with high fidelity, the choice must be based on an analysis of the particular information content of each imaging spectrometer's image data. In this paper, we present a careful analysis of information-preserving compression for the imaging spectrometer MAIS, with an entropy and autocorrelation study of its hyperspectral images. First, the statistical information in an actual MAIS image, captured at Marble Bar, Australia, is measured by its entropy, conditional entropy, mutual information and autocorrelation coefficients in both the spatial dimensions and the spectral dimension. These analyses show that there is high redundancy in the spatial dimensions, but the correlation in the spectral dimension of the raw images is smaller than expected. The main reason for the nonstationarity in the spectral dimension is the instrument's discrepancies in detector response and channel amplification across different spectral bands. To restore the natural spectral correlation, we preprocess the signal in advance. There are two methods to accomplish this: onboard radiation calibration and normalization; a better result is achieved by the former. After preprocessing, the spectral correlation increases so much that it contributes substantial redundancy in addition to the spatial correlation. Finally, an on-board hardware implementation of the lossless compression is presented, with an ideal result.
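The entropy measurements described above are straightforward to reproduce on any hyperspectral cube. The sketch below computes first-order entropies of difference signals along each dimension of a synthetic cube standing in for MAIS data; it illustrates the measurement machinery, not the MAIS statistics themselves.

```python
import numpy as np

def entropy_bits(a):
    """First-order entropy in bits/sample of an integer array."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / a.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(5)
base = rng.integers(0, 4096, size=(32, 100, 100))              # bands x rows x cols
# Running mean over bands makes adjacent bands correlated, mimicking spectra.
cube = base.cumsum(axis=0) // np.arange(1, 33)[:, None, None]

for axis, name in [(0, "spectral"), (1, "along-track"), (2, "cross-track")]:
    d = np.diff(cube, axis=axis)                               # difference signal
    print("%-11s residual entropy: %.2f bits" % (name, entropy_bits(d)))
print("raw band entropy: %.2f bits" % entropy_bits(cube[0]))
```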
Lee, Chang Jae; Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Choi, Sung Wook; Kim, Ok Jun
2015-03-01
Current guidelines for cardiopulmonary resuscitation recommend chest compressions (CC) during 50% of the duty cycle (DC), in part because of the ease with which individuals may learn to achieve it with practice. However, no consideration has been given to a possible interaction between DC and the depth of CC, which has been the subject of recent study. Our aim was to determine whether a 50% DC is inappropriate for achieving sufficient chest compression depth for female and lighter rescuers. CC data collected previously from senior medical students, guided by metronome sounds with various down-stroke patterns and rates, were included in the analysis. Multiple linear regression analysis was performed to determine the association of average compression depth (ACD) with average compression rate (ACR), DC, and the physical characteristics of the performers. Expected ACD was calculated for various settings. DC, ACR, body weight, male sex, and self-assessed physical strength were significantly associated with ACD in multivariate analysis. Based on our calculations, at 50% DC, only men with an ACR of 140/min or faster, or rescuers weighing over 74 kg with an ACR of 120/min, can achieve sufficient ACD. A shorter DC is independently correlated with deeper CC during simulated cardiopulmonary resuscitation. The optimal DC recommended in current guidelines may be inappropriate for achieving sufficient compression depth, especially for female or lighter-weight rescuers.
Xhepa, Erion; Byrne, Robert A; Schulz, Stefanie; Helde, Sandra; Gewalt, Senta; Cassese, Salvatore; Linhardt, Maryam; Ibrahim, Tareq; Mehilli, Julinda; Hoppe, Katharina; Grupp, Katharina; Kufner, Sebastian; Böttiger, Corinna; Hoppmann, Petra; Burgdorf, Christof; Fusaro, Massimiliano; Ott, Ilka; Schneider, Simon; Hengstenberg, Christian; Schunkert, Heribert; Laugwitz, Karl-Ludwig; Kastrati, Adnan
2014-06-01
Vascular closure devices (VCD) have been introduced into clinical practice with the aim of increasing the procedural efficiency and clinical safety of coronary angiography. However, clinical studies comparing VCD and manual compression have yielded mixed results, and large randomised clinical trials comparing the two strategies are missing. Moreover, comparative efficacy studies between different VCD in routine clinical use are lacking. The Instrumental Sealing of ARterial puncture site - CLOSURE device versus manual compression (ISAR-CLOSURE) trial is a prospective, randomised clinical trial designed to compare the outcomes associated with the use of VCD or manual compression to achieve femoral haemostasis. The test hypothesis is that femoral haemostasis after coronary angiography achieved using VCD is not inferior to manual compression in terms of access-site-related vascular complications. Patients undergoing coronary angiography via the common femoral artery will be randomised in a 1:1:1 fashion to receive FemoSeal VCD, EXOSEAL VCD or manual compression. The primary endpoint is the incidence of the composite of arterial access-related complications (haematoma ≥5 cm, pseudoaneurysm, arteriovenous fistula, access-site-related bleeding, acute ipsilateral leg ischaemia, the need for vascular surgical/interventional treatment or documented local infection) at 30 days after randomisation. According to power calculations based on non-inferiority hypothesis testing, enrolment of 4,500 patients is planned. The trial is registered at www.clinicaltrials.gov (study identifier: NCT01389375). The safety of VCD as compared to manual compression in patients undergoing transfemoral coronary angiography remains an issue of clinical equipoise. The aim of the ISAR-CLOSURE trial is to assess whether femoral haemostasis achieved through the use of VCD is non-inferior to manual compression in terms of access-site-related vascular complications.
NASA Astrophysics Data System (ADS)
Yu, Yunluo; Pu, Guang; Jiang, Kyle
2017-12-01
This paper describes a theoretical investigation of static and dynamic characteristics of herringbone-grooved air thrust bearings. Firstly, Finite Difference Method (FDM) and Finite Volume Method (FVM) are used in combination to solve the non-linear Reynolds equation and to find the pressure distribution of the film and the total loading capacity of the bearing. The influence of design parameters on air film gap characteristics, including the air film thickness, depth of the groove and rotating speed, are analyzed based on the FDM model. The simulation results show that hydrostatic thrust bearings can achieve a better load capacity with less air consumption than herringbone grooved thrust bearings at low compressibility number; herringbone grooved thrust bearings can achieve a higher load capacity but with more air consumption than hydrostatic thrust bearing at high compressibility number; herringbone grooved thrust bearings would lose stability at high rotating speeds, and the stability increases with the depth of the grooves.
Highly stretchable carbon aerogels.
Guo, Fan; Jiang, Yanqiu; Xu, Zhen; Xiao, Youhua; Fang, Bo; Liu, Yingjun; Gao, Weiwei; Zhao, Pei; Wang, Hongtao; Gao, Chao
2018-02-28
Carbon aerogels demonstrate wide applications for their ultralow density, rich porosity, and multifunctionalities. Their compressive elasticity has been achieved by different carbons. However, reversibly high stretchability of neat carbon aerogels is still a great challenge owing to their extremely dilute brittle interconnections and poorly ductile cells. Here we report highly stretchable neat carbon aerogels with a retractable 200% elongation through hierarchical synergistic assembly. The hierarchical buckled structures and synergistic reinforcement between graphene and carbon nanotubes enable a temperature-invariable, recoverable stretching elasticity with small energy dissipation (~0.1, 100% strain) and high fatigue resistance of more than 10^6 cycles. The ultralight carbon aerogels with both stretchability and compressibility were designed as strain sensors for logic identification of sophisticated shape conversions. Our methodology paves the way to highly stretchable carbon and neat inorganic materials with extensive applications in aerospace, smart robots, and wearable devices.
SCALCE: boosting sequence compression algorithms using locally consistent encoding
Hach, Faraz; Numanagić, Ibrahim; Sahinalp, S Cenk
2012-01-01
Motivation: The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated world wide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a ‘boosting’ scheme based on Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19—when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip through the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to 3.34 improvement in the compression rate and 1.26 improvement in running time. Availability: Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when gzip option is selected, and the pigz binary is available. It is available at http://scalce.sourceforge.net. Contact: fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047557
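The boosting idea above (cluster similar reads before handing them to a generic compressor) can be demonstrated with gzip alone. The toy below uses plain lexicographic sorting rather than SCALCE's Locally Consistent Parsing, and synthetic reads; the sizes printed will vary with the random seed.

```python
import gzip
import random

random.seed(6)
# Synthetic reads: near-duplicates of many templates, interleaved in arrival order
# so that matching reads usually fall outside gzip's 32 KB window.
templates = ["".join(random.choice("ACGT") for _ in range(100)) for _ in range(400)]
reads = []
for _ in range(4000):
    t = list(random.choice(templates))
    t[random.randrange(100)] = random.choice("ACGT")   # one mutation per read
    reads.append("".join(t))

unordered = "\n".join(reads).encode()
reordered = "\n".join(sorted(reads)).encode()          # cluster similar reads
print("gzip, arrival order:", len(gzip.compress(unordered)), "bytes")
print("gzip, reordered    :", len(gzip.compress(reordered)), "bytes")
```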
A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows
Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...
2015-03-11
High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall-time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.
Nonlinear compression of temporal solitons in an optical waveguide via inverse engineering
NASA Astrophysics Data System (ADS)
Paul, Koushik; Sarma, Amarendra K.
2018-03-01
We propose a novel method, based on so-called shortcut-to-adiabatic passage techniques, to achieve fast compression of temporal solitons in a nonlinear waveguide. We demonstrate that soliton compression could be achieved, in principle, at an arbitrarily small distance by inverse-engineering the pulse width and the nonlinearity of the medium. The proposed scheme could be exploited for various short-distance communication protocols, and perhaps even in nonlinear guided-wave optics devices and the generation of ultrashort soliton pulses.
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme: the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
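A minimal sketch of such a block transform coder, assuming an 8x8 DCT and a single uniform quantizer step in place of JPEG's quantization tables and entropy coder; the test image and step size are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, step=16.0):
    # Transform, then quantize: rounding zeroes out small coefficients,
    # lowering entropy so a downstream entropy coder works efficiently.
    q = np.round(dctn(block, norm="ortho") / step)
    return q, idctn(q * step, norm="ortho")      # quantized coeffs + reconstruction

rng = np.random.default_rng(0)
image = rng.normal(128.0, 20.0, (64, 64))        # stand-in image data
nonzero = 0
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        q, rec = code_block(image[i:i + 8, j:j + 8])
        nonzero += np.count_nonzero(q)
print("fraction of nonzero coefficients:", nonzero / image.size)
```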
ERIC Educational Resources Information Center
Dailey, K. Anne
Time-compressed speech (also called compressed speech, speeded speech, or accelerated speech) is an extension of the normal recording procedure for reproducing the spoken word. Compressed speech can be used to achieve dramatic reductions in listening time without significant loss in comprehension. The implications of such temporal reductions in…
Directional amorphization of boron carbide subjected to laser shock compression.
Zhao, Shiteng; Kad, Bimal; Remington, Bruce A; LaSalvia, Jerry C; Wehrenberg, Christopher E; Behler, Kristopher D; Meyers, Marc A
2016-10-25
Solid-state shock-wave propagation is strongly nonequilibrium in nature and hence rate dependent. Using high-power pulsed-laser-driven shock compression, unprecedentedly high strain rates can be achieved; here we report directional amorphization in boron carbide polycrystals. At a shock pressure of 45-50 GPa, multiple planar faults, slightly deviated from the maximum shear direction, occur a few hundred nanometers below the shock surface. High-resolution transmission electron microscopy reveals that these planar faults are precursors of directional amorphization. It is proposed that the shear stresses cause the amorphization and that pressure assists the process by ensuring the integrity of the specimen. Thermal energy conversion calculations including heat transfer suggest that amorphization is a solid-state process. Such a phenomenon has a significant effect on the ballistic performance of B4C.
Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas
2012-01-01
Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert-determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
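The least-median-of-squares filtering step — rejecting grid-search matches that are erroneous local minimizers — can be sketched as follows. This toy version uses a pure-translation local model and a standard robust scale estimate; the forward-search stage of the LFC method is not shown, and all parameters are illustrative.

```python
import numpy as np

def lmeds_filter(displacements, trials=200, scale=2.5, seed=0):
    """Keep block-matching vectors consistent with the candidate model
    minimizing the median of squared residuals; flag the rest as outliers."""
    rng = np.random.default_rng(seed)
    d = np.asarray(displacements, float)
    best, best_med = d[0], np.inf
    for _ in range(trials):
        model = d[rng.integers(len(d))]            # candidate local translation
        med = np.median(np.sum((d - model) ** 2, axis=1))
        if med < best_med:
            best, best_med = model, med
    sigma = 1.4826 * np.sqrt(best_med) + 1e-12     # robust scale estimate
    inliers = np.linalg.norm(d - best, axis=1) < scale * sigma
    return best, inliers

rng = np.random.default_rng(1)
good = rng.normal([2.0, -1.0], 0.1, (40, 2))       # coherent displacements
bad = np.array([[9.0, 9.0], [-7.0, 4.0]])          # spurious grid-search matches
model, ok = lmeds_filter(np.vstack([good, bad]))
print("model:", model.round(2), "| inliers:", int(ok.sum()), "of", len(ok))
```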
Compressed Air Working in Chennai During Metro Tunnel Construction: Occupational Health Problems.
Kulkarni, Ajit C
2017-01-01
Chennai metropolis has been growing rapidly, and the need for a metro rail system was felt. Two corridors were planned: Corridor 1, 23 km from Washermanpet to the Airport, 14.3 km of it underground; and Corridor 2, 22 km from Chennai Central Railway Station to St. Thomas Mount, 9.7 km of it underground. The occupational health centre's role involved selection of miners and assessment of their fitness to work under compressed air; planning and execution of compression and decompression; and health monitoring and treatment of compression-related illnesses. More than thirty-five thousand man-hours of work were carried out under compressed air at pressures ranging from 1.2 to 1.9 bar absolute. There were only three cases of pain-only (Type I) decompression sickness, all treated with recompression. Vigilant medical supervision, experienced lock operators, and reduced working hours under pressure because of inclement environmental conditions (high temperature and humidity) helped achieve this low incidence. Tunnelling activity will increase in India as more cities opt for underground metro railways. The Indian standard IS 4138-1977, "Safety code for working in compressed air", urgently needs updating to keep pace with modern working methods.
Zhou, Fei; Nielson, Weston; Xia, Yi; ...
2014-10-27
First-principles prediction of the lattice thermal conductivity K_L of strongly anharmonic crystals is a long-standing challenge in solid state physics. Using recent advances in information science, we propose a systematic and rigorous approach to this problem, compressive sensing lattice dynamics (CSLD). Compressive sensing is used to select the physically important terms in the lattice dynamics model and determine their values in one shot. Non-intuitively, high accuracy is achieved when the model is trained on first-principles forces in quasi-random atomic configurations. The method is demonstrated for Si, NaCl, and Cu12Sb4S13, an earth-abundant thermoelectric with strong phonon-phonon interactions that limit the room-temperature K_L to values near the amorphous limit.
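The sparse-selection step at the heart of compressive sensing approaches like CSLD can be illustrated with an L1-regularized (Lasso) fit on a synthetic underdetermined system; the matrices below are random stand-ins for lattice-dynamics features, and the regularization strength is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_terms, n_obs = 200, 80              # more candidate terms than observations
x_true = np.zeros(n_terms)            # only a handful of terms really matter
idx = rng.choice(n_terms, 10, replace=False)
x_true[idx] = rng.normal(0.0, 1.0, 10)

A = rng.normal(0.0, 1.0, (n_obs, n_terms))        # stand-in configuration features
y = A @ x_true + 0.01 * rng.normal(size=n_obs)    # stand-in "computed forces"

# L1 penalty drives unimportant coefficients exactly to zero, selecting
# the physically relevant terms and fitting their values in one shot.
fit = Lasso(alpha=0.02, max_iter=100_000).fit(A, y)
recovered = np.flatnonzero(np.abs(fit.coef_) > 1e-3)
print("true support     :", sorted(idx))
print("recovered support:", recovered.tolist())
```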
Lee, Chang Jae; Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Choi, Sung Wook; Kim, Ok Jun
2015-01-01
Objective: Current guidelines for cardiopulmonary resuscitation recommend chest compressions (CC) during 50% of the duty cycle (DC), in part because of the ease with which individuals may learn to achieve it with practice. However, no consideration has been given to a possible interaction between DC and the depth of CC, which has been the subject of recent study. Our aim was to determine whether a 50% DC is inappropriate for achieving sufficient chest compression depth for female and lighter rescuers. Methods: Previously collected CC data, performed by senior medical students guided by metronome sounds with various down-stroke patterns and rates, were included in the analysis. Multiple linear regression analysis was performed to determine the association of average compression depth (ACD) with average compression rate (ACR), DC, and the physical characteristics of the performers. Expected ACD was calculated for various settings. Results: DC, ACR, body weight, male sex, and self-assessed physical strength were significantly associated with ACD in multivariate analysis. Based on our calculations, with a 50% DC, only men with an ACR of 140/min or faster, or with body weight over 74 kg and an ACR of 120/min, can achieve sufficient ACD. Conclusion: A shorter DC is independently correlated with deeper CC during simulated cardiopulmonary resuscitation. The optimal DC recommended in current guidelines may be inappropriate for achieving sufficient compression depth, especially for female or lighter-weight rescuers. PMID:27752567
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare state-of-the-art compression techniques for unstructured particle data. It focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm, which achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rate, run-time/throughput, and reconstruction error are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
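The kind of measurement this study describes — compression ratio and throughput per codec — is easy to reproduce for lossless codecs with Python's standard library. A minimal sketch on synthetic particle data (random float32 triples, which barely compress; real snapshots have far more structure):

```python
import bz2, lzma, time, zlib
import numpy as np

rng = np.random.default_rng(0)
# Stand-in particle snapshot: 200k random float32 positions.
data = rng.random((200_000, 3)).astype(np.float32).tobytes()

for name, compress in (("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)):
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name:4s}  ratio {len(data) / len(out):5.2f}  "
          f"throughput {len(data) / dt / 1e6:8.1f} MB/s")
```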
Schaefer, Carolyn E; Kupwade-Patil, Kunal; Ortega, Michael; Soriano, Carmen; Büyüköztürk, Oral; White, Anne E; Short, Michael P
2018-01-01
Concrete production contributes heavily to greenhouse gas emissions, thus a need exists for the development of durable and sustainable concrete with a lower carbon footprint. This can be achieved when cement is partially replaced with another material, such as waste plastic, though normally with a tradeoff in compressive strength. This study discusses progress toward a high/medium strength concrete with a dense, cementitious matrix that contains an irradiated plastic additive, recovering the compressive strength while displacing concrete with waste materials to reduce greenhouse gas generation. Compressive strength tests showed that the addition of high dose (100kGy) irradiated plastic in multiple concretes resulted in increased compressive strength as compared to samples containing regular, non-irradiated plastic. This suggests that irradiating plastic at a high dose is a viable potential solution for regaining some of the strength that is lost when plastic is added to cement paste. X-ray Diffraction (XRD), Backscattered Electron Microscopy (BSE), and X-ray microtomography explain the mechanisms for strength retention when using irradiated plastic as a filler for cement paste. By partially replacing Portland cement with a recycled waste plastic, this design may have a potential to contribute to reduced carbon emissions when scaled to the level of mass concrete production. Copyright © 2017 Elsevier Ltd. All rights reserved.
Xu, Ou; Zhang, Jiejun; Yao, Jianping
2016-11-01
High-speed and high-resolution interrogation of a fiber Bragg grating (FBG) sensor based on microwave photonic filtering and chirped microwave pulse compression is proposed and experimentally demonstrated. In the proposed sensor, a broadband linearly chirped microwave waveform (LCMW) is applied to a single-passband microwave photonic filter (MPF), implemented based on phase modulation and phase-modulation-to-intensity-modulation conversion using a phase modulator (PM) and a phase-shifted FBG (PS-FBG). Since the center frequency of the MPF is a function of the central wavelength of the PS-FBG, when the PS-FBG experiences a strain or temperature change, the wavelength is shifted, which leads to a change in the center frequency of the MPF. At the output of the MPF, a filtered chirped waveform with a center frequency corresponding to the applied strain or temperature is obtained. By compressing the filtered LCMW in a digital signal processor, the resolution is improved. The proposed interrogation technique is experimentally demonstrated, and the results show that an interrogation sensitivity and resolution as high as 1.25 ns/με and 0.8 με, respectively, are achieved.
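The pulse-compression step — correlating the chirped waveform against a replica so that the broad chirp collapses to a narrow peak whose position encodes the measurand — can be sketched numerically; the chirp parameters and the delay standing in for the wavelength shift are illustrative assumptions.

```python
import numpy as np

fs = 2_000.0                          # sample rate (arbitrary units)
t = np.arange(0, 1.0, 1 / fs)
f0, f1 = 100.0, 800.0                 # chirp start/stop frequencies
k = (f1 - f0) / t[-1]                 # chirp rate
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

delay = 230                           # samples; stands in for the sensed shift
received = np.roll(chirp, delay)      # delayed copy of the chirp

# Matched-filter compression: the correlation peak pinpoints the delay,
# giving far better resolution than the uncompressed chirp envelope.
corr = np.correlate(received, chirp, mode="full")
print("estimated delay:", int(corr.argmax()) - (len(chirp) - 1))
```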
Disk-based compression of data from genome sequencing.
Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz
2015-05-01
High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. The more promising solutions to this problem are disk-based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose ORCOM (overlapping reads compression with minimizers), a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain 0.317 bits per base as the compression ratio, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
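The minimizer idea is compact enough to sketch: reads that overlap on the genome tend to share their minimizer, so binning by minimizer gathers similar reads into the same disk bucket, where their redundancy is easy to exploit without large main memory. A toy version with illustrative reads and parameters:

```python
from collections import defaultdict

def minimizer(read, k=10):
    # The lexicographically smallest k-mer of the read; substantially
    # overlapping reads usually share it, making it a good bucket key.
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def bin_reads(reads, k=10):
    buckets = defaultdict(list)
    for r in reads:
        buckets[minimizer(r, k)].append(r)
    # In a disk-based scheme each bucket is its own temporary file,
    # compressed independently, so main-memory use stays small.
    return buckets

reads = ["GGGGCCAAAAAAAAAATT",   # overlaps the next read
         "CCAAAAAAAAAATTGGCC",
         "TTTTGGGGCCCCTTTTGG"]   # unrelated read, separate bucket
for key, group in bin_reads(reads).items():
    print(key, group)
```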
NASA Astrophysics Data System (ADS)
Li, Y.; Capatina, D.; D'Amico, K.; Eng, P.; Hawreliak, J.; Graber, T.; Rickerson, D.; Klug, J.; Rigg, P. A.; Gupta, Y. M.
2017-06-01
Coupling laser-driven compression experiments to the x-ray beam at the Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS) of Argonne National Laboratory requires state-of-the-art x-ray focusing, pulse isolation, and diagnostics capabilities. The 100 J UV pulsed laser system can be fired once every 20 minutes, so precise alignment and focusing of the x-rays on each new sample must be fast and reproducible. Multiple Kirkpatrick-Baez (KB) mirrors are used to achieve a focal spot size as small as 50 μm at the target, while the strategic placement of scintillating screens, cameras, and detectors allows for fast diagnosis of the beam shape, intensity, and alignment of the sample to the x-ray beam. In addition, a series of x-ray choppers and shutters are used to ensure that the sample is exposed to only a single x-ray pulse (~80 ps) during the dynamic compression event; these require highly precise synchronization. Details of the technical requirements, layout, and performance of these instruments will be presented. Work supported by DOE/NNSA.
Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa
NASA Astrophysics Data System (ADS)
Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.
2017-06-01
Multiple reverberation compression can achieve higher pressure and higher temperature at lower entropy. It can provide an important validation for elaborate, broader planetary models and can simulate the inertial confinement fusion capsule implosion process. In this work, we have investigated the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initially dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium in the pressure-density-temperature (P-ρ-T) range of 1-150 GPa, 0.1-1.1 g cm^-3, and 4600-24,000 K was measured. The optical radiation emanating from the WDM helium was recorded, and the particle velocity profiles detected from the sample/window interface were obtained successfully for up to 10 compressions. The optical radiation results imply that dense He becomes rather opaque after the 2nd compression, at a density of about 0.3 g cm^-3 and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed by the particle velocity measurements. The multiple compression technique efficiently enhances the density and the compressibility, and our multiple compression ratios (η_i = ρ_i/ρ_0, i = 1-10) of helium are greatly improved, from 3.5 to 43, based on the initial precompressed density (ρ_0). The relative compression ratio (η_i' = ρ_i/ρ_(i-1)) increases with pressure in the lower-density regime and conversely decreases in the higher-density regime, with a turning point at the 3rd and 4th compression states under the different loading conditions. This nonmonotonic evolution of the compression is controlled by two factors: the excitation of internal degrees of freedom increases the compressibility, while the repulsive interactions between the particles decrease the compressibility at the onset of electron excitation and ionization. In the P-ρ-T contour combining the experiments and the calculations, our multiple compression states from insulating to semiconducting fluid (from transparent to opaque fluid) are illustrated. Our results give an elaborate validation of EOS models and have applications for planetary and stellar opaque atmospheres.
NASA Technical Reports Server (NTRS)
Gleich, D.
1972-01-01
The fabrication of helicopter rotary wings from composite materials is discussed. Two composite spar specimens, each consisting of a compressively prestressed stainless steel liner over-wrapped with pretensioned fiberglass, were constructed. High liner strength and toughness, together with the prescribed prestresses and final sizing of the part, are achieved by means of cryogenic stretch forming of the fiber-wrapped composite spar at minus 320 F, followed by release of the forming pressure and warm-up to room temperature. The prestresses are chosen to provide residual compression in the metal liner under operating loads.
Compression of facsimile graphics for transmission over digital mobile satellite circuits
NASA Astrophysics Data System (ADS)
Dimolitsas, Spiros; Corcoran, Frank L.
A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed are capable of achieving a compression of approximately 32 to 1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.
A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.
Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao
2018-04-05
Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. Compressive sensing-based detection algorithms can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
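For reference, the baseline OMP reconstruction that the proposed two-stage processor improves upon can be sketched in a few lines; the sensing matrix and sparse scene below are synthetic stand-ins, and the paper's coarse-to-fine two-stage refinement is not shown.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    x_s = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256))               # random sensing matrix
A /= np.linalg.norm(A, axis=0)               # unit-norm dictionary columns
x_true = np.zeros(256)
x_true[[10, 70, 200]] = [1.0, -0.5, 0.8]     # sparse "scene" (target positions)
x_hat = omp(A, A @ x_true, sparsity=3)
print("recovered support:", np.flatnonzero(x_hat).tolist())
```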
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full-public adoption of HDR content is, however, hindered by the lack of standards for evaluating quality, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
Nonlinear frequency compression: effects on sound quality ratings of speech and music.
Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas
2013-03-01
Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality.
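The frequency remapping at the core of NFC is often described as compressing the band above a cutoff frequency by a fixed ratio; a simplified, piecewise-linear sketch of that mapping follows. Commercial implementations apply a nonlinear curve above the cutoff, and the parameter values here are illustrative, not those used in the studies.

```python
def nfc_map(f_hz, cutoff=2000.0, ratio=2.0):
    # Frequencies below the cutoff pass unchanged; above it, the
    # distance from the cutoff is divided by the compression ratio,
    # squeezing high-frequency energy into the audible (aidable) band.
    return f_hz if f_hz <= cutoff else cutoff + (f_hz - cutoff) / ratio

for f in (500.0, 2000.0, 4000.0, 8000.0):
    print(f"{f:6.0f} Hz -> {nfc_map(f):6.0f} Hz")
```

The two NFC parameters studied above map directly onto this sketch: lowering the cutoff moves more of the spectrum into the compressed region (the change listeners noticed most), while raising the ratio packs that region more tightly.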
Martin, Philip; Theobald, Peter; Kemp, Alison; Maguire, Sabine; Maconochie, Ian; Jones, Michael
2013-08-01
Sixty-nine certified CPR providers were recruited from European and Advanced Paediatric Life Support training courses and randomly allocated to a 'no-feedback' or 'feedback' group, performing two-thumb and two-finger chest compressions on a "physiological", instrumented resuscitation manikin. Baseline data were recorded without feedback before chest compressions were repeated with one group receiving feedback. Indices were calculated that defined chest compression quality, based upon comparison of the chest wall displacement to the targets of four internationally recommended parameters: chest compression depth, release force, chest compression rate and compression duty cycle. Baseline data were consistent with other studies, with <1% of chest compressions performed by providers simultaneously achieving the targets of the four internationally recommended parameters. During the 'experimental' phase, 34 CPR providers benefitted from the provision of 'real-time' feedback which, on analysis, coincided with a statistically significant improvement in compression rate, depth and duty cycle quality across both compression techniques (all measures: p<0.001). Feedback enabled providers to simultaneously achieve the four targets in 75% (two-finger) and 80% (two-thumb) of chest compressions. Real-time feedback produced a dramatic increase in the quality of chest compression (i.e. from <1% to 75-80%). If these results transfer to a clinical scenario, this technology could, for the first time, support providers in consistently performing accurate chest compressions during infant CPR and thus potentially improve clinical outcomes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
H.264/AVC Video Compression on Smartphones
NASA Astrophysics Data System (ADS)
Sharabayko, M. P.; Markov, N. G.
2017-01-01
In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of the tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, although the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.
Real-Time Aggressive Image Data Compression
1990-03-31
Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. Summary: The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression, implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates.
Lossless compression techniques for maskless lithography data
NASA Astrophysics Data System (ADS)
Dai, Vito; Zakhor, Avideh
2002-07-01
Future lithography systems must produce denser chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25-to-1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards, JBIG and JPEG-LS; a wavelet-based technique, SPIHT; the general file compression techniques ZIP and BZIP2; our own 2D-LZ technique; and a simple list-of-rectangles representation, RECT. Layouts rasterized both to black-and-white pixels and to 32-level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
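Since LZ77 is the algorithm behind the chosen ZIP approach, the decompression loop the decoder-writer chip must implement is worth sketching. This toy decoder consumes (offset, length, literal) triples, a simplified token format rather than the actual DEFLATE bitstream; window size and token layout are the kind of parameters the hardware design would tune.

```python
def lz77_decode(tokens):
    """Decode (offset, length, literal) triples. The output buffer acts
    as the sliding window; a hardware decoder keeps it in on-chip RAM."""
    out = bytearray()
    for offset, length, literal in tokens:
        for _ in range(length):
            out.append(out[-offset])     # copy one byte from the window
        out.append(literal)
    return bytes(out)

# "a", "b" as literals, then "copy 4 bytes starting 2 back" plus literal "c"
tokens = [(0, 0, ord("a")), (0, 0, ord("b")), (2, 4, ord("c"))]
print(lz77_decode(tokens))               # b'abababc'
```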
Ultra-porous titanium oxide scaffold with high compressive strength
Tiainen, Hanna; Lyngstadaas, S. Petter; Ellingsen, Jan Eirik
2010-01-01
Highly porous and well interconnected titanium dioxide (TiO2) scaffolds with compressive strength above 2.5 MPa were fabricated without compromising the desired pore architectural characteristics, such as high porosity, appropriate pore size, surface-to-volume ratio, and interconnectivity. Processing parameters and pore architectural characteristics were investigated in order to identify the key processing steps and morphological properties that contributed to the enhanced strength of the scaffolds. Cleaning of the TiO2 raw powder removed phosphates but introduced sodium into the powder, which was suggested to decrease the slurry stability. Strong correlation was found between compressive strength and both replication times and solid content in the ceramic slurry. Increase in the solid content resulted in more favourable sponge loading, which was achieved due to the more suitable rheological properties of the ceramic slurry. Repeated replication process induced only negligible changes in the pore architectural parameters indicating a reduced flaw size in the scaffold struts. The fabricated TiO2 scaffolds show great promise as load-bearing bone scaffolds for applications where moderate mechanical support is required. PMID:20711636
Improvement of pump tubes for gas guns and shock tube drivers
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.
1990-01-01
In a pump tube, a gas is mechanically compressed, producing very high pressures and sound speeds. The intensely heated gas produced in such a tube can be used to drive light gas guns and shock tubes. Three concepts are presented that have the potential to allow substantial reductions in the size and mass of the pump tube. The first concept involves the use of one or more diaphragms in the pump tube, thus replacing a single compression process by multiple, successive compressions. The second concept involves a radical reduction in the length-to-diameter ratio of the pump tube and the pump tube piston. The third concept involves shock heating of the working gas by high explosives in a cylindrical-geometry reusable device. Preliminary design analyses are performed on all three concepts and they appear to be quite feasible. Reductions in the length and mass of the pump tube by factors of up to about 11 and about 7, respectively, are predicted, relative to a benchmark conventional pump tube.
Compressive light field imaging
NASA Astrophysics Data System (ADS)
Ashok, Amit; Neifeld, Mark A.
2010-04-01
Light field imagers such as the plenoptic and the integral imagers inherently measure projections of the four dimensional (4D) light field scalar function onto a two dimensional sensor and therefore, suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatioangular resolution trade-off and allow high-resolution capture of the (4D) light field function with multiple measurements at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that, compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time compared to a conventional light field imager in order to achieve an equivalent light field reconstruction quality.
Large-deformation and high-strength amorphous porous carbon nanospheres
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Mao, Shimin; Yang, Jia; Shang, Tao; Song, Hongguang; Mabon, James; Swiech, Wacek; Vance, John R.; Yue, Zhufeng; Dillon, Shen J.; Xu, Hangxun; Xu, Baoxing
2016-04-01
Carbon is one of the most important materials extensively used in industry and our daily life. Crystalline carbon materials such as carbon nanotubes and graphene possess ultrahigh strength and toughness. In contrast, amorphous carbon is known to be very brittle and can sustain little compressive deformation. Inspired by biological shells and honeycomb-like cellular structures in nature, we introduce a class of hybrid structural designs and demonstrate that amorphous porous carbon nanospheres with a thin outer shell can simultaneously achieve high strength and sustain large deformation. The amorphous carbon nanospheres were synthesized via a low-cost, scalable and structure-controllable ultrasonic spray pyrolysis approach using energetic carbon precursors. In situ compression experiments on individual nanospheres show that the amorphous carbon nanospheres with an optimized structure can sustain beyond 50% compressive strain. Both experiments and finite element analyses reveal that the buckling deformation of the outer spherical shell dominates the improvement of strength while the collapse of inner nanoscale pores driven by twisting, rotation, buckling and bending of pore walls contributes to the large deformation.
High-performance software-only H.261 video compression on PC
NASA Astrophysics Data System (ADS)
Kasperovich, Leonid
1996-03-01
This paper describes an implementation of a software H.261 codec for the PC that takes advantage of the fast computational algorithms for DCT-based video compression presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time software-only videoconferencing solution operating across a wide range of network bandwidths, frame rates, and input video resolutions. As network bandwidths increase, higher frame rates and resolutions can be transmitted, which in turn requires a software codec able to compress pictures of CIF (352 x 288) resolution at up to 30 frames/sec. Running on a 133 MHz Pentium PC, the codec presented is capable of compressing video in CIF format at 21-23 frames/sec. This result is comparable to known hardware-based H.261 solutions, but it doesn't require any specific hardware. The methods used to achieve high performance and the program optimization technique for the Pentium microprocessor are presented, along with a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computational process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciatti, Stephen A.
The history, present and future of the compression ignition engine is a fascinating story that spans over 100 years, from the time of Rudolf Diesel to the highly regulated and computerized engines of the 21st Century. The development of these engines provided inexpensive, reliable and high power density machines to allow transportation, construction and farming to be more productive with less human effort than in any previous period of human history. The concept that fuels could be consumed efficiently and effectively with only the ignition of pressurized and heated air was a significant departure from the previous coal-burning architecture of the 1800s. Today, the compression ignition engine is undergoing yet another revolution. The equipment that provides transport, builds roads and infrastructure, and harvests the food we eat needs to meet more stringent requirements than ever before. How successfully 21st Century engineers are able to make compression ignition engine technology meet these demands will be of major influence in assisting developing nations (with over 50% of the world’s population) achieve the economic and environmental goals they seek.
Mechanical properties in crumple-formed paper derived materials subjected to compression.
Hanaor, D A H; Flores Johnson, E A; Wang, S; Quach, S; Dela-Torre, K N; Gan, Y; Shen, L
2017-06-01
The crumpling of precursor materials to form dense three-dimensional geometries offers an attractive route towards the utilisation of minor-value waste materials. Crumple-forming results in a mesostructured system in which mechanical properties of the material are governed by complex cross-scale deformation mechanisms. Here we investigate the physical and mechanical properties of dense compacted structures fabricated by the confined uniaxial compression of a cellulose tissue to yield crumpled mesostructuring. A total of 25 specimens of various densities were tested under compression. Crumple-formed specimens exhibited densities in the range 0.8-1.3 g cm^-3 and showed high strength-to-weight characteristics, achieving ultimate compressive strength values of up to 200 MPa under both quasi-static and high strain rate loading conditions and deformation energy that compares well to engineering materials of similar density. The materials fabricated in this work and their mechanical attributes demonstrate the potential of crumple-forming approaches in the fabrication of novel energy-absorbing materials from low-cost precursors such as recycled paper. Stiffness and toughness of the materials exhibit density dependence, suggesting this forming technique further allows controllable impact energy dissipation rates in dynamic applications.
Influence of rate of force application during compression on tablet capping.
Sarkar, Srimanta; Ooi, Shing Ming; Liew, Celine Valeria; Heng, Paul Wan Sia
2015-04-01
The root cause of tablet capping and possible processing remediation were investigated using a specially designed tablet press with an air compensator installed above the precompression roll to limit compression force and allow extended dwell time in the precompression event. Using acetaminophen-starch (77.9:22.1) as a model formulation, tablets were prepared by various combinations of precompression and main compression forces, set precompression thickness, and turret speed. The rate of force application (RFA) was the main factor contributing to the tablet mechanical strength and capping. When the target force was above the force required for strong interparticulate bond formation, the resulting high RFA contributed to more pronounced air entrapment, uneven force distribution and, consequently, stratified densification in the compact together with high viscoelastic recovery. These factors collectively contributed to the tablet capping. As extended dwell time assisted particle rearrangement and air escape, a denser and more homogeneous packing in the die could be achieved. Applying a low precompression force with extended dwell time, followed by application of the main compression force for strong interparticulate bond formation, was the most beneficial option for solving the capping problem. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
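A minimal sketch of the pipeline described above — DWT, energy-packing-efficiency thresholding, and a binary significance map — using the PyWavelets library. The preprocessing stage and the run-length coding of the map are omitted, the three-group thresholding is collapsed into a single global threshold, and the test signal is synthetic.

```python
import numpy as np
import pywt

def compress(signal, wavelet="db4", level=4, epe=0.999):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    # Keep the largest-magnitude coefficients packing `epe` of the energy.
    order = np.argsort(np.abs(flat))[::-1]
    cum_energy = np.cumsum(flat[order] ** 2) / np.sum(flat**2)
    kept = order[: np.searchsorted(cum_energy, epe) + 1]
    sig_map = np.zeros(flat.size, dtype=bool)
    sig_map[kept] = True                 # binary significance map
    return sig_map, flat[sig_map]        # the map would be run-length encoded

t = np.linspace(0.0, 1.0, 1024)
ecg_like = np.sin(2 * np.pi * 5 * t) + np.exp(-((t - 0.5) / 0.01) ** 2)
sig_map, values = compress(ecg_like)
print("fraction of significant coefficients:", sig_map.mean())
```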
Continuous direct compression as manufacturing platform for sustained release tablets.
Van Snick, B; Holman, J; Cunningham, C; Kumar, A; Vercruysse, J; De Beer, T; Remon, J P; Vervaet, C
2017-03-15
This study presents a framework for process and product development on a continuous direct compression manufacturing platform. A challenging sustained release formulation with a high content of a poorly flowing, low-density drug was selected. Two HPMC grades were evaluated as matrix former: standard Methocel CR and directly compressible Methocel DC2. The feeding behavior of each formulation component was investigated by deriving feed factor profiles. The maximum feed factor was used to estimate the drive command and depended strongly upon the density of the material. Furthermore, the shape of the feed factor profile allowed definition of a customized refill regime for each material. Inline NIR spectroscopy was used to estimate the residence time distribution (RTD) in the mixer and monitor blend uniformity. Tablet content and weight variability were determined as additional measures of mixing performance. For Methocel CR, the best axial mixing (i.e. feeder fluctuation dampening) was achieved when an impeller with a high number of radial mixing blades operated at low speed. However, the variability in tablet weight and content uniformity deteriorated under this condition. One can therefore conclude that balancing axial mixing against tablet quality is critical for Methocel CR. However, reformulating with the directly compressible Methocel DC2 as matrix former improved tablet quality vastly. Furthermore, both process and product were significantly more robust to changes in process and design variables. This observation underpins the importance of flowability during continuous blending and die-filling. At the compaction stage, blends with Methocel CR showed better tabletability, driven by a higher compressibility, as the smaller CR particles have a higher bonding area. However, tablets of similar strength were achieved using Methocel DC2 by targeting equal porosity. Compaction pressure impacted tablet properties and dissolution. Hence, controlling thickness during continuous manufacturing of sustained release tablets was crucial to ensure reproducible dissolution. Copyright © 2017 Elsevier B.V. All rights reserved.
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
First-principles molecular dynamics simulations of anorthite (CaAl2Si2O8) glass at high pressure
NASA Astrophysics Data System (ADS)
Ghosh, Dipta B.; Karki, Bijaya B.
2018-06-01
We report a first-principles molecular dynamics study of the equation of state, structural, and elastic properties of CaAl2Si2O8 glass at 300 K as a function of pressure up to 155 GPa. Our results for the ambient pressure glass show that: (1) as with other silicates, Si atoms remain mostly (> 95%) under tetrahedral oxygen surroundings; (2) unlike anorthite crystal, high-coordination (> 4) Al atoms are present with 30% abundance; (3) both non-bridging (8%) and triply coordinated (17%) oxygen are significantly present. To achieve the glass configurations at various pressures, we use two different simulation schedules: cold and hot compression. Cold compression refers to sequential compression at 300 K. Compression at 3000 K and subsequent isochoric quenching to 300 K is considered hot compression. At the initial stages of compression (0-10 GPa), smooth increases in bond distance and coordination occur in the hot-compressed glass, whereas in cold compression, Si (and Al to some extent) displays mainly topological changes (without significantly affecting the average bond distance or coordination) in this pressure interval. Further increase in pressure results in gradual increases in mean coordination, with Si-O (Al-O) coordination eventually reaching and remaining at 6 (6.5) at the highest compression. Similarly, the ambient pressure Ca-O coordination of 5.9 increases to 9.5 at 155 GPa. The continuous pressure-induced increase in the proportion of oxygen triclusters, along with the appearance and increasing abundance of tetrahedral oxygens, results in a mean O-T (T = Si and Al) coordination of > 3 from a value of 2.1 at ambient pressure. Due to the absence of a kinetic barrier, the hot-compressed glasses consistently produce greater densities and higher coordination numbers than the cold compression cases. Decompressed glasses show irreversible compaction along with retention of high-coordination species when decompressed from pressures ≥ 10 GPa. The different density retention amounts (12, 17, and 20% when decompressed from 12, 40, and 155 GPa, respectively) signify that the degree of irreversibility depends on the peak pressure before decompression. The calculated compressional and shear wave velocities (5 and 3 km/s at 0 GPa) for the cold-compressed case display a sluggish pressure response in the 0-10 GPa interval, as opposed to a smooth increase in the hot-compressed one. The shear velocity saturates rather rapidly at a value of 5 km/s, whereas the compressional wave velocity increases continuously, reaching/exceeding 12.5 km/s at 155 GPa. These structural details suggest that the pressure response of the cold-compressed glasses is not only inherently different in the 0-10 GPa interval; their density, coordination, and wave velocity data are also consistently lower than those of the hot-compressed glasses. Hot-compressed glasses may, therefore, be the better analog in the study of high-pressure silicate melts.
Sirisha, Pathuri Lakshmi; Babu, Govada Kishore; Babu, Puttagunta Srinivasa
2014-01-01
Ambulatory blood pressure monitoring is regarded as the gold standard for hypertensive therapy in non-dipping hypertension patients. A novel compression-coated formulation of captopril and hydrochlorothiazide (HCTZ) was developed in order to improve the efficacy of antihypertensive therapy, considering the half-lives of both drugs. The synergistic action of combination therapy can be effectively achieved by sustained release of captopril (t1/2 = 2.5 h) and fast release of HCTZ (average t1/2 = 9.5 h). The sustained release floating tablets of captopril were prepared using a 2^3 factorial design employing three polymers, i.e., ethyl cellulose (EC), carbopol, and xanthan gum, at two levels. The formulations (CF1-CF8) were optimized using analysis of variance for two response variables, buoyancy and T50%. Among the three polymers employed, the coefficients and P values for the response variables buoyancy and T50% using EC were found to be 3.824, 0.028 and 0.0196, 0.046, respectively. From the coefficients and P values for the two response variables, formulation CF2, which contains the EC polymer alone at a high level, was selected as optimal. The CF2 formulation was further compression coated with an optimized gastric-dispersible HCTZ layer (HF9). The compression-coated tablet was further evaluated using drug release kinetics. The Q value of the HCTZ layer is achieved within 20 min following first-order release, whereas the Q value of captopril was obtained at 6.5 h following the Higuchi model, demonstrating that rapid release of HCTZ and slow release of captopril are achieved. The mechanism of drug release was analyzed using the Peppas equation, which showed n > 0.90, confirming a case-II transport mechanism for drug release. PMID:25006552
High average power magnetic modulator for metal vapor lasers
Ball, Don G.; Birx, Daniel L.; Cook, Edward G.; Miller, John L.
1994-01-01
A three-stage magnetic modulator utilizing magnetic pulse compression designed to provide a 60 kV pulse to a copper vapor laser at a 4.5 kHz repetition rate is disclosed. This modulator operates at 34 kW input power. The circuit includes a step up auto transformer and utilizes a rod and plate stack construction technique to achieve a high packing factor.
A CAM-based LZ data compression IC
NASA Technical Reports Server (NTRS)
Winters, K.; Bode, R.; Schneider, E.
1993-01-01
A custom CMOS processor is introduced that implements the Data Compression Lempel-Ziv (DCLZ) standard, a variation of the LZ2 Algorithm. This component presently achieves a sustained compression and decompression rate of 10 megabytes/second by employing an on-chip content-addressable memory for string table storage.
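The DCLZ string-table scheme belongs to the LZ78/LZW family; a software sketch of LZW compression shows the role the string table plays. In the chip, a content-addressable memory answers the "is this string in the table?" query in a single hardware lookup, which is what sustains the quoted throughput. The actual DCLZ token format is not reproduced here.

```python
def lzw_compress(data):
    """LZW (an LZ78-family scheme): grow a string table on the fly and
    emit the table index of each longest-matching prefix. The dictionary
    lookup below is what an on-chip CAM performs in one cycle."""
    table = {bytes([i]): i for i in range(256)}   # seed with single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # extend the current match
        else:
            out.append(table[w])                  # emit code for the match
            table[wc] = len(table)                # add the new string
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

print(lzw_compress(b"abababababab"))              # 12 bytes -> 6 codes
```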
NASA Astrophysics Data System (ADS)
Shiri, Ramin; Safari, Ebrahim; Bananej, Alireza
2018-04-01
We numerically investigate controllable chirped-pulse compression in a one-dimensional photonic structure containing a nematic liquid crystal defect layer, using the temperature-dependent refractive index of the liquid crystal. We consider the structure under irradiation by near-infrared ultra-short laser pulses polarized parallel to the liquid crystal director at normal incidence. It is found that the dispersion behaviour, and consequently the compression ability of the system, can be changed in a controlled manner by varying the defect temperature. When the temperature increased from 290 to 305 K, the transmitted pulse duration decreased from 75 to 42 fs in the middle of the structure. As a result, a novel low-loss tunable pulse compressor with a compact size and high compression factor is achieved. The transfer matrix method is utilized for numerical simulations of the band structure and reflection/transmission spectra of the structure under investigation.
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but at the cost of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different size data files are graphically presented and discussed in the paper. Adjustments required for optimum performance of the algorithms, relative to theoretically achievable limits, are outlined.
An Optimal Seed Based Compression Algorithm for DNA Sequences
Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan
2016-01-01
This paper proposes a seed-based lossless compression algorithm for DNA sequences which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868
NASA Astrophysics Data System (ADS)
Bright, Ido; Lin, Guang; Kutz, J. Nathan
2013-12-01
Compressive sensing is used to determine the flow characteristics around a cylinder (Reynolds number and pressure/flow field) from a small number of pressure measurements on the cylinder. Using a supervised machine learning strategy, library elements encoding the dimensionally reduced dynamics are computed for various Reynolds numbers. Convex L1 optimization is then used with a limited number of pressure measurements on the cylinder to reconstruct, or decode, the full pressure field and the resulting flow field around the cylinder. Aside from the highly turbulent regime (large Reynolds number), where only the Reynolds number can be identified, accurate reconstruction of the pressure field and Reynolds number is achieved. The proposed data-driven strategy thus achieves encoding of the fluid dynamics using the L2 norm, and robust decoding (flow field reconstruction) using the sparsity-promoting L1 norm.
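The decoding step amounts to solving an L1-regularized least-squares problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1, where the columns of A hold the library elements evaluated at the sensor locations. A minimal iterative soft-thresholding (ISTA) solver is sketched below, with a random matrix standing in for the learned library:

```python
import numpy as np

def ista(A, b, lam=0.1, iters=2000):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))          # 32 sensors, 64 library modes
x_true = np.zeros(64); x_true[[3, 47]] = [1.5, -2.0]  # sparse in the library
b = A @ x_true                             # the few pressure measurements
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5)) # indices of modes found active
```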
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as satellite monitoring of the Earth's environment or infrastructure inspection by unmanned airborne vehicles, will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages over using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into two 8-bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit depth format. Preliminary results show that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
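The byte-split mapping the method starts from is simple to state; each resulting 8-bit plane is then handed to a standard codec, and the paper's contribution lies in allocating rate between the two. A lossless round-trip sketch (the function names are ours, not from the paper):

```python
import numpy as np

def split_16bit(img16: np.ndarray):
    """Map a 16-bit image to two 8-bit images: most and least significant bytes."""
    msb = (img16 >> 8).astype(np.uint8)    # coarse radiometry, visually smooth
    lsb = (img16 & 0xFF).astype(np.uint8)  # fine detail, often noise-like
    return msb, lsb

def merge_16bit(msb: np.ndarray, lsb: np.ndarray) -> np.ndarray:
    """Inverse mapping: recombine the two 8-bit planes into 16-bit data."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

img = np.random.default_rng(1).integers(0, 1 << 16, (4, 4), dtype=np.uint16)
msb, lsb = split_16bit(img)                # each plane goes to an 8-bit codec
assert np.array_equal(merge_16bit(msb, lsb), img)  # lossless round trip
```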
Basic concepts for the design of high-efficiency single-junction and multibandgap solar cells
NASA Technical Reports Server (NTRS)
Fan, J. C. C.
1985-01-01
Concepts for obtaining practical solar-cell modules with one-sun efficiencies up to 30 percent at air mass 1 are now well understood. Such high-efficiency modules utilize multibandgap structures. To achieve module efficiencies significantly above 30 percent, it is necessary to employ different concepts such as spectral compression and broad-band detection. A detailed description of concepts for the design of high-efficiency multibandgap solar cells is given.
NASA Astrophysics Data System (ADS)
Chuang, Cheng-Hung; Chen, Yen-Lin
2013-02-01
This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images using an encryption method for possible application in optical transmission systems. Steganographic optical image encryption based on the DRPE technique has been investigated as a way to hide secret data in encrypted images. However, DRPE techniques are vulnerable to attacks, and many of the data hiding methods used in DRPE systems distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted image quality and performs a prior encryption process. Thus, the DRPE technique enables a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm can cipher the compressed binary bits and secret data for advanced security. Experimental results show that the proposed system achieves a high data embedding capacity and lossless reconstruction of the original images.
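Bit-plane decomposition, the first step of the proposed pipeline, turns an 8-bit image into eight binary images that a bi-level coder such as JBIG2 can compress; the JBIG2 and RSA stages are assumed to be external codecs here. A sketch of the decomposition and its lossless inverse:

```python
import numpy as np

def to_bit_planes(img8: np.ndarray) -> list:
    """Decompose an 8-bit image into 8 binary planes (LSB first)."""
    return [(img8 >> k) & 1 for k in range(8)]

def from_bit_planes(planes: list) -> np.ndarray:
    """Losslessly rebuild the image from its bit planes."""
    img = np.zeros_like(planes[0], dtype=np.uint8)
    for k, plane in enumerate(planes):
        img |= plane.astype(np.uint8) << k
    return img

img = np.random.default_rng(2).integers(0, 256, (8, 8), dtype=np.uint8)
planes = to_bit_planes(img)        # each binary plane goes to the JBIG2 coder
assert np.array_equal(from_bit_planes(planes), img)
```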
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, and reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, LW-FQZip, for compressing FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are used to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with a general-purpose compression algorithm such as LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression, contributing to state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
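The stream-splitting front end is easy to picture: a FASTQ record is four lines, and each field is routed to its own specialized encoder. A hypothetical helper illustrating the parse (the mapping, incremental, and run-length-limited encoders themselves are beyond this sketch):

```python
import itertools

def split_fastq_streams(path: str):
    """Parse a FASTQ file into the three streams compressed separately:
    metadata (headers), short reads, and quality score strings."""
    meta, reads, quals = [], [], []
    with open(path) as fh:
        while True:
            record = list(itertools.islice(fh, 4))   # FASTQ records are 4 lines
            if len(record) < 4:
                break
            meta.append(record[0].rstrip())   # '@...' header -> incremental coding
            reads.append(record[1].rstrip())  # bases -> reference-based mapping
            quals.append(record[3].rstrip())  # qualities -> run-length-limited coding
    return meta, reads, quals
```

Each stream is then encoded independently and the three results are packed and compressed together, e.g. with LZMA.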
Real time network traffic monitoring for wireless local area networks based on compressed sensing
NASA Astrophysics Data System (ADS)
Balouchestani, Mohammadreza
2017-05-01
A wireless local area network (WLAN) is an important type of wireless network that connects wireless nodes in a local area. WLANs suffer from problems such as network load balancing, high energy consumption, and a heavy sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, a good record for WLANs, and increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good basis for establishing high-quality local area networks. The architecture enables continuous data acquisition and compression of WLAN signals and is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog CS framework is applied at the sensing step, before the analog-to-digital converter, to generate a compressed version of the input signal. At the receiver side, a reconstruction algorithm is applied to recover the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) at each wireless node by 15%.
Directional amorphization of boron carbide subjected to laser shock compression
Zhao, Shiteng; Kad, Bimal; Remington, Bruce A.; LaSalvia, Jerry C.; Wehrenberg, Christopher E.; Behler, Kristopher D.; Meyers, Marc A.
2016-01-01
Solid-state shock-wave propagation is strongly nonequilibrium in nature and hence rate dependent. Using high-power pulsed-laser-driven shock compression, unprecedentedly high strain rates can be achieved; here we report directional amorphization in boron carbide polycrystals. At a shock pressure of 45∼50 GPa, multiple planar faults, slightly deviated from the maximum shear direction, occur a few hundred nanometers below the shock surface. High-resolution transmission electron microscopy reveals that these planar faults are precursors of directional amorphization. It is proposed that the shear stresses cause the amorphization and that pressure assists the process by ensuring the integrity of the specimen. Thermal energy conversion calculations including heat transfer suggest that amorphization is a solid-state process. Such a phenomenon has a significant effect on the ballistic performance of B4C. PMID:27733513
Latt, L Daniel; Glisson, Richard R; Adams, Samuel B; Schuh, Reinhard; Narron, John A; Easley, Mark E
2015-10-01
Transverse tarsal joint arthrodesis is commonly performed in the operative treatment of hindfoot arthritis and acquired flatfoot deformity. While fixation is typically achieved using screws, failure to obtain and maintain joint compression sometimes occurs, potentially leading to nonunion. External fixation is an alternative method of achieving arthrodesis site compression and has the advantage of allowing postoperative compression adjustment when necessary. However, its performance relative to standard screw fixation has not been quantified in this application. We hypothesized that external fixation could provide transverse tarsal joint compression exceeding that possible with screw fixation. Transverse tarsal joint fixation was performed sequentially, first with a circular external fixator and then with compression screws, on 9 fresh-frozen cadaveric legs. The external fixator was attached via abutting rings fixed to the tibia and the hindfoot, with a third anterior ring parallel to the hindfoot ring, using transverse wires and half-pins in the tibial diaphysis, calcaneus, and metatarsals. Screw fixation comprised two 4.3 mm headless compression screws traversing the talonavicular joint and one across the calcaneocuboid joint. Compressive forces generated during incremental fixator foot ring displacement to 20 mm and incremental screw tightening were measured using a custom-fabricated instrumented miniature external fixator spanning the transverse tarsal joint. The maximum compressive force generated by the external fixator averaged 186% of that produced by the screws (range, 104%-391%). Fixator compression surpassed that obtainable with screws at 12 mm of ring displacement and decreased when the tibial ring was detached. No correlation was found between bone density and the compressive force achievable by either fusion method. The compression across the transverse tarsal joint that can be obtained with a circular external fixator including a tibial ring exceeds that which can be obtained with 3 headless compression screws. Screw and external fixator performance did not correlate with bone mineral density. This study supports the use of external fixation as an alternative method of generating compression to help stimulate fusion across the transverse tarsal joints. The findings provide biomechanical evidence to support the use of external fixation as a viable option in transverse tarsal joint fusion cases in which screw fixation has failed or is anticipated to be inadequate due to suboptimal bone quality. © The Author(s) 2015.
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Air blast type coal slurry fuel injector
Phatak, Ramkrishna G.
1986-01-01
A device to atomize and inject a coal slurry into the combustion chamber of an internal combustion engine, eliminating the use of a conventional fuel injection pump/nozzle. The injector uses compressed air to atomize and inject the coal slurry and like fuels. In one embodiment, the breaking and atomization of the fuel is achieved with the help of perforated discs and compressed air. In another embodiment, a cone-shaped aspirator is used to achieve the breaking and atomization of the fuel. The compressed air protects critical bearing areas of the injector.
Air blast type coal slurry fuel injector
Phatak, R.G.
1984-08-31
A device to atomize and inject a coal slurry in the combustion chamber of an internal combustion engine is disclosed which eliminates the use of a conventional fuel injection pump/nozzle. The injector involves the use of compressed air to atomize and inject the coal slurry and like fuels. In one embodiment, the breaking and atomization of the fuel is achieved with the help of perforated discs and compressed air. In another embodiment, a cone shaped aspirator is used to achieve the breaking and atomization of the fuel. The compressed air protects critical bearing areas of the injector.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
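For the second scheme's depth path, run-length encoding followed by an entropy coder exploits the large constant-depth regions typical of captured 3D scenes. A minimal run-length stage is sketched below, with zlib standing in for the Huffman coder used in the paper:

```python
import zlib
import numpy as np

def rle_encode(depth_row: np.ndarray) -> bytes:
    """Run-length encode one row of 8-bit depth values as (value, run) pairs."""
    out = bytearray()
    run_val, run_len = int(depth_row[0]), 0
    for v in depth_row:
        if int(v) == run_val and run_len < 255:
            run_len += 1
        else:
            out += bytes((run_val, run_len))
            run_val, run_len = int(v), 1
    out += bytes((run_val, run_len))       # flush the final run
    return bytes(out)

depth = np.repeat(np.array([12, 12, 40, 40, 40, 0], dtype=np.uint8), 50)
rle = rle_encode(depth)
packed = zlib.compress(rle)                # entropy stage (Huffman in the paper)
print(len(depth), "->", len(rle), "->", len(packed), "bytes")
```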
Zhou, Jun; Wang, Chao
2017-01-01
Intelligent sensing is drastically changing our everyday life, including healthcare, through biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates a tremendous data volume and demands significant wireless transmission power, which poses a big challenge for wearable healthcare sensors that are usually battery powered. Efficient compression engine design to reduce the wireless transmission data rate with ultra-low power consumption is therefore essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption while achieving a large compression ratio (CR) and good quality of the reconstructed signal. A near-threshold design technique is adopted to further reduce the power consumption at the circuit level. Besides, the angle threshold for compression can be adaptively tuned according to the error between the original and reconstructed signals, to address the variation of signal characteristics from person to person or from channel to channel and meet the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average CR of 71.08% and a percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems that sense and collect healthcare data for remote analytics. PMID:28783079
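Window-based turning-angle detection keeps only the samples at which the waveform changes direction sharply, and the decoder rebuilds the signal by interpolating between the retained feature points. A simplified software sketch (the window size and fixed threshold are illustrative; the engine tunes the threshold adaptively and runs in near-threshold hardware):

```python
import numpy as np

def turning_points(sig, threshold=0.02, window=4):
    """Keep samples where the direction change across a window exceeds a
    threshold angle; endpoints are always kept so the signal can be rebuilt."""
    keep = [0]
    for i in range(window, len(sig) - window):
        v1 = np.array([window, sig[i] - sig[i - window]])   # incoming segment
        v2 = np.array([window, sig[i + window] - sig[i]])   # outgoing segment
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        if np.arccos(np.clip(cosang, -1.0, 1.0)) > threshold:
            keep.append(i)                                  # feature point
    keep.append(len(sig) - 1)
    return np.array(keep)

t = np.linspace(0, 1, 360)
ecg_like = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
idx = turning_points(ecg_like)
recon = np.interp(np.arange(len(t)), idx, ecg_like[idx])    # linear rebuild
print(f"CR = {100 * (1 - len(idx) / len(t)):.1f}%")
```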
Li, Yang; Chen, Zhangxing; Xu, Hongyi; ...
2017-01-02
Compression molded SMC, composed of chopped carbon fiber and resin polymer, balances mechanical performance and manufacturing cost and presents a promising solution for vehicle lightweighting. However, the performance of SMC molded parts depends strongly on the compression molding process and the local microstructure, which greatly increases the cost of part-level performance testing and lengthens the design cycle. ICME (Integrated Computational Material Engineering) approaches are thus necessary tools to reduce the number of experiments required during part design and speed up the deployment of SMC materials. As the fundamental stage of the ICME workflow, commercial software packages for SMC compression molding exist yet remain not fully validated, especially for chopped fiber systems. In this study, SMC plaques are prepared through the compression molding process. The corresponding simulation models are built in Autodesk Moldflow with the same part geometry and processing conditions as in the molding tests. The output variables of the compression molding simulations, including press force history and fiber orientation of the part, are compared with experimental data. The influence of the processing conditions on the fiber orientation of the SMC plaque is also discussed. It is found that Autodesk Moldflow can generally achieve a good simulation of the compression molding process for chopped carbon fiber SMC, yet quantitative discrepancies remain between predicted variables and experimental results.
Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm
NASA Astrophysics Data System (ADS)
Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan
2017-12-01
Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employs a zero-padding technique, which also helps to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
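A hybrid transform of this family applies a wavelet decomposition first and then a DCT to further decorrelate a chosen subband. The sketch below, assuming the PyWavelets and SciPy packages and DCT-coding only the approximation (LL) band for illustration, shows the invertible transform pair; the paper's zero-padding quantization replacement is not modeled:

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_dwt_dct(img: np.ndarray, wavelet: str = "haar"):
    """One-level 2D DWT, then a 2D DCT on the approximation (LL) subband."""
    ll, (lh, hl, hh) = pywt.dwt2(img.astype(float), wavelet)
    return dctn(ll, norm="ortho"), (lh, hl, hh)

def inverse_hybrid(ll_dct, details, wavelet: str = "haar"):
    """Undo the DCT on the LL band, then the inverse DWT."""
    ll = idctn(ll_dct, norm="ortho")
    return pywt.idwt2((ll, details), wavelet)

img = np.random.default_rng(3).integers(0, 256, (64, 64)).astype(float)
coeffs, details = hybrid_dwt_dct(img)
recon = inverse_hybrid(coeffs, details)
print(np.max(np.abs(recon - img)))   # ~0: the transform pair is invertible
```

In a complete codec, quantization and entropy coding of the transformed coefficients would follow between the forward and inverse transforms.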
Compressed Air Working in Chennai During Metro Tunnel Construction: Occupational Health Problems
Kulkarni, Ajit C.
2017-01-01
The Chennai metropolis has been growing rapidly, and a need was felt for a metro rail system. Two corridors were planned: Corridor 1, 23 km from Washermanpet to the Airport, of which 14.3 km would be underground; and Corridor 2, 22 km from Chennai Central Railway Station to St. Thomas Mount, of which 9.7 km would be underground. The occupational health centre's role involved selection of miners and assessment of their fitness to work under compressed air, planning and execution of compression and decompression, health monitoring, and treatment of compression-related illnesses. More than thirty-five thousand man-hours of work were carried out under compressed air, at pressures ranging from 1.2 to 1.9 bar absolute. There were only three cases of pain-only (Type I) decompression sickness, all treated with recompression. Vigilant medical supervision, experienced lock operators, and reduced working hours under pressure because of inclement environmental conditions, viz. high temperature and humidity, helped achieve this low incidence. Tunnelling activity will increase in India as more cities opt for underground metro railways. Indian standard IS 4138-1977, "Safety code for working in compressed air", urgently needs to be updated to keep pace with modern working methods. PMID:29618908
Using off-the-shelf lossy compression for wireless home sleep staging.
Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu
2015-05-15
Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting large amounts of polysomnography (PSG) data over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data that has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to automatically classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving with high accuracy (>84%) in classifying sleep stages when using a lossy compression algorithm like SPIHT. As far as we know, our study is the first to focus on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging. Copyright © 2015 Elsevier B.V. All rights reserved.
Aerodynamics inside a rapid compression machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Gaurav; Sung, Chih-Jen
2006-04-15
The aerodynamics inside a rapid compression machine after the end of compression is investigated using planar laser-induced fluorescence (PLIF) of acetone. To study the effect of reaction chamber configuration on the resulting aerodynamics and temperature field, experiments are conducted and compared using a creviced piston and a flat piston under varying conditions. Results show that the flat piston design leads to significant mixing of the cold vortex with the hot core region, which causes alternate hot and cold regions inside the combustion chamber. At higher pressures, the effect of the vortex is reduced. The creviced piston head configuration is demonstrated to result in a drastic reduction of the effect of the vortex. Experimental conditions are also simulated using the Star-CD computational fluid dynamics package. Computed results closely match the experimental observations. Numerical results indicate that with a flat piston design, gas velocity after compression is very high and the core region shrinks quickly due to rapid entrainment of cold gases, whereas for a creviced piston head design, gas velocity after compression is significantly lower and the core region remains unaffected for a long duration. As a consequence, for the flat piston, the adiabatic core assumption can significantly overpredict the maximum temperature after the end of compression. For the creviced piston, the adiabatic core assumption is found to be valid even up to 100 ms after compression. This work therefore experimentally and numerically substantiates the importance of piston head design for achieving a homogeneous core region inside a rapid compression machine.
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...
2017-06-01
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
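SpMM multiplies a sparse matrix by a block of vectors in one sweep, amortizing reads of the matrix across all columns, which is also the motivation for blocked formats such as CSB. SciPy ships no CSB implementation, so the sketch below shows the operation itself in the baseline CSR format the paper compares against:

```python
import numpy as np
from scipy.sparse import random as sparse_random

n, nvec = 10_000, 8                       # matrix size, block of vectors
A = sparse_random(n, n, density=1e-3, format="csr", random_state=0)
A = (A + A.T).tocsr()                     # symmetric, as in CI matrices
X = np.random.default_rng(0).standard_normal((n, nvec))

Y = A @ X      # SpMM: one sweep over A updates all nvec vectors at once
Z = A.T @ X    # transpose SpMM; CSB serves both without a second copy of A
print(Y.shape, np.allclose(Y, Z))         # symmetric A: identical results
```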
Visual pattern image sequence coding
NASA Technical Reports Server (NTRS)
Silsbee, Peter; Bovik, Alan C.; Chen, Dapang
1990-01-01
The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) exceeding those of all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and are 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along the additional, temporal dimension, to achieve unprecedented image sequence coding performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Splitter, Derek A; Szybist, James P
The present study experimentally investigates spark-ignited combustion with 87 AKI E0 gasoline in its neat form and in mid-level alcohol-gasoline blends with 24% vol./vol. iso-butanol-gasoline (IB24) and 30% vol./vol. ethanol-gasoline (E30). A single-cylinder research engine is used with low and high compression ratios of 9.2:1 and 11.85:1, respectively. The engine is equipped with hydraulically actuated valves, laboratory intake air, and is capable of external exhaust gas recirculation (EGR). All fuels are operated to full-load conditions at λ = 1, using both 0% and 15% external cooled EGR. The results demonstrate that higher octane number bio-fuels better utilize higher compression ratios with high stoichiometric torque capability. Specifically, the unique properties of ethanol enabled a doubling of the stoichiometric torque capability with the 11.85:1 compression ratio using E30 as compared to 87 AKI, up to 20 bar IMEPg at λ = 1 (with 15% EGR, 18.5 bar with 0% EGR). EGR was shown to provide thermodynamic advantages with all fuels. The results demonstrate that E30 may further the downsizing and downspeeding of engines by achieving increased low speed torque, even with high compression ratios. The results suggest that at mid-level alcohol-gasoline blends, engine and vehicle optimization can offset the reduced fuel energy content of alcohol-gasoline blends, and likely reduce vehicle fuel consumption and tailpipe CO2 emissions.
Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter
2017-05-15
The past decade has seen the introduction of new technologies that have steadily lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16% compared to 7-Zip at the Ultra setting. A Windows executable version can be downloaded at https://github.com/tparidae/AFresh . tom.paridaens@ugent.be. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Widefield compressive multiphoton microscopy.
Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A
2018-06-15
A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.
Wave energy devices with compressible volumes.
Kurniawan, Adi; Greaves, Deborah; Chaplin, John
2014-12-08
We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.
Achieving high aspect ratio wrinkles by modifying material network stress.
Chen, Yu-Cheng; Wang, Yan; McCarthy, Thomas J; Crosby, Alfred J
2017-06-07
Wrinkle aspect ratio, or the amplitude divided by the wavelength, is limited by strain localization transitions when an increasing global compressive stress is applied to synthetic material systems. However, many examples from living organisms show extremely high aspect ratios, such as gut villi and flower petals. We use three experimental approaches to demonstrate that these high aspect ratio structures can be achieved by modifying the network stress in the wrinkle substrate. We modify the wrinkle stress and effectively delay the strain localization transition, such as folding, to larger aspect ratios by using a zero-stress initial wavy substrate, creating a secondary network with post-curing, or using chemical stress relaxation materials. A wrinkle aspect ratio as high as 0.85, almost three times higher than common values for synthetic wrinkles, is achieved, and a quantitative framework is presented to provide understanding of the different strategies and predictions for future investigations.
NASA Technical Reports Server (NTRS)
Stanitz, John D; Sheldrake, Leonard J
1953-01-01
A technique is developed for the application of a channel design method to the design of high-solidity cascades with prescribed velocity distributions as a function of arc length along the blade-element profile. The technique is applied to both incompressible and subsonic compressible, nonviscous, irrotational fluid motion. For compressible flow, the ratio of specific heats is assumed equal to -1.0. An impulse cascade with 90-degree turning was designed for incompressible flow and was tested at the design angle of attack over a range of downstream Mach number from 0.2 to choke flow. To achieve good efficiency, the cascade was designed for prescribed velocities and maximum blade loading according to limitations imposed by considerations of boundary-layer separation.
Megahertz-resolution programmable microwave shaper.
Li, Jilong; Dai, Yitang; Yin, Feifei; Li, Wei; Li, Ming; Chen, Hongwei; Xu, Kun
2018-04-15
A novel microwave shaper is proposed and demonstrated, whose microwave spectral transfer function can be fully programmed with high resolution. We achieve this by bandwidth-compressed mapping of a programmable optical wave-shaper, which has a coarser frequency resolution of tens of gigahertz, to a microwave shaper with a resolution of tens of megahertz. This is based on a novel "bandwidth scaling" technique, which employs bandwidth-stretched electronic-to-optical conversion and bandwidth-compressed optical-to-electronic conversion. We demonstrate the high resolution and full reconfigurability experimentally. Furthermore, we show that the group delay variation can be greatly enlarged after mapping; this is then verified experimentally with an enlargement of 194 times. The resolution improvement and group delay magnification significantly distinguish our proposal from previous optics-to-microwave spectrum mapping.
Influence of compression parameters on mechanical behavior of mozzarella cheese.
Fogaça, Davi Novaes Ladeia; da Silva, William Soares; Rodrigues, Luciano Brito
2017-10-01
Studies on the interaction between direction and degree of compression in the Texture Profile Analysis (TPA) of cheeses are limited. For this reason, the present study aimed to evaluate the mechanical properties of Mozzarella cheese by TPA at different compression degrees (65, 75, and 85%) and directions (axes X, Y, and Z). The data obtained were compared in order to identify a possible interaction between the two factors. Compression direction did not affect any mechanical variable; that is, the cheese showed isotropic behavior in TPA. Compression degree had a significant influence (p < 0.05) on TPA responses, except for TPA chewiness (N), which remained constant. Texture profile data were fitted to models to explain the mechanical behavior as a function of the compression degree used in the test. The isotropic behavior observed may result from differences in the production method of Mozzarella cheese, especially the stretching of the cheese mass. Texture Profile Analysis (TPA) is a technique widely used to assess the mechanical properties of food, particularly cheese. The precise choice of the instrumental test configuration is essential for achieving results that represent the material analyzed. The method of manufacturing is another factor that may directly influence the mechanical properties of food. This can be seen, for instance, in stretched-curd cheeses such as Mozzarella. Knowledge of such mechanical properties is highly relevant for food industries, owing to the mechanical resistance required in piling, pressing, manufacture of packages, and food transport, and to the melting features presented by the food at high temperatures in the preparation of several foods, such as pizzas, snacks, sandwiches, and appetizers. © 2016 Wiley Periodicals, Inc.
A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar
Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun
2018-01-01
Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges for low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support real-time 256×13 radar image display with a throughput of 28.2 frames per second. PMID:29621170
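OMP, the baseline the authors accelerate, reconstructs a sparse scene by greedily selecting the dictionary column most correlated with the residual and re-fitting by least squares at each step. A compact reference implementation (the paper's two-stage design adds a coarse localization stage before refinement of this kind):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: pick the column of A most correlated
    with the residual, then re-fit all selected columns by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((32, 128))
A /= np.linalg.norm(A, axis=0)           # unit-norm dictionary columns
x_true = np.zeros(128); x_true[[10, 90]] = [1.0, -0.7]
x_hat = omp(A, A @ x_true, sparsity=2)
print(np.flatnonzero(x_hat))             # expected support: [10, 90]
```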
Compression Therapy: Clinical and Experimental Evidence
2012-01-01
Aim: A review is given of the different tools of compression therapy and their modes of action. Methods: Interface pressure and stiffness of compression devices, alone or in combination, can be measured in vivo. Hemodynamic effects have been demonstrated by measuring venous volume and flow velocity using MRI, duplex ultrasound and radioisotopes, and venous reflux and venous pumping function using plethysmography and phlebodynamometry. Oedema reduction can be measured by limb volumetry. Results: Compression stockings exerting a pressure of ~20 mmHg on the distal leg are able to increase venous blood flow velocity in the supine position and to prevent leg swelling after prolonged sitting and standing. In the upright position, an interface pressure of more than 50 mmHg is needed for intermittent occlusion of incompetent veins and for a reduction of ambulatory venous hypertension during walking. Such high intermittent interface pressure peaks, exerting a "massaging effect", are more readily achieved by short-stretch multilayer bandages than by elastic stockings. Conclusion: Compression is a cornerstone in the management of venous and lymphatic insufficiency. However, this treatment modality is still underestimated and deserves better understanding and improved educational programs, both for patients and medical staff. PMID:23641263
Design and fabrication of a large area freestanding compressive stress SiO2 optical window
NASA Astrophysics Data System (ADS)
Van Toan, Nguyen; Sangu, Suguru; Ono, Takahito
2016-07-01
This paper reports the design and fabrication of a 7.2 mm × 9.6 mm freestanding compressive-stress SiO2 optical window without buckling. An application of the SiO2 optical window, with and without liquid penetration, is demonstrated for an optical modulator, and its optical characteristics are evaluated using an image sensor. Two methods for SiO2 optical window fabrication are presented. The first method is a combination of silicon etching and a thermal oxidation process: silicon capillaries fabricated by deep reactive ion etching (deep RIE) are completely oxidized to form SiO2 capillaries. The large compressive stress of the oxide causes buckling of the optical window, which is reduced by optimizing the design of the device structure. The second method is a magnetron-type RIE, investigated for deep SiO2 etching; it achieves deep SiO2 etching with smooth surfaces, vertical profiles and a high aspect ratio. Additionally, to avoid wrinkling of the optical window, a Peano-curve structure is proposed to achieve a freestanding compressive-stress SiO2 optical window. A 7.2 mm × 9.6 mm optical window without buckling, integrated with an image sensor for an optical modulator, has been successfully fabricated. Qualitative and quantitative evaluations have been performed with and without liquid penetration.
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
An Efficient, Lossless Database for Storing and Transmitting Medical Images
NASA Technical Reports Server (NTRS)
Fenstermacher, Marc J.
1998-01-01
This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The research resulted in the development of three new lossless SRC methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
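The set-redundancy idea can be illustrated with median centering: store one reference image computed from the set and encode each member only as its small residual against that reference. This is a sketch of the principle behind the MARS-style methods, not the exact algorithms, whose details go beyond simple centering:

```python
import zlib
import numpy as np

rng = np.random.default_rng(5)
base = rng.integers(0, 256, (64, 64), dtype=np.int16)
# A set of similar images: shared content plus small per-image variation.
images = [np.clip(base + rng.integers(-3, 4, base.shape), 0, 255)
          for _ in range(8)]

reference = np.median(images, axis=0).astype(np.int16)   # set reference
residuals = [img - reference for img in images]          # small-valued, lossless

raw = sum(len(zlib.compress(img.astype(np.int16).tobytes())) for img in images)
src = len(zlib.compress(reference.tobytes())) + sum(
    len(zlib.compress(r.astype(np.int16).tobytes())) for r in residuals)
print(f"independent: {raw} bytes, set-redundancy: {src} bytes")
```

Each image is recovered exactly as reference + residual, so the scheme stays lossless while the residual stream compresses far better than the originals.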
A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs.
Zheng, Yu; Yang, Yang; Chen, Wu
2017-06-25
In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results demonstrate that the proposed range compression algorithm achieves a significant range resolution improvement in GNSS-SAR images compared to the conventional range compression algorithm.
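Range compression here is a cross-correlation of the reflected channel against the synchronized direct signal, computed efficiently in the frequency domain. A minimal sketch follows, with a random binary code standing in for the GNSS ranging code and the spectrum-equalization step indicated only crudely by a whitening weight:

```python
import numpy as np

rng = np.random.default_rng(6)
code = rng.choice([-1.0, 1.0], size=1024)        # stand-in GNSS ranging code
delay = 200                                      # target range bin
reflected = np.roll(code, delay) + 0.5 * rng.standard_normal(1024)

# Range compression: correlate reflected with the direct signal via FFT.
C = np.fft.fft(reflected) * np.conj(np.fft.fft(code))
W = 1.0 / (np.abs(np.fft.fft(code)) + 1e-6)      # crude spectrum equalization
profile = np.abs(np.fft.ifft(C * W))
print(int(np.argmax(profile)))                   # peak at the delay bin (200)
```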
Longitudinal dynamics of twin electron bunches in the Linac Coherent Light Source
Zhang, Zhen; Ding, Yuantao; Marinelli, Agostino; ...
2015-03-02
The recent development of two-color x-ray free-electron lasers, as well as the successful demonstration of high-gradient witness bunch acceleration in a plasma, have generated strong interest in electron bunch trains, where two or more electron bunches are generated, accelerated and compressed in the same accelerating bucket. In this paper we give a detailed analysis of a twin-bunch technique in a high-energy linac. This method allows the generation of two electron bunches with high peak current and independent control of time delay and energy separation. We find that the wakefields in the accelerator structures play an important role in the twin-bunch compression, and we show through analysis that they can be used to extend the available time delay range. Finally, based on the theoretical model and simulations, we propose several methods to achieve a larger time delay.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves as low a bit rate as the original BTC algorithm.
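For illustration, here is a toy sketch of the embedding step: each BTC mean value carries three secret bits in its LSBs. Plain LSB substitution stands in for the paper's dynamic-programming-optimized bijective mapping, and all names are hypothetical.

```python
# Toy BTC + LSB embedding sketch (plain substitution, not the optimized mapping).
import numpy as np

def btc_block(block):
    """Standard two-level BTC: returns (low mean, high mean, bitplane)."""
    m = block.mean()
    bitplane = block >= m
    high = block[bitplane].mean() if bitplane.any() else m
    low = block[~bitplane].mean() if (~bitplane).any() else m
    return int(low), int(high), bitplane

def embed3(value, bits):
    """Replace the 3 LSBs of an 8-bit mean value with 3 secret bits."""
    return (value & ~0b111) | bits

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
secret = 0b101
for r in range(0, 8, 4):
    for c in range(0, 8, 4):
        low, high, plane = btc_block(img[r:r+4, c:c+4])
        low_s, high_s = embed3(low, secret), embed3(high, secret)
        print((low, high), "->", (low_s, high_s))   # 3 bits hidden per mean
```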
NASA Technical Reports Server (NTRS)
Thesken, John C.; Bowman, Cheryl L.; Arnold, Steven M.
2003-01-01
Successful spaceflight operations require onboard power management systems that reliably achieve mission objectives for a minimal launch weight. Because of their high specific energies and potential for reduced maintenance and logistics, composite flywheels are an attractive alternative to electrochemical batteries. The Rotor Durability Team, which comprises members from the Ohio Aerospace Institute (OAI) and the NASA Glenn Research Center, completed a program of elevated-temperature testing at Glenn's Life Prediction Branch's Fatigue Laboratory. The experiments provided unique design data essential to the safety and durability of flywheel energy storage systems for the International Space Station and other manned spaceflight applications. Analysis of the experimental data (ref. 1) demonstrated that the compressive stress relaxation of composite flywheel rotor material is significantly greater than suggested by the commonly available tensile stress relaxation data. Durability analysis of compression-preloaded flywheel rotors is required for accurate safe-life predictions for use in the International Space Station.
Milic, Dragan J; Zivic, Sasa S; Bogdanovic, Dragan C; Jovanovic, Milan M; Jankovic, Radmilo J; Milosevic, Zoran D; Stamenkovic, Dragan M; Trenkic, Marija S
2010-03-01
Venous leg ulcers (VLU) have a huge social and economic impact. An estimated 1.5% of European adults will suffer a venous ulcer at some point in their lives. Despite the widespread use of bandaging with high pressure in the treatment of this condition, recurrence rates range between 25% and 70%. Numerous studies have suggested that the compression system should provide sub-bandage pressure values in the range from 35 mm Hg to 45 mm Hg in order to achieve the best possible healing results. An open, randomized, prospective, single-center study was performed in order to determine the healing rates of VLU when treated with different compression systems and different sub-bandage pressure values. One hundred thirty-one patients (72 women, 59 men; mean age, 59 years) with VLU (ulcer surface >3 cm(2); duration >3 months) were randomized into three groups: group A - 42 patients who were treated using an open-toed, elastic, class III compression device knitted in tubular form (Tubulcus, Laboratoires Innothera, Arcueil, France); group B - 46 patients treated with the multi-component bandaging system comprised of Tubulcus and one elastic bandage (15 cm wide and 5 cm long with 200% stretch, Niva, Novi Sad, Serbia); and group C - 43 patients treated with the multi-component bandaging system comprised of Tubulcus and two elastic bandages. Pressure measurements were taken with the Kikuhime device (TT MediTrade, Soro, Denmark) at the B1 measuring point in the supine, sitting, and standing positions under the three different compression systems. The median resting values in the supine and standing positions in the study groups were as follows: group A - 36.2 mm Hg and 43.9 mm Hg; group B - 53.9 mm Hg and 68.2 mm Hg; group C - 74.0 mm Hg and 87.4 mm Hg. The healing rate during the 26-week treatment period was 25% (13/42) in group A, 67.4% (31/46) in group B, and 74.4% (32/43) in group C. The success of compression treatment in group A was strongly associated with a small ulcer surface (<5 cm(2)) and a smaller calf circumference (CC; <38 cm). On the other hand, compliance in group A was good. In groups B and C, compliance was poor in patients with small CC, but the healing rate was high, especially in patients with large ulcers and a large CC (>43 cm). The results obtained in this study indicate that better healing results are achieved with two- or multi-component compression systems than with single-component compression systems, and that the compression system should be determined individually for each patient according to the characteristics of the leg and the CC. The target sub-bandage pressure value (B1 measuring point in the sitting position) of the compression system needed for ulcer healing can be determined according to a simple formula, CC + CC/2.
NASA Astrophysics Data System (ADS)
Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico
2018-04-01
Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs' parallel computation capabilities to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
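The key property the abstract relies on can be shown in a few lines: a circulant matrix is diagonalized by the DFT, so its action (and its inverse) costs one FFT product instead of storing an n-by-n matrix. A minimal numpy sketch follows; GPU libraries such as CuPy expose the same FFT API, so the same pattern maps to the GPU setting the authors target.

```python
# Circulant matrix-vector products via FFT: O(n) memory, O(n log n) time.
import numpy as np

n = 1 << 16                          # a dense n x n matrix would need ~34 GB here
rng = np.random.default_rng(0)
c = rng.standard_normal(n)           # first column defines the circulant matrix C
x = rng.standard_normal(n)

Cf = np.fft.fft(c)                   # precomputed once, stored in O(n) memory
y = np.fft.ifft(Cf * np.fft.fft(x)).real        # equivalent to C @ x
x_back = np.fft.ifft(np.fft.fft(y) / Cf).real   # C^{-1} y by the same trick

print(np.allclose(x, x_back))        # True (when no Fourier coefficient is ~0)
```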
Time-resolved compression of a capsule with a cone to high density for fast-ignition laser fusion
Theobald, W.; Solodov, A. A.; Stoeckl, C.; ...
2014-12-12
The advent of high-intensity lasers enables us to recreate and study the behaviour of matter under the extreme densities and pressures that exist in many astrophysical objects. It may also enable us to develop a power source based on laser-driven nuclear fusion. Achieving such conditions usually requires a target that is highly uniform and spherically symmetric. Here we show that it is possible to generate high densities in a so-called fast-ignition target that consists of a thin shell whose spherical symmetry is interrupted by the inclusion of a metal cone. Using picosecond-time-resolved X-ray radiography, we show that we can achieve areal densities in excess of 300 mg cm^-2 with a nanosecond-duration compression pulse -- the highest areal density ever reported for a cone-in-shell target. Such densities are high enough to stop MeV electrons, which is necessary for igniting the fuel with a subsequent picosecond pulse focused into the resulting plasma.
High compression image and image sequence coding
NASA Technical Reports Server (NTRS)
Kunt, Murat
1989-01-01
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance for medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of the ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform-coefficient adjustment and a quantization parameter (QP) selection process is designed to implement differentiated encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance, achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367
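As a rough illustration of the first step, the sketch below marks CTU-sized blocks as ROI when a crude texture feature (local variance) exceeds a threshold, and assigns per-CTU QP offsets for the hierarchical coding step. The variance feature, 64x64 block size, threshold and offsets are all assumptions for illustration, not the paper's algorithm.

```python
# Texture-driven ROI map aligned to an HEVC-like 64x64 CTU grid.
import numpy as np

def roi_qp_offsets(frame, ctu=64, thresh=200.0):
    """Mark each CTU as ROI if its variance exceeds a threshold;
    return per-CTU QP offsets (negative = more bits for the ROI)."""
    h, w = frame.shape
    offsets = np.zeros((h // ctu, w // ctu), dtype=int)
    for i in range(h // ctu):
        for j in range(w // ctu):
            block = frame[i*ctu:(i+1)*ctu, j*ctu:(j+1)*ctu]
            offsets[i, j] = -4 if block.var() > thresh else 6
    return offsets

rng = np.random.default_rng(2)
frame = np.full((256, 256), 128.0)
frame[64:192, 64:192] += rng.normal(0, 30, (128, 128))  # textured "tissue" region
print(roi_qp_offsets(frame))   # central CTUs get -4, flat background gets +6
```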
Achieving High Resolution Measurements Within Limited Bandwidth Via Sensor Data Compression
2013-06-01
Keywords: MIDAS, high-g accelerometer. The instrumentation boards are a miniaturization of the Multifunctional Instrumentation and Data Acquisition System (MIDAS) designed by ARL and detailed in several technical reports (1). The original MIDAS has a diameter of 1.4 inches and a height of 1.6 inches. This miniaturization for a 30 mm round is ...
Single fraction spine radiosurgery for myeloma epidural spinal cord compression.
Jin, Ryan; Rock, Jack; Jin, Jian-Yue; Janakiraman, Nalini; Kim, Jae Ho; Movsas, Benjamin; Ryu, Samuel
2009-01-01
Radiosurgery delivers highly focused radiation beams to the defined target with high precision and accuracy. It has been demonstrated that spine radiosurgery can be safely used for treatment of spine metastasis with rapid and durable pain control, but without detrimental effects to the spinal cord. This study was carried out to determine the role of single fraction radiosurgery for epidural spinal cord compression due to multiple myeloma. A total of 31 lesions in 24 patients with multiple myeloma, who presented with epidural spinal cord compression, were treated with spine radiosurgery. Single fraction radiation dose of 10-18 Gy (median of 16 Gy) was administered to the involved spine including the epidural or paraspinal tumor. Patients were followed up with clinical exams and imaging studies. Median follow-up was 11.2 months (range 1-55). Primary endpoints of this study were pain control, neurological improvement, and radiographic tumor control. Overall pain control rate was 86%; complete relief in 54%, and partial relief in 32% of the patients. Seven patients presented with neurological deficits. Five patients neurologically improved or became normal after radiosurgery. Complete radiographic response of the epidural tumor was noted in 81% at 3 months after radiosurgery. During the follow-up time, there was no radiographic or neurological progression at the treated spine. The treatment was non-invasive and well tolerated. Single fraction radiosurgery achieved an excellent clinical and radiographic response of myeloma epidural spinal cord compression. Radiosurgery can be a viable treatment option for myeloma epidural compression.
The creation of radiation dominated plasmas using laboratory extreme ultra-violet lasers
NASA Astrophysics Data System (ADS)
Tallents, G. J.; Wilson, S.; West, A.; Aslanyan, V.; Lolley, J.; Rossall, A. K.
2017-06-01
Ionization in experiments where solid targets are irradiated by high-irradiance extreme ultra-violet (EUV) lasers is examined. Free-electron degeneracy effects on ionization in the presence of a high EUV radiation flux are shown to be important. Overlap of the physics of such plasmas with plasma material under compression in indirect inertial fusion is explored. The design of the focusing optics needed to achieve high irradiance (up to 10^14 W cm^-2) using an EUV capillary laser is presented.
NASA Astrophysics Data System (ADS)
Krisnamurti; Soehardjono, A.; Zacoeb, A.; Wibowo, A.
2018-01-01
Earthquake disasters can cause infrastructure damage, so efforts to prevent human casualties are essential. Prevention efforts can be made by improving the mechanical performance of building materials. To achieve high-performance concrete (HPC), Ordinary Portland Cement (OPC) is usually used. However, the most widely available cement types today are Portland Pozzolana Cement (PPC) and Portland Composite Cement (PCC). Therefore, the proportions of materials used in the HPC mix design need to be adjusted to achieve the expected performance. This study aims to develop a concrete mix design method using PPC that fulfils the criteria of HPC. The study refers to the codes/regulations for concrete mixtures that use OPC, based on the results of laboratory testing. This research uses PPC, gravel from the Malang area, Lumajang sand, water, silica fume and a polycarboxylate-copolymer superplasticizer. The analyzed information includes the investigation results of aggregate properties, concrete mix composition, water-binder ratio variation, specimen dimensions, and the compressive strength and elasticity modulus of the specimens. The test results show that the concrete compressive strength reaches values between 25 MPa and 55 MPa. The mix design method that has been developed can simplify the process of concrete mix design using PPC to achieve a desired concrete performance.
A New Approach for Fingerprint Image Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
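A toy analogue of the WSQ pipeline (wavelet transform, scalar quantization, entropy coding) can clarify where the bit allocation matters. The sketch below uses a one-level Haar transform and a zeroth-order entropy estimate in place of the standard's 9/7 filters, 64-subband decomposition and Huffman tables.

```python
# Toy WSQ-style pipeline: wavelet transform -> scalar quantization -> rate estimate.
import numpy as np
from collections import Counter

def haar2d(x):
    """One-level 2D Haar transform: approximation + 3 detail subbands."""
    a = (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[1::2, 0::2] + x[0::2, 1::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[1::2, 0::2] - x[0::2, 1::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[1::2, 0::2] - x[0::2, 1::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

img = np.random.default_rng(3).integers(0, 256, (128, 128)).astype(float)
a, h, v, d = haar2d(img)

q = 8.0                                   # scalar quantization step for detail bands
symbols = np.concatenate([np.round(b / q).ravel() for b in (h, v, d)])

# Zeroth-order entropy as a stand-in for the Huffman coder's achievable rate:
counts = np.array(list(Counter(symbols.tolist()).values()), dtype=float)
p = counts / counts.sum()
print("bits/coefficient ~", -(p * np.log2(p)).sum())
```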
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency-spectrum coefficients by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio are increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of everyday storage and transmission of color images.
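The quantization step the abstract describes can be sketched compactly: build a quantization matrix from a contrast-sensitivity weighting and apply it to the DCT coefficients of each sub-block. The radial CSF model below is a generic stand-in for the three HVS-derived matrices built in the paper; numpy and scipy are assumed available.

```python
# CSF-weighted quantization of one 8x8 DCT block (illustrative CSF model).
import numpy as np
from scipy.fft import dctn, idctn

u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
f = np.hypot(u, v)                   # spatial-frequency index of each coefficient
csf = np.exp(-0.25 * f)              # crude contrast-sensitivity falloff
Q = np.clip(16 / csf, 16, 255)       # lower sensitivity -> coarser quantization step

block = np.random.default_rng(4).integers(0, 256, (8, 8)).astype(float) - 128
coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / Q)                     # these feed the Huffman coder
reconstructed = idctn(quantized * Q, norm="ortho") + 128
print("mean abs error:", np.abs(block + 128 - reconstructed).mean())
```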
An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.
Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim
2015-10-01
In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported; its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.
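The conjugate symmetry that all half-Fourier methods exploit is easy to demonstrate in one dimension: for a real-valued (zero-phase) object, the missing half of k-space is the complex conjugate of the acquired half. The toy below deliberately omits the phase estimation that the proposed L1-norm constraint addresses.

```python
# 1D half-Fourier demo: fill missing k-space by conjugate symmetry.
import numpy as np

n = 256
x = np.zeros(n)
x[100:160] = 1.0                    # real-valued "image" (zero phase)
X = np.fft.fft(x)

half = X.copy()
half[n//2 + 1:] = 0                 # acquire only ~half of k-space

filled = half.copy()
k = np.arange(1, n//2)
filled[n - k] = np.conj(half[k])    # X[n-k] = conj(X[k]) holds for real x

print(np.allclose(np.fft.ifft(filled).real, x, atol=1e-10))  # True
```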
NASA Astrophysics Data System (ADS)
Dassekpo, Jean-Baptiste Mawulé; Zha, Xiaoxiong; Zhan, Jiapeng; Ning, Jiaqian
Geopolymer is an energy-efficient and sustainable material that is currently used in the construction industry as an alternative to Portland cement. As it is a new material, a specific mix design method is essential, and efforts have been made to develop a mix design procedure with the main focus on achieving better compressive strength and economy. In this paper, the sequential addition of synthesis parameters such as fly ash-sand, alkaline liquids, plasticizer and additional water at well-defined time intervals was investigated. A total of four mix procedures were used to study the compressive performance of fly ash-based geopolymer mortar, and the results of each method were analyzed and discussed. Experimental results show that the sequential addition of sodium hydroxide (NaOH), sodium silicate (Na2SiO3) and plasticizer (PL), followed by adding water (WA), considerably increases the compressive strength of the geopolymer-based mortar. These results clearly demonstrate the highly significant influence of the sequential addition of synthesis parameters on the compressive properties of geopolymer materials, and also provide a new mixing method for the preparation of geopolymer paste, mortar and concrete.
NASA Astrophysics Data System (ADS)
Zhang, Mina; Zhou, Xianglin; Zhu, Wuzhi; Li, Jinghao
2018-04-01
A novel refractory CoCrMoNbTi0.4 high-entropy alloy (HEA) was prepared via vacuum arc melting. After annealing at different temperatures, the microstructure evolution, phase stability, and mechanical properties of the alloy were investigated. The alloy was composed of two primary body-centered cubic structures (BCC1 and BCC2) and a small amount of (Co, Cr)2Nb-type Laves phase under the different annealing conditions. The microhardness and compressive strength of the heat-treated alloy were significantly enhanced by the solid-solution strengthening of the BCC phase matrix and the newly formed Laves phase. Notably, the alloy annealed at 1473 K (1200 °C) achieved the maximum hardness and compressive strength values of 959 ± 2 HV0.5 and 1790 MPa, respectively, owing to the enhanced volume fraction of the dispersed Laves phase. In particular, the HEA exhibited promising high-temperature mechanical performance: when heated to an elevated temperature of 1473 K (1200 °C), it retained a compressive strength higher than 580 MPa without fracture at a strain of more than 20 pct. This study suggests that the present refractory HEAs have immense potential for engineering applications as a new class of high-temperature structural materials.
NASA Astrophysics Data System (ADS)
Lin, Zhiting; Wang, Haiyan; Lin, Yunhao; Wang, Wenliang; Li, Guoqiang
2017-11-01
High-performance blue GaN-based light-emitting diodes (LEDs) on Si substrates have been achieved by applying a suitable tensile stress in the underlying n-GaN. It is demonstrated by simulation that tensile stress in the underlying n-GaN alleviates the negative effect of polarization electric fields on the multiple quantum wells, but an excessively large tensile stress severely bends the band profile of the electron blocking layer, resulting in carrier loss and large electric resistance. A medium level of tensile stress, ranging from 4 to 5 GPa, can maximally improve the luminous intensity and decrease the forward voltage of LEDs on Si substrates. The LED with the optimal tensile stress shows the largest simulated luminous intensity and the smallest simulated voltage at 35 A/cm2. Compared to the LEDs with a compressive stress of -3 GPa and a large tensile stress of 8 GPa, the improvement in luminous intensity reaches 102% and 28.34%, respectively. Subsequent experimental results provide evidence of the superiority of applying tensile stress in n-GaN. The experimental light output power of the LED with a tensile stress of 1.03 GPa is 528 mW, a significant improvement of 19.4% at 35 A/cm2 in comparison to the reference LED with a compressive stress of -0.63 GPa. The forward voltage of this LED is 3.08 V, which is smaller than the 3.11 V of the reference LED. This methodology of stress management in underlying GaN-based epitaxial films shows a bright future for achieving high-performance LED devices on Si substrates.
Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.
Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua
2018-03-01
To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
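As background for the shot-coil idea, here is generic SVD-based coil compression: calibration samples are stacked (samples x coils) and the data are projected onto the leading singular vectors. The paper's shot-coil variant additionally folds shot phase information into the coil dimension; that step is not shown, and the synthetic data below are illustrative.

```python
# Generic SVD-based coil compression on synthetic, correlated calibration data.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_coils, n_virtual = 4096, 32, 8

# Synthetic calibration data whose coil dimension has rank ~ n_virtual.
mix = rng.standard_normal((n_virtual, n_coils))
calib = rng.standard_normal((n_samples, n_virtual)) @ mix

_, s, vh = np.linalg.svd(calib, full_matrices=False)
P = vh[:n_virtual].conj().T            # projection: physical -> virtual coils

compressed = calib @ P                 # (samples x 8) instead of (samples x 32)
energy = (s[:n_virtual]**2).sum() / (s**2).sum()
print(f"energy retained: {energy:.4f}")  # ~1.0 for this rank-8 synthetic data
```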
NASA Astrophysics Data System (ADS)
Yaakobi, B.; Boehly, T. R.; Sangster, T. C.; Meyerhofer, D. D.; Remington, B. A.; Allen, P. G.; Pollaine, S. M.; Lorenzana, H. E.; Lorenz, K. T.; Hawreliak, J. A.
2008-06-01
The use of in situ extended x-ray absorption fine structure (EXAFS) for characterizing nanosecond laser-shocked vanadium, titanium, and iron has recently been demonstrated. These measurements are extended to laser-driven, quasi-isentropic compression experiments (ICE). The radiation source (backlighter) for EXAFS in all of these experiments is obtained by imploding a spherical target on the OMEGA laser [T. R. Boehly et al., Rev. Sci. Instrum. 66, 508 (1995)]. Isentropic compression (where the entropy is kept constant) makes it possible to reach high compressions at relatively low temperatures. The absorption spectra are used to determine the temperature and compression in a vanadium sample quasi-isentropically compressed to pressures of up to ~0.75 Mbar. The ability to measure the temperature and compression directly is unique to EXAFS. The drive pressure is calibrated by substituting aluminum for the vanadium and interferometrically measuring the velocity of the back target surface with the velocity interferometer system for any reflector (VISAR). The experimental results obtained by EXAFS and VISAR agree with each other and with the simulations of a hydrodynamic code. The role of a shield to protect the sample from impact heating is studied. It is shown that the shield produces an initial weak shock that is followed by a quasi-isentropic compression at a relatively low temperature. The role of radiation heating from the imploding target as well as from the laser-absorption region is studied. The results show that in laser-driven ICE, as compared with laser-driven shocks, comparable compressions can be achieved at lower temperatures. The EXAFS results show important details not seen in the VISAR results.
ERIC Educational Resources Information Center
Pastore, Raymond S.
2009-01-01
The purpose of this study was to examine the effects of visual representations and time-compressed instruction on learning and learners' perceptions of cognitive load. Time-compressed instruction refers to instruction that has been increased in speed without sacrificing quality. It was anticipated that learners would be able to gain a conceptual…
Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1972-01-01
The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.
Želudevičius, J; Danilevičius, R; Viskontas, K; Rusteika, N; Regelskis, K
2013-03-11
Results of numerical and experimental investigations of a simple fiber CPA system seeded by nearly bandwidth-limited pulses from a picosecond oscillator are presented. We utilized self-phase modulation in a stretcher fiber to broaden the pulse spectrum and the dispersion of the fiber to stretch the pulses in time. During amplification in the ytterbium-doped CCC fiber, gain-shaping and self-phase modulation effects were observed, which improved pulse compression with a bulk diffraction-grating compressor. After compression with spectral filtering, pulses with a duration of 400 fs and an energy as high as 50 µJ were achieved, and the output beam quality was nearly diffraction-limited.
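The mechanism the abstract exploits, SPM-induced spectral broadening that supports a shorter compressed pulse, can be sketched with a simple frequency-domain model. The B-integral value, the grid, and the ideal (all-phase-removing) compressor below are assumptions; fiber dispersion, gain shaping and the grating compressor's higher-order terms are idealized away.

```python
# Toy model: SPM broadens the spectrum, so the transform-limited (ideally
# compressed) pulse becomes shorter than the input picosecond pulse.
import numpy as np

n, dt = 1 << 14, 5e-15                 # 5 fs time grid
t = (np.arange(n) - n // 2) * dt
tau = 1e-12                            # ~1 ps seed pulse
a = np.exp(-t**2 / (2 * tau**2))       # bandwidth-limited field envelope

b_integral = 8.0                       # assumed peak nonlinear phase (rad)
a_spm = a * np.exp(1j * b_integral * np.abs(a)**2)   # SPM in the stretcher fiber

def compressed_fwhm(field):
    """Duration after an ideal compressor that removes all spectral phase."""
    flat = np.fft.ifft(np.abs(np.fft.fft(field)))    # transform-limited pulse
    p = np.abs(np.fft.fftshift(flat))**2
    above = np.where(p >= p.max() / 2)[0]
    return (above[-1] - above[0]) * dt

print("input  FWHM ~ %.2f ps" % (compressed_fwhm(a) * 1e12))
print("output FWHM ~ %.2f ps" % (compressed_fwhm(a_spm) * 1e12))  # shorter
```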
Crystal coating via spray drying to improve powder tabletability.
Vanhoorne, V; Peeters, E; Van Snick, B; Remon, J P; Vervaet, C
2014-11-01
A continuous crystal coating method was developed to improve both flowability and tabletability of powders. The method includes the introduction of solid, dry particles into an atomized spray during spray drying in order to coat and agglomerate individual particles. Paracetamol was used as a model drug as it exhibits poor flowability and high capping tendency upon compaction. The particle size enlargement and flowability were evaluated by the mean median particle size and flow index of the resulting powders. The crystal coating coprocessing method was successful for the production of powders containing 75% paracetamol with excellent tableting properties. However, the extent of agglomeration achieved during coprocessing was limited. Tablets compressed on a rotary tablet press in manual mode showed excellent compression properties without capping tendency. A formulation with 75% paracetamol, 5% PVP and 20% amorphous lactose yielded a tensile strength of 1.9 MPa at a compression pressure of 288 MPa. The friability of tablets compressed at 188 MPa was only 0.6%. The excellent tabletability of this formulation was attributed to the coating of paracetamol crystals with amorphous lactose and PVP through coprocessing and the presence of brittle and plastic components in the formulation. The coprocessing method was also successfully applied for the production of directly compressible lactose showing improved tensile strength and friability in comparison to a spray dried direct compression lactose grade.
NASA Astrophysics Data System (ADS)
Lanez, M.; Oudjit, M. N.; Zenati, A.; Arroudj, K.; Bali, A.
Reactive powder concretes (RPC) are characterized by a particle diameter not exceeding 600 μm and very high compressive and tensile strengths. This paper describes a new generation of micro-concrete with high physicomechanical performance both initially and in the long term. To achieve this, 15% by weight of the Portland cement has been substituted by materials rich in silica (slag and dune sand). The results obtained from the tests carried out on the RPC show that the compressive and tensile strengths increase when the addition is incorporated, thus improving the compactness of the mixtures through filler and pozzolanic effects. With a reduced aggregate phase in the RPC and the abundance of dune sand (from southern Algeria) and slag (an industrial by-product of the blast furnace), the use of RPC will allow Algeria to fulfil economic as well as ecological requirements.
Tumuluru, J. S.; Tabil, L. G.; Song, Y.; ...
2014-10-01
The present study aims to understand the impact of process conditions on the quality attributes of wheat, oat, barley, and canola straw briquettes. Analysis of variance indicated that briquette moisture content, initial density immediately after compaction, and final density after 2 weeks of storage are strong functions of feedstock moisture content and compression pressure, whereas durability rating is influenced by die temperature and feedstock moisture content. Briquettes produced at a low feedstock moisture content of 9% (w.b.) yielded maximum densities >700 kg/m3 for wheat, oat, canola, and barley straws. A lower feedstock moisture content of <10% (w.b.), higher die temperatures of >110 °C and compression pressures of >10 MPa minimized the briquette moisture content and maximized densities and durability rating, based on surface-plot observations. Optimal process conditions indicated that a low feedstock moisture content of about 9% (w.b.), a high die temperature of 120-130 °C, medium-to-large hammer mill screen sizes of about 24 to 31.75 mm, and low to high compression pressures of 7.5 to 12.5 MPa minimized briquette moisture content to <8% (w.b.) and maximized density to >700 kg/m3. A durability rating >90% is achievable at higher die temperatures of >123 °C, lower to medium feedstock moisture contents of 9 to 12% (w.b.), low to high compression pressures of 7.5 to 12.5 MPa, and a large hammer mill screen size of 31.75 mm, except that a lower compression pressure of 7.5 to 8.5 MPa maximized the durability rating for canola and a smaller hammer mill screen size of 19 mm did so for oat.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... systemic risk, portfolio reconciliation should be a proactive process that delivers a consolidated view of... achieved by portfolio compression, in turn, may lessen systemic risk and enhance the overall stability of...
NASA Astrophysics Data System (ADS)
Lach, E.; Redjaïmia, A.; Leitner, H.; Clemens, H.
2006-08-01
Nanometer-sized precipitates are responsible for the high strength of the steel alloys known as maraging steels. The term maraging relates to aging reactions in very low-carbon martensitic steels. Due to precipitation hardening, 0.2% yield stress values of up to 2.4 GPa can be achieved. The class of stainless maraging steels exhibits an excellent combination of very high strength and hardness, ductility and toughness, combined with good corrosion resistance. In many applications, such as crashworthiness or ballistic protection, the materials are loaded at high strain rates, and the most important characteristic of material behavior under dynamic load is the dynamic yield stress. In this work, compression tests were conducted at strain rates on the order of 5 x 10^-3 s^-1 up to 3 x 10^3 s^-1 to study the material's behaviour. Additionally, dynamic compression tests were performed in the temperature range from -40 °C up to 300 °C.
Yu, Hailiang; Yan, Ming; Lu, Cheng; Tieu, Anh Kiet; Li, Huijun; Zhu, Qiang; Godbole, Ajit; Li, Jintao; Su, Lihong; Kong, Charlie
2016-01-01
An increasing number of industrial applications need superstrength steels. It is known that refined grains and nanoscale precipitates can increase strength. The hardest martensitic steel reported to date is C0.8 steel, whose nanohardness can reach 11.9 GPa through incremental interstitial solid solution strengthening. Here we report a nanograined (NG) steel dispersed with nanoscale precipitates which has an extraordinarily high hardness of 19.1 GPa. The NG steel (shock-compressed Armox 500T steel) was obtained under these conditions: a high strain rate of 1.2 μs^-1, a high temperature rise rate of 600 K μs^-1 and a high pressure of 17 GPa. The mean grain size achieved was 39 nm and reinforcing precipitates were indexed in the NG steel. The strength of the NG steel is expected to be ~3950 MPa. The discovery of the NG steel offers a general pathway for designing new advanced steel materials with exceptional hardness and excellent strength. PMID:27892460
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, currently a research hot spot in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as large data storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients need to be specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
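The two fusion rules are straightforward to prototype. The sketch below uses a single-level DWT (PyWavelets, assumed installed) as a lightweight stand-in for NSCT, with energy weighting for the low-frequency band and absolute-maximum selection for the high-frequency bands; the compressed-sensing measurement and TV recovery of the high-frequency coefficients are omitted.

```python
# Wavelet-domain fusion sketch: energy-weighted lows, max-abs highs.
import numpy as np
import pywt

rng = np.random.default_rng(6)
ir = rng.random((128, 128))      # stand-ins for registered infrared /
vis = rng.random((128, 128))     # visible images

(aI, dI), (aV, dV) = pywt.dwt2(ir, "db2"), pywt.dwt2(vis, "db2")

# Low-frequency rule: energy weighting (global energies stand in for regional).
eI, eV = (aI**2).sum(), (aV**2).sum()
a_fused = (eI * aI + eV * aV) / (eI + eV)

# High-frequency rule: absolute-maximum selection per coefficient.
d_fused = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(dI, dV))

fused = pywt.idwt2((a_fused, d_fused), "db2")
print(fused.shape)               # (128, 128)
```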
Tiny videos: a large data set for nonparametric video retrieval and frame classification.
Karpenko, Alexandre; Aarabi, Parham
2011-03-01
In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation (an exemplar-based clustering algorithm) achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related-video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
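Exemplar-based frame sampling with affinity propagation can be sketched with scikit-learn. The downsampled-frame feature vectors and default similarity settings below are illustrative; the paper's exact similarity measure may differ.

```python
# Pick exemplar frames with affinity propagation (scikit-learn assumed installed).
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(7)
# 120 fake frames drifting through 3 "scenes", flattened to feature vectors.
scenes = rng.random((3, 64))
frames = np.vstack([s + 0.05 * rng.standard_normal((40, 64)) for s in scenes])

ap = AffinityPropagation(random_state=0).fit(frames)
exemplars = ap.cluster_centers_indices_   # frames kept as the "tiny video"
print("kept", len(exemplars), "of", len(frames), "frames:", exemplars)
```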
Unstructured and adaptive mesh generation for high Reynolds number viscous flows
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1991-01-01
A method was developed for generating and adaptively refining a highly stretched unstructured mesh suitable for the computation of high-Reynolds-number viscous flows about arbitrary two-dimensional geometries. The method is based on the Delaunay triangulation of a predetermined set of points and employs a local mapping in order to achieve the high stretching rates required in the boundary-layer and wake regions. The initial mesh-point distribution is determined in a geometry-adaptive manner that clusters points in regions of high curvature and at sharp corners. Adaptive mesh refinement is achieved by adding new points in regions of large flow gradients and locally retriangulating, thus obviating the need for global mesh regeneration. Initial and adapted meshes about complex multi-element airfoil geometries are shown, and compressible flow solutions are computed on these meshes.
NASA Astrophysics Data System (ADS)
Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki
2017-04-01
For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring, and compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration is confined to 100 Hz or less. In addition, the response motions on upper floors of a structure are excited at the natural frequency, resulting in induced shaking within a specific narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme by band-pass filtering the seismic acceleration data. The algorithm executes the discrete Fourier transform to move to the frequency domain and applies band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing of seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32 of its original size. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
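The compress/restore loop the paper evaluates reduces to keeping only the lowest DFT bins. A minimal sketch, assuming a 100 Hz sample rate, a low-frequency structural response and a 1/32 retention ratio:

```python
# Band-limited compressed transmission: keep low DFT bins, restore by inverse FFT.
import numpy as np

fs, n = 100.0, 4096                     # sample rate (Hz), record length
t = np.arange(n) / fs
rng = np.random.default_rng(8)
accel = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(n)  # ~natural freq

spec = np.fft.rfft(accel)
keep = n // 32 // 2                     # 64 complex bins ~ 1/32 of the raw samples
payload = spec[:keep]                   # <- what the sensor node transmits

restored = np.fft.irfft(
    np.concatenate([payload, np.zeros(len(spec) - keep)]), n)
avg_err = np.mean(np.abs(restored - accel)) / np.ptp(accel)
print(f"average error: {avg_err:.3f}")  # small, as in the paper's 0.02-0.06 range
```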
System considerations for efficient communication and storage of MSTI image data
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1994-01-01
The Ballistic Missile Defense Organization has been developing the capability to evaluate one or more high-rate sensor/hardware combinations by incorporating them as payloads on a series of Miniature Seeker Technology Insertion (MSTI) flights. This publication is the final report of a 1993 study to analyze the potential impact of data compression and of related communication system technologies on post-MSTI 3 flights. Lossless compression is considered alone and in conjunction with various spatial editing modes. Additionally, JPEG and fractal algorithms are examined in order to bound the potential gains from the use of lossy compression, but lossless compression is clearly shown to better fit the goals of the MSTI investigations. Lossless compression factors of between 2:1 and 6:1 would provide significant benefits to both on-board mass memory and the downlink. For on-board mass memory, the savings could range from $5 million to $9 million. Such benefits should be possible by direct application of recently developed NASA VLSI microcircuits. It is shown that further downlink enhancements of 2:1 to 3:1 should be feasible through practical modifications to the existing modulation system and incorporation of Reed-Solomon channel coding. The latter enhancement could also be achieved by applying recently developed VLSI microcircuits.
Efficient burst image compression using H.265/HEVC
NASA Astrophysics Data System (ADS)
Roodaki-Lavasani, Hoda; Lainema, Jani
2014-02-01
New imaging use cases are emerging as more powerful camera hardware enters consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of camera operation allows, e.g., selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g., by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality, and that such image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random-access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order, or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random-access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.
Laser shock compression experiments on precompressed water at the "SG-II" laser facility
NASA Astrophysics Data System (ADS)
Shu, Hua; Huang, Xiuguang; Ye, Junjian; Fu, Sizu
2017-06-01
Laser shock compression experiments on precompressed samples offer the possibility of obtaining new Hugoniot data over a significantly broader range of density-temperature phase space than was previously achievable. This technique was developed at the "SG-II" laser facility. Hugoniot data were obtained for water in the 300 GPa pressure range by laser-driven shock compression of samples statically precompressed in diamond-anvil cells.
Hydrodynamically Lubricated Rotary Shaft Seal Having Twist Resistant Geometry
Dietle, Lannie; Gobeli, Jeffrey D.
1993-07-27
A hydrodynamically lubricated, squeeze-packing-type rotary shaft seal with a cross-sectional geometry suitable for pressurized lubricant retention is provided which, in the preferred embodiment, incorporates a protuberant static sealing interface that, compared to prior art, dramatically improves the exclusionary action of the dynamic sealing interface in low-pressure and unpressurized applications by achieving symmetrical deformation of the seal at the static and dynamic sealing interfaces. In abrasive environments, the improved exclusionary action results in a dramatic reduction of seal and shaft wear, compared to prior art, and provides a significant increase in seal life. The invention also increases seal life by making higher levels of initial compression possible, compared to prior art, without compromising hydrodynamic lubrication; this added compression makes the seal more tolerant of compression set, abrasive wear, mechanical misalignment, dynamic runout, and manufacturing tolerances, and also makes hydrodynamic seals with smaller cross-sections more practical. In alternate embodiments, the benefits enumerated above are achieved by cooperative configurations of the seal and the gland which achieve symmetrical deformation of the seal at the static and dynamic sealing interfaces. The seal may also be configured such that predetermined radial compression deforms it to a desired operative configuration, even though symmetrical deformation is lacking.
Flow design and simulation of a gas compression system for hydrogen fusion energy production
NASA Astrophysics Data System (ADS)
Avital, E. J.; Salvatore, E.; Munjiza, A.; Suponitsky, V.; Plant, D.; Laberge, M.
2017-08-01
An innovative gas compression system is proposed and computationally researched to achieve the short response time needed in engineering applications such as hydrogen fusion energy reactors and high-speed hammers. The system consists of a reservoir containing high-pressure gas connected to a straight tube, which in turn is connected to a spherical duct; at the sphere's centre the plasma resides in the case of a fusion reactor. A diaphragm located inside the straight tube separates the reservoir's high-pressure gas from the rest of the plenum. Once the diaphragm is breached, the high-pressure gas enters the plenum to drive pistons located on the inner wall of the spherical duct that will eventually compress the plasma. Quasi-1D and axisymmetric flow formulations are used to design and analyse the flow dynamics. A spike is designed for the interface between the straight tube and the spherical duct to provide a smooth geometry transition for the flow. Flow simulations show high supersonic flow hitting the end of the spherical duct, generating a return shock wave that propagates upstream and raises the pressure above the reservoir pressure, as in the hammer wave problem, potentially giving a temporary pressure boost to the pistons. Good agreement is revealed between the two flow formulations, pointing to the usefulness of the quasi-1D formulation as a rapid solver. Nevertheless, a mild time delay in the axisymmetric flow simulation occurred due to moderate two-dimensionality effects. The compression system settles down in a few milliseconds for a spherical duct of 0.8 m diameter using helium gas and a uniform duct cross-sectional area. Various system geometries are analysed using instantaneous and time-history flow plots.
Feed-forward motor control of ultrafast, ballistic movements.
Kagaya, K; Patek, S N
2016-02-01
To circumvent the limits of muscle, ultrafast movements achieve high power through the use of springs and latches. The time scale of these movements is too short for control through typical neuromuscular mechanisms, thus ultrafast movements are either invariant or controlled prior to movement. We tested whether mantis shrimp (Stomatopoda: Neogonodactylus bredini) vary their ultrafast smashing strikes and, if so, how this control is achieved prior to movement. We collected high-speed images of strike mechanics and electromyograms of the extensor and flexor muscles that control spring compression and latch release. During spring compression, lateral extensor and flexor units were co-activated. The strike initiated several milliseconds after the flexor units ceased, suggesting that flexor activity prevents spring release and determines the timing of strike initiation. We used linear mixed models and Akaike's information criterion to serially evaluate multiple hypotheses for control mechanisms. We found that variation in spring compression and strike angular velocity were statistically explained by spike activity of the extensor muscle. The results show that mantis shrimp can generate kinematically variable strikes and that their kinematics can be changed through adjustments to motor activity prior to the movement, thus supporting an upstream, central-nervous-system-based control of ultrafast movement. Based on these and other findings, we present a shishiodoshi model that illustrates alternative models of control in biological ballistic systems. The discovery of feed-forward control in mantis shrimp sets the stage for the assessment of targets, strategic variation in kinematics and the role of learning in ultrafast animals. © 2016. Published by The Company of Biologists Ltd.
High power green lasers for gamma source
NASA Astrophysics Data System (ADS)
Durand, Magali; Sevillano, Pierre; Alexaline, Olivier; Sangla, Damien; Casanova, Alexis; Aubourg, Adrien; Saci, Abdelhak; Courjaud, Antoine
2018-02-01
A high-intensity gamma source is required for nuclear spectroscopy; it will be produced by the interaction between accelerated electrons and intense laser beams. The two interaction lasers are based on a multi-stage amplification scheme that ends with second-harmonic generation to deliver 200 mJ, 5 ps pulses at 515 nm and 100 Hz. A t-Pulse oscillator with slow and fast feedback loops implemented inside the oscillator cavity enables synchronization to an optical reference. A temporal jitter of 120 fs rms is achieved, integrated from 10 Hz to 10 MHz. A regenerative amplifier, based on Yb:YAG technology and pumped by fiber-coupled QCW laser diodes, then delivers pulses of up to 30 mJ. The 1 nm bandwidth was compressed to 1.5 ps with good spatial quality (M2 of 1.1). This amplifier is integrated in a compact sealed housing (750 x 500 x 150 mm), which allows a pulse-to-pulse stability of 0.1% rms and a long-term stability of 1.9% over 100 hours (in a +/-1 °C environment). The main amplification stage uses a cryocooled Yb:YAG crystal in an active-mirror configuration. The crystal is cooled to 130 K via a compact, low-vibration cryocooler, avoiding any additional phase-noise contribution; 340 mJ was achieved in a six-pass scheme, with a Strehl ratio of 0.9. The trade-off for the gain of a cryogenic amplifier is bandwidth reduction; however, the 1030 nm pulse was compressed to 4.4 ps. As for the regenerative amplifier, a long-term stability of 1.9% over 30 hours was achieved in an environment with +/-1 °C temperature fluctuations. The compression and second-harmonic-generation stages allowed the conversion of 150 mJ of uncompressed infrared beam into 60 mJ at 515 nm.
Understanding Turbulence in Compressing Plasmas and Its Exploitation or Prevention
NASA Astrophysics Data System (ADS)
Davidovits, Seth
Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic, compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a lower bound on the growth of turbulence in molecular clouds. This bound raises questions about the level of dissipation in existing molecular cloud models. Finally, the observations originally motivating the thesis, Z-pinch measurements suggesting dominant turbulent energy, are reexamined by self-consistently accounting for the impact of the turbulence on the spectroscopic analysis. This is found to strengthen the evidence that the multiple observations describe a highly turbulent plasma state.
An image assessment study of image acceptability of the Galileo low gain antenna mission
NASA Technical Reports Server (NTRS)
Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming
1994-01-01
This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study was to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids, and Jupiter's rings. Fifteen volunteer subjects, representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment, participated. Four different experiment-specific quantization tables (q-tables) and various compression step sizes (q-factors) were used to achieve different compression ratios, and the acceptability of the compressed monochromatic astronomical images was evaluated by Galileo SSI mission scientists. Fourteen different images in seven image groups were studied. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed with a different quantization step size. The observers were asked to select the image with the higher overall quality for supporting their visual evaluations of image content, and then rated both images on a one-to-five scale of judged usefulness. Up to four pre-selected types of images were presented, with and without noise, to each subject, based on the results of a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and reduce acceptable compression ratios; and (3) atmospheric images of Jupiter tolerate compression ratios 4 to 5 times higher than those of some clear-surface satellite images.
Real-time transmission of digital video using variable-length coding
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1993-01-01
Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
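To make the idea concrete, here is a minimal Python sketch of the general technique the abstract describes: differential coding of a video-like sample stream followed by a Huffman code built from symbol frequencies. It illustrates the principle only, not the paper's hardware codec; the sample values are invented for the example.

import heapq
from collections import Counter

def huffman_code(symbols):
    # Merge the two least frequent subtrees until one remains; frequent
    # symbols end up near the root and receive short codewords.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

# Differential coding makes the data highly predictable: most differences
# between adjacent samples are small and repeat often.
samples = [100, 101, 101, 102, 102, 102, 103, 103]
diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
code = huffman_code(diffs)
bitstream = "".join(code[d] for d in diffs)

Because the small differences dominate, they receive one- or two-bit codewords and the bitstream is far shorter than fixed-length coding of the raw samples.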
You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick
2013-08-01
In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether 'flashlight' guidance influences CPR performance in a simulated noisy setting. We recruited 30 senior medical students with no previous experience of using flashlight-guided CPR to participate in this prospective, simulation-based, crossover study. The experiment was conducted in a simulated noisy situation using a cardiac arrest model without ventilation. Noise such as patrol car and fire engine sirens was artificially generated. The flashlight guidance device emitted light pulses at the rate of 100 flashes/min. Participants also received instructions to achieve the desired rate of 100 compressions/min. CPR performances were recorded with a Resusci Anne mannequin with a computer skill-reporting system. There were significant differences between the control and flashlight groups in mean compression rate (MCR), MCR/min and visual analogue scale. However, there were no significant differences in correct compression depth, mean compression depth, correct hand position, and correctly released compression. The flashlight group constantly maintained the pace at the desired 100 compressions/min. Furthermore, the flashlight group had a tendency to keep the MCR constant, whereas the control group had a tendency to decrease it after 60 s. Flashlight-guided CPR is particularly advantageous for maintaining a desired MCR during hands-only CPR in noisy environments, where metronome pacing might not be clearly heard.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
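The combinatorial (enumerative) coding idea can be sketched generically: a binary block with k ones among n bits is represented by k plus the block's lexicographic rank among all C(n, k) such blocks, which costs essentially log2 C(n, k) bits, the memoryless entropy. This is an illustration of the principle only, not the C4 implementation.

from math import comb

def enum_rank(bits):
    # Lexicographic rank of this block among all blocks with the same
    # length and number of ones.
    rank, ones_left = 0, sum(bits)
    for i, b in enumerate(bits):
        if b:
            rank += comb(len(bits) - i - 1, ones_left)  # blocks with a 0 here
            ones_left -= 1
    return rank

def enum_unrank(n, k, rank):
    # Inverse mapping: rebuild the block bit by bit from its rank.
    bits, ones_left = [], k
    for i in range(n):
        c = comb(n - i - 1, ones_left)
        if ones_left and rank >= c:
            bits.append(1)
            rank -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits

block = [0, 1, 0, 0, 1, 0, 0, 0]
r = enum_rank(block)
assert enum_unrank(len(block), sum(block), r) == block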
Laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Mamta; Gupta, D. N., E-mail: dngupta@physics.du.ac.in
We present theory and numerical analysis which demonstrate laser-pulse compression in a collisional plasma under the weak-relativistic ponderomotive nonlinearity. The plasma equilibrium density is modified by the ohmic heating of electrons, the collisions, and the weak relativistic-ponderomotive force during the interaction of a laser pulse with the plasma. First, within a one-dimensional analysis, the longitudinal self-compression mechanism is discussed. A three-dimensional (spatiotemporal) analysis of laser pulse propagation is also investigated by coupling the self-compression with the self-focusing. In the regime in which the laser becomes self-focused due to the weak relativistic-ponderomotive nonlinearity, we provide results for enhanced pulse compression. The results show that the matched interplay between self-focusing and self-compression can significantly improve the temporal profile of the compressed pulse. Enhanced pulse compression can be achieved by optimizing and selecting parameters such as the collision frequency, ion temperature, and laser intensity.
Radio astronomy Explorer B antenna aspect processor
NASA Technical Reports Server (NTRS)
Miller, W. H.; Novello, J.; Reeves, C. C.
1972-01-01
The antenna aspect system used on the Radio Astronomy Explorer B spacecraft is described. This system consists of two facsimile cameras, a data encoder, and a data processor. Emphasis is placed on the discussion of the data processor, which contains a data compressor and a source encoder. With this compression scheme a compression ratio of 8 is achieved on a typical line of camera data. These compressed data are then convolutionally encoded.
Electrical conductivity of aluminum hydride AlH3 at high pressure and temperature
NASA Astrophysics Data System (ADS)
Shakhray, Denis; Molodets, Alexander; Fortov, Vladimir; Khrapak, Aleksei
2009-06-01
A study of the electrophysical and thermodynamic properties of alane AlH3 under multi-shock compression has been carried out. The increase in the specific electroconductivity of alane under shock compression up to a pressure of 100 GPa has been measured. High pressures and temperatures were obtained with an explosive device, which accelerates a stainless-steel impactor up to 3 km/s. The impact shock is split into a shock wave reverberating in the alane between two stiff metal anvils. The conductivity of shocked alane increases in the range up to 60-75 GPa, reaching about 30 (Ω cm)^-1; in this region shocked alane behaves as a semiconductor. The conductivity reaches approximately 500 (Ω cm)^-1 at 80-90 GPa. In this region the conductivity is interpreted in the framework of the "dielectric catastrophe" concept, taking into consideration the significant difference between the electronic states of an isolated AlH3 molecule and condensed alane.
Development of the manufacture of billets based on high-strength aluminum alloys
NASA Astrophysics Data System (ADS)
Korostelev, V. F.; Denisov, M. S.; Bol'shakov, A. E.; Van Khieu, Chan
2017-09-01
When pressure is applied during casting as an external impact on the melt, the problems related mainly to mold filling are solved; however, some casting defects cannot be avoided. The experimental results demonstrate that complete compensation of shrinkage under pressure can be achieved by compressing the casting by 8-10% prior to the beginning of solidification and by 2-3% during the transition of the metal from the liquid to the solid state. The procedure based on compressing a liquid metal can be efficiently applied to the manufacture of high-strength aluminum-alloy castings. The selection of engineering parameters is substantiated, and examples of castings made of V95 alloy according to the developed procedure are given. In addition, the article discusses problems related to the design of engineering and special-purpose equipment, software, and control automation.
High throughput dual-wavelength temperature distribution imaging via compressive imaging
NASA Astrophysics Data System (ADS)
Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie
2018-03-01
Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, reducing the need for pixel registration and fine adjustment. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampling recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this structure.
Dong, Shan; Zhang, Anmin; Liu, Kai; ...
2016-02-26
The recent renaissance of black phosphorus (BP) as a two-dimensional (2D) layered material has generated tremendous interest, but its unique structural characters underlying many of its outstanding properties still need elucidation. Here we report Raman measurements that reveal an ultralow-frequency collective compression mode (CCM) in BP, which is unprecedented among similar 2D layered materials. This novel CCM indicates an unusually strong interlayer coupling, and this result is quantitatively supported by a phonon frequency analysis and first-principles calculations. Moreover, the CCM and another branch of low-frequency Raman modes shift sensitively with changing number of layers, allowing an accurate determination of the thickness up to tens of atomic layers, which is considerably higher than previously achieved by using high-frequency Raman modes. Lastly, these findings offer fundamental insights and practical tools for further exploration of BP as a highly promising new 2D semiconductor.
Transparent, flexible, and solid-state supercapacitors based on graphene electrodes
NASA Astrophysics Data System (ADS)
Gao, Y.; Zhou, Y. S.; Xiong, W.; Jiang, L. J.; Mahjouri-samani, M.; Thirugnanam, P.; Huang, X.; Wang, M. M.; Jiang, L.; Lu, Y. F.
2013-07-01
In this study, graphene-based supercapacitors with optical transparency and mechanical flexibility have been achieved using a combination of a poly(vinyl alcohol)/phosphoric acid gel electrolyte and graphene electrodes. An optical transmittance of ~67% in a wavelength range of 500-800 nm and a 92.4% remnant capacitance under a bending angle of 80° have been achieved for the supercapacitors. The decrease in capacitance under bending is ascribed to the buckling of the graphene electrode in compression. The supercapacitors, with high optical transparency, electrochemical stability, and mechanical flexibility, hold promise for transparent and flexible electronics.
Generation of stable subfemtosecond hard x-ray pulses with optimized nonlinear bunch compression
Huang, Senlin; Ding, Yuantao; Huang, Zhirong; ...
2014-12-15
In this paper, we propose a simple scheme that leverages existing x-ray free-electron laser hardware to produce stable single-spike, subfemtosecond x-ray pulses. By optimizing a high-harmonic radio-frequency linearizer to achieve nonlinear compression of a low-charge (20 pC) electron beam, we obtain a sharp current profile possessing a few-femtosecond full width at half maximum temporal duration. A reverse undulator taper is applied to enable lasing only within the current spike, where longitudinal space charge forces induce an electron beam time-energy chirp. Simulations based on the Linac Coherent Light Source parameters show that stable single-spike x-ray pulses with a duration of less than 200 attoseconds can be obtained.
Compression-based aggregation model for medical web services.
Al-Shammary, Dhiah; Khalil, Ibrahim
2010-01-01
Many organizations, such as hospitals, have adopted cloud Web services for their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of cloud Web services. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead, and the massive load on cloud Web services from the large volume of client requests exacerbates the problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed to aggregate medical Web messages and achieve higher message size reduction.
Short pulse laser stretcher-compressor using a single common reflective grating
Erbert, Gaylen V.; Biswal, Subrat; Bartolick, Joseph M.; Stuart, Brent C.; Telford, Steve
2004-05-25
The present invention provides an easily aligned, all-reflective, aberration-free pulse stretcher-compressor in a compact geometry. The stretcher-compressor device is a reflective multi-layer dielectric that can be utilized for high power chirped-pulse amplification material processing applications. A reflective grating element of the device is constructed: 1) to receive a beam for stretching of laser pulses in a beam stretcher beam path and 2) to also receive stretched amplified pulses to be compressed in a compressor beam path through the same (i.e., common) reflective multilayer dielectric diffraction grating. The stretched and compressed pulses are interleaved about the grating element to provide the desired number of passes in each respective beam path in order to achieve the desired results.
Compressed sensing based missing nodes prediction in temporal communication network
NASA Astrophysics Data System (ADS)
Cheng, Guangquan; Ma, Yang; Liu, Zhong; Xie, Fuli
2018-02-01
The reconstruction of complex network topology is of great theoretical and practical significance. Most research so far focuses on the prediction of missing links, for which many mature algorithms have achieved good results, but research on the prediction of missing nodes has only just begun. In this paper, we propose an algorithm for missing node prediction in complex networks. We detect the positions of missing nodes from their neighbor nodes using the theory of compressed sensing, and extend the algorithm to the case of multiple missing nodes using spectral clustering. Experiments on real public network datasets and simulated datasets show that our algorithm can detect the locations of hidden nodes effectively and with high precision.
NASA Astrophysics Data System (ADS)
Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben
2015-03-01
A system for dynamic mapping of broadband ultrasound fields has been designed, with high frame rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compression as low as 10% was also demonstrated.
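A minimal numerical sketch of this sampling scheme follows, under simplifying assumptions: an 8x8 scene that is sparse directly in the pixel basis (rather than the wavelet-sparse photoacoustic fields of the actual system), with a small orthogonal matching pursuit loop standing in for the paper's reconstruction algorithm.

import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
N, M, K = 64, 32, 5                     # pixels, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(0.0, 1.0, K)  # sparse scene

H = hadamard(N).astype(float)           # +/-1 patterns displayed on the DMD
A = H[rng.permutation(N)[:M]]           # scrambled, undersampled row subset
y = A @ x                               # one bucket-detector value per pattern

# Orthogonal matching pursuit: grow the support greedily, refit by
# least squares, and repeat on the residual.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(N)
x_hat[support] = coef
print(np.linalg.norm(x - x_hat))        # typically near zero for this easy case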
Metallization of aluminum hydride AlH3 at high multiple-shock pressures
NASA Astrophysics Data System (ADS)
Molodets, A. M.; Shakhray, D. V.; Khrapak, A. G.; Fortov, V. E.
2009-05-01
A study of the electrophysical and thermodynamic properties of alane AlH3 under multishock compression has been carried out. The increase in the specific electroconductivity of alane under shock compression up to a pressure of 100 GPa has been measured. High pressures and temperatures were obtained with an explosive device, which accelerates a stainless-steel impactor up to 3 km/s. A strong shock wave is generated on impact with a holder containing alane. The impact shock is split into a shock wave reverberating in the alane between two stiff metal anvils. This compression loads the alane sample in a multishock manner up to a pressure of 80-90 GPa, heats the alane to a temperature of about 1500-2000 K, and lasts 1 μs. The conductivity of shocked alane increases in the range up to 60-75 GPa, reaching about 30 (Ω cm)^-1; in this region shocked alane behaves as a semiconductor. The conductivity reaches approximately 500 (Ω cm)^-1 at 80-90 GPa. In this region, the conductivity is interpreted in the framework of the "dielectric catastrophe" concept, taking into consideration the significant differences between the electronic states of an isolated AlH3 molecule and condensed alane.
NASA Astrophysics Data System (ADS)
Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.
2010-03-01
Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through lesion segmentation. The algorithm separates non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.
Compression and Transmission of RF Signals for Telediagnosis
NASA Astrophysics Data System (ADS)
Seko, Toshihiro; Doi, Motonori; Oshiro, Osamu; Chihara, Kunihiro
2000-05-01
Health care is a critical issue nowadays, and much emphasis is given to quality care for all people; telediagnosis has therefore attracted public attention. We propose a new method of ultrasound image transmission for telediagnosis. In conventional methods, video image signals are transmitted. In our method, the RF signals acquired by an ultrasound probe are transmitted; these RF signals can be transformed into color Doppler images or high-resolution images by the receiver. Because a stored-data form is adopted, the proposed system can be realized with existing technology such as hypertext transfer protocol (HTTP) and file transfer protocol (FTP). In this paper, we describe two lossless compression methods specialized for the transmission of RF signals: one uses the characteristics of the RF signal, and the other reduces the amount of data. Measurements were performed in water targeting an iron block and triangular Styrofoam, and an abdominal fat measurement was also performed. Our method achieved a compression rate of 13% with 8-bit data.
Photorealistic scene presentation: virtual video camera
NASA Astrophysics Data System (ADS)
Johnson, Michael J.; Rogers, Joel Clark W.
1994-07-01
This paper presents a low-cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on 'a priori' information. It accesses out-the-window 'snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a 'clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.
Mechanical Behavior and Microstructure Evolution of Bearing Steel 52100 During Warm Compression
NASA Astrophysics Data System (ADS)
Huo, Yuanming; He, Tao; Chen, Shoushuang; Wu, Riming
2018-05-01
High-performance bearing steel requires a fine and homogeneous structure of carbide particles. Direct deformation spheroidizing of bearing steel in a dual-phase zone can contribute to achieving this important structure. In this work, warm compression testing of 52100 bearing steel was performed at temperatures in the range of 650-850°C and at strain rates of 0.1-10.0 s-1. The effect of deformation temperatures on mechanical behavior and microstructure evolution was investigated to determine the warm deformation temperature window. The effect of deformation rates on microstructure evolution and metal flow softening behavior of the warm compression was analyzed and discussed. Experimental results showed that the temperature range from 750°C to 800°C should be regarded as the critical range separating warm and hot deformation. Warm deformation at temperatures in the range of 650-750°C promoted carbide spheroidization, and this was determined to be the warm deformation temperature window. Metal flow softening during the warm deformation was caused by carbide spheroidization.
NASA Astrophysics Data System (ADS)
Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.
2016-02-01
The double random phase encryption (DRPE) method is a well-known all-optical architecture with many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities to attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to the real and imaginary components of the DRPE output plane. The technique consists of an innovative randomized arithmetic coder (RAC) that compresses the DRPE output planes well and at the same time enhances the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique can process video content and is standard compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: the cryptographic properties of DRPE are enhanced while a compression rate of one-sixth is achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.
Electroforming of optical tooling in high-strength Ni-Co alloy
NASA Astrophysics Data System (ADS)
Stein, Berl
2003-05-01
Plastic optics are often mass produced by injection, compression, or injection-compression molding. Optical quality molds can be directly machined in appropriate materials (tool steels, electroless nickel, aluminum, etc.), but much greater cost efficiency can be achieved with electroformed mold inserts. Traditionally, electroforming of optical quality mold inserts has been carried out in nickel, a material much softer than tool steels, which, when hardened to 45-50 HRc, usually exhibit high wear resistance and long service life (hundreds of thousands of impressions per mold). Because of their low hardness (< 20 HRc), nickel molds can produce only tens of thousands of parts before they are scrapped due to wear or accidental damage. This drawback has prevented their wider usage in general plastic and optical mold making. Recently, NiCoForm has developed a proprietary Ni-Co electroforming bath combining the high strength and wear resistance of the alloy with the low stress and high replication fidelity typical of pure nickel electroforming. This paper outlines the approach to electroforming optical quality tooling in low-stress, high-strength Ni-Co alloy and presents several examples of electroformed NiColoy mold inserts.
Lean production of taste improved lipidic sodium benzoate formulations.
Eckert, C; Pein, M; Breitkreutz, J
2014-10-01
Sodium benzoate is a highly soluble orphan drug with an unpleasant taste and a high daily dose. The aim of this study was to develop a child-appropriate, individually dosable, and taste-masked dosage form utilizing lipids in a melt granulation process and tableting. A saliva-resistant coated lipid granule produced by extrusion served as the reference product. A low-melting hard fat was found to be an appropriate lipid binder in high-shear granulation. The resulting granules were compressed into minitablets without the addition of other excipients. Compression into 2 mm minitablets decreased the amount of API dissolved within the first 2 min of dissolution from 33% to 23%. The Euclidean distances calculated from electronic tongue measurements were reduced, indicating an improved taste. The reference product showed a lag time in dissolution, which is desirable for taste masking. Although a lag time was not achieved for the lipidic minitablets, drug release in various food materials was reduced to 2%, suggesting suitable taste masking for oral sodium benzoate administration. Copyright © 2014 Elsevier B.V. All rights reserved.
Loading capacity of zirconia implant supported hybrid ceramic crowns.
Rohr, Nadja; Coldea, Andrea; Zitzmann, Nicola U; Fischer, Jens
2015-12-01
Recently a polymer-infiltrated hybrid ceramic was developed that is characterized by a low elastic modulus and may therefore be considered a potential material for implant-supported single crowns. The purpose of the study was to evaluate the loading capacity of hybrid ceramic single crowns on one-piece zirconia implants with respect to the cement type. Fracture load tests were performed on standardized molar crowns milled from hybrid ceramic or feldspar ceramic, cemented to zirconia implants with either a machined or etched intaglio surface using four different resin composite cements. Flexural strength, elastic modulus, indirect tensile strength, and compressive strength of the cements were measured. Statistical analysis was performed using two-way ANOVA (p=0.05). The hybrid ceramic exhibited significantly higher fracture load values than the feldspar ceramic. Fracture load values and the compressive strength values of the respective cements were correlated. The highest fracture load values were achieved with an adhesive cement (1253±148 N). Etching of the intaglio surface did not improve the fracture load. The loading capacity of hybrid ceramic single crowns on one-piece zirconia implants is superior to that of feldspar ceramic. To achieve maximal loading capacity for permanent cementation of full-ceramic restorations on zirconia implants, self-adhesive or adhesive cements with a high compressive strength should be used. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kovach, L. S.; Zdankiewicz, E. M.
1987-01-01
Vapor compression distillation technology for phase change recovery of potable water from wastewater has evolved as a technically mature approach for use aboard the Space Station. A program to parametrically test an advanced preprototype Vapor Compression Distillation Subsystem (VCDS) was completed during 1985 and 1986. In parallel with parametric testing, a hardware improvement program was initiated to test the feasibility of incorporating several key improvements into the advanced preprototype VCDS following initial parametric tests. Specific areas of improvement included long-life, self-lubricated bearings, a lightweight, highly-efficient compressor, and a long-life magnetic drive. With the exception of the self-lubricated bearings, these improvements are incorporated. The advanced preprototype VCDS was designed to reclaim 95 percent of the available wastewater at a nominal water recovery rate of 1.36 kg/h achieved at a solids concentration of 2.3 percent and 308 K condenser temperature. While this performance was maintained for the initial testing, a 300 percent improvement in water production rate with a corresponding lower specific energy was achieved following incorporation of the improvements. Testing involved the characterization of key VCDS performance factors as a function of recycle loop solids concentration, distillation unit temperature and fluids pump speed. The objective of this effort was to expand the VCDS data base to enable defining optimum performance characteristics for flight hardware development.
Waste Heat Approximation for Understanding Dynamic Compression in Nature and Experiments
NASA Astrophysics Data System (ADS)
Jeanloz, R.
2015-12-01
Energy dissipated during dynamic compression quantifies the residual heat left in a planet due to impact and accretion, as well as the deviation of a loading path from an ideal isentrope. Waste heat ignores the difference between the pressure-volume isentrope and Hugoniot in approximating the dissipated energy as the area between the Rayleigh line and Hugoniot (assumed given by a linear dependence of shock velocity on particle velocity). Strength and phase transformations are ignored: justifiably, when considering sufficiently high dynamic pressures and reversible transformations. Waste heat mis-estimates the dissipated energy by less than 10-20 percent for volume compressions under 30-60 percent. Specific waste heat (energy per mass) reaches 0.2-0.3 c0^2 at impact velocities 2-4 times the zero-pressure bulk sound velocity (c0), its maximum possible value being 0.5 c0^2. As larger impact velocities are implied for typical orbital velocities of Earth-like planets, and c0^2 ≈ 2-30 MJ/kg for rock, the specific waste heat due to accretion corresponds to temperature rises of about 3-15 x 10^3 K for rock: melting accompanies accretion even with only 20-30 percent waste heat retained. Impact sterilization is similarly quantified in terms of waste heat relative to the energy required to vaporize H2O (an impact velocity of 7-8 km/s, or 4.5-5 c0, is sufficient). Waste heat also clarifies the relationship between shock, multi-shock and ramp loading experiments, as well as the effect of (static) pre-compression. Breaking a shock into 2 steps significantly reduces the dissipated energy, with minimum waste heat achieved for two equal volume compressions in succession. Breaking a shock into as few as 4 steps reduces the waste heat to within a few percent of zero, documenting how multi-shock loading approaches an isentrope. Pre-compression, being less dissipative than an initial shock to the same strain, further reduces waste heat. Multi-shock (i.e., high strain-rate) loading of pre-compressed samples may thus offer the closest approach to an isentrope, and therefore the most extreme compression at which matter can be studied at the "warm" temperatures of planetary interiors.
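The geometric picture can be checked numerically. A short sketch follows, assuming illustrative rock-like values for c0, s, and density (not taken from the abstract), which approximates the specific waste heat as the area between the Rayleigh line and the Hugoniot in the pressure-volume plane:

import numpy as np

c0, s, rho0 = 5.0e3, 1.4, 3000.0         # assumed sound speed (m/s), slope, density (kg/m^3)
v0 = 1.0 / rho0                          # initial specific volume

def p_hugoniot(v):
    # Rankine-Hugoniot pressure for the linear Us = c0 + s*up relation.
    eta = 1.0 - v / v0                   # volumetric strain
    return rho0 * c0**2 * eta / (1.0 - s * eta)**2

v1 = 0.7 * v0                            # 30 percent volume compression
v = np.linspace(v1, v0, 2000)
rayleigh = p_hugoniot(v1) * (v0 - v) / (v0 - v1)     # straight loading line
waste = np.sum(rayleigh - p_hugoniot(v)) * (v[1] - v[0])  # J/kg, area between curves
print(f"specific waste heat ~ {waste:.2e} J/kg = {waste / c0**2:.3f} c0^2")

At this modest compression the waste heat is a small fraction of c0^2, consistent with the abstract's statement that the large specific waste heats arise only at impact velocities several times c0.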
Hyperspectral IASI L1C Data Compression.
García-Sobrino, Joaquín; Serra-Sagristà, Joan; Bartrina-Rapesta, Joan
2017-06-16
The Infrared Atmospheric Sounding Interferometer (IASI), implemented on the MetOp satellite series, represents a significant step forward in atmospheric forecasting and weather understanding. The instrument provides infrared soundings of unprecedented accuracy and spectral resolution to derive humidity and atmospheric temperature profiles, as well as some of the chemical components playing a key role in climate monitoring. IASI collects rich spectral information, which results in large amounts of data (about 16 gigabytes per day), so efficient compression techniques are required for both transmission and storage. This study reviews the performance of several state-of-the-art coding standards and techniques for IASI L1C data compression. The discussion embraces lossless, near-lossless, and lossy compression. Several spectral transforms, essential to achieve improved coding performance given the high spectral redundancy inherent to IASI products, are also discussed. Illustrative results are reported for a set of 96 IASI L1C orbits acquired over a full year (4 orbits per month for each of IASI-A and IASI-B from July 2013 to June 2014). Further, this survey provides organized data and facts to assist future research and the atmospheric scientific community.
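The value of a spectral transform can be illustrated with a toy sketch: first-order differencing along the spectral axis, the simplest member of the transform family the survey discusses, sharply lowers the zero-order entropy that the entropy coder sees. The synthetic cube below stands in for IASI radiances; the figures are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
# Synthetic cube: 100 pixels x 512 channels with strong spectral correlation.
cube = 1000 + np.cumsum(rng.integers(-2, 3, size=(100, 512)), axis=1)

def entropy_bits(a):
    # Zero-order (histogram) entropy in bits per sample.
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

resid = np.diff(cube, axis=1)            # residuals; a codec stores band 0 raw
print(entropy_bits(cube), entropy_bits(resid))  # residuals need ~2.3 bits/sample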
Özbilen, Sedat; Liebert, Daniela; Beck, Tilmann; Bram, Martin
2016-03-01
Porous titanium cylinders were produced with a constant amount of temporary space holder (70 vol.%). Different interstitial contents were achieved by varying the starting powders (HDH vs. gas atomized) and manufacturing method (cold compaction without organic binders vs. warm compaction of MIM feedstocks). Interstitial contents (O, C, and N) as a function of manufacturing were measured by chemical analysis. Samples contained 0.34-0.58 wt.% oxygen, which was found to have the greatest effect on mechanical properties. Quasi-static mechanical tests under compression at low strain rate were used for reference and to define parameters for cyclic compression tests. Not unexpectedly, increased oxygen content increased the yield strength of the porous titanium. Cyclic compression fatigue tests were conducted using sinusoidal loading in a servo-hydraulic testing machine. Increased oxygen content was concomitant with embrittlement of the titanium matrix, resulting in significant reduction of compression cycles before failure. For samples with 0.34 wt.% oxygen, R, σ(min) and σ(max) were varied systematically to estimate the fatigue limit (~4 million cycles). Microstructural changes induced by cyclic loading were then characterized by optical microscopy, SEM and EBSD. Copyright © 2015 Elsevier B.V. All rights reserved.
Parallel hyperspectral compressive sensing method on GPU
NASA Astrophysics Data System (ADS)
Bernabé, Sergio; Martín, Gabriel; Nascimento, José M. P.
2015-10-01
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of the Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
NASA Astrophysics Data System (ADS)
Ciaramello, Frank M.; Hemami, Sheila S.
2009-02-01
Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
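One plausible reading of that block-level measurement is sketched below; the function and its details (block size, flat-block handling) are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def temporal_coherence(prev, curr, mb=16):
    # Mean correlation between co-located macroblocks in adjacent frames;
    # low values flag motion-compensation errors that disrupt coherence.
    h, w = curr.shape
    scores = []
    for y in range(0, h - mb + 1, mb):
        for x in range(0, w - mb + 1, mb):
            a = prev[y:y + mb, x:x + mb].ravel().astype(float)
            b = curr[y:y + mb, x:x + mb].ravel().astype(float)
            if a.std() > 0 and b.std() > 0:       # skip flat blocks
                scores.append(float(np.corrcoef(a, b)[0, 1]))
    return float(np.mean(scores)) if scores else 1.0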
Compressive sensing for efficient health monitoring and effective damage detection of structures
NASA Astrophysics Data System (ADS)
Jayawardhana, Madhuka; Zhu, Xinqun; Liyanapathirana, Ranjith; Gunawardana, Upul
2017-02-01
Real world Structural Health Monitoring (SHM) systems consist of sensors on the scale of hundreds, each generating extremely large amounts of data, which raises the issue of the cost associated with data transfer and storage. Sensor energy is a major component of this cost, especially in Wireless Sensor Networks (WSN). Data compression is one of the techniques being explored to mitigate these issues. In contrast to traditional data compression techniques, Compressive Sensing (CS), a very recent development, introduces the means of accurately reproducing a signal by acquiring far fewer samples than Nyquist's theorem requires. CS achieves this by exploiting the sparsity of the signal. By reducing the number of data samples, CS may help reduce the energy consumption and storage costs associated with SHM systems. This paper investigates CS-based data acquisition in SHM, in particular the implications of CS for damage detection and localization. CS is implemented in a simulation environment to compress structural response data from a Reinforced Concrete (RC) structure. Promising results were obtained from the compressed data reconstruction process as well as the subsequent damage identification process using the reconstructed data. A reconstruction accuracy of 99% could be achieved at a Compression Ratio (CR) of 2.48 using the experimental data. Further analysis using the reconstructed signals provided accurate damage detection and localization results with two damage detection algorithms, showing that CS did not compromise the crucial information on structural damage during the compression process.
Li, Jianjun; Zhao, Qun; Wang, Enbo; Zhang, Chuanhui; Wang, Guangbin; Yuan, Quan
2012-05-01
Articular cartilage is routinely subjected to mechanical forces and growth factors. Adipose-derived stem cells (ASCs) are multi-potent adult stem cells and capable of chondrogenesis. In the present study, we investigated the comparative and interactive effects of dynamic compression and insulin-like growth factor-I (IGF-I) on the chondrogenesis of rabbit ASCs in chitosan/gelatin scaffolds. Rabbit ASCs with or without a plasmid overexpressing of human IGF-1 were cultured in chitosan/gelatin scaffolds for 2 days, then subjected to cyclic compression with 5% strain and 1 Hz for 4 h per day for seven consecutive days. Dynamic compression induced chondrogenesis of rabbit ASCs by activating calcium signaling pathways and up-regulating the expression of Sox-9. Dynamic compression plus IGF-1 overexpression up-regulated expression of chondrocyte-specific extracellular matrix genes including type II collagen, Sox-9, and aggrecan with no effect on type X collagen expression. Furthermore, dynamic compression and IGF-1 expression promoted cellular proliferation and the deposition of proteoglycan and collagen. Intracellular calcium ion concentration and peak currents of Ca(2+) ion channels were consistent with chondrocytes. The tissue-engineered cartilage from this process had excellent mechanical properties. When applied together, the effects achieved by the two stimuli (dynamic compression and IGF-1) were greater than those achieved by either stimulus alone. Our results suggest that dynamic compression combined with IGF-1 overexpression might benefit articular cartilage tissue engineering in cartilage regeneration. Copyright © 2011 Wiley Periodicals, Inc.
n-Gram-Based Text Compression.
Nguyen, Vu H; Nguyen, Hien T; Duong, Hieu N; Snasel, Vaclav
2016-01-01
We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, yielding dictionaries with a size of 12 GB in total. To evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.
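A toy sketch of the encoding phase follows; the dictionary contents and the literal fallback are invented for illustration, and the real system maps each matched n-gram to a 2-4 byte index rather than a Python tuple.

def ngram_encode(words, dicts):
    # Slide over the text preferring the longest dictionary match,
    # from five-grams down to bigrams; unknown words pass as literals.
    out, i = [], 0
    while i < len(words):
        for n in range(5, 1, -1):
            gram = tuple(words[i:i + n])
            if len(gram) == n and gram in dicts.get(n, {}):
                out.append((n, dicts[n][gram]))   # (dictionary id, index)
                i += n
                break
        else:
            out.append((1, words[i]))             # literal fallback
            i += 1
    return out

dicts = {3: {("state", "of", "the"): 7}, 2: {("data", "compression"): 0}}
print(ngram_encode("state of the art data compression".split(), dicts))
# -> [(3, 7), (1, 'art'), (2, 0)]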
NASA Astrophysics Data System (ADS)
Henry de Frahan, Marc T.; Varadan, Sreenivas; Johnsen, Eric
2015-01-01
Although the Discontinuous Galerkin (DG) method has seen widespread use for compressible flow problems in a single fluid with constant material properties, it has yet to be implemented in a consistent fashion for compressible multiphase flows with shocks and interfaces. Specifically, it is challenging to design a scheme that meets the following requirements: conservation, high-order accuracy in smooth regions and non-oscillatory behavior at discontinuities (in particular, material interfaces). Following the interface-capturing approach of Abgrall [1], we model flows of multiple fluid components or phases using a single equation of state with variable material properties; discontinuities in these properties correspond to interfaces. To represent compressible phenomena in solids, liquids, and gases, we present our analysis for equations of state belonging to the Mie-Grüneisen family. Within the DG framework, we propose a conservative, high-order accurate, and non-oscillatory limiting procedure, verified with simple multifluid and multiphase problems. We show analytically that two key elements are required to prevent spurious pressure oscillations at interfaces and maintain conservation: (i) the transport equation(s) describing the material properties must be solved in a non-conservative weak form, and (ii) the suitable variables must be limited (density, momentum, pressure, and appropriate properties entering the equation of state), coupled with a consistent reconstruction of the energy. Further, we introduce a physics-based discontinuity sensor to apply limiting in a solution-adaptive fashion. We verify this approach with one- and two-dimensional problems with shocks and interfaces, including high pressure and density ratios, for fluids obeying different equations of state to illustrate the robustness and versatility of the method. The algorithm is implemented on parallel graphics processing units (GPU) to achieve high speedup.
Hugoniot equation of state and dynamic strength of boron carbide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grady, Dennis E.
Boron carbide ceramics have been particularly problematic in attempts to develop adequate constitutive model descriptions for purposes of analysis of dynamic response in the shock and impact environment. The dynamic strength properties of boron carbide ceramic differ uniquely from those of comparable ceramics. Furthermore, boron carbide is suspected, but not definitely shown, to undergo polymorphic phase transformation under shock compression. In the present paper, shock-wave compression measurements conducted over the past 40 years are assessed for the purpose of achieving improved understanding of the dynamic equation of state and strength of boron carbide. In particular, attention is focused on the often ignored Los Alamos National Laboratory (LANL) Hugoniot measurements performed on porous sintered boron carbide ceramic. The LANL data are shown to exhibit two compression anomalies on the shock Hugoniot within the range of 20-60 GPa that may relate to crystallographic structure transitions. More recent molecular dynamics simulations of the compressibility of the boron carbide crystal lattice reveal compression transitions that bear similarities to the LANL Hugoniot results. The same Hugoniot data are complemented with dynamic isentropic compression data for boron carbide extracted from Hugoniot measurements on boron carbide and copper granular mixtures. Other Hugoniot measurements, however, performed on near-full-density boron carbide ceramic differ markedly from the LANL Hugoniot data. These later data exhibit markedly less compressibility and tend not to show comparable anomalies in compressibility, although alternative Hugoniot anomalies are exhibited by the near-full-density data. Experimental uncertainty, Hugoniot strength, and phase transformation physics are all possible explanations for the observed discrepancies. It is reasoned that experimental uncertainty and Hugoniot strength are not likely explanations for the observed differences. The notable mechanistic difference in the processes of shock compression between the LANL data and that of the other studies is the markedly larger inelastic deformation and dissipation experienced in the shock event, brought about by compaction of the substantially larger porosity of the LANL test ceramics. High-pressure diamond anvil cell experiments reveal extensive amorphization, reasoned to be a reversion product of a higher-pressure crystallographic phase, which is a consequence of the application of both high pressure and shear deformation to the boron carbide crystal structure. A dependence of shock-induced high-pressure phase transformation in boron carbide on the extent of shear deformation experienced in the shock process offers a plausible explanation for the differences observed between the LANL Hugoniot data on porous ceramic and other shock data on near-full-density boron carbide.
Lossless Compression of Data into Fixed-Length Packets
NASA Technical Reports Server (NTRS)
Kiely, Aaron B.; Klimesh, Matthew A.
2009-01-01
A computer program effects lossless compression of data samples from a one-dimensional source into fixed-length data packets. The software makes use of adaptive prediction: it exploits the data structure in such a way as to increase the efficiency of compression beyond that otherwise achievable. Adaptive linear filtering is used to predict each sample value based on past sample values. The difference between predicted and actual sample values is encoded using a Golomb code.
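A sketch of the two coding ingredients follows, simplified to previous-sample prediction; the program's adaptive linear filter and fixed-length packet framing are omitted, and the Golomb parameter m is chosen arbitrarily here.

def golomb_encode(n, m):
    # Golomb code: unary quotient, "0" terminator, truncated-binary remainder.
    q, r = divmod(n, m)
    b = m.bit_length() - 1
    cutoff = (1 << (b + 1)) - m
    if r < cutoff:
        tail = format(r, f"0{b}b") if b else ""
    else:
        tail = format(r + cutoff, f"0{b + 1}b")
    return "1" * q + "0" + tail

def zigzag(e):
    # Map signed prediction residuals to nonnegative integers: 0, -1, 1, -2, ...
    return 2 * e if e >= 0 else -2 * e - 1

# The first sample would be stored raw; later samples are predicted by
# the previous one and only the residual is Golomb-coded.
samples = [100, 101, 103, 102, 102]
bits = "".join(golomb_encode(zigzag(b - a), 2) for a, b in zip(samples, samples[1:]))

Small residuals dominate for predictable data, so the unary quotient is usually empty or one bit and each sample costs only a few bits.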
Joint Services Electronics Program Annual Progress Report.
1985-11-01
(one symbol memory) adaptive Huffman codes were performed, and the compression achieved was compared with that of Ziv-Lempel coding, as was expected... Research topics included real-time statistical data processing (T. Kailath) and data compression for computer data structures (J. Gill).
Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets
Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...
2017-08-09
Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm, which we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
Optimum SNR data compression in hardware using an Eigencoil array.
King, Scott B; Varosi, Steve M; Duensing, G Randy
2010-05-01
With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
Data compression of discrete sequence: A tree based approach using dynamic programming
NASA Technical Reports Server (NTRS)
Shivaram, Gurusrasad; Seetharaman, Guna; Rao, T. R. N.
1994-01-01
A dynamic programming based approach for data compression of a 1D sequence is presented. The compression of an input sequence of size N to a smaller size k is achieved by dividing the input sequence into k subsequences and replacing the subsequences by their respective average values. The partitioning of the input sequence is carried out with the aim of minimizing the mean squared error in the reconstructed sequence. The complexity involved in finding the partitions that would result in such an optimal compressed sequence is reduced by using the dynamic programming approach, which is presented.
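A minimal sketch of such a partition search, under the assumption of contiguous segments replaced by their means, is given below; the O(kN²) recurrence over prefix sums is the standard formulation and not necessarily the exact one in the report.

```python
import numpy as np

def optimal_partition(x, k):
    """Split x into k contiguous segments whose means minimize total SSE."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s1 = np.concatenate(([0.0], np.cumsum(x)))       # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))   # prefix sums of squares

    def sse(i, j):              # squared error of x[i:j] about its mean
        s = s1[j] - s1[i]
        return (s2[j] - s2[i]) - s * s / (j - i)

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for parts in range(1, k + 1):
        for j in range(parts, n + 1):
            for i in range(parts - 1, j):
                c = dp[parts - 1, i] + sse(i, j)
                if c < dp[parts, j]:
                    dp[parts, j], cut[parts, j] = c, i
    # backtrack the segment boundaries
    bounds, j = [], n
    for parts in range(k, 0, -1):
        i = cut[parts, j]
        bounds.append((i, j))
        j = i
    return dp[k, n], bounds[::-1]
```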
A test data compression scheme based on irrational numbers stored coding.
Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan
2014-01-01
The testing problem has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.
Novel Data Reduction Based on Statistical Similarity
Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...
2016-07-18
Applications such as scientific simulations and power grid monitoring are generating data so quickly that compression is essential to reduce storage requirements and transmission demands. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures the essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method, named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression method while maintaining the quality of the compressed data. Finally, in these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
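The exchangeability idea can be sketched as follows: blocks whose empirical distributions a two-sample test cannot tell apart are treated as interchangeable, so repeats are stored as short references. The KS test and the greedy dictionary below are illustrative stand-ins, not IDEALEM's actual buffer management.

```python
import numpy as np
from scipy.stats import ks_2samp

def similarity_compress(blocks, alpha=0.05):
    """Replace each block that is statistically indistinguishable from a
    stored representative by that representative's index."""
    dictionary, stream = [], []
    for b in blocks:
        match = next((i for i, rep in enumerate(dictionary)
                      if ks_2samp(b, rep).pvalue > alpha), None)
        if match is None:
            dictionary.append(b)
            stream.append(("new", len(dictionary) - 1))
        else:
            stream.append(("ref", match))   # one small token, not a block
    return dictionary, stream
```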
Wang, Ming; Zhang, Kai; Dai, Xin-Xin; Li, Yin; Guo, Jiang; Liu, Hu; Li, Gen-Hui; Tan, Yan-Jun; Zeng, Jian-Bing; Guo, Zhanhu
2017-08-10
Formation of highly conductive networks is essential for achieving flexible conductive polymer composites (CPCs) with high force sensitivity and high electrical conductivity. In this study, self-segregated structures were constructed in polydimethylsiloxane/multi-wall carbon nanotube (PDMS/MWCNT) nanocomposites, which then exhibited high piezoresistive sensitivity and low percolation threshold without sacrificing their mechanical properties. First, PDMS was cured and pulverized into 40-60 mesh-sized particles (with the size range of 250-425 μm) as an optimum self-segregated phase to improve the subsequent electrical conductivity. Then, the uncured PDMS/MWCNT base together with the curing agent was mixed with the abovementioned PDMS particles, serving as the segregated phase. Finally, the mixture was cured again to form the PDMS/MWCNT nanocomposites with self-segregated structures. The morphological evaluation indicated that MWCNTs were located in the second cured three-dimensional (3D) continuous PDMS phase, resulting in an ultralow percolation threshold of 0.003 vol% MWCNTs. The nanocomposites with self-segregated structures with 0.2 vol% MWCNTs achieved a high electrical conductivity of 0.003 S m⁻¹, whereas only 4.87 × 10⁻¹⁰ S m⁻¹ was achieved for the conventional samples with 0.2 vol% MWCNTs. The gauge factor GF of the self-segregated samples was 7.4-fold that of the conventional samples at 30% compression strain. Furthermore, the self-segregated samples also showed higher compression modulus and strength as compared to the conventional samples. These enhanced properties were attributed to the construction of 3D self-segregated structures, concentrated distribution of MWCNTs, and strong interfacial interaction between the segregated phase and the continuous phase with chemical bonds formed during the second curing process. These self-segregated structures provide a new insight into the fabrication of elastomers with high electrical conductivity and piezoresistive sensitivity for flexible force-sensitive materials.
NASA Environmentally Responsible Aviation's Highly-Loaded Front Block Compressor Demonstration
NASA Technical Reports Server (NTRS)
Celestina, Mark
2016-01-01
This presentation will detail the work done to improve thermal efficiency in the compression process of a gas turbine engine for aircraft applications under NASA's Environmentally Responsible Aviation Project. The talk will present the goals and objectives of the work and show the activity of both Phase 1 and Phase 2 tests and analysis. The summary shows the projected fuel burn savings achieved through system studies.
Universal route to optimal few- to single-cycle pulse generation in hollow-core fiber compressors.
Conejero Jarque, E; San Roman, J; Silva, F; Romero, R; Holgado, W; Gonzalez-Galicia, M A; Alonso, B; Sola, I J; Crespo, H
2018-02-02
Gas-filled hollow-core fiber (HCF) pulse post-compressors generating few- to single-cycle pulses are a key enabling tool for attosecond science and ultrafast spectroscopy. Achieving optimum performance in this regime can be extremely challenging due to the ultra-broad bandwidth of the pulses and the need for an adequate temporal diagnostic. These difficulties have hindered the full exploitation of HCF post-compressors, namely the generation of stable and high-quality near-Fourier-transform-limited pulses. Here we show that, independently of conditions such as the type of gas or the laser system used, there is a universal route to obtain the shortest stable output pulse down to the single-cycle regime. Numerical simulations and experimental measurements performed with the dispersion-scan technique reveal that, under quite general conditions, post-compressed pulses exhibit a residual third-order dispersion intrinsic to optimum nonlinear propagation within the fiber, in agreement with measurements independently performed in several laboratories around the world. The understanding of this effect and its adequate correction, e.g. using simple transparent optical media, enables achieving high-quality post-compressed pulses with only minor changes in existing setups. These optimized sources have impact in many fields of science and technology and should enable new and exciting applications in the few- to single-cycle pulse regime.
Nova Upgrade: A proposed ICF facility to demonstrate ignition and gain, revision 1
NASA Astrophysics Data System (ADS)
1992-07-01
The present objective of the national Inertial Confinement Fusion (ICF) Program is to determine the scientific feasibility of compressing and heating a small mass of mixed deuterium and tritium (DT) to conditions at which fusion occurs and significant energy is released. The potential applications of ICF will be determined by the resulting fusion energy yield (amount of energy produced) and gain (ratio of energy released to energy required to heat and compress the DT fuel). Important defense and civilian applications, including weapons physics, weapons effects simulation, and ultimately the generation of electric power will become possible if yields of 100 to 1,000 MJ and gains exceeding approximately 50 can be achieved. Once ignition and propagating burn producing modest gain (2 to 10) at moderate drive energy (1 to 2 MJ) have been achieved, the extension to high gain (greater than 50) is straightforward. Therefore, the demonstration of ignition and modest gain is the final step in establishing the scientific feasibility of ICF. Lawrence Livermore National Laboratory (LLNL) proposes the Nova Upgrade Facility to achieve this demonstration by the end of the decade. This facility would be constructed within the existing Nova building at LLNL for a total cost of approximately $400 M over the proposed FY 1995-1999 construction period. This report discusses this facility.
NASA Astrophysics Data System (ADS)
Barba, Bin Jeremiah D.; Aranilla, Charito T.; Relleve, Lorna S.; Cruz, Veriza Rita C.; Vista, Jeanina Richelle; Abad, Lucille V.
2018-03-01
Uncontrolled hemorrhage remains a persistent problem, especially in anatomical areas where compression and tourniquets cannot be applied. Hemostatic agents are materials which can achieve control of bleeding in acute, life-threatening traumatic coagulopathy. In this study, we prepared biocompatible hydrogel-based hemostats crosslinked by ionizing radiation. Granules made from carboxymethyl cellulose and a dressing made from kappa carrageenan and polyethylene oxide were characterized by FT-IR, SEM, and gel analysis. Gamma radiation with a dose of 25 kGy was used for the sterilization process. Stability studies indicate that the products remain effective with a shelf life of up to 18 months based on accelerated aging. Both hemostatic agents were demonstrated to be effective in in vitro blood clotting assays, showing a low blood clotting index, high platelet adhesion capacity, and accelerated clotting time. The hemostat granules and dressing were also tested in a femoral artery rat bleeding model, where hemorrhage control was achieved in 90 s without compression and resulted in a 100% survival rate after 7- and 14-day observation periods.
Very high volume fly ash green concrete for applications in India.
Yu, Jing; Mishra, Dhanada K; Wu, Chang; Leung, Christopher Ky
2018-06-01
Safe disposal of fly ash generated by coal-based thermal power plants continues to pose significant challenges around the world, and in India in particular. Green structural concrete with 80% cement replaced by local Chinese fly ash has recently been developed to achieve a target characteristic compressive strength of 45 MPa. Such green concrete mixes are not only cheaper, but also embody lower energy and carbon footprint, compared with conventional mixes. This study aims to adapt such materials, using no less than 80% fly ash as binder, to routine concrete works in countries like India, where a lower target characteristic compressive strength of 30 MPa is commonly used. This is achieved by the simple and practical method of adjusting the water/binder ratio and/or the superplasticiser dosage. The proposed green concrete shows encouraging mechanical properties at 7 days and 28 days, as well as much lower material cost and environmental impact compared with commercial Grade 30 concrete. This technology can play an important role in meeting the huge infrastructure demands in India in a sustainable manner.
Curvelet-based compressive sensing for InSAR raw data
NASA Astrophysics Data System (ADS)
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by the airborne BRADAR (Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable in this framework, whereby the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms, so the data volume can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth in X-band with 2 m resolution was available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed, and, because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to verify efficient compression and recovery quality appropriate for InSAR applications, demonstrating the feasibility of applying compressive sensing.
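The IST recovery named above is, in its generic form, a few lines: gradient steps on the data-fidelity term interleaved with soft thresholding of the coefficients. Phi and lam below are placeholder names; the paper's per-subband tuning is not reproduced.

```python
import numpy as np

def ist(Phi, y, lam, n_iter=200):
    """Iterative soft thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - Phi.T @ (Phi @ x - y) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage
    return x
```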
NASA Astrophysics Data System (ADS)
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange's polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
3D printing of high-strength bioscaffolds for the synergistic treatment of bone cancer
NASA Astrophysics Data System (ADS)
Ma, Hongshi; Li, Tao; Huan, Zhiguang; Zhang, Meng; Yang, Zezheng; Wang, Jinwu; Chang, Jiang; Wu, Chengtie
2018-04-01
The challenges in bone tumor therapy are how to repair the large bone defects induced by surgery and how to kill all possible residual tumor cells. Compared to cancellous bone defect regeneration, cortical bone defect regeneration has a higher demand for bone substitute materials. To the best of our knowledge, there are currently few bifunctional biomaterials with an ultra-high strength for both tumor therapy and cortical bone regeneration. Here, we designed Fe-CaSiO3 composite scaffolds (30CS) via a 3D printing technique. First, the 30CS composite scaffolds possessed a high compressive strength that provided sufficient mechanical support in bone cortical defects; second, synergistic photothermal and ROS therapies achieved an enhanced tumor therapeutic effect in vitro and in vivo. Finally, the presence of CaSiO3 in the composite scaffolds improved the degradation performance, stimulated the proliferation and differentiation of rBMSCs, and further promoted bone formation in vivo. Such 30CS scaffolds with a high compressive strength can function as versatile and efficient biomaterials for the future regeneration of cortical bone defects and the treatment of bone cancer.
Diagnosis of high-temperature implosions using low- and high-opacity Krypton lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yaakobi, B.; Epstein, R.; Hooper, C.F. Jr.
1996-04-01
High-temperature laser target implosions can be achieved by using relatively thin-shell targets, and they can be diagnosed by doping the fuel with krypton and measuring K-shell and L-shell lines. Electron temperatures of up to 5 keV at modest compressed densities (~1-5 g/cm³) are predicted for such experiments, with ion temperatures peaking above 10 keV at the center. It is found that the profiles of low-opacity (optically thin) lines in the expected density range are dominated by the Doppler broadening and can provide a measurement of the ion temperature if spectrometers of spectral resolution Δλ/λ ≥ 1000 are used. For high-opacity lines, obtained with a higher krypton fill pressure, the measurement of the escape factor can yield the ρR of the compressed fuel. At higher densities, Stark broadening of low-opacity lines becomes important and can provide a density measurement, whereas lines of higher opacity can be used to estimate the extent of mixing.
Liu, Xilin; Zhang, Milin; Xiong, Tao; Richardson, Andrew G; Lucas, Timothy H; Chin, Peter S; Etienne-Cummings, Ralph; Tran, Trac D; Van der Spiegel, Jan
2016-07-18
Reliable, multi-channel neural recording is critical to neuroscience research and clinical treatment. However, most hardware development of fully integrated, multi-channel wireless neural recorders to date is still in the proof-of-concept stage. To be ready for practical use, the trade-offs between performance, power consumption, device size, robustness, and compatibility need to be carefully taken into account. This paper presents an optimized wireless compressed sensing neural signal recording system. The system takes advantage of both custom integrated circuits and universally compatible wireless solutions. The proposed system includes an implantable wireless system-on-chip (SoC) and an external wireless relay. The SoC integrates 16-channel low-noise neural amplifiers, programmable filters and gain stages, a SAR ADC, a real-time compressed sensing module, and a near-field wireless power and data transmission link. The external relay integrates a 32-bit low-power microcontroller with a Bluetooth 4.0 wireless module, a programming interface, and an inductive charging unit. The SoC achieves high signal recording quality with minimized power consumption, while reducing the risk of infection from through-skin connectors. The external relay maximizes compatibility and programmability. The proposed compressed sensing module is highly configurable, featuring an SNDR of 9.78 dB at a compression ratio of 8×. The SoC has been fabricated in a 180 nm standard CMOS technology, occupying 2.1 mm × 0.6 mm of silicon area. A pre-implantable system has been assembled to demonstrate the proposed paradigm. The developed system has been successfully used for long-term wireless neural recording in a freely behaving rhesus monkey.
Lu, Yi; Zhang, Chunmei; Zheng, Guanyu; Zhou, Lixiang
2018-04-22
Prior to mechanical dewatering, sludge conditioning is indispensable to reduce the difficulty of sludge treatment and disposal. The effect of bioacidification conditioning driven by Acidithiobacillus ferrooxidans LX5 on the dewatering rate and extent of sewage sludge during the compression dewatering process was investigated in this study. The results showed that the bioacidification of sludge driven by A. ferrooxidans LX5 simultaneously improved both the sludge dewatering rate and extent, which was not attained by physical/chemical conditioning approaches, including ultrasonication, microwave, freezing/thawing, or by adding the chemical conditioner cationic polyacrylamide (CPAM). During the bioacidification of sludge, the decrease in sludge pH induced damage to sludge microbial cell structures, which enhanced the dewatering extent of the sludge, and the added Fe²⁺ and the subsequently bio-oxidized Fe³⁺ effectively flocculated the damaged sludge flocs to improve the sludge dewatering rate. In the compression dewatering process, consisting of filtration and expression stages, high removal of moisture and a short dewatering time were achieved during the filtration stage, and the expression kinetics were also improved because of the high elasticity of the sludge cake and the rapid creeping of the aggregates within the sludge cake. In addition, the usefulness of bioacidification driven by A. ferrooxidans LX5 in improving the compression dewatering of sewage sludge could not be attained by the chemical treatment of sludge through pH modification and Fe³⁺ addition. Therefore, the bioacidification of sludge driven by A. ferrooxidans LX5 is an effective conditioning method to simultaneously improve the rate and extent of compression dewatering of sewage sludge.
Review: Pressure-Induced Densification of Oxide Glasses at the Glass Transition
NASA Astrophysics Data System (ADS)
Kapoor, Saurabh; Wondraczek, Lothar; Smedskjaer, Morten M.
2017-02-01
Densification of oxide glasses at the glass transition offers a novel route to develop bulk glasses with tailored properties for emerging applications. Such densification can be achieved in the technologically relevant pressure regime of up to 1 GPa. However, the present understanding of the composition-structure-property relationships governing these glasses is limited, with key questions, e.g., related to the densification mechanism, remaining largely unanswered. Recent advances in structural characterization tools and high-pressure apparatuses have prompted new research efforts. Here, we review this recent progress and the insights gained in the understanding of the influence of isostatic compression at elevated temperature (so-called hot compression) on the composition-structure-property relationships of oxide glasses. We focus on compression at temperatures at or around the glass transition temperature (Tg), with relevant comparisons made to glasses prepared by pressure quenching and cold compression. We show that permanent densification at 1 GPa sets in at temperatures above 0.7 Tg and the degree of densification increases with increasing compression temperature and time, until attaining an approximately constant value for temperatures above Tg. For glasses compressed at the same temperature/pressure conditions, we demonstrate direct relations between the degree of volume densification and the pressure-induced change in micro-mechanical properties such as hardness, elastic moduli, and extent of the indentation size effect across a variety of glass families. Furthermore, we summarize the results on relaxation behavior of hot compressed glasses. All the pressure-induced changes in the structure and properties exhibit strong composition dependence. The experimental results highlight new opportunities for future investigation and identify research challenges that need to be overcome to advance the field.
Huang, Wenju; Dai, Kun; Zhai, Yue; Liu, Hu; Zhan, Pengfei; Gao, Jiachen; Zheng, Guoqiang; Liu, Chuntai; Shen, Changyu
2017-12-06
Flexible and lightweight carbon nanotube (CNT)/thermoplastic polyurethane (TPU) conductive foam with a novel aligned porous structure was fabricated. The density of the aligned porous material was as low as 0.123 g·cm⁻³. Homogeneous dispersion of CNTs was achieved through the skeleton of the foam, and an ultralow percolation threshold of 0.0023 vol % was obtained. Compared with the disordered foam, the mechanical properties of the aligned foam were enhanced and the piezoresistive stability of the flexible foam was improved significantly. The compression strength of the aligned TPU foam increases by 30.7% at a strain of 50%, and the stress of the aligned foam is 22 times that of the disordered foam at a strain of 90%. Importantly, the resistance variation of the aligned foam shows a fascinating linear characteristic under applied strain up to 77%, which benefits the application of the foam as a pressure sensor. During multiple cyclic compression-release measurements, the aligned conductive CNT/TPU foam exhibits excellent reversibility and reproducibility in terms of resistance. This capability derives from the aligned porous structure composed of ladderlike cells along the orientation direction. Human motion detection, such as walking, jumping, and squatting, was also demonstrated using the flexible pressure sensor. Because of the lightweight, flexibility, high compressibility, excellent reversibility, and reproducibility of the conductive aligned foam, the present study is capable of providing new insights into the fabrication of high-performance pressure sensors.
NASA Astrophysics Data System (ADS)
Aschauer, S.; Majewski, P.; Lutz, G.; Soltau, H.; Holl, P.; Hartmann, R.; Schlosser, D.; Paschen, U.; Weyers, S.; Dreiner, S.; Klusmann, M.; Hauser, J.; Kalok, D.; Bechteler, A.; Heinzinger, K.; Porro, M.; Titze, B.; Strüder, L.
2017-11-01
DEPFET Active Pixel Sensors (APS) have been introduced as focal plane detectors for X-ray astronomy as early as 1996. Fabricated on high resistivity, fully depleted silicon and back-illuminated, they can provide high quantum efficiency and low noise operation even at very high read rates. In 2009 a new type of DEPFET APS, the DSSC (DEPFET Sensor with Signal Compression), was developed, which is dedicated to high-speed X-ray imaging at the European X-ray free electron laser facility (EuXFEL) in Hamburg. In order to resolve the enormous contrasts occurring in Free Electron Laser (FEL) experiments, this new DSSC-DEPFET sensor has the capability of nonlinear amplification, that is, high gain for low intensities in order to obtain single-photon detection capability, and reduced gain for high intensities to achieve high dynamic range for several thousand photons per pixel and frame. We call this property "signal compression". Starting in 2015, we have been fabricating DEPFET sensors in an industrial-scale CMOS foundry, maintaining the outstanding proven DEPFET properties and adding new capabilities due to the industrial-scale CMOS process. We will highlight these additional features and describe the progress achieved so far. In a first attempt on double-sided polished, 725 μm thick, 200 mm high-resistivity float-zone silicon wafers, all relevant device-related properties have been measured, such as leakage current, depletion voltage, transistor characteristics, noise, energy resolution for X-rays, and the nonlinear response. The smaller feature size provided by the new technology allows for an advanced design and significant improvements in device performance. A brief summary of the present status will be given, as well as an outlook on next steps and future perspectives.
Metronome improves compression and ventilation rates during CPR on a manikin in a randomized trial.
Kern, Karl B; Stickney, Ronald E; Gallison, Leanne; Smith, Robert E
2010-02-01
We hypothesized that a unique tock and voice metronome could prevent both suboptimal chest compression rates and hyperventilation. A prospective, randomized, parallel design study involving 34 pairs of paid firefighter/emergency medical technicians (EMTs) performing two-rescuer CPR using a Laerdal SkillReporter Resusci Anne manikin with and without metronome guidance was performed. Each CPR session consisted of 2 min of 30:2 CPR with an unsecured airway, then 4 min of CPR with a secured airway (continuous compressions at 100 min(-1) with 8-10 ventilations/min), repeated after the rescuers switched roles. The metronome provided "tock" prompts for compressions, transition prompts between compressions and ventilations, and a spoken "ventilate" prompt. During CPR with a bag/valve/mask the target compression rate of 90-110 min(-1) was achieved in 5/34 CPR sessions (15%) for the control group and 34/34 sessions (100%) for the metronome group (p<0.001). An excessive ventilation rate was not observed in either the metronome or control group during CPR with a bag/valve/mask. During CPR with a bag/endotracheal tube, the target of both a compression rate of 90-110 min(-1) and a ventilation rate of 8-11 min(-1) was achieved in 3/34 CPR sessions (9%) for the control group and 33/34 sessions (97%) for the metronome group (p<0.001). Metronome use with the secured airway scenario significantly decreased the incidence of over-ventilation (11/34 EMT pairs vs. 0/34 EMT pairs; p<0.001). A unique combination tock and voice prompting metronome was effective at directing correct chest compression and ventilation rates both before and after intubation. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
QualComp: a new lossy compressor for quality scores based on rate distortion theory
2013-01-01
Background: Next Generation Sequencing technologies have revolutionized many fields in biology by reducing the time and cost required for sequencing. As a result, large amounts of sequencing data are being generated. A typical sequencing data file may occupy tens or even hundreds of gigabytes of disk space, prohibitively large for many users. This data consists of both the nucleotide sequences and per-base quality scores that indicate the level of confidence in the readout of these sequences. Quality scores account for about half of the required disk space in the commonly used FASTQ format (before compression), and therefore the compression of the quality scores can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Results: In this paper, we present a new scheme for the lossy compression of the quality scores, to address the problem of storage. Our framework allows the user to specify the rate (bits per quality score) prior to compression, independent of the data to be compressed. Our algorithm can work at any rate, unlike other lossy compression algorithms. We envisage our algorithm as being part of a more general compression scheme that works with the entire FASTQ file. Numerical experiments show that we can achieve a better mean squared error (MSE) for small rates (bits per quality score) than other lossy compression schemes. For the organism PhiX, whose assembled genome is known and assumed to be correct, we show that it is possible to achieve a significant reduction in size with little compromise in performance on downstream applications (e.g., alignment). Conclusions: QualComp is an open source software package, written in C and freely available for download at https://sourceforge.net/projects/qualcomp. PMID:23758828
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are then compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel-values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
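The quantize-with-dither step can be sketched as below: seeded uniform noise is added before the floor operation and subtracted again on decompression, which decorrelates the quantization error from the signal. The single global scale and the omitted Rice-coding stage are simplifications of this sketch, not the FITS convention's exact rules.

```python
import numpy as np

def quantize_with_dither(pixels, scale, seed=0):
    """Float pixels -> scaled integers with subtractive dithering."""
    rng = np.random.default_rng(seed)
    dither = rng.random(pixels.shape)          # reproducible from the seed
    return np.floor(pixels / scale + dither).astype(np.int32)

def dequantize(q, scale, seed=0):
    """Regenerate the same dither from the seed and undo the quantization."""
    rng = np.random.default_rng(seed)
    dither = rng.random(q.shape)
    return (q - dither + 0.5) * scale          # +0.5 centers the error at zero
```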
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
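A toy version of the DPCM core is sketched below with a previous-pixel predictor and a small non-uniform level table; the hardware's actual non-adaptive 2D predictor, quantizer design, and multilevel Huffman stage are not reproduced here.

```python
import numpy as np

# illustrative non-uniform quantizer: finer levels near zero error
LEVELS = np.array([-48.0, -24.0, -12.0, -6.0, -2.0, 0.0,
                   2.0, 6.0, 12.0, 24.0, 48.0])

def dpcm_encode_line(line):
    """Emit quantized prediction errors, tracking the decoder's state."""
    recon_prev = 0.0
    out = []
    for px in line:
        err = px - recon_prev
        q = LEVELS[np.argmin(np.abs(LEVELS - err))]   # nearest level
        out.append(q)
        recon_prev += q       # predict from the *reconstructed* pixel
    return out
```

Tracking the reconstructed value rather than the original keeps encoder and decoder in lockstep, so quantization errors do not accumulate along the line.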
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve compression rates for the lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structural similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
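The per-region prediction step admits a compact sketch: for each region, least-squares weights are fitted so that every pixel is predicted from its causal neighbours, and the residuals go to the entropy coder. The three-neighbour context below is an assumption; the paper's context definition may differ.

```python
import numpy as np

def region_ls_predictor(targets, context):
    """Fit LS prediction weights for one region.

    targets: (n,) pixel values in the region.
    context: (n, c) causal neighbours per pixel (e.g. west, north, north-west).
    """
    w, *_ = np.linalg.lstsq(context, targets, rcond=None)
    residuals = targets - context @ w     # residuals are entropy-coded
    return w, residuals
```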
NASA Technical Reports Server (NTRS)
Cole, G. L.; Neiner, G. H.; Baumbick, R. J.
1973-01-01
Experimental results of terminal shock and restart control system tests of a two-dimensional, twin-duct mixed compression inlet are presented. High-response (110-Hz bandwidth) overboard bypass doors were used, both as the variable to control shock position and as the means of disturbing the inlet airflow. An inherent instability in inlet shock position resulted in noisy feedback signals and thus restricted the terminal shock position control performance that was achieved. Proportional-plus-integral type controllers using either throat exit static pressure or shock position sensor feedback gave adequate low-frequency control. The inlet restart control system kept the terminal shock control loop closed throughout the unstart-restart transient. The capability to restart the inlet was not limited by the inlet instability.
NASA Technical Reports Server (NTRS)
Baumbick, R. J.
1974-01-01
Results of experimental tests conducted on a supersonic, mixed-compression, axisymmetric inlet are presented. The inlet is designed for operation at Mach 2.5 with a turbofan engine (TF-30). The inlet was coupled to either a choked orifice plate or a long duct which had a variable-area choked exit plug. Closed-loop frequency responses of selected diffuser static pressures used in the terminal-shock control system are presented. Results are shown for Mach 2.5 conditions with the inlet coupled to either the choked orifice plate or the long duct. Inlet unstart-restart traces are also presented. High-response inlet bypass doors were used to generate an internal disturbance and also to achieve terminal-shock control.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
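The lossless-by-residual construction generalizes to any lossy coder, as the sketch below shows; zlib stands in for the paper's Huffman stage, and lossy_encode/lossy_decode are placeholders for the wavelet-fractal pair.

```python
import numpy as np
import zlib

def hybrid_lossless_encode(img, lossy_encode, lossy_decode):
    """Store the lossy bitstream plus a losslessly coded residual so the
    decoder can reconstruct the image exactly."""
    stream = lossy_encode(img)
    approx = lossy_decode(stream)
    residual = img.astype(np.int32) - approx.astype(np.int32)
    packed = zlib.compress(residual.astype(np.int16).tobytes())  # int16 covers 8-bit data
    return stream, packed
```

Because the residual of a good lossy coder is small and nearly decorrelated, it entropy-codes cheaply, which is what makes the hybrid competitive with purely lossless coding.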
You, Ilhwan; Yoo, Doo-Yeol; Kim, Soonho; Kim, Min-Jae; Zi, Goangseup
2017-01-01
This study examined the electrical and self-sensing capacities of ultra-high-performance fiber-reinforced concrete (UHPFRC) with and without carbon nanotubes (CNTs). For this, the effects of steel fiber content, orientation, and pore water content on the electrical and piezoresistive properties of UHPFRC without CNTs were first evaluated. Then, the effect of CNT content on the self-sensing capacities of UHPFRC under compression and flexure was investigated. Test results indicated that higher steel fiber content, better fiber orientation, and a higher amount of pore water led to higher electrical conductivity of UHPFRC. The effects of fiber orientation and drying condition on the electrical conductivity became minor when a sufficiently high amount of steel fibers (3% by volume) was added. Including only steel fibers did not impart piezoresistive properties to UHPFRC. The addition of CNTs substantially improved the electrical conductivity of UHPFRC. Under compression, UHPFRC with a CNT content of 0.3% or greater had a self-sensing ability that was activated by the formation of cracks, and better sensing capacity was achieved by including a greater amount of CNTs. Furthermore, the pre-peak flexural behavior of UHPFRC was precisely simulated with a fractional change in resistivity when 0.3% CNTs were incorporated. The pre-cracking self-sensing capacity of UHPFRC with CNTs was more effective under a tensile stress state than under a compressive stress state. PMID:29109388
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built with three EEG/MEG software packages, and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low coherent measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
On heat transfer in squish gaps
NASA Astrophysics Data System (ADS)
Spurk, J. H.
1986-06-01
Attention is given to the heat transfer characteristics of a squish gap in an internal combustion engine cylinder, when the piston is nearing top dead center (TDC) on the compression stroke. If the lateral extent of the gap is much larger than its height, the inviscid flow is similar to the stagnation point flow. Surface temperature and pressure histories during compression and expansion are studied. Surface temperature has a maximum near TDC, then drops and rises again during expansion; higher values are actually achieved during expansion than during compression.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
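In sketch form, compression is a single parity-check product; H below is the parity-check matrix of a linear code chosen to match the source statistics, an illustrative placeholder rather than the paper's construction.

```python
import numpy as np

def syndrome_compress(x, H):
    """Keep only the syndrome s = H x (mod 2) of the binary source block x,
    treating x as the 'error pattern' of the code."""
    return (H @ x) % 2

# Decompression finds the most probable (for a memoryless source, the
# lowest-weight) binary vector with the observed syndrome, i.e. it runs
# the code's coset-leader decoding.
```

For an (n, k) code the syndrome has n - k bits, so the compression rate is (n - k)/n per source bit, which the paper shows can approach the source entropy.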
Collateral Damage to Satellites from an EMP Attack
2010-08-01
The peak dose is computed in an infinite half-plane of silicon, and the resulting in-plane stresses in silicon are reported (Figure VI.23). The report's figures also cover the ratio of the peak in-plane compressive stress to the maximum compressive stress for the SLAR coating, and the maximum in-plane compressive stress in a SLAR coating on DMSP/NOAA satellites subjected to the threat events (Figures VIII.6-VIII.8).
Beam brilliance investigation of high current ion beams at GSI heavy ion accelerator facility.
Adonin, A A; Hollinger, R
2014-02-01
In this work, emittance measurements of the high-current Ta beam provided by the VARIS (Vacuum Arc Ion Source) ion source are presented. Beam brilliance as a function of beam aperture under various extraction conditions is investigated. The influence of electrostatic ion beam compression in the post-acceleration gap on the beam quality is discussed. The use of different extraction systems (single aperture, 7 holes, and 13 holes) to achieve a more peaked beam core is considered. Possible ways to increase the beam brilliance are discussed.
Modular approach to achieving the next-generation X-ray light source
NASA Astrophysics Data System (ADS)
Biedron, S. G.; Milton, S. V.; Freund, H. P.
2001-12-01
A modular approach to the next-generation light source is described. The "modules" include photocathode radio-frequency electron guns and their associated drive-laser systems, linear accelerators, bunch-compression systems, seed laser systems, planar undulators, two-undulator harmonic generation schemes, high-gain harmonic generation systems, nonlinear higher harmonics, and wavelength shifting. These modules will be helpful in distributing the next-generation light source to many more laboratories than the current single-pass, high-gain free-electron laser designs permit, due to monetary and/or physical space constraints.
Divergence Free High Order Filter Methods for Multiscale Non-ideal MHD Flows
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
Low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (∇ · B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence free preservation of the magnetic fields of these filter schemes has been achieved.
Compressed sensing system considerations for ECG and EMG wireless biosensors.
Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J
2012-04-01
Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.
Efficient generation of ultra-intense few-cycle radially polarized laser pulses.
Carbajo, Sergio; Granados, Eduardo; Schimpf, Damian; Sell, Alexander; Hong, Kyung-Han; Moses, Jeffrey; Kärtner, Franz X
2014-04-15
We report on efficient generation of millijoule-level, kilohertz-repetition-rate few-cycle laser pulses with radial polarization by combining a gas-filled hollow-waveguide compression technique with a suitable polarization mode converter. Peak power levels >85 GW are routinely achieved, capable of reaching relativistic intensities >10¹⁹ W/cm² with carrier-envelope-phase control, by employing readily accessible ultrafast high-energy laser technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senthil, K.; Mitra, S.; Sandeep, S., E-mail: sentilk@barc.gov.in
In a multi-gigawatt pulsed power system like KALI-30 GW, insulation coordination is required to achieve high voltages ranging from 0.3 MV to 1 MV. At the same time, optimisation of the insulation parameters is required to minimize the inductance of the system, so that nanosecond output can be achieved. The KALI-30 GW pulse power system utilizes a combination of Perspex, delrin, epoxy, transformer oil, nitrogen/SF₆ gas, and vacuum insulation at its various stages in compressing DC high voltage to a nanosecond pulse. This paper describes the operation and performance of the system from 400 kV to 1030 kV output voltage pulse and the insulation parameters utilized for obtaining a peak 1 MV output.
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
Carbon and Energy Saving Financial Opportunities in the Industrial Compressed Air Sector
NASA Astrophysics Data System (ADS)
Vittorini, Diego; Cipollone, Roberto
2017-08-01
The transition towards a more sustainable energy scenario calls for both medium-to-long and short term interventions, with CO2 reduction and fossil fuel saving as main goals for all the countries in the world. One way to support these efforts is the setting up of intangible markets able to regulate, in the form of purchase and sales quotas, avoided CO2 emissions and unconsumed fossil fuels. As a consequence, upgrading sectors characterized by a high energy impact is currently more than an option, owing to the financial advantage achievable on the aforementioned markets. Being responsible for about 10% of electricity consumption in industry, the compressed air sector is extremely appealing when CO2 emissions and fossil fuel savings are in question. In the paper, once a standard is defined for compressor performance, based on data from the Compressed Air and Gas Institute and PNEUROP, the achievable energy saving is evaluated along with the effect in terms of CO2 emissions: with reference to those contexts in which mature intangible markets are established, the financial benefit from selling the savings on the corresponding markets can be estimated, in terms of both avoided CO2 and fossil fuels not burned. The approach adopted allows the results of the analysis to be extended to any context of interest by applying the appropriate emission factor to the datum on compressor specific consumption.
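The final step, applying an emission factor to the energy saving, is simple arithmetic; the figures below are invented for illustration and are not data from the paper or the CAGI/PNEUROP datasets it references.

```python
# Hypothetical example: annual CO2 avoided by a compressor upgrade.
annual_energy_saved_mwh = 500.0   # assumed electricity saving
grid_emission_factor = 0.4        # assumed t CO2 per MWh of the local mix

co2_avoided_t = annual_energy_saved_mwh * grid_emission_factor
print(f"Avoided emissions: {co2_avoided_t:.0f} t CO2/year")   # -> 200 t CO2/year
```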
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
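The rate-distortion mode decision described above reduces to comparing Lagrangian costs J = D + λR for the intra and inter candidates. The sketch below illustrates that comparison under simplifying assumptions (unit-step quantization, a zeroth-order entropy estimate as the rate proxy, and an illustrative λ); it is not the authors' codec.

```python
import numpy as np

rng = np.random.default_rng(0)
LAMBDA = 0.1   # Lagrange multiplier trading rate against distortion (assumed value)

def bits_estimate(residual, step=1.0):
    # crude rate proxy: zeroth-order entropy of the quantized symbols,
    # standing in for the codec's actual entropy coder
    q = np.round(residual / step).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() * q.size)

def mode_decision(descriptor, reference):
    """Pick intra vs inter coding of one descriptor by Lagrangian cost J = D + lambda*R."""
    costs = {}
    for mode, resid in (("intra", descriptor), ("inter", descriptor - reference)):
        dist = float(((resid - np.round(resid)) ** 2).sum())  # quantization distortion
        costs[mode] = dist + LAMBDA * bits_estimate(resid)
    return min(costs, key=costs.get), costs

desc = rng.standard_normal(128) * 10      # a SIFT-like 128-d descriptor
ref = desc + rng.standard_normal(128)     # well-matched feature in the previous frame
print(mode_decision(desc, ref)[0])        # typically "inter": the residual codes cheaply
```

When the temporal match is good, the inter residual has little spread and therefore a low rate estimate, so the decision naturally falls back to intra coding only when tracking fails.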
GPU Lossless Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled
2012-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA(Trademark). The GPU implementation on a NVIDIA(Trademark) GeForce(Trademark) GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and an acceleration of at least 6 times a software implementation running on a 3.47 GHz single core Intel(Trademark) Xeon(Trademark) processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide in the future a fast and practical real-time solution for airborne and space applications.
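As a rough illustration of the adaptive-filtering idea behind the FL compressor, the sketch below predicts each band from co-located samples in the preceding bands, adapts the weights with a sign-sign LMS rule, and maps residuals to non-negative integers for a downstream entropy coder. It loosely follows the published description rather than the exact FL/CCSDS-123 predictor; the band count, learning rate, and mapping are assumptions, and the per-pixel Python loop is didactic, not optimized.

```python
import numpy as np

def fl_like_residuals(cube, nprev=3, mu=0.01):
    """Adaptive linear prediction across spectral bands (sign-sign LMS update).

    cube: (bands, rows, cols) integer array. Returns mapped residuals.
    A loose sketch of the idea, not the exact FL/CCSDS-123 predictor."""
    bands, rows, cols = cube.shape
    w = np.zeros(nprev)
    out = np.empty_like(cube)
    out[:nprev] = cube[:nprev]            # first bands stored raw
    for b in range(nprev, bands):
        ctx = cube[b - nprev:b].reshape(nprev, -1).astype(float)  # co-located pixels
        x = cube[b].ravel().astype(float)
        pred = np.empty_like(x)
        for i in range(x.size):
            pred[i] = w @ ctx[:, i]
            e = x[i] - pred[i]
            w += mu * np.sign(e) * np.sign(ctx[:, i])   # low-complexity weight update
        r = (x - np.round(pred)).astype(int)
        out[b] = np.where(r >= 0, 2 * r, -2 * r - 1).reshape(rows, cols)  # zig-zag map
    return out

cube = np.random.default_rng(0).integers(0, 4096, size=(8, 16, 16))
mapped = fl_like_residuals(cube)   # feed to a Golomb-style entropy coder next
```

The GPU implementation exploits the fact that, within a band, each pixel's prediction is independent of its neighbours' predictions, so the inner loop parallelizes across pixels.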
Effect of compressive force on PEM fuel cell performance
NASA Astrophysics Data System (ADS)
MacDonald, Colin Stephen
Polymer electrolyte membrane (PEM) fuel cells possess the potential, as a zero-emission power source, to replace the internal combustion engine as the primary option for transportation applications. Though there are a number of obstacles to vast PEM fuel cell commercialization, such as high cost and limited durability, there has been significant progress in the field to achieve this goal. Experimental testing and analysis of fuel cell performance has been an important tool in this advancement. Experimental studies of the PEM fuel cell not only identify unfiltered performance response to manipulation of variables, but also aid in the advancement of fuel cell modelling, by allowing for validation of computational schemes. Compressive force used to contain a fuel cell assembly can play a significant role in how effectively the cell functions, the most obvious example being to ensure proper sealing within the cell. Compression can have a considerable impact on cell performance beyond the sealing aspects. The force can manipulate the ability to deliver reactants and the electrochemical functions of the cell, by altering the layers in the cell susceptible to this force. For these reasons an experimental study was undertaken, presented in this thesis, with specific focus placed on cell compression; in order to study its effect on reactant flow fields and performance response. The goal of the thesis was to develop a consistent and accurate general test procedure for the experimental analysis of a PEM fuel cell in order to analyse the effects of compression on performance. The factors potentially affecting cell performance, which were a function of compression, were identified as: (1) Sealing and surface contact; (2) Pressure drop across the flow channel; (3) Porosity of the GDL. Each factor was analysed independently in order to determine the individual contribution to changes in performance. An optimal degree of compression was identified for the cell configuration in question and the performance gains from the aforementioned compression factors were quantified. The study provided a considerable amount of practical and analytical knowledge in the area of cell compression and shed light on the importance of precision compressive control within the PEM fuel cell.
Shen, Qi; Liu, Zhanqiang; Hua, Yang; Zhao, Jinfu; Lv, Woyun; Mohsan, Aziz Ul Hassan
2018-06-14
Service performance characteristics of components, such as fatigue life, are dramatically influenced by the machined surface and subsurface residual stresses. This paper aims at achieving a better understanding of the influence of cutting edge microgeometry on machined surface residual stresses during orthogonal dry cutting of Inconel 718. Numerical and experimental investigations have been conducted in this research. The cutting edge microgeometry factors of average cutting edge radius S¯, form-factor K, and chamfer were investigated. An increasing trend for the magnitudes of both tensile and compressive residual stresses was observed by using larger S¯ or introducing a chamfer on the cutting edges. The ploughing depth has been predicted based on the stagnation zone. An increase in ploughing depth means that more material is ironed onto the workpiece subsurface, which results in an increase in the compressive residual stress. The thermal loads were the leading factors affecting the surface tensile residual stress. For the unsymmetrical honed cutting edge with K = 2, the friction between tool and workpiece and the tensile residual stress tended to be high, while for the unsymmetrical honed cutting edge with K = 0.5, the high ploughing depth led to a higher compressive residual stress. This paper provides guidance for regulating machining-induced residual stress by edge preparation.
van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip
2015-04-01
In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and the European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are rarely achieved in practice. This study aims to investigate the effects of differing target depth instructions on the compression depth performance of professional and lay-rescuers. 110 professional-rescuers and 110 lay-rescuers were randomized (1:1, 4 groups) to estimate the AHA-R or ERC-R on a paper sheet (given horizontal axis) using a pencil and to perform chest compressions according to AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional-rescuers estimated the distance according to AHA-R in 19/55 (34.5%) and to ERC-R in 20/55 (36.4%) cases (p=0.84). Professional-rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 36/55 (65.4%) cases (p=0.97). Lay-rescuers estimated the distance correctly according to AHA-R in 18/55 (32.7%) and to ERC-R in 20/55 (36.4%) cases (p=0.59). Lay-rescuers yielded correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay-rescuers have severe difficulties in correctly estimating distance on a sheet of paper. Professional-rescuers are able to meet AHA-R and ERC-R targets equally well. Among lay-rescuers, the AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to appropriately perform chest compressions. For teaching lay-rescuers, the AHA-R with no upper limit of compression depth might be preferable.
A compressible Navier-Stokes solver with two-equation and Reynolds stress turbulence closure models
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.
1992-01-01
This report outlines the development of a general purpose aerodynamic solver for compressible turbulent flows. Turbulent closure is achieved using either two equation or Reynolds stress transportation equations. The applicable equation set consists of Favre-averaged conservation equations for the mass, momentum and total energy, and transport equations for the turbulent stresses and turbulent dissipation rate. In order to develop a scheme with good shock capturing capabilities, good accuracy and general geometric capabilities, a multi-block cell centered finite volume approach is used. Viscous fluxes are discretized using a finite volume representation of a central difference operator and the source terms are treated as an integral over the control volume. The methodology is validated by testing the algorithm on both two and three dimensional flows. Both the two equation and Reynolds stress models are used on a two dimensional 10 degree compression ramp at Mach 3, and the two equation model is used on the three dimensional flow over a cone at angle of attack at Mach 3.5. With the development of this algorithm, it is now possible to compute complex, compressible high speed flow fields using both two equation and Reynolds stress turbulent closure models, with the capability of eventually evaluating their predictive performance.
Ultrafast Kα x-ray Thomson scattering from shock compressed lithium hydride
Kritcher, A. L.; Neumayer, P.; Castor, J.; ...
2009-04-13
Spectrally and temporally resolved x-ray Thomson scattering using ultrafast Ti Kα x rays has provided experimental validation for modeling of the compression and heating of shocked matter. The coalescence of two shocks launched into a solid density LiH target by a shaped 6 ns heater beam was observed from rapid heating to temperatures of 2.2 eV, enabling tests of shock timing models. Here, the temperature evolution of the target at various times during shock progression was characterized from the intensity of the elastic scattering component. The observation of scattering from plasmons, electron plasma oscillations, at shock coalescence indicates a transition to a dense metallic plasma state in LiH. From the frequency shift of the measured plasmon feature the electron density was directly determined with high accuracy, providing a material compression of a factor of 3 times solid density. The quality of data achieved in these experiments demonstrates the capability for single shot dynamic characterization of dense shock compressed matter. Here, the conditions probed in this experiment are relevant for the study of the physics of planetary formation and to characterize inertial confinement fusion targets for experiments such as on the National Ignition Facility, Lawrence Livermore National Laboratory.
A JPEG backward-compatible HDR image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2012-10-01
High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate wide spread of HDR usage, the backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms were developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of perceived quality of the tone-mapped LDR images on environmental parameters and image content. Based on the results of subjective tests, it proposes to extend JPEG file format, as the most popular image format, in a backward compatible manner to also deal with HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates efficiency of a simple implementation of this framework when compared to the state of the art HDR image compression.
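The backward-compatible architecture can be pictured as a two-layer codec: an 8-bit tone-mapped base layer that legacy JPEG decoders display as-is, plus a residual layer from which an HDR-aware decoder restores the full range. The numpy sketch below captures only that structure; the global tone-mapping operator and log-ratio residual are illustrative stand-ins, and the actual JPEG coding of the base layer is abstracted as 8-bit quantization.

```python
import numpy as np

rng = np.random.default_rng(0)

def tonemap(hdr):                  # simple global operator (assumed; the paper
    return hdr / (1.0 + hdr)       # evaluates several tone mappers subjectively)

def inverse_tonemap(ldr):
    return ldr / np.maximum(1.0 - ldr, 1e-6)

def encode_layers(hdr, levels=255):
    # 8-bit base layer: stands in for the JPEG-coded, legacy-viewable LDR image
    base = np.round(tonemap(hdr) * levels) / levels
    # residual ratio layer lets the decoder undo tone-mapping + quantization loss
    ratio = hdr / np.maximum(inverse_tonemap(base), 1e-6)
    return base, np.log2(np.maximum(ratio, 1e-6))   # log domain compresses the range

def decode_layers(base, log_ratio):
    return inverse_tonemap(base) * np.exp2(log_ratio)

hdr = np.exp(rng.standard_normal((4, 4)) * 2)       # toy HDR radiance map
base, res = encode_layers(hdr)
assert np.allclose(decode_layers(base, res), hdr, rtol=1e-4)
```

In the proposed extension the residual layer would ride along inside the JPEG file where legacy decoders ignore it, which is what makes the scheme backward compatible.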
Ofori-Kwakye, Kwabena; Mfoafo, Kwadwo Amanor; Kipo, Samuel Lugrie; Kuntworbe, Noble; Boakye-Gyasi, Mariam El
2016-01-01
The study was aimed at developing extended release matrix tablets of poorly water-soluble diclofenac sodium and highly water-soluble metformin hydrochloride by direct compression using cashew gum, xanthan gum and hydroxypropylmethylcellulose (HPMC) as release retardants. The suitability of light grade cashew gum as a direct compression excipient was studied using the SeDeM Diagram Expert System. Thirteen tablet formulations of diclofenac sodium (∼100 mg) and metformin hydrochloride (∼200 mg) were prepared with varying amounts of cashew gum, xanthan gum and HPMC by direct compression. The flow properties of blended powders and the uniformity of weight, crushing strength, friability, swelling index and drug content of compressed tablets were determined. In vitro drug release studies of the matrix tablets were conducted in phosphate buffer (diclofenac: pH 7.4; metformin: pH 6.8) and the kinetics of drug release was determined by fitting the release data to five kinetic models. Cashew gum was found to be suitable for direct compression, having a good compressibility index (ICG) value of 5.173. The diclofenac and metformin matrix tablets produced generally possessed fairly good physical properties. Tablet swelling and drug release in aqueous medium were dependent on the type and amount of release retarding polymer and the solubility of drug used. Extended release of diclofenac (∼24 h) and metformin (∼8-12 h) from the matrix tablets in aqueous medium was achieved using various blends of the polymers. Drug release from diclofenac tablets fitted zero order, first order or Higuchi model while release from metformin tablets followed Higuchi or Hixson-Crowell model. The mechanism of release of the two drugs was mostly through Fickian diffusion and anomalous non-Fickian diffusion. The study has demonstrated the potential of blended hydrophilic polymers in the design and optimization of extended release matrix tablets for soluble and poorly soluble drugs by direct compression.
Pizanis, Antonius; Holstein, Jörg H; Vossen, Felix; Burkhardt, Markus; Pohlemann, Tim
2013-08-26
Anterior bone grafts are used as struts to reconstruct the anterior column of the spine in kyphosis or following injury. An incomplete fusion can lead to later correction losses and compromise further healing. Despite the different stabilizing techniques that have evolved, from posterior or anterior fixating implants to combined anterior/posterior instrumentation, graft pseudarthrosis rates remain an important concern. Furthermore, the need for additional anterior implant fixation is still controversial. In this bench-top study, we focused on the graft-bone interface under various conditions, using two simulated spinal injury models and common surgical fixation techniques to investigate the effect of implant-mediated compression and contact on the anterior graft. Calf spines were stabilised with posterior internal fixators. The wooden blocks as substitutes for strut grafts were impacted using a "pressfit" technique and pressure-sensitive films placed at the interface between the vertebral bone and the graft to record the compression force and the contact area with various stabilization techniques. Compression was achieved either with posterior internal fixator alone or with an additional anterior implant. The importance of concomitant ligament damage was also considered using two simulated injury models: pure compression Magerl/AO fracture type A or rotation/translation fracture type C models. In type A injury models, 1 mm-oversized grafts for impaction grafting provided good compression and fair contact areas that were both markedly increased by the use of additional compressing anterior rods or by shortening the posterior fixator construct. Anterior instrumentation by itself had similar effects. For type C injuries, dramatic differences were observed between the techniques, as there was a net decrease in compression and an inadequate contact on the graft occurred in this model. Under these circumstances, both compression and the contact area on graft could only be maintained at high levels with the use of additional anterior rods. Under experimental conditions, we observed that ligamentous injury following type C fracture has a negative influence on the compression and contact area of anterior interbody bone grafts when only an internal fixator is used for stabilization. Because of the loss of tension banding effects in type C injuries, an additional anterior compressing implant can be beneficial to restore both compression to and contact on the strut graft.
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaotic systems carry security risks and suffer from encryption data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution, acting as a nonlinear encryption system. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
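A minimal sketch of the measure-then-scramble structure follows: the image is measured along both dimensions by two key matrices, and the rows of the result are cyclically shifted by a chaotic sequence. For brevity a 1-D logistic map stands in for the hyper-chaotic system, Gaussian matrices stand in for the paper's measurement matrices, and the compressive-sensing reconstruction at the decoder is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(n, x0=0.37, mu=3.99):
    # stand-in 1-D chaotic map; the paper uses a hyper-chaotic system
    seq = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1 - x0)
        seq[i] = x0
    return seq

def encrypt(img, m):
    n = img.shape[0]
    phi1 = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrices
    phi2 = rng.standard_normal((m, n)) / np.sqrt(m)   # (part of the key)
    y = phi1 @ img @ phi2.T          # 2-D measurement: compress + encrypt at once
    shifts = (logistic(m) * m).astype(int)
    for i, s in enumerate(shifts):   # chaos-controlled row-wise cycle shift
        y[i] = np.roll(y[i], s)
    return y, (phi1, phi2, shifts)

cipher, key = encrypt(rng.random((64, 64)), m=32)     # m < n gives the compression
```

Decryption undoes the shifts and then runs a standard compressive-sensing solver (e.g., an l1 or TV reconstruction) with phi1 and phi2 as the sensing operators.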
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
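The optimisation idea can be made concrete with a small dynamic program: choose k retained samples (endpoints included) so that the total squared error of linearly interpolating between them is exactly minimal. This is a simplified O(kN²)-state rendering in the spirit of the paper's network-model formulation, with plain linear interpolation assumed in place of the paper's more general setting; it is meant for short records.

```python
import numpy as np

def seg_err(x, i, j):
    # squared error when samples i..j are replaced by a line from x[i] to x[j]
    t = np.arange(i, j + 1)
    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(((x[i:j + 1] - line) ** 2).sum())

def best_subset(x, k):
    """Exact DP: pick k samples (endpoints kept) minimizing interpolation error."""
    n = len(x)
    cost = np.full((n, k), np.inf)       # cost[j, c]: best error up to kept sample j,
    prev = np.zeros((n, k), dtype=int)   # with c kept samples after index 0
    cost[0, 0] = 0.0
    for j in range(1, n):
        for c in range(1, min(j, k - 1) + 1):
            for i in range(c - 1, j):
                v = cost[i, c - 1] + seg_err(x, i, j)
                if v < cost[j, c]:
                    cost[j, c], prev[j, c] = v, i
    sel, j, c = [n - 1], n - 1, k - 1    # backtrack the optimal selection
    while c > 0:
        j = prev[j, c]
        c -= 1
        sel.append(j)
    return sel[::-1], cost[n - 1, k - 1]

x = np.sin(np.linspace(0, 6, 120))
keep, err = best_subset(x, 16)   # guaranteed-minimal error for 16 kept samples
```

Unlike the greedy heuristics the paper criticizes, the DP explores all admissible sample subsets implicitly, which is why its distortion bounds what any time-domain heuristic can achieve.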
Design of a high-power, high-brightness Nd:YAG solar laser.
Liang, Dawei; Almeida, Joana; Garcia, Dário
2014-03-20
A simple high-power, high-brightness Nd:YAG solar laser pumping approach is presented in this paper. The incoming solar radiation is both collected and concentrated by four Fresnel lenses and redirected toward a Nd:YAG laser head by four plane-folding mirrors. A fused-silica secondary concentrator is used to compress the highly concentrated solar radiation to a laser rod. Optimum pumping conditions and laser resonator parameters are found through ZEMAX and LASCAD numerical analysis. Solar laser power of 96 W is numerically calculated, corresponding to the collection efficiency of 24 W/m². A record-high solar laser beam brightness figure of merit of 9.6 W is numerically achieved.
Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew
2009-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.
Fout, N; Ma, Kwan-Liu
2012-12-01
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
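The switched-prediction framework can be sketched as follows: for each block, every predictor in a preset bank is tried, and the one with the smallest residual energy is kept, with its index sent as side information. The three polynomial extrapolators and the block size below are illustrative assumptions; APE's structured interpolating polynomials and ACE's combined bank are richer than this.

```python
import numpy as np

# candidate linear predictors (previous-sample polynomial extrapolations of
# order 0..2), a stand-in for the paper's preset group of linear predictors
PREDICTORS = {
    0: lambda x, i: x[i - 1],
    1: lambda x, i: 2 * x[i - 1] - x[i - 2],
    2: lambda x, i: 3 * x[i - 1] - 3 * x[i - 2] + x[i - 3],
}

def switched_residuals(x, block=64):
    """Per block, keep the predictor with the smallest residual energy."""
    x = np.asarray(x, dtype=float)
    choices, resid = [], np.empty_like(x)
    resid[:3] = x[:3]                       # warm-up samples stored raw
    for start in range(3, x.size, block):
        idx = range(start, min(start + block, x.size))
        best, best_e, best_r = None, np.inf, None
        for p, f in PREDICTORS.items():
            r = np.array([x[i] - f(x, i) for i in idx])
            e = float((r ** 2).sum())
            if e < best_e:
                best, best_e, best_r = p, e, r
        choices.append(best)                # selector is sent as side information
        resid[idx.start:idx.stop] = best_r
    return resid, choices

x = np.cumsum(np.random.default_rng(0).standard_normal(300))  # random-walk test signal
r, modes = switched_residuals(x)
```

Switching per block is what lets the scheme adapt to varying statistics within one dataset: smooth regions pick higher-order extrapolation, noisy regions fall back to order zero.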
Compression and release dynamics of an active matter system of Euglena gracilis
NASA Astrophysics Data System (ADS)
Lam, Amy; Tsang, Alan C. H.; Ouellette, Nicholas; Riedel-Kruse, Ingmar
Active matter, defined as ensembles of self-propelled particles, encompasses a large variety of systems at all scales, from nanoparticles to bird flocks. Though various models and simulations have been created to describe the dynamics of these systems, experimental verification has been difficult to obtain. This is frequently due to the complex interaction rules which govern the particle behavior, in turn making systematic variation of parameters impossible. Here, we propose a model for predicting the system evolution of compression and release of an active system, based on experiments and simulations. In particular, we consider ensembles of the unicellular, photo-responsive algae, Euglena gracilis, under light stimulation. By varying the spatiotemporal light patterns, we are able to finely adjust cell densities and achieve arbitrary non-homogeneous distributions, including compression into high-density aggregates of varying geometries. We observe the formation of depletion zones after the release of the confining stimulus and investigate the effects of the density distribution and particle rotational noise on the depletion. These results provide implications for defining state parameters which determine system evolution.
Quantitative holographic interferometry applied to combustion and compressible flow research
NASA Astrophysics Data System (ADS)
Bryanston-Cross, Peter J.; Towers, D. P.
1993-03-01
The application of holographic interferometry to phase object analysis is described. Emphasis has been given to a method of extracting quantitative information automatically from the interferometric fringe data. To achieve this, a carrier frequency has been added to the holographic data, making it possible, firstly, to form a phase map using a fast Fourier transform (FFT) algorithm, and then to 'solve', or unwrap, this image to give a contiguous density map using a minimum-weight spanning tree (MST) noise-immune algorithm known as fringe analysis (FRAN). Applications of this work to a burner flame and a compressible flow are presented. In both cases the spatial frequency of the fringes exceeds the resolvable limit of conventional digital framestores. Therefore, a flatbed scanner with a resolution of 3200 × 2400 pixels has been used to produce very high resolution digital images from photographs. This approach has allowed the processing of data despite the presence of caustics generated by strong thermal gradients at the edge of the combustion field. A similar example is presented from the analysis of a compressible transonic flow in the shock wave and trailing edge regions.
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
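For readers unfamiliar with the baseline coder being modified, a plain vector quantization encoder looks roughly like the sketch below: a codebook is trained (here with ordinary Lloyd/k-means iterations) and each image block is replaced by the index of its nearest codeword. The thesis's contributions (distributed blocks, input-dependent weighted distortion, post-filtering, neural-network coder designs) sit on top of this baseline and are not shown; block and codebook sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def blockify(img, bs=4):
    h, w = img.shape
    return np.array([img[r:r + bs, c:c + bs].ravel()
                     for r in range(0, h, bs) for c in range(0, w, bs)], dtype=float)

def train_codebook(blocks, size=16, iters=10):
    # plain Lloyd/k-means codebook training on training-image blocks
    cb = blocks[rng.choice(len(blocks), size, replace=False)].copy()
    for _ in range(iters):
        d = ((blocks[:, None, :] - cb[None]) ** 2).sum(-1)   # all pairwise distances
        lab = d.argmin(1)
        for k in range(size):
            if (lab == k).any():
                cb[k] = blocks[lab == k].mean(0)             # centroid update
    return cb

def vq_encode(image, cb, bs=4):
    idx = []
    for r in range(0, image.shape[0], bs):
        for c in range(0, image.shape[1], bs):
            v = image[r:r + bs, c:c + bs].reshape(-1)
            idx.append(int(((cb - v) ** 2).sum(1).argmin()))  # nearest codeword
    return idx   # each index costs log2(len(cb)) bits

img = rng.random((32, 32))
cb = train_codebook(blockify(img))
indices = vq_encode(img, cb)        # 64 blocks -> 64 four-bit indices
```

The exhaustive nearest-codeword search in vq_encode is exactly the computational bottleneck the thesis attacks, which motivates its neural-network encoder designs.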
Font group identification using reconstructed fonts
NASA Astrophysics Data System (ADS)
Cutter, Michael P.; van Beusekom, Joost; Shafait, Faisal; Breuel, Thomas M.
2011-01-01
Ideally, digital versions of scanned documents should be represented in a format that is searchable, compressed, highly readable, and faithful to the original. These goals can theoretically be achieved through OCR and font recognition, re-typesetting the document text with original fonts. However, OCR and font recognition remain hard problems, and many historical documents use fonts that are not available in digital forms. It is desirable to be able to reconstruct fonts with vector glyphs that approximate the shapes of the letters that form a font. In this work, we address the grouping of tokens in a token-compressed document into candidate fonts. This permits us to incorporate font information into token-compressed images even when the original fonts are unknown or unavailable in digital format. This paper extends previous work in font reconstruction by proposing and evaluating an algorithm to assign a font to every character within a document. This is a necessary step to represent a scanned document image with a reconstructed font. Through our evaluation method, we have measured a 98.4% accuracy for the assignment of letters to candidate fonts in multi-font documents.
Design and evaluation of a bolted joint for a discrete carbon-epoxy rod-reinforced hat section
NASA Technical Reports Server (NTRS)
Rousseau, Carl Q.; Baker, Donald J.
1996-01-01
The use of prefabricated pultruded carbon-epoxy rods has reduced the manufacturing complexity and costs of stiffened composite panels while increasing the damage tolerance of the panels. However, repairability of these highly efficient discrete stiffeners has been a concern. Design, analysis, and test results are presented in this paper for a bolted-joint repair for the pultruded rod concept that is capable of efficiently transferring axial loads in a hat-section stiffener on the upper skin segment of a heavily loaded aircraft wing component. A tension and a compression joint design were evaluated. The tension joint design achieved approximately 1.0% strain in the carbon-epoxy rod-reinforced hat-section and failed in a metal fitting at 166% of the design ultimate load. The compression joint design failed in the carbon-epoxy rod-reinforced hat-section test specimen area at approximately 0.7% strain and at 110% of the design ultimate load. This strain level of 0.7% in compression is similar to the failure strain observed in previously reported carbon-epoxy rod-reinforced hat-section column tests.
The effect on slurry water as a fresh water replacement in concrete properties
NASA Astrophysics Data System (ADS)
Kadir, Aeslina Abdul; Shahidan, Shahiron; Hai Yee, Lau; Ikhmal Haqeem Hassan, Mohd; Bakri Abdullah, Mohd Mustafa Al
2016-06-01
Concrete is the most widely used engineering material in the world, and concrete production is one of the largest water-consuming industries. Consequently, the number of ready-mixed concrete plants has increased dramatically owing to high demand from urban development projects; at the same time, the slurry water they generate leads to environmental problems. This paper therefore investigates the effect of using slurry water on the mechanical properties of concrete. The basic wastewater characterization was carried out according to USEPA Methods 150.1 and 300.0, while the mechanical properties of concrete made with slurry water were assessed against the ASTM C1602 and BS EN 1008 standards. The compressive strength, modulus of elasticity and tensile strength were studied, with wastewater replacing fresh water in the concrete mix at levels ranging from 0% up to 50%. The results suggest that concrete with 20% slurry water replacement achieved the highest compressive strength and modulus of elasticity compared with the other percentages, and that slurry water mixes have better compressive strength than the control mix.
Improving transmission efficiency of large sequence alignment/map (SAM) files.
Sakib, Muhammad Nazmus; Tang, Jijun; Zheng, W Jim; Huang, Chin-Tser
2011-01-01
Research in bioinformatics primarily involves collection and analysis of a large volume of genomic data. Naturally, it demands efficient storage and transfer of this huge amount of data. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data. One way to improve the transmission time of large files is to apply a maximum lossless compression on them. In this paper, we present SAMZIP, a specialized encoding scheme, for sequence alignment data in SAM (Sequence Alignment/Map) format, which improves the compression ratio of existing compression tools available. In order to achieve this, we exploit the prior knowledge of the file format and specifications. Our experimental results show that our encoding scheme improves compression ratio, thereby reducing overall transmission time significantly.
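The format-aware idea can be sketched in a few lines: split the tab-separated SAM alignment fields into per-column streams so that similar data are compressed together, delta-encode the sorted POS column, and deflate each stream. This is a toy rendering of the approach, not SAMZIP itself; SAM header lines and the further field-specific transforms of the real encoder are omitted, and the sample records are illustrative.

```python
import zlib

def samzip_like(sam_lines):
    """Column-wise compression sketch for SAM alignment records.

    Splits the tab-separated fields into per-column streams, delta-encodes
    the POS column (column 4 in the SAM spec), and deflates each stream.
    The real SAMZIP encoder exploits far more of the format than this."""
    cols = list(zip(*(ln.rstrip("\n").split("\t") for ln in sam_lines)))
    streams = []
    for i, col in enumerate(cols):
        if i == 3:  # POS: sorted alignments make deltas small and repetitive
            pos = list(map(int, col))
            col = [str(pos[0])] + [str(b - a) for a, b in zip(pos, pos[1:])]
        streams.append(zlib.compress("\n".join(col).encode(), 9))
    return streams

sam = ["r1\t0\tchr1\t100\t60\t50M\t*\t0\t0\tACGT\tFFFF",
       "r2\t0\tchr1\t104\t60\t50M\t*\t0\t0\tACGT\tFFFF"]
print([len(s) for s in samzip_like(sam)])
```

Grouping a column's values together is what gives the general-purpose compressor long runs of similar symbols to exploit, which is where the ratio improvement over compressing whole lines comes from.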
Ablation driven by hot electrons generated during the ignitor laser pulse in shock ignition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piriz, A. R.; Rodriguez Prieto, G.; Tahir, N. A.
2012-12-15
An analytical model for the ablation driven by hot electrons is presented. The hot electrons are assumed to be generated during the high intensity laser spike used to produce the ignitor shock wave in the shock ignition driven inertial fusion concept, and to carry the absorbed laser energy in its totality. Efficient energy coupling requires keeping the critical surface sufficiently close to the ablation front, and this goal can be achieved for high laser intensities provided that the laser wavelength is short enough. Scaling laws for the ablation pressure and the other relevant magnitudes of the ablation cloud are found in terms of the laser and target parameters. The effect of the preformed plasma assembled by the compression pulse, previous to the ignitor, is also discussed. It is found that a minimum ratio between the compression and the ignitor pulses would be necessary for the adequate matching of the corresponding scale lengths.
Compression deformation of WC: atomistic description of hard ceramic material
NASA Astrophysics Data System (ADS)
Feng, Qing; Song, Xiaoyan; Liu, Xuemei; Liang, Shuhua; Wang, Haibin; Nie, Zuoren
2017-11-01
The deformation characteristics of WC, as a typical hard ceramic material, were studied on the nanoscale using atomistic simulations for both the single-crystal and polycrystalline forms under uniaxial compression. In particular, the effects of crystallographic orientation, grain boundary coordination and grain size on the origin of deformation were investigated. The deformation behavior of both single-crystal and polycrystalline WC depends strongly on the orientation relative to the loading direction. The grain boundaries play a significant role in the deformation coordination and the potential high fracture toughness of the nanocrystalline WC. In contrast to conventional knowledge of ceramics, maximum strength was obtained at a critical grain size corresponding to the turning point from a Hall-Petch to an inverse Hall-Petch relationship. To explain this, a mechanism combining the effect of dislocation motion within grains with the coordination of stress concentration at the grain boundaries was proposed. The present work advances our understanding of plastic deformability and of the possibility of achieving high strength in nanocrystalline ceramic materials.
Factors affecting the sticking of insects on modified aircraft wings
NASA Technical Reports Server (NTRS)
Yi, O.; Chitsaz-Z, M. R.; Eiss, N. S.; Wightman, J. P.
1988-01-01
Previous work showed that the total number of insects sticking to an aluminum surface was reduced by coating the aluminum surface with elastomers. Due to a large number of possible experimental errors, no correlation between the modulus of elasticity of the elastomer and the total number of insects sticking to a given elastomer was obtained. One of the errors assumed to be introduced during the road test is a variable insect flux, so that the number of insects striking one surface might differ from that striking another sample. To eliminate this source of error, the road test used to collect insects was simulated in a laboratory by developing an insect impacting technique using a pipe and high-pressure compressed air. The insects are accelerated by a compressed air gun to high velocities and are then impacted with a stationary target on which the sample is mounted. The velocity of an object exiting from the pipe was determined, and the technique was further refined to obtain a uniform air velocity distribution.
Fluid helium at conditions of giant planetary interiors
Stixrude, Lars; Jeanloz, Raymond
2008-01-01
As the second most-abundant chemical element in the universe, helium makes up a large fraction of giant gaseous planets, including Jupiter, Saturn, and most extrasolar planets discovered to date. Using first-principles molecular dynamics simulations, we find that fluid helium undergoes temperature-induced metallization at high pressures. The electronic energy gap (band gap) closes at 20,000 K at a density half that of zero-temperature metallization, resulting in electrical conductivities greater than the minimum metallic value. Gap closure is achieved by a broadening of the valence band via increased s-p hybridization with increasing temperature, and this influences the equation of state: the Grüneisen parameter, which determines the adiabatic temperature-depth gradient inside a planet, changes only modestly, decreasing with compression up to the high-temperature metallization and then increasing upon further compression. The change in electronic structure of He at elevated pressures and temperatures has important implications for the miscibility of helium in hydrogen and for understanding the thermal histories of giant planets.
NASA Astrophysics Data System (ADS)
Jaini, Z. M.; Rum, R. H. M.; Boon, K. H.
2017-10-01
This paper presents the utilization of rice husk ash (RHA) as sand replacement and polypropylene mega-mesh 55 (PMM) as fiber reinforcement in foamed concrete. High pozzolanic reactivity and the ability to act as a filler make RHA a strategic material for enhancing the strength and durability of foamed concrete, while the presence of PMM improves the toughness of foamed concrete in resisting shrinkage and cracking. In this experimental study, cube and cylinder specimens were prepared for the compression and splitting-tensile tests, while notched beam specimens were cast for the three-point bending test. It was found that 40% RHA and 9 kg/m³ PMM contribute the highest strength and fracture energy: the compressive, tensile and flexural strengths are 32 MPa, 2.88 MPa and 6.68 MPa respectively, while the fracture energy reaches 42.19 N/m. The results indicate the high potential of RHA and PMM in enhancing the mechanical properties of foamed concrete.
Pressure dependence of excited-state charge-carrier dynamics in organolead tribromide perovskites
NASA Astrophysics Data System (ADS)
Liu, X. C.; Han, J. H.; Zhao, H. F.; Yan, H. C.; Shi, Y.; Jin, M. X.; Liu, C. L.; Ding, D. J.
2018-05-01
Excited-state charge-carrier dynamics governs the performance of organometal trihalide perovskites (OTPs) and is strongly influenced by the crystal structure. Characterizing the excited-state charge-carrier dynamics in OTPs under high pressure is imperative for providing crucial insights into structure-property relations. Here, we conduct in situ high-pressure femtosecond transient absorption spectroscopy experiments to study the excited-state carrier dynamics of CH3NH3PbBr3 (MAPbBr3) under hydrostatic pressure. The results indicate that compression is an effective approach to modulate the carrier dynamics of MAPbBr3. Across each pressure-induced phase, carrier relaxation, phonon scattering, and Auger recombination present different pressure-dependent properties under compression. Responsiveness is attributed to the pressure-induced variation in the lattice structure, which also changes the electronic band structure. Specifically, simultaneous prolongation of carrier relaxation and Auger recombination is achieved in the ambient phase, which is very valuable for excess energy harvesting. Our discussion provides clues for optimizing the photovoltaic performance of OTPs.
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy along the temporal direction using motion-compensated temporal filtering, so that high coding performance and flexible scalability can be provided. In order to make the compressed video resilient to channel errors and to guarantee robust transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
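The description-generation step can be sketched structurally: after wavelet decomposition, each spatial-orientation tree is rooted at a low-band coefficient, so dealing the roots round-robin across descriptions partitions the coefficients into independently decodable streams. The sketch below shows this partition for a one-level decomposition only; SPIHT coding of each stream, the temporal filtering, and the added redundancy are all omitted.

```python
import numpy as np

def split_descriptions(coeffs, n_desc=2):
    """Partition one-level wavelet coefficients into n_desc descriptions.

    Each tree is rooted at a low-band (top-left quadrant) coefficient; the
    root and its three same-position subband coefficients travel together.
    A structural sketch only, not the paper's full MDSC encoder."""
    h, w = coeffs.shape
    descs = [np.zeros_like(coeffs) for _ in range(n_desc)]
    roots = [(r, c) for r in range(h // 2) for c in range(w // 2)]
    for k, (r, c) in enumerate(roots):
        d = k % n_desc   # round-robin assignment of whole trees
        for rr, cc in ((r, c), (r, c + w // 2),
                       (r + h // 2, c), (r + h // 2, c + w // 2)):
            descs[d][rr, cc] = coeffs[rr, cc]
    return descs

coeffs = np.random.default_rng(0).standard_normal((8, 8))
d0, d1 = split_descriptions(coeffs)   # either stream alone yields a coarse frame
```

Because every description contains a spatially interleaved subset of complete trees, losing one stream degrades the whole frame gracefully instead of erasing a region.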
The quality mammographic image. A review of its components.
Rickard, M T
1989-11-01
Seven major factors resulting in a quality (high-contrast, high-resolution) mammographic image have been discussed. The following is a summary of their key features:
1) Dedicated mammographic equipment: molybdenum target material; molybdenum filter with beryllium window; low kVp usage, in the range of 24 to 30; routine contact mammography performed at 25 kVp; slightly lower kVp for coned compression; slightly higher kVp for microfocus magnification.
2) Film density: phototimer with adjustable position; calibration of the phototimer to an optimal optical density of approximately 1.4 over the full kVp range.
3) Breast compression: general and focal (coned compression); essential to achieve proper contrast, resolution and breast immobility; foot controls preferable.
4) Focal spot: size recommendation for contact work 0.3 mm; minimum power output of 100 mA at 25 kVp desirable to avoid movement blurring in contact grid work; size recommendation for magnification work 0.1 mm.
5) Grid: usage recommended as routine in all but magnification work.
6) Film-screen combination: high-contrast, high-speed film; high-resolution screen; specifically designed cassette for close film-screen contact and low radiation absorption; use of faster screens for magnification techniques.
7) Dedicated processing: increased developing time (40 to 45 seconds); increased developer temperature (35 to 38 degrees); adjusted replenishment rate and dryer temperature.
All seven factors contributing to image contrast and resolution affect the radiation dose to the breast. The risk of increased dose associated with the use of various techniques needs to be balanced against the risks of incorrect diagnosis associated with their non-use.
Wollborn, Jakob; Ruetten, Eva; Schlueter, Bjoern; Haberstroh, Joerg; Goebel, Ulrich; Schick, Martin A
2018-01-22
Standardized modeling of cardiac arrest and cardiopulmonary resuscitation (CPR) is crucial to evaluate new treatment options. Experimental porcine models are ideal, closely mimicking human physiology. However, the anteroposterior chest diameter differs significantly, being larger in pigs, and thus poses a challenge to achieving the adequate perfusion pressures, and consequently hemodynamics, commonly achieved during human resuscitation. The aim was to prove that standardized resuscitation is feasible and renders adequate hemodynamics and perfusion in pigs, using a specifically designed resuscitation board for a pneumatic chest compression device. A "porcine-fit" resuscitation board was designed for our experiments to make optimal use of a pneumatic compression device (LUCAS® II, Physio-Control Inc.), which is widely employed in emergency medicine and ideal in an experimental setting due to its high standardization. Asphyxial cardiac arrest was induced in 10 German hybrid landrace pigs and cardiopulmonary resuscitation was performed according to ERC/AHA 2015 guidelines with mechanical chest compressions. Hemodynamics were measured in the carotid and pulmonary artery. Furthermore, arterial blood gas was drawn to assess oxygenation and tissue perfusion. The custom-designed resuscitation board in combination with the LUCAS® device demonstrated highly sufficient performance regarding hemodynamics during CPR (mean arterial blood pressure, MAP, 46 ± 1 mmHg and mean pulmonary artery pressure, mPAP, 36 ± 1 mmHg over the course of CPR). MAP returned to baseline values 2 h after ROSC (80 ± 4 mmHg), requiring moderate doses of vasopressors. Furthermore, stroke volume and contractility were analyzed using pulse contour analysis (106 ± 3 ml and 1097 ± 22 mmHg/s during CPR). Blood gas analysis revealed CPR-typical changes, normalizing in due course. Thermodilution parameters did not show a persistent intravascular volume shift. Standardized cardiopulmonary resuscitation is feasible in a porcine model, achieving adequate hemodynamics and consequent tissue perfusion of consistent quality.
Buckling Behavior of Compression-Loaded Composite Cylindrical Shells with Reinforced Cutouts
NASA Technical Reports Server (NTRS)
Hilburger, Mark W.; Starnes, James H., Jr.
2002-01-01
Results from a numerical study of the response of thin-wall compression-loaded quasi-isotropic laminated composite cylindrical shells with reinforced and unreinforced square cutouts are presented. The effects of cutout reinforcement orthotropy, size, and thickness on the nonlinear response of the shells are described. A high-fidelity nonlinear analysis procedure has been used to predict the nonlinear response of the shells. The analysis procedure includes a nonlinear static analysis that predicts stable response characteristics of the shells and a nonlinear transient analysis that predicts unstable dynamic buckling response characteristics. The results illustrate how a compression-loaded shell with an unreinforced cutout can exhibit a complex nonlinear response. In particular, a local buckling response occurs in the shell near the cutout and is caused by a complex nonlinear coupling between local shell-wall deformations and in-plane destabilizing compression stresses near the cutout. In general, the addition of reinforcement around a cutout in a compression-loaded shell can retard or eliminate the local buckling response near the cutout and increase the buckling load of the shell, as expected. However, results are presented that show how certain reinforcement configurations can actually cause an unexpected increase in the magnitude of local deformations and stresses in the shell and cause a reduction in the buckling load. Specific cases are presented that suggest that the orthotropy, thickness, and size of a cutout reinforcement in a shell can be tailored to achieve improved response characteristics.
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of a decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) gray-scale images showed very promising results.
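Steps two and three of the scenario amount to fitting a regression from the codec parameter to the measured IQ and then inverting it numerically for a target. A minimal sketch follows, with illustrative training values standing in for measurements on a real image set; the quality scale and model degree are assumptions.

```python
import numpy as np

def fit_iq_model(qualities, ssims, degree=2):
    # step 2 of the scenario: regress measured IQ against the codec parameter
    return np.polyfit(qualities, ssims, degree)

def pick_quality(model, target_ssim, lo=1, hi=100):
    # step 3: invert the regression numerically to hit the requested IQ
    qs = np.arange(lo, hi + 1)
    pred = np.polyval(model, qs)
    return int(qs[np.abs(pred - target_ssim).argmin()])

# e.g. (SSIM, quality) pairs measured on a training set for one codec
# (values below are illustrative, not from the paper):
model = fit_iq_model([10, 30, 50, 70, 90], [0.62, 0.78, 0.86, 0.92, 0.97])
print(pick_quality(model, 0.8))   # quality setting predicted to give SSIM ~= 0.8
```

Repeating the fit per codec and picking the codec whose inverted model wins (highest IQ at a fixed ratio, or highest ratio at a fixed IQ) reproduces the selection logic of the paper's third step.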
Digital compression algorithms for HDTV transmission
NASA Technical Reports Server (NTRS)
Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.
1990-01-01
Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.
The effects of lossy compression on diagnostically relevant seizure information in EEG signals.
Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E
2013-01-01
This paper examines the effects of compression on EEG signals in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. An automated seizure detection system performing real-time EEG analysis for event detection was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved with minimal loss in seizure detection performance, as measured by the area under the receiver operating characteristic curve of the seizure detection system.
A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
High-speed railway signal trackside equipment patrol inspection system
NASA Astrophysics Data System (ADS)
Wu, Nan
2018-03-01
The high-speed railway signal trackside equipment patrol inspection system comprehensively applies TDI (time delay integration), a high-speed and highly responsive CMOS architecture, low-illumination photosensitive techniques, image data compression, machine vision, and related technologies. Installed on a high-speed railway inspection train, it collects, manages and analyses images of signal trackside equipment appearance while the train is running. The system automatically filters the signal trackside equipment images out of a large volume of background imagery and identifies equipment changes by comparison with the original image data. Combining ledger data with train location information, the system accurately locates the trackside equipment, effectively guiding maintenance.
Roberts, Jonathan S; Niu, Jianli; Pastor-Cervantes, Juan A
2017-10-01
Hemostasis following transradial access (TRA) is usually achieved by mechanical compression. We investigated use of the QuikClot Radial hemostasis pad (Z-Medica), compared with the TR Band (Terumo Medical), to shorten hemostasis time after TRA. Thirty patients undergoing TRA coronary angiography and/or percutaneous coronary intervention were randomized into three cohorts post TRA: 10 patients received mechanical compression with the TR Band, 10 patients received 30 min of compression with the QuikClot Radial pad, and 10 patients received 60 min of compression with the QuikClot Radial pad. Times to hemostasis and access-site complications were recorded. Radial artery patency was evaluated 1 hour after hemostasis by the reverse Barbeau's test. There were no differences in patient characteristics, mean dose of heparin (7117 ± 1054 IU), or mean activated clotting time (210 ± 50 sec) at the end of the procedure among the three groups. Successful hemostasis was achieved in 100% of patients in both the 30-min and 60-min QuikClot compression groups. Hemostasis failure occurred in 50% of patients when the TR Band was initially weaned at the protocol-driven time (40 min after sheath removal). Mean compression time for hemostasis with the TR Band was 149.4 min, compared with 30.7 min and 60.9 min for the 30-min and 60-min QuikClot groups, respectively. No radial artery occlusion occurred in any subject at the end of the study. Use of the QuikClot Radial pad following TRA in this pilot trial significantly shortened hemostasis times compared with the TR Band, with no increase in complications noted.
Data compression techniques applied to high resolution high frame rate video technology
NASA Technical Reports Server (NTRS)
Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.
1989-01-01
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.
Kesler, Michael S.; Goyel, Sonalika; Ebrahimi, Fereshteh; ...
2016-11-15
The mechanical properties of novel alloys with two-phase γ-TiAl + σ-Nb2Al microstructures were evaluated under compression at room temperature. Microstructures of varying scales were developed through solutionizing and aging heat treatments, and the volume fractions of phases were varied with changes in composition. Ultra-fine, aged γ+σ microstructures were achieved for the alloys which effectively retained high volume fractions of the parent β-phase upon quenching from the solutionizing temperature. The yield strength and compressive strain to failure of these alloys show a strong dependence on the relative scale and volume fraction of phases. Surprisingly, the hard brittle σ-phase particles were not found to control fracture in the refined microstructures.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
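The block organization described above can be pictured with a minimal data-structure sketch. This is an illustration only, not the authors' code; the class name AMRBlock, the index convention, and the component count are assumptions made for the example.

```python
import numpy as np

class AMRBlock:
    """One topologically rectangular block of cells on a refinement level."""
    def __init__(self, level, lo, hi, ncomp):
        self.level = level            # refinement level (0 = coarsest)
        self.lo, self.hi = lo, hi     # integer cell-index bounds on this level
        shape = tuple(h - l for l, h in zip(lo, hi)) + (ncomp,)
        self.data = np.zeros(shape)   # cell-centered conserved variables

# A coarse 64x64 block plus one 2x-refined patch covering its upper quadrant;
# grouping thousands of cells per block keeps the bookkeeping overhead small.
coarse = AMRBlock(level=0, lo=(0, 0), hi=(64, 64), ncomp=5)
fine = AMRBlock(level=1, lo=(64, 64), hi=(128, 128), ncomp=5)
```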
Scale-adaptive compressive tracking with feature integration
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin
2016-05-01
Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multifeature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which can additionally give a prior location of the target. Furthermore, exploiting the high efficiency of a data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
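The data-independent random projection at the heart of CT-style trackers can be sketched as follows. This is a generic Achlioptas-style sparse projection under assumed parameters, not the authors' implementation.

```python
import numpy as np

def sparse_projection_matrix(n_compressed, n_features, s=3, seed=0):
    """Data-independent sparse random matrix: entries are +/- sqrt(s)
    with probability 1/(2s) each, and zero otherwise."""
    rng = np.random.default_rng(seed)
    r = rng.random((n_compressed, n_features))
    R = np.zeros((n_compressed, n_features))
    R[r < 1 / (2 * s)] = 1.0
    R[r > 1 - 1 / (2 * s)] = -1.0
    return np.sqrt(s) * R

# Compress a high-dimensional feature vector x into v = R @ x, then feed
# the low-dimensional v to a naive Bayes classifier frame by frame.
x = np.random.rand(10_000)               # e.g., concatenated patch features
R = sparse_projection_matrix(50, x.size) # generated once, never relearned
v = R @ x                                # 50-dimensional compressed features
```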
A study of the crystallization, melting, and foaming behaviors of polylactic acid in compressed CO₂.
Zhai, Wentao; Ko, Yoorim; Zhu, Wenli; Wong, Anson; Park, Chul B
2009-12-16
The crystallization and melting behaviors of linear polylactic acid (PLA) treated with compressed CO₂ were investigated. The isothermal crystallization test indicated that while PLA exhibited very low crystallization kinetics under atmospheric pressure, CO₂ exposure significantly increased PLA's crystallization rate; a high crystallinity of 16.5% was achieved after CO₂ treatment for only 1 min at 100 °C and 6.89 MPa. One melting peak could be found in the DSC curve, and this exhibited a slight dependency on treatment times, temperatures, and pressures. PLA samples tended to foam during the gas release process, and a foaming window as a function of time and temperature was established. Based on the foaming window, crystallinity, and cell morphology, it was found that foaming clearly reduced the time needed for PLA's crystallization equilibrium.
Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro
2008-04-01
This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.
Wavelet-based compression of pathological images for telemedicine applications
NASA Astrophysics Data System (ADS)
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
Amin, Muhammad Nasir; Khan, Kaffayatullah; Saleem, Muhammad Umair; Khurram, Nauman; Niazi, Muhammad Umar Khan
2017-06-11
In this study, the researchers investigated the potential use of locally available waste materials from the limestone quarry and the granite industry as a partial replacement of cement. Quarry sites and the granite industry in the eastern province of Saudi Arabia produce tons of powder wastes in the form of quarry dust (QD) and granite sludge (GS), respectively, causing serious environmental problems along with frequent dust storms in the area. According to ASTM C109, identical 50-mm cube specimens were cast throughout this study to evaluate the compressive strength development of mortars (7, 28 and 91 days) containing these waste materials. Experimental variables included different percentage replacements of cement with waste materials (GS, QD), fineness of GS, various curing temperatures (20, 40 and 60 °C, representing local normal and hot environmental temperatures) and curing moisture (continuously moist, and partially moist followed by air curing). Finally, the results of mortar containing waste materials were compared to corresponding results of control mortar (CM) and mortar containing fly ash (FA). The test results indicated that under normal curing (20 °C, moist cured), the compressive strength of mortar containing the different percentages of waste materials (QD, GS, FA and their combinations) remained lower than that of CM at all ages. However, the compressive strength of mortar containing waste materials slightly increased with increased fineness of GS and significantly increased under high curing temperatures. It was recommended that higher fineness of GS be achieved in order to use a high percentage replacement of cement (30% or more) under local environmental conditions.
Yang, T S; Yao, S H; Chang, Y Y; Deng, J H
2018-01-08
Hard coatings have been adopted in cutting and forming applications for nearly two decades. The major purpose of using hard coatings is to reduce the friction coefficient between contact surfaces, to increase the strength, toughness and anti-wear performance of working tools and molds, and thereby to obtain a smooth work surface and an increase in the service life of tools and molds. In this report, we deposited a composite CrTiSiN hard coating, with a traditional single-layered TiAlN coating as a reference. The coatings were then comparatively studied by a series of tests. A field emission SEM was used to characterize the microstructure. Hardness was measured using a nano-indentation tester. Adhesion of the coatings was evaluated using a Rockwell C hardness indentation tester. A pin-on-disk wear tester with WC balls as sliding counterparts was used to determine the wear properties. A self-designed compression and friction tester, combining a Universal Testing Machine and a wear tester, was used to evaluate the contact behavior of composite CrTiSiN coated dies in compressing Mg alloy sheets under high pressure. The results indicated that the hardness of the composite CrTiSiN coating was lower than that of the TiAlN coating. However, the CrTiSiN coating showed better anti-wear performance. The CrTiSiN coated dies achieved smoother surfaces on the Mg alloy sheet in the compression test and a lower friction coefficient in the friction test, as compared with the TiAlN coating.
SAR data compression: Application, requirements, and designs
NASA Technical Reports Server (NTRS)
Curlander, John C.; Chang, C. Y.
1991-01-01
The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of the data stream, from the sensor downlink to electronic delivery of browse data products, are explored. The factors influencing the design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.
Advances in high throughput DNA sequence data compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz
2016-06-01
Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.
Biomedical sensor design using analog compressed sensing
NASA Astrophysics Data System (ADS)
Balouchestani, Mohammadreza; Krishnan, Sridhar
2015-05-01
The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps to reduce the storage usage, transmission times, and power consumption in order to expand the current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.
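A common way to realize the RIP-based reconstruction step mentioned above is a greedy solver such as orthogonal matching pursuit. The sketch below is a generic OMP implementation under assumed dimensions, not the authors' receiver-side algorithm.

```python
import numpy as np

def omp(Phi, y, k, tol=1e-6):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit all selected coefficients by least squares.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(1)
Phi = rng.standard_normal((40, 128)) / np.sqrt(40)    # random sensing matrix
x = np.zeros(128); x[[5, 40, 99]] = [1.5, -2.0, 0.7]  # 3-sparse signal
x_rec = omp(Phi, Phi @ x, k=3)                        # recovers the support
```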
Assessment of the impact of modeling axial compression on PET image reconstruction.
Belzunce, Martin A; Reader, Andrew J
2017-10-01
To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Although axial compression has been used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression, and of its degree of modeling during reconstruction, on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained contrast values similar to the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit a significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to accurately recover the higher frequencies. Modeling the axial compression also achieved a lower coefficient of variation, but with an increase of intervoxel correlations. The unmatched projector/backprojector achieved contrast values similar to the matched version at considerably lower reconstruction times, but at the cost of noisier images. For a line source scan, the reconstructions with modeling of the axial compression achieved resolution similar to the span 1 reconstructions. Axial compression applied to PET sinograms was found to have a negligible impact for span values lower than 7. For span values up to 21, the spatial resolution degradation due to the axial compression can be almost completely compensated for by modeling this effect in the system matrix, at the expense of considerably larger processing times and higher intervoxel correlations, while retaining the storage benefit of compressed data. For even higher span values, the resolution loss cannot be completely compensated, possibly due to an effective null space in the system. The use of an unmatched projector/backprojector proved to be a practical solution to compensate for the spatial resolution degradation at a reasonable computational cost, but can lead to noisier images.
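The notion of span can be illustrated with a toy rebinning step: sinogram planes whose ring differences fall in the same group are summed, trading axial resolution for fewer planes. This sketch ignores the real michelogram conventions (odd span values, interleaved even/odd segments) used on scanners like the mMR; all names and sizes are illustrative.

```python
import numpy as np

def axial_compress(sino_by_ringdiff, span):
    """Sum sinogram planes whose ring difference falls in the same
    span-sized group; span=1 leaves the data uncompressed."""
    grouped = {}
    for d, plane in sino_by_ringdiff.items():
        seg = d // span                   # group index after compression
        grouped.setdefault(seg, np.zeros_like(plane))
        grouped[seg] += plane
    return grouped

# 16 ring differences of 128x96 sinogram planes summed down to 4 planes.
sinos = {d: np.random.poisson(5.0, (128, 96)).astype(float) for d in range(16)}
compressed = axial_compress(sinos, span=4)   # storage drops by a factor of 4
```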
Development of ultralight, super-elastic, hierarchical metallic meta-structures with i3DP technology
NASA Astrophysics Data System (ADS)
Zhang, Dongxing; Xiao, Junfeng; Moorlag, Carolyn; Guo, Qiuquan; Yang, Jun
2017-11-01
Lightweight and mechanically robust materials show promising applications in thermal insulation, energy absorption, and battery catalyst supports. This study demonstrates an effective method for the creation of ultralight metallic structures based on initiator-integrated 3D printing technology (i3DP), which provides a possible platform to design materials with the best geometric parameters and desired mechanical performance. In this study, ultralight Ni foams with 3D interconnected hollow tubes were fabricated, consisting of hierarchical features spanning three orders of scale, from submicron to centimeter. The resultant materials can achieve an ultralight density as low as 5.1 mg cm⁻³ and nearly recover after significant compression up to 50%. Due to a high compression ratio, the hierarchical structure exhibits superior properties in terms of energy absorption and mechanical efficiency. The relationship between structural parameters and mechanical response was established. The ability to achieve ultralight densities below 10 mg cm⁻³ and the stable $\bar{E} \sim \bar{\rho}^{2}$ scaling through the whole range of relative density indicate an advantage over previous stochastic metal foams. Overall, this initiator-integrated 3D printing approach provides metallic structures with substantial benefits from the hierarchical design and fabrication flexibility for ultralight applications.
NASA Astrophysics Data System (ADS)
Peng, Feng; Sun, Ying; Pickard, Chris J.; Needs, Richard J.; Wu, Qiang; Ma, Yanming
2017-09-01
Room-temperature superconductivity has been a long-held dream and an area of intensive research. Recent experimental findings of superconductivity at 200 K in highly compressed hydrogen (H) sulfides have demonstrated the potential for achieving room-temperature superconductivity in compressed H-rich materials. We report first-principles structure searches for stable H-rich clathrate structures in rare earth hydrides at high pressures. The peculiarity of these structures lies in the emergence of unusual H cages with stoichiometries H24, H29, and H32, in which H atoms are weakly covalently bonded to one another, with rare earth atoms occupying the centers of the cages. We have found that high-temperature superconductivity is closely associated with H clathrate structures, with large H-derived electronic densities of states at the Fermi level and strong electron-phonon coupling related to the stretching and rocking motions of H atoms within the cages. Strikingly, a yttrium (Y) H32 clathrate structure of stoichiometry YH10 is predicted to be a potential room-temperature superconductor with an estimated Tc of up to 303 K at 400 GPa, as derived by direct solution of the Eliashberg equation.
NASA Astrophysics Data System (ADS)
Tahir, N. A.; Shutov, A.; Lomonosov, I. V.; Gryaznov, V.; Deutsch, C.; Fortov, V. E.; Hoffmann, D. H. H.; Ni, P.; Piriz, A. R.; Udrea, S.; Varentsov, D.; Wouchuk, G.
2006-06-01
Intense beams of energetic heavy ions are believed to be a very efficient and novel tool to create states of High-Energy-Density (HED) in matter. This paper shows with the help of numerical simulations that the heavy ion beams that will be generated at the future Facility for Antiprotons and Ion Research (FAIR) [W.F. Henning, Nucl. Instr. Meth. B 214, 211 (2004)] will allow one to use two different experimental schemes to study HED states in matter. The first scheme, named HIHEX (Heavy Ion Heating and EXpansion), will generate high-pressure, high-entropy states in matter by volumetric isochoric heating. The heated material will then be allowed to expand isentropically. Using this scheme, it will be possible to study important regions of the phase diagram that are either difficult to access or even inaccessible using traditional methods of shock compression. The second scheme would allow one to achieve low-entropy compression of a sample material like hydrogen or water to produce conditions that are believed to exist in the interiors of the giant planets. This scheme is named LAPLAS (LAboratory PLAnetary Sciences).
Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui
2015-01-19
Due to instrumental and imaging optics limitations, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) aims at inferring high-quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary, we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and the sensing matrix. Thus, a sparsity- and incoherence-restricted dictionary learning method is proposed to achieve a more efficient sparse representation. Second, a variational regularization model combining a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms the reconstructed results of other well-known methods in terms of both objective measurements and visual evaluation.
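The coherence that the proposed dictionary learning penalizes can be computed directly. A minimal sketch under assumed shapes; mutual_coherence and the random matrices are illustrative, not the paper's actual training objective.

```python
import numpy as np

def mutual_coherence(Phi, D):
    """Largest absolute inner product between normalized rows of the
    sensing matrix Phi and normalized columns of the dictionary D."""
    Pn = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    return np.abs(Pn @ Dn).max()

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 64))   # sensing matrix: 30 measurements
D = rng.standard_normal((64, 256))    # overcomplete patch dictionary
print(mutual_coherence(Phi, D))       # smaller values favor sparse recovery
```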
Interactive calculation procedures for mixed compression inlets
NASA Technical Reports Server (NTRS)
Reshotko, Eli
1983-01-01
The proper design of engine nacelle installations for supersonic aircraft depends on a sophisticated understanding of the interactions between the boundary layers and the bounding external flows. The successful operation of mixed external-internal compression inlets depends significantly on the ability to closely control the operation of the internal compression portion of the inlet. This portion of the inlet is one where compression is achieved by multiple reflection of oblique shock waves and weak compression waves in a converging internal flow passage. However weak these shocks and waves may seem gas-dynamically, they are of sufficient strength to separate a laminar boundary layer and generally even strong enough for separation or incipient separation of the turbulent boundary layers. An understanding was developed of the viscous-inviscid interactions and of the shock wave boundary layer interactions and reflections.
Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery
NASA Technical Reports Server (NTRS)
Xie, Hua; Klimesh, Matthew A.
2009-01-01
This work extends the lossless data compression technique described in Fast Lossless Compression of Multispectral- Image Data, (NPO-42517) NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
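The conversion described above can be condensed into a toy coder. The sketch uses a bare previous-sample predictor rather than the adaptive filtering of the original technique, so it only illustrates the quantize-then-track-reconstruction loop; all names are illustrative.

```python
import numpy as np

def encode_near_lossless(samples, delta):
    """Quantize prediction residuals with step 2*delta+1; predict from the
    RECONSTRUCTED previous sample so encoder and decoder stay in sync."""
    step = 2 * delta + 1
    residuals, prev_recon = [], 0
    for s in samples:
        q = int(np.round((s - prev_recon) / step))  # quantized residual
        residuals.append(q)                         # entropy-coded in practice
        prev_recon += q * step                      # value the decoder will see
    return residuals

def decode_near_lossless(residuals, delta):
    step, prev, out = 2 * delta + 1, 0, []
    for q in residuals:
        prev += q * step
        out.append(prev)
    return out

x = [100, 102, 105, 104, 110]
q = encode_near_lossless(x, delta=1)  # delta=0 reduces to lossless coding
y = decode_near_lossless(q, delta=1)  # each y[i] lies within +/-delta of x[i]
```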
Chang, Mary P; Gent, Lana M; Sweet, Merrilee; Potts, Jerry; Ahtone, Jeral; Idris, Ahamed H
2017-07-01
The American Heart Association set goals in 2010 to train 20 million people annually in cardiopulmonary resuscitation and to double bystander response by 2020. These ambitious goals are difficult to achieve without new approaches. The main objective is to evaluate a new approach to cardiopulmonary resuscitation instruction using a self-instructional kiosk to teach Hands-Only CPR to people at a busy international airport. This is a prospective, observational study evaluating a new approach to teaching Hands-Only CPR to the public from July 2013 to February 2016. The American Heart Association developed a Hands-Only CPR Kiosk for this project. We assessed the number of participants who viewed the instructional video and practiced chest compressions, as well as the quality metrics of the chest compressions. In a 32-month period, there were 23478 visits to the Hands-Only CPR Kiosk and 9006 test sessions; of those practice sessions, 26.2% achieved the correct chest compression rate, 60.2% achieved the correct chest compression depth, and 63.5% had the correct hand position. There is noticeable public interest in learning Hands-Only CPR via an airport kiosk, and an airport is an opportune place to engage laypersons in such training. The average quality of Hands-Only CPR performed by the public needs improvement; adding kiosks to other locations in the airport could reach more people, and the approach could be replicated in other major airports in the United States.
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from Earth observation satellites spans different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by onboard imaging devices and must be transmitted to Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bits. A channel code can be used to reduce the effect of such failures. In 2002, the Luby Transform code (LT-code) was introduced, and it was shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the image compression algorithm recommended by CCSDS. In fact, to design an LT-code with unequal error protection, the bit stream produced by the algorithm recommended by CCSDS must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces M different failure probabilities for the sets of bits, p1, ..., pM, leading to a total failure probability p which is an average of p1, ..., pM. In general, the parameters of an LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS recommended algorithm, this work establishes a closed form for the mean PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion to select the parameters p1, ..., pM to optimize the performance of image transmission.
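The weighted approach mentioned above can be sketched with a toy LT encoder in which important input blocks are sampled more often, so their symbols appear in more output equations and decode with lower failure probability. The degree distribution, weights, and block contents are all illustrative assumptions.

```python
import numpy as np

def lt_encode_uep(blocks, weights, degree_dist, n_out, seed=0):
    """Weighted LT encoding: each output symbol is the XOR of a random set
    of input blocks, drawn with probabilities `weights` (unequal error
    protection: heavily weighted blocks are protected more)."""
    rng = np.random.default_rng(seed)
    degrees = np.arange(1, len(degree_dist) + 1)
    out = []
    for _ in range(n_out):
        d = int(rng.choice(degrees, p=degree_dist))
        idx = rng.choice(len(blocks), size=min(d, len(blocks)),
                         replace=False, p=weights)
        sym = blocks[idx[0]].copy()
        for i in idx[1:]:
            sym ^= blocks[i]               # XOR accumulates the neighbors
        out.append((idx.tolist(), sym))    # neighbor list + encoded symbol
    return out

blocks = [np.frombuffer(b, dtype=np.uint8).copy()
          for b in (b"head", b"body", b"tail")]   # equal-length toy blocks
w = np.array([0.5, 0.25, 0.25])                   # protect block 0 the most
dist = [0.3, 0.5, 0.2]                            # toy degree distribution
symbols = lt_encode_uep(blocks, w, dist, n_out=6)
```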
Non-linear properties of metallic cellular materials with a negative Poisson's ratio
NASA Technical Reports Server (NTRS)
Choi, J. B.; Lakes, R. S.
1992-01-01
Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.
NASA Astrophysics Data System (ADS)
Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.
2015-03-01
This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
Note: A micro-perfusion system for use during real-time physiological studies under high pressure
NASA Astrophysics Data System (ADS)
Maltas, Jeff; Long, Zac; Huff, Alison; Maloney, Ryan; Ryan, Jordan; Urayama, Paul
2014-10-01
We construct a micro-perfusion system using piston screw pump generators for use during real-time, high-pressure physiological studies. Perfusion is achieved using two generators, with one generator being compressed while the other is retracted, thus maintaining pressurization while producing fluid flow. We demonstrate control over perfusion rates in the 10-μl/s range and the ability to change between fluid reservoirs at up to 50 MPa. We validate the screw-pump approach by monitoring the cyanide-induced response of UV-excited autofluorescence from Saccharomyces cerevisiae under pressurization.
Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2004-01-01
Adaptive low-dissipative high order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (div B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields of these filter schemes has been achieved.
Tension-Compression Fatigue of a Nextel™720/alumina Composite at 1200 °C in Air and in Steam
NASA Astrophysics Data System (ADS)
Lanser, R. L.; Ruggles-Wrenn, M. B.
2016-08-01
Tension-compression fatigue behavior of an oxide-oxide ceramic-matrix composite was investigated at 1200 °C in air and in steam. The composite consists of an alumina matrix reinforced with Nextel™720 alumina-mullite fibers woven in an eight harness satin weave (8HSW). The composite has no interface between the fiber and matrix, and relies on the porous matrix for flaw tolerance. Tension-compression fatigue behavior was studied for cyclical stresses ranging from 60 to 120 MPa at a frequency of 1.0 Hz. The R ratio (minimum stress to maximum stress) was -1.0. Fatigue run-out was defined as 10⁵ cycles and was achieved at 80 MPa in air and at 70 MPa in steam. Steam reduced cyclic lives by an order of magnitude. Specimens that achieved fatigue run-out were subjected to tensile tests to failure to characterize the retained tensile properties. Specimens subjected to prior cyclic loading in air retained 100% of their tensile strength. The steam environment severely degraded tensile properties. Tension-compression cyclic loading was considerably more damaging than tension-tension cyclic loading. Composite microstructure, as well as damage and failure mechanisms, were investigated.
NASA Astrophysics Data System (ADS)
Jiang, Zhen-Hua; Yan, Chao; Yu, Jian
2013-08-01
Two types of implicit algorithms have been improved for the high order discontinuous Galerkin (DG) method to solve the compressible Navier-Stokes (NS) equations on triangular grids. A block lower-upper symmetric Gauss-Seidel (BLU-SGS) approach is implemented as a nonlinear iterative scheme. A modified LU-SGS (LLU-SGS) approach is also suggested to reduce the memory requirements while retaining the good convergence performance of the original LU-SGS approach. Both implicit schemes have the significant advantage that only the diagonal block matrix is stored. The resulting implicit high-order DG methods are applied, in combination with Hermite weighted essentially non-oscillatory (HWENO) limiters, to solve viscous flow problems. Numerical results demonstrate that the present implicit methods are able to achieve significant efficiency improvements over their explicit counterparts, and that for viscous flows with shocks, the HWENO limiters can be used to achieve the desired essentially non-oscillatory shock transition and the designed high-order accuracy simultaneously.
Lovelock, D Michael; Zatcky, Joan; Goodman, Karyn; Yamada, Yoshiya
2014-06-01
Abdominal compression using a pneumatic abdominal compression belt developed in-house has been used to reduce respiratory motion of patients undergoing hypofractionated or single-fraction stereotactic radio-ablative therapy for abdominal cancers. The clinical objective of belt usage was to reduce the cranial-caudal (CC) respiratory motion of the tumor to 5 mm or less during both CT simulation and treatment. A retrospective analysis was done to determine the effectiveness of the device and associated clinical procedures in reducing the CC respiratory motion of the tumor. 42 patients treated for tumors in the liver (30), adrenal glands (6), pancreas (3) and lymph nodes (3) using high dose hypofractionated radiotherapy between 2004 and the present were eligible for analysis. All patients had 2-3 radiopaque fiducial markers implanted near the tumor prior to simulation, or had clips from prior surgery. Integral to the belt is an inflatable air bladder that is positioned over the abdomen. The pneumatic pressure was set to a level in consultation with the patient. The CC motion was measured fluoroscopically with and without pneumatic pressure. Pneumatic pressure was used at all treatments to reduce the CC motion to that achieved at simulation. The mean CC motion with the belt in place, but no additional air pressure, was 11.4 mm with a range of 5-20 mm. With the pressure applied, the mean CC motion was reduced to 4.4 mm with a range of 1-8 mm (P-value < 0.001). The clinical objective of reducing the CC motion of the tumor to a maximum excursion of 5 mm or less was achieved in 93% of cases. The use of a pneumatic compression belt and associated clinical procedures was found to result in a significant and frequently substantial reduction in the CC motion of the tumor.
Production and Assessment of Damaged High Energy Propellant Samples,
1980-05-08
[List-of-figures residue from the source report; the recoverable caption reads: Longitudinal velocity one hour after compressing versus applied engineering compressive strain for propellant samples (nominal 40 mm dia × 13 mm high).]
NASA Astrophysics Data System (ADS)
Nevskii, A. V.; Baldin, I. V.; Kudyakov, K. L.
2015-01-01
Adoption of modern building materials based on non-metallic fibers and their application in concrete structures represents one of the important issues in the construction industry. This paper presents results of an investigation of several types of selected raw materials: basalt fiber, carbon fiber and composite fiber rods based on glass and carbon. Preliminary testing showed that these raw materials can be used effectively in compressed concrete elements. The experimental program to determine the strength and deformability of compressed concrete elements with non-metallic fiber reinforcement and rod composite reinforcement included the design, manufacture and testing of several types of concrete samples with different types of fiber and longitudinal rod reinforcement. The samples were tested under compressive static load. The results demonstrated that fiber reinforcement of concrete increases the carrying capacity of compressed concrete elements and reduces their deformability. Using composite longitudinal reinforcement instead of steel longitudinal reinforcement in compressed concrete elements has an insignificant influence on bearing capacity. Combined use of composite rod reinforcement and fiber reinforcement in compressed concrete elements enables maximum strength and minimum deformability to be achieved.
Sullivan, Nancy J; Duval-Arnould, Jordan; Twilley, Marida; Smith, Sarah P; Aksamit, Deborah; Boone-Guercio, Pam; Jeffries, Pamela R; Hunt, Elizabeth A
2015-01-01
Traditional American Heart Association (AHA) cardiopulmonary resuscitation (CPR) curriculum focuses on teams of two performing quality chest compressions with rescuers on their knees, but does not include training specific to In-Hospital Cardiac Arrests (IHCA), i.e., a patient in a hospital bed with large resuscitation teams and sophisticated technology available. A randomized controlled trial was conducted with the primary goal of evaluating the effectiveness and ideal frequency of in-situ training on the time elapsed from the call for help to: (1) initiation of chest compressions and (2) successful defibrillation in IHCA. Non-intensive care unit nurses were randomized into four groups: standard AHA training (C) and three groups that participated in 15-min in-situ IHCA training sessions every two (2M), three (3M) or six months (6M). The curriculum included specific choreography for teams to achieve immediate chest compressions, high chest compression fractions and rapid defibrillation while incorporating use of a backboard and step stool. More frequent training was associated with decreased median (IQR) seconds to starting compressions: [C: 33(25-40) vs. 6M: 21(15-26) vs. 3M: 14(10-20) vs. 2M: 13(9-20); p < 0.001]; and to defibrillation: [C: 157(140-254) vs. 6M: 138(107-158) vs. 3M: 115(101-119) vs. 2M: 109(98-129); p < 0.001]. A composite outcome of key priorities (compressions within 20 s, defibrillation within 180 s, and use of a backboard) revealed improvement with more frequent training sessions: [C: 5%(1/18) vs. 6M: 23%(4/17) vs. 3M: 56%(9/16) vs. 2M: 73%(11/15); p < 0.001]. Results revealed that short in-situ training sessions conducted every 3 months are effective in improving timely initiation of chest compressions and defibrillation in IHCA.
Effect of Impact Compression on the Age-Hardening of Rapidly Solidified Al-Zn-Mg Base Alloys
NASA Astrophysics Data System (ADS)
Horikawa, Keitaro; Kobayashi, Hidetoshi
The effect of impact compression on the age-hardening behavior and the mechanical properties of Mesoalite aluminum alloy was examined by means of high-velocity plane collisions between a projectile and the Mesoalite using a single powder gun. When impact compression was applied to the Meso10 and Meso20 alloys in the as-quenched state after solution heat treatment, the subsequent age-hardening at 110 °C was greatly enhanced compared with Mesoalite without impact compression. XRD results revealed that high plastic strain was introduced inside the specimen after the impact compression. Compression tests also clarified that Meso10 and Meso20 alloy specimens subjected to impact compressive stresses above 5 GPa showed, after peak-aging at 110 °C, higher yield stresses than the alloys without impact compression. The Meso10 and Meso20 specimens that underwent solution heat treatment, followed by high-velocity impact compression (12 GPa) and peak-aging, exhibited the highest compressive yield stresses, 994 MPa in Meso10 and 1091 MPa in Meso20.
POLYCOMP: Efficient and configurable compression of astronomical timelines
NASA Astrophysics Data System (ADS)
Tomasi, M.
2016-07-01
This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept on a portable hard drive.
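The polynomial-fitting branch of the algorithm can be pictured with a short sketch: fit one chunk of a smooth timeline in a Chebyshev basis and keep the coefficients only if the worst-case error respects the user-configured bound. Function names and parameters are illustrative, not polycomp's actual API.

```python
import numpy as np

def compress_chunk(y, deg, max_err):
    """Fit one chunk with a least-squares Chebyshev polynomial; accept the
    fit only if the worst-case reconstruction error stays within max_err."""
    x = np.linspace(-1.0, 1.0, y.size)     # normalized abscissa
    coeffs = np.polynomial.chebyshev.chebfit(x, y, deg)
    err = np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - y).max()
    return (coeffs, err) if err <= max_err else (None, err)

# A smooth pointing-like signal: 1000 samples stored as 8 coefficients.
t = np.linspace(0.0, 1.0, 1000)
y = 0.5 * np.sin(2 * np.pi * 0.7 * t) + 0.01 * t
coeffs, err = compress_chunk(y, deg=7, max_err=1e-3)
if coeffs is not None:
    print(f"1000 samples -> {coeffs.size} coefficients, max error {err:.1e}")
```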