Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices that have redundancy and an uncertainty of value of one unit, allowing indices adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
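As a rough illustration of the index-manipulation idea described in this abstract (not the patented algorithm itself), the following sketch embeds a bit stream into quantized integer indices by nudging each nonzero index by one unit so that its parity carries the hidden bit; the data and the embedding rule are hypothetical.

```python
import numpy as np

def embed_bits(indices, bits):
    """Embed one bit per nonzero index by nudging its value so that its
    parity (LSB) matches the bit; the +/-1 change stays within the
    one-unit quantization uncertainty mentioned in the abstract."""
    out = indices.copy()
    positions = np.flatnonzero(out)            # skip zeros (they drive the entropy coder)
    assert len(bits) <= len(positions), "not enough host indices"
    for bit, p in zip(bits, positions):
        if (out[p] & 1) != bit:
            step = -1 if out[p] > 0 else 1     # nudge toward zero by default
            if abs(out[p]) == 1:
                step = -step                   # but never create a new zero
            out[p] += step
    return out

def extract_bits(indices, n_bits):
    """Recover the embedded bits from the parities of the nonzero indices."""
    positions = np.flatnonzero(indices)
    return [int(indices[p] & 1) for p in positions[:n_bits]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    host = rng.integers(-8, 9, size=64)        # stand-in for a block of quantized indices
    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed_bits(host, payload)
    assert extract_bits(stego, len(payload)) == payload
    print("embedded and recovered:", payload)
```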
Evaluating lossy data compression on climate simulation data within a large ensemble
NASA Astrophysics Data System (ADS)
Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter
2016-12-01
High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.
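A minimal sketch of the kind of check the abstract describes, using entirely synthetic data: a toy ensemble is generated, one member is passed through a crude mantissa-truncation "compressor", and the reconstruction error is compared with the ensemble spread as a stand-in for natural variability. The real study uses CESM output and formal statistical tests; everything below is an assumption for illustration.

```python
import numpy as np

def lossy_truncate(x, keep_bits=12):
    """Crude lossy compressor: keep only `keep_bits` bits of the float32
    mantissa (similar in spirit to bit-truncation schemes)."""
    xi = x.astype(np.float32).view(np.uint32)
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    return (xi & mask).view(np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy "ensemble": a common climate signal plus internal variability per member
    signal = np.sin(np.linspace(0, 8 * np.pi, 10_000)).astype(np.float32) * 10 + 280
    members = signal + rng.normal(0, 0.5, size=(30, signal.size)).astype(np.float32)

    reconstructed = lossy_truncate(members[0], keep_bits=12)
    compression_error = np.sqrt(np.mean((reconstructed - members[0]) ** 2))
    natural_variability = members.std(axis=0).mean()

    print(f"RMS compression error : {compression_error:.4f}")
    print(f"mean ensemble spread  : {natural_variability:.4f}")
    print(f"error / spread ratio  : {compression_error / natural_variability:.3f}")
    print("error below natural variability:", compression_error < natural_variability)
```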
Task-oriented lossy compression of magnetic resonance images
NASA Astrophysics Data System (ADS)
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
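The segmentation-similarity idea can be made concrete with an overlap measure such as the Dice coefficient; the sketch below (with hypothetical masks) is not the paper's exact metric, only an illustration of comparing a segmentation from the original image with one obtained from the compressed image.

```python
import numpy as np

def dice(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (True = lesion)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

if __name__ == "__main__":
    # hypothetical masks: segmentation of the original volume vs. the same
    # segmenter run on a lossily compressed volume (boundary shifted slightly)
    seg_original = np.zeros((64, 64), dtype=bool)
    seg_original[20:40, 25:45] = True
    seg_compressed = np.roll(seg_original, shift=1, axis=0)
    print(f"Dice overlap: {dice(seg_original, seg_compressed):.3f}")
```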
Compression techniques in tele-radiology
NASA Astrophysics Data System (ADS)
Lu, Tianyu; Xiong, Zixiang; Yun, David Y.
1999-10-01
This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, deploying adjustable lossy-to-lossless compression techniques is essential to achieve acceptable performance over various kinds of communication networks. In particular, compressing the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression on the volume data are conducted. Favorable results show that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g., CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have a significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
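The 30 dB rule of thumb quoted above refers to peak signal-to-noise ratio; a minimal PSNR check might look like the following, where the 12-bit peak value and the synthetic "CT slice" are assumptions for the sketch.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float) -> float:
    """Peak signal-to-noise ratio in dB; `peak` is the maximum possible value
    of the data (e.g. 255 for 8-bit images, 4095 for 12-bit CT)."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct_slice = rng.integers(0, 4096, size=(256, 256))            # toy 12-bit slice
    reconstructed = ct_slice + rng.normal(0, 20, size=ct_slice.shape)  # stand-in for lossy error
    value = psnr(ct_slice, reconstructed, peak=4095)
    print(f"PSNR = {value:.1f} dB, acceptable (>= 30 dB): {value >= 30}")
```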
A Real-Time High Performance Data Compression Technique For Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on block-transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desirable compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.
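The abstract's combination of block transform and bit-plane encoding yields an embedded bit string that can be truncated at any point to hit a target rate. The sketch below illustrates only the bit-plane/embedded-prefix property on a block of stand-in integer coefficients; the actual transform, coder, and entropy coding are not reproduced.

```python
import numpy as np

def encode_bitplanes(coeffs: np.ndarray, n_planes: int):
    """Emit coefficient magnitudes most-significant bit-plane first. The list
    of planes is 'embedded': any prefix (plus the signs) yields a coarser but
    valid reconstruction."""
    mags = np.abs(coeffs)
    planes = [((mags >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]
    return np.sign(coeffs), planes

def decode_bitplanes(signs, planes, n_planes: int):
    """Rebuild magnitudes from however many leading planes were kept."""
    mags = np.zeros(signs.shape, dtype=np.int64)
    for i, plane in enumerate(planes):
        mags |= plane.astype(np.int64) << (n_planes - 1 - i)
    return signs * mags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(-200, 201, size=64)   # stand-in for quantized transform coefficients
    n_planes = 8
    signs, planes = encode_bitplanes(block, n_planes)
    for kept in (2, 4, 8):                     # truncate the embedded stream at three "rates"
        approx = decode_bitplanes(signs, planes[:kept], n_planes)
        print(f"planes kept={kept}  max reconstruction error={np.abs(block - approx).max()}")
```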
FaStore - a space-saving solution for raw sequencing data.
Roguski, Lukasz; Ochoa, Idoia; Hernaez, Mikel; Deorowicz, Sebastian
2018-03-29
The affordability of DNA sequencing has led to the generation of unprecedented volumes of raw sequencing data. These data must be stored, processed, and transmitted, which poses significant challenges. To facilitate this effort, we introduce FaStore, a specialized compressor for FASTQ files. FaStore does not use any reference sequences for compression, and permits the user to choose from several lossy modes to improve the overall compression ratio, depending on the specific needs. FaStore in the lossless mode achieves a significant improvement in compression ratio with respect to previously proposed algorithms. We perform an analysis on the effect that the different lossy modes have on variant calling, the most widely used application for clinical decision making, especially important in the era of precision medicine. We show that lossy compression can offer significant compression gains, while preserving the essential genomic information and without affecting the variant calling performance. FaStore can be downloaded from https://github.com/refresh-bio/FaStore. sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.
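FaStore's lossy modes operate on quality scores; the sketch below shows, with an assumed 8-bin coarsening and a generic deflate back end rather than FaStore's own codec, why reducing quality-score resolution improves the achievable compression.

```python
import zlib
import numpy as np

# Illustrative 8-bin coarsening of Phred quality scores. The bin edges and
# representative values here are assumptions for the sketch, not FaStore's
# actual lossy modes.
BIN_EDGES = [2, 10, 15, 20, 25, 30, 35, 40]
BIN_VALUES = [2, 6, 12, 18, 23, 27, 33, 38]

def bin_qualities(quals: np.ndarray) -> np.ndarray:
    idx = np.searchsorted(BIN_EDGES, quals, side="right") - 1
    return np.array(BIN_VALUES, dtype=np.uint8)[np.clip(idx, 0, 7)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy quality track; real FASTQ qualities are far from uniform, but the
    # gain from binning shows regardless
    quals = rng.integers(2, 41, size=1_000_000).astype(np.uint8)
    raw = zlib.compress(quals.tobytes(), 9)
    lossy = zlib.compress(bin_qualities(quals).tobytes(), 9)
    print(f"lossless: {len(raw) / quals.size:.2f} bytes/score, "
          f"8-bin lossy: {len(lossy) / quals.size:.2f} bytes/score")
```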
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
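One common spectral decorrelation option (alongside the wavelet-in-the-spectral-dimension approach mentioned above) is a Karhunen-Loeve transform across bands. The sketch below, on a synthetic 8-band cube, shows the energy compaction such a decorrelation provides before any spatial coding; it is illustrative only and not the authors' pipeline.

```python
import numpy as np

def spectral_klt(cube: np.ndarray):
    """Decorrelate a (bands, rows, cols) cube along the spectral axis with a
    Karhunen-Loeve transform; returns the transformed cube and the basis."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    x -= x.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x))
    order = np.argsort(eigvals)[::-1]            # strongest component first
    y = eigvecs[:, order].T @ x
    return y.reshape(bands, rows, cols), eigvecs[:, order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(64, 64))
    # toy 8-band image: bands are scaled copies of one scene plus noise,
    # i.e. strongly correlated across the spectral dimension
    cube = np.stack([(b + 1) * base + 0.05 * rng.normal(size=base.shape)
                     for b in range(8)])
    decorrelated, _ = spectral_klt(cube)
    energy = (decorrelated.reshape(8, -1) ** 2).sum(axis=1)
    print("fraction of energy in first component:", energy[0] / energy.sum())
```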
A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Processes to Lossless
Abbott, Walter D., III
1993-12-01
Master's thesis, Naval Postgraduate School, Monterey, California. Approved for public release; distribution is unlimited.
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image using a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in a relatively small number of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
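The DPCM-then-lossless idea in this abstract is easy to demonstrate: the sketch below compares deflate (as a stand-in for the Huffman/LZW/arithmetic coders discussed) applied directly to a synthetic 8-bit image versus applied after a horizontal differential transform.

```python
import zlib
import numpy as np

def dpcm_rows(img: np.ndarray) -> np.ndarray:
    """Horizontal DPCM: replace each pixel by its difference from the pixel to
    its left (first column kept as-is). Stored modulo 256 so the transform is
    exactly invertible for 8-bit data."""
    diff = img.astype(np.int16)
    diff[:, 1:] -= img[:, :-1].astype(np.int16)
    return (diff % 256).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # smooth synthetic "radiograph": a gradient plus mild noise
    yy, xx = np.mgrid[0:512, 0:512]
    img = ((yy + xx) / 4 % 256).astype(np.uint8)
    img = np.clip(img.astype(np.int16) + rng.integers(-3, 4, img.shape), 0, 255).astype(np.uint8)

    direct = zlib.compress(img.tobytes(), 9)
    dpcm = zlib.compress(dpcm_rows(img).tobytes(), 9)
    print(f"direct deflate : {len(direct):7d} bytes")
    print(f"DPCM + deflate : {len(dpcm):7d} bytes")
```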
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with that of lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that it survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into account the JPEG compression quality factor and exploits a human visual system (HVS) model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
StirMark Benchmark: audio watermarking attacks based on lossy compression
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Lang, Andreas; Dittmann, Jana
2002-04-01
StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 Audio Layer 3, Ogg, or VQF on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, such as spectrum or average power, and on the removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, or (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
High-quality lossy compression: current and future trends
NASA Astrophysics Data System (ADS)
McLaughlin, Steven W.
1995-01-01
This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech, and music. We put all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell-shape, region-shape, and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape, and memory gains, resulting in high fidelity and high compression.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and an uncertainty in value of one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.
Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong
2017-04-01
This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection, and save power in wireless transmission. Applying the method to the electrocardiogram (ECG), the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT/BIH database. The power reduction is demonstrated using a Bluetooth transceiver; transmission power is reduced to 18% of the uncompressed case for lossy and 53% for lossless transmission. Options for a hybrid transmission mode, adaptive rate selection, and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
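A minimal sketch of the hybrid idea, assuming a toy quantizer in place of the paper's ECG-specific lossy coder and deflate in place of its entropy coder: the coarsely quantized stream can be sent alone (lossy mode), and the separately compressed residual restores the original bit-exactly when lossless recovery is required.

```python
import zlib
import numpy as np

def lossy_encode(x: np.ndarray, step: int = 16) -> np.ndarray:
    """Toy lossy stage: coarse re-quantization of 16-bit samples."""
    return np.round(x / step).astype(np.int16)

def lossy_decode(q: np.ndarray, step: int = 16) -> np.ndarray:
    return (q.astype(np.int32) * step).astype(np.int16)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(20_000)
    ecg = (1000 * np.sin(2 * np.pi * t / 250) + rng.normal(0, 5, t.size)).astype(np.int16)

    q = lossy_encode(ecg)                                    # sent in lossy mode
    residual = (ecg - lossy_decode(q)).astype(np.int16)
    residual_stream = zlib.compress(residual.tobytes(), 9)   # kept for lossless restoration

    restored = lossy_decode(q) + np.frombuffer(zlib.decompress(residual_stream), dtype=np.int16)
    assert np.array_equal(restored, ecg)                     # bit-exact recovery when required
    print(f"original        : {ecg.nbytes} bytes")
    print(f"lossy stream    : {len(zlib.compress(q.tobytes(), 9))} bytes")
    print(f"residual stream : {len(residual_stream)} bytes")
```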
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over a lossy communication link is expensive due to the overhead of error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions are discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressive sensing for efficient sparse signal transmission via lossy links. PMID:26287195
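The CS reconstruction itself is beyond a short sketch, but the interleaving component mentioned above is simple to illustrate: a block interleaver spreads a burst of consecutive packet losses into isolated gaps, which better matches the random-sampling model assumed by the reconstruction. All sizes below are arbitrary.

```python
import numpy as np

def interleave(samples: np.ndarray, depth: int) -> np.ndarray:
    """Block interleaver: write row-wise, read column-wise, so that a burst of
    consecutive losses maps to isolated losses after de-interleaving."""
    pad = (-len(samples)) % depth
    padded = np.concatenate([samples, np.zeros(pad, samples.dtype)])
    return padded.reshape(-1, depth).T.ravel()

def deinterleave(stream: np.ndarray, depth: int, original_len: int) -> np.ndarray:
    return stream.reshape(depth, -1).T.ravel()[:original_len]

if __name__ == "__main__":
    signal = np.arange(1, 97, dtype=float)     # 96 samples to transmit
    depth = 8
    tx = interleave(signal, depth)

    lost = np.zeros(tx.size, dtype=bool)
    lost[40:56] = True                         # a 16-sample burst loss on the link
    rx = np.where(lost, np.nan, tx)

    recovered = deinterleave(rx, depth, signal.size)
    surviving = np.flatnonzero(~np.isnan(recovered))
    longest = int(np.diff(surviving).max() - 1) if surviving.size > 1 else 0
    print("missing sample positions after de-interleaving:", np.flatnonzero(np.isnan(recovered)))
    print("longest run of consecutive losses:", longest)
```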
Compression of high-density EMG signals for trapezius and gastrocnemius muscles.
Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto
2014-03-10
New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals, as well as their differences in time, were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB; for a similar FSR, higher contraction forces corresponded to higher SNR. In conclusion, the computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
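A rough illustration of the lossless finding, with surrogate signals rather than real HD EMG: channels are stacked as image rows and deflate is applied before and after taking differences in time. The grid size, noise levels, and the use of deflate instead of a dedicated image codec are all assumptions.

```python
import zlib
import numpy as np

def file_size_reduction(raw_bytes: bytes, compressed: bytes) -> float:
    return 100.0 * (1.0 - len(compressed) / len(raw_bytes))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    channels, samples = 64, 4096             # toy 8x8 electrode grid, int16 samples
    # correlated surrogate EMG: shared slowly drifting component plus channel noise
    common = np.cumsum(rng.normal(0, 30, samples))
    emg = (common + rng.normal(0, 20, (channels, samples))).astype(np.int16)

    image = emg                               # one channel per image row
    # differences in time (invertible via a cumulative sum along each row)
    diff_image = np.diff(image, axis=1, prepend=np.zeros((channels, 1), np.int16))

    raw = image.tobytes()
    c_plain = zlib.compress(raw, 9)
    c_diff = zlib.compress(diff_image.tobytes(), 9)
    print(f"FSR, rows compressed directly : {file_size_reduction(raw, c_plain):.1f}%")
    print(f"FSR, time-differenced rows    : {file_size_reduction(raw, c_diff):.1f}%")
```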
QualComp: a new lossy compressor for quality scores based on rate distortion theory
2013-01-01
Background: Next Generation Sequencing technologies have revolutionized many fields in biology by reducing the time and cost required for sequencing. As a result, large amounts of sequencing data are being generated. A typical sequencing data file may occupy tens or even hundreds of gigabytes of disk space, prohibitively large for many users. This data consists of both the nucleotide sequences and per-base quality scores that indicate the level of confidence in the readout of these sequences. Quality scores account for about half of the required disk space in the commonly used FASTQ format (before compression), and therefore the compression of the quality scores can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Results: In this paper, we present a new scheme for the lossy compression of the quality scores, to address the problem of storage. Our framework allows the user to specify the rate (bits per quality score) prior to compression, independent of the data to be compressed. Our algorithm can work at any rate, unlike other lossy compression algorithms. We envisage our algorithm as being part of a more general compression scheme that works with the entire FASTQ file. Numerical experiments show that we can achieve a better mean squared error (MSE) for small rates (bits per quality score) than other lossy compression schemes. For the organism PhiX, whose assembled genome is known and assumed to be correct, we show that it is possible to achieve a significant reduction in size with little compromise in performance on downstream applications (e.g., alignment). Conclusions: QualComp is an open source software package, written in C and freely available for download at https://sourceforge.net/projects/qualcomp. PMID:23758828
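QualComp allocates bits using a rate-distortion framework; the sketch below uses only a plain uniform scalar quantizer (an assumption, not QualComp's method) to illustrate how a user-specified rate in bits per quality score trades off against MSE.

```python
import numpy as np

def quantize_uniform(q: np.ndarray, bits: int, lo: int = 2, hi: int = 41) -> np.ndarray:
    """Uniform scalar quantizer: represent each quality score with `bits` bits,
    i.e. 2**bits reconstruction levels spread over [lo, hi]."""
    centers = np.linspace(lo, hi, 2 ** bits)
    idx = np.argmin(np.abs(q[:, None] - centers[None, :]), axis=1)
    return centers[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    quals = np.clip(rng.normal(34, 5, 100_000), 2, 41)   # toy quality-score distribution
    for bits in (1, 2, 3, 4):
        mse = np.mean((quals - quantize_uniform(quals, bits)) ** 2)
        print(f"{bits} bit(s) per score -> MSE = {mse:.2f}")
```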
Avrin, D E; Andriole, K P; Yin, L; Gould, R G; Arenson, R L
2001-03-01
A hierarchical storage management (HSM) scheme for cost-effective on-line archival of image data using lossy compression is described. This HSM scheme also provides an off-site tape backup mechanism and disaster recovery. The full-resolution image data are viewed originally for primary diagnosis, then losslessly compressed and sent off site to a tape backup archive. In addition, the original data are wavelet lossy compressed (at approximately 25:1 for computed radiography, 10:1 for computed tomography, and 5:1 for magnetic resonance) and stored on a large RAID device for maximum cost-effective, on-line storage and immediate retrieval of images for review and comparison. This HSM scheme provides a solution to 4 problems in image archiving, namely cost-effective on-line storage, disaster recovery of data, off-site tape backup for the legal record, and maximum intermediate storage and retrieval through the use of on-site lossy compression.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Manohar, Mareboyana
1994-01-01
Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) gives a better effective radiometric resolution than TLLC for a given channel rate.
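The TLLC baseline is straightforward to emulate; in the sketch below, deflate stands in for whichever lossless coder would be used, and the synthetic 12-bit image is an assumption.

```python
import zlib
import numpy as np

def tllc(pixels: np.ndarray, dropped_bits: int) -> bytes:
    """Truncation followed by lossless compression (TLLC): discard the
    `dropped_bits` least significant bits of each pixel, then deflate."""
    truncated = (pixels >> dropped_bits).astype(np.uint16)
    return zlib.compress(truncated.tobytes(), 9)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy 12-bit instrument image: smooth scene plus sensor noise
    yy, xx = np.mgrid[0:256, 0:256]
    scene = (2048 + 1000 * np.sin(xx / 40.0) * np.cos(yy / 40.0)).astype(np.uint16)
    image = scene + rng.integers(0, 8, scene.shape).astype(np.uint16)

    for dropped in (0, 2, 4):
        size = len(tllc(image, dropped))
        print(f"drop {dropped} LSBs -> {size:6d} bytes, radiometric step = {2 ** dropped} counts")
```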
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
NASA Astrophysics Data System (ADS)
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages compared to other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.
Application of content-based image compression to telepathology
NASA Astrophysics Data System (ADS)
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu
2016-12-20
In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding with JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 outperforms the other methods by achieving the best CR. In the lossy case, JPEG2000 and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved, and the morphological and biochemical parameters can still be within the range of reported values.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) from the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000, and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
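A simplified sketch of the idea (not the paper's implementation): compute a local Pearson correlation between each pixel's neighborhood and an assumed Gaussian PSF, morphologically dilate the well-correlated regions, and zero the rest so a lossless coder can store the frame compactly. Requires NumPy and SciPy.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def correlation_mask(frame: np.ndarray, psf: np.ndarray, threshold: float = 0.5):
    """Mark pixels whose local neighborhood correlates (Pearson) with the
    point-spread function, then grow the mask morphologically."""
    k = psf.shape[0]
    half = k // 2
    pf = psf.ravel() - psf.mean()
    mask = np.zeros(frame.shape, dtype=bool)
    for r in range(half, frame.shape[0] - half):
        for c in range(half, frame.shape[1] - half):
            patch = frame[r - half:r + half + 1, c - half:c + half + 1].ravel()
            pc = patch - patch.mean()
            denom = np.linalg.norm(pc) * np.linalg.norm(pf)
            if denom > 0 and (pc @ pf) / denom > threshold:
                mask[r, c] = True
    return binary_dilation(mask, iterations=2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy fluorescence frame: a few PSF-shaped beads on a noisy background
    yy, xx = np.mgrid[-3:4, -3:4]
    psf = np.exp(-(xx ** 2 + yy ** 2) / 4.0)            # assumed Gaussian PSF
    frame = rng.normal(0, 1, (96, 96))
    for r, c in [(20, 30), (50, 64), (70, 18)]:
        frame[r - 3:r + 4, c - 3:c + 4] += 12 * psf

    mask = correlation_mask(frame, psf)
    cleaned = np.where(mask, frame, 0.0)                # background discarded before coding
    print(f"pixels kept for coding: {mask.sum()} / {mask.size}")
```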
Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression
NASA Astrophysics Data System (ADS)
Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin
1994-04-01
The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods, ROC analysis and free-response ROC (FROC) methods. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.
Lossless medical image compression with a hybrid coder
NASA Astrophysics Data System (ADS)
Way, Jing-Dar; Cheng, Po-Yuen
1998-10-01
The volume of medical image data is expected to increase dramatically in the next decade due to the wide use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid wrong diagnoses due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman, and Lempel-Ziv coders.
Improved compression technique for multipass color printers
NASA Astrophysics Data System (ADS)
Honsinger, Chris
1998-01-01
A multipass color printer prints a color image by printing one color plane at a time in a prescribed order; e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane's memory but still takes advantage of the correlation between the color planes. The compression scheme is based on a block-adaptive technique for decorrelating the color planes, followed by spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block-adaptive decorrelation operations can be performed efficiently in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
Real-time compression of raw computed tomography data: technology, architecture, and benefits
NASA Astrophysics Data System (ADS)
Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert
2009-02-01
Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders, using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamples/s using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits beyond lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
The compression–error trade-off for large gridded data sets
Silver, Jeremy D.; Zender, Charles S.
2017-01-27
The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
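The difference between scalar linear packing and layer-packing can be seen with a few lines of NumPy; the field below is synthetic, with magnitudes that vary strongly across the vertical axis, which is exactly the situation in which per-layer scale and offset parameters pay off.

```python
import numpy as np

def pack_scalar(data: np.ndarray, nbits: int = 16):
    """Classic netCDF-style linear packing: one scale/offset pair for the whole array."""
    lo, hi = data.min(), data.max()
    scale = (hi - lo) / (2 ** nbits - 1)
    packed = np.round((data - lo) / scale).astype(np.uint16)
    return packed, scale, lo

def pack_layers(data: np.ndarray, nbits: int = 16):
    """Layer-packing: one scale/offset pair per vertical layer (axis 0)."""
    lo = data.min(axis=(1, 2), keepdims=True)
    hi = data.max(axis=(1, 2), keepdims=True)
    scale = (hi - lo) / (2 ** nbits - 1)
    packed = np.round((data - lo) / scale).astype(np.uint16)
    return packed, scale, lo

def unpack(packed, scale, lo):
    return packed * scale + lo

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy 3-D field whose magnitude varies strongly with model level
    levels = np.logspace(0, 5, 40).reshape(-1, 1, 1)          # 1 ... 100000
    field = levels * (1.0 + 0.01 * rng.normal(size=(40, 90, 180)))

    for name, packer in [("scalar packing", pack_scalar), ("layer packing", pack_layers)]:
        packed, scale, lo = packer(field)
        rel = (np.abs(unpack(packed, scale, lo) - field) / np.abs(field)).max()
        print(f"{name:15s} max relative error = {rel:.2e}")
```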
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
The effects of lossy compression on diagnostically relevant seizure information in EEG signals.
Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E
2013-01-01
This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. The real-time EEG analysis for event detection automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), in which the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of the bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
Koski, Antti; Tossavainen, Timo; Juhola, Martti
2004-01-01
Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature compression efficacy has been investigated only in the context of how much known or developed methods reduced the storage required by compressed forms of original ECG signals. Sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques for other biomedical signals. Our method of so-called successive approximation quantization used with wavelets was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered values of medical parameters (medical information) computed from signals. Since the methods are lossy, some information is lost due to the compression when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space, but the values of their medical parameters changed less than 5% due to compression, which indicates reliable results.
Using off-the-shelf lossy compression for wireless home sleep staging.
Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu
2015-05-15
Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting the large amount of polysomnography (PSG) data involved over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to automatically classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving with a high accuracy (>84%) in classifying sleep stages when using a lossy compression algorithm like SPIHT. As far as we know, our study is the first to focus on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging.
1996-10-25
...been demonstrated that steganography is ineffective when images are stored using this compression algorithm [2]. Difficulty in designing a general... Despite the relative ease of employing steganography to covertly transport data in an uncompressed 24-bit image, lossy compression algorithms based on... image, the security threat that steganography poses cannot be completely eliminated by application of a transform-based lossy compression algorithm
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, beside its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
Bit-Grooming: Shave Your Bits with Razor-sharp Precision
NASA Astrophysics Data System (ADS)
Zender, C. S.; Silver, J.
2017-12-01
Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
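To make the idea concrete, here is a minimal sketch of mantissa bit-shaving in the spirit of Bit Grooming; the function name, the float32 bit layout handling, and the choice of how many mantissa bits to keep are illustrative assumptions, not the NCO implementation (which also alternates shaving and setting bits to remain statistically unbiased).

```python
# Minimal bit-shaving sketch (an assumption-laden illustration, not the NCO code).
import numpy as np

def shave_mantissa(a, keep_bits=10):
    """Zero all but `keep_bits` of the 23 explicit float32 mantissa bits."""
    bits = a.astype(np.float32).view(np.uint32)
    mask = np.uint32(~((1 << (23 - keep_bits)) - 1) & 0xFFFFFFFF)
    return (bits & mask).view(np.float32)

x = np.random.randn(1_000_000).astype(np.float32)
y = shave_mantissa(x, keep_bits=10)   # roughly 3 significant decimal digits survive
# The runs of trailing zero bits created here are what lets a lossless back end
# (e.g. DEFLATE inside netCDF4/HDF5) reach the large reductions quoted above.
```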
Lossy compression of weak lensing data
Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...
2011-07-12
Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
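A hedged sketch of square-root pixel coding in this spirit follows; the scaling constant and the bias handling of the Bernstein et al. scheme are not reproduced. Quantizing the square root of the photon counts keeps the quantization error below the Poisson noise, which is what permits the aggressive bit reduction described above.

```python
# Illustrative square-root pixel coding; constants are assumptions, not the
# published algorithm's parameters.
import numpy as np

def sqrt_encode(counts, step=0.5):
    # In the sqrt domain, Poisson noise has standard deviation ~0.5, so a
    # quantization step of 0.5 adds noise well below the photon shot noise.
    return np.round(np.sqrt(np.maximum(counts, 0.0)) / step).astype(np.uint16)

def sqrt_decode(codes, step=0.5):
    return (codes.astype(np.float64) * step) ** 2

pixels = np.random.poisson(lam=200.0, size=(512, 512))
restored = sqrt_decode(sqrt_encode(pixels))   # per-pixel error stays below the Poisson error
```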
Image compression software for the SOHO LASCO and EIT experiments
NASA Technical Reports Server (NTRS)
Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis
1994-01-01
This paper describes the lossless and lossy image compression algorithms to be used on board the Solar and Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the effects of compression in order to make the best use of the transmission bits they have been allocated.
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes, maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than the Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images by the one with the local minimax distortion have a good perceptual fidelity among other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
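Since MDIP is presented as a generalization of Lempel-Ziv parsing, a brief sketch of the one-dimensional baseline may help orient readers; this is plain LZ78-style incremental parsing with dictionary augmentation, not the multidimensional matching or distortion-controlled variants described in the paper.

```python
# Hedged illustration: 1-D incremental (LZ78-style) parsing, the baseline that
# MDIP generalizes to multiple dimensions.
def lz78_parse(s):
    dictionary = {"": 0}
    phrases = []            # (index of longest previously seen phrase, next symbol)
    w = ""
    for c in s:
        if w + c in dictionary:
            w += c          # keep extending the current match
        else:
            phrases.append((dictionary[w], c))
            dictionary[w + c] = len(dictionary)   # dictionary augmentation
            w = ""
    if w:                   # flush a trailing partial match
        phrases.append((dictionary[w[:-1]], w[-1]))
    return phrases

print(lz78_parse("abababbabb"))   # [(0,'a'), (0,'b'), (1,'b'), (3,'b'), (3,'b')]
```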
NASA Astrophysics Data System (ADS)
Seeram, Euclid
2006-03-01
The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former there is no loss of information, the latter presents concerns since there is a loss of information. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus of opinion on the use of irreversible compression in primary diagnosis; however, they are generally positive on the notion of the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard for image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.
Recent advances in lossy compression of scientific floating-point data
NASA Astrophysics Data System (ADS)
Lindstrom, P.
2017-12-01
With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
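As a quick, hedged illustration of why lossless coding of floating-point fields yields only the modest 10-50% reductions mentioned above, one can compress a synthetic field directly with a general-purpose lossless coder; the near-random low-order mantissa bits leave little redundancy to exploit. The synthetic data and the resulting ratio are illustrative assumptions, not results from the paper.

```python
# Rough demonstration that general-purpose lossless coding gains little on
# floating-point fields.
import numpy as np, zlib

field = np.cumsum(np.random.randn(1_000_000)).astype(np.float64)  # smooth-ish synthetic signal
raw = field.tobytes()
ratio = len(raw) / len(zlib.compress(raw, 9))
print(f"lossless compression ratio: {ratio:.2f}")   # typically well under 2:1 for data like this
```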
An Evaluation Framework for Lossy Compression of Genome Sequencing Quality Values.
Alberti, Claudio; Daniels, Noah; Hernaez, Mikel; Voges, Jan; Goldfeder, Rachel L; Hernandez-Lopez, Ana A; Mattavelli, Marco; Berger, Bonnie
2016-01-01
This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define reference data, test sets, tools and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated referring to two state-of-the-art genomic compressors. This work has been spurred by the current activity within the ISO/IEC SC29/WG11 technical committee (a.k.a. MPEG), which is investigating the possibility of starting a standardization activity for genomic information representation.
Progress with lossy compression of data from the Community Earth System Model
NASA Astrophysics Data System (ADS)
Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.
2017-12-01
Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
[Remote access to a web-based image distribution system].
Bergh, B; Schlaefke, A; Frankenbach, R; Vogl, T J
2004-06-01
The aim was to assess different network and security technologies for remote access to a web-based image distribution system of a hospital intranet. Following preparatory testing, the time-to-display (TTD) was measured for three image types (CR, CT, MR). The evaluation included two remote access technologies, a direct ISDN dial-up and a VPN (Virtual Private Network) connection, with three different connection speeds of 64, 128 (ISDN) and 768 Kbit/s (ADSL, Asymmetric Digital Subscriber Line), as well as lossless and lossy compression. Depending on the image type, the TTD with lossless compression varied from 1:00 to 2:40 minutes at 64 Kbit/s, from 0:35 to 1:15 minutes at 128 Kbit/s and from 0:15 to 0:45 minutes with ADSL. The ISDN dial-up connection was superior to VPN technology at 64 Kbit/s but did not allow higher connection speeds. Lossy compression reduced the TTD by half for all measurements. VPN technology is preferable to direct dial-up connections since it offers higher connection speeds and advantages in usage and security. For occasional usage, 128 Kbit/s (ISDN) can be considered sufficient, especially in conjunction with lossy compression. ADSL should be chosen when more frequent usage is anticipated, whereby lossy compression may be omitted. Due to higher bandwidths and improved usability, the web-based approach appears superior to conventional teleradiology systems.
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals. We compare the performance of compression schemes based on decomposition by modulated filter banks or the wavelet packet transform, seeking the best trade-off among compression, quality and efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances the quality.
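As an illustration of a quantization stage adapted to each band's dynamic range, a minimal per-band uniform quantizer might look like the sketch below; the bit depth and the handling of constant bands are assumptions, not details from the paper.

```python
# Illustrative per-band uniform quantizer adapted to each band's dynamic range.
import numpy as np

def quantize_band(band, bits=8):
    lo, hi = float(band.min()), float(band.max())
    step = (hi - lo) / (2**bits - 1) or 1.0        # guard against constant bands
    codes = np.round((band - lo) / step).astype(np.uint16)   # assumes bits <= 16
    return codes, lo, step                          # codes + parameters needed to dequantize

def dequantize_band(codes, lo, step):
    return lo + codes.astype(np.float64) * step
```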
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop or loss of performance of the near real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression, and both lossless and lossy compression are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection across different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence on different backgrounds and other factors is documented and presented using multi-spectral data.
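For orientation, a minimal global RX (Reed-Xiaoli) anomaly detector can be sketched as a per-pixel Mahalanobis distance from the scene mean; the band count, the regularization term and the cube layout below are illustrative assumptions rather than details of the NVESD processing chain.

```python
# Global RX anomaly detector sketch for a multi-spectral cube.
import numpy as np

def rx_scores(cube):
    """cube: (rows, cols, bands) array -> per-pixel Mahalanobis distance map."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(b))    # regularize near-singular covariances
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, inv, d)   # (x - mu)^T C^-1 (x - mu)
    return scores.reshape(r, c)

anomaly_map = rx_scores(np.random.rand(64, 64, 8))   # large scores flag potential targets
```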
2D-RBUC for efficient parallel compression of residuals
NASA Astrophysics Data System (ADS)
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height-field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression-ratio benefit (measured at up to 91%).
Reducing the complexity of the CCSDS standard for image compression decreasing the DWT filter order
NASA Astrophysics Data System (ADS)
Ito, Leandro H.; Pinho, Marcelo S.
2014-10-01
The goal of this work is to evaluate the impact of utilizing shorter wavelet filters in the CCSDS standard for lossy and lossless image compression. Another constraint considered was the existence of symmetry in the filters. That approach was desired to maintain the symmetric-extension compatibility of the filter banks. Even though this strategy works well for float wavelets, it is not always the case for their integer approximations. The periodic extension was utilized whenever symmetric extension was not applicable. Even though the latter outperforms the former, for fair comparison the symmetric-extension-compatible integer-to-integer wavelet approximations were evaluated under both extensions. The evaluation methods adopted were bit rate (bpp), PSNR and the number of operations required by each wavelet transform. All these results were compared against the ones obtained utilizing the standard CCSDS with 9/7 filter banks, for lossy and lossless compression. The tests were performed over tiles (512x512) of raw remote sensing images from CBERS-2B (China-Brazil Earth Resources Satellites) captured by its high-resolution CCD camera. These images were kindly made available by INPE (National Institute for Space Research) in Brazil. For the CCSDS implementation, the source code developed by Hongqiang Wang from the Electrical Department at the University of Nebraska-Lincoln was utilized, applying the appropriate changes to the wavelet transform. For lossy compression, the results have shown that the filter bank built from the Deslauriers-Dubuc scaling function, with respectively 2 and 4 vanishing moments on the synthesis and analysis banks, presented not only a reduction of 21% in the number of operations required, but also performance on par with the 9/7 filter bank. In the lossless case, the biorthogonal Cohen-Daubechies-Feauveau filter with 2 vanishing moments presented performance close to the 9/7 integer approximation of the CCSDS, with the number of operations reduced by 1/3.
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
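As background for readers unfamiliar with vector quantization, the sketch below shows a plain (non-adaptive) VQ encode/decode round trip; LAVQ's one-pass adaptive codebook update, nonlinear quantization and coarse codebook quantization are not reproduced, and the block size and codebook size are arbitrary illustrative choices.

```python
# Plain vector quantization round trip for orientation (not the LAVQ algorithm).
import numpy as np

def vq_encode(blocks, codebook):
    # nearest-codeword index for each block (Euclidean distance)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    return codebook[indices]

rng = np.random.default_rng(0)
blocks = rng.random((1000, 16))                              # e.g. 4x4 image blocks, flattened
codebook = blocks[rng.choice(1000, 64, replace=False)]       # toy codebook of 64 codewords
reconstructed = vq_decode(vq_encode(blocks, codebook), codebook)
```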
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
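The two figures of merit quoted above can be made explicit; the sketch below uses common definitions of compression ratio and percentage root-mean-square difference (PRD variants that subtract the signal mean also exist, so treat the exact formula as an assumption).

```python
# Common definitions of CR and PRD used to report ECG compression performance.
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd_percent(x, x_rec):
    """Percentage root-mean-square difference between original and reconstruction."""
    x = np.asarray(x, dtype=np.float64)
    x_rec = np.asarray(x_rec, dtype=np.float64)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```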
NASA Astrophysics Data System (ADS)
Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada
2010-07-01
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spatial information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by different methods is also substantiated. Experimental results are conducted using a hyperspectral data set collected by NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
Lossy compression for Animated Web Visualisation
NASA Astrophysics Data System (ADS)
Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.
2017-12-01
This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high-definition videos. This enabled us to achieve high rates of compression, while being compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions
NASA Technical Reports Server (NTRS)
Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina
2002-01-01
OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between each reading. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic values of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P < .05). After converting the five-point score to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R² = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.
NASA Astrophysics Data System (ADS)
Osada, Masakazu; Tsukui, Hideki
2002-09-01
Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color. These images must also retain enough quality for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression rate for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in only one frame. In order to satisfy this requirement, we chose Motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although the latter were still acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.
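For a sense of scale, the raw data rate implied by the cine parameters quoted above can be worked out directly; uncompressed 24-bit colour (3 bytes per pixel) is assumed.

```python
# Back-of-the-envelope raw cine data rate for 640x480, 24-bit colour at 30 frames/s.
frame_bytes = 640 * 480 * 3                  # 921,600 bytes per frame
bytes_per_second = frame_bytes * 30          # ~27.6 MB/s uncompressed
bytes_per_minute = bytes_per_second * 60     # ~1.66 GB per minute of cine
print(bytes_per_second / 1e6, bytes_per_minute / 1e9)
# Ratios of 1:10 to 1:20 bring this into a range that is practical to archive and review.
```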
Lossy compression of quality scores in genomic data.
Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew
2014-08-01
Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores and other metadata, and in aggregate can require a large amount of space. The quality scores show how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, and so if a fidelity criterion is introduced, it is possible to introduce flexibility in the way these values are represented, allowing lossy compression over the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantifying the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insert/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM.
Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2017-10-01
The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to transfer, store and disseminate them. Lossy compression is becoming more popular for such situations, but it has to be applied carefully, keeping the introduced distortions at an acceptable level so that valuable information contained in the data is not lost. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze possibilities for predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. A further innovation is the possibility of employing only a limited number (percentage) of blocks for which DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several of the analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified on real-life RS data.
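The MSE/PSNR relationship being predicted is the standard one; a minimal sketch, assuming 8-bit imagery and hence a peak value of 255, is:

```python
# Standard MSE and PSNR definitions targeted by the prediction above.
import numpy as np

def mse(a, b):
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def psnr_from_mse(m, peak=255.0):
    return 10.0 * np.log10(peak ** 2 / m)
```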
Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression
NASA Astrophysics Data System (ADS)
Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard
2017-09-01
We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. Its search demands involve a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of the programme, led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image has 70 charge-coupled devices saved as a 1.2 gigabyte FITS file. Near real-time data processing and fast transient candidate identification, within minutes for rapid follow-up triggers on other telescopes, require computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a foreign location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise our gain in transfer time and still achieve our science goals, we opt for lossy data compression, keeping in mind that raw data are archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the Deeper Wider Faster programme science pipeline and enabling science that was otherwise too difficult with current technology.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compressions. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data were identified.
ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.
Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica
2018-03-15
Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary statistics lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original, and offered a 6-fold compression rate improvement compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are of the order of 0.2 sec/MB using general-purpose computers. The source code and binaries are freely available for download at https://github.com/vidarmehr/ChIPWig-v2, implemented in C++.
High-performance compression of astronomical images
NASA Technical Reports Server (NTRS)
White, Richard L.
1993-01-01
Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
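The H-transform itself is a Haar-like two-dimensional transform built from sums and differences over 2x2 blocks; the sketch below shows one level of such a block transform as an orientation aid. Normalisation conventions differ between descriptions, so this is an illustrative assumption rather than the exact HCOMPRESS implementation.

```python
# One level of a Haar-like 2x2 sum/difference transform in the spirit of the
# H-transform; normalisation and recursion details are assumptions.
import numpy as np

def block_transform(img):
    """img: 2D integer array with even dimensions."""
    a = img[0::2, 0::2].astype(np.int64)
    b = img[0::2, 1::2].astype(np.int64)
    c = img[1::2, 0::2].astype(np.int64)
    d = img[1::2, 1::2].astype(np.int64)
    total = a + b + c + d     # smooth component; the next level recurses on this
    dx    = a + b - c - d     # horizontal difference
    dy    = a - b + c - d     # vertical difference
    dd    = a - b - c + d     # diagonal difference
    return total, dx, dy, dd  # the difference planes are mostly noise and quantize well
```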
Fast reversible wavelet image compressor
NASA Astrophysics Data System (ADS)
Kim, HyungJun; Li, Ching-Chung
1996-10-01
We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be completed within a short period of time. The proposed method naturally extends from lossless compression to the lossy, high-compression range and can easily be adapted to progressive reconstruction.
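As an illustration of how a reversible wavelet transform can run on shifts and additions alone, here is a sketch of one level of the integer 5/3 lifting transform (the reversible transform of JPEG 2000); the paper's own spline biorthogonal filters are not reproduced, and the simple boundary clamping below stands in for the symmetric extension used in practice.

```python
# One level of the integer 5/3 lifting wavelet transform using only shifts and adds.
def lift_53_forward(x):
    """x: list of ints with even length; returns (approximation, detail) integer bands."""
    n = len(x)
    # predict step: detail = odd sample minus average of neighbouring even samples
    d = [x[2*i + 1] - ((x[2*i] + x[min(2*i + 2, n - 2)]) >> 1) for i in range(n // 2)]
    # update step: approximation = even sample plus rounded average of neighbouring details
    s = [x[2*i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n // 2)]
    return s, d

approx, detail = lift_53_forward([10, 12, 14, 13, 9, 8, 8, 7])
```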
An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1
1992-11-01
Lempel-Ziv-Welch (LZW) Compression ... Lossless Compression Test Results ... since IBM holds the patent for this technique. LZW compression is related to two compression techniques known as ... compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a ...
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
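A generic error-bounded quantization round trip (not the paper's adaptive segmentation or XOR-leading-zero optimization) illustrates the basic guarantee such compressors provide; the tolerance value and the choice of zlib as the lossless back end are assumptions for the sketch.

```python
# Generic error-bounded lossy round trip: every reconstructed value is within
# `tol` of the original (up to floating-point rounding).
import numpy as np, zlib

def compress_error_bounded(data, tol):
    q = np.round(np.asarray(data, dtype=np.float64) / (2.0 * tol)).astype(np.int32)
    return zlib.compress(q.tobytes(), 9)

def decompress_error_bounded(blob, tol):
    q = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return 2.0 * tol * q.astype(np.float64)

x = np.random.rand(100_000)
blob = compress_error_bounded(x, tol=1e-3)
print(np.max(np.abs(decompress_error_bounded(blob, 1e-3) - x)))   # bounded by ~1e-3
```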
The impact of skull bone intensity on the quality of compressed CT neuro images
NASA Astrophysics Data System (ADS)
Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw
2012-02-01
The increasing use of technologies such as CT and MRI, along with continuing improvements in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that the atypical behaviour of these images under compression is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality in the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
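The segmentation-based pre-processing described above relies only on thresholding and morphological operators; a rough sketch of such a skull mask, with an assumed Hounsfield-unit threshold and illustrative structuring iterations, might look like this:

```python
# Rough threshold-plus-morphology skull mask; the threshold and iteration counts
# are assumptions, not the paper's parameters.
import numpy as np
from scipy import ndimage

def mask_skull(ct_slice, bone_threshold=300.0):
    """Return a boolean mask of high-intensity (bone-like) pixels in a CT slice."""
    bone = ct_slice > bone_threshold                    # simple intensity threshold (HU)
    bone = ndimage.binary_closing(bone, iterations=2)   # bridge small gaps in the skull rim
    bone = ndimage.binary_opening(bone, iterations=1)   # drop isolated bright specks
    return bone
```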
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit-plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.
Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression
NASA Astrophysics Data System (ADS)
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-08-01
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and compute the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-05-05
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
Digital storage and analysis of color Doppler echocardiograms
NASA Technical Reports Server (NTRS)
Chandra, S.; Thomas, J. D.
1997-01-01
Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as the drawbacks of trying to retrieve velocities from videotape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10-minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.
Subjective evaluation of compressed image quality
NASA Astrophysics Data System (ADS)
Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and at what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine X-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) presented at four levels: original, 5:1, 10:1, and 15:1 compression. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at a significance level of 0.05. Also, images compressed 10:1 with the interframe coding algorithm do not show any significant differences from the original at the 0.05 level.
Psychophysical Comparisons in Image Compression Algorithms.
1999-03-01
Leister, M., "Lossy Lempel - Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression ," IEEE Proceedings, v.I, pp. 225-228, September...1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master’s Thesis, Naval...NAVAL POSTGRADUATE SCHOOL Monterey, California THESIS PSYCHOPHYSICAL COMPARISONS IN IMAGE COMPRESSION ALGORITHMS by % Christopher J. Bodine • March
ZFP compression plugin (filter) for HDF5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Mark C.
H5Z-ZFP is a compression plugin (filter) for the HDF5 library based upon the ZFP-0.5.0 compression library. It supports 4- or 8-byte integer or floating point HDF5 datasets of any dimension but partitioned in 1, 2, or 3 dimensional chunks. It supports ZFP's four fundamental modes of operation; rate, precision, accuracy or expert. It is a lossy compression plugin.
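For readers unfamiliar with ZFP's accuracy mode, the following minimal sketch shows error-bounded compression of a 3D floating-point array through the zfpy Python bindings rather than through the HDF5 plugin itself; the zfpy module and its compress_numpy/decompress_numpy calls with a tolerance keyword are assumptions about the bindings' current interface, so treat this as an illustration of the mode, not as the plugin's API.

```python
# Minimal sketch: error-bounded (accuracy-mode) ZFP compression of a 3D array.
# Assumes the zfpy bindings expose compress_numpy/decompress_numpy with a
# `tolerance` keyword; adapt to the installed version if the names differ.
import numpy as np
import zfpy  # assumed available

field = np.random.rand(64, 64, 64)   # stand-in dataset
tol = 1e-3                           # absolute error bound

compressed = zfpy.compress_numpy(field, tolerance=tol)
restored = zfpy.decompress_numpy(compressed)

max_err = np.max(np.abs(field - restored))
ratio = field.nbytes / len(compressed)
print(f"compression ratio {ratio:.1f}:1, max abs error {max_err:.2e} (bound {tol:.0e})")
```

The H5Z-ZFP plugin exposes the same rate, precision, accuracy, and expert choices through HDF5 dataset creation properties; only the invocation mechanism differs.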
Planning/scheduling techniques for VQ-based image compression
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.
1994-01-01
The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images, which will be accessed by users across the computer networks. Accessing the image data at full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, vector quantization (VQ) is the most appropriate for this application since decompression of VQ-compressed images is a table-lookup process, which makes minimal additional demands on the user's computational resources. Lossy compression of image data requires expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ: it involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, and so on. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations
NASA Astrophysics Data System (ADS)
Orf, L.
2017-12-01
In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and offers compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the memory needed to store the virtual files before they are flushed to disk.
Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images
NASA Astrophysics Data System (ADS)
Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.
2014-03-01
Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
2015-03-01
Professional Notes: "Being Efficient with Bandwidth," by Lieutenant Commander Steve Debich, Lieutenant Bruce Hill, and Captain Scot Miller (Retired). The surviving text contrasts compression methods that fall in the lossy category with the handling required for textual or numeric XML data, citing Gonzalez, R., Woods, R., & Eddins, S. (2009, p. 420) and the W3C XML Binary Characterization report (2005), http://www.w3.org/TR/xbc-characterization/.
Verification testing of the compression performance of the HEVC screen content coding extensions
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of the HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provide consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Laney, Daniel; Langer, Steven; Weber, Christopher; ...
2014-01-01
This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
Compression of color-mapped images
NASA Technical Reports Server (NTRS)
Hadenfeldt, A. C.; Sayood, Khalid
1992-01-01
In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
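To make the colormap-sorting idea concrete, here is a minimal sketch, assuming a luminance-based sort key as a stand-in for whatever ordering criterion performs best; it remaps the index image so that numerically adjacent indices point to similar colors, which is the structure a predictive coder such as DPCM can exploit.

```python
# Minimal sketch: sort a colormap by luminance and remap the index image so
# that numerically adjacent indices refer to similar colors, restoring the
# pixel-to-pixel correlation that predictive coders (e.g., DPCM) rely on.
import numpy as np

def sort_colormap(index_image: np.ndarray, colormap: np.ndarray):
    """index_image: (H, W) integer indices; colormap: (K, 3) RGB entries."""
    # Luminance as a simple perceptual ordering key (an assumption, not the
    # paper's exact sorting criterion).
    luminance = colormap @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(luminance)            # new arrangement of colormap entries
    inverse = np.empty_like(order)
    inverse[order] = np.arange(order.size)   # maps old index -> new index
    return inverse[index_image], colormap[order]

# Toy usage with a random colormap and index image.
rng = np.random.default_rng(0)
cmap = rng.integers(0, 256, size=(256, 3))
img = rng.integers(0, 256, size=(32, 32))
sorted_img, sorted_cmap = sort_colormap(img, cmap)
assert np.array_equal(sorted_cmap[sorted_img], cmap[img])   # colors are unchanged
```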
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
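A short worked sketch of the binary-to-Gray-code mapping follows; it shows the standard conversion and the property that indices differing by one change in only a single bit, which is what keeps the extracted bit planes correlated with the side information. The helper names are illustrative only.

```python
# Minimal sketch: binary <-> Gray-code mapping for quantization indices.
# Adjacent indices differ in exactly one bit after the mapping, which keeps
# the bit planes better correlated with the decoder's side information.
import numpy as np

def binary_to_gray(x: np.ndarray) -> np.ndarray:
    return x ^ (x >> 1)

def gray_to_binary(g: np.ndarray) -> np.ndarray:
    b = g.copy()
    shift = 1
    while np.any(g >> shift):
        b ^= g >> shift          # binary = g ^ (g>>1) ^ (g>>2) ^ ...
        shift += 1
    return b

indices = np.arange(8, dtype=np.uint8)            # quantization indices 0..7
gray = binary_to_gray(indices)
assert np.array_equal(gray_to_binary(gray), indices)

# Hamming distance between consecutive codewords is always 1 in Gray code.
hamming = [bin(int(a) ^ int(b)).count("1") for a, b in zip(gray[:-1], gray[1:])]
print(gray.tolist(), hamming)   # [0, 1, 3, 2, 6, 7, 5, 4] and all-ones distances
```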
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
Novel Data Reduction Based on Statistical Similarity
Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...
2016-07-18
Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method, named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression method while maintaining the quality of the compressed data. Finally, in these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
KungFQ: a simple and powerful approach to compress fastq files.
Grassi, Elena; Di Gregorio, Federico; Molineris, Ivan
2012-01-01
Nowadays, storing data derived from deep sequencing experiments has become pivotal, and standard compression algorithms do not exploit their structure in a satisfying manner. A number of reference-based compression algorithms have been developed, but they are less adequate when approaching new species without fully sequenced genomes or non-genomic data. We developed a tool that takes advantage of fastq characteristics and encodes them in a binary format optimized to be further compressed with standard tools (such as gzip or lzma). The algorithm is straightforward and does not need any external reference file; it scans the fastq file only once and has a constant memory requirement. Moreover, we added the possibility to perform lossy compression, losing some of the original information (IDs and/or qualities) but resulting in smaller files; it is also possible to define a quality cutoff under which corresponding base calls are converted to N. We achieve 2.82 to 7.77 compression ratios on various fastq files without losing information, and 5.37 to 8.77 when losing IDs, which are often not used in common analysis pipelines. In this paper, we compare the algorithm's performance with known tools, usually obtaining higher compression levels.
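To make the quality-cutoff option concrete, here is a minimal sketch of the lossy step described above (dropping read IDs and converting base calls whose quality falls below a threshold to N); the record handling and helper names are illustrative and are not KungFQ's actual encoder or format.

```python
# Minimal sketch of the lossy options described above: drop read IDs and mask
# base calls whose Phred quality falls below a cutoff with 'N'. Illustrative
# only; this is not KungFQ's actual encoding.
def mask_low_quality(seq: str, qual: str, cutoff: int = 20, offset: int = 33) -> str:
    return "".join(
        base if (ord(q) - offset) >= cutoff else "N"
        for base, q in zip(seq, qual)
    )

def lossy_fastq_records(lines, cutoff=20, keep_ids=False):
    """Yield (id, seq, qual) triples with the lossy transforms applied."""
    for i in range(0, len(lines), 4):
        read_id, seq, _, qual = (s.rstrip("\n") for s in lines[i:i + 4])
        yield (read_id if keep_ids else "@", mask_low_quality(seq, qual, cutoff), qual)

# Toy usage: '#' is Phred 2 (offset 33), so the last four bases become 'N'.
records = ["@read1\n", "ACGTACGT\n", "+\n", "IIII####\n"]
print(list(lossy_fastq_records(records)))
```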
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
A Lossy Compression Technique Enabling Duplication-Aware Sequence Alignment
Freschi, Valerio; Bogliolo, Alessandro
2012-01-01
In spite of the recognized importance of tandem duplications in genome evolution, commonly adopted sequence comparison algorithms do not take into account complex mutation events involving more than one residue at a time, since such events are not compliant with the underlying assumption of statistical independence of adjacent residues. As a consequence, the presence of tandem repeats in sequences under comparison may impair the biological significance of the resulting alignment. Although solutions have been proposed, repeat-aware sequence alignment is still considered an open problem, and new efficient and effective methods have been advocated. The present paper describes an alternative lossy compression scheme for genomic sequences which iteratively collapses repeats of increasing length. The resulting approximate representations do not contain tandem duplications, while retaining enough information to make their comparison even more significant than the edit distance between the original sequences. This allows us to exploit traditional alignment algorithms directly on the compressed sequences. Results confirm the validity of the proposed approach for the problem of duplication-aware sequence alignment. PMID:22518086
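The repeat-collapsing step can be pictured with the simple sketch below, which removes exact adjacent (tandem) copies of units of increasing length; it is a conceptual stand-in for the paper's compression scheme rather than a reimplementation of it.

```python
# Minimal sketch: iteratively collapse exact tandem repeats of increasing unit
# length, so that e.g. "ACACACGT" -> "ACGT". A conceptual stand-in for the
# lossy representation described above, not the authors' implementation.
def collapse_tandem_repeats(seq: str, max_unit: int = 4) -> str:
    for unit in range(1, max_unit + 1):
        out = []
        i = 0
        while i < len(seq):
            chunk = seq[i:i + unit]
            out.append(chunk)
            # Skip any immediately following exact copies of this unit.
            while len(chunk) == unit and seq[i + unit:i + 2 * unit] == chunk:
                i += unit
            i += unit
        seq = "".join(out)
    return seq

print(collapse_tandem_repeats("ACACACGTTTTGA"))  # -> "ACGTGA"
```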
No Bit Left Behind: The Limits of Heap Data Compression
2008-06-01
Lempel-Ziv compression is non-lossy; in other words, the original data can be fully recovered by decompression. Unlike the data representations for most of the other models, Lempel-Ziv compressed data does not permit random access, let alone in-place update. To compute this model, the size of the full stream (i.e., all live data in the heap) is recorded at collection time, and Lempel-Ziv compression is then applied to the stream.
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1997-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
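A minimal sketch of the double-difference idea follows: a cross-delta between the two correlated data sets followed by an adjacent-delta along the result, which is one of the two equivalent orderings the abstract describes. The array names and integer types are illustrative.

```python
# Minimal sketch of double-difference pre-coding: a cross-delta between two
# correlated data sets followed by an adjacent-delta along the result. The
# output is what would be handed to an entropy coder (or a lossy coder).
import numpy as np

def double_difference(set_a: np.ndarray, set_b: np.ndarray) -> np.ndarray:
    cross = set_b.astype(np.int64) - set_a.astype(np.int64)   # cross-delta
    dd = np.empty_like(cross)
    dd[0] = cross[0]
    dd[1:] = np.diff(cross)                                    # adjacent-delta
    return dd

def recover_second_set(set_a: np.ndarray, dd: np.ndarray) -> np.ndarray:
    cross = np.cumsum(dd)                                      # undo adjacent-delta
    return set_a.astype(np.int64) + cross                      # undo cross-delta

# Toy usage with two correlated "spectral bands".
band1 = np.array([100, 102, 105, 109, 114], dtype=np.int64)
band2 = band1 + np.array([3, 4, 4, 5, 5])
dd = double_difference(band1, band2)
assert np.array_equal(recover_second_set(band1, dd), band2)
print(dd)   # small-magnitude residuals that entropy-code cheaply
```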
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
On scalable lossless video coding based on sub-pixel accurate MCTF
NASA Astrophysics Data System (ADS)
Yea, Sehoon; Pearlman, William A.
2006-01-01
We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of the bit budget distribution between the lossy and residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with an arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
NASA Astrophysics Data System (ADS)
Zender, Charles S.
2016-09-01
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
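The alternating shave/set quantization can be illustrated with the short numpy sketch below, which masks the trailing mantissa bits of IEEE single-precision values; the mapping from decimal digits to retained mantissa bits used in the example is a rough assumption, not the exact rule of the published implementation.

```python
# Minimal sketch of Bit Grooming-style quantization: alternately shave (zero)
# and set (one) the trailing mantissa bits of consecutive float32 values. The
# digits-to-bits choice below is a rough approximation, not the NCO rule.
import numpy as np

def bit_groom(values: np.ndarray, keep_bits: int) -> np.ndarray:
    """Quantize 1D float32 values, keeping `keep_bits` explicit mantissa bits."""
    bits = values.astype(np.float32).view(np.uint32)
    drop = 23 - keep_bits                                  # float32 mantissa is 23 bits
    mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)    # bits to keep
    inv_mask = np.uint32(~int(mask) & 0xFFFFFFFF)          # bits to groom
    shaved = bits & mask                                   # zero the trailing bits
    grown = bits | inv_mask                                # one the trailing bits
    out = np.where(np.arange(values.size) % 2 == 0, shaved, grown)
    return out.view(np.float32)

data = np.linspace(0.1, 1.0, 8, dtype=np.float32)
groomed = bit_groom(data, keep_bits=10)   # roughly 3 decimal digits (assumed rule)
print(np.abs(groomed - data))             # small, bounded quantization error
```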
NASA Astrophysics Data System (ADS)
Gelmini, A.; Gottardi, G.; Moriyama, T.
2017-10-01
This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.
Cost-effective handling of digital medical images in the telemedicine environment.
Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel
2007-09-01
This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed. For digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth, and transmission time, the acceptable degree of compression with no loss of diagnostically important data was studied through randomized double-blind tests of subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with the imaging-noise to compression-noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high-quality monitors and conventional computer monitors. The results presented show good potential in implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments.
HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads
Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila
2014-01-01
Background and objective: Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods: We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. For the inexactly mapped or unmapped reads, we realigned them against different reference genomes using an adaptive scheme that gradually shortens the read length. Regarding the base quality values, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where a user can adjust the balance between decompression quality and compression rate. Lossless compression can be produced by setting k (the number of clusters) to the number of distinct quality values. Results: The proposed method produced a compression ratio in the range 0.5–0.65, which corresponds to 35–50% storage savings based on experimental datasets. The proposed approach achieved 15% more storage savings than CRAM and a compression ratio comparable to Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ with a General Public License (GPL) license. Limitations: Our method requires having different reference genomes and prolongs the execution time because of the additional alignments. Conclusions: The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference-based algorithms. PMID:24368726
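The lossy base-quality mechanism can be sketched with a simple one-dimensional k-means, as below; this uses a small hand-rolled clustering loop in place of HUGO's actual code, and as the abstract notes, setting k to the number of distinct quality values makes the step lossless.

```python
# Minimal sketch of lossy base-quality compression via k-means: cluster Phred
# scores into k levels and store one level index per base. Setting k to the
# number of distinct scores makes the step lossless. Not HUGO's own code.
import numpy as np

def quantize_qualities(quals: np.ndarray, k: int, iters: int = 20):
    centers = np.linspace(quals.min(), quals.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(quals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = quals[labels == j].mean()
    return labels, np.rint(centers).astype(int)

# Toy usage: represent ~40 distinct Phred values with 8 levels.
rng = np.random.default_rng(1)
quals = rng.integers(2, 41, size=1000).astype(float)
labels, centers = quantize_qualities(quals, k=8)
print(centers, np.mean(np.abs(centers[labels] - quals)))   # mean reconstruction error
```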
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment experimental datasets can be validated against each other. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10(-2) Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
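A minimal sketch of the interframe idea, assuming a linear extrapolation predictor and a uniform residual quantizer (the step size and layout are illustrative, not the published schemes' parameters), is given below.

```python
# Minimal sketch of a quantized linear interframe predictor: predict each frame
# of coordinates by linear extrapolation from the two previously reconstructed
# frames, quantize the residual, and keep only the quantized residuals.
import numpy as np

STEP = 1e-2   # quantization step (illustrative); bounds the per-coordinate error

def compress(frames: np.ndarray):
    """frames: (T, N, 3) coordinates. Returns the first two frames plus residuals."""
    recon = [frames[0].copy(), frames[1].copy()]
    residuals = []
    for t in range(2, frames.shape[0]):
        prediction = 2.0 * recon[-1] - recon[-2]         # linear extrapolation
        r = np.rint((frames[t] - prediction) / STEP).astype(np.int32)
        residuals.append(r)                              # small ints, easy to pack
        recon.append(prediction + r * STEP)              # closed-loop reconstruction
    return frames[:2].copy(), residuals

def decompress(head: np.ndarray, residuals):
    out = [head[0], head[1]]
    for r in residuals:
        prediction = 2.0 * out[-1] - out[-2]
        out.append(prediction + r * STEP)
    return np.stack(out)

# Toy usage: smooth trajectories predict well, so residuals stay small.
t = np.linspace(0.0, 1.0, 50)[:, None, None]
frames = np.sin(2 * np.pi * t) + np.zeros((50, 10, 3))
head, residuals = compress(frames)
restored = decompress(head, residuals)
print(np.max(np.abs(restored - frames)))   # bounded by STEP / 2
```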
NASA Astrophysics Data System (ADS)
García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto
2014-05-01
Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. Therefore, it is mandatory to provide hardware implementations of this type of algorithm in order to meet the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on one hand, introducing the whole C-language description into CatapultC; on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for an SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All of this serves to demonstrate that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.
Cerina, Luca; Iozzia, Luca; Mainardi, Luca
2017-11-14
In this paper, common time- and frequency-domain variability indexes obtained from pulse rate variability (PRV) series extracted from the video-photoplethysmographic signal (vPPG) were compared with heart rate variability (HRV) parameters calculated from synchronized ECG signals. The dual focus of this study was to analyze the effect of different video acquisition frame-rates, from 60 frames-per-second (fps) down to 7.5 fps, and of different video compression techniques, using both lossless and lossy codecs, on PRV parameter estimation. Video recordings were acquired with an off-the-shelf GigE Sony XCG-C30C camera on 60 young, healthy subjects (age 23±4 years) in the supine position. A fully automated signal extraction method based on the Kanade-Lucas-Tomasi (KLT) algorithm for region of interest (ROI) detection and tracking, in combination with a zero-phase principal component analysis (ZCA) signal separation technique, was employed to convert the video frame sequence into a pulsatile signal. The frame-rate degradation was simulated on the video recordings by directly sub-sampling the ROI tracking and signal extraction modules, to correctly mimic videos recorded at a lower speed. The compression of the videos was configured to avoid any frame rejection caused by codec quality leveling; FFV1 was used as the lossless codec and H.264 with a variable quality parameter as the lossy codec. The results showed that a reduced frame-rate leads to inaccurate tracking of ROIs, increased time-jitter in the signal dynamics, and local peak displacements, which degrade performance across all the PRV parameters. The root mean square of successive differences (RMSSD) and the proportion of successive differences greater than 50 ms (PNN50) in the time domain, and the low frequency (LF) and high frequency (HF) power in the frequency domain, were the parameters that degraded most with frame-rate reduction. This degradation can be partially mitigated by up-sampling the measured signal to a higher frequency (namely 60 Hz). Concerning video compression, the results showed that compression techniques are suitable for the storage of vPPG recordings, although lossless or intra-frame compression is to be preferred over inter-frame compression methods. FFV1 performance is very close to that of the uncompressed (UNC) version at less than 45% of the disk size. H.264 showed a degradation of the PRV estimation directly correlated with the increase of the compression ratio.
NASA Astrophysics Data System (ADS)
Atubga, David; Wu, Huijuan; Lu, Lidong; Sun, Xiaoyan
2017-02-01
Typical fully distributed optical fiber sensors (DOFS) spanning dozens of kilometers are equivalent to tens of thousands of point sensors along the whole monitoring line, which means tens of thousands of data points are generated for each pulse launching period. In round-the-clock monitoring, large volumes of data therefore accumulate, driving the demand for large storage space and high-speed data transmission. Moreover, the data grow further as the monitoring length and channel count increase. Mitigating this accumulation of data, and the storage capacity and transmission speed it demands, is therefore the aim of this paper. To demonstrate our idea, we carried out a comparative study of two lossless methods, Huffman and Lempel-Ziv-Welch (LZW), and a lossy data compression algorithm, the fast wavelet transform (FWT), on three distinct types of DOFS sensing data: Φ-OTDR, P-OTDR, and B-OTDA. Our results show that FWT yields the best compression ratio with acceptable computation time, notwithstanding the errors it introduces in signal reconstruction of the three DOFS data types. These outcomes indicate the promising potential of FWT, making it suitable, reliable, and convenient for real-time compression of DOFS data. Finally, we observed that differences in the DOFS data structure have some influence on both the compression ratio and the computational cost.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing the raster image. The second is based on predicting noise and blur parameters from analysis of the RAW image, under quite general assumptions about the transformations the image will undergo at later processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and shown to provide more than a two-fold increase in average CR compared to SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
NASA Technical Reports Server (NTRS)
Novik, Dmitry A.; Tilton, James C.
1993-01-01
The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.
Subband/Transform MATLAB Functions For Processing Images
NASA Technical Reports Server (NTRS)
Glover, D.
1995-01-01
SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions can be cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems; for example, transforms used to prepare data for lossy compression. Written for use in the MATLAB mathematical-analysis environment.
Surmounting the Effects of Lossy Compression on Steganography
1996-10-01
Information can be concealed in bitmapped image files with little or no visible degradation of the image; this process, called steganography, can be exploited to export sensitive information. Data well suited to steganography is that which is stored with an accuracy far greater than necessary for the data's use and display; image, PostScript, and audio files are examples. Since images are frequently compressed for storage or transmission, effective steganography must contend with lossy compression.
Zender, Charles S.
2016-09-19
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
Hyperspectral IASI L1C Data Compression.
García-Sobrino, Joaquín; Serra-Sagristà, Joan; Bartrina-Rapesta, Joan
2017-06-16
The Infrared Atmospheric Sounding Interferometer (IASI), implemented on the MetOp satellite series, represents a significant step forward in atmospheric forecast and weather understanding. The instrument provides infrared soundings of unprecedented accuracy and spectral resolution to derive humidity and atmospheric temperature profiles, as well as some of the chemical components playing a key role in climate monitoring. IASI collects rich spectral information, which results in large amounts of data (about 16 gigabytes per day). Efficient compression techniques are required for both transmission and storage of such a large data volume. This study reviews the performance of several state-of-the-art coding standards and techniques for IASI L1C data compression. The discussion covers lossless, near-lossless and lossy compression. Several spectral transforms, essential to achieve improved coding performance due to the high spectral redundancy inherent to IASI products, are also discussed. Illustrative results are reported for a set of 96 IASI L1C orbits acquired over a full year (4 orbits per month for each of IASI-A and IASI-B from July 2013 to June 2014). Further, this survey provides organized data and facts to assist future research and the atmospheric scientific community.
DOT National Transportation Integrated Search
2012-10-01
In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channe...
[A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].
Zhao, An; Wu, Baoming
2006-12-01
This paper presents an ECG compression algorithm based on wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside of the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are important coefficients and are kept. Otherwise, the energy loss in the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside of the ROI is then determined according to this loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside of the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database were tested, and satisfactory results in terms of clinical information preservation, quality, and compression ratio were obtained.
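A rough sketch of the keep-ROI / threshold-elsewhere logic is given below, assuming the PyWavelets package; the mapping of the time-domain ROI mask onto each decomposition level is a crude nearest-neighbour approximation, and the threshold is fixed rather than derived from a target PRDBE, so this illustrates the structure of the coder rather than reproducing it.

```python
# Rough sketch of ROI-aware wavelet thresholding for an ECG beat: coefficients
# whose (approximate) support overlaps the ROI are kept exactly; the rest are
# zeroed when below a threshold. Uses PyWavelets; the per-level ROI mask is a
# crude resampling of the time-domain mask, not the paper's exact bookkeeping.
import numpy as np
import pywt

def roi_threshold(signal, roi_mask, wavelet="db4", level=4, threshold=0.05):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    kept = [coeffs[0]]                                  # keep approximation coefficients
    for detail in coeffs[1:]:
        idx = np.linspace(0, roi_mask.size - 1, detail.size).astype(int)
        level_roi = roi_mask[idx].astype(bool)          # ROI mask at this level
        kept.append(np.where(level_roi | (np.abs(detail) >= threshold), detail, 0.0))
    return pywt.waverec(kept, wavelet)[: signal.size]

# Toy usage: a synthetic beat with a narrow "QRS-like" spike marked as the ROI.
t = np.linspace(0.0, 1.0, 512)
ecg = 0.1 * np.sin(2 * np.pi * 3 * t) + np.exp(-((t - 0.5) ** 2) / 1e-4)
roi = (np.abs(t - 0.5) < 0.05).astype(float)
recon = roi_threshold(ecg, roi)
print(np.max(np.abs(recon - ecg)))   # overall reconstruction error after thresholding
```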
Data Compression Techniques for Advanced Space Transportation Systems
NASA Technical Reports Server (NTRS)
Bradley, William G.
1998-01-01
Advanced space transportation systems, including vehicle state of health systems, will produce large amounts of data which must be stored on board the vehicle and or transmitted to the ground and stored. The cost of storage or transmission of the data could be reduced if the number of bits required to represent the data is reduced by the use of data compression techniques. Most of the work done in this study was rather generic and could apply to many data compression systems, but the first application area to be considered was launch vehicle state of health telemetry systems. Both lossless and lossy compression techniques were considered in this study.
Technology Directions for the 21st Century. Volume 4
NASA Technical Reports Server (NTRS)
Crimi, Giles; Verheggen, Henry; Botta, Robert; Paul, Heywood; Vuong, Xuyen
1998-01-01
Data compression is an important tool for reducing the bandwidth of communications systems, and thus for reducing the size, weight, and power of spacecraft systems. For data requiring lossless transmissions, including most science data from spacecraft sensors, small compression factors of two to three may be expected. Little improvement can be expected over time. For data that is suitable for lossy compression, such as video data streams, much higher compression factors can be expected, such as 100 or more. More progress can be expected in this branch of the field, since there is more hidden redundancy and many more ways to exploit that redundancy.
Prediction of compression-induced image interpretability degradation
NASA Astrophysics Data System (ADS)
Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen
2018-04-01
Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.
Optimal color coding for compression of true color images
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-11-01
In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do this, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at practical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
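The KL-basis projection amounts to an eigen-decomposition of the inter-channel covariance, as in the compact numpy sketch below; the fast low-memory basis construction and the optimal loss allocation from the paper are not reproduced here.

```python
# Minimal sketch: project the color planes of a multispectral image onto the
# Karhunen-Loeve (KL) basis of their inter-channel covariance, yielding
# decorrelated planes that can then be compressed independently.
import numpy as np

def kl_transform(image: np.ndarray):
    """image: (H, W, C) array. Returns (KL planes, basis, channel means)."""
    h, w, c = image.shape
    x = image.reshape(-1, c).astype(np.float64)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    _, basis = np.linalg.eigh(cov)             # eigenvectors, ascending eigenvalues
    basis = basis[:, ::-1]                     # strongest component first
    planes = (x - mean) @ basis
    return planes.reshape(h, w, c), basis, mean

def inverse_kl(planes: np.ndarray, basis: np.ndarray, mean: np.ndarray):
    h, w, c = planes.shape
    return (planes.reshape(-1, c) @ basis.T + mean).reshape(h, w, c)

# Toy usage on a random 4-plane (CMYK-like) image; the transform is exactly invertible.
img = np.random.rand(64, 64, 4)
planes, basis, mean = kl_transform(img)
assert np.allclose(inverse_kl(planes, basis, mean), img)
```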
Data compression for near Earth and deep space to Earth transmission
NASA Technical Reports Server (NTRS)
Erickson, Daniel E.
1991-01-01
Key issues raised by the discussion group on data compression for near-Earth and deep-space-to-Earth transmission are briefly presented. Specific recommendations made by the group are as follows: (1) since data compression is a cost-effective way to improve communications and storage capacity, NASA should use lossless data compression wherever possible; (2) NASA should conduct experiments and studies on the value and effectiveness of lossy data compression; (3) NASA should develop and select approaches to high-ratio compression of operational data such as voice and video; (4) NASA should develop data compression integrated circuits for a few key approaches identified in the preceding recommendation; (5) NASA should examine new data compression approaches, such as combining source and channel encoding, where high-payoff gaps are identified in currently available schemes; and (6) users and developers of data compression technologies should be in closer communication within NASA and with academia, industry, and other government agencies.
A block-based JPEG-LS compression technique with lossless region of interest
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, because of its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include all or part of the region of interest (ROI) and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
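A rough sketch of the block-wise ROI strategy described above: blocks that overlap the ROI are coded losslessly, all other blocks near-losslessly. The function `jpegls_encode(block, near)` and the block size are hypothetical placeholders standing in for a real JPEG-LS codec, not an actual API.

```python
# Sketch: choose lossless (near=0) or near-lossless coding per block.
import numpy as np

BLOCK = 64  # assumed block size in pixels

def encode_with_roi(image, roi_mask, jpegls_encode, near=2):
    """image: 2-D array; roi_mask: boolean array of the same shape."""
    h, w = image.shape
    streams = []
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = image[y:y+BLOCK, x:x+BLOCK]
            in_roi = roi_mask[y:y+BLOCK, x:x+BLOCK].any()
            # near=0 -> lossless JPEG-LS; near>0 -> near-lossless tolerance
            streams.append(((y, x), jpegls_encode(block, near=0 if in_roi else near)))
    return streams
```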
Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics
de Santos Sierra, Alberto; Ávila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador
2011-01-01
This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out using a publicly available synthetic database with 408,000 hand images against different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods from the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage. PMID:22247658
Gaussian multiscale aggregation applied to segmentation in hand biometrics.
de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador
2011-01-01
This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out using a publicly available synthetic database with 408,000 hand images against different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods from the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
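A minimal sketch of block-wise Chebyshev compression of a one-dimensional stream, as described above: each fitting interval is replaced by a short vector of Chebyshev coefficients. The block length and polynomial degree are assumptions for illustration, not the values used in the NASA algorithm.

```python
# Block-wise Chebyshev fit/reconstruct for a 1-D data stream (numpy).
import numpy as np
from numpy.polynomial import Chebyshev

def compress_blocks(samples, block_len=256, degree=8):
    coeffs = []
    x = np.arange(block_len)
    for start in range(0, len(samples) - block_len + 1, block_len):
        block = samples[start:start + block_len]
        fit = Chebyshev.fit(x, block, deg=degree)   # least-squares Chebyshev fit
        coeffs.append(fit.coef)                     # degree+1 numbers per block
    return np.array(coeffs)

def decompress_blocks(coeffs, block_len=256):
    x = np.arange(block_len)
    blocks = [Chebyshev(c, domain=[0, block_len - 1])(x) for c in coeffs]
    return np.concatenate(blocks)
```

With these assumed parameters, 256 samples are represented by 9 coefficients per block, i.e. a compression factor of roughly 28 before any further entropy coding.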
JPEG 2000-based compression of fringe patterns for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter
2014-12-01
With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurement, vibration analysis, data encoding, and certification. Designing an efficient data representation technology is therefore of particular importance. Off-axis holograms have very different signal properties from regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.
Lossless compression of image data products on the FIFE CD-ROM series
NASA Technical Reports Server (NTRS)
Newcomer, Jeffrey A.; Strebel, Donald E.
1993-01-01
How do you store enough of the key data sets, from a total of 120 gigabytes of data collected for a scientific experiment, on a collection of CD-ROMs small enough to distribute to a broad scientific community? In such an application, where information loss is unacceptable, lossless compression algorithms are the only choice. Although lossy compression algorithms can provide an order-of-magnitude improvement in compression ratios over lossless algorithms, the information that is lost is often part of the key scientific precision of the data. Therefore, lossless compression algorithms are and will continue to be extremely important in minimizing archival storage requirements and enabling distribution of large Earth and space science (ESS) data sets while preserving the essential scientific precision of the data.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
Boiler: lossy compression of RNA-seq alignments using coverage vectors
Pritt, Jacob; Langmead, Ben
2016-01-01
We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler. PMID:27298258
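A toy illustration of the coverage-vector idea at the heart of the abstract above: per-read alignments are reduced to a single per-base coverage track. This is not Boiler's file format, only the core data reduction it describes; the interval representation is an assumption.

```python
# Build a per-base coverage vector from (start, end) alignment intervals.
import numpy as np

def coverage_vector(alignments, genome_length):
    """alignments: iterable of (start, end) half-open, 0-based intervals."""
    cov = np.zeros(genome_length, dtype=np.int32)
    for start, end in alignments:
        cov[start] += 1           # difference-array trick:
        if end < genome_length:   # +1 at start, -1 just past the end
            cov[end] -= 1
    return np.cumsum(cov)         # prefix sum gives per-base coverage

# Example: two overlapping reads on a 10-base reference.
print(coverage_vector([(0, 5), (3, 8)], 10))
```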
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
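A hedged sketch of the template-comparison step described above: only blocks whose checksum differs from the stored template are kept for transmission. The block size, hash choice, and function names are assumptions; the patent's broadcast and rsync-style machinery is not shown.

```python
# Compare a node's checkpoint against template checksums, block by block.
import hashlib

BLOCK_SIZE = 4096  # assumed block size in bytes

def delta_blocks(checkpoint: bytes, template_checksums: list) -> dict:
    """Return {block_index: block_bytes} for blocks differing from the template."""
    changed = {}
    for i in range(0, len(checkpoint), BLOCK_SIZE):
        block = checkpoint[i:i + BLOCK_SIZE]
        digest = hashlib.md5(block).hexdigest()
        idx = i // BLOCK_SIZE
        if idx >= len(template_checksums) or template_checksums[idx] != digest:
            changed[idx] = block   # only changed blocks need to be stored/sent
    return changed
```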
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
Hyperspectral image compressing using wavelet-based method
NASA Astrophysics Data System (ADS)
Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng
2017-10-01
Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio but with a significant degradation of the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information in the acquired hyperspectral data cube. The proposed method consists of three main steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
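A sketch of the band cross-correlation step outlined above: compute the correlation matrix between spectral bands and split the band ordering where adjacent-band correlation drops below a threshold. The threshold and the simple adjacent-band grouping rule are assumptions for illustration only.

```python
# Group spectral bands into subspaces using the band correlation matrix.
import numpy as np

def band_subspaces(cube, threshold=0.9):
    """cube: H x W x B hyperspectral array; returns lists of band indices."""
    h, w, b = cube.shape
    bands = cube.reshape(-1, b).astype(np.float64)
    corr = np.corrcoef(bands, rowvar=False)          # B x B band correlation matrix
    groups, current = [], [0]
    for k in range(1, b):
        if corr[k - 1, k] >= threshold:
            current.append(k)                        # still strongly correlated
        else:
            groups.append(current)                   # correlation break: new subspace
            current = [k]
    groups.append(current)
    return groups
```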
JPEG2000 Image Compression on Solar EUV Images
NASA Astrophysics Data System (ADS)
Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke
2017-01-01
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
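The study above compares compressed and original images with PSNR and MSSIM; a minimal PSNR implementation of the kind used in such comparisons is sketched below. The data-range handling is an assumption, and MSSIM would typically come from a library such as scikit-image rather than be hand-written.

```python
# Peak signal-to-noise ratio between an original and a compressed image.
import numpy as np

def psnr(original, compressed, data_range=None):
    original = original.astype(np.float64)
    compressed = compressed.astype(np.float64)
    if data_range is None:
        data_range = original.max() - original.min()
    mse = np.mean((original - compressed) ** 2)
    if mse == 0:
        return np.inf                      # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```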
Lossy Wavefield Compression for Full-Waveform Inversion
NASA Astrophysics Data System (ADS)
Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.
2015-12-01
We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool for solving tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of the forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to checkpointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. We also use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean demonstrate the high potential of this approach, with an effective compression factor of 500-1000. Moreover, it is computationally cheap and easy to integrate into both finite-difference and finite-element wave propagation codes.
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene
2000-01-01
Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, Auto scanning density range was superior to Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as well as a compression ratio at or below approximately 46:1. Auto range density should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.
A Proposal for Kelly Criterion-Based Lossy Network Compression
2016-03-01
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors; computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition; and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and therefore image compression methods will play a significant role in image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the development of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.
An image compression algorithm for a high-resolution digital still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew
2016-12-15
Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that they can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format, and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM.
Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.
Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume
2015-09-14
Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method for compressing data produced by high-throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring k-mer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing pertinent information for downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq, and metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole-genome sequencing dataset, LEON divided the original file size by more than 20. LEON is open source software, distributed under the GNU Affero GPL License, available for download at http://gatb.inria.fr/software/leon/.
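A toy Bloom filter holding the k-mers of a probabilistic de Bruijn graph, to illustrate the data structure LEON builds; the filter size, hash choice, and k are assumptions, and the read-encoding step (anchoring k-mer plus bifurcation list) is omitted.

```python
# Minimal Bloom filter over k-mers extracted from a set of reads.
import hashlib

class KmerBloom:
    def __init__(self, n_bits=1 << 20, n_hashes=3):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, kmer):
        for i in range(self.n_hashes):
            h = hashlib.sha1(f"{i}:{kmer}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, kmer):
        for p in self._positions(kmer):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, kmer):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(kmer))

def index_reads(reads, k=31):
    bloom = KmerBloom()
    for read in reads:
        for j in range(len(read) - k + 1):
            bloom.add(read[j:j + k])     # insert every k-mer of every read
    return bloom
```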
A simple and efficient algorithm operating with linear time for MCEEG data compression.
Titus, Geevarghese; Sudhakar, M S
2017-09-01
The popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantification of system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC) to achieve lossy to near-lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has a good recovery performance, with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n), with average encoding and decoding times per sample of 0.3 ms and 0.04 ms respectively. The performance of the algorithm is comparable with recent methods such as fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validate the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain-computer interfacing systems.
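A rough sketch of the integer/fractional split described above for SPC: the normalized multichannel EEG is separated into an integer part (to be spatially coded) and a fractional part (coded as pseudo-integers). The fractional scaling factor and the downstream entropy coding are assumptions, not part of the published scheme.

```python
# Split normalized MCEEG into integer and pseudo-integer fractional parts.
import numpy as np

def split_mceeg(samples, frac_bits=8):
    """samples: channels x time array, already normalized to a known range."""
    integer_part = np.floor(samples).astype(np.int32)
    fractional = samples - integer_part
    # fractional part quantized to pseudo-integers with frac_bits of precision
    pseudo_int = np.round(fractional * (2 ** frac_bits)).astype(np.int32)
    return integer_part, pseudo_int

def merge_mceeg(integer_part, pseudo_int, frac_bits=8):
    return integer_part + pseudo_int / (2 ** frac_bits)
```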
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6) and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
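A sketch of the sparsification step using PyWavelets, assuming DAUB6 corresponds to the 'db3' wavelet (three vanishing moments, six filter taps): zero the smallest 95 percent of wavelet coefficients and reconstruct the DEM. This illustrates the thresholding idea only, not the paper's full processing chain.

```python
# Zero the smallest 95% of DWT coefficients of a DEM and reconstruct it.
import numpy as np
import pywt

def sparsify_dem(dem, wavelet="db3", keep_fraction=0.05):
    coeffs = pywt.wavedec2(dem.astype(np.float64), wavelet)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr[np.abs(arr) < threshold] = 0.0          # null the smallest coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)[: dem.shape[0], : dem.shape[1]]
```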
Shulkind, Gal; Nazarathy, Moshe
2012-12-17
We present an efficient method for system identification (nonlinear channel estimation) of the third-order nonlinear Volterra Series Transfer Function (VSTF) characterizing the four-wave-mixing nonlinear process over a coherent OFDM fiber link. Despite the seemingly large number of degrees of freedom in the VSTF (cubic in the number of frequency points), we identified a compressed VSTF representation which does not entail loss of information. Additional slightly lossy compression may be obtained by discarding very low power VSTF coefficients associated with regions of destructive interference in the FWM phased array effect. Based on this two-staged compressed VSTF representation, we develop a robust and efficient algorithm for nonlinear system identification (optical performance monitoring) that estimates the VSTF by transmitting an extended training sequence over the OFDM link and performing just a matrix-vector multiplication at the receiver by a pseudo-inverse matrix which is pre-evaluated offline. For 512 (1024) frequency samples per channel, the VSTF measurement takes less than 1 (10) ms to complete, with a computational complexity of one real-valued multiply-add operation per time sample. Relative to a naïve exhaustive three-tone test, our algorithm is far more tolerant of ASE additive noise and its acquisition time is orders of magnitude faster.
MP3 compression of Doppler ultrasound signals.
Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W
2003-01-01
The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals.
Entropy reduction via simplified image contourization
NASA Technical Reports Server (NTRS)
Turner, Martin J.
1993-01-01
The process of contourization is presented which converts a raster image into a set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.
Boiler: lossy compression of RNA-seq alignments using coverage vectors.
Pritt, Jacob; Langmead, Ben
2016-09-19
We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler.
Wavelet compression of noisy tomographic images
NASA Astrophysics Data System (ADS)
Kappeler, Christian; Mueller, Stefan P.
1995-09-01
3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D PET is the large size of the data sets; a typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore are usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
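An illustrative sketch of quantizing floating-point pixels to scaled integers with one common form of dithering (subtractive dithering), in the spirit of the method described above; the quantization step q, the seeding scheme, and the absence of per-tile handling are all assumptions.

```python
# Quantize floats to integers with reproducible subtractive dithering.
import numpy as np

def quantize_dither(pixels, q, seed=0):
    rng = np.random.default_rng(seed)          # seed kept so dither is reproducible
    dither = rng.random(pixels.shape)          # uniform offsets in [0, 1)
    return np.round(pixels / q - 0.5 + dither).astype(np.int32)

def dequantize_dither(ints, q, seed=0):
    rng = np.random.default_rng(seed)
    dither = rng.random(ints.shape)
    return (ints + 0.5 - dither) * q           # subtract the same dither back out
```

Because the same pseudo-random offsets are regenerated on decompression, the quantization error stays bounded by half a step and is unbiased, which is what preserves the photometric statistics.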
Constitutive parameter measurements of lossy materials
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.
1989-01-01
The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented involving analytical analyses of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values for these parameters can be obtained through a traditional transmission technique which is examined from an error analysis standpoint. Alternate sample geometries are suggested for this technique to reduce sample tolerance requirements for accurate parameter determination. The performance for one alternate geometry is given.
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves; these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximating errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit-rates for lossy compression.
Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)
NASA Technical Reports Server (NTRS)
Schmalz, Tyler; Ryan, Jack
2011-01-01
Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of A-GCAS during flight as well as maximizing its contribution to fighter safety.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
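The two distortion criteria discussed above, computed for a reference signal x and its reconstruction x_hat. One common definition of PRD (without mean removal) is assumed; PRD is relative (percent), whereas RMSE is expressed in the signal's own units, which is what makes it easier to compare against clinical noise limits.

```python
# PRD and RMSE between an original EEG segment and its reconstruction.
import numpy as np

def prd(x, x_hat):
    x = np.asarray(x, dtype=np.float64)
    x_hat = np.asarray(x_hat, dtype=np.float64)
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def rmse(x, x_hat):
    x = np.asarray(x, dtype=np.float64)
    x_hat = np.asarray(x_hat, dtype=np.float64)
    return np.sqrt(np.mean((x - x_hat) ** 2))
```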
Effects of compression and individual variability on face recognition performance
NASA Astrophysics Data System (ADS)
McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.
2004-08-01
The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both for automated FR systems and human inspectors. Working within the M1-Biometrics Technical Committee of the InterNational Committee for Information Technology Standards (INCITS) organization, a standard face image format will be tested and submitted to organizations such as ICAO.
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D
2004-01-01
Tele-echocardiography is not widely used because of lengthy transmission times when using standard Moving Picture Experts Group (MPEG)-2 lossy compression algorithms, unless expensive high-bandwidth lines are used. We sought to validate the newer MPEG-4 algorithms to allow further reduction in echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and three MPEG-4 compression algorithms were tested, the latter at three decreasing compression quality levels (100%, 65% and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the three compression levels using uncompressed files as controls. File sizes decreased from a range of 12-83 MB uncompressed to 0.03-2.3 MB with MPEG-4. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p=.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through use of low-bandwidth lines.
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of MRF and LERC
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details on a solution that utilizes new open image formats for storage and access to geospatial imagery optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method that can be used with MRF and provides very good lossless and controlled lossy compression.
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
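A generic illustration of the fixed-rate block idea: each 4x4 block of a 2-D array (4^d values for d = 2) is stored with the same number of bits, here as a per-block scale plus fixed-width integers. This is a simplified stand-in under assumed parameters, not the lifted orthogonal transform and embedded coding the abstract describes.

```python
# Fixed-size encoding of a 4x4 floating-point block: one scale + 16 integers.
import numpy as np

def encode_block(block, bits_per_value=8):
    scale = np.max(np.abs(block))
    if scale == 0:
        return 0.0, np.zeros(block.size, dtype=np.int32)
    levels = (1 << (bits_per_value - 1)) - 1
    q = np.round(block.flatten() / scale * levels).astype(np.int32)
    return scale, q                                  # same storage cost for every block

def decode_block(scale, q, shape=(4, 4), bits_per_value=8):
    levels = (1 << (bits_per_value - 1)) - 1
    return (q.astype(np.float64) / levels * scale).reshape(shape)
```

Because every block occupies the same number of bits, the offset of any block in the compressed array can be computed directly, which is what enables random read/write access at block granularity.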
A no-reference image and video visual quality metric based on machine learning
NASA Astrophysics Data System (ADS)
Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy
2018-04-01
The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective estimations of quality is achieved by using a convolutional neural network trained on a large set of video sequence-subjective quality score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.
Applications of the JPEG standard in a medical environment
NASA Astrophysics Data System (ADS)
Wittenberg, Ulrich
1993-10-01
JPEG is a very versatile image coding and compression standard for single images. Medical images make higher demands on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes. The performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective time one has to wait for it, and so also reduce the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding leads to a degradation of the image quality; the amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as creation data or details of the imaging modality. It will therefore be an embedded coding format in standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to meet the requirements of the medical community.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
Compression strategies for LiDAR waveform cube
NASA Astrophysics Data System (ADS)
Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota
2015-01-01
Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of the surveyed areas. Unlike discrete-return data, which retain only a few strong returns, FWD generally keep the whole signal, at all times, regardless of signal intensity. Hence, FWD will have an increasingly well-deserved role in mapping and beyond, in the much-desired classification in the raw data format. Full-waveform systems currently perform only the recording of the waveform data at the acquisition stage; the return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge for wide use: much larger datasets compared to those from classical discrete-return systems. Besides requiring more storage space, the acquisition speed of FWD may also limit the pulse rate on systems that cannot store data fast enough, and thus reduce the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at achieving decreased storage while maintaining the maximum pulse rate of FWD systems. In our experiments, the waveform cube is compressed using classical methods for 2D imagery that are further tested to assess the feasibility of the proposed solution. The spatial distribution of airborne waveform data is irregular; however, the manner of FWD acquisition allows the organization of the waveforms in a regular 3D structure similar to familiar multi-component imagery, such as hyperspectral cubes or 3D volumetric tomography scans. This study presents the performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. A wide range of tests performed on real airborne datasets demonstrated the benefits of the JPEG-2000 standard, where high compression rates incur fairly small data degradation. In addition, a JPEG-2000-compliant compression implementation can be fast and thus used in real-time systems, as compressed data sequences can be formed progressively during waveform data collection. We conclude from our experiments that 2D image compression strategies are feasible and efficient, and might therefore be applied during acquisition on FWD sensors.
Compressing climate model simulations: reducing storage burden while preserving information
NASA Astrophysics Data System (ADS)
Hammerling, Dorit; Baker, Allison; Xu, Haiying; Clyne, John; Li, Samuel
2017-04-01
Climate models, which are run at high spatial and temporal resolutions, generate massive quantities of data. As our computing capabilities continue to increase, storing all of the generated data is becoming a bottleneck, which negatively affects scientific progress. It is thus important to develop methods for representing the full datasets by smaller compressed versions, which still preserve all the critical information and, as an added benefit, allow for faster read and write operations during analysis work. Traditional lossy compression algorithms, as for example used for image files, are not necessarily ideally suited for climate data. While visual appearance is relevant, climate data have additional critical features, such as extreme values and spatial and temporal gradients, that must be preserved. Developing alternative metrics to quantify information loss in a manner that is meaningful to climate scientists is an ongoing process still in its early stages. We will provide an overview of current efforts to develop such metrics to assess existing algorithms and to guide the development of tailored compression algorithms to address this pressing challenge.
NASA Astrophysics Data System (ADS)
Clunie, David A.
2000-05-01
Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1), and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both out-performed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality for which JPEG-LS did better (MG digital vendor A JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.
Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data
NASA Astrophysics Data System (ADS)
Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada
2009-08-01
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure spectral substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both the spectral and the spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experimental results are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada and with detailed ground-truth information available from U. S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
Cloud solution for histopathological image analysis using region of interest based compression.
Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana
2017-07-01
Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, along with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
Three dimensional range geometry and texture data compression with space-filling curves.
Chen, Xia; Zhang, Song
2017-10-16
This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
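A sketch of the core mapping, under assumptions the abstract leaves open (the standard index-to-(x, y) Hilbert conversion and an order-8 curve giving two 8-bit channels): a phase value normalized to [0, 1) is mapped to a position along the curve, and the resulting (x, y) pair becomes the two color channels, so nearby phase values land on nearby channel values and survive 2D image compression better.

```python
def hilbert_d2xy(order, d):
    """Convert index d on an order-n Hilbert curve to (x, y), 0 <= x, y < 2**order."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                              # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def phase_to_channels(phase, order=8):
    """Map a normalized phase in [0, 1) to two 8-bit channel values."""
    n_points = 1 << (2 * order)                  # 65536 curve positions for order 8
    d = min(int(phase * n_points), n_points - 1)
    return hilbert_d2xy(order, d)

# usage: two nearby phases map to nearby channel pairs
print(phase_to_channels(0.5000), phase_to_channels(0.5001))
```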
NASA Astrophysics Data System (ADS)
Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio
2012-03-01
The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images, and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded 100% and 96.43% classification rate for each data set, respectively. Recognition performance was additionally evaluated under effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.
Efficient and automatic wireless geohazard monitoring
NASA Astrophysics Data System (ADS)
Rubin, Marc J.
In this dissertation, we present our research contributions geared towards creating an automated and efficient wireless sensor network (WSN) for geohazard monitoring. Specifically, this dissertation addresses three overall technical research problems inherent in implementing and deploying such a WSN, i.e., 1) automated event detection from geophysical data, 2) efficient wireless transmission, and 3) low-cost wireless hardware. In addition, after presenting algorithms, experimentation, and results from these three overall problems, we take a step back and discuss how, when, and why such scientific work matters in a geohazardous risk scenario. First, in Chapter 2, we discuss automated geohazard event detection within geophysical data. In particular, we present our pattern recognition workflow that can automatically detect snow avalanche events in seismic (geophone sensor) data. This workflow includes customized signal preprocessing for feature extraction, cluster-based stratified sub-sampling for majority class reduction, and experimentation with 12 different machine learning algorithms; results show that a decision stump classifier achieved 99.8% accuracy, 88.8% recall, and 13.2% precision in detecting avalanches within seismic data collected in the mountains above Davos, Switzerland, an improvement on previous work in the field. To address the second overall research problem (i.e., efficient wireless transmission), we present and evaluate our on-mote compressive sampling algorithm called Randomized Timing Vector (RTV) in Chapter 3 and compare our approach to four other on-mote, lossy compression algorithms in Chapter 4. Results from our work show that our RTV algorithm outperforms current on-mote compressive sampling algorithms and performs comparably to (and in many cases better than) the four state-of-the-art, on-mote lossy compression techniques. The main benefit of RTV is that it can guarantee a desired level of compression performance (and thus, radio usage and power consumption) without sacrificing recovered signal quality. Another benefit of RTV is its simplicity and low computational overhead; by sampling directly in compressed form, RTV vastly decreases the amount of memory space and computation time required for on-mote compression. Third, in Chapter 5, we present and evaluate our custom, low-cost, Arduino-based wireless hardware (i.e., GeoMoteShield) developed for wireless seismic data acquisition. In particular, we first provide details regarding the motivation, design, and implementation of our custom GeoMoteShield and then compare our custom hardware against two much more expensive systems, i.e., a traditional wired seismograph and a "from-the-ground-up" wireless mote developed by SmartGeo colleagues. We validate our custom WSN of nine GeoMoteShields using controlled lab tests and then further evaluate the WSN's performance during two seismic field tests, i.e., a "walk-away" test and a seismic refraction survey. Results show that our low-cost, Arduino-based GeoMoteShield performs comparably to a much more expensive wired system and a "from the ground up" wireless mote in terms of signal precision, accuracy, and time synchronization. Finally, in Chapter 6, we provide a broad literature review and discussion of how, when, and why scientific work matters in geohazardous risk scenarios. This work is geared towards scientists conducting research within fields involving geohazard risk assessment and mitigation.
In particular, this chapter reviews three topics from Science, Technology, Engineering, and Policy (STEP): 1) risk, scientific uncertainty, and policy, 2) society's perceptions of risk, and 3) the effectiveness of risk communication. Though this chapter is not intended to be a comprehensive STEP literature survey, it addresses many pertinent questions and provides guidance to scientists and engineers operating in such fields. In short, this chapter aims to answer three main questions, i.e., 1) "when does scientific work influence policy decisions?", 2) "how does scientific work impact people's perception of risk?", and 3) "how is technical scientific work communicated to the non-scientific community?".
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of interested images by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
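A rough sketch of the three-step idea for a single method (JPEG via Pillow, with PSNR as the IQ metric; the quality range, quadratic regression model, and the input file "example.png" are assumptions, not the paper's setup): sweep the quality parameter, regress IQ against it, then invert the fit to pick the parameter for a requested IQ.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def fit_quality_model(img, qualities=range(10, 96, 5)):
    """Steps 1-2: compress at several JPEG quality settings and regress PSNR vs quality."""
    ref = np.asarray(img)
    points = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        points.append((q, psnr(ref, np.asarray(Image.open(buf)))))
    qs, iqs = zip(*points)
    return np.polyfit(qs, iqs, 2)                # quadratic model PSNR(quality)

def quality_for_target_psnr(coeffs, target):
    """Step 3: pick the lowest quality setting whose predicted PSNR meets the target."""
    for q in range(1, 101):
        if np.polyval(coeffs, q) >= target:
            return q
    return 100

img = Image.open("example.png").convert("L")     # hypothetical input image
model = fit_quality_model(img)
print(quality_for_target_psnr(model, target=40.0))
```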
Security of modified Ping-Pong protocol in noisy and lossy channel
Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu
2014-01-01
The “Ping-Pong” (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical. PMID:24816899
Security of modified Ping-Pong protocol in noisy and lossy channel.
Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu
2014-05-12
The "Ping-Pong" (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical.
NASA Astrophysics Data System (ADS)
Dehbashi, Reza; Shahabadi, Mahmoud
2013-12-01
The commonly used coordinate transformation for cylindrical cloaks is generalized. This transformation is utilized to determine the anisotropic, inhomogeneous, diagonal material tensors of a shell-type cloak for various material types, i.e., double-positive (DPS: ɛ, μ > 0), double-negative (DNG: ɛ, μ < 0), ɛ-negative (ENG), and μ-negative (MNG). To obtain conditions for perfect cloaking for the various material types, a rigorous analysis is performed. It is shown that perfect cloaking is achieved when the cloak and its surrounding medium are of the same material type. Moreover, material losses are included in the analysis to demonstrate that perfect cloaking for lossy materials can be achieved for identical loss tangents of the cloak and its surrounding material. The sensitivity of the cloaking performance to losses for different material types is also investigated. The obtained analytical results are verified using a finite-element computational analysis.
The compression and storage method of the same kind of medical images: DPCM
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong
2006-09-01
Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy; that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a compression method based on differential pulse code modulation (DPCM) is presented.
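A minimal DPCM sketch, assuming a previous-sample predictor and row-wise scanning (the abstract does not specify the predictor): each sample is predicted from its left neighbor and only the residual is kept, which concentrates energy into small values that a lossless entropy coder can compress well.

```python
import numpy as np

def dpcm_encode(row):
    """Previous-sample prediction: residual[i] = x[i] - x[i-1]."""
    row = np.asarray(row, dtype=np.int32)
    residuals = np.empty_like(row)
    residuals[0] = row[0]                        # first sample stored as-is
    residuals[1:] = row[1:] - row[:-1]
    return residuals

def dpcm_decode(residuals):
    """Invert the prediction with a running sum."""
    return np.cumsum(residuals)

row = np.array([100, 101, 103, 103, 102, 99], dtype=np.int32)
res = dpcm_encode(row)
assert np.array_equal(dpcm_decode(res), row)     # exact (lossless) round trip
print(res)                                       # small residuals, easy to entropy-code
```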
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
Fundamental study of compression for movie files of coronary angiography
NASA Astrophysics Data System (ADS)
Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie
2005-04-01
When network distribution of movie files is considered, lossy compression formats with small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: heart rates corresponding to slow, normal, and fast movies. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI format movies with different motion speeds. Five kinds of movies, the four kinds of compressed movies and the non-compressed AVI used in place of the DICOM format, were evaluated by Thurstone's method. The evaluation factors were "sharpness, granularity, contrast, and comprehensive evaluation." In the virtual bradycardia movie, AVI received the best evaluation for all factors except granularity. In the virtual normal movie, a different compression technique was best for each evaluation factor. In the virtual tachycardia movie, MPEG-1 received the best evaluation for all factors except contrast. Which compression format is best depends on the speed of the movie because of differences in the compression algorithms, which we attribute to the influence of inter-frame compression. Movie compression algorithms combine inter-frame compression with intra-frame compression. As each compression method influences the image differently, it is necessary to examine the relation between the compression algorithm and our results.
Edge-Based Image Compression with Homogeneous Diffusion
NASA Astrophysics Data System (ADS)
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
Onboard Data Compression of Synthetic Aperture Radar Data: Status and Prospects
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Moision, Bruce
2008-01-01
Synthetic aperture radar (SAR) instruments on spacecraft are capable of producing huge quantities of data. Onboard lossy data compression is commonly used to reduce the burden on the communication link. In this paper an overview is given of various SAR data compression techniques, along with an assessment of how much improvement is possible (and practical) and how to approach the problem of obtaining it. Synthetic aperture radar (SAR) instruments on spacecraft are capable of acquiring huge quantities of data. As a result, the available downlink rate and onboard storage capacity can be limiting factors in mission design for spacecraft with SAR instruments. This is true both for Earth-orbiting missions and missions to more distant targets such as Venus, Titan, and Europa. (Of course for missions beyond Earth orbit downlink rates are much lower and thus potentially much more limiting.) Typically spacecraft with SAR instruments use some form of data compression in order to reduce the storage size and/or downlink rate necessary to accommodate the SAR data. Our aim here is to give an overview of SAR data compression strategies that have been considered, and to assess the prospects for additional improvements.
NASA Technical Reports Server (NTRS)
Gabriel, Philip M.; Yeh, Penshu; Tsay, Si-Chee
2013-01-01
This paper presents results and analyses of applying an international space data compression standard to weather radar measurements that can easily span 8 orders of magnitude and typically require a large storage capacity as well as significant bandwidth for transmission. By varying the degree of the data compression, we analyzed the non-linear response of models that relate measured radar reflectivity and/or Doppler spectra to the moments and properties of the particle size distribution characterizing clouds and precipitation. Preliminary results for the meteorologically important phenomena of clouds and light rain indicate that for a 0.5 dB calibration uncertainty, typical for the ground-based pulsed-Doppler 94 GHz (or 3.2 mm, W-band) weather radar used as a proxy for spaceborne radar in this study, a lossless compression ratio of only 1.2 is achievable. However, further analyses of the non-linear response of various models of rainfall rate, liquid water content and median volume diameter show that a lossy data compression ratio exceeding 15 is realizable. The exploratory analyses presented are relevant to future satellite missions, where transmission bandwidth is at a premium and the storage requirements of vast volumes of data are potentially problematic.
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
One of the light field capturing techniques is focused plenoptic capturing. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image is usually of a large resolution. A coding scheme that removes the redundancy before coding can be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and with over 20 percent compared with a High Efficiency Video Coding block copying mode.
A novel shape-based coding-decoding technique for an industrial visual inspection system.
Mukherjee, Anirban; Chaudhuri, Subhasis; Dutta, Pranab K; Sen, Siddhartha; Patra, Amit
2004-01-01
This paper describes a unique single camera-based dimension storage method for image-based measurement. The system has been designed and implemented in one of the integrated steel plants of India. The purpose of the system is to encode the frontal cross-sectional area of an ingot. The encoded data will be stored in a database to facilitate the future manufacturing diagnostic process. The compression efficiency and reconstruction error of the lossy encoding technique have been reported and found to be quite encouraging.
Low complexity lossless compression of underwater sound recordings.
Johnson, Mark; Partan, Jim; Hurst, Tom
2013-03-01
Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
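The abstract does not give the exact algorithm, so the following is a generic sketch of the low-complexity family it belongs to, under assumed parameters: first-difference prediction followed by Rice (Golomb power-of-two) coding of zigzag-mapped residuals. The bit stream is a Python string for clarity; a real encoder would pack bits and store the parameter k in a header.

```python
import numpy as np

def zigzag(v):
    """Map signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(value, k):
    """Unary quotient + k-bit remainder (Golomb code with divisor 2**k, k >= 1)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def compress(samples, k=4):
    samples = np.asarray(samples, dtype=np.int64)
    residuals = np.diff(samples, prepend=0)      # first-difference predictor
    return "".join(rice_encode(zigzag(int(r)), k) for r in residuals)

# usage: a synthetic low-frequency 16-bit recording
samples = (1000 * np.sin(np.arange(2000) * 0.05)).astype(np.int16)
bits = compress(samples, k=4)
print("compression factor:", (len(samples) * 16) / len(bits))
```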
Transform coding for space applications
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit rate driven, with the goal of getting the best quality possible with a given bandwidth. Science applications are quality driven, with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple, integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications differ from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground) rather than vice versa. Energy compaction with the new transforms is compared with the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
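A sketch of energy compaction with one of the integer transforms mentioned above, the Walsh-Hadamard transform in its unnormalized butterfly form (the 8-sample block and the example values are assumptions): the transform needs only additions and subtractions, and for smooth blocks most of the energy lands in a few coefficients, which is what makes coarse quantization of the rest cheap.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized); len(x) must be a power of two."""
    x = np.asarray(x, dtype=np.int64).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b    # add/subtract butterfly only
        h *= 2
    return x

block = np.array([12, 13, 13, 14, 15, 15, 16, 16])   # a smooth 8-sample block
coeffs = fwht(block)
print(coeffs)                                    # energy concentrates in the first coefficients
print(fwht(coeffs) // len(block))                # the transform is its own inverse up to 1/N
```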
Wu, Chin H; Grant, Christopher V; Cook, Gabriel A; Park, Sang Ho; Opella, Stanley J
2009-09-01
A strip-shield inserted between a high-inductance double-tuned solenoid coil and the glass tube containing the sample improves the efficiency of probes used for high-field solid-state NMR experiments on lossy aqueous samples of proteins and other biopolymers. A strip-shield is a coil liner consisting of thin copper strips layered on a PTFE (polytetrafluoroethylene) insulator. With lossy samples, the shift in tuning frequency, the reduction in Q, and RF-induced heating are all significantly reduced when the strip-shield is present. The performance of 800 MHz (1)H/(15)N and (1)H/(13)C double-resonance probes is demonstrated on aqueous samples of membrane proteins in phospholipid bilayers.
Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les
2012-12-01
This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing numbers of detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindstrom, P; Cohen, J D
We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud based image storage and processing require re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be accessed efficiently using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
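A sketch of the controlled-lossy principle behind such codecs, not the LERC bit-stream format itself (the grid step and offset handling are assumptions): values are quantized to a uniform grid with step 2 x maxError, which guarantees every reconstructed value is within the user-specified tolerance while producing small integers that a lossless back end compresses well.

```python
import numpy as np

def quantize(values, max_error):
    """Uniform quantization with a guaranteed per-value error bound."""
    step = 2.0 * max_error
    offset = float(np.min(values))
    q = np.round((values - offset) / step).astype(np.int64)   # small ints for a lossless coder
    return q, offset, step

def dequantize(q, offset, step):
    return q * step + offset

elevations = np.random.uniform(200.0, 900.0, size=(256, 256))
q, offset, step = quantize(elevations, max_error=0.1)
rec = dequantize(q, offset, step)
print(np.max(np.abs(rec - elevations)) <= 0.1)   # True: error stays within the tolerance
```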
NASA Technical Reports Server (NTRS)
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To more efficiently transmit the data obtained by the Rice algorithm, it is required to meet the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and meets the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems, and (4) a tree based approach for data compression using dynamic programming.
System considerations for efficient communication and storage of MSTI image data
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1994-01-01
The Ballistic Missile Defense Organization has been developing the capability to evaluate one or more high-rate sensor/hardware combinations by incorporating them as payloads on a series of Miniature Seeker Technology Insertion (MSTI) flights. This publication represents the final report of a 1993 study to analyze the potential impact of data compression and of related communication system technologies on post-MSTI 3 flights. Lossless compression is considered alone and in conjunction with various spatial editing modes. Additionally, JPEG and Fractal algorithms are examined in order to bound the potential gains from the use of lossy compression, but lossless compression is clearly shown to better fit the goals of the MSTI investigations. Lossless compression factors of between 2:1 and 6:1 would provide significant benefits to both on-board mass memory and the downlink. For on-board mass memory, the savings could range from $5 million to $9 million. Such benefits should be possible by direct application of recently developed NASA VLSI microcircuits. It is shown that further downlink enhancements of 2:1 to 3:1 should be feasible through use of practical modifications to the existing modulation system and incorporation of Reed-Solomon channel coding. The latter enhancement could also be achieved by applying recently developed VLSI microcircuits.
Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks
NASA Astrophysics Data System (ADS)
Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong
2010-04-01
Many applications would require fast data transfer in high-speed wireless networks nowadays. However, due to its conservative congestion control algorithm, Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control, while the receiver is performing delay-based control. The receiver measures the network bandwidth based on the packet interarrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving the advertised value feedback from the receiver, the sender then uses the additive increase and multiplicative decrease (AIMD) mechanism to compute the correct congestion window size to be used. By integrating the loss-based and the delay-based congestion controls, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism can outperform conventional TCP in high-speed and lossy wireless environments.
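A toy sketch of the split control loop described above (the function names, window units, and the 20 Mbit/s, 40 ms example link are assumptions, not the paper's code): the receiver turns its bandwidth estimate into an advertised window, and the sender runs ordinary AIMD but never grows beyond that advertisement, so random wireless losses do not collapse the window far below capacity.

```python
def receiver_advertised_window(bandwidth_bps, rtt_s, packet_bytes=1460):
    """Receiver side: bandwidth-delay product expressed in packets."""
    return max(1, int(bandwidth_bps * rtt_s / (8 * packet_bytes)))

def sender_update(cwnd, advertised, loss_detected):
    """Sender side: AIMD, capped by the receiver's advertised window."""
    if loss_detected:
        cwnd = max(1.0, cwnd / 2.0)              # multiplicative decrease
    else:
        cwnd += 1.0                              # additive increase per RTT
    return min(cwnd, advertised)

# usage: a loss on a lossy link halves cwnd, but the cap keeps growth anchored to capacity
adv = receiver_advertised_window(20e6, 0.04)     # ~20 Mbit/s link, 40 ms RTT
cwnd = 10.0
for loss in [False, False, True, False, False]:
    cwnd = sender_update(cwnd, adv, loss)
    print(round(cwnd, 1), "of", adv)
```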
BASKET on-board software library
NASA Astrophysics Data System (ADS)
Luntzer, Armin; Ottensamer, Roland; Kerschbaum, Franz
2014-07-01
The University of Vienna is a provider of on-board data processing software with a focus on data compression, as used on board the highly successful Herschel/PACS instrument, as well as in the small BRITE-Constellation fleet of cube-sats. Current contributions are made to CHEOPS, SAFARI and PLATO. An effort was made to review the various functions developed for Herschel and provide a consolidated software library to facilitate the work for future missions. This library is a shopping basket of algorithms. Its contents are separated into four classes: auxiliary functions (e.g. circular buffers), preprocessing functions (e.g. for calibration), lossless data compression (arithmetic or Rice coding) and lossy reduction steps (ramp fitting etc.). The "BASKET" has all functionality that is needed to create an on-board data processing chain. All sources are written in C, supplemented by optimized versions in assembly, targeting popular CPU architectures for space applications. BASKET is open source and constantly growing.
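A sketch of one of the lossy reduction steps named above, ramp fitting, written here in Python rather than the library's C and under the assumption of a uniform read cadence: a sequence of non-destructive detector reads along a ramp is reduced to a least-squares slope (the signal rate) plus an intercept, which is the quantity actually kept.

```python
import numpy as np

def fit_ramp(reads, dt=1.0):
    """Reduce one ramp of detector reads to slope (signal rate) and intercept."""
    t = np.arange(len(reads)) * dt
    slope, intercept = np.polyfit(t, reads, 1)   # least-squares straight line
    return slope, intercept

# usage: 16 reads of a pixel accumulating ~5 counts per read plus noise
rng = np.random.default_rng(0)
reads = 100 + 5.0 * np.arange(16) + rng.normal(0, 1, 16)
print(fit_ramp(reads))                           # slope close to 5, intercept close to 100
```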
Robust Audio Watermarking by Using Low-Frequency Histogram
NASA Astrophysics Data System (ADS)
Xiang, Shijun
In continuation of earlier work, where the problem of time-scale modification (TSM) was studied [1] by modifying the shape of the audio time-domain histogram, here we consider the additional ingredient of resisting additive noise-like operations, such as Gaussian noise, lossy compression and low-pass filtering. In other words, we study the problem of making the watermark robust against both TSM and additive noise. To this end, in this paper we extract the histogram from a Gaussian-filtered low-frequency component for audio watermarking. The watermark is inserted by shaping the histogram: two consecutive bins are used as a group, and a bit is hidden by reassigning their populations. The watermarked signals are perceptibly similar to the original one. Compared with the previous time-domain watermarking scheme [1], the proposed watermarking method is more robust against additive noise, MP3 compression, low-pass filtering, etc.
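A sketch of the detection side only, under assumptions the abstract leaves open (bin width, synchronization, and the exact embedding rule): each pair of consecutive histogram bins of the low-frequency component carries one bit, read off from which of the two bins holds the larger population.

```python
import numpy as np

def extract_bits(samples, n_pairs=16, lo=-1.0, hi=1.0):
    """Read one bit from each pair of consecutive bins of the low-frequency histogram."""
    counts, _ = np.histogram(samples, bins=2 * n_pairs, range=(lo, hi))
    return [1 if counts[2 * k] >= counts[2 * k + 1] else 0 for k in range(n_pairs)]

# usage on a synthetic low-frequency audio component
rng = np.random.default_rng(1)
audio_lf = np.clip(rng.normal(0.0, 0.3, 100000), -1.0, 1.0)
print(extract_bits(audio_lf))
```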
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pingenot, J; Rieben, R; White, D
2005-10-31
We present a computational study of signal propagation and attenuation of a 200 MHz planar loop antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The numerical technique is first verified against theoretical results for a planar loop antenna in a smooth lossy cave. The simulation is then performed for a series of random rough surface meshes in order to generate statistical data for the propagation and attenuation properties of the antenna in a cave environment. Results for the mean and variance of the power spectral density of the electric field are presented and discussed.
Hasar, U C
2009-05-01
A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of the complex permittivity of lossy liquid materials has been proposed. The system comprises two main sections--microwave and electronic. While the microwave section provides for measuring only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and only requires microwave components to perform reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge of the sample thickness. In order to reduce systematic errors in the system, we propose a simple calibration technique, which employs simple and readily available standards. The measurement system can be a good candidate for industrial applications.
Clunie, David A; Gebow, Dan
2015-01-01
Deidentification of medical images requires attention to both header information as well as the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and if necessary, pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently, hence, redaction can selectively be confined only to those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images is described. The process can be applied either to standalone JPEG images or JPEG bit streams encapsulated in other formats, which in the case of medical images, is usually DICOM.
Efficient transmission of compressed data for remote volume visualization.
Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S
2006-09-01
One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.
A radio-aware routing algorithm for reliable directed diffusion in lossy wireless sensor networks.
Kim, Yong-Pyo; Jung, Euihyun; Park, Yong-Jin
2009-01-01
In Wireless Sensor Networks (WSNs), transmission errors occur frequently due to node failure, battery discharge, contention or interference by objects. Although Directed Diffusion has been considered a prominent data-centric routing algorithm, it has some weaknesses under unexpected network errors. In order to address these problems, we propose a radio-aware routing algorithm to improve the reliability of Directed Diffusion in lossy WSNs. The proposed algorithm is aware of the network status based on radio information from the MAC and PHY layers using a cross-layer design. The cross-layer design can be used to get detailed information about the current status of the wireless network, such as link quality or transmission errors of communication links. The radio information indicating variant network conditions and link quality was used to determine an alternative route that provides reliable data transmission in lossy WSNs. According to the simulation results, the radio-aware reliable routing algorithm showed better performance in both grid and random topologies with various error rates. The proposed solution suggests the possibility of providing a reliable transmission method for QoS requests in lossy WSNs based on radio-awareness. Energy and mobility issues will be addressed in future work.
Towards Holography via Quantum Source-Channel Codes.
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-14
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch
2011-01-01
In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable to identify the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked in regions of non interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. If any, local modifications are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to one classifier we learned to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performances of our approach.
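A sketch of the signature ingredient only (the 16x16 block size and the set of moment orders are assumptions; the watermark embedding and comparison steps are omitted): low-order geometric moments m_pq = sum_x sum_y x^p y^q I(x, y) are computed per pixel block and concatenated into a feature vector.

```python
import numpy as np

def block_moments(block, orders=((0, 0), (1, 0), (0, 1), (1, 1))):
    """Geometric moments m_pq of one pixel block."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    return [float(np.sum((x ** p) * (y ** q) * block)) for p, q in orders]

def image_signature(image, block=16):
    """Concatenate per-block moments over the region of interest."""
    sig = []
    for i in range(0, image.shape[0] - block + 1, block):
        for j in range(0, image.shape[1] - block + 1, block):
            sig.extend(block_moments(image[i:i + block, j:j + block].astype(float)))
    return np.array(sig)

img = np.random.randint(0, 256, (64, 64))
print(image_signature(img).shape)                # 4 moments per 16x16 block -> 64 values
```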
Zhu, Yanmei; Witt, Rachel E.; MacCallum, Julia K.; Jiang, Jack J.
2010-01-01
Objective: In this study, a Voice over Internet Protocol (VoIP) communication system based on the G.729 protocol was simulated to determine the effects of this system on acoustic perturbation parameters of normal and pathological voice signals. Patients and Methods: Fifty recordings of normal voice and 48 recordings of pathological voice affected by laryngeal paralysis were transmitted through a VoIP communication system. The acoustic analysis programs CSpeech and MDVP were used to determine the percent jitter and percent shimmer of the voice samples before and after VoIP transmission. The effects of three frequently used audio compression protocols (MP3, WMA, and FLAC) on the perturbation measures were also studied. Results: It was found that VoIP transmission disrupts the waveform and increases the percent jitter and percent shimmer of voice samples. However, after VoIP transmission, significant discrimination between normal voices and pathological voices affected by laryngeal paralysis was still possible. It was found that the lossless compression method FLAC does not exert any influence on the perturbation measures, whereas the lossy compression methods MP3 and WMA increase percent jitter and percent shimmer values. Conclusion: This study validates the feasibility of these transmission and compression protocols in developing remote voice signal data collection and assessment systems. PMID:20588051
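For reference, percent jitter and percent shimmer are classically defined as the mean absolute cycle-to-cycle perturbation of period and peak amplitude, relative to their means. The sketch below implements these textbook definitions in Python; it is not the CSpeech or MDVP implementation, and the cycle data are made up.

    # Textbook definitions of percent jitter and percent shimmer (made-up cycle data).
    import numpy as np

    def percent_jitter(periods):
        """Mean absolute difference of consecutive cycle periods, relative to the mean period."""
        periods = np.asarray(periods, dtype=float)
        return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

    def percent_shimmer(amplitudes):
        """Same local perturbation measure applied to cycle peak amplitudes."""
        amplitudes = np.asarray(amplitudes, dtype=float)
        return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

    if __name__ == "__main__":
        T = [8.0e-3, 8.1e-3, 7.9e-3, 8.05e-3]   # cycle periods in seconds (illustrative)
        A = [1.00, 0.97, 1.02, 0.99]            # cycle peak amplitudes (illustrative)
        print(percent_jitter(T), percent_shimmer(A))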
Shu, Shiwei; Zhan, Yawen; Lee, Chris; Lu, Jian; Li, Yang Yang
2016-01-01
Absorbers are important components in various optical devices. Here we report a novel type of asymmetric absorber for the visible and near-infrared spectrum based on lossy Bragg stacks. The lossy Bragg stacks can achieve near-perfect absorption at one side and high reflection at the other within the narrow bands (several nm) of the resonance wavelengths, while displaying almost identical absorption/reflection responses over the rest of the spectrum. This wavelength-selective asymmetric absorption behavior persists over wide angles, does not depend on polarization, and can be ascribed to the lossy characteristics of the Bragg stacks. Moreover, Fano resonances with easily tailorable peak profiles can be realized using the lossy Bragg stacks. PMID:27251768
Antenna pattern control using impedance surfaces
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Liu, Kefeng
1992-01-01
During this research period, we effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserves the accuracy of the numerical computations while giving a much better turn-around time than the CRAY supercomputer. This task relieved us of heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.
Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...
2010-08-27
Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green’s functions is characterized by an infinite tail. This implies that the cost of the spatio-temporal convolutions associated with evaluating the potentials scales as O(N_s^2 N_t^2), where N_s and N_t are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics and uses the fast Fourier transform (FFT) for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(N_s N_t log^2 N_t). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
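The first scheme rests on the standard observation that a discrete temporal convolution with a long-tailed kernel can be evaluated with the FFT far faster than by direct summation. The toy Python sketch below illustrates only that building block on scalar histories; the actual methods operate on ACE harmonics and use blocked, causal variants.

    # Toy comparison of direct vs. FFT evaluation of a temporal convolution with a
    # slowly decaying kernel (scalar histories only; real schemes act on ACE harmonics).
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(1)
    Nt = 2048
    g = 1.0 / np.sqrt(np.arange(1, Nt + 1))        # long-tailed kernel (toy choice)
    f = rng.standard_normal(Nt)                    # source time history (toy)

    direct = np.array([np.dot(g[:k + 1][::-1], f[:k + 1]) for k in range(Nt)])  # O(Nt^2)
    fast = fftconvolve(f, g)[:Nt]                                               # O(Nt log Nt)
    print(np.max(np.abs(direct - fast)))           # agreement to round-off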
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
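A minimal sketch of the first (band-independent) approach is given below, assuming the PyWavelets package is available; each spectral band is wavelet-transformed and the smallest coefficients are discarded. The paper's coder uses quantization and entropy coding rather than this simple thresholding, so this is only an illustration of the structure of the approach.

    # Band-by-band 2D wavelet compression sketch (PyWavelets assumed available).
    import numpy as np
    import pywt

    def compress_band(band, wavelet="db4", level=3, keep=0.05):
        """Transform one spectral band, keep the largest `keep` fraction of
        coefficients, and reconstruct (lossy)."""
        coeffs = pywt.wavedec2(band, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr[np.abs(arr) < thresh] = 0.0
        rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                            wavelet)
        return rec[:band.shape[0], :band.shape[1]]  # waverec2 may pad by one sample

    if __name__ == "__main__":
        cube = np.random.default_rng(2).random((64, 64, 32))        # x, y, band (toy)
        recon = np.stack([compress_band(cube[:, :, b]) for b in range(cube.shape[2])],
                         axis=2)
        print(np.sqrt(np.mean((cube - recon) ** 2)))                # reconstruction RMSE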
NASA Astrophysics Data System (ADS)
Mukherjee, Bijoy K.; Metia, Santanu
2009-10-01
The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional-order PID (PI^λ D^μ) controllers, and to the Genetic Algorithm (GA). In the second part, it is first studied how the performance of an integer-order PID controller deteriorates when implemented with lossy capacitors in its analog realization. It is then shown that the lossy capacitors can be effectively modeled by fractional-order terms, and a novel GA-based method is proposed to tune the controller parameters such that the original performance is retained even when the controller is realized with the same lossy capacitors. Simulation results are presented to validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for the design of fractional-order PID controllers have been proposed in the literature [11]. In the third part, a novel GA-based method is proposed that shows how equivalent integer-order PID controllers can be obtained which give performance similar to that of the fractional-order PID controllers, thereby removing the complexity involved in the implementation of the latter. It is shown with extensive simulation results that the equivalent integer-order PID controllers largely retain the robustness and iso-damping properties of the original fractional-order PID controllers. Simulation results also show that the equivalent integer-order PID controllers are more robust than conventionally Ziegler-Nichols-tuned PID controllers.
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation of these factors across wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
Xue, Bing; Qu, Xiaodong; Fang, Guangyou; Ji, Yicai
2017-01-01
In this paper, methods and analysis for estimating the three-dimensional (3-D) location of a single source buried in a lossy medium are presented using a uniform circular array (UCA). A mathematical model of the signal in the lossy medium is proposed. Using the information in the covariance matrix obtained from the sensors’ outputs, equations for the source location (azimuth angle, elevation angle, and range) are derived. The phase and amplitude of the covariance matrix function are then used to perform source localization in the lossy medium. By analyzing the characteristics of the proposed methods and of the multiple signal classification (MUSIC) method, the computational complexity and the valid scope of these methods are given. From the results, whether the loss is known or not, the best method can be chosen for the problem at hand (localization in a lossless or a lossy medium). PMID:28574467
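To give a concrete picture of the subspace machinery the comparison refers to, the sketch below runs a conventional far-field MUSIC azimuth scan for a uniform circular array in a lossless medium. The array geometry, source placement, and noise level are invented for the example; the paper's near-field lossy-medium model and its closed-form phase/amplitude estimators are not reproduced.

    # Far-field, lossless-medium MUSIC azimuth scan for a uniform circular array (toy).
    import numpy as np

    rng = np.random.default_rng(9)
    M, r, lam = 8, 0.5, 1.0                        # sensors, radius (m), wavelength (m)
    k = 2 * np.pi / lam
    gamma = 2 * np.pi * np.arange(M) / M           # sensor angles on the circle

    def steering(az, el):
        return np.exp(1j * k * r * np.sin(el) * np.cos(az - gamma))

    az0, el0 = np.deg2rad(60), np.deg2rad(70)      # assumed true source direction
    n_snap = 200
    s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
    noise = 0.1 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap)))
    X = np.outer(steering(az0, el0), s) + noise
    R = X @ X.conj().T / n_snap                    # sample covariance matrix

    _, vecs = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = vecs[:, :M - 1]                           # noise subspace (one source assumed)

    az_grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
    p = [1.0 / np.linalg.norm(En.conj().T @ steering(az, el0)) ** 2 for az in az_grid]
    print(np.rad2deg(az_grid[int(np.argmax(p))]))  # peaks near 60 degrees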
NASA Astrophysics Data System (ADS)
Socorro, A. B.; Corres, J. M.; Del Villar, I.; Matias, I. R.; Arregui, F. J.
2014-05-01
This work presents the development and testing of an anti-gliadin antibody biosensor based on lossy mode resonances (LMRs) to detect celiac disease. Several polyelectrolytes were used in layer-by-layer assembly processes in order to generate the LMR and to fabricate a gliadin-embedded thin film. The LMR shifted 20 nm when immersed in a 5 ppm anti-gliadin antibody PBS solution, which makes this bioprobe suitable for detecting celiac disease. This is the first time, to our knowledge, that LMRs have been used to detect celiac disease, and these results suggest promising prospects for the use of such phenomena as biological detectors.
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format introduced by Microsoft and an SSIM-tuned version of it by one of the authors. We compare these two implementations with JPEG in two variations and with a visually and PSNR-optimal JPEG2000 implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.
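For orientation, the basic single-scale SSIM index for one window of a reference and a distorted image is sketched below with the usual C1/C2 stabilizing constants; the metrics actually studied in the paper are the multiscale SSIM and a DCT-based metric, so this is only the underlying formula.

    # Basic single-scale SSIM for one window, with the usual C1/C2 constants assumed.
    import numpy as np

    def ssim_window(x, y, data_range=255.0, k1=0.01, k2=0.03):
        c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cxy = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        ref = rng.integers(0, 256, (64, 64)).astype(float)
        dist = ref + rng.normal(0, 10, ref.shape)   # simulated coding noise
        print(ssim_window(ref, dist))                # 1.0 only for identical windows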
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
Information content exploitation of imaging spectrometer's images for lossless compression
NASA Astrophysics Data System (ADS)
Wang, Jianyu; Zhu, Zhenyu; Lin, Kan
1996-11-01
Imaging spectrometers such as MAIS produce a tremendous volume of image data, with a raw data rate of up to 5.12 Mbps, which urgently needs a real-time, efficient, and reversible compression implementation. Between a lossy scheme with high compression ratio and a lossless scheme with high fidelity, the choice must be based on an analysis of the particular information content of each imaging spectrometer's image data. In this paper, we present a careful analysis of information-preserving compression for the imaging spectrometer MAIS, with an entropy and autocorrelation study of the hyperspectral images. First, the statistical information in an actual MAIS image, captured at Marble Bar, Australia, is measured through its entropy, conditional entropy, mutual information, and autocorrelation coefficients in both the spatial dimensions and the spectral dimension. These analyses show that there is high redundancy in the spatial dimensions, but the correlation in the spectral dimension of the raw images is smaller than expected. The main reason for the nonstationarity in the spectral dimension is attributed to the instrument's discrepancies in detector response and channel amplification across different spectral bands. To restore the natural correlation, the signal is preprocessed in advance. There are two methods to accomplish this: onboard radiation calibration and normalization, with the former giving the better result. After preprocessing, the spectral correlation increases so much that it contributes substantial redundancy in addition to the spatial correlation. Finally, an on-board hardware implementation of the lossless compression is presented, with an ideal result.
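The entropy and mutual-information measurements described can be illustrated with simple histogram estimators, as in the Python sketch below; the band data here are synthetic stand-ins, not MAIS imagery.

    # Histogram estimates of band entropy and adjacent-band mutual information (toy data).
    import numpy as np

    def entropy_bits(x, bins=256):
        p, _ = np.histogram(x, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def mutual_information_bits(x, y, bins=256):
        pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        band1 = rng.integers(0, 4096, (128, 128)).astype(float)
        band2 = band1 + rng.normal(0, 50, band1.shape)   # correlated neighboring band
        print(entropy_bits(band1), mutual_information_bits(band1, band2))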
High thermal conductivity lossy dielectric using co-densified multilayer configuration
Tiegs, Terry N.; Kiggans, Jr., James O.
2003-06-17
Systems and methods are described for lossy dielectrics. A method of manufacturing a lossy dielectric includes providing at least one high dielectric loss layer, providing at least one high thermal conductivity, electrically insulating layer adjacent the at least one high dielectric loss layer, and then densifying them together. The systems and methods provide advantages because the lossy dielectrics are less costly and more environmentally friendly than the available alternatives.
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvements in terms of performance and complexity compared to recently proposed methods.
Huang, Yixing; Yuan, Xujin; Wang, Changxian; Chen, Mingji; Tang, Liqun; Fang, Daining
2018-06-15
Microwave absorbers with broadband absorption and small thickness are one of the main research interests in this field. A flexible, ultrathin, broadband microwave absorber comprising multiwall carbon nanotubes, spherical carbonyl iron, and silicone rubber is fabricated in a newly proposed pyramidal spatial periodic structure (SPS). The SPS, with an equivalent thickness of 3.73 mm, covers the -10 dB and -15 dB absorption bandwidths in the frequency ranges 2-40 GHz and 10-40 GHz, respectively. The excellent absorption performance is achieved by concentration and dissipation of the electromagnetic field inside different parts of the magnetic-dielectric lossy protrusions in different frequency ranges.
A simple circular-polarized antenna: Circular waveguide horn coated with lossy magnetic material
NASA Technical Reports Server (NTRS)
Lee, C. S.; Lee, S. W.; Justice, D. W.
1986-01-01
A circular waveguide horn coated with a lossy material in its interior wall can be used as an alternative to a corrugated waveguide for radiating a circularly polarized (CP) field. To achieve good CP radiation, the diameter of the structure must be larger than the free-space wavelength, and the coating material must be sufficiently lossy and magnetic. This device is cheaper and lighter in weight than the corrugated one.
Effect of the losses in the vocal tract on determination of the area function.
Gülmezoğlu, M Bilginer; Barkana, Atalay
2003-01-01
In this work, the cross-sectional areas of the vocal tract are determined for the lossy and lossless cases by using pole-zero models obtained from the electrical equivalent circuit model of the vocal tract and the system identification method. The cross-sectional areas are used to compare the lossy and lossless cases. In the lossy case, the internal losses due to wall vibration, heat conduction, air friction, and viscosity are considered; that is, the complex poles and zeros obtained from the models are used directly. In the lossless case, only the imaginary parts of these poles and zeros are used. The vocal tract shapes obtained for the lossy case are close to the actual ones.
NASA Astrophysics Data System (ADS)
Tiwari, Divya; Mullaney, Kevin; Korposh, Serhiy; James, Stephen W.; Lee, Seung-Woo; Tatam, Ralph P.
2016-05-01
The development of an ammonia sensor, formed by the deposition of a functionalised titanium dioxide film onto a tapered optical fibre, is presented. The titanium dioxide coating allows the coupling of light from the fundamental core mode to a lossy mode supported by the coating, thus creating a lossy mode resonance (LMR) in the transmission spectrum. The porphyrin compound used to functionalise the coating was removed from the titanium dioxide coating upon exposure to ammonia, causing a change in the refractive index of the coating and a concomitant shift in the central wavelength of the lossy mode resonance. Concentrations of ammonia as small as 1 ppm were detected with a response time of less than 1 min.
Imaging spectrometry - Technology and applications
NASA Technical Reports Server (NTRS)
Solomon, Jerry E.
1989-01-01
The development history and current status of NASA imaging-spectrometer (IS) technology are discussed in a review covering the period 1982-1988. Consideration is given to the Airborne IS first flown in 1982, the second-generation Airborne Visible and IR IS (AVIRIS), the High-Resolution IS being developed for the EOS polar platform, improved two-dimensional focal-plane arrays for the short-wave IR spectral region, and noncollinear acoustooptic tunable filters for use as spectral dispersing elements. Also examined are approaches to solving the data-processing problems posed by the high data volumes of state-of-the-art ISs (e.g., 160 MB per 600 x 600-pixel AVIRIS scene), including intelligent data editing, lossless and lossy data compression techniques, and direct extraction of scientifically meaningful geophysical and biophysical parameters.
Fast and memory efficient text image compression with JBIG2.
Ye, Yan; Cosman, Pamela
2003-01-01
In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.
Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.
Villa, E; Aja, B; de la Fuente, L; Artal, E
2016-01-01
This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines working at cryogenic temperature (15 K). The design analysis of a microwave detector based on a planar gallium-arsenide low effective Schottky barrier height diode is reported, aimed at achieving large input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity as well as good return loss over a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A sensitivity of 1000 mV/mW and an input return loss better than 12 dB have been achieved when the circuit works as a zero-bias Schottky diode detector at room temperature; at cryogenic temperature, with the need for a DC bias current, the sensitivity increases to a minimum of 2200 mV/mW.
Numerical methods for analyzing electromagnetic scattering
NASA Technical Reports Server (NTRS)
Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.
1985-01-01
Numerical methods to analyze electromagnetic scattering are presented. The dispersion and attenuation of the normal modes in a circular waveguide coated with lossy material were completely analyzed. The radar cross section (RCS) of a circular waveguide coated with lossy material was calculated. The following is observed: (1) the interior irradiation contributes to the RCS much more than does the rim diffraction; (2) at low frequency, the RCS of the circular waveguide terminated by a perfect electric conductor (PEC) can be reduced by more than 13 dB with a coating thickness less than 1% of the radius, using the best lossy material available, in a cylinder 6 radii long; (3) at high frequency, a modal separation between the highly attenuated and the lowly attenuated modes is evident if the coating material is very lossy; however, a large RCS reduction can be achieved for a small incident angle with a thin layer of coating. It is found that a waveguide coated with a lossy magnetic material can be used as a substitute for a corrugated waveguide to produce circularly polarized radiation.
Compression performance of HEVC and its format range and screen content coding extensions
NASA Astrophysics Data System (ADS)
Li, Bin; Xu, Jizheng; Sullivan, Gary J.
2015-09-01
This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.
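Bit-rate savings at equal PSNR of this kind are conventionally summarized with a Bjontegaard-style delta-rate computation: fit log-rate as a polynomial in PSNR for each codec and average the difference over the overlapping PSNR range. The sketch below shows that calculation on made-up rate/PSNR points; it is the common methodology, not necessarily the exact tool used for the paper's numbers.

    # Bjontegaard-style average bit-rate difference on made-up rate/PSNR points.
    import numpy as np

    def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
        """Average % bit-rate difference of the test codec vs. the anchor at equal PSNR."""
        la, lt = np.log(rate_anchor), np.log(rate_test)
        pa = np.polyfit(psnr_anchor, la, 3)          # log-rate as a cubic in PSNR
        pt = np.polyfit(psnr_test, lt, 3)
        lo = max(min(psnr_anchor), min(psnr_test))   # overlapping PSNR interval
        hi = min(max(psnr_anchor), max(psnr_test))
        ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
        it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
        return (np.exp((it - ia) / (hi - lo)) - 1.0) * 100.0

    if __name__ == "__main__":
        r_a, p_a = [1000, 1800, 3200, 6000], [34.0, 36.5, 39.0, 41.5]  # anchor (made up)
        r_t, p_t = [650, 1150, 2100, 4000], [34.2, 36.6, 39.1, 41.6]   # test (made up)
        print(bd_rate(r_a, p_a, r_t, p_t))           # negative => bit-rate savings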
Statistical Compression of Wind Speed Data
NASA Astrophysics Data System (ADS)
Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.
2017-12-01
In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of the seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into 9 regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model, which relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.
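As a highly simplified stand-in for the idea of storing fitted generator parameters instead of the data themselves, the sketch below fits a Weibull distribution (an assumed choice; the paper uses a more elaborate non-Gaussian spatial model) to a synthetic daily wind-speed series with SciPy and regenerates a series from the fitted parameters.

    # Fit-and-regenerate sketch: store distribution parameters instead of daily values
    # (Weibull is an assumed stand-in for the paper's non-Gaussian generator; SciPy assumed).
    from scipy import stats

    daily_wind = stats.weibull_min.rvs(c=2.1, scale=6.0, size=3650, random_state=5)

    # "Compression": keep only the fitted parameters instead of 3650 daily values.
    c_hat, loc_hat, scale_hat = stats.weibull_min.fit(daily_wind, floc=0.0)

    # "Reconstruction": draw synthetic days that reproduce the fitted variability.
    synthetic = stats.weibull_min.rvs(c=c_hat, loc=loc_hat, scale=scale_hat,
                                      size=3650, random_state=6)
    print(c_hat, scale_hat, daily_wind.mean(), synthetic.mean())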
Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C
Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, in the analysis occurs. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
Embedded importance watermarking for image verification in radiology
NASA Astrophysics Data System (ADS)
Osborne, Domininc; Rogers, D.; Sorell, M.; Abbott, Derek
2004-03-01
Digital medical images used in radiology are quite different from everyday continuous-tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be large and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and incidental degradation occurring during transmission have potential legal accountability issues, especially in the case of the null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images - in particular, detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present a result showing how embedded watermarking can be used to differentiate contextual from detailed information. The types of images used include spiral hairline fractures and small tumors, which contain the essential diagnostic high-spatial-frequency information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pingenot, J; Rieben, R; White, D
2004-12-06
We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight tunnel with lossy, randomly rough walls. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.
NASA Astrophysics Data System (ADS)
Grant, Christopher V.; Yang, Yuan; Glibowicka, Mira; Wu, Chin H.; Park, Sang Ho; Deber, Charles M.; Opella, Stanley J.
2009-11-01
The design, construction, and performance of a cross-coil double-resonance probe for solid-state NMR experiments on lossy biological samples at high magnetic fields are described. The outer coil is a Modified Alderman-Grant Coil (MAGC) tuned to the 1H frequency. The inner coil consists of a multi-turn solenoid coil that produces a B 1 field orthogonal to that of the outer coil. This results in a compact nested cross-coil pair with the inner solenoid coil tuned to the low frequency detection channel. This design has several advantages over multiple-tuned solenoid coil probes, since RF heating from the 1H channel is substantially reduced, it can be tuned for samples with a wide range of dielectric constants, and the simplified circuit design and high inductance inner coil provides excellent sensitivity. The utility of this probe is demonstrated on two electrically lossy samples of membrane proteins in phospholipid bilayers (bicelles) that are particularly difficult for conventional NMR probes. The 72-residue polypeptide embedding the transmembrane helices 3 and 4 of the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) (residues 194-241) requires a high salt concentration in order to be successfully reconstituted in phospholipid bicelles. A second application is to paramagnetic relaxation enhancement applied to the membrane-bound form of Pf1 coat protein in phospholipid bicelles where the resistance to sample heating enables high duty cycle solid-state NMR experiments to be performed.
Electromagnetic scattering by a straight thin wire
NASA Technical Reports Server (NTRS)
Shamansky, Harry T.; Dominek, Allen K.; Peters, Leon, Jr.
1989-01-01
The traveling-wave energy, which multiply diffracts on a straight thin wire, is represented as a sum of terms, each with a distinct physical meaning, that can be individually examined in the time domain. Expressions for each scattering mechanism on a straight thin wire are cast in the form of four basic electromagnetic wave concepts: diffraction, attachment, launch, and reflection. Using the basic mechanisms from P. Ya. Ufimtsev (1962), each of the scattering mechanisms is included into the total scattered field for the straight thin wire. Scattering as a function of angle and frequency is then compared to the moment-method solution. These analytic expressions are then extended to a lossy wire with a simple approximate modification using the propagation velocity on the wire as derived from the Sommerfeld wave on a straight lossy wire. Both the perfectly conducting and lossy wire solutions are compared to moment-method results, and excellent agreement is found. As is common with asymptotic solutions, when the electrical length of wire is smaller than 0.2 lambda the results lose accuracy. The expressions modified to approximate the scattering for the lossy thin wire yield excellent agreement even for lossy wires where the wire radius is on the order of skin depth.
Video quality assesment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, based on singular value decomposition, to evaluate distorted video sequences. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
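A rough sketch in the spirit of an SVD-based block distortion map is given below: for each 8x8 block, the singular values of the reference and distorted blocks are compared. The pooling into a single numerical score and the graphical display used in the paper are not reproduced, and the block size and norm are assumptions.

    # Block-wise singular-value distortion map in the spirit of an SVD-based measure (toy).
    import numpy as np

    def svd_block_distortion(ref, dist, block=8):
        h, w = ref.shape
        scores = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                s_r = np.linalg.svd(ref[i:i + block, j:j + block], compute_uv=False)
                s_d = np.linalg.svd(dist[i:i + block, j:j + block], compute_uv=False)
                scores.append(np.sqrt(np.sum((s_r - s_d) ** 2)))
        return np.array(scores)                 # per-block distortion values

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        ref = rng.integers(0, 256, (64, 64)).astype(float)
        dist = ref + rng.normal(0, 5, ref.shape)
        d = svd_block_distortion(ref, dist)
        print(d.mean(), d.max())                # a global score could pool these values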
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to keep visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as much as possible. In addition, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
Chang, Yin-Jung; Lai, Chi-Sheng
2013-09-01
The mismatch in film thickness and incident angle between reflectance and transmittance extrema due to the presence of lossy film(s) is investigated with a view toward maximizing transmittance into the active region of solar cells. A planar air/lossy film/silicon double-interface geometry is used to illustrate important and quite opposite mismatch behaviors associated with TE and TM waves. In a typical thin-film CIGS solar cell, mismatches contributed by TM waves generally dominate. The angular mismatch is at least 10° in about 37%-53% of the spectrum, depending on the thickness combination of all lossy interlayers. The largest thickness mismatch of a specific interlayer generally increases with the thickness of the layer itself. Antireflection coating designs for solar cells should therefore be optimized in terms of the maximum transmittance into the active region, even if the corresponding reflectance is not at its minimum.
Normal modes in an overmoded circular waveguide coated with lossy material
NASA Technical Reports Server (NTRS)
Lee, C. S.; Lee, S. W.; Chuang, S. L.
1985-01-01
The normal modes in an overmoded waveguide coated with a lossy material are analyzed, particularly for their attenuation properties as a function of coating material, layer thickness, and frequency. When the coating material is not too lossy, the low-order modes are highly attenuated even with a thin layer of coating. This coated guide serves as a mode suppressor of the low-order modes, which can be particularly useful for reducing the radar cross section (RCS) of a cavity structure such as a jet inlet. When the coating material is very lossy, low-order modes fall into two distinct groups: highly and lowly attenuated modes. However, as a/lambda (a = radius of the cylinder; lambda = the free-space wavelength) increases, the separation between these two groups becomes less distinctive. The attenuation constants of most of the low-order modes become small, and decrease as a function of lambda^2/a^3.
Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min
2016-04-13
In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information in the available data for particular purposes. Although approaches to these two questions differ significantly across fields, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose the time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of original grid-wise time series to speed up the computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
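The PCA/EOF side of the idea can be illustrated in a few lines: a space-time field is represented by its leading principal components, which both compresses the field and yields the component series that the fast MEEMD decomposes instead of grid-point series. The sketch below uses synthetic data and an arbitrary truncation level.

    # EOF/PCA truncation of a synthetic space-time field: keep k leading modes.
    import numpy as np

    rng = np.random.default_rng(7)
    nt, nx = 1200, 500                                    # time steps, grid points (toy)
    field = (rng.standard_normal((nt, 10)) @ rng.standard_normal((10, nx))
             + 0.05 * rng.standard_normal((nt, nx)))      # coherent signal + noise

    mean = field.mean(axis=0)
    U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)

    k = 20                                                # retained modes (arbitrary)
    pcs, eofs = U[:, :k] * s[:k], Vt[:k]                  # compressed representation
    recon = pcs @ eofs + mean

    stored = pcs.size + eofs.size + mean.size
    print(field.size / stored, np.abs(field - recon).max())  # compression ratio, max error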
Graphene Oxide in Lossy Mode Resonance-Based Optical Fiber Sensors for Ethanol Detection.
Hernaez, Miguel; Mayes, Andrew G; Melendi-Espina, Sonia
2017-12-27
The influence of graphene oxide (GO) over the features of an optical fiber ethanol sensor based on lossy mode resonances (LMR) has been studied in this work. Four different sensors were built with this aim, each comprising a multimode optical fiber core fragment coated with a SnO₂ thin film. Layer by layer (LbL) coatings made of 1, 2 and 4 bilayers of polyethyleneimine (PEI) and graphene oxide were deposited onto three of these devices and their behavior as aqueous ethanol sensors was characterized and compared with the sensor without GO. The sensors with GO showed much better performance with a maximum sensitivity enhancement of 176% with respect to the sensor without GO. To our knowledge, this is the first time that GO has been used to make an optical fiber sensor based on LMR.
Campione, Salvatore; Warne, Larry K.; Basilio, Lorena I.; ...
2017-01-13
This study details a model for the response of a finite- or an infinite-length wire interacting with a conducting ground to an electromagnetic pulse excitation. We develop a frequency–domain method based on transmission line theory that we name ATLOG – Analytic Transmission Line Over Ground. This method is developed as an alternative to full-wave methods, as it delivers a fast and reliable solution. It allows for the treatment of finite or infinite lossy, coated wires, and lossy grounds. The cases of wire above ground, as well as resting on the ground and buried beneath the ground are treated. The reported method is general and the time response of the induced current is obtained using an inverse Fourier transform of the current in the frequency domain. The focus is on the characteristics and propagation of the transmission line mode. Comparisons with full-wave simulations strengthen the validity of the proposed method.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, so that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
ARKACHAISRI, THASCHAWEE; VILAIYUK, SOAMARAT; LI, SUZANNE; O’NEIL, KATHLEEN M.; POPE, ELENA; HIGGINS, GLORIA C.; PUNARO, MARILYNN; RABINOVICH, EGLA C.; ROSENKRANZ, MARGALIT; KIETZ, DANIEL A.; ROSEN, PAUL; SPALDING, STEVEN J.; HENNON, TERESA R.; TOROK, KATHRYN S.; CASSIDY, ELAINE; MEDSGER, THOMAS A.
2013-01-01
Objective To develop and evaluate a Localized Scleroderma (LS) Skin Severity Index (LoSSI) and global assessments’ clinimetric property and effect on quality of life (QOL). Methods A 3-phase study was conducted. The first phase involved 15 patients with LS and 14 examiners who assessed LoSSI [surface area (SA), erythema (ER), skin thickness (ST), and new lesion/extension (N/E)] twice for inter/intrarater reliability. Patient global assessment of disease severity (PtGA-S) and Children’s Dermatology Life Quality Index (CDLQI) were collected for intrarater reliability evaluation. The second phase was aimed to develop clinical determinants for physician global assessment of disease activity (PhysGA-A) and to assess its content validity. The third phase involved 2 examiners assessing LoSSI and PhysGA-A on 27 patients. Effect of training on improving reliability/validity and sensitivity to change of the LoSSI and PhysGA-A was determined. Results Interrater reliability was excellent for ER [intraclass correlation coefficient (ICC) 0.71], ST (ICC 0.70), LoSSI (ICC 0.80), and PhysGA-A (ICC 0.90) but poor for SA (ICC 0.35); thus, LoSSI was modified to mLoSSI. Examiners’ experience did not affect the scores, but training/practice improved reliability. Intrarater reliability was excellent for ER, ST, and LoSSI (Spearman’s rho = 0.71–0.89) and moderate for SA. PtGA-S and CDLQI showed good intrarater agreement (ICC 0.63 and 0.80). mLoSSI correlated moderately with PhysGA-A and PtGA-S. Both mLoSSI and PhysGA-A were sensitive to change following therapy. Conclusion mLoSSI and PhysGA-A are reliable and valid tools for assessing LS disease severity and show high sensitivity to detect change over time. These tools are feasible for use in routine clinical practice. They should be considered for inclusion in a core set of LS outcome measures for clinical trials. PMID:19833758
Wave attenuation and mode dispersion in a waveguide coated with lossy dielectric material
NASA Technical Reports Server (NTRS)
Lee, C. S.; Chuang, S. L.; Lee, S. W.; Lo, Y. T.
1984-01-01
The modal attenuation constants in a cylindrical waveguide coated with a lossy dielectric material are studied as functions of frequency, dielectric constant, and thickness of the dielectric layer. A dielectric material best suited for large attenuation is suggested. Using Kirchhoff's approximation, the field attenuation in a coated waveguide illuminated by a normally incident plane wave is also studied. For a circular guide which has a diameter of two wavelengths and is coated with a thin lossy dielectric layer (relative permittivity epsilon_r = 9.1 - j2.3, thickness = 3% of the radius), a 3 dB attenuation is achieved within 16 diameters.
Video quality pooling adaptive to perceptual distortion severity.
Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad
2013-02-01
It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content-adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes "worst" scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it with three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
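The flavor of "worst scores" pooling can be sketched as percentile pooling over space and then time, as below; the content-adaptive weighting and motion handling of the actual method are not reproduced, and the percentages are arbitrary.

    # Percentile ("worst scores") pooling over space and then time (arbitrary percentages).
    import numpy as np

    def worst_percent_pool(local_scores, spatial_p=5.0, temporal_p=10.0):
        """local_scores: array (frames, blocks) of local quality values, lower = worse."""
        frame_scores = []
        for frame in local_scores:
            k = max(1, int(np.ceil(frame.size * spatial_p / 100.0)))
            frame_scores.append(np.sort(frame)[:k].mean())   # mean of the worst blocks
        frame_scores = np.array(frame_scores)
        k = max(1, int(np.ceil(frame_scores.size * temporal_p / 100.0)))
        return np.sort(frame_scores)[:k].mean()              # mean of the worst frames

    if __name__ == "__main__":
        rng = np.random.default_rng(8)
        scores = rng.uniform(0.7, 1.0, size=(120, 396))      # 120 frames, 396 blocks (toy)
        scores[40:45, 100:150] = 0.2                          # brief severe distortion
        print(worst_percent_pool(scores))                     # dominated by the bad burst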
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
Tipikin, D. S.; Earle, K. A.; Freed, J. H.
2010-01-01
The sensitivity of a high frequency electron spin resonance (ESR) spectrometer depends strongly on the structure used to couple the incident millimeter wave to the sample that generates the ESR signal. Subsequent coupling of the ESR signal to the detection arm of the spectrometer is also a crucial consideration for achieving high spectrometer sensitivity. In previous work, we found that a means for continuously varying the coupling was necessary for attaining high sensitivity reliably and reproducibly. We report here on a novel asymmetric mesh structure that achieves continuously variable coupling by rotating the mesh in its own plane about the millimeter wave transmission line optical axis. We quantify the performance of this device with nitroxide spin-label spectra in both a lossy aqueous solution and a low loss solid state system. These two systems have very different coupling requirements and are representative of the range of coupling achievable with this technique. Lossy systems in particular are a demanding test of the achievable sensitivity and allow us to assess the suitability of this approach for applying high frequency ESR to the study of biological systems at physiological conditions, for example. The variable coupling technique reported on here allows us to readily achieve a factor of ca. 7 improvement in signal to noise at 170 GHz and a factor of ca. 5 at 95 GHz over what has previously been reported for lossy samples. PMID:20458356
Upper bounds on secret-key agreement over lossy thermal bosonic channels
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2017-12-01
Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is evidence of having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., Phys. Rev. Lett. 119, 120503 (2017), 10.1103/PhysRevLett.119.120503] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the nonasymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.
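For orientation, the weak-converse secret-key bounds commonly quoted in the literature for the pure-loss and thermal-loss channels are recalled below (a sketch of the benchmark quantities, not the tighter bounds derived in the paper); here η denotes the channel transmissivity and n̄ the mean thermal photon number.

```latex
% Known weak-converse bounds recalled from the literature (not the tighter
% bounds derived in this paper); \eta is the transmissivity, \bar{n} the mean
% thermal photon number, and g(x) the bosonic entropy function.
\begin{align}
  K(\eta) &\le -\log_2(1-\eta) && \text{(pure-loss channel)} \\
  K(\eta,\bar{n}) &\le -\log_2\!\left[(1-\eta)\,\eta^{\bar{n}}\right] - g(\bar{n}),
  \quad \bar{n} < \tfrac{\eta}{1-\eta} && \text{(thermal-loss channel)} \\
  g(x) &= (x+1)\log_2(x+1) - x\log_2 x
\end{align}
```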
Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil
2015-08-10
In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important part of the IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulations and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages in terms of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds.
Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil
2015-01-01
In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important part of the IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulations and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages in terms of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds. PMID:26266411
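A toy illustration of combining several normalized routing contexts into a single additive RPL rank increment is sketched below; the metric names, weights, and normalization are hypothetical and do not reproduce the SCAOF definition.

```python
def composite_rank(parent_rank, metrics, weights=None):
    """Toy composite routing metric in the spirit of context-aware RPL
    objective functions: combine normalized energy, link reliability and
    node resource metrics into a single additive rank increment.

    Metric names and weights are illustrative, not the SCAOF definition."""
    weights = weights or {"energy": 0.4, "etx": 0.4, "resources": 0.2}
    # Each metric is assumed normalized to [0, 1], where larger means "worse".
    increment = sum(weights[name] * metrics[name] for name in weights)
    MIN_HOP_RANK_INCREASE = 256          # default RPL rank granularity
    return parent_rank + int(MIN_HOP_RANK_INCREASE * (1 + increment))

# A node chooses the candidate parent yielding the smallest resulting rank.
candidates = [
    {"rank": 256, "metrics": {"energy": 0.2, "etx": 0.5, "resources": 0.1}},
    {"rank": 256, "metrics": {"energy": 0.7, "etx": 0.1, "resources": 0.3}},
]
best = min(candidates, key=lambda c: composite_rank(c["rank"], c["metrics"]))
print(composite_rank(best["rank"], best["metrics"]))
```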
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploring the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
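The scan-order idea can be illustrated with a minimal sketch: a residual block is reordered with a mode-dependent scan so that likely-zero coefficients cluster at the end of the sequence. The column-wise order used here is an assumption for illustration, not one of the MD-templates defined in the paper.

```python
import numpy as np

def scan(block, order):
    """Scan a 4x4 residual block into a 1-D sequence following a
    mode-dependent order (here an illustrative column-wise order that one
    might associate with vertical intra prediction, not a template from
    the paper)."""
    return [block[r, c] for (r, c) in order]

# Column-major order: for vertical prediction, residual energy tends to be
# similar within columns, so grouping columns clusters the zeros together.
vertical_order = [(r, c) for c in range(4) for r in range(4)]

residual = np.array([[3, 0, 0, 0],
                     [2, 0, 0, 0],
                     [1, 0, 0, 0],
                     [1, 0, 0, 0]])
print(scan(residual, vertical_order))
# -> [3, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# The trailing run of zeros suits run-length-oriented entropy coding.
```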
SURGNET: An Integrated Surgical Data Transmission System for Telesurgery.
Natarajan, Sriram; Ganz, Aura
2009-01-01
Remote surgery information requires quick and reliable transmission between the surgeon and the patient site. However, the networks that interconnect the surgeon and patient sites are usually time varying and lossy, which can cause packet loss and delay jitter. In this paper we propose SURGNET, a telesurgery system for which we developed the architecture and algorithms and implemented them on a testbed. The algorithms include adaptive packet prediction and buffer time adjustment techniques which reduce the negative effects caused by the lossy and time varying networks. To evaluate the proposed SURGNET system, at the therapist site we implemented a therapist panel which controls the force feedback device movements and provides image analysis functionality. At the patient site we controlled a virtual reality applet built in Matlab. The varying network conditions were emulated using the NISTNet emulator. Our results show that even for severe packet loss and variable delay jitter, the proposed integrated synchronization techniques significantly improve SURGNET performance.
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating, for each voxel of a slice, from which position in the previous slice (at a fixed position in the third dimension) it has moved to its current position. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, particularly whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can benefit from this approach. We show that with the discrete cosine as well as with the Karhunen-Loève transform the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
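A minimal sketch of the representation idea follows: a compactly stored low-resolution potential is upsampled, its gradient is taken as the displacement vector field, and the previous slice is warped to predict the current one. Interpolation orders and sizes are illustrative assumptions, not the Gauss-Newton estimation described in the paper.

```python
import numpy as np
from scipy.ndimage import zoom, map_coordinates

def displacement_from_potential(potential_lowres, shape):
    """Upsample a stored low-resolution potential to the slice size and take
    its gradient as the (smooth) displacement vector field."""
    factors = (shape[0] / potential_lowres.shape[0],
               shape[1] / potential_lowres.shape[1])
    potential = zoom(potential_lowres, factors, order=3)
    dy, dx = np.gradient(potential)
    return dy, dx

def warp_previous_slice(prev_slice, dy, dx):
    """Predict the current slice by sampling the previous slice at the
    displaced positions (backward warping with bilinear interpolation)."""
    rows, cols = np.indices(prev_slice.shape, dtype=float)
    coords = np.array([rows + dy, cols + dx])
    return map_coordinates(prev_slice, coords, order=1, mode='nearest')

prev_slice = np.random.rand(64, 64)
current_slice = np.random.rand(64, 64)          # stands in for the next time point
potential_lowres = np.random.rand(8, 8) * 2.0   # compactly stored potential
dy, dx = displacement_from_potential(potential_lowres, prev_slice.shape)
prediction = warp_previous_slice(prev_slice, dy, dx)
error_image = current_slice - prediction        # residual to be coded separately
print(float(np.abs(error_image).mean()))
```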
Dynamic propagation of symmetric Airy pulses with initial chirps in an optical fiber
NASA Astrophysics Data System (ADS)
Shi, Xiaohui; Huang, Xianwei; Deng, Yangbao; Tan, Chao; Bai, Yanfeng; Fu, Xiquan
2017-09-01
We analytically and numerically investigate the propagation dynamics of initially chirped symmetric Airy pulses in an optical fiber. The results show that the positive chirps act to promote the interference in generating a focal point on the propagation axis, while the negative chirps tend to suppress the focusing effect, as compared to conventional unchirped symmetric Airy pulses. The numerical results demonstrate that the linear propagation of chirped symmetric Airy pulses depends considerably on the chirp parameter and the primary lobe position. In the anomalous dispersion region, positively chirped symmetric Airy pulses first undergo an initial compression, reach a focus due to the opposite acceleration, then experience a lossy inversion transformation, and come to the opposite-facing focal position. The impact of the truncation coefficient and Kerr nonlinearity on the propagation of chirped symmetric Airy pulses is also discussed separately.
The Visual Uncertainty Paradigm for Controlling Screen-Space Information in Visualization
ERIC Educational Resources Information Center
Dasgupta, Aritra
2012-01-01
The information visualization pipeline serves as a lossy communication channel for presentation of data on a screen-space of limited resolution. The lossy communication is not just a machine-only phenomenon due to information loss caused by translation of data, but also a reflection of the degree to which the human user can comprehend visual…
Electromagnetic backscattering from a random distribution of lossy dielectric scatterers
NASA Technical Reports Server (NTRS)
Lang, R. H.
1980-01-01
Electromagnetic backscattering from a sparse distribution of discrete lossy dielectric scatterers occupying a region V was studied. The scatterers are assumed to have random position and orientation. Scattered fields are calculated by first finding the mean field and then by using it to define an equivalent medium within the volume V. The scatterers are then viewed as being embedded in the equivalent medium; the distorted Born approximation is then used to find the scattered fields. This technique represents an improvement over the standard Born approximation since it takes into account the attenuation of the incident and scattered waves in the equivalent medium. The method is used to model a leaf canopy when the leaves are modeled by lossy dielectric discs.
High-Throughput Block Optical DNA Sequence Identification.
Sagar, Dodderi Manjunatha; Korshoj, Lee Erik; Hanson, Katrina Bethany; Chowdhury, Partha Pratim; Otoupal, Peter Britton; Chatterjee, Anushree; Nagpal, Prashant
2018-01-01
Optical techniques for molecular diagnostics or DNA sequencing generally rely on small molecule fluorescent labels, which utilize light with a wavelength of several hundred nanometers for detection. Developing a label-free optical DNA sequencing technique will require nanoscale focusing of light, a high-throughput and multiplexed identification method, and a data compression technique to rapidly identify sequences and analyze genomic heterogeneity for big datasets. Such a method should identify characteristic molecular vibrations using optical spectroscopy, especially in the "fingerprinting region" from ≈400-1400 cm⁻¹. Here, surface-enhanced Raman spectroscopy is used to demonstrate label-free identification of DNA nucleobases with multiplexed 3D plasmonic nanofocusing. While nanometer-scale mode volumes prevent identification of single nucleobases within a DNA sequence, the block optical technique can identify A, T, G, and C content in DNA k-mers. The content of each nucleotide in a DNA block can be a unique and high-throughput method for identifying sequences, genes, and other biomarkers as an alternative to single-letter sequencing. Additionally, coupling two complementary vibrational spectroscopy techniques (infrared and Raman) can improve block characterization. These results pave the way for developing a novel, high-throughput block optical sequencing method with lossy genomic data compression using k-mer identification from multiplexed optical data acquisition. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
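The block representation can be sketched in a few lines: each k-mer is reduced to its A/T/G/C counts, which is lossy because many distinct k-mers share the same counts. The block length used here is an arbitrary assumption.

```python
from collections import Counter

def kmer_block_content(sequence, k=10):
    """Summarize a DNA sequence as per-block nucleotide content (A/T/G/C
    counts per k-mer) rather than the exact letter order -- an illustrative
    form of the lossy 'block' representation described above."""
    blocks = []
    for i in range(0, len(sequence) - k + 1, k):
        counts = Counter(sequence[i:i + k])
        blocks.append(tuple(counts.get(base, 0) for base in "ATGC"))
    return blocks

seq = "ATGCGGATTACAGGCTTACGATTG"
print(kmer_block_content(seq, k=8))
# Each tuple gives (#A, #T, #G, #C) for one block; many distinct k-mers map
# to the same tuple, which is what makes the representation lossy.
```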
Loss/gain-induced ultrathin antireflection coatings
Luo, Jie; Li, Sucheng; Hou, Bo; Lai, Yun
2016-01-01
Traditional antireflection coatings composed of dielectric layers usually require the thickness to be at least a quarter wavelength. Here, we demonstrate that materials with permittivity or permeability dominated by imaginary parts, i.e. lossy or gain media, can realize non-resonant antireflection coatings at a deep sub-wavelength scale. Interestingly, while the reflected waves are eliminated as in traditional dielectric antireflection coatings, the transmitted waves can be enhanced or reduced, depending on whether gain or lossy media are applied, respectively. We provide a unified theory for the design of such ultrathin antireflection coatings, showing that under different polarizations and incident angles, different types of ultrathin coatings should be applied. In particular, under transverse magnetic polarization, the requirement shows a switch between gain and lossy media at the Brewster angle. As a proof of principle, by using conductive films as a special type of lossy antireflection coating, we experimentally demonstrate the suppression of Fabry-Pérot resonances in a broad frequency range for microwaves. This valuable functionality can be applied to remove undesired resonant effects, such as the frequency-dependent side lobes induced by resonances in dielectric coverings of antennas. Our work provides a guide for the design of ultrathin antireflection coatings as well as their applications in broadband reflectionless devices. PMID:27349750
NASA Astrophysics Data System (ADS)
Fernández Pantoja, M.; Yarovoy, A. G.; Rubio Bretones, A.; González García, S.
2009-12-01
This paper presents a procedure to extend the method of moments in the time domain for the transient analysis of thin-wire antennas to include those cases where the antennas are located over a lossy half-space. This extended technique is based on the reflection coefficient (RC) approach, which approximates the fields incident on the ground interface as plane waves and calculates the time domain RC using the inverse Fourier transform of the Fresnel equations. The implementation presented in this paper uses general expressions for the RC which extend its range of applicability to lossy grounds, and is proven to be accurate and fast for antennas located not too near the ground. The resulting general-purpose procedure, able to treat arbitrarily oriented thin-wire antennas, is appropriate for all kinds of half-spaces, including lossy cases, and it has turned out to be as computationally fast when solving the problem of an arbitrary ground as when dealing with a perfect electric conductor ground plane. Results show a numerical validation of the method for different half-spaces, paying special attention to the influence of the antenna-to-ground distance on the accuracy of the results.
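For reference, the frequency-domain Fresnel coefficients whose inverse Fourier transform yields the time-domain RC for a nonmagnetic lossy half-space are the standard textbook forms below (an e^{jωt} convention is assumed here):

```latex
% Fresnel reflection coefficients for a nonmagnetic lossy half-space
% (e^{j\omega t} convention assumed); the time-domain RC is obtained from
% the inverse Fourier transform of these frequency-domain expressions.
\begin{align}
  \hat{\varepsilon}_r(\omega) &= \varepsilon_r - j\,\frac{\sigma}{\omega\varepsilon_0},\\
  R_{\mathrm{TE}}(\theta,\omega) &=
    \frac{\cos\theta - \sqrt{\hat{\varepsilon}_r - \sin^2\theta}}
         {\cos\theta + \sqrt{\hat{\varepsilon}_r - \sin^2\theta}},\qquad
  R_{\mathrm{TM}}(\theta,\omega) =
    \frac{\hat{\varepsilon}_r\cos\theta - \sqrt{\hat{\varepsilon}_r - \sin^2\theta}}
         {\hat{\varepsilon}_r\cos\theta + \sqrt{\hat{\varepsilon}_r - \sin^2\theta}}.
\end{align}
```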
Implementation of interconnect simulation tools in SPICE
NASA Technical Reports Server (NTRS)
Satsangi, H.; Schutt-Aine, J. E.
1993-01-01
Accurate computer simulation of high speed digital computer circuits and communication circuits requires a multimode approach to simulate both the devices and the interconnects between devices. Classical circuit analysis algorithms (lumped parameter) are needed for circuit devices and the network formed by the interconnected devices. The interconnects, however, have to be modeled as transmission lines which incorporate electromagnetic field analysis. An approach to writing a multimode simulator is to take an existing software package which performs either lumped parameter analysis or field analysis and add the missing type of analysis routines to the package. In this work a traditionally lumped parameter simulator, SPICE, is modified so that it will perform lossy transmission line analysis using a different model approach. Modifying SPICE3E2 or any other large software package is not a trivial task. An understanding of the programming conventions used, simulation software, and simulation algorithms is required. This thesis was written to clarify the procedure for installing a device into SPICE3E2. The installation of three devices is documented and the installations of the first two provide a foundation for installation of the lossy line which is the third device. The details of discussions are specific to SPICE, but the concepts will be helpful when performing installations into other circuit analysis packages.
Detection of bondline delaminations in multilayer structures with lossy components
NASA Technical Reports Server (NTRS)
Madaras, Eric I.; Winfree, William P.; Smith, B. T.; Heyman, Joseph H.
1988-01-01
The detection of bondline delaminations in multilayer structures using ultrasonic reflection techniques is a generic problem in adhesively bonded composite structures such as the Space Shuttle's Solid Rocket Motors (SRM). Standard pulse echo ultrasonic techniques do not perform well for a composite resonator composed of a resonant layer combined with attenuating layers. Excessive ringing in the resonant layer tends to mask internal echoes emanating from the attenuating layers. The SRM is made up of a resonant steel layer backed by layers of adhesive, rubber, liner and fuel, which are ultrasonically attenuating. The structure's response is modeled as a lossy ultrasonic transmission line. The model predicts that the acoustic response of the system is sensitive to delaminations at the interior bondlines in a few narrow frequency bands. These predictions are verified by measurements on a fabricated system. Successful imaging of internal delaminations is sensitive to proper selection of the interrogating frequency. Images of fabricated bondline delaminations are presented based on these studies.
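A minimal sketch of a lossy acoustic transmission-line model of such a layered stack is given below: each layer transforms the impedance seen behind it, with a complex wavenumber accounting for attenuation. The material values are rough illustrative numbers, not SRM data, and the formulation is the textbook one rather than the authors' specific model.

```python
import numpy as np

def input_impedance(z_layer, k_layer, d, z_load):
    """Transform a load impedance through one layer of a (lossy) acoustic
    transmission line. k_layer may be complex (k = w/c - 1j*alpha) to model
    attenuation; all quantities are specific acoustic impedances."""
    kd = k_layer * d
    return z_layer * (z_load * np.cos(kd) + 1j * z_layer * np.sin(kd)) / \
                     (z_layer * np.cos(kd) + 1j * z_load * np.sin(kd))

def reflection_from_stack(freq, layers, z_incident, z_backing):
    """Reflection coefficient seen from the incident medium for a stack of
    layers given as (impedance, sound_speed, attenuation_Np_per_m, thickness).
    A delamination could be mimicked by setting z_backing near zero (air)."""
    w = 2 * np.pi * freq
    z = z_backing
    for z_c, c, alpha, d in reversed(layers):
        k = w / c - 1j * alpha
        z = input_impedance(z_c, k, d, z)
    return (z - z_incident) / (z + z_incident)

layers = [
    (45e6, 5900.0,   5.0, 0.012),   # steel case (illustrative values)
    (3e6,  2000.0, 200.0, 0.001),   # adhesive (lossy)
    (1.7e6, 1600.0, 300.0, 0.003),  # rubber insulation (lossy)
]
freqs = np.linspace(0.2e6, 2e6, 5)
print([abs(reflection_from_stack(f, layers, 1.5e6, 2.2e6)) for f in freqs])
```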
NASA Astrophysics Data System (ADS)
Sobaszek, Michał; Dominik, Magdalena; Burnat, Dariusz; Bogdanowicz, Robert; Stranak, Viteszlav; Sezemsky, Petr; Śmietana, Mateusz
2017-04-01
This work presents an optical fiber sensor based on the lossy-mode resonance (LMR) phenomenon supported by an indium tin oxide (ITO) thin overlay for investigation of the electro-polymerization effect on the ITO surface. The ITO overlays were deposited on the core of polymer-clad silica (PCS) fibers using the reactive magnetron sputtering (RMS) method. Since ITO is electrically conductive and electrochemically active, it can be used as a working electrode in a 3-electrode cyclic voltammetry setup. For a fixed potential applied to the electrode, the current flow decreases with time, which corresponds to polymer layer formation on the ITO surface. Since the LMR phenomenon depends on the optical properties in the proximity of the ITO surface, polymer layer formation can be monitored optically in real time. The electrodeposition process was performed with Isatin, which is a strong endogenous neurochemical regulator in humans, as it is a metabolic derivative of adrenaline. It was found that optical detection of Isatin is possible in the proposed configuration.
MQCC: Maximum Queue Congestion Control for Multipath Networks with Blockage
2015-10-19
higher error rates in wireless networks result in a great deal of "false" congestion indications, resulting in underutilization of the network [4...approaches that are relevant to lossy wireless networks. Multipath TCP (MPTCP) schemes [9], [10] explore the design and implementation of multipath...attempts to "fix" TCP to work with lossy wireless networks using existing techniques. The authors have taken the view that because packet losses are
Excitation of the Uller-Zenneck electromagnetic surface waves in the prism-coupled configuration
NASA Astrophysics Data System (ADS)
Rasheed, Mehran; Faryad, Muhammad
2017-08-01
A configuration to excite the Uller-Zenneck surface electromagnetic waves at the planar interfaces of homogeneous and isotropic dielectric materials is proposed and theoretically analyzed. The Uller-Zenneck waves are surface waves that can exist at the planar interface of two dissimilar dielectric materials of which at least one is a lossy dielectric material. In this paper, a slab of a lossy dielectric material was taken with lossless dielectric materials on both sides. A canonical boundary-value problem was set up and solved to find the possible Uller-Zenneck waves and waveguide modes. The Uller-Zenneck waves guided by the slab of the lossy dielectric material were found to be either symmetric or antisymmetric and transmuted into waveguide modes when the thickness of that slab was increased. A prism-coupled configuration was then successfully devised to excite the Uller-Zenneck waves. The results showed that the Uller-Zenneck waves are excited at the same angle of incidence for any thickness of the slab of the lossy dielectric material, whereas the waveguide modes can be excited only when the slab is sufficiently thick. The excitation of Uller-Zenneck waves at planar interfaces with homogeneous and all-dielectric materials can usher in new avenues for the applications of electromagnetic surface waves.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
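The reduced-rank construction can be sketched with a truncated SVD of a discretized scattering operator, as below; the toy matrix and rank are arbitrary assumptions, and the focusing-based segregation of local regions is not reproduced.

```python
import numpy as np

def reduced_rank_operator(scattering_matrix, rank):
    """Form a reduced-rank approximation of a (discretized) scattering
    operator via its singular-value decomposition. Nonradial lossy objects
    give non-normal operators, so the SVD is used in place of an
    eigenfunction expansion."""
    u, s, vh = np.linalg.svd(scattering_matrix, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vh[:rank, :]

# Toy example: a random non-normal "scattering matrix" with decaying spectrum.
rng = np.random.default_rng(1)
a = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
u, s, vh = np.linalg.svd(a)
a = u @ np.diag(s * np.exp(-np.arange(len(s)) / 10.0)) @ vh  # impose decay
approx = reduced_rank_operator(a, rank=20)
rel_err = np.linalg.norm(a - approx) / np.linalg.norm(a)
print(f"relative error of rank-20 representation: {rel_err:.3f}")
```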
Causal impulse response for circular sources in viscous media
Kelly, James F.; McGough, Robert J.
2008-01-01
The causal impulse response of the velocity potential for the Stokes wave equation is derived for calculations of transient velocity potential fields generated by circular pistons in viscous media. The causal Green’s function is numerically verified using the material impulse response function approach. The causal, lossy impulse response for a baffled circular piston is then calculated within the near field and the far field regions using expressions previously derived for the fast near field method. Transient velocity potential fields in viscous media are computed with the causal, lossy impulse response and compared to results obtained with the lossless impulse response. The numerical error in the computed velocity potential field is quantitatively analyzed for a range of viscous relaxation times and piston radii. Results show that the largest errors are generated in locations near the piston face and for large relaxation times, and errors are relatively small otherwise. Unlike previous frequency-domain methods that require numerical inverse Fourier transforms for the evaluation of the lossy impulse response, the present approach calculates the lossy impulse response directly in the time domain. The results indicate that this causal impulse response is ideal for time-domain calculations that simultaneously account for diffraction and quadratic frequency-dependent attenuation in viscous media. PMID:18397018
2009-08-01
transmitter state. For example, theory has shown that for a non-classical ten entangled photon N00N state used as a Type-1 sensor, typical losses...stemmed from Lloyd's proof [14] that a large performance gain accrues from the use of entanglement in single-photon target detection within a lossy...output. These mode pairs are in independent identically distributed (iid), zero-mean, maximally-entangled Gaussian states with average photon number
Metamaterial-based lossy anisotropic epsilon-near-zero medium for energy collimation
NASA Astrophysics Data System (ADS)
Shen, Nian-Hai; Zhang, Peng; Koschny, Thomas; Soukoulis, Costas M.
2016-06-01
A lossy anisotropic epsilon-near-zero (ENZ) medium may lead to a counterintuitive phenomenon of omnidirectional bending-to-normal refraction [S. Feng, Phys. Rev. Lett. 108, 193904 (2012), 10.1103/PhysRevLett.108.193904], which offers a fabulous strategy for energy collimation and energy harvesting. Here, in the scope of effective medium theory, we systematically investigate two simple metamaterial configurations, i.e., metal-dielectric-layered structures and the wire medium, to explore the possibility of fulfilling the conditions of such an anisotropic lossy ENZ medium by playing with materials' parameters. Both realistic metamaterial structures and their effective medium equivalences have been numerically simulated, and the results are in excellent agreement with each other. Our study provides clear guidance and therefore paves the way towards the search for proper designs of anisotropic metamaterials for a decent effect of energy collimation and wave-front manipulation.
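For the metal-dielectric-layered case, the standard effective-medium formulas below show how a lossy anisotropic epsilon-near-zero response emerges near the fill fraction where the in-plane permittivity crosses zero; the material values are illustrative assumptions, not the designs studied in the paper.

```python
import numpy as np

def layered_effective_permittivity(eps_metal, eps_diel, fill_fraction):
    """Effective-medium permittivity tensor components of a deeply
    subwavelength metal-dielectric layer stack: eps_par (field in the layer
    plane) and eps_perp (field across the layers). Standard homogenization
    formulas; the specific material values below are illustrative."""
    eps_par = fill_fraction * eps_metal + (1 - fill_fraction) * eps_diel
    eps_perp = 1.0 / (fill_fraction / eps_metal + (1 - fill_fraction) / eps_diel)
    return eps_par, eps_perp

eps_metal = -10.0 + 5.0j       # illustrative lossy metal permittivity
eps_diel = 2.25                # e.g., a glass-like dielectric
for f in np.linspace(0.15, 0.20, 6):
    eps_par, eps_perp = layered_effective_permittivity(eps_metal, eps_diel, f)
    print(f"f={f:.3f}  eps_par={eps_par:.3f}  eps_perp={eps_perp:.3f}")
# Near the fill fraction where Re(eps_par) crosses zero, the stack behaves as
# a lossy anisotropic epsilon-near-zero medium for in-plane fields.
```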
A comprehensive review of lossy mode resonance-based fiber optic sensors
NASA Astrophysics Data System (ADS)
Wang, Qi; Zhao, Wan-Ming
2018-01-01
This review paper presents the achievements and present developments in lossy mode resonance-based optical fiber sensors in different sensing fields, such as physical, chemical and biological, and briefly looks forward to their future development trends in the eyes of the authors. Lossy mode resonance (LMR) is a relatively new physical optics phenomenon put forward in recent years. Fiber sensors utilizing LMR offer a new way to improve sensing capability. LMR fiber sensors have diverse structures such as D-shaped, cladding-off, fiber tip, U-shaped and tapered fiber structures. Major applications of LMR sensors include refraction sensors and biosensors. LMR-based fiber sensors have attracted considerable research and development interest because of their distinct advantages such as high sensitivity and label-free measurement. This kind of sensor is also of academic interest, and many novel ideas continue to be developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Wu-chi; Crawfis, Roger, Weide, Bruce
2002-02-01
In this project, the authors propose the research, development, and distribution of a stackable component-based multimedia streaming protocol middleware service. The goals of this stackable middleware interface include: (1) The middleware service will provide application writers and scientists easy to use interfaces that support their visualization needs. (2) The middleware service will support a variety of image compression modes. Currently, many of the network adaptation protocols for video have been developed with DCT-based compression algorithms like H.261, MPEG-1, or MPEG-2 in mind. It is expected that with advanced scientific computing applications the lossy compression of the image data will be unacceptable in certain instances. The middleware service will support several in-line lossless compression modes for error-sensitive scientific visualization data. (3) The middleware service will support two different types of streaming video modes: one for interactive collaboration of scientists and a stored video streaming mode for viewing prerecorded animations. The use of two different streaming types will allow the quality of the video delivered to the user to be maximized. Most importantly, this service will happen transparently to the user (with some basic controls exported to the user for domain specific tweaking). In the spirit of layered network protocols (like ISO and TCP/IP), application writers should not have to know a large amount about lower level network details. Currently, many example video streaming players have their congestion management techniques tightly integrated into the video player itself and are, for the most part, ''one-off'' applications. As more networked multimedia and video applications are written in the future, a larger percentage of these programmers and scientists will most likely know little about the underlying networking layer. By providing a simple, powerful, and semi-transparent middleware layer, the successful completion of this project will help serve as a catalyst to support future video-based applications, particularly those of advanced scientific computing applications.
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sales, J. S.; Silva, L. F. da; Almeida, N. G. de
2011-03-15
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
NASA Astrophysics Data System (ADS)
Sales, J. S.; da Silva, L. F.; de Almeida, N. G.
2011-03-01
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.
NASA Astrophysics Data System (ADS)
Stossel, Bryan Joseph
1995-01-01
Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.
Effect of Loss on Multiplexed Single-Photon Sources (Open Access Publisher’s Version)
2015-04-28
lossy components on near- and long-term experimental goals, we simulate the multiplexed sources when used for many-photon state generation under various...efficient integer factorization and digital quantum simulation [7, 8], which relies critically on the development of a high-performance, on-demand photon ...SPDC) or spontaneous four-wave mixing: parametric processes which use a pump laser in a nonlinear material to spontaneously generate photon pairs
NASA Astrophysics Data System (ADS)
Ma, Long; Zhao, Deping
2011-12-01
Spectral imaging technology has been used mostly in remote sensing, but has recently been extended to new areas requiring high-fidelity color reproduction, such as telemedicine and e-commerce. These spectral imaging systems are important because they offer improved color reproduction quality not only for a standard observer under a particular illumination, but for any other individual exhibiting normal color vision capability under another illumination. A means of browsing the archives is needed. In this paper, the authors present a new spectral image browsing architecture. The architecture for browsing is expressed as follows: (1) The spectral domain of the spectral image is reduced with the PCA transform. As a result of the PCA transform the eigenvectors and the eigenimages are obtained. (2) The eigenimages are quantized with the original bit depth of the spectral image (e.g., if the spectral image is originally 8-bit, the eigenimages are quantized to 8 bits), and 32-bit floating-point numbers are used for the eigenvectors. (3) The first eigenimage is losslessly compressed by JPEG-LS, and the other eigenimages are lossy compressed by the wavelet-based SPIHT algorithm. For experimental evaluation, the following measures were used: PSNR as the measurement of spectral accuracy, and ΔE for the evaluation of color reproducibility, with standard illuminant D65 used as the light source. To test the proposed method, we used the FOREST and CORAL spectral image databases, which contain 12 and 10 spectral images, respectively. The images were acquired in the range of 403-696 nm. The size of the images was 128×128, the number of bands was 40 and the resolution was 8 bits per sample. Our experiments show the proposed compression method is suitable for browsing, i.e., for visual purposes.
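A compact sketch of the browsing pipeline is given below, with zlib and coarse requantization standing in for JPEG-LS and SPIHT (which are not reimplemented); array sizes and the number of retained components are illustrative.

```python
import zlib
import numpy as np

def browse_compress(cube, n_components=3):
    """Sketch of the browsing pipeline described above: PCA along the
    spectral axis, quantization of the eigenimages to the original bit
    depth, lossless compression of the first eigenimage and crude lossy
    compression of the rest."""
    bands, h, w = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    eigvecs = u[:, :n_components]                       # kept as 32-bit floats
    scores = eigvecs.T @ (x - mean)                     # eigenimages
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    quant = np.round((scores - lo) / (hi - lo) * 255).astype(np.uint8)
    streams = [zlib.compress(quant[0].tobytes(), 9)]    # lossless first image
    for img in quant[1:]:
        coarse = (img // 16).astype(np.uint8)           # crude lossy stand-in
        streams.append(zlib.compress(coarse.tobytes(), 9))
    return eigvecs.astype(np.float32), (lo, hi), streams

cube = np.random.randint(0, 256, size=(40, 128, 128)).astype(np.uint8)
eigvecs, scale, streams = browse_compress(cube)
print([len(s) for s in streams])
```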
Omnidirectional, broadband light absorption using large-area, ultrathin lossy metallic film coatings
NASA Astrophysics Data System (ADS)
Li, Zhongyang; Palacios, Edgar; Butun, Serkan; Kocer, Hasan; Aydin, Koray
2015-10-01
Resonant absorbers based on nanostructured materials are promising for a variety of applications including optical filters, thermophotovoltaics, thermal emitters, and hot-electron collection. One of the significant challenges for such micro/nanoscale-featured media or surfaces, however, is the costly lithographic processing required for structural patterning, which restricts industrial production of complex designs. Here, we demonstrate lithography-free, broadband, polarization-independent optical absorbers based on a three-layer ultrathin film composed of subwavelength chromium (Cr) and oxide film coatings. We have measured almost perfect absorption as high as 99.5% across the entire visible regime and beyond (400-800 nm). In addition to near-ideal absorption, our absorbers exhibit omnidirectional independence for incidence angles over ±60 degrees. The broadband absorbers introduced in this study perform better than nanostructured plasmonic absorber counterparts in terms of bandwidth, polarization and angle independence. The performance of such "blackbody" samples based on uniform thin-film coatings is attributed to the extremely low quality factor of asymmetric, highly lossy Fabry-Perot cavities. Such broadband absorber designs are ultrathin compared to carbon nanotube based black materials, and do not require lithographic processes. This demonstration redirects broadband super absorber design toward extreme simplicity, higher performance and cost-effective manufacturing for practical industrial production.
Aroul, A L Praveen; Bhatia, Dinesh
2011-01-01
Continued miniaturization of electronic devices and technological advancements in wireless communications have made wearable body-centric telemedicine systems viable. Antennas play a crucial role in characterizing the efficiency and reliability of these systems. Performance characteristics such as the radiation pattern, gain, and efficiency of the antennas are adversely affected by the presence of lossy human body tissues. In this paper we investigate the above-mentioned performance parameters and radio frequency transmission properties of wire and planar structures operating in the ISM frequency band of 2.40-2.50 GHz in the proximity of the human body.
1987-03-01
the VLSI Implementation of the Electromagnetic Field of an Arbitrary Current Source" B.A. Hoyt, A.J. Terzuoli, A.V. Lair, Air Force Institute of...method is that cavities of arbitrary three dimensional shapes and nonuniform lossy materials can be analyzed. THEORY OF VECTOR POTENTIAL FINITE...elements used to model the cavity. The method includes the effects of nonuniform lossy materials and can analyze cavities of a wide variety of two- and
Solution of the lossy nonlinear Tricomi equation with application to sonic boom focusing
NASA Astrophysics Data System (ADS)
Salamone, Joseph A., III
Sonic boom focusing theory has been augmented with new terms that account for mean flow effects in the direction of propagation and also for atmospheric absorption/dispersion due to molecular relaxation of oxygen and nitrogen. The newly derived model equation was numerically implemented in a computer code. The computer code was numerically validated using a spectral solution for nonlinear propagation of a sinusoid through a lossy homogeneous medium. An additional numerical check was performed to verify the linear diffraction component of the code calculations. The computer code was experimentally validated using measured sonic boom focusing data from the NASA-sponsored Superboom Caustic Analysis and Measurement Program (SCAMP) flight test. The computer code was in good agreement with both the numerical and experimental validation. The newly developed code was applied to examine the focusing of a NASA low-boom demonstration vehicle concept. The resulting pressure field was calculated for several supersonic climb profiles. The shaping efforts designed into the signatures were still somewhat evident despite the effects of sonic boom focusing.
Shahbazi, Mohammad; Saranlı, Uluç; Babuška, Robert; Lopes, Gabriel A D
2016-12-05
This paper introduces approximate time-domain solutions to the otherwise non-integrable double-stance dynamics of the 'bipedal' spring-loaded inverted pendulum (B-SLIP) in the presence of non-negligible damping. We first introduce an auxiliary system whose behavior under certain conditions is approximately equivalent to the B-SLIP in double-stance. Then, we derive approximate solutions to the dynamics of the new system following two different methods: (i) updated-momentum approach that can deal with both the lossy and lossless B-SLIP models, and (ii) perturbation-based approach following which we only derive a solution to the lossless case. The prediction performance of each method is characterized via a comprehensive numerical analysis. The derived representations are computationally very efficient compared to numerical integrations, and, hence, are suitable for online planning, increasing the autonomy of walking robots. Two application examples of walking gait control are presented. The proposed solutions can serve as instrumental tools in various fields such as control in legged robotics and human motion understanding in biomechanics.
Modeling and Experimental Validation for 3D mm-wave Radar Imaging
NASA Astrophysics Data System (ADS)
Ghazi, Galia
As the problem of identifying suicide bombers wearing explosives concealed under clothing becomes increasingly important, it becomes essential to detect suspicious individuals at a distance. Systems which employ multiple sensors to determine the presence of explosives on people are being developed. Their functions include observing and following individuals with intelligent video, identifying explosives residues or heat signatures on the outer surface of their clothing, and characterizing explosives using penetrating X-rays, terahertz waves, neutron analysis, or nuclear quadrupole resonance. At present, mm-wave radar is the only modality that can both penetrate and sense beneath clothing at a distance of 2 to 50 meters without causing physical harm. Unfortunately, current mm-wave radar systems capable of performing high-resolution, real-time imaging require using arrays with a large number of transmitting and receiving modules; therefore, these systems present undesired large size, weight and power consumption, as well as extremely complex hardware architecture. The overarching goal of this thesis is the development and experimental validation of a next generation inexpensive, high-resolution radar system that can distinguish security threats hidden on individuals located at 2-10 meters range. In pursuit of this goal, this thesis proposes the following contributions: (1) Development and experimental validation of a new current-based, high-frequency computational method to model large scattering problems (hundreds of wavelengths) involving lossy, penetrable and multi-layered dielectric and conductive structures, which is needed for an accurate characterization of the wave-matter interaction and EM scattering in the target region; (2) Development of combined Norm-1, Norm-2 regularized imaging algorithms, which are needed for enhancing the resolution of the images while using a minimum number of transmitting and receiving antennas; (3) Implementation and experimental validation of new calibration techniques, which are needed for coherent imaging with multistatic configurations; and (4) Investigation of novel compressive antennas, which spatially modulate the wavefield in order to enhance the information transfer efficiency between sampling and imaging regions and use of Compressive Sensing algorithms.
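As a generic illustration of Norm-1 regularized imaging of a sparse scene, the sketch below implements plain ISTA on a toy problem; the sensing matrix, regularization weight, and iteration count are assumptions, and the combined Norm-1/Norm-2 algorithm developed in the thesis is not reproduced.

```python
import numpy as np

def ista(A, y, lam=1.0, step=None, n_iter=400):
    """Iterative shrinkage-thresholding (ISTA) for
    min 0.5*||A x - y||^2 + lam*||x||_1, a generic Norm-1 regularized
    reconstruction sketch. A maps the reflectivity image x to field data y."""
    if step is None:
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)    # 1 / Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)
        z = x - step * grad
        mag = np.abs(z)
        # complex soft threshold: shrink magnitudes, keep phases
        x = np.where(mag > 0, z / np.maximum(mag, 1e-12), 0) * \
            np.maximum(mag - step * lam, 0.0)
    return x

# Toy problem: a sparse "scene" observed through a random sensing matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 256)) + 1j * rng.standard_normal((80, 256))
x_true = np.zeros(256, dtype=complex)
x_true[[10, 100, 200]] = [2.0, -1.5j, 1.0 + 1.0j]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista(A, y)
print(np.round(np.abs(x_hat[[10, 100, 200]]), 2))   # magnitudes at the true scatterer locations
```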
Numerical methods for analyzing electromagnetic scattering
NASA Technical Reports Server (NTRS)
Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.
1985-01-01
Attenuation properties of the normal modes in an overmoded waveguide coated with a lossy material were analyzed. It is found that the low-order modes can be significantly attenuated even with a thin layer of coating if the coating material is not too lossy. A thinner layer of coating is required for large attenuation of the low-order modes if the coating material is magnetic rather than dielectric. The Radar Cross Section (RCS) from an uncoated circular guide terminated by a perfect electric conductor was calculated and compared with available experimental data. It is confirmed that the interior irradiation contributes to the RCS. The equivalent-current method based on the geometrical theory of diffraction (GTD) was chosen for the calculation of the contribution from the rim diffraction. The RCS reduction from a coated circular guide terminated by a PEC and planned schemes for the experiments are included. The waveguide coated with a lossy magnetic material is suggested as a substitute for the corrugated waveguide.
Analysis of the electromagnetic scattering from an inlet geometry with lossy walls
NASA Technical Reports Server (NTRS)
Myung, N. H.; Pathak, P. H.; Chunang, C. D.
1985-01-01
One of the primary goals is to develop an approximate but sufficiently accurate analysis for the problem of electromagnetic (EM) plane wave scattering by an open ended, perfectly-conducting, semi-infinite hollow circular waveguide (or duct) with a thin, uniform layer of lossy or absorbing material on its inner wall, and with a simple termination inside. The less difficult but useful problem of the EM scattering by a two-dimensional (2-D), semi-infinite parallel plate waveguide with an impedance boundary condition on the inner walls was chosen initially for analysis. The impedance boundary condition in this problem serves to model a thin layer of lossy dielectric/ferrite coating on the otherwise perfectly-conducting interior waveguide walls. An approximate but efficient and accurate ray solution was obtained recently. That solution is presently being extended to the case of a moderately thick dielectric/ferrite coating on the walls so as to be valid for situations where the impedance boundary condition may not remain sufficiently accurate.
NASA Technical Reports Server (NTRS)
Beggs, John H.
2000-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been extended to treat lossy dielectric and magnetic materials. This paper examines different methodologies for treatment of the electric loss term in the Linear Bicharacteristic Scheme for computational electromagnetics. Several different treatments of the electric loss term using the LBS are explored and compared on one-dimensional model problems involving reflection from lossy dielectric materials on both uniform and nonuniform grids. Results using these LBS implementations are also compared with the FDTD method for convenience.
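For comparison purposes, the standard FDTD handling of the σE conduction term (the semi-implicit, time-averaged update) is sketched below in one dimension; the grid, material, and source parameters are illustrative, and this is the reference scheme rather than any of the LBS loss-term treatments discussed in the paper.

```python
import numpy as np

# Standard 1-D FDTD with the semi-implicit (time-averaged) treatment of the
# sigma*E conduction term; grid and material values are illustrative only.
c0, eps0, mu0 = 2.998e8, 8.854e-12, 4e-7 * np.pi
nx, nt = 400, 900
dx = 1e-3
dt = 0.5 * dx / c0                       # Courant number 0.5

eps = np.full(nx, eps0)
sigma = np.zeros(nx)
eps[200:] = 4.0 * eps0                   # lossy dielectric half-space
sigma[200:] = 0.05                       # S/m

loss = sigma * dt / (2.0 * eps)
ca = (1.0 - loss) / (1.0 + loss)         # decay factor from the sigma*E term
cb = (dt / (eps * dx)) / (1.0 + loss)    # curl coefficient

ez = np.zeros(nx)
hy = np.zeros(nx)
for n in range(nt):
    hy[:-1] += dt / (mu0 * dx) * (ez[1:] - ez[:-1])
    ez[1:-1] = ca[1:-1] * ez[1:-1] + cb[1:-1] * (hy[1:-1] - hy[:-2])
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print(f"peak |Ez| currently inside the lossy half-space: {np.abs(ez[200:]).max():.3e}")
```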
Current distribution on a cylindrical antenna with parallel orientation in a lossy magnetoplasma
NASA Technical Reports Server (NTRS)
Klein, C. A.; Klock, P. W.; Deschamps, G. A.
1972-01-01
The current distribution and impedance of a thin cylindrical antenna with parallel orientation to the static magnetic field of a lossy magnetoplasma are calculated with the method of moments. The electric field produced by an infinitesimal current source is first derived. Results are presented for a wide range of plasma parameters. Reasonable answers are obtained for all cases except the overdense hyperbolic case. A discussion of the numerical stability is included which applies not only to this problem but also to other applications of the method of moments.
Measurement of the properties of lossy materials inside a finite conducting cylinder
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.; Caldecott, R.
1988-01-01
Broadband, swept frequency measurement techniques were investigated for the evaluation of the electrical performance of thin, high temperature material coatings. Reflection and transmission measurements using an HP8510B Network Analyzer were developed for an existing high temperature test rig at NASA Lewis Research Center. Reflection measurements will be the initial approach used due to fixture simplicity, even though surface wave transmission measurements would be more sensitive. The minimum goal is to monitor the electrical change of the material's performance as a function of temperature. If possible, the material's constitutive parameters, epsilon and mu, will be found.
Method and apparatus for powering an electrodeless lamp with reduced radio frequency interference
Simpson, James E.
1999-01-01
An electrodeless lamp waveguide structure includes tuned absorbers for spurious RF signals. A lamp waveguide with an integral frequency selective attenuation includes resonant absorbers positioned within the waveguide to absorb spurious out-of-band RF energy. The absorbers have a negligible effect on energy at the selected frequency used to excite plasma in the lamp. In a first embodiment, one or more thin slabs of lossy magnetic material are affixed to the sidewalls of the waveguide at approximately one quarter wavelength of the spurious signal from an end wall of the waveguide. The positioning of the lossy material optimizes absorption of power from the spurious signal. In a second embodiment, one or more thin slabs of lossy magnetic material are used in conjunction with band rejection waveguide filter elements. In a third embodiment, one or more microstrip filter elements are tuned to the frequency of the spurious signal and positioned within the waveguide to couple and absorb the spurious signal's energy. All three embodiments absorb negligible energy at the selected frequency and so do not significantly diminish the energy efficiency of the lamp.
Method and apparatus for powering an electrodeless lamp with reduced radio frequency interference
Simpson, J.E.
1999-06-08
An electrodeless lamp waveguide structure includes tuned absorbers for spurious RF signals. A lamp waveguide with an integral frequency selective attenuation includes resonant absorbers positioned within the waveguide to absorb spurious out-of-band RF energy. The absorbers have a negligible effect on energy at the selected frequency used to excite plasma in the lamp. In a first embodiment, one or more thin slabs of lossy magnetic material are affixed to the sidewalls of the waveguide at approximately one quarter wavelength of the spurious signal from an end wall of the waveguide. The positioning of the lossy material optimizes absorption of power from the spurious signal. In a second embodiment, one or more thin slabs of lossy magnetic material are used in conjunction with band rejection waveguide filter elements. In a third embodiment, one or more microstrip filter elements are tuned to the frequency of the spurious signal and positioned within the waveguide to couple and absorb the spurious signal's energy. All three embodiments absorb negligible energy at the selected frequency and so do not significantly diminish the energy efficiency of the lamp. 18 figs.
Unsteady Analysis of Inlet-Compressor Acoustic Interactions Using Coupled 3-D and 1-D CFD Codes
NASA Technical Reports Server (NTRS)
Suresh, A.; Cole, G. L.
2000-01-01
It is well known that the dynamic response of a mixed compression supersonic inlet is very sensitive to the boundary condition imposed at the subsonic exit (engine face) of the inlet. In previous work, a 3-D computational fluid dynamics (CFD) inlet code (NPARC) was coupled at the engine face to a 3-D turbomachinery code (ADPAC) simulating an isolated rotor and the coupled simulation used to study the unsteady response of the inlet. The main problem with this approach is that the high fidelity turbomachinery simulation becomes prohibitively expensive as more stages are included in the simulation. In this paper, an alternative approach is explored, wherein the inlet code is coupled to a lesser fidelity 1-D transient compressor code (DYNTECC) which simulates the whole compressor. The specific application chosen for this evaluation is the collapsing bump experiment performed at the University of Cincinnati, wherein reflections of a large-amplitude acoustic pulse from a compressor were measured. The metrics for comparison are the pulse strength (time integral of the pulse amplitude) and wave form (shape). When the compressor is modeled by stage characteristics the computed strength is about ten percent greater than that for the experiment, but the wave shapes are in poor agreement. An alternate approach that uses a fixed rise in duct total pressure and temperature (so-called 'lossy' duct) to simulate a compressor gives good pulse shapes but the strength is about 30 percent low.
Robust video transmission with distributed source coded auxiliary channel.
Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan
2009-12-01
We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.
Quantum optics of lossy asymmetric beam splitters.
Uppu, Ravitej; Wolterink, Tom A W; Tentrup, Tristan B H; Pinkse, Pepijn W H
2016-07-25
We theoretically investigate quantum interference of two single photons at a lossy asymmetric beam splitter, the most general passive 2×2 optical circuit. The losses in the circuit result in a non-unitary scattering matrix with a non-trivial set of constraints on the elements of the scattering matrix. Our analysis using the noise operator formalism shows that the loss allows tunability of quantum interference to an extent not possible with a lossless beam splitter. Our theoretical studies support the experimental demonstrations of programmable quantum interference in highly multimodal systems such as opaque scattering media and multimode fibers.
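The two-photon interference at a general 2×2 circuit can be sketched by evaluating the permanent of the scattering matrix; the sub-unitary matrix below is an arbitrary illustrative choice of a lossy asymmetric splitter, not one of the configurations analyzed in the paper.

```python
import numpy as np

def coincidence_probability(S):
    """Probability of detecting one photon at each output of a (possibly
    lossy) 2x2 linear-optical circuit with one photon at each input, given
    by the squared modulus of the permanent of the scattering matrix.
    Loss shows up as a sub-unitary S (singular values <= 1)."""
    perm = S[0, 0] * S[1, 1] + S[0, 1] * S[1, 0]
    return abs(perm) ** 2

# Lossless balanced beam splitter: complete destructive interference (HOM dip).
S_lossless = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
print(coincidence_probability(S_lossless))          # ~0.0

# A lossy asymmetric splitter (illustrative sub-unitary matrix): the relative
# amplitudes and phases of its elements tune the interference away from zero.
S_lossy = np.array([[0.6, 0.3j], [0.5j, 0.4]])
print(coincidence_probability(S_lossy))
```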
Theory and Circuit Model for Lossy Coaxial Transmission Line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genoni, T. C.; Anderson, C. N.; Clark, R. E.
2017-04-01
The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
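For context, the low-loss limit of standard transmission-line theory (the familiar textbook approximations, not the paper's new wide-band formulas) gives

\[
Z_0 \approx \sqrt{\frac{L'}{C'}}, \qquad
\alpha \approx \frac{R'(f)}{2 Z_0} + \frac{G'(f)\, Z_0}{2}, \qquad
R'(f) \propto \sqrt{f}\ \ \text{(skin effect)},
\]

where \(R'\), \(L'\), \(G'\), and \(C'\) are the per-unit-length resistance, inductance, conductance, and capacitance of the coaxial line.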
Image Size Variation Influence on Corrupted and Non-viewable BMP Image
NASA Astrophysics Data System (ADS)
Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah
2017-08-01
Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are lossy and compress well, which speeds up transmission. However, corrupted JPEG images are hard to recover because the corruption point is difficult to determine. Bitmap (BMP) images are often preferred in image processing over other formats because a BMP file contains all of the image information in a simple layout. Therefore, to investigate the corruption point in a JPEG, the file is first converted into BMP format. Nevertheless, many factors can corrupt a BMP image, such as changes to the recorded image size that render the file non-viewable. In this paper, experiments show that the BMP file size influences the rendered image under three conditions: deletion, replacement, and insertion of data. From the experiments we learned that correcting the file size can restore a viewable, if partial, image, which can then be investigated further to identify the corruption point.
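As a minimal sketch of the kind of file-size correction discussed above (illustrative only, not the authors' tool; the file name is hypothetical), the 4-byte little-endian file-size field at offset 2 of a BMP header can be rewritten to match the actual on-disk length:

```python
import struct
from pathlib import Path

def fix_bmp_size_field(path: str) -> None:
    """Rewrite the BMP header's file-size field (bytes 2-5, little-endian)
    so that it matches the actual on-disk size of the file."""
    data = bytearray(Path(path).read_bytes())
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    actual_size = len(data)
    recorded_size = struct.unpack_from("<I", data, 2)[0]
    if recorded_size != actual_size:
        struct.pack_into("<I", data, 2, actual_size)
        Path(path).write_bytes(data)
        print(f"size field corrected: {recorded_size} -> {actual_size}")
    else:
        print("size field already consistent")

# Example (hypothetical file name):
# fix_bmp_size_field("evidence_image.bmp")
```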
Kristensen, Jesper T; Houmann, Andreas; Liu, Xiaomin; Turchinovich, Dmitry
2008-06-23
We report on highly reproducible low-loss fusion splicing of polarization-maintaining single-mode fibers (PM-SMFs) and hollow-core photonic crystal fibers (HC-PCFs). The PM-SMF-to-HC-PCF splices are characterized by the loss of 0.62 +/- 0.24 dB, and polarization extinction ratio of 19 +/- 0.68 dB. The reciprocal HC-PCF-to-PM-SMF splice loss is found to be 2.19 +/- 0.33 dB, which is caused by the mode evolution in HC-PCF. The return loss in both cases was measured to be -14 dB. We show that a splice defect is caused by the HC-PCF cleave defect, and the lossy splice can be predicted at an early stage of the splicing process. We also demonstrate that the higher splice loss compromises the PM properties of the splice. Our splicing technique was successfully applied to the realization of a low-loss, environmentally stable monolithic PM fiber laser pulse compressor, enabling direct end-of-the-fiber femtosecond pulse delivery.
Multiferroic properties in NdFeO3-PbTiO3 solid solutions
NASA Astrophysics Data System (ADS)
Kumar, Sunil; Pal, Jaswinder; Kaur, Shubhpreet; Agrawal, P.; Singh, Mandeep; Singh, Anupinder
2018-05-01
The x(NdFeO3)-(1-x)(PbTiO3) solid solution with x = 0.2 was prepared by the solid-state reaction route. X-ray diffraction (XRD) data reveal single-phase formation. The microstructure shows grain growth with reduced porosity, and energy-dispersive analysis confirms that the elements are present in stoichiometric proportion. The polarization vs. electric field loop establishes ferroelectric-type behavior that is, however, lossy in nature; this lossy character may be due to a large leakage current in the solid solution. The magnetization vs. magnetic field plot exhibits an unsaturated hysteresis loop, indicating that the sample is not purely ferromagnetic.
NASA Astrophysics Data System (ADS)
Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan
2016-09-01
In this paper, we develop, for the first time, a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs) to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches such as the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from these weaknesses. To validate the method, wave propagation in various linear, nonlinear, lossless, and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and with the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that the developed numerical approach is not only computationally more accurate and efficient than conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.
Femtomolar Detection by Nanocoated Fiber Label-Free Biosensors.
Chiavaioli, Francesco; Zubiate, Pablo; Del Villar, Ignacio; Zamarreño, Carlos R; Giannetti, Ambra; Tombelli, Sara; Trono, Cosimo; Arregui, Francisco J; Matias, Ignacio R; Baldini, Francesco
2018-05-25
The advent of optical fiber-based biosensors combined with that of nanotechnologies has provided an opportunity for developing in situ, portable, lightweight, versatile, and high-performance optical sensing platforms. We report on the generation of lossy mode resonances by the deposition of nanometer-thick metal oxide films on optical fibers, which makes it possible to measure precisely and accurately the changes in optical properties of the fiber-surrounding medium with very high sensitivity compared to other technology platforms, such as long period gratings or surface plasmon resonances, the gold standard in label-free and real-time biomolecular interaction analysis. This property, combined with the application of specialty structures such as D-shaped fibers, permits enhancing the light-matter interaction. SEM and TEM imaging together with the X-EDS tool have been utilized to characterize the two films used, i.e., indium tin oxide and tin dioxide. Moreover, the experimental transmission spectra obtained after the deposition of the nanocoatings have been numerically corroborated by means of wave propagation methods. With the use of a conventional wavelength interrogation system and ad hoc developed microfluidics, the shift of the lossy mode resonance can be reliably recorded in response to very low analyte concentrations. Repeated experiments confirm a substantial leap in performance thanks to the capability to detect femtomolar concentrations in human serum, improving the detection limit by 3 orders of magnitude when compared with other fiber-based configurations. The biosensor has been regenerated several times by injecting sodium dodecyl sulfate, which demonstrates that the sensor can be reused.
Collective attacks and unconditional security in continuous variable quantum key distribution.
Grosshans, Frédéric
2005-01-21
We present here an information theoretic study of Gaussian collective attacks on the continuous variable key distribution protocols based on Gaussian modulation of coherent states. These attacks, overlooked in previous security studies, give a finite advantage to the eavesdropper in the experimentally relevant lossy channel, but are not powerful enough to reduce the range of the reverse reconciliation protocols. Secret key rates are given for the ideal case where Bob performs optimal collective measurements, as well as for the realistic cases where he performs homodyne or heterodyne measurements. We also apply the generic security proof of Christiandl et al. to obtain unconditionally secure rates for these protocols.
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio
2014-02-01
High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through the use of extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This work would be one of the earliest studies in this cutting-edge area that reports benchmark evaluation results for the effects of datagram loss on SHVC picture quality and offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang
We consider tracking of a target with elliptical nonlinear constraints on its motion dynamics. The state estimates are generated by sensors and sent over long-haul links to a remote fusion center for fusion. We show that the constraints can be projected onto the known ellipse and hence incorporated into the estimation and fusion process. In particular, two methods based on (i) direct connection to the center, and (ii) shortest distance to the ellipse are discussed. A tracking example is used to illustrate the tracking performance using projection-based methods with various fusers in the lossy long-haul tracking environment.
A Class of Broad-Band Dissipative Matching Networks Designed on an Insertion-Loss Basis
1952-01-25
... latitude for the present study. Note that although low-pass performance is usually not desirable in microwave work, many loads encountered in ... because there has not been sufficient study of the properties of a dissipative 4-pole. For this ... admittance is placed in shunt. As long as the resistance r1 or the conductance g1 is less than 1, the lossy networks are physically realizable. The ...
Systematic network coding for two-hop lossy transmissions
NASA Astrophysics Data System (ADS)
Li, Ye; Blostein, Steven; Chan, Wai-Yip
2015-12-01
In this paper, we consider network transmissions over a single or multiple parallel two-hop lossy paths. These scenarios occur in applications such as sensor networks or WiFi offloading. Random linear network coding (RLNC), where previously received packets are re-encoded at intermediate nodes and forwarded, is known to be a capacity-achieving approach for these networks. However, a major drawback of RLNC is its high encoding and decoding complexity. In this work, a systematic network coding method is proposed. We show through both analysis and simulation that the proposed method achieves higher end-to-end rate as well as lower computational cost than RLNC for finite field sizes and finite-sized packet transmissions.
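A minimal sketch of the systematic idea described above (illustrative only; it uses GF(2)/XOR coding rather than the larger finite fields analysed in the paper, and assumes equal-length packets): the source packets are forwarded uncoded first, and only the extra redundancy packets are random linear combinations.

```python
import random

def systematic_encode(packets: list[bytes], n_coded: int, seed: int = 0):
    """Systematic network code over GF(2): emit the K source packets
    unmodified, then n_coded random XOR combinations of them.
    Returns a list of (coding vector, payload) pairs."""
    rng = random.Random(seed)
    k = len(packets)
    out = [(tuple(int(i == j) for j in range(k)), pkt)   # identity coefficients
           for i, pkt in enumerate(packets)]
    for _ in range(n_coded):
        coeffs = tuple(rng.randint(0, 1) for _ in range(k))
        combo = bytes(len(packets[0]))
        for c, pkt in zip(coeffs, packets):
            if c:
                combo = bytes(a ^ b for a, b in zip(combo, pkt))
        out.append((coeffs, combo))
    return out

# Example: 4 source packets of 8 bytes each, plus 2 redundancy packets.
src = [bytes([i] * 8) for i in range(4)]
coded = systematic_encode(src, n_coded=2)
```

When no packets are lost, the systematic part needs no decoding at all; only losses require Gaussian elimination over the coding vectors, which is where the complexity saving over full RLNC comes from.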
Loss compensation symmetry in dimers made of gain and lossy nanoparticles
NASA Astrophysics Data System (ADS)
Klimov, V. V.; Zabkov, I. V.; Guzatov, D. V.; Vinogradov, A. P.
2018-03-01
The eigenmodes in a two-dimensional dimer made of gain and lossy nanoparticles have been investigated within an exact analytical approach. It has been shown that there are eigenmodes for which all Joule losses are exactly compensated by the gain. Among such solutions there are solutions with a new type of symmetry, which we refer to as loss compensation symmetry, as well as the well-known parity-time (PT) symmetric solutions. Unlike PT symmetric ones, the modes with loss compensation symmetry allow one to achieve full loss compensation with significantly less gain than in the case of PT symmetry. This effect paves the way to new loss compensation methods in optics.
Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart
2008-11-21
This paper presents a stabilized Bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional, successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device is evaluated. The simulation results show the numerical accuracy and superior performance of the method.
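For readers unfamiliar with the solver itself, a generic sketch (not the authors' impedance-method code; the tridiagonal system below is a hypothetical stand-in for the discretized field equations) of replacing an SOR-style iteration with a stabilized bi-conjugate gradient solve of a sparse system A x = b:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Hypothetical sparse system standing in for the impedance-method equations.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b, maxiter=500)
if info == 0:
    print("converged, residual norm:", np.linalg.norm(A @ x - b))
else:
    print("BiCGstab did not converge, info =", info)
```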
Collins, Liam; Belianinov, Alex; Somnath, Suhas; Balke, Nina; Kalinin, Sergei V; Jesse, Stephen
2016-08-12
Kelvin probe force microscopy (KPFM) has provided deep insights into the local electronic, ionic and electrochemical functionalities in a broad range of materials and devices. In classical KPFM, which utilizes heterodyne detection and closed loop bias feedback, the cantilever response is down-sampled to a single measurement of the contact potential difference (CPD) per pixel. This level of detail, however, is insufficient for materials and devices involving bias and time dependent electrochemical events; or at solid-liquid interfaces, where non-linear or lossy dielectrics are present. Here, we demonstrate direct recovery of the bias dependence of the electrostatic force at high temporal resolution using General acquisition Mode (G-Mode) KPFM. G-Mode KPFM utilizes high speed detection, compression, and storage of the raw cantilever deflection signal in its entirety at high sampling rates. We show how G-Mode KPFM can be used to capture nanoscale CPD and capacitance information with a temporal resolution much faster than the cantilever bandwidth, determined by the modulation frequency of the AC voltage. In this way, G-Mode KPFM offers a new paradigm to study dynamic electric phenomena in electroactive interfaces as well as a promising route to extend KPFM to the solid-liquid interface.
Motamedi, Mohammad; Müller, Rolf
2014-06-01
The biosonar beampatterns found across different bat species are highly diverse in terms of global and local shape properties such as overall beamwidth or the presence, location, and shape of multiple lobes. It may be hypothesized that some of this variability reflects evolutionary adaptation. To investigate this hypothesis, the present work has searched for patterns in the variability across a set of 283 numerical predictions of emission and reception beampatterns from 88 bat species belonging to four major families (Rhinolophidae, Hipposideridae, Phyllostomidae, Vespertilionidae). This was done using a lossy compression of the beampatterns that utilized real spherical harmonics as basis functions. The resulting vector representations showed differences between the families as well as between emission and reception. These differences existed in the means of the power spectra as well as in their distribution. The distributions were characterized in a low dimensional space found through principal component analysis. The distinctiveness of the beampatterns across the groups was corroborated by pairwise classification experiments that yielded correct classification rates between ~85 and ~98%. Beamwidth was a major factor but not the sole distinguishing feature in these classification experiments. These differences could be seen as an indication of adaptive trends at the beampattern level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandford, M.T. II; Bradley, J.N.; Handel, T.G.
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51, written for the IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-palette images in Microsoft® bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy', compression algorithms, for example ones based on the discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one hundred bytes, and it is derived from the original host data by an analysis algorithm.
Channelling information flows from observation to decision; or how to increase certainty
NASA Astrophysics Data System (ADS)
Weijs, S. V.
2015-12-01
To make adequate decisions in an uncertain world, information needs to reach the decision problem, to enable overseeing the full consequences of each possible decision. On its way from the physical world to a decision problem, information is transferred through the physical processes that influence the sensor, then through processes that happen in the sensor, and onward through wires or electromagnetic waves. Over the last decade, most information has come to be digitized at some point. From the moment of digitization, information can in principle be transferred losslessly. Information about the physical world is often also stored, sometimes in compressed form, such as physical laws, concepts, or models of specific hydrological systems. It is important to note, however, that all information about a physical system eventually has to originate from observation (although inevitably coloured by some prior assumptions). This colouring makes the compression lossy, but it is effectively the only way to make use of similarities in time and space that enable predictions while measuring only a few macro-states of a complex hydrological system. Adding physical process knowledge to a hydrological model can thus be seen as a convenient way to transfer information from observations made at a different time or place, to make predictions about another situation, assuming the same dynamics are at work. The key challenge to achieve more certainty in hydrological prediction can therefore be formulated as a challenge to tap and channel information flows from the environment. For tapping more information flows, new measurement techniques, large-scale campaigns, historical data sets, and large-sample hydrology and regionalization efforts can bring progress. For channelling the information flows with minimum loss, model calibration and model formulation techniques should be critically investigated. Some experience from research in a Swiss high alpine catchment is used as an illustration.
NASA Astrophysics Data System (ADS)
Sandford, Maxwell T., II; Bradley, Jonathan N.; Handel, Theodore G.
1996-01-01
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C-programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51 written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in MicrosoftTM bitmap (BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed `steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or `lossy' compression algorithms, as for example ones based on discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is derived from the original host data by an analysis algorithm.
Numerical design and analysis of parasitic mode oscillations for 95 GHz gyrotron beam tunnel
NASA Astrophysics Data System (ADS)
Kumar, Nitin; Singh, Udaybir; Yadav, Vivek; Kumar, Anil; Sinha, A. K.
2013-05-01
The beam tunnel, equipped with highly lossy ceramics, is designed for a 95 GHz gyrotron. The geometry of the beam tunnel is optimized considering the maximum RF absorption (ideally 100%) and the suppression of parasitic oscillations. The excitation of parasitic modes is a serious concern for high-frequency, high-power gyrotrons. Considering this problem, a detailed analysis is performed for the suppression of these kinds of modes in the beam tunnel. The trajectory code EGUN and CST Microwave Studio are used for the simulation of the electron beam trajectory and for the electromagnetic analysis, respectively.
Local Gaussian operations can enhance continuous-variable entanglement distillation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Shengli; Loock, Peter van; Institute of Theoretical Physics I, Universitaet Erlangen-Nuernberg, Staudtstrasse 7/B2, DE-91058 Erlangen
2011-12-15
Entanglement distillation is a fundamental building block in long-distance quantum communication. Though known to be useless on their own for distilling Gaussian entangled states, local Gaussian operations may still help to improve non-Gaussian entanglement distillation schemes. Here we show that by applying local squeezing operations both the performance and the efficiency of existing distillation protocols can be enhanced. We find that such an enhancement through local Gaussian unitaries can be obtained even when the initially shared Gaussian entangled states are mixed, as, for instance, after their distribution through a lossy-fiber communication channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heid, Matthias; Luetkenhaus, Norbert
2006-05-15
We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.
Numerical modeling of an enhanced very early time electromagnetic (VETEM) prototype system
Cui, T.J.; Chew, W.C.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.; Abraham, J.D.
2000-01-01
In this paper, two numerical models are presented to simulate an enhanced very early time electromagnetic (VETEM) prototype system, which is used for buried-object detection and environmental problems. Usually, the VETEM system contains a transmitting loop antenna and a receiving loop antenna, which run on a lossy ground to detect buried objects. In the first numerical model, the loop antennas are accurately analyzed using the Method of Moments (MoM) for wire antennas above or buried in lossy ground. Then, Conjugate Gradient (CG) methods, with the use of the fast Fourier transform (FFT) or MoM, are applied to investigate the scattering from buried objects. Reflected and scattered magnetic fields are evaluated at the receiving loop to calculate the output electric current. However, the working frequency for the VETEM system is usually low and, hence, two magnetic dipoles are used to replace the transmitter and receiver in the second numerical model. Comparing these two models, the second one is simpler, but only valid for low frequencies or small loops, while the first model is more general. In this paper, all computations are performed in the frequency domain, and the FFT is used to obtain the time-domain responses. Numerical examples show that simulation results from these two models fit very well when the frequency ranges from 10 kHz to 10 MHz, and both results are close to the measured data.
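As a small illustration of the final step described above (frequency-domain responses converted to a time-domain trace via the FFT; a sketch with a made-up transfer function, not the VETEM models themselves):

```python
import numpy as np

# Frequency samples: DC to 10 MHz, matching the band discussed above.
n_freq = 1024
freqs = np.linspace(0.0, 10e6, n_freq)            # Hz

# Hypothetical received-current spectrum: a simple low-pass-like response.
f0 = 1e6
spectrum = 1.0 / (1.0 + 1j * freqs / f0)

# One-sided spectrum -> real time-domain response via the inverse real FFT.
time_response = np.fft.irfft(spectrum)
dt = 1.0 / (2.0 * freqs[-1])                      # sampling interval implied by f_max
t = np.arange(time_response.size) * dt
```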
Chang, Yin-Jung
2014-01-13
The investigation of optimum optical designs of interlayers and antireflection (AR) coating for achieving maximum average transmittance (T(ave)) into the CuIn(1-x)Ga(x)Se2 (CIGS) absorber of a typical CIGS solar cell through the suppression of lossy-film-induced angular mismatches is described. Simulated-annealing algorithm incorporated with rigorous electromagnetic transmission-line network approach is applied with criteria of minimum average reflectance (R(ave)) from the cell surface or maximum T(ave) into the CIGS absorber. In the presence of one MgF2 coating, difference in R(ave) associated with optimum designs based upon the two distinct criteria is only 0.3% under broadband and nearly omnidirectional incidence; however, their corresponding T(ave) values could be up to 14.34% apart. Significant T(ave) improvements associated with the maximum-T(ave)-based design are found mainly in the mid to longer wavelengths and are attributed to the largest suppression of lossy-film-induced angular mismatches over the entire CIGS absorption spectrum. Maximum-T(ave)-based designs with a MgF2 coating optimized under extreme deficiency of angular information is shown, as opposed to their minimum-R(ave)-based counterparts, to be highly robust to omnidirectional incidence.
Transport Protocols for Wireless Mesh Networks
NASA Astrophysics Data System (ADS)
Eddie Law, K. L.
Transmission control protocol (TCP) provides reliable connection-oriented services between any two end systems on the Internet. With TCP congestion control algorithm, multiple TCP connections can share network and link resources simultaneously. These TCP congestion control mechanisms have been operating effectively in wired networks. However, performance of TCP connections degrades rapidly in wireless and lossy networks. To sustain the throughput performance of TCP connections in wireless networks, design modifications may be required accordingly in the TCP flow control algorithm, and potentially, in association with other protocols in other layers for proper adaptations. In this chapter, we explain the limitations of the latest TCP congestion control algorithm, and then review some popular designs for TCP connections to operate effectively in wireless mesh network infrastructure.
Scattering from a random layer of leaves in the physical optics limit
NASA Technical Reports Server (NTRS)
Lang, R. H.; Seker, S. S.; Le Vine, D. M.
1982-01-01
Backscatter of electromagnetic radiation from a layer of vegetation over flat lossy ground has been studied in collaborative research at the George Washington University and the Goddard Space Flight Center. In this work the vegetation is composed of leaves which are modeled by a random collection of lossy dielectric disks. Backscattering coefficients for the vegetation layer have been calculated in the case of disks whose diameter is large compared to wavelength. These backscattering coefficients are obtained in terms of the scattering amplitude of an individual disk by employing the distorted Born procedure. The scattering amplitude for a disk which is large compared to wavelength is then found by physical optics techniques. Computed results are interpreted in terms of dominant reflected and transmitted contributions from the disks and ground.
New adaptive color quantization method based on self-organizing maps.
Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai
2005-01-01
Color quantization (CQ) is an image processing task popularly used to convert true color images to palettized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with the neuron dependent frequency sensitive learning model, the global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde, Buzo, and Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with a much greater robustness against variations in network parameters than the current state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging the existing encoder in an overall lossy compression scheme.
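A compact sketch of the frequency-sensitive competitive step described above (an illustration of the general idea only; it omits the SOM neighborhood adaptation, butterfly permutation, and dead-neuron reinitialization of the full FS-SOM): each neuron's distance is penalized by how often it has already won, which spreads palette entries across the color space.

```python
import numpy as np

def fs_som_palette(pixels: np.ndarray, n_colors: int = 16,
                   epochs: int = 5, lr: float = 0.1, seed: int = 0):
    """Frequency-sensitive competitive learning for a color palette.
    pixels: (N, 3) float array of RGB values in [0, 1]."""
    rng = np.random.default_rng(seed)
    palette = rng.random((n_colors, 3))                  # initial palette (neurons)
    wins = np.ones(n_colors)                             # win counts (frequency term)
    for _ in range(epochs):
        for p in pixels[rng.permutation(len(pixels))]:
            d = np.linalg.norm(palette - p, axis=1) * wins   # frequency-scaled distance
            k = int(np.argmin(d))                        # winning neuron
            palette[k] += lr * (p - palette[k])          # move winner toward the pixel
            wins[k] += 1.0
    return palette

# Example on random data standing in for image pixels:
img_pixels = np.random.default_rng(1).random((2000, 3))
pal = fs_som_palette(img_pixels, n_colors=8)
```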
Effect of compression pressure on inhalation grade lactose as carrier for dry powder inhalations
Raut, Neha Sureshrao; Jamaiwar, Swapnil; Umekar, Milind Janrao; Kotagale, Nandkishor Ramdas
2016-01-01
Introduction: This study focused on the potential effects of compression forces experienced during lactose (InhaLac 70, 120, and 230) storage and transport on flowability and aerosol performance in a dry powder inhaler formulation. Materials and Methods: Lactose was subjected to typical compression forces of 4, 10, and 20 N/cm2. Powder flowability and particle size distribution of uncompressed and compressed lactose were evaluated by Carr's index, Hausner's ratio, the angle of repose, and laser diffraction. Aerosol performance of uncompressed and compressed lactose was assessed in dispersion studies using a glass twin-stage liquid impinger at flow rates of 40-80 L/min. Results: At these compression forces, the flowability of compressed lactose was the same or slightly improved. Furthermore, compression of lactose caused a decrease in in vitro aerosol dispersion performance. Conclusion: The present study illustrates that as carrier size increased, a concurrent decrease in drug aerosolization performance was observed. Thus, compression of the lactose fines onto the surfaces of the larger lactose particles under the applied pressures is hypothesized to be the cause of these performance variations. Simulations of storage and transport on an industrial scale can induce significant variations in formulation performance, and this could be a source of batch-to-batch variation. PMID:27014618
Solving constrained inverse problems for waveform tomography with Salvus
NASA Astrophysics Data System (ADS)
Boehm, C.; Afanasiev, M.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.
2016-12-01
Finding a good balance between flexibility and performance is often difficult within domain-specific software projects. To achieve this balance, we introduce Salvus: an open-source high-order finite element package built upon PETSc and Eigen, that focuses on large-scale full-waveform modeling and inversion. One of the key features of Salvus is its modular design, based on C++ mixins, that separates the physical equations from the numerical discretization and the mathematical optimization. In this presentation we focus on solving inverse problems with Salvus and discuss (i) dealing with inexact derivatives resulting, e.g., from lossy wavefield compression, (ii) imposing additional constraints on the model parameters, e.g., from effective medium theory, and (iii) integration with a workflow management tool. We present a feasible-point trust-region method for PDE-constrained inverse problems that can handle inexactly computed derivatives. The level of accuracy in the approximate derivatives is controlled by localized error estimates to ensure global convergence of the method. Additional constraints on the model parameters are typically cheap to compute without the need for further simulations. Hence, including them in the trust-region subproblem introduces only a small computational overhead, but ensures feasibility of the model in every iteration. We show examples with homogenization constraints derived from effective medium theory (i.e. all fine-scale updates must upscale to a physically meaningful long-wavelength model). Salvus has a built-in workflow management framework to automate the inversion with interfaces to user-defined misfit functionals and data structures. This significantly reduces the amount of manual user interaction and enhances reproducibility which we demonstrate for several applications from the laboratory to global scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neubauer, Michael; Dudas, Alan; Krasnykh, Anatoly
Through a combination of experimentation and calculation, the components of a novel room temperature dry load were successfully fabricated. These components included lossy ceramic cylinders of various lengths, thicknesses, and percentages of silicon carbide (SiC). The cylinders were then assembled into stainless steel compression rings by differential heating of the parts and a special fixture. Post machining of this assembly provided a means for a final weld. The ring assemblies were then measured for S-parameters, individually and in pairs, using a low-cost TE10 rectangular to TE01 circular waveguide adapter specially designed to be part of the final load assembly. Matched pairs of rings were measured for assembly into the final load, and a sliding short designed and fabricated to assist in determining the desired short location in the final assembly. The plan for the project was for Muons, Inc. to produce prototype loads for long-term testing at SLAC. The STTR funds for SLAC were to upgrade and operate their test station to ensure that the loads would satisfy their requirements. Phase III was to be the sale to SLAC of loads that Muons, Inc. would manufacture. However, an alternate solution that involved a rebuild of the old loads, reduced SLAC budget projections, and a relaxed time for the replacement of all loads meant that in-house labor will be used to do the upgrade without the need for the loads developed in this project. Consequently, the project was terminated before the long term testing was initiated. However, SLAC can use the upgraded test stand to compare the long-term performance of the ones produced in this project with their rebuilt loads when they are available.
NASA Astrophysics Data System (ADS)
Poch, O.; Schmid, H. M.; Pommerol, A.; Jost, B.; Brouet, Y.; Thomas, N.
2015-12-01
Polarimetric observations of atmosphere-less Solar System bodies can give clues on the texture and on the physico-chemical composition of their surfaces, as reviewed by Mishchenko et al. (2010) and Bagnulo et al. (2011). Measurements performed in the laboratory on carefully characterized samples can provide reference data that can be used for direct comparison with remote-sensing polarimetric observations. In particular, we want to study the spectral dependence of the polarization and the way it is correlated or not with the surface albedo. In the Laboratory for Outflow Studies of Sublimating Materials (LOSSy) at the University of Bern, we have developed the capability to prepare and analyze optically thick analogues of planetary or cometary surfaces composed of water ice, minerals and carbonaceous compounds. Water-free dust of high porosity can also be produced by sublimation of ice under space-simulated conditions (Pommerol et al., 2015). Here, we present the first results of polarization measurements performed in the LOSSy. A Stokes polarimeter is used to measure the Stokes parameters describing the polarization of the visible light scattered by icy samples illuminated with a randomly polarized light simulating the star light. Additionally, a radio-goniometer, equipped with polarizers, can also measure the phase angle dependence of the linearly polarized scattered light. These measurements could provide interesting inputs to complement the theoretical models and predict or interpret spectro-polarimetric properties of Solar System objects and circumstellar disks. Mishchenko, M., et al., 2010, Polarimetric Remote Sensing of Solar System Objects. Bagnulo, S., et al., 2011, J. Quant. Spectrosc. Ra. 112, 2059. Pommerol, A., et al., 2015, Planet. Space Sci. 109-110, 106-122.
Wang, Juan; Tang, Ce; Zhang, Lei; Gong, Yushun; Yin, Changlin; Li, Yongqin
2015-07-01
The question of whether the placement of the dominant hand against the sternum could improve the quality of manual chest compressions remains controversial. In the present study, we evaluated the influence of dominant vs nondominant hand positioning on the quality of conventional cardiopulmonary resuscitation (CPR) during prolonged basic life support (BLS) by rescuers who performed optimal and suboptimal compressions. Six months after completing a standard BLS training course, 101 medical students were instructed to perform adult single-rescuer BLS for 8 minutes on a manikin with a randomized hand position. Twenty-four hours later, the students placed the opposite hand in contact with the sternum while performing CPR. Those with an average compression depth of less than 50 mm were considered suboptimal. Participants who had performed suboptimal compressions were significantly shorter (170.2 ± 6.8 vs 174.0 ± 5.6 cm, P = .008) and lighter (58.9 ± 7.6 vs 66.9 ± 9.6 kg, P < .001) than those who performed optimal compressions. No significant differences in CPR quality were observed between dominant and nondominant hand placements for those who had an average compression depth of greater than 50 mm. However, both the compression depth (49.7 ± 4.2 vs 46.5 ± 4.1 mm, P = .003) and proportion of chest compressions with an appropriate depth (47.6% ± 27.8% vs 28.0% ± 23.4%, P = .006) were remarkably higher when compressing the chest with the dominant hand against the sternum for those who performed suboptimal CPR. Chest compression quality significantly improved when the dominant hand was placed against the sternum for those who performed suboptimal compressions during conventional CPR. Copyright © 2015 Elsevier Inc. All rights reserved.
Jäntti, H; Silfvast, T; Turpeinen, A; Kiviniemi, V; Uusaro, A
2009-04-01
An adequate chest compression rate during CPR is associated with improved haemodynamics and primary survival. To explore whether the use of a metronome would affect chest compression depth besides the rate, we evaluated CPR quality using a metronome in a simulated CPR scenario. Forty-four experienced intensive care unit nurses participated in two-rescuer basic life support given to manikins in 10-min scenarios. The target chest compression to ventilation ratio was 30:2, performed with bag and mask ventilation. The rescuer performing the compressions was changed every 2 min. CPR was performed first without and then with a metronome that beeped 100 times per minute. The quality of CPR was analysed with manikin software. The effect of rescuer fatigue on CPR quality was analysed separately. The mean compression rate between ventilation pauses was 137 +/- 18 compressions per minute (cpm) without and 98 +/- 2 cpm with metronome guidance (p<0.001). The mean number of chest compressions actually performed was 104 +/- 12 cpm without and 79 +/- 3 cpm with the metronome (p<0.001). The mean compression depth during the scenario was 46.9 +/- 7.7 mm without and 43.2 +/- 6.3 mm with metronome guidance (p=0.09). The total number of chest compressions performed was 1022 without metronome guidance, 42% at the correct depth, and 780 with metronome guidance, 61% at the correct depth (p=0.09 for the difference in percentage of compressions with correct depth). Metronome guidance corrected chest compression rates for each compression cycle to within guideline recommendations, but did not affect chest compression quality or rescuer fatigue.
Low-loss plasmon-assisted electro-optic modulator.
Haffner, Christian; Chelladurai, Daniel; Fedoryshyn, Yuriy; Josten, Arne; Baeuerle, Benedikt; Heni, Wolfgang; Watanabe, Tatsuhiko; Cui, Tong; Cheng, Bojun; Saha, Soham; Elder, Delwin L; Dalton, Larry R; Boltasseva, Alexandra; Shalaev, Vladimir M; Kinsey, Nathaniel; Leuthold, Juerg
2018-04-01
For nearly two decades, researchers in the field of plasmonics [1], which studies the coupling of electromagnetic waves to the motion of free electrons near the surface of a metal [2], have sought to realize subwavelength optical devices for information technology [3-6], sensing [7,8], nonlinear optics [9,10], optical nanotweezers [11] and biomedical applications [12]. However, the electron motion generates heat through ohmic losses. Although this heat is desirable for some applications such as photo-thermal therapy, it is a disadvantage in plasmonic devices for sensing and information technology [13] and has led to a widespread view that plasmonics is too lossy to be practical. Here we demonstrate that the ohmic losses can be bypassed by using 'resonant switching'. In the proposed approach, light is coupled to the lossy surface plasmon polaritons only in the device's off state (in resonance) in which attenuation is desired, to ensure large extinction ratios between the on and off states and allow subpicosecond switching. In the on state (out of resonance), destructive interference prevents the light from coupling to the lossy plasmonic section of a device. To validate the approach, we fabricated a plasmonic electro-optic ring modulator. The experiments confirm that low on-chip optical losses, operation at over 100 gigahertz, good energy efficiency, low thermal drift and a compact footprint can be combined in a single device. Our result illustrates that plasmonics has the potential to enable fast, compact on-chip sensing and communications technologies.
On the Performance of Quorum Replication on the Internet
2008-10-31
[Table fragment: quorum node locations (ISPs in Cambridge, MA; Martha's Vineyard, MA; elsewhere in Massachusetts; Laurel, MD; Mexico; and other sites) with per-node availability and latency statistics. Figure 4 caption: 'Our crumbling wall quorum system.' Most nodes of the first row are in North America; TW, which has lossy ...]
Digitization of medical documents: an X-Windows application for fast scanning.
Muñoz, A; Salvador, C H; Gonzalez, M A; Dueñas, A
1992-01-01
This paper deals with digitization, using a commercial scanner, of medical documents as still images for introduction into a computer-based Information System. Document management involves storing, editing and transmission. This task has usually been approached from the perspective of the difficulties posed by radiologic images because of their indisputable qualitative and quantitative significance. However, healthcare activities require the management of many other types of documents and involve the requirements of numerous users. One key to document management will be the availability of a digitizer to deal with the greatest possible number of different types of documents. This paper describes the relevant aspects of documents and the technical specifications that digitizers must fulfill. The concept of document type is introduced as the ideal set of digitizing parameters for a given document. The use of document type parameters can drastically reduce the time the user spends in scanning sessions. Presentation is made of an application based on Unix, X-Windows and OSF/Motif, with a GPIB interface, implemented around the document type concept. Finally, the results of the evaluation of the application are presented, focusing on the user interface, as well as on the viewing of color images in an X-Windows environment and the use of lossy algorithms in the compression of medical images.
NASA Technical Reports Server (NTRS)
Cullather, Richard; Bosilovich, Michael
2017-01-01
The Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) is a global atmospheric reanalysis produced by the NASA Global Modeling and Assimilation Office (GMAO). It spans the satellite observing era from 1980 to the present. The goals of MERRA-2 are to provide a regularly-gridded, homogeneous record of the global atmosphere, and to incorporate additional aspects of the climate system including trace gas constituents (stratospheric ozone), improved land surface representation, and cryospheric processes. MERRA-2 is also the first satellite-era global reanalysis to assimilate space-based observations of aerosols and represent their interactions with other physical processes in the climate system. The inclusion of these additional components is consistent with the overall objectives of an Integrated Earth System Analysis (IESA). MERRA-2 is intended to replace the original MERRA product, and reflects recent advances in atmospheric modeling and data assimilation. Modern hyperspectral radiance and microwave observations, along with GPS-Radio Occultation and NASA ozone datasets, are now assimilated in MERRA-2. Much of the structure of the data files remains the same in MERRA-2. While the original MERRA data format was HDF-EOS, the MERRA-2 supplied binary data format is now NetCDF4 (with lossy compression to save space).
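For readers who want to see what that kind of packing looks like in practice, a small sketch (hypothetical file and variable names; it uses the netCDF4-python API, not GMAO's production tooling) of writing a NetCDF4 variable with deflate compression plus least-significant-digit quantization, which is what makes the stored field lossy:

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("sample_merra2_like.nc4", "w", format="NETCDF4") as nc:
    nc.createDimension("lat", 361)
    nc.createDimension("lon", 576)
    # zlib deflate plus quantization to ~2 decimal digits => lossy compression.
    t2m = nc.createVariable("T2M", "f4", ("lat", "lon"),
                            zlib=True, complevel=4,
                            least_significant_digit=2)
    t2m.units = "K"
    t2m[:] = 288.0 + np.random.default_rng(0).normal(0.0, 5.0, (361, 576))
```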
A distributed database view of network tracking systems
NASA Astrophysics Data System (ADS)
Yosinski, Jason; Paffenroth, Randy
2008-04-01
In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
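To make the two-phase commit analogy concrete, here is a toy coordinator/participant sketch (generic protocol logic only, not the tracking system described above; class and field names are illustrative): a track-initiation transaction commits only if every peer votes yes in the prepare phase.

```python
class Participant:
    """Toy participant that votes on and then applies a transaction."""
    def __init__(self, name: str, will_accept: bool = True):
        self.name = name
        self.will_accept = will_accept
        self.committed = []

    def prepare(self, txn) -> bool:           # phase 1: vote
        return self.will_accept

    def commit(self, txn) -> None:            # phase 2: apply
        self.committed.append(txn)

    def abort(self, txn) -> None:
        pass                                  # discard any tentative state


def two_phase_commit(txn, participants) -> bool:
    """Phase 1: collect votes. Phase 2: commit everywhere or abort everywhere."""
    votes = [p.prepare(txn) for p in participants]
    if all(votes):
        for p in participants:
            p.commit(txn)
        return True
    for p in participants:
        p.abort(txn)
    return False


trackers = [Participant("tracker_a"), Participant("tracker_b", will_accept=False)]
ok = two_phase_commit({"track_id": 42, "state": "init"}, trackers)   # -> False (aborted)
```

In a lossy network the real protocol also needs timeouts and retransmission around each prepare/commit message, which this sketch omits.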
Variable ratio beam splitter for laser applications
NASA Technical Reports Server (NTRS)
Brown, R. M.
1971-01-01
A beam splitter employing birefringent optics provides either widely different or precisely equal beam ratios; it can be used with laser light source systems for interferometry of lossy media, holography, scattering measurements, and precise beam-ratio applications.
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
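A hedged sketch of the degradation model described above (illustrative only; the SSIM-to-blur mapping and parameter values are placeholders, not the NVESD model's calibrated fits): the compressed frame is modeled as the uncompressed frame blurred by an equivalent Gaussian whose width is tied to SSIM, and the leftover difference is treated as additive spatio-temporal noise.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def equivalent_blur_and_noise(uncompressed: np.ndarray,
                              compressed: np.ndarray,
                              sigma_max: float = 3.0):
    """uncompressed, compressed: 2-D float frames with values in [0, 1]."""
    s = ssim(uncompressed, compressed, data_range=1.0)
    sigma = sigma_max * (1.0 - s)                 # placeholder SSIM-to-blur mapping
    blurred = gaussian_filter(uncompressed, sigma=sigma)
    residual_noise = compressed - blurred         # per-frame slice of the 3-D noise cube
    return sigma, residual_noise

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
frame_c = np.clip(frame + 0.05 * rng.standard_normal((64, 64)), 0, 1)
sigma_eq, noise = equivalent_blur_and_noise(frame, frame_c)
```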
Stability Analysis of Multi-Sensor Kalman Filtering over Lossy Networks
Gao, Shouwan; Chen, Pengpeng; Huang, Dan; Niu, Qiang
2016-01-01
This paper studies the remote Kalman filtering problem for a distributed system setting with multiple sensors that are located at different physical locations. Each sensor encapsulates its own measurement data into one single packet and transmits the packet to the remote filter via a lossy distinct channel. For each communication channel, a time-homogeneous Markov chain is used to model the normal operating condition of packet delivery and losses. Based on the Markov model, a necessary and sufficient condition is obtained, which can guarantee the stability of the mean estimation error covariance. In particular, the stability condition is explicitly expressed as a simple inequality whose parameters are the spectral radius of the system state matrix and transition probabilities of the Markov chains. In contrast to the existing related results, our method imposes less restrictive conditions on systems. Finally, the results are illustrated by simulation examples. PMID:27104541
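A hedged simulation sketch of the setting analysed above (a single scalar sensor with a two-state Markov, Gilbert-Elliott-style channel; illustrative parameters only, not the paper's multi-sensor analysis): the remote filter always performs the time update, but incorporates a measurement only when the packet actually arrives.

```python
import numpy as np

rng = np.random.default_rng(0)
A, C, Q, R = 1.2, 1.0, 0.1, 0.5          # unstable scalar system (hypothetical values)
p_fail, p_recover = 0.1, 0.6             # Markov channel transition probabilities
x, xhat, P = 0.0, 0.0, 1.0
link_up = True

for k in range(200):
    # True system and the measurement packet it would send.
    x = A * x + rng.normal(0.0, np.sqrt(Q))
    y = C * x + rng.normal(0.0, np.sqrt(R))
    # Markov-chain channel state: stay up / go down, or recover.
    link_up = (rng.random() > p_fail) if link_up else (rng.random() < p_recover)
    # Remote filter: time update always, measurement update only on packet arrival.
    xhat, P = A * xhat, A * P * A + Q
    if link_up:
        K = P * C / (C * P * C + R)
        xhat, P = xhat + K * (y - C * xhat), (1 - K * C) * P
```

The error covariance P stays bounded only if outages are short relative to how quickly the unstable dynamics inflate the error, which is what the spectral-radius/transition-probability inequality in the paper captures.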
Anti-coalescence of bosons on a lossy beam splitter.
Vest, Benjamin; Dheur, Marie-Christine; Devaux, Éloïse; Baron, Alexandre; Rousseau, Emmanuel; Hugonin, Jean-Paul; Greffet, Jean-Jacques; Messin, Gaétan; Marquier, François
2017-06-30
Two-boson interference, a fundamentally quantum effect, has been extensively studied with photons through the Hong-Ou-Mandel effect and observed with guided plasmons. Using two freely propagating surface plasmon polaritons (SPPs) interfering on a lossy beam splitter, we show that the presence of loss enables us to modify the reflection and transmission factors of the beam splitter, thus revealing quantum interference paths that do not exist in a lossless configuration. We investigate the two-plasmon interference on beam splitters with different sets of reflection and transmission factors. Through coincidence-detection measurements, we observe either coalescence or anti-coalescence of SPPs. The results show that losses can be viewed as a degree of freedom to control quantum processes. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
NASA Astrophysics Data System (ADS)
Gros, J.-B.; Kuhl, U.; Legrand, O.; Mortessagne, F.
2016-03-01
The effective Hamiltonian formalism is extended to vectorial electromagnetic waves in order to describe statistical properties of the field in reverberation chambers. The latter are commonly used in electromagnetic compatibility tests. As a first step, the distribution of wave intensities in chaotic systems with varying opening in the weak coupling limit for scalar quantum waves is derived by means of random matrix theory. In this limit the only parameters are the modal overlap and the number of open channels. Using the extended effective Hamiltonian, we describe the intensity statistics of the vectorial electromagnetic eigenmodes of lossy reverberation chambers. Finally, the typical quantity of interest in such chambers, namely, the distribution of the electromagnetic response, is discussed. By determining the distribution of the phase rigidity, describing the coupling to the environment, using random matrix numerical data, we find good agreement between the theoretical prediction and numerical calculations of the response.
Park, Sang O; Hong, Chong Kun; Shin, Dong Hyuk; Lee, Jun Ho; Hwang, Seong Youn
2013-08-01
Untrained laypersons should perform compression-only cardiopulmonary resuscitation (COCPR) under a dispatcher's guidance, but the quality of the chest compressions may be suboptimal. We hypothesised that providing metronome sounds via a phone speaker may improve the quality of chest compressions during dispatcher-assisted COCPR (DA-COCPR). Untrained laypersons were allocated to either the metronome sound-guided group (MG), who performed DA-COCPR with metronome sounds (110 ticks/min), or the control group (CG), who performed conventional DA-COCPR. The participants of each group performed DA-COCPR for 4 min using a manikin with Skill-Reporter, and the data regarding chest compression quality were collected. The data from 33 cases of DA-COCPR in the MG and 34 cases in the CG were compared. The MG showed a faster compression rate than the CG (111.9 vs 96.7/min; p=0.018). A significantly higher proportion of subjects in the MG performed the DA-COCPR with an accurate chest compression rate (100-120/min) compared with the subjects in the CG (32/33 (97.0%) vs 5/34 (14.7%); p<0.0001). The mean compression depth was not different between the MG and the CG (45.9 vs 46.8 mm; p=0.692). However, a higher proportion of subjects in the MG performed shallow compressions (compression depth <38 mm) compared with subjects in the CG (median % was 69.2 vs 15.7; p=0.035). Metronome sound guidance during DA-COCPR for the untrained bystanders improved the chest compression rates, but was associated more with shallow compressions than the conventional DA-COCPR in a manikin model.
Markovian Dynamics of Josephson Parametric Amplification
NASA Astrophysics Data System (ADS)
Kaiser, Waldemar; Haider, Michael; Russer, Johannes A.; Russer, Peter; Jirauschek, Christian
2017-09-01
In this work, we derive the dynamics of the lossy DC pumped non-degenerate Josephson parametric amplifier (DCPJPA). The main element in a DCPJPA is the superconducting Josephson junction. The DC bias generates the AC Josephson current varying the nonlinear inductance of the junction. In this way the Josephson junction acts as both the pump oscillator and the time-varying reactance of the parametric amplifier. In quantum-limited amplification, losses and noise have an increased impact on the characteristics of an amplifier. We outline the classical model of the lossy DCPJPA and derive the available noise power spectral densities. A classical treatment cannot capture properties such as spontaneous emission, which is mandatory in the case of amplification at the quantum limit. Thus, we derive a quantum mechanical model of the lossy DCPJPA. Thermal losses are modeled by the quantum Langevin approach, by coupling the quantized system to a photon heat bath in thermodynamic equilibrium. The mode occupation in the bath follows the Bose-Einstein statistics. Based on the second quantization formalism, we derive the Heisenberg equations of motion of both resonator modes. We assume the dynamics of the system to follow the Markovian approximation, i.e., the system depends only on its current state and is memory-free. We explicitly compute the time evolution of the contributions to the signal mode energy and give numerical examples based on different damping and coupling constants. Our analytic results show that this model is capable of including thermal noise in the description of the DC pumped non-degenerate Josephson parametric amplifier.
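For orientation, the quantum Langevin treatment of a single lossy resonator mode coupled to a thermal photon bath takes the standard textbook form shown below; this is a generic illustration of the approach named in the abstract, not the specific coupled-mode equations derived for the DCPJPA.

```latex
\dot{\hat a}(t) = -\left(i\omega_0 + \tfrac{\kappa}{2}\right)\hat a(t) + \sqrt{\kappa}\,\hat a_{\mathrm{in}}(t),
\qquad
\langle \hat a_{\mathrm{in}}^{\dagger}(t)\,\hat a_{\mathrm{in}}(t')\rangle = \bar n_{\mathrm{th}}\,\delta(t-t')
```

Here ω0 is the mode frequency, κ the damping rate, and n̄_th the Bose-Einstein occupation of the bath at the mode frequency; the Markovian assumption enters through the delta-correlated input noise.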
Effects of video compression on target acquisition performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Cha, Jae; Preece, Bradley
2008-04-01
The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To obviate this problem, still and moving imagery can be compressed, often resulting in greater than 100 fold decrease in required bandwidth. Compression, however, is generally not error-free and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.
Waninger, Kevin N; Goodbred, Andrew; Vanic, Keith; Hauth, John; Onia, Joshua; Stoltzfus, Jill; Melanson, Scott
2014-07-01
To investigate (1) cardiopulmonary resuscitation (CPR) adequacy during simulated cardiac arrest of equipped football players and (2) whether protective football equipment impedes CPR performance measures. Exploratory crossover study performed on a Laerdal SimMan 3G interactive manikin simulator. Temple University/St Luke's University Health Network Regional Medical School Simulation Laboratory. Thirty BCLS-certified ATCs and 6 ACLS-certified emergency department technicians. Subjects were given standardized rescuer scenarios to perform three 2-minute sequences of compression-only CPR. Baseline CPR sequences were captured on each subject. Experimental conditions included 2-minute sequences of CPR either over protective football shoulder pads or under unlaced pads. Subjects were instructed to adhere to 2010 American Heart Association guidelines (initiation of compression-only CPR at a rate of 100/min and a depth of 51 mm). Dependent variables included average compression depth, average compression rate, percentage of time chest wall recoiled, and percentage of hands-on contact during compressions. Differences between subject groups were not found to be statistically significant, so groups were combined (n = 36) for analysis of CPR compression adequacy. Compression depth was deeper under shoulder pads than over (P = 0.02), with mean depths of 36.50 and 31.50 mm, respectively. No significant difference was found with compression rate or chest wall recoil. Chest compression depth is significantly decreased when performed over shoulder pads, while there is no apparent effect on rate or chest wall recoil. Although the clinical outcomes from our observed 15% difference in compression depth are uncertain, chest compression under the pads significantly increases the depth of compressions and more closely approaches American Heart Association guidelines for chest compression depth in cardiac arrest.
Effect of Compression Garments on Physiological Responses After Uphill Running.
Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc
2018-03-01
Limited practical recommendations on wearing compression garments can be drawn from the current literature for athletes. We aimed to identify the effects of compression garments with different pressures and distributions of applied compression on physiological and perceptual measures of performance and recovery after uphill running. In a randomized, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation, at an intensity of 75% VO2max, while wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all trials, the compression garments were worn for 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of the plantar/dorsal flexors, and mean performance time were then measured. The best mean performance time was observed with the medium grade compression garments, the difference being between the medium grade and high reverse grade garments. A positive trend toward increasing peak torque of plantar flexion (60º·s-1, 120º·s-1) was found with the medium grade compression garments, with a difference between 24 and 48 hours post run. The largest shift in pain tolerance of the gastrocnemius muscle was observed with the medium grade compression garments at 24 hours post run, the shift being +11.37% for the lateral head and +6.63% for the medial head. In conclusion, a beneficial trend toward improved running performance and decreased muscle soreness within 24 hours post exercise was apparent with the medium grade compression garments.
Comprehensive study of numerical anisotropy and dispersion in 3-D TLM meshes
NASA Astrophysics Data System (ADS)
Berini, Pierre; Wu, Ke
1995-05-01
This paper presents a comprehensive analysis of the numerical anisotropy and dispersion of 3-D TLM meshes constructed using several generalized symmetrical condensed TLM nodes. The dispersion analysis is performed in isotropic lossless, isotropic lossy and anisotropic lossless media and yields a comparison of the simulation accuracy for the different TLM nodes. The effect of mesh grading on the numerical dispersion is also determined. The results compare meshes constructed with Johns' symmetrical condensed node (SCN), two hybrid symmetrical condensed nodes (HSCN) and two frequency domain symmetrical condensed nodes (FDSCN). It has been found that under certain circumstances, the time domain nodes may introduce numerical anisotropy when modelling isotropic media.
High-performance terahertz wave absorbers made of silicon-based metamaterials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Sheng; Zhu, Jianfei; Jiang, Wei
2015-08-17
Electromagnetic (EM) wave absorbers with high efficiency in different frequency bands have been extensively investigated for various applications. In this paper, we propose an ultra-broadband and polarization-insensitive terahertz metamaterial absorber based on a patterned lossy silicon substrate. Experimentally, a large absorption efficiency more than 95% in a frequency range of 0.9–2.5 THz was obtained up to a wave incident angle as large as 70°. Much broader absorption bandwidth and excellent oblique incidence absorption performance are numerically demonstrated. The underlying mechanisms due to the combination of a waveguide cavity mode and impedance-matched diffraction are analyzed in terms of the field patterns and the scattering features. The monolithic THz absorber proposed here may find important applications in EM energy harvesting systems such as THz barometer or biosensor.
Effect of external index of refraction on multimode fiber couplers.
Wang, G Z; Murphy, K A; Claus, R O
1995-12-20
The dependence of the performance of fused-taper multimode fiber couplers on the refractive index of the material surrounding the taper region has been investigated both theoretically and experimentally. It has been identified that for a 2 × 2 multimode fiber coupler there is a range of output-power-coupling ratios for which the effect of the external refractive index is negligible. When the coupler is tapered beyond this region, the performance becomes dependent on the external index of refraction and the coupler becomes lossy. To analyze the multimode coupler-loss mechanism, we develop a two-dimensional ray-optics model that incorporates trapped cladding-mode loss and core-mode loss through frustrated total internal reflection.
Computer-simulation results support the experimental observations. Related issues such as coupler fabrication and packaging are also discussed.
Vector quantizer designs for joint compression and terrain categorization of multispectral imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Lyons, Daniel F.
1994-01-01
Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
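As a point of reference for the comparison above, a conventional MSE-based vector quantizer can be trained with a k-means-style codebook search; the sketch below (Python, using SciPy's vector-quantization utilities) shows only this standard baseline, not the paper's two-stage design with its classification constraint, and the array names are hypothetical.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq  # k-means codebook training and encoding

def train_mse_vq(pixels: np.ndarray, codebook_size: int = 256):
    """Train an MSE-minimizing vector quantizer on (n_pixels, n_bands) spectra
    and return the codebook plus the index map used as the compressed image."""
    data = pixels.astype(np.float64)
    codebook, _ = kmeans2(data, codebook_size, minit="++")
    indices, distortion = vq(data, codebook)      # nearest-codeword assignment
    return codebook, indices, distortion.mean()   # mean per-vector distortion
```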
Bitshuffle: Filter for improving compression of typed binary data
NASA Astrophysics Data System (ADS)
Masui, Kiyoshi
2017-12-01
Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in a row, etc. This transposition is performed within blocks of data roughly 8 kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
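The bit-level transposition described above can be sketched in a few lines of NumPy; this is an illustrative re-implementation of the idea, not the optimized C kernel shipped with the Bitshuffle package, and the block size and sample data are arbitrary.

```python
import zlib
import numpy as np

def bitshuffle_block(a: np.ndarray) -> bytes:
    """Group equal-significance bits of a typed array so that a downstream
    lossless coder sees long runs of similar bytes."""
    # Expand each element into its bits: shape (n_elements, bits_per_element).
    bits = np.unpackbits(a.view(np.uint8).reshape(a.size, a.itemsize), axis=1)
    # "Transpose" so each row holds one bit position across all elements,
    # then pack back into bytes for the compressor.
    return np.packbits(bits.T, axis=1).tobytes()

# Slowly varying 32-bit data often compresses better after bit shuffling.
data = np.arange(8192, dtype=np.uint32) // 7
print(len(zlib.compress(data.tobytes())), len(zlib.compress(bitshuffle_block(data))))
```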
Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto
2015-06-01
To describe the quality of chest compressions performed by secondary-school students trained with a realtime audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years to achieve quality chest compressions on a mannequin.
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R [Albuquerque, NM
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
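A minimal sketch of the wavelet-based spatial compression step, assuming the PyWavelets package and a simple keep-the-largest-coefficients rule; the patented algorithm additionally organizes the retained coefficients for block-wise multivariate analysis.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for this illustration

def spatially_compress(image: np.ndarray, wavelet: str = "db2",
                       level: int = 3, keep: float = 0.05) -> np.ndarray:
    """Wavelet-transform an image, zero all but the largest `keep` fraction of
    coefficients, and reconstruct the (spatially compressed) image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < threshold] = 0.0            # drop insignificant coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)
```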
Wavelet-based compression of pathological images for telemedicine applications
NASA Astrophysics Data System (ADS)
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
Moshina, Nataliia; Sebuødegård, Sofie; Hofvind, Solveig
2017-06-01
We aimed to investigate early performance measures in a population-based breast cancer screening program stratified by compression force and pressure at the time of mammographic screening examination. Early performance measures included recall rate, rates of screen-detected and interval breast cancers, positive predictive value of recall (PPV), sensitivity, specificity, and histopathologic characteristics of screen-detected and interval breast cancers. Information on 261,641 mammographic examinations from 93,444 subsequently screened women was used for analyses. The study period was 2007-2015. Compression force and pressure were categorized using tertiles as low, medium, or high. χ² test, t tests, and test for trend were used to examine differences between early performance measures across categories of compression force and pressure. We applied generalized estimating equations to identify the odds ratios (OR) of screen-detected or interval breast cancer associated with compression force and pressure, adjusting for fibroglandular and/or breast volume and age. The recall rate decreased, while PPV and specificity increased with increasing compression force (p for trend <0.05 for all). The recall rate increased, while rate of screen-detected cancer, PPV, sensitivity, and specificity decreased with increasing compression pressure (p for trend <0.05 for all). High compression pressure was associated with higher odds of interval breast cancer compared with low compression pressure (OR 1.89; 95% CI 1.43-2.48). High compression force and low compression pressure were associated with more favorable early performance measures in the screening program.
Liang, Xu; Nie, Kaiwen; Ding, Xian; Dang, Liqin; Sun, Jie; Shi, Feng; Xu, Hua; Jiang, Ruibin; He, Xuexia; Liu, Zonghuai; Lei, Zhibin
2018-03-28
The development of compressible supercapacitors relies heavily on the innovative design of electrode materials with both superior compression properties and high capacitive performance. This work reports a highly compressible supercapacitor electrode which is prepared by growing electroactive NiCo2S4 (NCS) nanosheets on a compressible carbon sponge (CS). The strong adhesion of the metallic, conductive NCS nanosheets to the highly porous carbon scaffolds enables the CS-NCS composite electrode to exhibit enhanced conductivity and ideal structural integrity during repeated compression-release cycles. Accordingly, the CS-NCS composite electrode delivers a specific capacitance of 1093 F g-1 at 0.5 A g-1 and remarkable rate performance with 91% capacitance retention in the range of 0.5-20 A g-1. Capacitance performance under a strain of 60% shows that the incorporation of NCS nanosheets in CS scaffolds leads to over a five-fold enhancement in gravimetric capacitance and a 17-fold enhancement in volumetric capacitance. These performances make the CS-NCS composite one of the promising candidates for applications in compressible electrochemical energy storage devices.
Characterization of multiaxial warp knit composites
NASA Technical Reports Server (NTRS)
Dexter, H. Benson; Hasko, Gregory H.; Cano, Roberto J.
1991-01-01
The objectives were to characterize the mechanical behavior and damage tolerance of two multiaxial warp knit fabrics to determine the acceptability of these fabrics for high performance composite applications. The tests performed included compression, tension, open hole compression, compression after impact and compression-compression fatigue. Tests were performed on as-fabricated fabrics and on multi-layer fabrics that were stitched together with either carbon or Kevlar stitching yarn. Results of processing studies for vacuum impregnation with Hercules 3501-6 epoxy resin and pressure impregnation with Dow Tactix 138/H41 epoxy resin and British Petroleum BP E905L epoxy resin are presented.
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques can compress all types of data without loss of information but have the disadvantages of complexity and slow computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different data file sizes are graphically presented and discussed in the paper. The adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
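The kind of ratio-versus-run-time comparison described above can be reproduced today with a short script; the sketch below uses Python's built-in zlib (DEFLATE, i.e. LZ77 plus Huffman coding) and lzma modules as stand-ins for the Adaptive Huffman and Lempel-Ziv implementations evaluated in the paper.

```python
import lzma
import time
import zlib

def benchmark(data: bytes) -> None:
    """Report compression ratio and wall-clock time for two lossless coders."""
    for name, compress in (("zlib", zlib.compress), ("lzma", lzma.compress)):
        start = time.perf_counter()
        compressed = compress(data)
        elapsed = time.perf_counter() - start
        print(f"{name}: ratio {len(data) / len(compressed):.2f}, {elapsed * 1e3:.1f} ms")

# Example on an arbitrary file; larger files show the speed/ratio trade-off better.
with open(__file__, "rb") as handle:
    benchmark(handle.read() * 50)
```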
PML AND PSTD ALGORITHM FOR ARBITRARY LOSSY ANISOTROPIC MEDIA. (R825225)
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, Daniel; Kwatra, S. C.
1993-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, D.; Kwatra, S. C.
1992-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
Technique for Performing Dielectric Property Measurements at Microwave Frequencies
NASA Technical Reports Server (NTRS)
Barmatz, Martin B.; Jackson, Henry W.
2010-01-01
A paper discusses the need to perform accurate dielectric property measurements on larger-sized samples, particularly liquids, at microwave frequencies. These types of measurements cannot be obtained using conventional cavity perturbation methods, particularly for liquids or powdered or granulated solids that require a surrounding container. To solve this problem, a model has been developed for the resonant frequency and quality factor of a cylindrical microwave cavity containing concentric cylindrical samples. This model can then be inverted to obtain the real and imaginary dielectric constants of the material of interest. This approach is based on using exact solutions to Maxwell's equations for the resonant properties of a cylindrical microwave cavity and also using the effective electrical conductivity of the cavity walls that is estimated from the measured empty cavity quality factor. This new approach calculates the complex resonant frequency and associated electromagnetic fields for a cylindrical microwave cavity with lossy walls that is loaded with concentric, axially aligned, lossy dielectric cylindrical samples. In this approach, the calculated complex resonant frequency, consisting of real and imaginary parts, is related to the experimentally measured quantities. Because this approach uses Maxwell's equations to determine the perturbed electromagnetic fields in the cavity with the material(s) inserted, one can calculate the expected wall losses using the fields for the loaded cavity rather than just depending on the value of the fields obtained from the empty cavity quality factor. These additional calculations provide a more accurate determination of the complex dielectric constant of the material being studied. The improved approach will be particularly important when working with larger samples or samples with larger dielectric constants that will further perturb the cavity electromagnetic fields. Also, this approach enables a larger sample of interest, such as a liquid or a powdered or granulated solid, to be held inside a cylindrical container.
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
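The idea of losslessly re-coding DPCM deltas can be illustrated with a short sketch; the snippet below computes first-difference deltas along each scan line and passes them to a general-purpose lossless coder, whereas the study itself compared Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice coding on the actual ATARS deltas.

```python
import zlib
import numpy as np

def compress_dpcm_deltas(image: np.ndarray) -> bytes:
    """Take per-row first differences (a stand-in for DPCM deltas) of an 8-bit
    image and losslessly compress them; no additional error is introduced."""
    rows = image.astype(np.int16)
    deltas = np.diff(rows, axis=1, prepend=rows[:, :1])  # small values for smooth imagery
    return zlib.compress(deltas.tobytes())
```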
Curiosity's Mars Hand Lens Imager (MAHLI) Investigation
Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter
2012-01-01
The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts, a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.
Overview of the Multi-Spectral Imager on the NEAR spacecraft
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1996-07-01
The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft to determine morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9 ° × 2.25 ° field of view. The CCD is passively cooled and the 537×244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced aperture viewport permits full field of view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented and design parameters and trade-offs are discussed.
Insulin Resistance: Regression and Clustering
Yoon, Sangho; Assimes, Themistocles L.; Quertermous, Thomas; Hsiao, Chin-Fu; Chuang, Lee-Ming; Hwu, Chii-Min; Rajaratnam, Bala; Olshen, Richard A.
2014-01-01
In this paper we try to define insulin resistance (IR) precisely for a group of Chinese women. Our definition deliberately does not depend upon body mass index (BMI) or age, although in other studies, with particular random effects models quite different from models used here, BMI accounts for a large part of the variability in IR. We accomplish our goal through application of Gauss mixture vector quantization (GMVQ), a technique for clustering that was developed for application to lossy data compression. Defining data come from measurements that play major roles in medical practice. A precise statement of what the data are is in Section 1. Their family structures are described in detail. They concern levels of lipids and the results of an oral glucose tolerance test (OGTT). We apply GMVQ to residuals obtained from regressions of outcomes of an OGTT and lipids on functions of age and BMI that are inferred from the data. A bootstrap procedure developed for our family data supplemented by insights from other approaches leads us to believe that two clusters are appropriate for defining IR precisely. One cluster consists of women who are IR, and the other of women who seem not to be. Genes and other features are used to predict cluster membership. We argue that prediction with “main effects” is not satisfactory, but prediction that includes interactions may be. PMID:24887437
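A rough outline of the regress-then-cluster procedure, using scikit-learn's Gaussian mixture model as a stand-in for GMVQ; the array names are hypothetical and the bootstrap model-selection step that fixed the number of clusters at two is omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture

def cluster_ogtt_residuals(age_bmi_terms: np.ndarray,
                           ogtt_and_lipids: np.ndarray,
                           n_clusters: int = 2) -> np.ndarray:
    """Regress OGTT/lipid outcomes on functions of age and BMI, then cluster
    the residuals; cluster labels are the candidate IR / non-IR groups."""
    fit = LinearRegression().fit(age_bmi_terms, ogtt_and_lipids)
    residuals = ogtt_and_lipids - fit.predict(age_bmi_terms)
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                          random_state=0)
    return gmm.fit_predict(residuals)
```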
Zhang, Yi; Huang, Yi; Zhang, Tengfei; Chang, Huicong; Xiao, Peishuang; Chen, Honghui; Huang, Zhiyu; Chen, Yongsheng
2015-03-25
The broadband and tunable high-performance microwave absorption properties of an ultralight and highly compressible graphene foam (GF) are investigated. Simply via physical compression, the microwave absorption performance can be tuned. The qualified bandwidth coverage of 93.8% (60.5 GHz/64.5 GHz) is achieved for the GF under 90% compressive strain (1.0 mm thickness). This is mainly due to the 3D conductive network. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Aggregating quantum repeaters for the quantum internet
NASA Astrophysics Data System (ADS)
Azuma, Koji; Kato, Go
2017-09-01
The quantum internet holds promise for accomplishing quantum teleportation and unconditionally secure communication freely between arbitrary clients all over the globe, as well as the simulation of quantum many-body systems. For such a quantum internet protocol, a general fundamental upper bound on the obtainable entanglement or secret key has been derived [K. Azuma, A. Mizutani, and H.-K. Lo, Nat. Commun. 7, 13523 (2016), 10.1038/ncomms13523]. Here we consider its converse problem. In particular, we present a universal protocol constructible from any given quantum network, which is based on running quantum repeater schemes in parallel over the network. For arbitrary lossy optical channel networks, our protocol has no scaling gap with the upper bound, even based on existing quantum repeater schemes. In an asymptotic limit, our protocol works as an optimal entanglement or secret-key distribution over any quantum network composed of practical channels such as erasure channels, dephasing channels, bosonic quantum amplifier channels, and lossy optical channels.
The Linear Bicharacteristic Scheme for Computational Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Chan, Siew-Loong
2000-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media, and treatment of perfect electrical conductors (PECs) are shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.
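For readers unfamiliar with the reference algorithm, the standard 1-D FDTD update for a lossy dielectric (the scheme the LBS is compared against above) is sketched below; grid size, material profile, and source are illustrative, and this is not an implementation of the LBS itself.

```python
import numpy as np

def fdtd_1d_lossy(nx=400, nt=800, dx=1e-3, sigma=None, eps_r=None):
    """Leapfrog FDTD update of Ez/Hy on a 1-D grid with conductive loss."""
    c0, eps0, mu0 = 299792458.0, 8.854e-12, 4e-7 * np.pi
    dt = 0.5 * dx / c0                                   # CFL-stable time step
    sigma = np.zeros(nx) if sigma is None else sigma
    eps = eps0 * (np.ones(nx) if eps_r is None else eps_r)
    ez, hy = np.zeros(nx), np.zeros(nx - 1)
    ca = (1 - sigma * dt / (2 * eps)) / (1 + sigma * dt / (2 * eps))
    cb = (dt / (eps * dx)) / (1 + sigma * dt / (2 * eps))
    for n in range(nt):
        hy += dt / (mu0 * dx) * (ez[1:] - ez[:-1])       # H between E nodes
        ez[1:-1] = ca[1:-1] * ez[1:-1] + cb[1:-1] * (hy[1:] - hy[:-1])
        ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
    return ez
```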
A radial transmission line material measurement apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warne, L.K.; Moyer, R.D.; Koontz, T.E.
1993-05-01
A radial transmission line material measurement sample apparatus (sample holder, offset short standards, measurement software, and instrumentation) is described which has been proposed, analyzed, designed, constructed, and tested. The purpose of the apparatus is to obtain accurate surface impedance measurements of lossy, possibly anisotropic, samples at low and intermediate frequencies (VHF and low UHF). The samples typically take the form of sections of the material coatings on conducting objects. Such measurements thus provide the key input data for predictive numerical scattering codes. Prediction of the sample surface impedance from the coaxial input impedance measurement is carried out by two techniques. The first is an analytical model for the coaxial-to-radial transmission line junction. The second is an empirical determination of the bilinear transformation model of the junction by the measurement of three full standards. The standards take the form of three offset shorts (and an additional lossy Salisbury load), which have also been constructed. The accuracy achievable with the device appears to be near one percent.
A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.
Extrinsic and Intrinsic Frequency Dispersion of High-k Materials in Capacitance-Voltage Measurements
Tao, J.; Zhao, C.Z.; Zhao, C.; Taechakumput, P.; Werner, M.; Taylor, S.; Chalker, P. R.
2012-01-01
In capacitance-voltage (C-V) measurements, frequency dispersion in high-k dielectrics is often observed. The frequency dependence of the dielectric constant (k-value), that is the intrinsic frequency dispersion, could not be assessed before suppressing the effects of extrinsic frequency dispersion, such as the effects of the lossy interfacial layer (between the high-k thin film and silicon substrate) and the parasitic effects. The effect of the lossy interfacial layer on frequency dispersion was investigated and modeled based on a dual frequency technique. The significance of parasitic effects (including series resistance and the back metal contact of the metal-oxide-semiconductor (MOS) capacitor) on frequency dispersion was also studied. The effect of surface roughness on frequency dispersion is also discussed. After taking extrinsic frequency dispersion into account, the relaxation behavior can be modeled using the Curie-von Schweidler (CS) law, the Kohlrausch-Williams-Watts (KWW) relationship and the Havriliak-Negami (HN) relationship. Dielectric relaxation mechanisms are also discussed. PMID:28817021
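For reference, the three relaxation models named above take the standard forms shown below (generic textbook expressions; the fitted parameter values for the high-k films studied are not reproduced here):

```latex
\begin{aligned}
\text{Curie--von Schweidler:}\quad & J(t) \propto t^{-n}, \qquad 0 < n < 1,\\
\text{Kohlrausch--Williams--Watts:}\quad & \phi(t) = \exp\!\left[-\left(t/\tau\right)^{\beta}\right], \qquad 0 < \beta \le 1,\\
\text{Havriliak--Negami:}\quad & \varepsilon^{*}(\omega) = \varepsilon_{\infty} + \frac{\Delta\varepsilon}{\left[1 + \left(i\omega\tau\right)^{\alpha}\right]^{\beta}}.
\end{aligned}
```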
Zhao, Lei; Cui, Tie Jun
2005-12-01
An enhancement of the specific absorption rate (SAR) inside a lossy dielectric object has been investigated theoretically based on a slab of left-handed medium (LHM). In order to make an accurate analysis of the SAR distribution, a proper Green's function involving the LHM slab is proposed, from which an integral equation for the electric field inside the dielectric object is derived. Such an integral equation has been solved accurately and efficiently using the conjugate gradient method and the fast Fourier transform. We have performed extensive numerical experiments on the SAR distributions inside the dielectric object excited by a line source, with and without the LHM slab. Numerical experiments show that the SAR can be enhanced tremendously when the LHM slab is involved due to the proper usage of strong surface waves, which will be helpful in potential biomedical applications such as hyperthermia. The physical insight for such a phenomenon has also been discussed.
Effect of compressive force on PEM fuel cell performance
NASA Astrophysics Data System (ADS)
MacDonald, Colin Stephen
Polymer electrolyte membrane (PEM) fuel cells possess the potential, as a zero-emission power source, to replace the internal combustion engine as the primary option for transportation applications. Though there are a number of obstacles to vast PEM fuel cell commercialization, such as high cost and limited durability, there has been significant progress in the field to achieve this goal. Experimental testing and analysis of fuel cell performance has been an important tool in this advancement. Experimental studies of the PEM fuel cell not only identify unfiltered performance response to manipulation of variables, but also aid in the advancement of fuel cell modelling, by allowing for validation of computational schemes. Compressive force used to contain a fuel cell assembly can play a significant role in how effectively the cell functions, the most obvious example being to ensure proper sealing within the cell. Compression can have a considerable impact on cell performance beyond the sealing aspects. The force can manipulate the ability to deliver reactants and the electrochemical functions of the cell, by altering the layers in the cell susceptible to this force. For these reasons an experimental study was undertaken, presented in this thesis, with specific focus placed on cell compression; in order to study its effect on reactant flow fields and performance response. The goal of the thesis was to develop a consistent and accurate general test procedure for the experimental analysis of a PEM fuel cell in order to analyse the effects of compression on performance. The factors potentially affecting cell performance, which were a function of compression, were identified as: (1) Sealing and surface contact; (2) Pressure drop across the flow channel; (3) Porosity of the GDL. Each factor was analysed independently in order to determine the individual contribution to changes in performance. An optimal degree of compression was identified for the cell configuration in question and the performance gains from the aforementioned compression factors were quantified. The study provided a considerable amount of practical and analytical knowledge in the area of cell compression and shed light on the importance of precision compressive control within the PEM fuel cell.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware with area, power, and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.
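The on-board half of the scheme reduces to two averaging operations; the sketch below shows one simple way to produce the two downlinked products (block-averaging for the spatial degradation, band-grouping for the spectral degradation), with illustrative factors and the assumption that the cube dimensions divide evenly.

```python
import numpy as np

def degrade_for_downlink(hsi: np.ndarray, spatial_factor: int = 4,
                         bands_per_group: int = 10):
    """Split a (rows, cols, bands) cube into a low-spatial-resolution
    hyperspectral cube and a high-spatial-resolution multispectral image."""
    rows, cols, bands = hsi.shape
    lr_hsi = hsi.reshape(rows // spatial_factor, spatial_factor,
                         cols // spatial_factor, spatial_factor,
                         bands).mean(axis=(1, 3))
    hr_msi = hsi.reshape(rows, cols,
                         bands // bands_per_group, bands_per_group).mean(axis=3)
    return lr_hsi, hr_msi    # fused on the ground to reconstruct the original cube
```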
Design of Miniaturized Double-Negative Material for Specific Absorption Rate Reduction in Human Head
Faruque, Mohammad Rashed Iqbal; Islam, Mohammad Tariqul
2014-01-01
In this study, a double-negative triangular metamaterial (TMM) structure, which exhibits a resonant electric response at microwave frequencies, was developed by etching two concentric triangular rings of conducting materials. A finite-difference time-domain method in conjunction with the lossy-Drude model was used in this study. Simulations were performed using the CST Microwave Studio. The specific absorption rate (SAR) reduction technique is discussed, and the effects of the position of attachment, the distance, and the size of the metamaterials on the SAR reduction are explored. The performance of the double-negative TMMs in cellular phones was also measured in the cheek and the tilted positions using the COMOSAR system. The TMMs achieved a 52.28% reduction for the 10 g SAR. These results provide a guideline to determine the triangular design of metamaterials with the maximum SAR reducing effect for a mobile phone. PMID:25350398
NASA Technical Reports Server (NTRS)
Schwerdt, Helen N.; Chae, Junseok; Miranda, Felix A.
2012-01-01
This paper reports the wireless performance of a biocompatible fully passive microsystem implanted in phantom media simulating the dispersive dielectric properties of the human head, for potential application in recording cortical neuropotentials. Fully passive wireless operation is achieved by means of backscattering electromagnetic (EM) waves carrying third-order harmonic mixing products (2f0 ± fm = 4.4-4.9 GHz) containing targeted neuropotential signals (fm ≈ 1-1000 Hz). The microsystem is enclosed in 4 micrometer thick parylene-C for biocompatibility and has a footprint of 4 millimeters x 12 millimeters x 500 micrometers. Preliminary testing of the microsystem implanted in the lossy biological simulating media results in signal-to-noise ratios (SNR) near 22 (SNR ≈ 38 in free space) for millivolt-level neuropotentials, demonstrating the potential for fully passive wireless microsystems in implantable medical applications.
New Report Compares Performance of Compressed Natural Gas Refuse Haulers to Diesel-Powered Trucks
A new report compares the performance of compressed natural gas (CNG) refuse haulers in New York City to similar diesel-powered trucks.
Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob
2011-10-01
Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Investigations on Absorber Materials at Cryogenic Temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marhauser, Frank; Elliott, Thomas; Rimmer, Robert
2009-05-01
In the framework of the 12 GeV upgrade project for the Continuous Electron Beam Accelerator Facility (CEBAF) improvements are being made to refurbish cryomodules housing Thomas Jefferson National Accelerator Facility's (JLab) original 5-cell cavities. Recently we have started to look into a possible simplification of the existing Higher Order Mode (HOM) absorber design combined with the aim to find alternative material candidates. The absorbers are implemented in two HOM-waveguides immersed in the helium bath and operate at 2 K temperature. We have built a cryogenic setup to perform measurements on sample load materials to investigate their lossy characteristics and variations from room temperature down to 2 K. Initial results are presented in this paper.
Data compression for full motion video transmission
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Sayood, Khalid
1991-01-01
Clearly transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally several approaches which could satisfy some of the compression requirements are presented and possible future approaches which show promise for more substantial compression performance improvement are discussed.
Prechamber Compression-Ignition Engine Performance
NASA Technical Reports Server (NTRS)
Moore, Charles S; Collins, John H , Jr
1938-01-01
Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.
Classification Techniques for Digital Map Compression
1989-03-01
classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding ... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when ... investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the ...
Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
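A minimal sketch of the factored representation, using a truncated SVD (equivalent to PCA on mean-centered data) and keeping only the leading factors; the patented algorithm adds block-wise processing and other refinements not shown here.

```python
import numpy as np

def spectrally_compress(cube: np.ndarray, n_factors: int = 10):
    """Factor a (rows, cols, channels) image into scores and loadings and
    return them together with the rank-reduced reconstruction."""
    rows, cols, channels = cube.shape
    data = cube.reshape(-1, channels).astype(np.float64)
    mean = data.mean(axis=0)
    u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    scores = u[:, :n_factors] * s[:n_factors]        # compressed representation
    loadings = vt[:n_factors]
    reconstruction = (scores @ loadings + mean).reshape(rows, cols, channels)
    return scores, loadings, reconstruction
```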
Martin, Philip; Theobald, Peter; Kemp, Alison; Maguire, Sabine; Maconochie, Ian; Jones, Michael
2013-08-01
European and Advanced Paediatric Life Support training courses. Sixty-nine certified CPR providers. CPR providers were randomly allocated to a 'no-feedback' or 'feedback' group, performing two-thumb and two-finger chest compressions on a "physiological", instrumented resuscitation manikin. Baseline data was recorded without feedback, before chest compressions were repeated with one group receiving feedback. Indices were calculated that defined chest compression quality, based upon comparison of the chest wall displacement to the targets of four, internationally recommended parameters: chest compression depth, release force, chest compression rate and compression duty cycle. Baseline data were consistent with other studies, with <1% of chest compressions performed by providers simultaneously achieving the target of the four internationally recommended parameters. During the 'experimental' phase, 34 CPR providers benefitted from the provision of 'real-time' feedback which, on analysis, coincided with a statistical improvement in compression rate, depth and duty cycle quality across both compression techniques (all measures: p<0.001). Feedback enabled providers to simultaneously achieve the four targets in 75% (two-finger) and 80% (two-thumb) of chest compressions. Real-time feedback produced a dramatic increase in the quality of chest compression (i.e. from <1% to 75-80%). If these results transfer to a clinical scenario this technology could, for the first time, support providers in consistently performing accurate chest compressions during infant CPR and thus potentially improving clinical outcomes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Lee, Seong Hwa; Ryu, Ji Ho; Min, Mun Ki; Kim, Yong In; Park, Maeng Real; Yeom, Seok Ran; Han, Sang Kyoon; Park, Seong Wook
2016-08-01
When performing cardiopulmonary resuscitation (CPR), the 2010 American Heart Association guidelines recommend a chest compression rate of at least 100/min, whereas the 2010 European Resuscitation Council guidelines recommend a rate of between 100 and 120/min. The aim of this study was to examine the rate of chest compression that fulfilled various quality indicators, thereby determining the optimal rate of compression. Thirty-two trainee emergency medical technicians and six paramedics were enrolled in this study. All participants had been trained in basic life support. Each participant performed 2 min of continuous compressions on a skill reporter manikin, while listening to a metronome sound at rates of 100, 120, 140, and 160 beats/min, in a random order. Mean compression depth, incomplete chest recoil, and the proportion of correctly performed chest compressions during the 2 min were measured and recorded. The rate of incomplete chest recoil was lower at compression rates of 100 and 120/min compared with that at 160/min (P=0.001). The numbers of compressions that fulfilled the criteria for high-quality CPR at a rate of 120/min were significantly higher than those at 100/min (P=0.016). The number of high-quality CPR compressions was the highest at a compression rate of 120/min, and increased incomplete recoil occurred with increasing compression rate. However, further studies are needed to confirm the results.
NASA Technical Reports Server (NTRS)
Vandermey, Nancy E.; Morris, Don H.; Masters, John E.
1991-01-01
Damage initiation and growth under compression-compression fatigue loading were investigated for a stitched uniweave material system with an underlying AS4/3501-6 quasi-isotropic layup. Performance of unnotched specimens having stitch rows at either 0 degree or 90 degrees to the loading direction was compared. Special attention was given to the effects of stitching related manufacturing defects. Damage evaluation techniques included edge replication, stiffness monitoring, x-ray radiography, residual compressive strength, and laminate sectioning. It was found that the manufacturing defect of inclined stitches had the greatest adverse effect on material performance. Zero degree and 90 degree specimen performances were generally the same. While the stitches were the source of damage initiation, they also slowed damage propagation both along the length and across the width and affected through-the-thickness damage growth. A pinched layer zone formed by the stitches particularly affected damage initiation and growth. The compressive failure mode was transverse shear for all specimens, both in static compression and fatigue cycling effects.
High-speed and high-ratio referential genome compression.
Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan
2017-11-01
The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 217 to 82 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust to deal with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genome Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source codes of our algorithm are freely available for academic and non-commercial use. They can be downloaded from https://github.com/yuansliu/HiRGC. Supplementary data are available at Bioinformatics online.
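The first stage mentioned above, 2-bit encoding of the nucleotide alphabet, is simple to illustrate; the sketch below packs four bases per byte and is not taken from the HiRGC source, which additionally builds a hash index over the reference for the greedy-matching step.

```python
import numpy as np

_CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_2bit(sequence: str) -> np.ndarray:
    """Encode an A/C/G/T string at 2 bits per base (four bases per byte)."""
    codes = np.array([_CODE[base] for base in sequence], dtype=np.uint8)
    codes = np.pad(codes, (0, (-len(codes)) % 4))          # pad to a multiple of 4
    quads = codes.reshape(-1, 4)
    return (quads[:, 0] << 6) | (quads[:, 1] << 4) | (quads[:, 2] << 2) | quads[:, 3]

print(pack_2bit("ACGTACGA"))   # -> [27 24]
```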
Liu, Shawn; Vaillancourt, Christian; Kasaboski, Ann; Taljaard, Monica
2016-11-01
This study sought to measure bystander fatigue and cardiopulmonary resuscitation (CPR) quality after five minutes of CPR using the continuous chest compression (CCC) versus the 30:2 chest compression to ventilation method in older lay persons, a population most likely to perform CPR on cardiac arrest victims. This randomized crossover trial took place at three tertiary care hospitals and a seniors' center. Participants were aged ≥55 years without significant physical limitations (frailty score ≤3/7). They completed two 5-minute CPR sessions (using 30:2 and CCC) on manikins; sessions were separated by a rest period. We used concealed block randomization to determine CPR method order. Metronome feedback maintained a compression rate of 100/minute. We measured heart rate (HR), mean arterial pressure (MAP), and Borg Exertion Scale. CPR quality measures included total number of compressions and number of adequate compressions (depth ≥5 cm). Sixty-three participants were enrolled: mean age 70.8 years, female 66.7%, past CPR training 60.3%. Bystander fatigue was similar between CPR methods: mean difference in HR -0.59 (95% CI -3.51-2.33), MAP 1.64 (95% CI -0.23-3.50), and Borg 0.46 (95% CI 0.07-0.84). Compared to 30:2, participants using CCC performed more chest compressions (480.0 v. 376.3, mean difference 107.7; p<0.0001) and more adequate chest compressions (381.5 v. 324.9, mean difference 62.0; p=0.0001), although good compressions/minute declined significantly faster with the CCC method (p=0.0002). CPR quality decreased significantly faster when performing CCC compared to 30:2. However, performing CCC produced more adequate compressions overall with a similar level of fatigue compared to the 30:2 method.
Usha, Sruthi P; Gupta, Banshi D
2018-03-15
A lossy mode resonance (LMR) based sensor for urinary p-cresol testing on an optical fiber substrate is developed. The sensor probe fabrication includes dip coating of a nanocomposite layer of zinc oxide and molybdenum sulphide (ZnO/MoS2) over the unclad core of the optical fiber as the transducer layer, followed by a layer of molecular imprinted polymer (MIP) as the recognition medium. The addition of molybdenum sulphide in the transducer layer increases the absorption of light in the medium, which enhances the LMR properties of zinc oxide, thereby increasing the conductivity and hence the sensitivity of the sensor. The sensor probe is characterized for p-cresol concentrations ranging from 0 µM (reference sample) to 1000 µM in artificially prepared urine. Optimization of various probe fabrication parameters is carried out to bring out the sensor's optimal performance, with a sensitivity of 11.86 nm/µM and a limit of detection (LOD) of 28 nM. A two-order-of-magnitude improvement in LOD is obtained compared to the recently reported p-cresol sensor. The proposed sensor possesses a response time of 15 s, which is 8 times better than that reported in the literature utilizing an electrochemical method. Its response time is also better than that of the p-cresol sensor currently available on the market for the medical field. Thus, with a fast response, significant stability and repeatability, the proposed sensor holds practical implementation possibilities in the medical field. Further, the realization of the sensor probe on an optical fiber substrate adds remote sensing and online monitoring feasibilities. Copyright © 2017 Elsevier B.V. All rights reserved.
Dynamic compressive properties obtained from a split Hopkinson pressure bar test of Boryeong shale
NASA Astrophysics Data System (ADS)
Kang, Minju; Cho, Jung-Woo; Kim, Yang Gon; Park, Jaeyeong; Jeong, Myeong-Sik; Lee, Sunghak
2016-09-01
Dynamic compressive properties of a Boryeong shale were evaluated by using a split Hopkinson pressure bar, and were compared with those of a Hwangdeung granite, which is a typical hard rock. The results indicated that dynamic compressive loading reduced the resistance to fracture. The dynamic compressive strength was lower in the shale than in the granite, and was raised with increasing strain rate by a microcracking effect as well as a strain-rate strengthening effect. Since the number of microcracked fragments increased with increasing strain rate in the shale, which has laminated weakness planes, the shale showed better fragmentation performance than the granite at high strain rates. The effect of the transversely isotropic plane on compressive strength decreased with increasing strain rate, which is desirable for increasing the fragmentation performance. Thus, the shale can be more reliably applied to industrial areas requiring good fragmentation performance as the striking speed of drilling or hydraulic fracturing machines increases. The present dynamic compressive test effectively evaluated the fragmentation performance as well as the compressive strength and strain energy density by controlling the air pressure, and provided insight into which rock is more readily fragmented under dynamic processing conditions such as high-speed drilling and blasting.
Bringing light into the dark: effects of compression clothing on performance and recovery.
Born, Dennis-Peter; Sperlich, Billy; Holmberg, Hans-Christer
2013-01-01
To assess original research addressing the effect of the application of compression clothing on sport performance and recovery after exercise, a computer-based literature search was performed in July 2011 using the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science. Studies examining the effect of compression clothing on endurance, strength and power, motor control, and physiological, psychological, and biomechanical parameters during or after exercise were included, and means and measures of variability of the outcome measures were recorded to estimate the effect size (Hedges g) and associated 95% confidence intervals for comparisons of experimental (compression) and control trials (noncompression). The characteristics of the compression clothing, participants, and study design were also extracted. The original research from peer-reviewed journals was examined using the Physiotherapy Evidence Database (PEDro) Scale. Results indicated small effect sizes for the application of compression clothing during exercise for short-duration sprints (10-60 m), vertical-jump height, extending time to exhaustion (such as running at VO2max or during incremental tests), and time-trial performance (3-60 min). When compression clothing was applied for recovery purposes after exercise, small to moderate effect sizes were observed in recovery of maximal strength and power, especially vertical-jump exercise; reductions in muscle swelling and perceived muscle pain; blood lactate removal; and increases in body temperature. These results suggest that the application of compression clothing may assist athletic performance and recovery in given situations, with consideration of the magnitude and practical relevance of the effects.
Analysis of the operation of the SCD Response intermittent compression system.
Morris, Rh J; Griffiths, H; Woodcock, J P
2002-01-01
The work assessed the performance of the Kendall SCD Response intermittent pneumatic compression system for deep vein thrombosis prophylaxis, which claimed to set its cycle according to the blood flow characteristics of individual patient limbs. A series of tests measured the system response in various situations, including application to the limbs of healthy volunteers, and to false limbs. Practical experimentation and theoretical analysis were used to investigate influences on the system functioning other than blood flow. The system tested did not seem to perform as claimed, being unable to distinguish between real and fake limbs. The intervals between compressions were set to times unrealistic for venous refill, with temperature changes in the cuff being the greatest influence on performance. Combining the functions of compression and the measurement of the effects of compression in the same air bladder makes temperature artefacts unavoidable and can cause significant errors in the inter-compression interval.
NASA Technical Reports Server (NTRS)
Sanders, J. C.; Mendelson, Alexander
1945-01-01
Small high-speed single-cylinder compression-ignition engines were tested to determine their performance characteristics under high supercharging. Calculations were made on the energy available in the exhaust gas of the compression-ignition engines. The maximum power at any given maximum cylinder pressure was obtained when the compression pressure was equal to the maximum cylinder pressure. Constant-pressure combustion was found possible at an engine speed of 2200 rpm. Exhaust pressures and temperatures were determined from an analysis of indicator cards. The analysis showed that, at rich mixtures with the exhaust back pressure equal to the inlet-air pressure, there is excess energy available for driving a turbine over that required for supercharging. The presence of this excess energy indicates that a highly supercharged compression-ignition engine might be desirable as a compressor and combustion chamber for a turbine.
Gehrig, Nicolas; Dragotti, Pier Luigi
2009-03-01
In this paper, we study the sampling and the distributed compression of the data acquired by a camera sensor network. The effective design of these sampling and compression schemes requires, however, the understanding of the structure of the acquired data. To this end, we show that the a priori knowledge of the configuration of the camera sensor network can lead to an effective estimation of such structure and to the design of effective distributed compression algorithms. For idealized scenarios, we derive the fundamental performance bounds of a camera sensor network and clarify the connection between sampling and distributed compression. We then present a distributed compression algorithm that takes advantage of the structure of the data and that outperforms independent compression algorithms on real multiview images.
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), via the addition of encryption in the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
Cheremkhin, Pavel A; Kurbatova, Ekaterina A
2018-01-01
Compression of digital holograms can significantly help with the storage of objects and data in 2D and 3D form, their transmission, and their reconstruction. Compression of standard images by methods based on wavelets allows high compression ratios (up to 20-50 times) with minimum loss of quality. In the case of digital holograms, direct application of wavelets does not allow high values of compression to be obtained. However, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms for compression of off-axis digital holograms is considered. The combined technique based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further additional compression of wavelet coefficients by thresholding and quantization is considered. Numerical experiments on reconstruction of images from the compressed holograms are performed. A comparative analysis of the applicability of various wavelets and methods of additional compression of wavelet coefficients is performed. Optimum parameters for compression of holograms by these methods can be estimated. The size of the holographic information was decreased by up to 190 times.
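A minimal sketch of the coefficient-compression stage described here - wavelet decomposition of one component, thresholding of small coefficients, and quantization of the survivors - is given below using PyWavelets. The wavelet choice, threshold fraction and bit depth are illustrative assumptions, and the zero/twin-order filtering of the Fourier spectrum is omitted.

```python
# Sketch of the thresholding + quantization step applied to wavelet coefficients of one
# hologram component (e.g. the amplitude of the filtered spectrum). Wavelet, threshold
# fraction and bit depth are illustrative choices, not the paper's parameters.
import numpy as np
import pywt

def compress_component(component, wavelet='db4', level=3, keep_fraction=0.05, bits=8):
    coeffs = pywt.wavedec2(component, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Hard threshold: keep only the largest `keep_fraction` of coefficients.
    thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    # Uniform quantization of the retained coefficients.
    scale = np.abs(arr).max() / (2 ** (bits - 1) - 1) or 1.0
    q = np.round(arr / scale).astype(np.int16)
    return q, scale, slices, wavelet

def decompress_component(q, scale, slices, wavelet):
    arr = q.astype(np.float64) * scale
    coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
    return pywt.waverec2(coeffs, wavelet)

rng = np.random.default_rng(0)
amplitude = rng.random((256, 256))          # stand-in for a hologram amplitude map
q, scale, slices, wv = compress_component(amplitude)
restored = decompress_component(q, scale, slices, wv)
print("nonzero coefficients kept:", np.count_nonzero(q), "of", q.size)
```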
Semeraro, Federico; Frisoli, Antonio; Loconsole, Claudio; Bannò, Filippo; Tammaro, Gaetano; Imbriaco, Guglielmo; Marchetti, Luca; Cerchiari, Erga L
2013-04-01
Outcome after cardiac arrest is dependent on the quality of chest compressions (CC). A great number of devices have been developed to provide guidance during CPR. The present study evaluates a new CPR feedback system (Mini-VREM: Mini-Virtual Reality Enhanced Mannequin) designed to improve CC during training. The Mini-VREM system consists of a Kinect(®) (Microsoft, Redmond, WA, USA) motion sensing device and specifically developed software to provide audio-visual feedback. Mini-VREM was connected to a commercially available mannequin (Laerdal Medical, Stavanger, Norway). Eighty trainees (healthcare professionals and lay people) volunteered in this randomised crossover pilot study. All subjects performed a 2 min CC trial, a 1 h pause and a second 2 min CC trial. The first group (FB/NFB, n=40) performed CC with Mini-VREM feedback (FB) followed by CC without feedback (NFB). The second group (NFB/FB, n=40) performed vice versa. Primary endpoints: adequate compressions (compression rate between 100 and 120 min(-1) and compression depth between 50 and 60 mm); compression rate within 100-120 min(-1); compression depth within 50-60 mm. When compared to the performance without feedback, with Mini-VREM feedback compressions were more adequate (FB 35.78% vs. NFB 7.27%, p<0.001) and more compressions achieved target rate (FB 72.04% vs. 31.42%, p<0.001) and target depth (FB 47.34% vs. 24.87%, p=0.002). The participants perceived the system to be easy to use with effective feedback. The Mini-VREM system was able to significantly improve CC performance by healthcare professionals and by lay people in a simulated CA scenario, in terms of compression rate and depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Stone, Michael A.; Moore, Brian C. J.
2003-08-01
Using a "noise-vocoder" cochlear implant simulator [Shannon et al., Science 270, 303-304 (1995)], the effect of the speed of dynamic range compression on speech intelligibility was assessed, using normal-hearing subjects. The target speech had a level 5 dB above that of the competing speech. Initially, baseline performance was measured with no compression active, using between 4 and 16 processing channels. Then, performance was measured using a fast-acting compressor and a slow-acting compressor, each operating prior to the vocoder simulation. The fast system produced significant gain variation over syllabic timescales. The slow system produced significant gain variation only over the timescale of sentences. With no compression active, about six channels were necessary to achieve 50% correct identification of words in sentences. Sixteen channels produced near-maximum performance. Slow-acting compression produced no significant degradation relative to the baseline. However, fast-acting compression consistently reduced performance relative to that for the baseline, over a wide range of performance levels. It is suggested that fast-acting compression degrades performance for two reasons: (1) because it introduces correlated fluctuations in amplitude in different frequency bands, which tends to produce perceptual fusion of the target and background sounds, and (2) because it reduces amplitude modulation depth and intensity contrasts.
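The fast/slow distinction studied here comes down to the attack and release time constants of the compressor's envelope follower. The sketch below is a generic feed-forward broadband compressor, not the processing chain used in the study; the ratio, threshold and time constants are illustrative assumptions.

```python
# Minimal broadband dynamic-range compressor: an envelope follower with separate
# attack/release time constants drives the gain, so short constants ("fast" compression)
# track syllabic level fluctuations while long ones ("slow") follow only sentence-level
# changes. Parameters are illustrative, not those of the study above.
import numpy as np

def compress(signal, fs, ratio=3.0, attack_ms=5.0, release_ms=50.0, threshold=0.1):
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(signal)
    for n, x in enumerate(signal):
        level = abs(x)
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level          # smoothed envelope
        gain = (env / threshold) ** (1.0 / ratio - 1.0) if env > threshold else 1.0
        out[n] = gain * x
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t) * np.where(t < 0.5, 0.05, 0.8)   # step in level
fast = compress(tone, fs, attack_ms=5, release_ms=50)
slow = compress(tone, fs, attack_ms=100, release_ms=1000)
print(fast[int(0.75 * fs)], slow[int(0.75 * fs)])
```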
A real-time ECG data compression and transmission algorithm for an e-health device.
Lee, SangJoon; Kim, Jungkuk; Lee, Myoungho
2011-09-01
This paper introduces a real-time data compression and transmission algorithm between e-health terminals for a periodic ECG signal. The proposed algorithm consists of five compression procedures and four reconstruction procedures. In order to evaluate the performance of the proposed algorithm, the algorithm was applied to all 48 recordings of the MIT-BIH arrhythmia database, and the compression ratio (CR), percent root mean square difference (PRD), percent root mean square difference normalized (PRDN), rms, SNR, and quality score (QS) values were obtained. The results showed that the CR was 27.9:1 and the PRD was 2.93 on average for all 48 data instances with a 15% window size. In addition, the performance of the algorithm was compared to those of similar algorithms introduced recently by others. It was found that the proposed algorithm showed clearly superior performance in all 48 data instances at a compression ratio lower than 15:1, whereas it showed similar or slightly inferior PRD performance for a data compression ratio higher than 20:1. In light of the fact that the similarity with the original data becomes meaningless when the PRD is higher than 2, the proposed algorithm shows significantly better performance compared to the performance levels of other algorithms. Moreover, because the algorithm can compress and transmit data in real time, it can serve as an optimal biosignal data transmission method for limited-bandwidth communication between e-health devices.
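The fidelity scores reported here (CR, PRD, PRDN, SNR) have standard definitions; the sketch below only computes these metrics for an original and reconstructed signal and does not implement the paper's compression algorithm. The test signal and bit budget are placeholders.

```python
# Standard fidelity metrics used to score ECG compression: compression ratio (CR),
# percent RMS difference (PRD), mean-normalized PRD (PRDN) and SNR.
import numpy as np

def metrics(original, reconstructed, original_bits, compressed_bits):
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    err = np.sum((x - y) ** 2)
    cr = original_bits / compressed_bits
    prd = 100.0 * np.sqrt(err / np.sum(x ** 2))
    prdn = 100.0 * np.sqrt(err / np.sum((x - x.mean()) ** 2))
    snr = 10.0 * np.log10(np.sum((x - x.mean()) ** 2) / err)
    return {'CR': cr, 'PRD': prd, 'PRDN': prdn, 'SNR_dB': snr}

x = np.sin(np.linspace(0, 20 * np.pi, 2000))           # stand-in for an ECG segment
y = x + 0.01 * np.random.default_rng(1).standard_normal(x.size)
print(metrics(x, y, original_bits=2000 * 11, compressed_bits=800))
```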
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves compression schemes, which provides better tolerance of conditions with a high BER.
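To make the delta-encoding mechanism referred to above concrete, the toy sketch below sends only per-field differences from the previous header. The field names and format are simplified placeholders, not any RFC's wire format; the point is that losing one packet desynchronizes the decompressor's context, which is the loss-propagation behaviour the report attributes to Van Jacobson-style compression and which SCPS avoids by not delta-encoding.

```python
# Toy delta encoding of header fields: only fields that changed since the previous
# packet are sent as (field, delta) pairs. Simplified, not an RFC wire format.
def delta_encode(prev_header, header):
    """Return {field: delta} for fields that changed since the previous header."""
    return {k: header[k] - prev_header[k] for k in header if header[k] != prev_header[k]}

def delta_decode(prev_header, deltas):
    restored = dict(prev_header)
    for k, d in deltas.items():
        restored[k] += d
    return restored

prev = {'seq': 1000, 'ack': 500, 'window': 8192, 'checksum': 0}
curr = {'seq': 1460, 'ack': 500, 'window': 8192, 'checksum': 0}
deltas = delta_encode(prev, curr)
print(deltas)                          # {'seq': 460} -- only the changed field is sent
print(delta_decode(prev, deltas) == curr)
```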
The Influence of Compression Stocking on Jumping Performance of Athlete
NASA Astrophysics Data System (ADS)
Salleh, M. N.; Lazim, H. M.; Lamsali, H.; Salleh, A. F.
2018-05-01
Evidence of compression stocking effectiveness is mixed, with some researchers suggesting that the stocking can enhance performance while others dispute the finding. One of the factors thought to cause the mixed results is the level of pressure used in the studies. This research organized a test on fourteen athletes. Their bodies were scanned and a customized compression stocking, able to exert the intended pressure, was developed. An experiment was conducted to measure the effect of wearing the compression stocking on jumping performance. The results show mixed outcomes. For the female athletes, there is a significant difference in knee power between wearing and not wearing the compression stocking (p<0.05). However, there is no significant difference for male athletes whether wearing it or not.
NASA Technical Reports Server (NTRS)
Rodi, Patrick E.
1993-01-01
Forward swept sidewall compression inlets have been tested in the Mach 4 Blowdown Facility at the NASA Langley Research Center to study the effects of bodyside compression surfaces on inlet performance in the presence of an incoming turbulent boundary layer. The measurements include mass flow capture and mean surface pressure distributions obtained during simulated combustion pressure increases downstream of the inlet. The kerosene-lampblack surface tracer technique has been used to obtain patterns of the local wall shear stress direction. Inlet performance is evaluated using starting and unstarting characteristics, mass capture, mean surface pressure distributions and permissible back pressure limits. The results indicate that inlet performance can be improved with selected bodyside compression surfaces placed between the inlet sidewalls.
A High Performance Image Data Compression Technique for Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack
2003-01-01
A high-performance image data compression technique is currently being developed for space science applications under the requirements of high speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications and the status of development.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
Lv, Peng; Wang, Yaru; Ji, Chenglong; Yuan, Jiajiao
2017-01-01
Ultra-compressible electrodes with high electrochemical performance, reversible compressibility and extreme durability are in high demand in compression-tolerant energy storage devices. Herein, an ultra-compressible ternary composite was synthesized by successively electrodepositing poly(3,4-ethylenedioxythiophene) (PEDOT) and MnO2 into a superelastic graphene aerogel (SEGA). In the SEGA/PEDOT/MnO2 ternary composite, SEGA provides the compressible backbone and conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; the intermediate PEDOT not only reduces the interface resistance between MnO2 and graphene, but also further reinforces the strength of the graphene cellular walls. The synergistic effect of the three components in the ternary composite electrode leads to high electrochemical performance and good compression-tolerant ability. The gravimetric capacitance of the compressible ternary composite electrodes reaches 343 F g−1 and retains 97% of this value even at 95% compressive strain. A volumetric capacitance of 147.4 F cm−3 is achieved, which is much higher than that of other graphene-based compressible electrodes. 80% of this volumetric capacitance is preserved after 3500 charge/discharge cycles under various compression strains, indicating extreme durability.
Passive states as optimal inputs for single-jump lossy quantum channels
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Mari, Andrea; Lloyd, Seth; Giovannetti, Vittorio
2016-06-01
The passive states of a quantum system minimize the average energy among all the states with a given spectrum. We prove that passive states are the optimal inputs of single-jump lossy quantum channels. These channels arise from a weak interaction of the quantum system of interest with a large Markovian bath in its ground state, such that the interaction Hamiltonian couples only consecutive energy eigenstates of the system. We prove that the output generated by any input state ρ majorizes the output generated by the passive input state ρ0 with the same spectrum as ρ. Then, the output generated by ρ can be obtained by applying a random unitary operation to the output generated by ρ0. This is an extension of De Palma et al. [IEEE Trans. Inf. Theory 62, 2895 (2016)], 10.1109/TIT.2016.2547426, where the same result is proved for one-mode bosonic Gaussian channels. We also prove that for finite temperature this optimality property can fail already in a two-level system, where the best input is a coherent superposition of the two energy eigenstates.
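For reference, the standard definitions behind this statement (not taken from the paper itself) can be written compactly; they make the link between majorization and the random-unitary relation explicit.

```latex
% A passive state of a system with Hamiltonian H = \sum_n E_n |n\rangle\langle n|,
% E_0 \le E_1 \le \dots, is diagonal in the energy eigenbasis with non-increasing
% populations:
\[
  \rho_{\mathrm{passive}} \;=\; \sum_n p_n\, |n\rangle\langle n| ,
  \qquad p_0 \ge p_1 \ge p_2 \ge \dots
\]
% Majorization between two density matrices \sigma and \tau (eigenvalues sorted in
% decreasing order):
\[
  \sigma \succ \tau
  \;\Longleftrightarrow\;
  \sum_{k=1}^{m} \lambda_k^{\downarrow}(\sigma) \;\ge\; \sum_{k=1}^{m} \lambda_k^{\downarrow}(\tau)
  \qquad \text{for all } m .
\]
% By Uhlmann's theorem, \sigma \succ \tau holds exactly when
% \tau = \sum_i q_i\, U_i\, \sigma\, U_i^{\dagger} for some probabilities q_i and
% unitaries U_i; this equivalence is what connects the majorization statement to the
% random-unitary relation between the channel outputs described above.
```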
Improving GPR image resolution in lossy ground using dispersive migration
Oden, C.P.; Powers, M.H.; Wright, D.L.; Olhoeft, G.R.
2007-01-01
As a compact wave packet travels through a dispersive medium, it becomes dilated and distorted. As a result, ground-penetrating radar (GPR) surveys over conductive and/or lossy soils often result in poor image resolution. A dispersive migration method is presented that combines an inverse dispersion filter with frequency-domain migration. The method requires a fully characterized GPR system including the antenna response, which is a function of the local soil properties for ground-coupled antennas. The GPR system response spectrum is used to stabilize the inverse dispersion filter. Dispersive migration restores attenuated spectral components when the signal-to-noise ratio is adequate. Applying the algorithm to simulated data shows that the improvement in spatial resolution is significant when data are acquired with a GPR system having 120 dB or more of dynamic range, and when the medium has a loss tangent of 0.3 or more. Results also show that dispersive migration provides no significant advantage over conventional migration when the loss tangent is less than 0.3, or when using a GPR system with a small dynamic range. © 2007 IEEE.
A dissipative quantum mechanical beam-splitter.
Ramakrishna, S A; Bandyopadhyay, A; Rai, J
1998-01-19
A dissipative beam-splitter (BS) has been analyzed by modeling the losses in the BS as due to the excitation of optical phonons. The losses are obtained in terms of the BS medium properties. The model simplifies the picture by treating the loss mechanism as a perturbation on the photon modes in a linear, non-lossy medium in the limit of small losses, instead of using the full field quantization in lossy, dispersive media. The model uses second-order perturbation in the Markoff approximation and yields Beer's law for absorption in the first approximation, thus providing a microscopic description of the absorption coefficient. It is shown that the fluctuations in the modes are increased because of the losses. We show the existence of quantum interference due to phase correlations between the input beams, and it is shown that these correlations can result in loss quenching. Hence, in spite of having such a dissipative medium, it is possible to design a lossless 50-50 BS at normal incidence, which may have potential applications in laser optics and dielectric-coated mirrors.
NASA Astrophysics Data System (ADS)
Kosiel, Kamil; Koba, Marcin; Masiewicz, Marcin; Śmietana, Mateusz
2018-06-01
The paper shows the application of the atomic layer deposition (ALD) technique as a tool for tailoring the sensorial properties of lossy-mode-resonance (LMR)-based optical fiber sensors. Hafnium dioxide (HfO2), zirconium dioxide (ZrO2), and tantalum oxide (TaxOy), as high-refractive-index dielectrics that are particularly convenient for LMR-sensor fabrication, were deposited by low-temperature (100 °C) ALD, ensuring safe conditions for thermally vulnerable fibers. The applicability of HfO2 and ZrO2 overlays, deposited with ALD-related atomic-level thickness accuracy, for fabrication of LMR sensors with controlled sensorial properties is presented. Additionally, for the first time to our best knowledge, a double-layer overlay composed of two different materials - silicon nitride (SixNy) and TaxOy - is presented for LMR fiber sensors. The thin films of this overlay were deposited by two different techniques - PECVD (the SixNy) and ALD (the TaxOy). This approach ensures fast overlay fabrication and, at the same time, flexibility in resonant wavelength tuning, yielding devices with satisfactory sensorial properties.
Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data
Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.
2003-01-01
The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, which allows a deeper penetration depth compared to the ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies are applied in the frequency range from 0-5 MHz. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used with the aid of the singular value decomposition and/or the conjugate gradient method in solving the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and to calibrate the experimental VETEM data are discussed to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.
Goto, Nobuo; Miyazaki, Yasumitsu
2014-06-01
Optical switching of high-bit-rate quadrature-phase-shift-keying (QPSK) pulse trains using collinear acousto-optic (AO) devices is theoretically discussed. Since the collinear AO devices have wavelength selectivity, the switched optical pulse trains suffer from distortion when the bandwidth of the pulse train is comparable to the pass bandwidth of the AO device. As the AO device, a sidelobe-suppressed device with a tapered surface-acoustic-wave (SAW) waveguide and a Butterworth-type filter device with a lossy SAW directional coupler are considered. Phase distortion of optical pulse trains at 40 to 100 Gsymbols/s in QPSK format is numerically analyzed. Bit-error-rate performance with additive Gaussian noise is also evaluated by the Monte Carlo method.
Performance of device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhao, Qi; Ma, Xiongfeng
2016-07-01
Quantum key distribution provides information-theoretically-secure communication. In practice, device imperfections may jeopardise the system security. Device-independent quantum key distribution solves this problem by providing secure keys even when the quantum devices are untrusted and uncharacterized. Following a recent security proof of the device-independent quantum key distribution, we improve the key rate by tightening the parameter choice in the security proof. In practice where the system is lossy, we further improve the key rate by taking into account the loss position information. From our numerical simulation, our method can outperform existing results. Meanwhile, we outline clear experimental requirements for implementing device-independent quantum key distribution. The maximal tolerable error rate is 1.6%, the minimal required transmittance is 97.3%, and the minimal required visibility is 96.8%.
NASA Technical Reports Server (NTRS)
Adams, Donald F.
1999-01-01
The attached data summarizes the work performed by the Composite Materials Research Group at the University of Wyoming funded by the NASA LaRC Research Grant NAG-1-1294. The work consisted primarily of tension, compression, open-hole compression and double cantilever beam fracture toughness testing performed on a variety of NASA LaRC composite materials. Tests were performed at various environmental conditions and pre-conditioning requirements. The primary purpose of this work was to support the LaRC material development efforts. The data summaries are arranged in chronological order from oldest to newest.
Tomographic Image Compression Using Multidimensional Transforms.
ERIC Educational Resources Information Center
Villasenor, John D.
1994-01-01
Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo
2016-01-01
Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools of South Korea. Participants 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Intervention Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Results Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). Conclusions The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. PMID:26873050
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.;
2015-01-01
Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, where the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
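The bit-plane-encoding half of the DWT/BPE scheme described here can be illustrated with a minimal sketch: coefficients are emitted most significant plane first, so truncating the stream gives a graded lossy-to-lossless representation. The wavelet stage and the CCSDS bit-stream syntax are omitted, and the names and array below are illustrative, not the flight ASIC's implementation.

```python
# Bit-plane encoding sketch: non-negative integer coefficients are emitted most
# significant bit plane first; keeping fewer planes gives a coarser (lossy)
# reconstruction and keeping all planes is lossless.
import numpy as np

def bitplane_encode(coeffs):
    """Yield (plane_index, bit_plane) pairs, most significant plane first."""
    coeffs = np.asarray(coeffs, dtype=np.uint32)
    top = int(coeffs.max()).bit_length()
    for b in range(top - 1, -1, -1):
        yield b, ((coeffs >> b) & 1).astype(np.uint8)

def bitplane_decode(planes, shape):
    out = np.zeros(shape, dtype=np.uint32)
    for b, plane in planes:
        out |= plane.astype(np.uint32) << b
    return out

counts = np.array([[0, 3, 12], [7, 1, 255]])       # stand-in for plasma count data
planes = list(bitplane_encode(counts))
lossless = bitplane_decode(planes, counts.shape)
lossy = bitplane_decode(planes[:4], counts.shape)  # keep only the 4 most significant planes
print(lossless)
print(lossy)
```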
Is There Evidence that Runners can Benefit from Wearing Compression Clothing?
Engel, Florian Azad; Holmberg, Hans-Christer; Sperlich, Billy
2016-12-01
Runners at various levels of performance and specializing in different events (from 800 m to marathons) wear compression socks, sleeves, shorts, and/or tights in an attempt to improve their performance and facilitate recovery. Recently, a number of publications reporting contradictory results with regard to the influence of compression garments in this context have appeared. To assess original research on the effects of compression clothing (socks, calf sleeves, shorts, and tights) on running performance and recovery, a computerized search of the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science was performed in September of 2015, and the relevant articles published in peer-reviewed journals were identified and rated using the Physiotherapy Evidence Database (PEDro) Scale. Studies examining effects on physiological, psychological, and/or biomechanical parameters during or after running were included, and means and measures of variability for the outcomes were employed to calculate Hedges' g effect sizes and associated 95% confidence intervals for comparison of experimental (compression) and control (non-compression) trials. Compression garments exerted no statistically significant mean effects on running performance (times for a (half) marathon, 15-km trail running, 5- and 10-km runs, and 400-m sprint), maximal and submaximal oxygen uptake, blood lactate concentrations, blood gas kinetics, cardiac parameters (including heart rate, cardiac output, cardiac index, and stroke volume), body and perceived temperature, or the performance of strength-related tasks after running. Small positive effect sizes were calculated for the time to exhaustion (in incremental or step tests), running economy (including biomechanical variables), clearance of blood lactate, perceived exertion, maximal voluntary isometric contraction and peak leg muscle power immediately after running, and markers of muscle damage and inflammation. The body core temperature was moderately affected by compression, while the effect size values for post-exercise leg soreness and the delay in onset of muscle fatigue indicated large positive effects. Our present findings suggest that by wearing compression clothing, runners may slightly improve variables related to endurance performance (i.e., time to exhaustion), due to improvements in running economy, biomechanical variables, perception, and muscle temperature. They should also benefit from reduced muscle pain, damage, and inflammation.
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modified the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm performs faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream which is stored or transmitted. I applied it to compression of multichannel ECG data. Also, I presented a specific procedure based on the modified algorithm for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding compression of multichannel ECG data. Furthermore, in order to compress one signal which is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
Compression in wearable sensor nodes: impacts of node topology.
Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther
2014-04-01
Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online and low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission/on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can highly benefit from the use of compressive sensing and now can achieve power consumptions comparable to, or better than, the use of local memory.
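The on-node side of compressive sensing used in this setting amounts to projecting each N-sample window onto M << N pseudorandom measurements; sparse reconstruction runs off-node. The sketch below illustrates only that sensing step; the Bernoulli matrix, window length and measurement count are illustrative assumptions, not the paper's configuration.

```python
# On-node compressive sensing sketch: each N-sample window x is reduced to M << N
# measurements y = Phi @ x using a fixed pseudorandom Bernoulli matrix Phi that both
# ends can regenerate from a shared seed. Sparse reconstruction (e.g. an l1 solver)
# runs off-node and is not shown.
import numpy as np

N, M = 256, 64                                   # window length, number of measurements
rng = np.random.default_rng(42)                  # shared seed stands in for a stored matrix
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

def sense(window):
    """Compress one window of samples: N values in, M values out (4:1 here)."""
    return Phi @ window

t = np.arange(N)
x = np.sin(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 12 * t / N)  # sparse in frequency
y = sense(x)
print(x.size, "->", y.size, "samples per window")
```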
Proposal for a Standard Format for Neurophysiology Data Recording and Exchange.
Stead, Matt; Halford, Jonathan J
2016-10-01
The lack of interoperability between information networks is a significant source of cost in health care. Standardized data formats decrease health care cost, improve quality of care, and facilitate biomedical research. There is no common standard digital format for storing clinical neurophysiologic data. This review proposes a new standard file format for neurophysiology data (the bulk of which is video-electroencephalographic data), entitled the Multiscale Electrophysiology Format, version 3 (MEF3), which is designed to address many of the shortcomings of existing formats. MEF3 provides functionality that addresses many of the limitations of current formats. The proposed improvements include (1) hierarchical file structure with improved organization; (2) greater extensibility for big data applications requiring a large number of channels, signal types, and parallel processing; (3) efficient and flexible lossy or lossless data compression; (4) industry standard multilayered data encryption and time obfuscation that permits sharing of human data without the need for deidentification procedures; (5) resistance to file corruption; (6) facilitation of online and offline review and analysis; and (7) provision of full open source documentation. At this time, there is no other neurophysiology format that supports all of these features. MEF3 is currently gaining industry and academic community support. The authors propose the use of the MEF3 as a standard format for neurophysiology recording and data exchange. Collaboration between industry, professional organizations, research communities, and independent standards organizations is needed to move the project forward.
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
Crisp, Jonathan G; Lovato, Luis M; Jang, Timothy B
2010-12-01
Compression ultrasonography of the lower extremity is an established method of detecting proximal lower extremity deep venous thrombosis when performed by a certified operator in a vascular laboratory. Our objective is to determine the sensitivity and specificity of bedside 2-point compression ultrasonography performed in the emergency department (ED) with portable vascular ultrasonography for the detection of proximal lower extremity deep venous thrombosis. We did this by directly comparing emergency physician-performed ultrasonography to lower extremity duplex ultrasonography performed by the Department of Radiology. This was a prospective, cross-sectional study and diagnostic test assessment of a convenience sample of ED patients with a suspected lower extremity deep venous thrombosis, conducted at a single-center, urban, academic ED. All physicians had a 10-minute training session before enrolling patients. ED compression ultrasonography occurred before Department of Radiology ultrasonography and involved identification of 2 specific points: the common femoral and popliteal vessels, with subsequent compression of the common femoral and popliteal veins. The study result was considered positive for proximal lower extremity deep venous thrombosis if either vein was incompressible or a thrombus was visualized. Sensitivity and specificity were calculated with the final radiologist interpretation of the Department of Radiology ultrasonography as the criterion standard. A total of 47 physicians performed 199 2-point compression ultrasonographic examinations in the ED. Median number of examinations per physician was 2 (range 1 to 29 examinations; interquartile range 1 to 5 examinations). There were 45 proximal lower extremity deep venous thromboses observed on Department of Radiology evaluation, all correctly identified by ED 2-point compression ultrasonography. The 153 patients without proximal lower extremity deep venous thrombosis all had a negative ED compression ultrasonographic result. One patient with a negative Department of Radiology ultrasonographic result was found to have decreased compression of the popliteal vein on ED compression ultrasonography, giving a single false-positive result, yet repeated ultrasonography by the Department of Radiology 1 week later showed a popliteal deep venous thrombosis. The sensitivity and specificity of ED 2-point compression ultrasonography for deep venous thrombosis were 100% (95% confidence interval 92% to 100%) and 99% (95% confidence interval 96% to 100%), respectively. Emergency physician-performed 2-point compression ultrasonography of the lower extremity with a portable vascular ultrasonographic machine, conducted in the ED by this physician group and in this patient sample, accurately identified the presence and absence of proximal lower extremity deep venous thrombosis. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
Schwartz, Andrew H; Shinn-Cunningham, Barbara G
2013-04-01
Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
Pulse compression of harmonic chirp signals using the fractional fourier transform.
Arif, M; Cowell, D M J; Freear, S
2010-06-01
In ultrasound harmonic imaging with chirp-coded excitation, a harmonic matched filter (HMF) is typically used on the received signal to perform pulse compression of the second harmonic component (SHC) to recover signal axial resolution. Designing the HMF for the compression of the SHC is a problematic issue because it requires optimal window selection. In the compressed second harmonic signal, the sidelobe level may increase and the mainlobe width (MLW) widen under a mismatched condition, resulting in loss of axial resolution. We propose the use of the fractional Fourier transform (FrFT) as an alternative tool to perform compression of the chirp-coded SHC generated as a result of the nonlinear propagation of an ultrasound signal. Two methods are used to experimentally assess the performance benefits of the FrFT technique over the HMF techniques. The first method uses chirp excitation with central frequency of 2.25 MHz and bandwidth of 1 MHz. The second method uses chirp excitation with pulse inversion to increase the bandwidth to 2 MHz. In this study, experiments were performed in a water tank with a single-element transducer mounted coaxially with a hydrophone in a pitch-catch configuration. Results are presented that indicate that the FrFT can perform pulse compression of the second harmonic chirp component, with a 14% reduction in the MLW of the compressed signal when compared with the HMF. Also, the FrFT provides at least 23% reduction in the MLW of the compressed signal when compared with the harmonic mismatched filter (HMMF). The FrFT maintains comparable peak and integrated sidelobe levels when compared with the HMF and HMMF techniques. Copyright 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
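The baseline operation against which the FrFT approach is compared, matched-filter pulse compression of a linear chirp, is sketched below. Harmonic generation, the transducer response and the FrFT itself are not modeled; the frequencies, duration and delay are illustrative assumptions.

```python
# Baseline matched-filter pulse compression of a linear chirp: correlating the received
# signal with the time-reversed transmit pulse concentrates the chirp energy into a
# narrow peak, restoring axial resolution. Parameters are illustrative only.
import numpy as np
from scipy.signal import chirp

fs = 50e6                                   # sample rate, Hz
T = 10e-6                                   # chirp duration, s
t = np.arange(0, T, 1 / fs)
f0, f1 = 1.75e6, 2.75e6                     # 2.25 MHz centre, 1 MHz bandwidth
tx = chirp(t, f0=f0, t1=T, f1=f1) * np.hanning(t.size)   # windowed transmit chirp

# Simulated echo: the chirp delayed, attenuated and buried in noise.
delay = 1500
rx = np.zeros(8192)
rx[delay:delay + tx.size] += 0.2 * tx
rx += 0.05 * np.random.default_rng(0).standard_normal(rx.size)

# Matched filter = correlation with the time-reversed transmit pulse.
compressed = np.convolve(rx, tx[::-1], mode='same')
print("peak at sample", int(np.argmax(np.abs(compressed))),
      "(expected near", delay + tx.size // 2, ")")
```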
Latt, L Daniel; Glisson, Richard R; Adams, Samuel B; Schuh, Reinhard; Narron, John A; Easley, Mark E
2015-10-01
Transverse tarsal joint arthrodesis is commonly performed in the operative treatment of hindfoot arthritis and acquired flatfoot deformity. While fixation is typically achieved using screws, failure to obtain and maintain joint compression sometimes occurs, potentially leading to nonunion. External fixation is an alternate method of achieving arthrodesis site compression and has the advantage of allowing postoperative compression adjustment when necessary. However, its performance relative to standard screw fixation has not been quantified in this application. We hypothesized that external fixation could provide transverse tarsal joint compression exceeding that possible with screw fixation. Transverse tarsal joint fixation was performed sequentially, first with a circular external fixator and then with compression screws, on 9 fresh-frozen cadaveric legs. The external fixator was attached in abutting rings fixed to the tibia and the hindfoot and a third anterior ring parallel to the hindfoot ring using transverse wires and half-pins in the tibial diaphysis, calcaneus, and metatarsals. Screw fixation comprised two 4.3 mm headless compression screws traversing the talonavicular joint and 1 across the calcaneocuboid joint. Compressive forces generated during incremental fixator foot ring displacement to 20 mm and incremental screw tightening were measured using a custom-fabricated instrumented miniature external fixator spanning the transverse tarsal joint. The maximum compressive force generated by the external fixator averaged 186% of that produced by the screws (range, 104%-391%). Fixator compression surpassed that obtainable with screws at 12 mm of ring displacement and decreased when the tibial ring was detached. No correlation was found between bone density and the compressive force achievable by either fusion method. The compression across the transverse tarsal joint that can be obtained with a circular external fixator including a tibial ring exceeds that which can be obtained with 3 headless compression screws. Screw and external fixator performance did not correlate with bone mineral density. This study supports the use of external fixation as an alternative method of generating compression to help stimulate fusion across the transverse tarsal joints. The findings provide biomechanical evidence to support the use of external fixation as a viable option in transverse tarsal joint fusion cases in which screw fixation has failed or is anticipated to be inadequate due to suboptimal bone quality. © The Author(s) 2015.
Comparison of two SVD-based color image compression schemes.
Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli
2017-01-01
Color image compression is a commonly used process to represent image data as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternion are very common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR, much less operation time, but a little bit smaller PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR.
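A minimal sketch of the real-SVD idea described above, assuming one plausible way of stacking the colour channels into the real matrix C (the paper's exact construction may differ):

```python
# Sketch of a real-SVD colour compression scheme: stack R, G, B into one real
# matrix, keep the k largest singular triplets, and reconstruct.
import numpy as np

def compress_rgb_svd(img, k):
    """img: (h, w, 3) float array in [0, 255]; k: number of singular values kept."""
    h, w, _ = img.shape
    C = np.hstack([img[..., 0], img[..., 1], img[..., 2]])   # h x 3w real matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    Ck = (U[:, :k] * s[:k]) @ Vt[:k, :]                      # rank-k approximation
    rec = np.stack(np.hsplit(Ck, 3), axis=-1)                # back to (h, w, 3)

    mse = np.mean((img - rec) ** 2)
    psnr = 10 * np.log10(255**2 / mse)
    cr = (h * 3 * w) / (k * (h + 3 * w + 1))                 # raw vs. stored values
    return rec, psnr, cr
```

With the same number k of retained singular values, the storage cost of the truncated factors fixes the compression ratio, which is the quantity compared against the quaternion-SVD scheme.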
Comparison of two SVD-based color image compression schemes
Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli
2017-01-01
Color image compression is a commonly used process to represent image data as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternion are very common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR, much less operation time, but a little bit smaller PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR. PMID:28257451
Birkun, Alexei; Glotov, Maksim; Ndjamen, Herman Franklin; Alaiye, Esther; Adeleke, Temidara; Samarin, Sergey
2018-01-01
To assess the effectiveness of the telephone chest-compression-only cardiopulmonary resuscitation (CPR) guided by a pre-recorded instructional audio when compared with dispatcher-assisted resuscitation. It was a prospective, blind, randomised controlled study involving 109 medical students without previous CPR training. In a standardized mannequin scenario, after the step of dispatcher-assisted cardiac arrest recognition, the participants performed compression-only resuscitation guided over the telephone by either: (1) the pre-recorded instructional audio (n=57); or (2) verbal dispatcher assistance (n=52). The simulation video records were reviewed to assess the CPR performance using a 13-item checklist. The interval from call reception to the first compression, total number and rate of compressions, total number and duration of pauses after the first compression were also recorded. There were no significant differences between the recording-assisted and dispatcher-assisted groups based on the overall performance score (5.6±2.2 vs. 5.1±1.9, P>0.05) or individual criteria of the CPR performance checklist. The recording-assisted group demonstrated significantly shorter time interval from call receipt to the first compression (86.0±14.3 vs. 91.2±14.2 s, P<0.05), higher compression rate (94.9±26.4 vs. 89.1±32.8 min-1) and number of compressions provided (170.2±48.0 vs. 156.2±60.7). When provided by untrained persons in the simulated settings, the compression-only resuscitation guided by the pre-recorded instructional audio is no less efficient than dispatcher-assisted CPR. Future studies are warranted to further assess feasibility of using instructional audio aid as a potential alternative to dispatcher assistance.
Birkun, Alexei; Glotov, Maksim; Ndjamen, Herman Franklin; Alaiye, Esther; Adeleke, Temidara; Samarin, Sergey
2018-01-01
BACKGROUND: To assess the effectiveness of the telephone chest-compression-only cardiopulmonary resuscitation (CPR) guided by a pre-recorded instructional audio when compared with dispatcher-assisted resuscitation. METHODS: It was a prospective, blind, randomised controlled study involving 109 medical students without previous CPR training. In a standardized mannequin scenario, after the step of dispatcher-assisted cardiac arrest recognition, the participants performed compression-only resuscitation guided over the telephone by either: (1) the pre-recorded instructional audio (n=57); or (2) verbal dispatcher assistance (n=52). The simulation video records were reviewed to assess the CPR performance using a 13-item checklist. The interval from call reception to the first compression, total number and rate of compressions, total number and duration of pauses after the first compression were also recorded. RESULTS: There were no significant differences between the recording-assisted and dispatcher-assisted groups based on the overall performance score (5.6±2.2 vs. 5.1±1.9, P>0.05) or individual criteria of the CPR performance checklist. The recording-assisted group demonstrated significantly shorter time interval from call receipt to the first compression (86.0±14.3 vs. 91.2±14.2 s, P<0.05), higher compression rate (94.9±26.4 vs. 89.1±32.8 min-1) and number of compressions provided (170.2±48.0 vs. 156.2±60.7). CONCLUSION: When provided by untrained persons in the simulated settings, the compression-only resuscitation guided by the pre-recorded instructional audio is no less efficient than dispatcher-assisted CPR. Future studies are warranted to further assess feasibility of using instructional audio aid as a potential alternative to dispatcher assistance.
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng
2013-03-01
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical life or death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured in a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
NASA Technical Reports Server (NTRS)
Hurst, Victor, IV; West, Sarah; Austin, Paul; Branson, Richard; Beck, George
2005-01-01
Astronaut crew medical officers (CMO) aboard the International Space Station (ISS) receive 40 hours of medical training over 18 months before each mission, including two-person cardiopulmonary resuscitation (2CPR) as recommended by the American Heart Association (AHA). Recent studies have concluded that the use of metronomic tones improves the coordination of 2CPR by trained clinicians. 2CPR performance data for minimally-trained caregivers has been limited. The goal of this study was to determine whether use of a metronome by minimally-trained caregivers (CMO analogues) would improve 2CPR performance. 20 pairs of minimally-trained caregivers certified in 2CPR via AHA guidelines performed 2CPR for 4 minutes on an instrumented manikin using 3 interventions: 1) Standard 2CPR without a metronome [NONE], 2) Standard 2CPR plus a metronome for coordinating compression rate only [MET], 3) Standard 2CPR plus a metronome for coordinating both the compression rate and ventilation rate [BOTH]. Caregivers were evaluated for their ability to meet the AHA guideline of 32 breaths-240 compressions in 4 minutes. All (100%) caregivers using the BOTH intervention provided the required number of ventilation breaths as compared with the NONE caregivers (10%) and MET caregivers (0%). For compressions, 97.5% of the BOTH caregivers were not successful in meeting the AHA compression guideline; however, an average of 238 compressions of the desired 240 were completed. None of the caregivers were successful in meeting the compression guideline using the NONE and MET interventions. This study demonstrates that use of metronomic tones by minimally-trained caregivers for coordinating both compressions and breaths improves 2CPR performance. Meeting the breath guideline is important to minimize air entering the stomach, thus decreasing the likelihood of gastric aspiration. These results suggest that manifesting a metronome for the ISS may augment the performance of 2CPR on orbit and thus may increase the level of care.
Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo
2016-02-12
To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Participants were recruited from a medical school and two paramedic schools of South Korea. 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian
2017-04-01
Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm in traditional hyperspectral systems and in CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data is up to an order of magnitude less than that in conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster in CS-MUSI data.
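The comparison above uses the classical spectral matched filter; a hedged sketch of that detector (variable names and the diagonal regularisation term are assumptions) is shown below.

```python
# Classical spectral matched filter for hyperspectral target detection.
import numpy as np

def matched_filter_scores(cube, target):
    """cube: (n_pixels, n_bands) spectra; target: (n_bands,) target signature."""
    mu = cube.mean(axis=0)
    X = cube - mu                                    # background-centred data
    Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(cube.shape[1])  # regularised
    Sinv = np.linalg.inv(Sigma)
    d = target - mu
    denom = d @ Sinv @ d
    return (X @ Sinv @ d) / denom                    # unit response at the target
```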
Outer planet Pioneer imaging communications system study. [data compression
NASA Technical Reports Server (NTRS)
1974-01-01
The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.
A Complete Multimode Equivalent-Circuit Theory for Electrical Design
Williams, Dylan F.; Hayden, Leonard A.; Marks, Roger B.
1997-01-01
This work presents a complete equivalent-circuit theory for lossy multimode transmission lines. Its voltages and currents are based on general linear combinations of standard normalized modal voltages and currents. The theory includes new expressions for transmission line impedance matrices, symmetry and lossless conditions, source representations, and the thermal noise of passive multiports. PMID:27805153
Kermajani, Hamidreza; Gomez, Carles
2014-01-01
The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs. PMID:25004154
Performance of an on-chip superconducting circulator for quantum microwave systems
NASA Astrophysics Data System (ADS)
Chapman, Benjamin; Rosenthal, Eric; Moores, Bradley; Kerckhoff, Joseph; Mates, J. A. B.; Hilton, G. C.; Vale, L. R.; Ullom, J. N.; Lalumière, Kevin; Blais, Alexandre; Lehnert, K. W.
Microwave circulators enforce a single propagation direction for signals in an electrical network. Unfortunately, commercial circulators are bulky, lossy, and cannot be integrated close to superconducting circuits because they require strong (~kOe) magnetic fields produced by permanent magnets. Here we report on the performance of an on-chip, active circulator for superconducting microwave circuits, which uses no permanent magnets. Non-reciprocity is achieved by actively modulating reactive elements around 100 MHz, giving roughly a factor of 50 in the separation between signal and control frequencies, which facilitates filtering. The circulator's active components are dynamically tunable inductors constructed with arrays of dc-SQUIDs in series. Array inductance is tuned by varying the magnetic flux through the SQUIDs with fields weaker than 1 Oe. Although the instantaneous bandwidth of the device is narrow, the operation frequency is tunable between 4 and 8 GHz. This presentation will describe the device's theory of operation and compare its measured performance to design goals. This work is supported by the ARO under contract W911NF-14-1-0079 and the National Science Foundation under Grant Number 1125844.
Kermajani, Hamidreza; Gomez, Carles
2014-07-07
The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs.
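RPL schedules its DIO control messages with the Trickle timer (RFC 6206), which is the main mechanism behind the convergence behaviour studied above. The following is a generic Trickle sketch with illustrative parameters; it does not model the specific RPL option the authors evaluate for accelerating convergence.

```python
# Generic Trickle timer (RFC 6206) as used by RPL for DIO scheduling.
# Imin, the number of doublings, and the redundancy constant k are illustrative.
import random

class Trickle:
    def __init__(self, imin=0.008, doublings=8, k=10):
        self.imin = imin
        self.imax = imin * (2 ** doublings)
        self.k = k
        self.reset()

    def reset(self):
        """Inconsistency heard (e.g. new DODAG information): restart at Imin."""
        self.interval = self.imin
        self._new_interval()

    def _new_interval(self):
        self.c = 0                                           # consistent messages heard
        self.t = random.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        self.c += 1

    def on_timer_t(self):
        """At time t within the interval: transmit a DIO unless suppressed."""
        return self.c < self.k                               # True -> send DIO

    def interval_expired(self):
        self.interval = min(2 * self.interval, self.imax)
        self._new_interval()
```

A smaller Imin or a larger redundancy constant k makes nodes advertise the DODAG more aggressively, which is the kind of parameter trade-off the paper characterises for convergence time versus control overhead.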
Venugopal, Paramaguru; Kasimani, Ramesh; Chinnasamy, Suresh
2018-06-21
Transportation demand in India is increasing rapidly, driving energy consumption up by 4.1 to 6.1% each year over the period 2010 to 2050. In addition, private vehicle ownership has grown by almost 10% per year over the last decade, and oil consumption reached 213 million tons in 2016, making India the third-largest importer of crude oil in the world. This motivates the promotion of alternative fuels (biodiesels) derived from different feedstocks for transportation. These fuels have better emission characteristics than neat diesel, so biodiesel can be used directly in place of diesel or blended with diesel to improve performance. However, the compression ratio, injection timing, injection pressure, blend composition, air-fuel ratio, and cylinder geometry all affect the performance and emission characteristics of a diesel engine. This article examines the effect of compression ratio on engine performance with a Honne oil-diesel blend and identifies the optimum compression ratio. Experiments were conducted with a Honne oil-diesel blend-fueled CI engine at variable load and constant speed. To find the optimum compression ratio, tests were carried out on a single-cylinder, four-stroke, variable compression ratio diesel engine, and an 18:1 compression ratio was found to give better performance than lower compression ratios. Engine performance tests were carried out at different compression ratio values. Using the experimental data, a regression model was developed and response surface methodology (RSM) was used to predict values. The predicted values were validated against the experimental results, with a maximum error of 6.057% and an average error of 3.57%. The optimum numeric factors for the different responses were also selected using RSM.
Lattimer, C R; Kalodiki, E; Azzam, M; Geroulakos, G
2016-07-01
To test the in vivo haemodynamic performance of graduated elastic compression (GEC) stockings using air-plethysmography (APG) in healthy volunteers (controls) and patients with varicose veins (VVs), post-thrombotic syndrome (PTS), or lymphoedema. Responsiveness data were used to determine which group benefited the most from GEC. There were 12 patients per group compared using no compression, knee-length Class 1 (18-21 mmHg) compression, and Class 2 (23-32 mmHg) compression. Stocking/leg interface pressures (mmHg) were measured supine in two places using an air-sensor transducer. Stocking performance parameters, investigated before and after GEC, included the standard APG tests (working venous volume [wVV], venous filling index [VFI], venous drainage index [VDI], ejection fraction [EF]) and the occlusion plethysmography tests (incremental pressure causing the maximal increase in calf volume [IPMIV], outflow fraction [OF]). Results were expressed as median and interquartile range. Significant graduated compression was achieved in all four groups with higher interface pressures at the ankle. Only the VVs patients had a significant reduction in their wVV (without: 133 [109-146] vs. class1: 93 [74-113] mL) and the VFI (without: 4.6 [3-7.1] vs. class1: 3.1 [1.9-5] mL/s), both at p <.05. The IPMIV improved significantly in all groups except in the PTS group (p <.05). The OF improved only in the controls (without: 43 [38-51] vs. class1: 50 [48-53] %) and the VVs patients (without: 47 [39-58] vs. class1: 56 [50-64] %), both at p <.05. There were no significant differences in the VDI or the EF with GEC. Compression dose-response relationships were not observed. Patients with varicose veins improved the most, whereas those with PTS improved the least. Performance seemed to depend more on disease pathophysiology than compression strength. However, the lack of responsiveness to compression strength may be related to the low external pressures used. Stocking performance tests may have value in selecting those patients who benefit most from compression. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Benoit, Justin L; Vogele, Jennifer; Hart, Kimberly W; Lindsell, Christopher J; McMullan, Jason T
2017-06-01
Bystander compression-only cardiopulmonary resuscitation (CPR) improves survival after out-of-hospital cardiac arrest. To broaden CPR training, 1-2min ultra-brief videos have been disseminated via the Internet and television. Our objective was to determine whether participants passively exposed to a televised ultra-brief video perform CPR better than unexposed controls. This before-and-after study was conducted with non-patients in an urban Emergency Department waiting room. The intervention was an ultra-brief CPR training video displayed via closed-circuit television 3-6 times/hour. Participants were unaware of the study and not told to watch the video. Pre-intervention, no video was displayed. Participants were asked to demonstrate compression-only CPR on a manikin. Performance was scored based on critical actions: check for responsiveness, call for help, begin compressions immediately, and correct hand placement, compression rate and depth. The primary outcome was the proportion of participants who performed all actions correctly. There were 50 control and 50 exposed participants. Mean age was 37, 51% were African-American, 52% were female, and 10% self-reported current CPR certification. There were no statistically significant differences in baseline characteristics between groups. The number of participants who performed all actions correctly was 0 (0%) control vs. 10 (20%) exposed (difference 20%, 95% confidence interval [CI] 8.9-31.1%, p<0.001). Correct compression rate and depth were 11 (22%) control vs. 22 (44%) exposed (22%, 95% CI 4.1-39.9%, p=0.019), and 5 (10%) control vs. 15 (30%) exposed (20%, 95% CI 4.8-35.2%, p=0.012), respectively. Passive ultra-brief video training is associated with improved performance of compression-only CPR. Copyright © 2017 Elsevier B.V. All rights reserved.
Effect of fluid compressibility on journal bearing performance
NASA Technical Reports Server (NTRS)
Dimofte, Florin
1993-01-01
An analysis was undertaken to determine the effect of fluid film compressibility on the performance of fluid film bearings. A new version of the Reynolds equation was developed, using a polytropic expansion, for both steady-state and dynamic conditions. Polytropic exponents from 1 (isothermal) to 1000 (approaching an incompressible liquid) were evaluated for two bearing numbers, selected from a range of practical interest for cryogenic application, and without cavitation. Bearing loads were insensitive to fluid compressibility for low bearing numbers, as was expected. The effect of compressibility on attitude angle was significant, even when the bearing number was low. A small amount of fluid compressibility was enough to obtain stable running conditions. Incompressible liquid lacked stability at all conditions. Fluid compressibility can be used to control the bearing dynamic coefficients, thereby influencing the dynamic behavior of the rotor-bearing system.
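For context, a commonly used compressible form of the steady Reynolds equation, closed with a polytropic gas law, is shown below; this is the textbook form and not necessarily the exact equation derived in the paper.

\[
\frac{\partial}{\partial x}\!\left(\frac{\rho h^{3}}{\mu}\,\frac{\partial p}{\partial x}\right)
+\frac{\partial}{\partial z}\!\left(\frac{\rho h^{3}}{\mu}\,\frac{\partial p}{\partial z}\right)
= 6U\,\frac{\partial(\rho h)}{\partial x},
\qquad \rho \propto p^{1/n}
\]

Here n = 1 recovers the isothermal case and large n approaches the incompressible limit, matching the range of polytropic exponents (1 to 1000) explored in the study.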
A bioinspired study on the compressive resistance of helicoidal fibre structures
NASA Astrophysics Data System (ADS)
Tan, Ting; Ribbans, Brian
2017-10-01
Helicoidal fibre structures are widely observed in natural materials. In this paper, an integrated experimental and analytical approach was used to investigate the compressive resistance of helicoidal fibre structures. First, helicoidal fibre-reinforced composites were created using three-dimensionally printed helicoids and polymeric matrices, including plain, ring-reinforced and helix-reinforced helicoids. Then, load-displacement curves under monotonic compression tests were collected to measure the compressive strengths of helicoidal fibre composites. Fractographic characterization was performed using an X-ray microtomographer and scanning electron microscope, through which crack propagations in helicoidal structures were illustrated. Finally, mathematical modelling was performed to reveal the essential fibre architectures in the compressive resistance of helicoidal fibre structures. This work reveals that fibre-matrix ratios, helix pitch angles and interlayer rotary angles are critical to the compressive resistance of helicoidal structures.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method by combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
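A hedged sketch of the least-squares prediction step described above: weights over a set of causal neighbours (the neighbour set is an assumption) are fitted per region on already-coded pixels, and the prediction residuals are what an entropy coder would subsequently compress.

```python
# Per-region least-squares pixel predictor over causal neighbours.
import numpy as np

def ls_predict_region(img, coords, offsets=((0, -1), (-1, 0), (-1, -1), (-1, 1))):
    """img: 2-D array; coords: list of (row, col) pixels belonging to one region."""
    rows, targets = [], []
    for r, c in coords:
        nbr = [(r + dr, c + dc) for dr, dc in offsets]
        if any(i < 0 or j < 0 or i >= img.shape[0] or j >= img.shape[1] for i, j in nbr):
            continue                                 # skip pixels without full context
        rows.append([img[i, j] for i, j in nbr])
        targets.append(img[r, c])
    A = np.array(rows, dtype=float)
    y = np.array(targets, dtype=float)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)        # per-region LS weights
    residuals = y - A @ w                            # values passed to the entropy coder
    return w, residuals
```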
DNA-COMPACT: DNA COMpression Based on a Pattern-Aware Contextual Modeling Technique
Li, Pinghao; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila; Jiang, Xiaoqian
2013-01-01
Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and transfer of large genome data are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm, which highlights the synthesis of complementary contextual models, to improve the compression performance. The proposed framework could handle genome compression with and without reference sequences, and demonstrated performance advantages over the best existing algorithms. The method for reference-free compression led to bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which were approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with reference, we tested on the first Korean personal genome sequence data set, and our proposed method demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a comparable decompression cost with existing algorithms. DNAcompact is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes. PMID:24282536
Partiprajak, Suphamas; Thongpo, Pichaya
2016-01-01
This study explored the retention of basic life support knowledge, self-efficacy, and chest compression performance among Thai nursing students at a university in Thailand. A one-group, pre-test/post-test time-series design was used. Participants were 30 nursing students undertaking basic life support training as care providers. Repeated-measures analysis of variance was used to test the retention of knowledge and self-efficacy between pre-test, immediate post-test, and re-test after 3 months. A Wilcoxon signed-rank test was used to compare chest compression performance between the two assessments. Basic life support knowledge was measured using the Basic Life Support Standard Test for Cognitive Knowledge. Self-efficacy was measured using the Basic Life Support Self-Efficacy Questionnaire. Chest compression performance was evaluated using a data printout from a Resusci Anne and Laerdal skillmeter within two cycles. The training had an immediate significant effect on the knowledge, self-efficacy, and skill of chest compression; however, knowledge and self-efficacy declined significantly over the 3 months after training. Chest compression performance at 3 months after training was retained relative to the first post-test, although the difference was not significant. Therefore, a retraining program to maintain knowledge and self-efficacy for a longer period of time should be provided 3 months after the initial training. Copyright © 2015 Elsevier Ltd. All rights reserved.
Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O
2009-01-01
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
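A sketch of the correlation-sorting preprocessing idea (the segment length and the greedy ordering rule are assumptions): the 1-D S-EMG record is reshaped into a 2-D matrix and its rows are reordered to increase inter-row correlation, so that an off-the-shelf image codec such as JPEG2000 or intraframe H.264 sees a smoother "image".

```python
# Greedy correlation sorting of signal segments prior to 2-D image coding.
import numpy as np

def correlation_sort(signal, seg_len):
    n_seg = len(signal) // seg_len
    M = signal[:n_seg * seg_len].reshape(n_seg, seg_len).astype(float)

    order = [0]
    remaining = set(range(1, n_seg))
    while remaining:
        last = M[order[-1]]
        best = max(remaining, key=lambda i: abs(np.corrcoef(last, M[i])[0, 1]))
        order.append(best)
        remaining.remove(best)
    # The permutation must be stored so the decoder can undo it after decompression.
    return M[order], order
```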
Observer performance assessment of JPEG-compressed high-resolution chest images
NASA Astrophysics Data System (ADS)
Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David
1999-05-01
The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.
The effect of compression and attention allocation on speech intelligibility
NASA Astrophysics Data System (ADS)
Choi, Sangsook; Carrell, Thomas
2003-10-01
Research investigating the effects of amplitude compression on speech intelligibility for individuals with sensorineural hearing loss has demonstrated contradictory results [Souza and Turner (1999)]. Because percent-correct measures may not be the best indicator of compression effectiveness, a speech intelligibility and motor coordination task was developed to provide data that may more thoroughly explain the perception of compressed speech signals. In the present study, a pursuit rotor task [Dlhopolsky (2000)] was employed along with word identification task to measure the amount of attention required to perceive compressed and non-compressed words in noise. Monosyllabic words were mixed with speech-shaped noise at a fixed signal-to-noise ratio and compressed using a wide dynamic range compression scheme. Participants with normal hearing identified each word with or without a simultaneous pursuit-rotor task. Also, participants completed the pursuit-rotor task without simultaneous word presentation. It was expected that the performance on the additional motor task would reflect effect of the compression better than simple word-accuracy measures. Results were complex. For example, in some conditions an irrelevant task actually improved performance on a simultaneous listening task. This suggests there might be an optimal level of attention required for recognition of monosyllabic words.
Lok, U-Wai; Li, Pai-Chi
2016-03-01
Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
Lietaert, Karel; Cutolo, Antonio; Boey, Dries; Van Hooreweder, Brecht
2018-03-21
Mechanical performance of additively manufactured (AM) Ti6Al4V scaffolds has mostly been studied in uniaxial compression. However, in real-life applications, more complex load conditions occur. To address this, a novel sample geometry was designed, tested and analyzed in this work. The new scaffold geometry, with porosity gradient between the solid ends and scaffold middle, was successfully used for quasi-static tension, tension-tension (R = 0.1), tension-compression (R = -1) and compression-compression (R = 10) fatigue tests. Results show that global loading in tension-tension leads to a decreased fatigue performance compared to global loading in compression-compression. This difference in fatigue life can be understood fairly well by approximating the local tensile stress amplitudes in the struts near the nodes. Local stress based Haigh diagrams were constructed to provide more insight in the fatigue behavior. When fatigue life is interpreted in terms of local stresses, the behavior of single struts is shown to be qualitatively the same as bulk Ti6Al4V. Compression-compression and tension-tension fatigue regimes lead to a shorter fatigue life than fully reversed loading due to the presence of a mean local tensile stress. Fractographic analysis showed that most fracture sites were located close to the nodes, where the highest tensile stresses are located.
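The load ratios quoted above translate into local stress-cycle quantities through the usual fatigue definitions (standard notation, not taken from the paper):

\[
R=\frac{\sigma_{\min}}{\sigma_{\max}},\qquad
\sigma_a=\frac{\sigma_{\max}-\sigma_{\min}}{2},\qquad
\sigma_m=\frac{\sigma_{\max}+\sigma_{\min}}{2},
\]

so R = 0.1 (tension-tension) and R = 10 (compression-compression) both imply a non-zero mean stress, while R = -1 (fully reversed) gives zero mean stress; a Haigh diagram plots the stress amplitude against this mean stress.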
ERIC Educational Resources Information Center
Masterson, James; And Others
Forty-eight sixth-grade students were studied to determine their response to selected compressions of the narration of an instructional sound motion picture. A 4:10 color film with a 158 wpm recorded narration was shown at 25, 33-1/3 and 50 percent compression rates; performance time and quality were measured immediately and after 12-day…
Corpuls CPR Generates Higher Mean Arterial Pressure Than LUCAS II in a Pig Model of Cardiac Arrest.
Eichhorn, S; Mendoza, A; Prinzing, A; Stroh, A; Xinghai, L; Polski, M; Heller, M; Lahm, H; Wolf, E; Lange, R; Krane, M
2017-01-01
According to the European Resuscitation Council guidelines, the use of mechanical chest compression devices is a reasonable alternative in situations where manual chest compression is impractical or compromises provider safety. The aim of this study is to compare the performance of a recently developed chest compression device (Corpuls CPR) with an established system (LUCAS II) in a pig model. Methods. Pigs (n = 5/group) in provoked ventricular fibrillation were left untreated for 5 minutes, after which 15 min of cardiopulmonary resuscitation was performed with chest compressions. After 15 min, defibrillation was performed every 2 min if necessary, and up to 3 doses of adrenaline were given. If there was no return of spontaneous circulation after 25 min, the experiment was terminated. Coronary perfusion pressure, carotid blood flow, end-expiratory CO2, regional oxygen saturation by near infrared spectroscopy, blood gas, and local organ perfusion with fluorescent labelled microspheres were measured at baseline and during resuscitation. Results. Animals treated with Corpuls CPR had significantly higher mean arterial pressures during resuscitation, along with a detectable trend of greater carotid blood flow and organ perfusion. Conclusion. Chest compressions with the Corpuls CPR device generated significantly higher mean arterial pressures than compressions performed with the LUCAS II device.
Corpuls CPR Generates Higher Mean Arterial Pressure Than LUCAS II in a Pig Model of Cardiac Arrest
Mendoza, A.; Prinzing, A.; Stroh, A.; Xinghai, L.; Polski, M.; Heller, M.; Lahm, H.; Wolf, E.; Lange, R.; Krane, M.
2017-01-01
According to the European Resuscitation Council guidelines, the use of mechanical chest compression devices is a reasonable alternative in situations where manual chest compression is impractical or compromises provider safety. The aim of this study is to compare the performance of a recently developed chest compression device (Corpuls CPR) with an established system (LUCAS II) in a pig model. Methods. Pigs (n = 5/group) in provoked ventricular fibrillation were left untreated for 5 minutes, after which 15 min of cardiopulmonary resuscitation was performed with chest compressions. After 15 min, defibrillation was performed every 2 min if necessary, and up to 3 doses of adrenaline were given. If there was no return of spontaneous circulation after 25 min, the experiment was terminated. Coronary perfusion pressure, carotid blood flow, end-expiratory CO2, regional oxygen saturation by near infrared spectroscopy, blood gas, and local organ perfusion with fluorescent labelled microspheres were measured at baseline and during resuscitation. Results. Animals treated with Corpuls CPR had significantly higher mean arterial pressures during resuscitation, along with a detectable trend of greater carotid blood flow and organ perfusion. Conclusion. Chest compressions with the Corpuls CPR device generated significantly higher mean arterial pressures than compressions performed with the LUCAS II device. PMID:29392137
Continuous-variable quantum key distribution in uniform fast-fading channels
NASA Astrophysics Data System (ADS)
Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano
2018-03-01
We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.
Heralded quantum steering over a high-loss channel
Weston, Morgan M.; Slussarenko, Sergei; Chrzanowski, Helen M.; Wollmann, Sabine; Shalm, Lynden K.; Verma, Varun B.; Allman, Michael S.; Nam, Sae Woo; Pryde, Geoff J.
2018-01-01
Entanglement is the key resource for many long-range quantum information tasks, including secure communication and fundamental tests of quantum physics. These tasks require robust verification of shared entanglement, but performing it over long distances is presently technologically intractable because the loss through an optical fiber or free-space channel opens up a detection loophole. We design and experimentally demonstrate a scheme that verifies entanglement in the presence of at least 14.8 ± 0.1 dB of added loss, equivalent to approximately 80 km of telecommunication fiber. Our protocol relies on entanglement swapping to herald the presence of a photon after the lossy channel, enabling event-ready implementation of quantum steering. This result overcomes the key barrier in device-independent communication under realistic high-loss scenarios and in the realization of a quantum repeater. PMID:29322093
Heralded quantum steering over a high-loss channel.
Weston, Morgan M; Slussarenko, Sergei; Chrzanowski, Helen M; Wollmann, Sabine; Shalm, Lynden K; Verma, Varun B; Allman, Michael S; Nam, Sae Woo; Pryde, Geoff J
2018-01-01
Entanglement is the key resource for many long-range quantum information tasks, including secure communication and fundamental tests of quantum physics. These tasks require robust verification of shared entanglement, but performing it over long distances is presently technologically intractable because the loss through an optical fiber or free-space channel opens up a detection loophole. We design and experimentally demonstrate a scheme that verifies entanglement in the presence of at least 14.8 ± 0.1 dB of added loss, equivalent to approximately 80 km of telecommunication fiber. Our protocol relies on entanglement swapping to herald the presence of a photon after the lossy channel, enabling event-ready implementation of quantum steering. This result overcomes the key barrier in device-independent communication under realistic high-loss scenarios and in the realization of a quantum repeater.
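For orientation, the quoted equivalence between the tolerated loss and fibre length follows directly from the typical attenuation of telecommunication fibre near 1550 nm (roughly 0.18-0.2 dB/km; the coefficient assumed by the authors is not stated in the abstract):

\[
L \approx \frac{14.8\ \mathrm{dB}}{0.185\ \mathrm{dB/km}} \approx 80\ \mathrm{km}.
\]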
NASA Astrophysics Data System (ADS)
Tsifouti, A.; Triantaphillidou, S.; Larabi, M. C.; Doré, G.; Bilissi, E.; Psarrou, A.
2015-01-01
In this investigation we study the effects of compression and frame rate reduction on the performance of four video analytics (VA) systems utilizing a low complexity scenario, such as the Sterile Zone (SZ). Additionally, we identify the most influential scene parameters affecting the performance of these systems. The SZ scenario is a scene consisting of a fence, not to be trespassed, and an area with grass. The VA system needs to alarm when there is an intruder (attack) entering the scene. The work includes testing of the systems with uncompressed and compressed (using H.264/MPEG-4 AVC at 25 and 5 frames per second) footage, consisting of quantified scene parameters. The scene parameters include descriptions of scene contrast, camera to subject distance, and attack portrayal. Additional footage, including only distractions (no attacks) is also investigated. Results have shown that every system has performed differently for each compression/frame rate level, whilst overall, compression has not adversely affected the performance of the systems. Frame rate reduction has decreased performance and scene parameters have influenced the behavior of the systems differently. Most false alarms were triggered with a distraction clip, including abrupt shadows through the fence. Findings could contribute to the improvement of VA systems.
Hydrogen as an Auxiliary Fuel in Compression-Ignition Engines
NASA Technical Reports Server (NTRS)
Gerrish, Harold C; Foster, H
1936-01-01
An investigation was made to determine whether a sufficient amount of hydrogen could be efficiently burned in a compression-ignition engine to compensate for the increase of lift of an airship due to the consumption of the fuel oil. The performance of a single-cylinder four-stroke-cycle compression-ignition engine operating on fuel oil alone was compared with its performance when various quantities of hydrogen were inducted with the inlet air. Engine-performance data, indicator cards, and exhaust-gas samples were obtained for each change in engine-operating conditions.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
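A minimal implementation of the spectral angle metric used as the quality assessment function above (the encoder's control loop itself is not shown):

```python
# Spectral angle between an original and a reconstructed pixel spectrum.
import numpy as np

def spectral_angle(x, y):
    """Return the angle in radians between spectra x and y (1-D arrays)."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

In a multi-pass scheme, the encoder would tighten or relax its quantisation on each pass until the per-pixel spectral angle stays below the user-specified threshold while the bit rate is minimised.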
Aging and compressibility of municipal solid wastes.
Chen, Y M; Zhan, Tony L T; Wei, H Y; Ke, H
2009-01-01
The expansion of a municipal solid waste (MSW) landfill requires the ability to predict settlement behavior of the existing landfill. The practice of using a single compressibility value when performing a settlement analysis may lead to inaccurate predictions. This paper gives consideration to changes in the mechanical compressibility of MSW as a function of the fill age of MSW as well as the embedding depth of MSW. Borehole samples representative of various fill ages were obtained from five boreholes drilled to the bottom of the Qizhishan landfill in Suzhou, China. Thirty-one borehole samples were used to perform confined compression tests. Waste composition and volume-mass properties (i.e., unit weight, void ratio, and water content) were measured on all the samples. The test results showed that the compressible components of the MSW (i.e., organics, plastics, paper, wood and textiles) decreased with an increase in the fill age. The in situ void ratio of the MSW was shown to decrease with depth into the landfill. The compression index, Cc, was observed to decrease from 1.0 to 0.3 with depth into the landfill. Settlement analyses were performed on the existing landfill, demonstrating that the variation of MSW compressibility with fill age or depth should be taken into account in the settlement prediction.
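For reference, the one-dimensional primary-compression relation that links the measured compression index to settlement is (the paper's own settlement model may add creep or degradation terms):

\[
S=\frac{C_c}{1+e_0}\,H\,\log_{10}\!\frac{\sigma'_0+\Delta\sigma'}{\sigma'_0},
\]

so allowing C_c to fall from about 1.0 near the surface to 0.3 at depth, as measured, changes the predicted contribution of the deeper layers by roughly a factor of three compared with a single-value analysis.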
The effect of compression on individual pressure vessel nickel/hydrogen components
NASA Technical Reports Server (NTRS)
Manzo, Michelle A.; Perez-Davis, Marla E.
1988-01-01
Compression tests were performed on representative Individual Pressure Vessel (IPV) Nickel/Hydrogen cell components in an effort to better understand the effects of force on component compression and the interactions of components under compression. It appears that the separator is the most easily compressed of all of the stack components. It will typically partially compress before any of the other components begin to compress. The compression characteristics of the cell components in assembly differed considerably from what would be predicted based on individual compression characteristics. Component interactions played a significant role in the stack response to compression. The results of the compression tests were factored into the design and selection of Belleville washers added to the cell stack to accommodate nickel electrode expansion while keeping the pressure on the stack within a reasonable range of the original preset.
NASA Technical Reports Server (NTRS)
Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.
2017-01-01
Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.
Compression of surface myoelectric signals using MP3 encoding.
Chan, Adrian D C
2011-01-01
The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
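The distortion measure referred to above, the percent residual difference (PRD) between the original record and its decoded version, can be computed as follows (the MP3 encode/decode step itself is outside this sketch):

```python
# Percent residual difference between an original signal and its reconstruction.
import numpy as np

def prd(original, reconstructed):
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))
```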
Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud
2018-03-01
Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest compression induced oscillations has been previously reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. Presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection, and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.
Proceedings of the Antenna Applications Symposium (1993). Volume 2,
1994-02-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giovannetti, Vittorio; Maccone, Lorenzo; Shapiro, Jeffrey H.
The minimum Rényi and Wehrl output entropies are found for bosonic channels in which the signal photons are either randomly displaced by a Gaussian distribution (classical-noise channel), or coupled to a thermal environment through lossy propagation (thermal-noise channel). It is shown that the Rényi output entropies of integer orders z ≥ 2 and the Wehrl output entropy are minimized when the channel input is a coherent state.
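For context, the order-z Rényi entropy minimized above is conventionally defined (up to the choice of logarithm base) as follows; the notation S_z and ρ is standard usage supplied here, not taken from the record:

    S_z(\rho) \;=\; \frac{1}{1 - z}\,\ln \operatorname{Tr}\!\left(\rho^{z}\right), \qquad z \ge 2,

with the von Neumann entropy recovered in the limit z → 1.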
ERIC Educational Resources Information Center
Pereyra, Pedro; Robledo-Martinez, Arturo
2009-01-01
We explicitly show that the well-known transmission and reflection amplitudes of planar slabs, obtained via an algebraic summation of Fresnel amplitudes, are completely equivalent to those obtained from transfer matrices in the scattering approach. This equivalence makes the finite periodic systems theory a powerful alternative to the cumbersome…
GTD analysis of airborne antennas radiating in the presence of lossy dielectric layers
NASA Technical Reports Server (NTRS)
Rojas-Teran, R. G.; Burnside, W. D.
1981-01-01
The patterns of monopole or aperture antennas mounted on a perfectly conducting convex surface radiating in the presence of a dielectric or metal plate are computed. The geometrical theory of diffraction is used to analyze the radiating system and extended here to include diffraction by flat dielectric slabs. Modified edge diffraction coefficients valid for wedges whose walls are lossy or lossless thin dielectric or perfectly conducting plates are developed. The width of the dielectric plates cannot exceed a quarter of a wavelength in free space, and the interior angle of the wedge is assumed to be close to 0 deg or 180 deg. Systematic methods for computing the individual components of the total high frequency field are discussed. The accuracy of the solutions is demonstrated by comparisons with measured results, where a 2 lambda by 4 lambda prolate spheroid is used as the convex surface. A jump or kink appears in the calculated pattern when higher order terms that are important are not included in the final solution. The most immediate application of the results presented here is in the modelling of structures such as aircraft which are composed of nonmetallic parts that play a significant role in the pattern.
Infrared broadband metasurface absorber for reducing the thermal mass of a microbolometer.
Jung, Joo-Yun; Song, Kyungjun; Choi, Jun-Hyuk; Lee, Jihye; Choi, Dae-Geun; Jeong, Jun-Ho; Neikirk, Dean P
2017-03-27
We demonstrate an infrared broadband metasurface absorber that is suitable for increasing the response speed of a microbolometer by reducing its thermal mass. A large fraction of holes are made in a periodic pattern on a thin lossy metal layer characterised with a non-dispersive effective surface impedance. This can be used as a non-resonant metasurface that can be integrated with a Salisbury screen absorber to construct an absorbing membrane for a microbolometer that can significantly reduce the thermal mass while maintaining high infrared broadband absorption in the long wavelength infrared (LWIR) band. The non-dispersive effective surface impedance can be matched to the free space by optimising the surface resistance of the thin lossy metal layer depending on the size of the patterned holes by using a dc approximation method. In experiments a high broadband absorption was maintained even when the fill factor of the absorbing area was reduced to 28% (hole area: 72%), and it was theoretically maintained even when the fill factor of the absorbing area was reduced to 19% (hole area: 81%). Therefore, a metasurface with a non-dispersive effective surface impedance is a promising solution for reducing the thermal mass of infrared microbolometer pixels.
Tantawi, Sami G.; Vlieks, Arnold E.
1998-09-01
A compact high-power RF load comprises a series of very low Q resonators, or chokes [16], in a circular waveguide [10]. The sequence of chokes absorbs the RF power gradually in a short distance while keeping the bandwidth relatively wide. A polarizer [12] at the input end of the load is provided to convert incoming TE₁₀ mode signals to circularly polarized TE₁₁ mode signals. Because the load operates in the circularly polarized mode, the energy is uniformly and efficiently absorbed and the load is more compact than a rectangular load. Using these techniques, a load having a bandwidth of 500 MHz can be produced with an average power dissipation level of 1.5 kW at X-band, and a peak power dissipation of 100 MW. The load can be made from common lossy materials, such as stainless steel, and is less than 15 cm in length. These techniques can also produce loads for use as an alternative to ordinary waveguide loads in small and medium RF accelerators, in radar systems, and in other microwave applications. The design is easily scalable to other RF frequencies and adaptable to the use of other lossy materials.
Reduction of the radar cross section of arbitrarily shaped cavity structures
NASA Technical Reports Server (NTRS)
Chou, R.; Ling, H.; Lee, S. W.
1987-01-01
The problem of the reduction of the radar cross section (RCS) of open-ended cavities was studied. The issues investigated were reduction through lossy coating materials on the inner cavity wall and reduction through shaping of the cavity. A method was presented to calculate the RCS of any arbitrarily shaped structure in order to study the shaping problem. The limitations of this method were also addressed. The modal attenuation was studied in a multilayered coated waveguide. It was shown that by employing two layers of coating, it was possible to achieve an increase in both the magnitude of attenuation and the frequency band of effectiveness. The numerical method used in finding the roots of the characteristic equation breaks down when the coating is very lossy and its thickness is large in terms of wavelength. A new method of computing the RCS of an arbitrary cavity was applied to study the effects of longitudinal bending on RCS reduction. The ray and modal descriptions for the fields in a parallel plate waveguide were compared. To extend the range of validity of the Shooting and Bouncing Ray (SBR) method, the simple ray picture must be modified to account for the beam blurring.
Complex dispersion relation of surface acoustic waves at a lossy metasurface
NASA Astrophysics Data System (ADS)
Schwan, Logan; Geslain, Alan; Romero-García, Vicente; Groby, Jean-Philippe
2017-01-01
The complex dispersion relation of surface acoustic waves (SAWs) at a lossy resonant metasurface is theoretically and experimentally reported. The metasurface consists of the periodic arrangement of borehole resonators in a rigid substrate. The theoretical model relies on a boundary layer approach that provides the effective metasurface admittance governing the complex dispersion relation in the presence of viscous and thermal losses. The model is experimentally validated by measurements in the semi-anechoic chamber. The complex SAW dispersion relation is experimentally retrieved from the analysis of the spatial Laplace transform of the pressure scanned along a line at the metasurface. The geometrical spreading of the energy from the speaker is accounted for, and both the real and imaginary parts of the SAW wavenumber are obtained. The results show that the strong reduction of the SAW group velocity occurs jointly with a drastic attenuation of the wave, leading to the confinement of the field close to the source and preventing the efficient propagation of such slow-sound surface modes. The method opens perspectives to theoretically predict and experimentally characterize both the dispersion and the attenuation of surface waves at structured surfaces.
IPTV multicast with peer-assisted lossy error control
NASA Astrophysics Data System (ADS)
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
Measuring noise in microwave metamaterials
NASA Astrophysics Data System (ADS)
Wiltshire, M. C. K.; Syms, R. R. A.
2018-05-01
Electromagnetic metamaterials are artificially constructed media composed of arrays of electrical circuits that can exhibit electric and magnetic characteristics unlike those of any conventional materials. However, the materials are lossy and hence noisy, so that the signal-to-noise ratio in practical situations is greatly reduced. In particular, operating in the double negative region, where both the permittivity and the permeability are negative so that the refractive index is real but negative, incurs significant loss and noise penalties. In this work, we report noise measurements on a double negative metamaterial at microwave frequencies and compare them with the results of a simple model based on a transmission line loaded with lossy elements that mimic the split ring resonators and fine wires of the metamaterial. A noise source is associated with the resistive part of each element, and these are added incoherently to predict the total noise spectrum of the metamaterial. The theoretical results are in good agreement with the measurements. In particular, we find that the measured noise spectrum has contributions from both electric and magnetic noise, but is dominated by the magnetic noise. This limits possible applications, even with optimised materials, to functions that cannot be realised by conventional means.
NASA Astrophysics Data System (ADS)
Wiener, Clinton; Weiss, Robert; White, Christopher; Vogt, Bryan
2014-03-01
Since Sauerbrey's 1959 discovery of the mass-frequency relationship in quartz, the QCM has been utilized to probe deposited mass layers. The mass-to-frequency (imaginary component of the impedance) relationship breaks down when the added mass is not rigidly coupled to the sensor surface and viscous dissipation of the quartz occurs. This dissipation is important in the deposition of soft materials such as polymers or biological molecules. By using a viscoelastic model for frequency and dissipation, the mass, viscosity, and shear modulus can be accurately determined. Here, we demonstrate an additional breakdown in the coupling of the imaginary component of the impedance to the mass by simultaneous QCM-D and spectroscopic ellipsometry (SE) measurements by examination of the swelling behavior of thin physically crosslinked poly(N-isopropylacrylamide) films. A film swollen beyond 3 times its dry thickness shows a frequency increase (mass loss) and dissipation increases (increasing lossy film character) on cooling, but SE results show increased swelling of the film. This behavior was found to be thickness invariant for dry thicknesses of 32 nm and greater. Modeling of this QCM-D data shows non-physical results. Scaling concepts associated with this high loss limit will be discussed.
Modal expansions in periodic photonic systems with material loss and dispersion
NASA Astrophysics Data System (ADS)
Wolff, Christian; Busch, Kurt; Mortensen, N. Asger
2018-03-01
We study band-structure properties of periodic optical systems composed of lossy and intrinsically dispersive materials. To this end, we develop an analytical framework based on adjoint modes of a lossy periodic electromagnetic system and show how the problem of linearly dependent eigenmodes in the presence of material dispersion can be overcome. We then formulate expressions for the band-structure derivative ∂ω/∂k (complex group velocity) and the local and total density of transverse optical states. Our exact expressions hold for 3D periodic arrays of materials with arbitrary dispersion properties and in general need to be evaluated numerically. They can be generalized to systems with two, one, or no directions of periodicity provided the fields are localized along nonperiodic directions. Possible applications are photonic crystals, metamaterials, metasurfaces composed of highly dispersive materials such as metals or lossless photonic crystals, and metamaterials or metasurfaces strongly coupled to resonant perturbations such as quantum dots or excitons in 2D materials. For illustration purposes, we analytically evaluate our expressions for some simple systems consisting of lossless dielectrics with one sharp Lorentzian material resonance added. By combining several Lorentz poles, this provides an avenue to perturbatively treat quite general material loss bands in photonic crystals.
Lossless compression of otoneurological eye movement signals.
Tossavainen, Timo; Juhola, Martti
2002-12-01
We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals, their compression has not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and using an entropy coder to encode the residual. We found that these eye movement signals recorded at 400 Hz and with 13-bit amplitude resolution could be losslessly compressed with a compression ratio of about 2.7.
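The predictor-plus-entropy-coder model mentioned above can be illustrated with a minimal sketch: a first-difference predictor decorrelates the samples and a generic byte-stream coder (zlib here, purely as a stand-in for the entropy coder actually used) encodes the residual. The 4000-sample sine trace is synthetic and only stands in for a recorded eye movement signal.

    import zlib
    import numpy as np

    def compress_signal(samples):
        """Lossless coding: first-difference predictor, then a generic byte coder."""
        x = np.asarray(samples, dtype=np.int32)
        residual = np.empty_like(x)
        residual[0] = x[0]
        residual[1:] = np.diff(x)              # predictor: previous sample
        return zlib.compress(residual.tobytes(), 9)

    def decompress_signal(payload):
        residual = np.frombuffer(zlib.decompress(payload), dtype=np.int32)
        return np.cumsum(residual).astype(np.int32)   # invert the predictor

    # Synthetic slow-moving trace standing in for a 400 Hz eye movement recording.
    x = (1000 * np.sin(np.linspace(0, 4 * np.pi, 4000))).astype(np.int32)
    payload = compress_signal(x)
    assert np.array_equal(decompress_signal(payload), x)
    print("compression ratio:", round(x.nbytes / len(payload), 2))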
Improving the mechanical performance of wood fiber reinforced bio-based polyurethane foam
NASA Astrophysics Data System (ADS)
Chang, Li-Chi
Because of the environmental impact of fossil fuel consumption, soybean-based polyurethane (PU) foam has been developed as an alternative to be used as the core in structural insulated panels (SIPs). Wood fibers can be added to enhance the resistance of foam against bending and buckling in compression. The goal of this work is to study the effect of three modifications: fiber surface treatment, catalyst choice, and mixing method on the compression performance of wood fiber-reinforced PU foam. Foams were made with a free-rising process. The compression performance of the foams was measured and the foams were characterized using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and X-ray computed tomography (CT). The foam reinforced with alkali-treated fibers had improved compression performance. The foams made with various catalysts shared similar performance. The foam made using a mechanical stirrer contained well-dispersed fibers but the reinforcing capability of the fibers was reduced.
The Quiescent-Chamber Type Compression-Ignition Engine
NASA Technical Reports Server (NTRS)
Foster, H H
1937-01-01
Report presents the results of performance tests of a single-cylinder 4-stroke-cycle compression-ignition engine having a vertical disk form of combustion chamber without air flow. The number, size, and direction of the orifices of the fuel-injection nozzles used were independently varied. A table and graphs are presented showing the performance of the engine with different nozzles; results of tests at different compression ratios, boost pressures, and coolant temperatures are also included.
Proposed data compression schemes for the Galileo S-band contingency mission
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Tong, Kevin
1993-01-01
The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. In case the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGAs) on an S-band (2.3 GHz) carrier. Considerable effort has been, and will continue to be, devoted to attempts to open the HGA. Also, various options for improving Galileo's telemetry downlink performance are being evaluated in the event that the HGA does not open at Jupiter arrival. Among all viable options, the most promising and powerful one is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight re-programming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of the DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
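For readers unfamiliar with the LZW family, the sketch below shows the textbook encoder: it grows a dictionary of previously seen byte strings and emits integer codes, with a Python dict standing in for the custom hashing scheme mentioned above. It illustrates the general algorithm only, not the flight implementation.

    def lzw_encode(data: bytes):
        """Textbook LZW: grow a dictionary of byte strings, emit integer codes."""
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        w = b""
        codes = []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc
            else:
                codes.append(dictionary[w])
                dictionary[wc] = next_code   # new phrase learned on the fly
                next_code += 1
                w = bytes([byte])
        if w:
            codes.append(dictionary[w])
        return codes

    print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))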
Loturco, Irineu; Winckler, Ciro; Lourenço, Thiago F; Veríssimo, Amaury; Kobal, Ronaldo; Kitamura, Katia; Pereira, Lucas A; Nakamura, Fábio Y
2016-01-01
Compression garments are thought to aid performance in some selected speed-power activities owing to improved sensory feedback and proprioception. The aim of this study was to test the effects of using compression garments on speed- and power-related performances in elite sprinters with visual impairment, who rely more on proprioception to perform than their Olympic peers. Eight top-level Paralympic sprinters competing in 100- and 200-m races performed, in the following order, an unloaded squat jump (SJ), a loaded jump squat (JS) and sprint tests over 20- and 70-m distances, each with and without the compression garment. The maximum mean propulsive power value obtained during the JS attempts (starting at 40% of their body mass, after which a load of 10% of body mass was progressively added) was considered for data analysis purposes. The athletes executed the SJ and JS attempts without any help from their guides. Magnitude-based inference was used to analyze the results. The unloaded SJ was possibly higher in the compression than the placebo condition (41.19 ± 5.09 vs. 39.49 ± 5.75 cm). Performance differences in the loaded JS and sprint tests were all rated as unclear. It was concluded that the acute enhancement in vertical jump ability should be explored in the preparation of Paralympic sprinters during power-related training sessions. However, chronic effects in Paralympic athletes wearing compression garments need to be further tested, in order to support their use as a specific training aid.
Does team lifting increase the variability in peak lumbar compression in ironworkers?
Faber, Gert; Visser, Steven; van der Molen, Henk F; Kuijer, P Paul F M; Hoozemans, Marco J M; Van Dieën, Jaap H; Frings-Dresen, Monique H W
2012-01-01
Ironworkers frequently perform heavy lifting tasks in teams of two or four workers. Team lifting could potentially lead to a higher variation in peak lumbar compression forces than lifts performed by one worker, resulting in higher maximal peak lumbar compression forces. This study compared single-worker lifts (25-kg, iron bar) to two-worker lifts (50-kg, two iron bars) and to four-worker lifts (100-kg, iron lattice). Inverse dynamics was used to calculate peak lumbar compression forces. To assess the variability in peak lumbar loading, all three lifting tasks were performed six times. Results showed that the variability in peak lumbar loading was somewhat higher in the team lifts compared to the single-worker lifts. However, despite this increased variability, team lifts did not result in larger maximum peak lumbar compression forces. Therefore, it was concluded that, from a biomechanical point of view, team lifting does not result in an additional risk for low back complaints in ironworkers.
An Evaluation of High Temperature Airframe Seals for Advanced Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
DeMange, Jeffrey J.; Dunlap, Patrick H.; Steinetz, Bruce M.; Drlik, Gary J.
2007-01-01
High temperature seals are required for advanced hypersonic airframe applications. In this study, both spring tube thermal barriers and innovative wafer seal systems were evaluated under relevant hypersonic test conditions (temperatures, pressures, etc.) via high temperature compression testing and room temperature flow assessments. Thermal barriers composed of a Rene 41 spring tube filled with Saffil insulation and overbraided with a Nextel 312 sheath showed acceptable performance at 1500 F in both short term and longer term compression testing. Nextel 440 thermal barriers with Rene 41 spring tubes and Saffil insulation demonstrated good compression performance up to 1750 F. A silicon nitride wafer seal/compression spring system displayed excellent load performance at temperatures as high as 2200 F and exhibited room temperature leakage values that were only 1/3 those for the spring tube rope seals. For all seal candidates evaluated, no significant degradation in leakage resistance was noted after high temperature compression testing. In addition to these tests, a superalloy seal suitable for dynamic seal applications was optimized through finite element techniques.
A novel ECG data compression method based on adaptive Fourier decomposition
NASA Astrophysics Data System (ADS)
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
Chest compression rate measurement from smartphone video.
Engan, Kjersti; Hinna, Thomas; Ryen, Tom; Birkenes, Tonje S; Myklebust, Helge
2016-08-11
Out-of-hospital cardiac arrest is a life-threatening situation where the first person performing cardiopulmonary resuscitation (CPR) most often is a bystander without medical training. Some existing smartphone apps can call the emergency number and provide, for example, the global positioning system (GPS) location, like the Hjelp 113-GPS App by the Norwegian air ambulance. We propose to extend the functionality of such apps by using the built-in camera in a smartphone to capture video of the CPR performed, primarily to estimate the duration and rate of the chest compression executed, if any. All calculations are done in real time, and both the caller and the dispatcher will receive the compression rate feedback when detected. The proposed algorithm is based on finding a dynamic region of interest in the video frames, and thereafter evaluating the power spectral density by computing the fast Fourier transform over sliding windows. The power of the dominating frequencies is compared to the power of the frequency area of interest. The system is tested on different persons, male and female, in different scenarios addressing target compression rates, background disturbances, compression with mouth-to-mouth ventilation, various background illuminations and phone placements. All tests were done on a recording Laerdal manikin, providing true compression rates for comparison. Overall, the algorithm is seen to be promising, and it manages a number of disturbances and lighting situations. For target rates at 110 cpm, as recommended during CPR, the mean error in compression rate (standard deviation over tests in parentheses) is 3.6 (0.8) for short-haired bystanders, and 8.7 (6.0) including medium- and long-haired bystanders. The presented method shows that it is feasible to detect the compression rate of chest compressions performed by a bystander by placing the smartphone close to the patient, and using the built-in camera combined with a video processing algorithm performed in real time on the device.
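The spectral step of such an algorithm, estimating the dominant compression frequency inside a band of interest, can be sketched as follows. The sampling rate, band limits and rectangular window below are illustrative choices rather than the published parameters.

    import numpy as np

    def estimate_compression_rate(signal, fs, band=(60.0, 160.0)):
        """Dominant rate (compressions/min) of a motion trace within a CPR band."""
        x = np.asarray(signal, dtype=float)
        x = x - x.mean()
        spectrum = np.abs(np.fft.rfft(x)) ** 2                 # power spectrum
        freqs_cpm = np.fft.rfftfreq(len(x), d=1.0 / fs) * 60.0  # bins in cycles/min
        in_band = (freqs_cpm >= band[0]) & (freqs_cpm <= band[1])
        peak = np.argmax(np.where(in_band, spectrum, 0.0))
        return freqs_cpm[peak]

    # Synthetic check: a 110 cpm oscillation sampled at 30 frames/s for 10 s.
    fs = 30.0
    t = np.arange(0, 10, 1.0 / fs)
    trace = np.sin(2 * np.pi * (110.0 / 60.0) * t) + 0.2 * np.random.randn(t.size)
    print(round(estimate_compression_rate(trace, fs), 1))   # close to 110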
A biological compression model and its applications.
Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd
2011-01-01
A biological compression model, the expert model, is presented, which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.
COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation
NASA Technical Reports Server (NTRS)
Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos
2015-01-01
The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and reconstructed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
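The onboard step described above reduces to a single matrix multiply against a fixed measurement matrix. The sketch below illustrates it with NumPy; the dimensions, sparsity level and Gaussian measurement matrix are chosen purely for illustration, and the ground-side sparse reconstruction (e.g. an l1 solver) is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    n, m = 1024, 256                      # ambient and measurement dimensions (illustrative)
    x = np.zeros(n)
    x[rng.choice(n, size=20, replace=False)] = rng.standard_normal(20)   # sparse scene

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    y = Phi @ x                                      # onboard "compression" is one matrix multiply

    print(f"downlinked {m} measurements instead of {n} samples "
          f"({n / m:.0f}x reduction before any entropy coding)")
    # The sparse scene would be recovered on the ground, e.g. by basis pursuit.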
Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A
2018-06-01
Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels, if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables any fingerprint compression strategy to be implemented in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
ERGC: an efficient referential genome compression algorithm
Saha, Subrata; Rajasekaran, Sanguthevar
2015-01-01
Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
2014-01-01
Background: According to the guidelines for cardiopulmonary resuscitation (CPR), the rotation time for chest compression should be about 2 min. The quality of chest compressions is related to the physical fitness of the rescuer, but this was not considered when determining rotation time. The present study aimed to clarify associations between body weight and the quality of chest compression and physical fatigue during CPR performed by 18 registered nurses (10 male and 8 female) assigned to light and heavy groups according to the average weight for each sex in Japan. Methods: Five-minute chest compressions were then performed on a manikin that was placed on the floor. Measurement parameters were compression depth, heart rate, oxygen uptake, integrated electromyography signals, and rating of perceived exertion. Compression depth was evaluated according to the ratio (%) of adequate compressions (at least 5 cm deep). Results: The ratio of adequate compressions decreased significantly over time in the light group. Values for heart rate, oxygen uptake, muscle activity defined as integrated electromyography signals, and rating of perceived exertion were significantly higher for the light group than for the heavy group. Conclusion: Chest compression caused increased fatigue among the light group, which consequently resulted in a gradual fall in the quality of chest compression. These results suggested that individuals with a lower body weight should rotate at 1-min intervals to maintain high quality CPR and thus improve the survival rates and neurological outcomes of victims of cardiac arrest. PMID:24957919
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then, the result of the processing application is finally re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. This algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it allows the image to be de-noised and its contours enhanced.
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling, there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High-quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
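As a point of reference, the basic SVD step underlying coil compression can be sketched as below. The per-location compression and virtual-coil alignment that distinguish the method described above are omitted, and the array sizes, subspace rank and noise level are illustrative assumptions only.

    import numpy as np

    def coil_compress(data, n_virtual):
        """Compress multi-coil samples of shape (n_samples, n_coils) into fewer
        virtual coils via a global SVD; returns (compressed data, compression matrix)."""
        _, _, vh = np.linalg.svd(data, full_matrices=False)
        A = vh[:n_virtual].conj().T            # (n_coils, n_virtual) compression matrix
        return data @ A, A

    # Synthetic example: 32 physical coils whose signals span a rank-6 subspace plus noise.
    rng = np.random.default_rng(1)
    mixing = rng.standard_normal((6, 32)) + 1j * rng.standard_normal((6, 32))
    sources = rng.standard_normal((5000, 6)) + 1j * rng.standard_normal((5000, 6))
    noise = 0.01 * (rng.standard_normal((5000, 32)) + 1j * rng.standard_normal((5000, 32)))
    kspace = sources @ mixing + noise

    compressed, A = coil_compress(kspace, n_virtual=6)
    residual = np.linalg.norm(kspace - compressed @ A.conj().T) / np.linalg.norm(kspace)
    print(f"relative energy lost by keeping 6 virtual coils: {residual:.3%}")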
Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham
2017-04-01
This study investigated effects of wearing compression garments and textured insoles on modes of movement organisation emerging during performance of lower limb interceptive actions in association football. Participants were six skilled (age = 15.67 ± 0.74 years) and six less-skilled (age = 15.17 ± 1.1 years) football players. All participants performed 20 instep kicks with maximum velocity in four randomly organised insoles and socks conditions, (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI); and (d), Compression Socks with Textured Insoles (CSTI). Results showed that, when wearing textured and compression materials (CSSI condition), less-skilled participants displayed significantly greater hip extension and flexion towards the ball contact phase, indicating larger ranges of motion in the kicking limb than in other conditions. Less-skilled participants also demonstrated greater variability in knee-ankle intralimb (angle-angle plots) coordination modes in the CSTI condition. Findings suggested that use of textured and compression materials increased attunement to somatosensory information from lower limb movement, to regulate performance of dynamic interceptive actions like kicking, especially in less-skilled individuals.
Compressive sensing scalp EEG signals: implementations and practical performance.
Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther
2012-11-01
Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI; and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.
ECG compression using non-recursive wavelet transform with quality control
NASA Astrophysics Data System (ADS)
Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching
2016-09-01
While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, such an SQ scheme must select a set of multilevel quantisers for each quantisation process. As a result of the properties of multiple-to-one mapping, however, this scheme is not conducive to reconstruction error control. In order to address this problem, this paper presents a single-variable control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process that first uses a genetic algorithm (GA) for a high compression ratio (CR), then a quadratic curve fitting for linear distortion control, and finally a fuzzy decision-making step for minimising the data dependency effect and selecting the optimal SQ. The two databases, Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia, are used to evaluate quality control performance. Experimental results show that the design method guarantees a high compression performance SQ scheme with statistically linear distortion. This property can be independent of training data and can facilitate rapid error control.
A hybrid data compression approach for online backup service
NASA Astrophysics Data System (ADS)
Wang, Hua; Zhou, Ke; Qin, MingKang
2009-08-01
With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Due to the numerous backup users, how to reduce the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects. For example, data stream compression can only realize intra-file compression and de-duplication only eliminates inter-file redundant data, so the compression efficiency of a single method cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time. The adaptability of different algorithms to particular situations is also analyzed. The performance analysis shows that the hybrid compression policy yields a substantial improvement.
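A minimal sketch of the two-level idea follows, assuming fixed-size chunking, SHA-256 digests for the global de-duplication index, and zlib as the block-level stream compressor; all implementation choices here are illustrative, not those of the paper.

    import hashlib
    import zlib

    class HybridStore:
        """Two-level store: chunk-hash de-duplication across users (global level),
        zlib stream compression of each unique chunk (block level)."""

        def __init__(self, chunk_size=4096):
            self.chunk_size = chunk_size
            self.chunks = {}          # sha256 digest -> compressed chunk

        def put(self, data: bytes):
            """Store a file; return its recipe (the list of chunk digests)."""
            recipe = []
            for i in range(0, len(data), self.chunk_size):
                chunk = data[i:i + self.chunk_size]
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in self.chunks:                   # global de-duplication
                    self.chunks[digest] = zlib.compress(chunk)  # block compression
                recipe.append(digest)
            return recipe

        def get(self, recipe):
            return b"".join(zlib.decompress(self.chunks[d]) for d in recipe)

    store = HybridStore()
    backup_a = b"shared operating system files " * 2000
    backup_b = b"shared operating system files " * 2000 + b"user specific document"
    r_a, r_b = store.put(backup_a), store.put(backup_b)
    assert store.get(r_b) == backup_b
    stored = sum(len(c) for c in store.chunks.values())
    print(f"raw: {len(backup_a) + len(backup_b)} bytes, stored: {stored} bytes")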
NASA Astrophysics Data System (ADS)
Gu, Rui
Vapor compression cycles are widely used in heating, refrigerating and air-conditioning. A slight performance improvement in the components of a vapor compression cycle, such as the compressor, can play a significant role in reducing energy use. However, the complexity and cost of these improvements can block their application in the market. Modifying the conventional cycle configuration can offer a less complex and less costly alternative approach. Economizing is a common modification for improving the performance of the refrigeration cycle, reducing the work required to compress the gas per unit mass. Traditionally, economizing requires multi-stage compressors, the cost of which has restrained the scope for practical implementation. Compressors with injection ports, which can be used to inject economized refrigerant during the compression process, introduce new possibilities for economization at lower cost. This work focuses on computationally investigating the performance of a refrigeration system with two-phase fluid injection, developing a better understanding of the impact of injected refrigerant quality on refrigeration system performance, as well as evaluating the potential COP improvement that injection provides, based on refrigeration system performance data provided by Copeland.
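For orientation, the coefficient of performance against which such economized-cycle gains are measured is conventionally written in terms of specific enthalpies at the standard cycle state points; the notation h1 through h4 is the usual textbook convention, not taken from the abstract:

    \mathrm{COP}_{\text{cooling}} \;=\; \frac{\dot Q_{\text{evap}}}{\dot W_{\text{comp}}} \;=\; \frac{h_1 - h_4}{h_2 - h_1},

where state 1 is the compressor inlet, state 2 the compressor outlet, and state 4 the evaporator inlet; economized injection acts on the compression work in the denominator.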
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, the wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) are of high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed with the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated for the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, which may be integrated into a real-time imaging device.
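The matching step lends itself to a compact sketch. The abstract does not give the exact composite-compression-ratio formula, so the score below is one plausible choice, and zlib on raw pixel bytes stands in for the JPEG codec; the toy gallery, image sizes and corrupted-copy probe are likewise illustrative assumptions.

    import zlib
    import numpy as np

    def csize(img):
        """Compressed size of an 8-bit image; zlib stands in for the JPEG codec here."""
        return len(zlib.compress(np.ascontiguousarray(img, dtype=np.uint8).tobytes(), 9))

    def match_by_compression(probe, gallery):
        """Index of the gallery image whose pairing with the probe compresses best
        relative to the images compressed alone (one plausible composite ratio)."""
        cp = csize(probe)
        scores = []
        for g in gallery:
            mixed = np.concatenate([probe, g], axis=0)      # stack probe and gallery image
            scores.append((cp + csize(g)) / csize(mixed))   # > 1 when the pair shares structure
        return int(np.argmax(scores))

    # Toy usage: the probe is a partially corrupted copy of one gallery entry;
    # real use would pass aligned grey-scale face images instead.
    rng = np.random.default_rng(2)
    gallery = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(5)]
    probe = gallery[3].copy()
    probe[:8] = rng.integers(0, 256, (8, 64), dtype=np.uint8)
    print(match_by_compression(probe, gallery))   # expected: 3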
Fanshan, Meng; Lin, Zhao; Wenqing, Liu; Chunlei, Lu; Yongqiang, Liu; Naiyi, Li
2013-01-01
Cardiopulmonary resuscitation (CPR) is a sudden emergency procedure that requires a rapid and efficient response, and personnel training in lifesaving procedures. Regular practice and training are necessary to improve resuscitation skills and reduce anxiety among the staff. CPR is one of the most important skills mastered by medical volunteers serving the Mt. Taishan International Mounting Festival; we randomly selected some of them to evaluate the quality of their CPR performance and compared the results with those of untrained doctors and nurses. In order to evaluate the effect of repeated standard CPR training on the performance quality of medical volunteers for the Mt. Taishan International Mounting Festival, their CPR performance quality was compared with that of untrained medical workers working in emergency departments of hospitals in Taian. The CPR performance quality of 52 medical volunteers (Standard Training Group), who had continually taken part in standard CPR technical training for six months, was tested at random and compared with that of 68 medical workers (Compared Group) working in emergency departments of hospitals in Taian who had not attended CPR training within a year. The QCPR 3535 monitor (provided by Philips Company) was used to measure the standard degree of single simulated CPR performance, including the chest compression depth, frequency, released pressure between compressions, and performance time of compression and ventilation; the results were recorded in a table and the number of practical compressions per minute was calculated. The data were analyzed by χ² and t tests. The factors that could influence CPR performance, including gender, age, placement, hand skill, posture of compression and frequency of training, were classified, given parameters, and subjected to logistic regression analysis. The CPR performance quality of the volunteers was much higher than that of the compared group. The overall pass rates were respectively 86.4% and 31.9%; the pass rates of the medical volunteers in terms of chest compression depth, frequency, and released pressure between compressions were higher than those of the compared group (89.6%, 94.2%, 95.8% vs 50.3%, 53.0%, 83.1%, P<0.01); there was little difference in overall performance time (118.4 ± 13.5 s vs 116.0 ± 10.4 s, P>0.05); the duration of ventilation in each performance section was much shorter than in the compared group (6.38 ± 1.2 vs 7.47 ± 1.7, P<0.01); there was little difference in the number of practical compressions per minute (78.2 ± 3.5 vs 78.8 ± 12.2, P>0.05); and the time proportion of compression to ventilation was 2.6:1 vs 2.1:1. The logistic regression analysis showed that CPR performance quality was clearly related to hand skill, posture of compression and repeated standard training, with respectively OR 13.12, 95% CI (2.35–73.2); OR 30.89, 95% CI (3.62–263.5); and OR 4.07, 95% CI (1.16–14.2). The CPR performance quality of volunteers who had received repeated standard training was much higher than that of untrained medical workers, showing that standard training helps improve CPR performance quality.
Extended testing of compression distillation.
NASA Technical Reports Server (NTRS)
Bambenek, R. A.; Nuccio, P. P.
1972-01-01
During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system, which includes the compression distillation process for recovering usable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit, developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes, to assure the recovery of sterile potable water from urine and treated urinal flush water.