The Effects of Bottom Ash on Setting Time and Compressive Strength of Fly Ash Geopolymer Paste
NASA Astrophysics Data System (ADS)
Affandhie, B. A.; Kurniasari, P. T.; Darmawan, M. S.; Subekti, S.; Wibowo, B.; Husin, N. A.; Bayuaji, R.; Irawan, S.
2017-11-01
This research investigates the contribution of coal combustion wastes, fly ash and bottom ash, as the binding agent of geopolymer concrete. The methodology uses a laboratory experimental approach, casting cylindrical paste specimens of 2.5 cm diameter x 5 cm height with several combinations of fly ash and bottom ash, tested for setting time (ASTM C 191-04a) and compressive strength (ASTM C 39-04a). The research concludes that the addition of bottom ash to fly ash-based geopolymer paste gives good results in both setting time and compressive strength.
Iterative dictionary construction for compression of large DNA data sets.
Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin
2012-01-01
Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation, COMRAD, of an existing disk-based method identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
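COMRAD's multi-pass replacement of repeated content can be illustrated with a toy dictionary coder. The sketch below is a minimal illustration of the idea only, not the published COMRAD implementation; the k-mer length, repeat threshold and symbol numbering are arbitrary choices.

```python
# Toy multi-pass dictionary coder in the spirit of COMRAD: each pass
# counts fixed-length k-mers across the whole collection, assigns
# dictionary symbols to those that repeat, and rewrites the sequences.
from collections import Counter

def dictionary_pass(seqs, k, min_count, next_symbol):
    """One pass: count k-mers over the whole set, replace repeats."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[tuple(s[i:i + k])] += 1
    dictionary = {}
    for kmer, c in counts.most_common():
        if c < min_count:
            break
        dictionary[kmer] = next_symbol   # one new symbol per repeated k-mer
        next_symbol += 1
    rewritten = []
    for s in seqs:
        out, i = [], 0
        while i < len(s):
            key = tuple(s[i:i + k])
            if key in dictionary:
                out.append(dictionary[key]); i += k   # repeat -> one symbol
            else:
                out.append(s[i]); i += 1
        rewritten.append(out)
    return rewritten, dictionary, next_symbol

seqs = [list(b"ACGTACGTACGTTTTT"), list(b"ACGTACGTACGTAAAA")]
sym = 256
for _ in range(3):                       # a few passes, as in COMRAD
    seqs, d, sym = dictionary_pass(seqs, k=4, min_count=2, next_symbol=sym)
    if not d:
        break
print(seqs)
```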
Effect of sodium fluorosilicate on the properties of Portland cement.
Appelbaum, Keith S; Stewart, Jeffrey T; Hartwell, Gary R
2012-07-01
Mineral trioxide aggregate (MTA) satisfies most of the ideal properties of a surgical root-end filling and perforation repair material. It has been found to be nontoxic, noncarcinogenic, nongenotoxic, biocompatible, insoluble in tissue fluids, and dimensionally stable and promotes cementogenesis. The major disadvantages are its long setting time and difficult handling characteristics during placement when performing endodontic procedures. MTA is similar to Portland cement (PC) in both composition and properties. The cement industry has used many additives to decrease the setting time of PC. Proprietary formulas of PC additives include fluorosilicates, which decrease setting time. The purpose of this pilot study was to determine whether sodium fluorosilicate (SF) could be used to decrease the setting time without adversely affecting the compressive strength of PC. To determine the most appropriate amount of SF to add to PC to decrease its setting time, 1%, 2%, 3%, 4%, 5%, 10%, and 15% SF by weight were added to PC and compared with PC without SF. Setting times were measured by using a Gilmore needle, and compressive strengths were determined by using a materials testing system at 24 hours and 21 days. Statistical analysis was performed by using one-way analysis of variance with post hoc Games-Howell test. None of the percentages of SF were effective in changing the setting time of PC (P > .05), and the SF additives were found to decrease the compressive strength of PC (P < .001). On the basis of the conditions of this study, SF should not be used to decrease setting time and increase the compressive strength of PC and as such does not warrant further testing with MTA. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1997-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
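The double-difference operation itself is easy to state concretely. The following sketch (variable names and the recovery helper are illustrative, not taken from the patent text) forms the cross-delta between two correlated bands, takes an adjacent-delta over it, and shows that the second band is recoverable from the first band, the double-difference set, and the first cross-delta value.

```python
# Hedged sketch of double-difference pre-coding: an adjacent-delta taken
# over the cross-delta of two correlated data sets (e.g. adjacent
# spectral bands from two sources).
import numpy as np

def double_difference(band_a, band_b):
    cross = band_b - band_a        # cross-delta between the two sets
    return np.diff(cross)          # adjacent-delta of the cross-delta

def recover_band_b(band_a, dd, first_cross):
    cross = np.concatenate(([first_cross], first_cross + np.cumsum(dd)))
    return band_a + cross

a = np.array([10, 12, 15, 20], dtype=np.int64)
b = np.array([11, 14, 18, 24], dtype=np.int64)
dd = double_difference(a, b)       # small values suit entropy coding
assert np.array_equal(recover_band_b(a, dd, b[0] - a[0]), b)
```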
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. The author modified the algorithm to provide even better performance than SPIHT. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm and, in addition, reduces the number of bits in the stored or transmitted bit stream. The algorithm was applied to the compression of multichannel ECG data, together with a specific procedure, based on the modified algorithm, for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results in the compression of multichannel ECG data. Furthermore, the proposed multichannel compression method can also be used efficiently to compress a single signal that is stored for a long time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, Ward; Pearson, Mark A.; Maiti, Amitesh
Dow Corning SE 1700 (reinforced polydimethylsiloxane) porous structures were made by direct ink writing (DIW). The specimens (~50% porosity) were subjected to various compressive strains (15, 30, 45%) and temperatures (room temperature, 35, 50, 70°C) in a nitrogen atmosphere (active purge) for 1 year. Compression set and load retention of the aged specimens were measured periodically during the study. Compression set increased with strain and temperature. After 1 year, specimens aged at room temperature, 35, and 50°C showed ~10% compression set (relative to the applied compressive deflection), while those aged at 70°C showed 20-40%. Due to the increasing compression set, load retention decreased with temperature, ranging from ~90% at room temperature to ~60-80% at 70°C. Long-term compression set and load retention at room temperature were predicted by applying time-temperature superposition (TTS). The predictions show compression set relative to the compressive deflection will be ~10-15% with ~70-90% load retention after 50 years at 15-45% strain, suggesting the material will continue to be mechanically functional. Comparison of the results to previously acquired data for cellular (M97*, M9760, M9763) and RTV (S5370) silicone foams suggests that the SE 1700 DIW porous specimens are on par with, or outperform, the legacy foams.
Fixation of compressive deformation in wood by pre-steaming
M. Inoue; N. Sekino; T. Morooka; R.M. Rowell; M. Norimoto
2008-01-01
Wood block specimens pre-steamed at 120-220 °C for 5-20 min were compressed in the radial direction. The recovery of set decreased with increasing pre-steaming temperature and time. The reduction of set recovery correlated with the amount of weight loss in steaming irrespective of pre-steaming temperature and time. The weight loss for the highest level of...
Mathai, Jijo Pottackal; Appu, Sabarish
2015-01-01
Auditory neuropathy spectrum disorder (ANSD) is a form of sensorineural hearing loss that causes severe deficits in speech perception. The perceptual problems of individuals with ANSD are attributed to temporal processing impairment rather than to reduced audibility, which makes their rehabilitation with hearing aids difficult. Although hearing aids can restore audibility, the compression circuits in a hearing aid might distort the temporal modulations of speech, causing poor aided performance. Therefore, hearing aid settings that preserve the temporal modulations of speech might be an effective way to improve speech perception in ANSD. The purpose of the study was to investigate the perception of hearing aid-processed speech in individuals with late-onset ANSD. A repeated measures design was used to study the effect of various compression time settings on speech perception and perceived quality. Seventeen individuals with late-onset ANSD within the age range of 20-35 yr participated in the study. The word recognition scores (WRSs) and quality judgments of phonemically balanced words, processed using four different compression settings of a hearing aid (slow, medium, fast, and linear), were evaluated. The modulation spectra of hearing aid-processed stimuli were estimated to probe the effect of amplification on the temporal envelope of speech. Repeated measures analysis of variance and post hoc Bonferroni pairwise comparisons were used to analyze the word recognition performance and quality judgment. Perception was significantly higher for unprocessed stimuli than for all four hearing aid-processed stimuli. Even though perception of words processed using the slow compression time setting was significantly higher than with the fast one, the difference was only 4%, and there were no significant differences in perception between any other hearing aid-processed stimuli. Analysis of the temporal envelope of hearing aid-processed stimuli revealed minimal changes across the four hearing aid settings. In terms of quality, the highest number of individuals preferred stimuli processed using the slow compression time setting, followed by those who preferred the medium one; none of the individuals preferred the fast compression time setting. Analysis of quality judgment showed that the slow, medium, and linear settings received significantly higher preference scores than the fast compression setting. Individuals with ANSD showed no marked difference in perception of speech processed using the four different hearing aid settings. However, significantly higher preference, in terms of quality, was found for stimuli processed using the slow, medium, and linear settings over the fast one. Therefore, whenever hearing aids are recommended for ANSD, those having slow compression time settings or linear amplification may be chosen over fast (syllabic) compression. In addition, WRSs obtained using hearing aid-processed stimuli were markedly poorer than for unprocessed stimuli. This shows that processing of speech through hearing aids might cause a large reduction in performance in individuals with ANSD. However, further evaluation is needed using individually programmed hearing aids rather than hearing aid-processed stimuli. American Academy of Audiology.
NASA Astrophysics Data System (ADS)
Antoni, Herianto, Jason Ghorman; Anastasia, Evelin; Hardjito, Djwantoro
2017-09-01
Fly ash with high calcium oxide content, when used as the base material in geopolymer concrete, can cause flash setting or rapid hardening, although it may also increase the compressive strength of the geopolymer concrete. This rapid hardening causes problems if the geopolymer concrete is used in large-scale casting that requires a long setting time. CaO content can be indicated by the pH value of the fly ash, and higher pH correlates with more rapid setting of fly ash-based geopolymer. This study investigates the addition of acid solutions to reduce the initial pH of the fly ash and to prolong the setting time of the mixture. The acids used in this study are hydrochloric acid (HCl), sulfuric acid (H2SO4), nitric acid (HNO3) and acetic acid (CH3COOH). It was found that the addition of acid solution to fly ash was able to decrease its initial pH; however, the setting time of the geopolymer was not prolonged, and was even shorter than that of the control mixture. The influence of the acid type varies, depending on the fly ash properties. In addition, the use of acid solution in fly ash reduces the compressive strength of geopolymer mortar. It is concluded that the addition of acid solution cannot prolong the rapid hardening of high-calcium fly ash geopolymer, and it has an adverse effect on the compressive strength.
Real-time compression of raw computed tomography data: technology, architecture, and benefits
NASA Astrophysics Data System (ADS)
Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert
2009-02-01
Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT™ achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders, using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamp/sec using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits above lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
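Golomb-Rice coding, one of the baseline coders in this comparison, is simple enough to sketch. The snippet below is an illustrative Rice coder for signed prediction residuals, not the proprietary Prism CT algorithm; the zig-zag mapping and the parameter k = 2 are conventional choices, not values from the paper.

```python
# Illustrative Golomb-Rice coder: zig-zag map signed residuals to
# non-negative ints, then emit a unary quotient and k remainder bits.
def rice_encode(values, k):
    bits = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1   # zig-zag to non-negative
        q, r = u >> k, u & ((1 << k) - 1)
        bits.extend([1] * q + [0])            # unary-coded quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, k, count):
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:
            q += 1; pos += 1
        pos += 1                              # skip the terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]; pos += 1
        u = (q << k) | r
        out.append(u >> 1 if u % 2 == 0 else -((u + 1) >> 1))
    return out

residuals = [0, -1, 3, -2, 5]                 # e.g. prediction errors
assert rice_decode(rice_encode(residuals, k=2), 2, len(residuals)) == residuals
```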
Hang, X; Greenberg, N L; Shiota, T; Firstenberg, M S; Thomas, J D
2000-01-01
Real-time three-dimensional echocardiography has been introduced to provide improved quantification and description of cardiac function. Data compression is desired to allow efficient storage and improve data transmission. Previous work has suggested improved results utilizing wavelet transforms in the compression of medical data including 2D echocardiogram. Set partitioning in hierarchical trees (SPIHT) was extended to compress volumetric echocardiographic data by modifying the algorithm based on the three-dimensional wavelet packet transform. A compression ratio of at least 40:1 resulted in preserved image quality.
FRESCO: Referential compression of highly similar sequences.
Wandelt, Sebastian; Leser, Ulf
2013-01-01
In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
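The core of any referential scheme is a match-extend-mismatch loop against the reference. The sketch below shows that loop under simple assumptions (a first-occurrence k-mer index, greedy match extension, a (position, length, mismatch) triple format); it is a minimal illustration of referential compression, not the FRESCO code.

```python
# Minimal referential-compression sketch: store the target as
# (ref_pos, length, mismatch_char) triples against a k-mer index
# of the reference sequence.
def build_index(ref, k=8):
    idx = {}
    for i in range(len(ref) - k + 1):
        idx.setdefault(ref[i:i + k], i)   # keep first occurrence only
    return idx

def ref_compress(ref, target, k=8):
    idx = build_index(ref, k)
    entries, i = [], 0
    while i < len(target):
        pos = idx.get(target[i:i + k])
        if pos is None:                    # no seed match: emit a literal
            entries.append((None, 0, target[i])); i += 1
            continue
        length = k
        while (pos + length < len(ref) and i + length < len(target)
               and ref[pos + length] == target[i + length]):
            length += 1                    # extend the exact match
        mismatch = target[i + length] if i + length < len(target) else ""
        entries.append((pos, length, mismatch))
        i += length + 1
    return entries

def ref_decompress(ref, entries):
    out = []
    for pos, length, ch in entries:
        if pos is not None:
            out.append(ref[pos:pos + length])
        out.append(ch)
    return "".join(out)

ref = "ACGTACGTTTGACCA" * 4
tgt = ref[:20] + "G" + ref[22:]            # a near-identical "genome"
assert ref_decompress(ref, ref_compress(ref, tgt)) == tgt
```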
The Effect of Bagasse Ash on Fly Ash-Based Geopolymer Binder
NASA Astrophysics Data System (ADS)
Bayuaji, R.; Darmawan, M. S.; Husin, N. A.; Banugraha, R.; Alfi, M.; Abdullah, M. M. A. B.
2018-06-01
Geopolymer concrete is an environmentally friendly concrete. However, the geopolymer binder has a problem with setting time, mainly when the composition comprises high-calcium fly ash. This study utilized bagasse ash to improve the setting time of fly ash-based geopolymer binder. The bagasse ash was characterized by chemical and phase analysis, while its morphology was examined by scanning electron microscopy (SEM). The setting time and compressive strength tests followed ASTM C 191-04 and ASTM C39/C39M, respectively. The compressive strength of the samples was determined at 3, 28 and 56 days, and the results were compared with the requirements of the standards.
Use of borosilicate glass waste as a cement additive
NASA Astrophysics Data System (ADS)
Han, Weiwei; Sun, Tao; Li, Xinping; Sun, Mian; Lu, Yani
2016-08-01
Borosilicate glass waste is investigated as a cement additive in this paper to improve the properties of cement and concrete, such as setting time, compressive strength and radiation shielding. The results demonstrate that borosilicate glass is an effective additive, which not only improves the radiation shielding properties of cement paste, but also shows the effect of irradiation on the mechanical and optical properties: borosilicate glass can increase the compressive strength while making only a minor impact on the setting time and main mineralogical compositions of hydrated cement mixtures; and when the natural river sand in the mortar is replaced by borosilicate glass sand (in amounts from 0% to 22.2%), the compressive strength and the linear attenuation coefficient first increase and then decrease. When the glass waste content is 14.8%, the compressive strength is 43.2 MPa after 28 d and the linear attenuation coefficient is 0.2457 cm⁻¹ after 28 d, which is beneficial for the preparation of radiation shielding concrete with high performance.
Data Compression With Application to Geo-Location
2010-08-01
Geo-location in a wireless sensor network requires the estimation of time-difference-of-arrival (TDOA) parameters using data collected by a set of spatially separated sensors. Compressing the data that is shared among the sensors can provide tremendous savings in terms of energy and transmission latency. Traditional MSE and perceptual based data compression schemes fail to accurately capture the effects of compression on the TDOA estimation task; therefore, it is necessary to investigate compression algorithms suitable for TDOA parameter estimation. This thesis explores the…
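For context, TDOA estimation itself typically reduces to locating a cross-correlation peak, so a codec is judged by how well that peak survives compression. A minimal illustration (signal length, delay and noise level are arbitrary choices, not values from the thesis):

```python
# The delay between two sensors is the lag that maximizes their
# cross-correlation; compression must preserve this peak.
import numpy as np

rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
recv_a = sig
recv_b = np.roll(sig, 37) + 0.1 * rng.standard_normal(1024)  # delayed + noise

corr = np.correlate(recv_b, recv_a, mode="full")
lag = int(np.argmax(corr)) - (len(recv_a) - 1)
print(lag)   # expected: 37 samples
```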
Analysis of the operation of the SCD Response intermittent compression system.
Morris, R J; Griffiths, H; Woodcock, J P
2002-01-01
The work assessed the performance of the Kendall SCD Response intermittent pneumatic compression system for deep vein thrombosis prophylaxis, which claims to set its cycle according to the blood flow characteristics of individual patient limbs. A series of tests measured the system response in various situations, including application to the limbs of healthy volunteers and to false limbs. Practical experimentation and theoretical analysis were used to investigate influences on the system's functioning other than blood flow. The system tested did not seem to perform as claimed, being unable to distinguish between real and fake limbs. The intervals between compressions were set to times unrealistic for venous refill, with temperature changes in the cuff being the greatest influence on performance. Combining the functions of compression and the measurement of the effects of compression in the same air bladder makes temperature artefacts unavoidable and can cause significant errors in the inter-compression interval.
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
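The XOR-leading-zero idea can be made concrete with a few lines of bit manipulation. The sketch below illustrates the general principle only, not the paper's optimized compressor; the offset value is a made-up example of the shifting the authors optimize.

```python
# After an offset shift, consecutive unpredictable values tend to share
# high-order bits, so their XOR has a run of leading zeros that need
# not be stored.
import struct

def leading_zeros_after_xor(a, b, offset=0.0):
    ia = struct.unpack("<Q", struct.pack("<d", a + offset))[0]
    ib = struct.unpack("<Q", struct.pack("<d", b + offset))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

# A well-chosen offset can lengthen the shared bit prefix:
print(leading_zeros_after_xor(1.2345, 1.2346))
print(leading_zeros_after_xor(1.2345, 1.2346, offset=100.0))
```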
Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A
2018-06-01
Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables any fingerprint compression strategy to be implemented in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
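The unwind-then-SVD pipeline is straightforward to express with dense linear algebra. The sketch below uses synthetic low-rank data to show the mechanics; array shapes and the stand-in phase map are illustrative, and the rank of 6 only echoes the abstract's number of compressed frames — none of this is the paper's code.

```python
# Remove a per-voxel transceiver phase, then truncate the SVD of the
# (time x voxel) matrix to a few compressed time frames.
import numpy as np

rng = np.random.default_rng(1)
low_rank = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 2048))
voxel_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, 2048))
frames = low_rank * voxel_phase[None, :]          # (time, voxels)

unwound = frames * np.conj(voxel_phase)[None, :]  # phase unwinding
U, s, Vh = np.linalg.svd(unwound, full_matrices=False)
rank = 6                                          # compressed time frames
approx = U[:, :rank] @ np.diag(s[:rank]) @ Vh[:rank]
print(np.linalg.norm(approx - unwound) / np.linalg.norm(unwound))  # ~0
```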
NASA Astrophysics Data System (ADS)
Guo, Wenkang; Yin, Haibo; Wang, Shuyin; He, Zhifeng
2017-04-01
Through studies of setting time, cement mortar compressive strength and cement mortar compressive strength ratio, the influence of polycarboxylate-type super-plasticizers on the performance of alkali-free liquid accelerators in cement-based materials was investigated. The results showed that the compatibility of the super-plasticizers with the alkali-free liquid accelerators was excellent. However, the super-plasticizer dosage had a certain impact on the performance of the alkali-free liquid accelerators, as follows: 1) the setting time obtained with the alkali-free liquid accelerators was inversely proportional to the super-plasticizer dosage; 2) the influence of super-plasticizer dosage on the cement mortar compressive strength with alkali-free liquid accelerators was related to the type of accelerator, with an optimum super-plasticizer dosage for the 28-d cement mortar compressive strength; 3) the later-age cement mortar compressive strength with alkali-free liquid accelerators decreased with increasing super-plasticizer dosage. In the practical application of alkali-free liquid accelerators with super-plasticizers, the super-plasticizer dosage must therefore be determined by dosage optimization tests.
Bernardi, A; Bortoluzzi, E A; Felippe, W T; Felippe, M C S; Wan, W S; Teixeira, C S
2017-01-01
To evaluate nanoparticulate calcium carbonate (NPCC) using transmission electron microscopy and the effects of NPCC addition to MTA in regard to the setting time, dimensional change, compressive strength, solubility and pH. The experimental groups were G1 (MTA), G2 (MTA with 5% NPCC) and G3 (MTA with 10% NPCC). The tests followed ISO and ADA standards. The specimens in the dimensional change and compressive strength tests were measured immediately after setting, after 24 h and after 30 days. In the solubility test, rings filled with cement were weighed after setting and after 30 days. The pH was measured after 24 h and 30 days. The data were analysed with the ANOVA, Tukey's and Kruskal-Wallis tests (α = 5%). The setting time was reduced (P < 0.05) in samples from G2 and G3 compared to G1. After 24 h, the dimensional change was similar amongst the groups, and after 30 days, G2 was associated with less alteration than G1 and G3. There was a difference in the compressive strength (P < 0.001) after 24 h and 30 days (G1 > G2 > G3). The solubility test revealed a difference amongst the groups when the specimens were hydrated: G2 > G1 > G3 and dehydrated: G3 > G2 > G1. The pH of the groups was similar at 24 h with higher values in each group after 30 days (P < 0.05), and G2 and G3 had similar mean pH values but both were higher than G1. Nanoparticulate calcium carbonate had a cubic morphology with few impurities. The addition of nanoparticulate calcium carbonate to MTA accelerated the setting time, decreased compressive strength and, after 30 days, resulted in lower dimensional change (G2), higher solubility and a higher pH. © 2015 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Compressive Classification for TEM-EELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Weituo; Stevens, Andrew; Yang, Hao
Electron energy loss spectroscopy (EELS) is typically conducted in STEM mode with a spectrometer, or in TEM mode with energy selection. These methods produce a 3D data set (x, y, energy). Some compressive sensing [1,2] and inpainting [3,4,5] approaches have been proposed for recovering a full set of spectra from compressed measurements. In many cases the final form of the spectral data is an elemental map (an image with channels corresponding to elements). This means that most of the collected data is unused or summarized. We propose a method to directly recover the elemental map with reduced dose and acquisition time. We have designed a new computational TEM sensor for compressive classification [6,7] of energy loss spectra called TEM-EELS.
ERIC Educational Resources Information Center
Mizell, Al P.; And Others
Distance learning involves students and faculty engaged in interactive instructional settings when they are at different locations. Compressed video is the live transmission of two-way auditory and visual signals at the same time between sites at different locations. The use of compressed video has expanded in recent years, ranging from use by the…
2013-01-01
Bystander cardiopulmonary resuscitation (CPR) improves out-of-hospital cardiac arrest (OHCA) survival. In settings with prolonged ambulance response times, skilled bystanders may be even more crucial. In 2010, the American Heart Association (AHA) and the European Resuscitation Council (ERC) introduced compression-only CPR as an alternative to conventional bystander CPR under some circumstances. The purpose of this citation review and document analysis is to determine whether the evidentiary basis for the 2010 AHA and ERC guidelines attends to settings with prolonged ambulance response times or no formal ambulance dispatch services. Primary and secondary citations referring to epidemiological research comparing adult OHCA survival based on the type of bystander CPR were included in the analysis. Details extracted from the citations included a study description and primary outcome measure, the geographic location in which the study occurred, EMS response times, the role of dispatchers, and main findings and summary statistics regarding rates of survival among patients receiving no CPR, conventional CPR or compression-only CPR. The inclusion criteria were met by 10 studies, nine of which took place exclusively in urban settings. Ambulance dispatchers played an integral role in 7 studies. The cited studies suggest either no survival benefit or harm arising from compression-only CPR in settings with extended ambulance response times. The evidentiary basis for the 2010 AHA and ERC bystander CPR guidelines does not attend to settings without rapid ambulance response times or dispatch services. Standardized bystander CPR guidelines may require adaptation or reconsideration in these settings. PMID:23601200
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state of the art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
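The evaluation loop described here (compression ratio plus throughput per codec) is easy to reproduce with stdlib codecs standing in for Blosc and XZ Utils, and synthetic particle data standing in for simulation output. A minimal harness:

```python
# Compare lossless codecs on a block of float data by ratio and
# throughput; zlib/bz2/lzma are stand-ins for the codecs in the study.
import bz2, lzma, time, zlib
import numpy as np

particles = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)
raw = particles.tobytes()

for name, codec in [("zlib", zlib.compress), ("bz2", bz2.compress),
                    ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    comp = codec(raw)
    dt = time.perf_counter() - t0
    print(f"{name}: ratio {len(raw) / len(comp):.2f}, "
          f"throughput {len(raw) / dt / 1e6:.1f} MB/s")
```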
Compression set in gas-blown condensation-cured polysiloxane elastomers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Mogon; Chinn, Sarah; Maxwell, Robert S.
2010-12-01
Accelerated thermal ageing studies on foamed condensation-cured polysiloxane materials have been performed in support of life assessment and material replacement programmes. Two different types of filled hydrogen-blown and condensation-cured polysiloxane foams were tested: commercial (RTV S5370) and an in-house formulated polysiloxane elastomer (Silfoam). Compression set properties were investigated using thermomechanical analysis (TMA) and compared against two separate longer-term ageing trials carried out in air and in dry inert gas atmospheres using compression jigs. Isotherms measured from these studies were assessed using time-temperature (T/t) superposition. Acceleration factors were determined and fitted to Arrhenius kinetics. For both materials, the thermo-mechanical results were found to closely follow the longer-term accelerated ageing trials. Comparison of the accelerated ageing data in dry nitrogen atmospheres against field trial results showed that the accelerated ageing trends over-predict; however, the comparison is difficult as the field data suffer from significant component-to-component variability. Of the long-term ageing trials reported here, those carried out in air deviate more significantly from field trial data than those carried out in dry nitrogen atmospheres. For field-return samples, there is evidence for residual post-curing reactions influencing mechanical performance, which would accelerate compression set. Multiple-quantum NMR studies suggest that compression set is not associated with significant changes in net crosslink density, but that some degree of network rearrangement has occurred due to viscoelastic relaxation as well as bond breaking and forming processes, with possible post-curing reactions at early times.
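Acceleration factors fitted to Arrhenius kinetics take the standard form a_T = exp[(Ea/R)(1/T_use − 1/T_aged)]. A worked example, with an assumed activation energy rather than the value fitted in this study:

```python
# Arrhenius acceleration factor for time-temperature superposition.
# Ea_kJ_per_mol = 80 is an assumed activation energy, not a fitted value.
import math

def acceleration_factor(T_aged_C, T_use_C, Ea_kJ_per_mol=80.0):
    R = 8.314e-3                      # gas constant, kJ/(mol K)
    Ta, Tu = T_aged_C + 273.15, T_use_C + 273.15
    return math.exp((Ea_kJ_per_mol / R) * (1.0 / Tu - 1.0 / Ta))

# With these assumptions, 1 year at 70 C maps to roughly 70 years at 25 C:
print(acceleration_factor(70.0, 25.0))
```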
The physical properties of accelerated Portland cement for endodontic use.
Camilleri, J
2008-02-01
To investigate the physical properties of a novel accelerated Portland cement. The setting time, compressive strength, pH and solubility of white Portland cement (Lafarge Asland; CEM 1, 52.5 N) and accelerated Portland cement (Proto A) produced by excluding gypsum from the manufacturing process (Aalborg White) and a modified version with 4 : 1 addition of bismuth oxide (Proto B) were evaluated. Proto A set in 8 min. The compressive strength of Proto A was comparable with that of Portland cement at all testing periods (P > 0.05). Additions of bismuth oxide extended the setting time and reduced the compressive strength (P < 0.05). Both cements and storage solution were alkaline. All cements tested increased by >12% of their original weight after immersion in water for 1 day with no further absorption after 28 days. Addition of bismuth oxide increased the water uptake of the novel cement (P < 0.05). The setting time of Portland cement can be reduced by excluding the gypsum during the last stage of the manufacturing process without affecting its other properties. Addition of bismuth oxide affected the properties of the novel cement. Further investigation on the effect that bismuth oxide has on the properties of mineral trioxide aggregate is thus warranted.
System using data compression and hashing adapted for use for multimedia encryption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffland, Douglas R
2011-07-12
A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
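The three disclosed modules chain together naturally. A minimal sketch of that chain, with zlib and SHA-256 as illustrative stand-ins for whatever compression and hash functions an actual implementation of the patent would use:

```python
# Compress the media signal, select a subset of the compressed stream,
# and hash that subset into a keyword.
import hashlib, zlib

def keyword_from_media(media_bytes, select=slice(0, 64)):
    stream = zlib.compress(media_bytes)        # data compression module
    subset = stream[select]                    # data acquisition module
    return hashlib.sha256(subset).hexdigest()  # hashing module -> keyword

print(keyword_from_media(b"\x00\x01" * 4096))
```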
Effect of chemical admixtures on properties of high-calcium fly ash geopolymer
NASA Astrophysics Data System (ADS)
Rattanasak, Ubolluk; Pankhet, Kanokwan; Chindaprasirt, Prinya
2011-06-01
Owing to the high viscosity of sodium silicate solution, fly ash geopolymer has the problems of low workability and rapid setting. Therefore, the effect of chemical admixtures on the properties of fly ash geopolymer was studied in this paper to overcome the rapid set of the geopolymer. High-calcium fly ash and alkaline solution were used as starting materials to synthesize the geopolymer. Calcium chloride, calcium sulfate, sodium sulfate, and sucrose at dosages of 1 wt% and 2 wt% of fly ash were selected as admixtures, based on concrete knowledge, to improve the properties of the geopolymer. The setting time, compressive strength, and degree of reaction were recorded, and the microstructure was examined. The results show that calcium chloride significantly shortens both the initial and final setting times of the geopolymer paste, whereas sucrose significantly delays the final setting time. The degrees of reaction of fly ash in the geopolymer pastes with the admixtures are all higher than that of the control paste, which contributes to the obvious increases in compressive strength.
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
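For reference, the greedy family RMP belongs to can be summarized by orthogonal matching pursuit, which selects one correlation value per iteration. The sketch below implements that baseline only; RMP's reduced-set selection and pruning, which the paper defines, are not reproduced here.

```python
# Orthogonal matching pursuit: pick the best-correlated atom, re-solve a
# least-squares fit on the current support, repeat.
import numpy as np

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # best-correlated atom
        if idx not in support:
            support.append(idx)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8.0      # roughly unit-norm columns
x_true = np.zeros(256); x_true[[3, 77, 200]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.allclose(x_hat, x_true, atol=1e-8))  # exact recovery expected here
```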
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of image reconstruction suffers. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of an image effectively and provides a practical basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected from the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
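The proposed estimator reduces to a few array operations. A sketch under stated assumptions (scipy's dctn for the 2D DCT, an illustrative 0.99 energy threshold rather than a value from the paper):

```python
# Energy-normalize the 2D DCT coefficients, sort them in descending
# order, and count how many are needed to reach the energy threshold.
import numpy as np
from scipy.fft import dctn

def estimate_sparsity(image, energy_threshold=0.99):
    coeffs = dctn(image, norm="ortho")
    energy = np.sort((coeffs ** 2).ravel())[::-1]
    energy /= energy.sum()                       # energy normalization
    k = int(np.searchsorted(np.cumsum(energy), energy_threshold)) + 1
    return k / image.size                        # fraction of dominant coeffs

x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
smooth = np.cos(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * y)
print(estimate_sparsity(smooth))   # small fraction -> fewer measurements
```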
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of video reconstruction suffers. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of a video frame effectively and provides a practical basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected from the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of modulated lapped transform (MLT) and discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
A Lower Bound on Adiabatic Heating of Compressed Turbulence for Simulation and Model Validation
Davidovits, Seth; Fisch, Nathaniel J.
2017-03-31
The energy in turbulent flow can be amplified by compression, when the compression occurs on a timescale shorter than the turbulent dissipation time. This mechanism may play a part in sustaining turbulence in various astrophysical systems, including molecular clouds. The amount of turbulent amplification depends on the net effect of the compressive forcing and turbulent dissipation. By giving an argument for a bound on this dissipation, we give a lower bound for the scaling of the turbulent velocity with compression ratio in compressed turbulence. That is, turbulence undergoing compression will be enhanced at least as much as the bound given here, subject to a set of caveats that will be outlined. Used as a validation check, this lower bound suggests that some models of compressing astrophysical turbulence are too dissipative. As a result, the technique used highlights the relationship between compressed turbulence and decaying turbulence.
Experimental Study on Semi-Dry Flue Gas Desulfurization Ash Used in Steel Slag Composite Material
NASA Astrophysics Data System (ADS)
Lu, Lijun; Fang, Honghui
This article presents an experimental study on the use of desulfurization ash in a steel slag composite material, investigating the influence of desulfurization ash content on the setting time and strength of formula one and formula two mortar samples. For formula one, the study reached the following conclusions: (1) a setting time of more than 10 hours is required; (2) a desulfurization ash dosage of 1-2% is optimal, with flexural strength reduced by 10-23% and compressive strength reduced by 5.7-16.4%. The conclusions for formula two were: (1) when the dosage of desulfurization ash is within 5%, the setting time is within 10 hours; (2) when the dosage of desulfurization ash is 1-2%, the flexural strength is increased by 5-7% and the compressive strength is reduced by 1-2%. The results show that formula two is better.
A comparative study of SAR data compression schemes
NASA Technical Reports Server (NTRS)
Lambert-Nebout, C.; Besson, O.; Massonnet, D.; Rogron, B.
1994-01-01
The amount of data collected from spaceborne remote sensing has substantially increased in the last years. During the same time period, the ability to store or transmit data has not increased as quickly. At this time, there is a growing interest in developing compression schemes that could provide both higher compression ratios and lower encoding/decoding errors. In the case of the spaceborne Synthetic Aperture Radar (SAR) earth observation system developed by the French Space Agency (CNES), the volume of data to be processed will exceed both the on-board storage capacities and the telecommunication link. The objective of this paper is twofold: to present various compression schemes adapted to SAR data; and to define a set of evaluation criteria and compare the algorithms on SAR data. In this paper, we review two classical methods of SAR data compression and propose novel approaches based on Fourier transforms and spectrum coding.
Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D
2017-11-01
This work presents a data-driven method to simulate, in real-time, the biomechanical behavior of the breast tissues in some image-guided interventions such as biopsies or radiotherapy dose delivery as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations. Then, they were used to predict in real-time the deformation of the breast tissues during the compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models in the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real-time (<0.2 s). Copyright © 2017 Elsevier Ltd. All rights reserved.
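The training setup amounts to multi-output regression from compression state to nodal displacements precomputed by FE simulation. A hedged sketch with a toy deformation field standing in for the paper's FE data, using scikit-learn's ExtraTreesRegressor (the model family the authors found performed best):

```python
# Train a tree ensemble to map (rest position, plate displacement) to
# nodal displacement; the synthetic targets replace real FE results.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5000, 4))            # x, y, z, plate displacement
Y = np.stack([0.5 * X[:, 3] * (1 - X[:, 2]),     # toy deformation field
              0.1 * X[:, 3] * X[:, 0],
              -X[:, 3] * X[:, 2]], axis=1)

model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, Y)
pred = model.predict(X[:5])                      # fast enough for real-time use
print(np.abs(pred - Y[:5]).max())
```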
Chakraborty, Mousumi; Ridgway, Cathy; Bawuah, Prince; Markl, Daniel; Gane, Patrick A C; Ketolainen, Jarkko; Zeitler, J Axel; Peiponen, Kai-Erik
2017-06-15
The objective of this study is to propose a novel optical compressibility parameter for porous pharmaceutical tablets. This parameter is defined with the aid of the effective refractive index of a tablet that is obtained from non-destructive and contactless terahertz (THz) time-delay transmission measurement. The optical compressibility parameter of two training sets of pharmaceutical tablets with a priori known porosity and mass fraction of a drug was investigated. Both pharmaceutical sets were compressed with one of the most commonly used excipients, namely microcrystalline cellulose (MCC) and drug Indomethacin. The optical compressibility clearly correlates with the skeletal bulk modulus determined by mercury porosimetry and the recently proposed terahertz lumped structural parameter calculated from terahertz measurements. This lumped structural parameter can be used to analyse the pattern of arrangement of excipient and drug particles in porous pharmaceutical tablets. Therefore, we propose that the optical compressibility can serve as a quality parameter of a pharmaceutical tablet corresponding with the skeletal bulk modulus of the porous tablet, which is related to structural arrangement of the powder particles in the tablet. Copyright © 2017 Elsevier B.V. All rights reserved.
Analysis of Deformation and Equivalent Stress during Biomass Material Compression Molding
NASA Astrophysics Data System (ADS)
Xu, Guiying; Wei, Hetao; Zhang, Zhien; Yu, Shaohui; Wang, Congzhe; Huang, Guowen
2018-02-01
Ansys is adopted to analyze mold deformation and the stress field distribution during the compression of biomass under a pressure of 20 MPa. By means of unit selection, material property setting, mesh partition, contact pair establishment, load and constraint application, and solver setting, the stress and strain of the overall mold are analyzed, covering the deformation and equivalent stress of the compression structure, base, mold, and compression bar. The following conclusions are drawn: the stress on the compressor is not completely uniform, with slightly decreased stress at the base; the stress and strain of the compression bar are the largest, and stress concentration may occur at the top of the compression bar, which shortens its service life; the overall deformation of the main mold is small and, although there is a slight difference between the upper and lower parts, the overall variation is not obvious, yet the stress difference between the upper and lower parts of the main mold is extremely large, reaching 10 times; the stress and strain in the base decrease in a circular pattern, but there is still stress concentration at the ledge, which shortens service life; the contact stress is not uniformly distributed, showing increasing or decreasing trends between adjacent parts and becoming very large in some parts.
A Real-Time High Performance Data Compression Technique For Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.
Retained energy-based coding for EEG signals.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-09-01
The recent use of long-term records in electroencephalography is becoming more frequent due to its diagnostic potential and the growth of novel signal processing methods that deal with these types of recordings. In these cases, the considerable volume of data to be managed makes compression necessary to reduce the bit rate for transmission and storage applications. In this paper, a new compression algorithm specifically designed to encode electroencephalographic (EEG) signals is proposed. Cosine modulated filter banks are used to decompose the EEG signal into a set of subbands well adapted to the frequency bands characteristic of the EEG. Given that no regular pattern may be easily extracted from the signal in the time domain, a thresholding-based method is applied for quantizing samples. The retained-energy method is designed to efficiently compute the threshold in the decomposition domain, which, at the same time, allows the quality of the reconstructed EEG to be controlled. The experiments are conducted over a large set of signals taken from two public databases available at Physionet, and the results show that the compression scheme yields better compression than other reported methods. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
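The retained-energy rule can be written directly: sort the subband coefficients by magnitude and keep the smallest set whose cumulative energy reaches the target. A sketch under illustrative assumptions (Laplacian stand-in data, a 98% target; the filter-bank decomposition itself is omitted):

```python
# Pick the smallest magnitude threshold whose surviving coefficients
# retain a target fraction of the subband's energy.
import numpy as np

def retained_energy_threshold(coeffs, retain=0.98):
    mags = np.sort(np.abs(coeffs))[::-1]
    cum = np.cumsum(mags ** 2) / np.sum(mags ** 2)
    n_keep = int(np.searchsorted(cum, retain)) + 1
    return mags[n_keep - 1]                 # zero everything below this

rng = np.random.default_rng(0)
eeg_band = rng.laplace(scale=1.0, size=4096)     # stand-in subband samples
thr = retained_energy_threshold(eeg_band, retain=0.98)
kept = np.where(np.abs(eeg_band) >= thr, eeg_band, 0.0)
print((kept != 0).mean())                        # fraction of samples coded
```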
An Efficient, Lossless Database for Storing and Transmitting Medical Images
NASA Technical Reports Server (NTRS)
Fenstermacher, Marc J.
1998-01-01
This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
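The common thread of these methods is subtracting set-wide statistics before entropy coding. A minimal, MARS-flavored sketch (not the dissertation's code): compress per-image residuals against the per-pixel median of the set and compare against compressing the originals directly.

```python
# Subtract a per-pixel median "set image" from each member, then
# losslessly compress the small residuals instead of the originals.
import zlib
import numpy as np

rng = np.random.default_rng(0)
base = rng.integers(0, 255, size=(128, 128), dtype=np.int16)
images = [np.clip(base + rng.integers(-3, 4, base.shape), 0, 255)
          for _ in range(8)]                      # a set of similar images

median = np.median(np.stack(images), axis=0).astype(np.int16)
plain = sum(len(zlib.compress(im.astype(np.int16).tobytes())) for im in images)
resid = sum(len(zlib.compress((im - median).astype(np.int16).tobytes()))
            for im in images)
print(plain, resid)     # residuals compress far better than the originals
```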
Influence of compressibility on the Lagrangian statistics of vorticity-strain-rate interactions.
Danish, Mohammad; Sinha, Sawan Suman; Srinivasan, Balaji
2016-07-01
The objective of this study is to investigate the influence of compressibility on Lagrangian statistics of vorticity and strain-rate interactions. The Lagrangian statistics are extracted from "almost" time-continuous data sets of direct numerical simulations of compressible decaying isotropic turbulence by employing a cubic spline-based Lagrangian particle tracker. We study the influence of compressibility on Lagrangian statistics of alignment in terms of compressibility parameters-turbulent Mach number, normalized dilatation-rate, and flow topology. In comparison to incompressible turbulence, we observe that the presence of compressibility in a flow field weakens the alignment tendency of vorticity toward the largest strain-rate eigenvector. Based on the Lagrangian statistics of alignment conditioned on dilatation and topology, we find that the weakened tendency of alignment observed in compressible turbulence is because of a special group of fluid particles that have an initially negligible dilatation-rate and are associated with stable-focus-stretching topology.
On the mechanical characteristics of a self-setting calcium phosphate cement.
Bimis, A; Canal, L P; Karalekas, D; Botsis, J
2017-04-01
To perform a mechanical characterization of a self-setting calcium phosphate cement as a function of the immersion time in Ringer solution. Specimens of self-setting calcium phosphate cement were prepared from pure α-TCP powder. The residual strains developed during the hardening stage were monitored using an embedded fiber Bragg grating sensor. Additionally, the evolution of the elastic modulus was obtained for the same time period by conducting low-load indentation tests. Micro-computed tomography as well as microscope-assisted inspections were employed to evaluate the porosity of the specimens. Moreover, diametral compression tests were conducted on wet and dried specimens to characterize the material strength. The estimated pore volume and the absorbed fluid mass during the first few minutes of the material's exposure to a wet environment coincide. Immersion in Ringer solution led to a noticeable increase in the moduli values. The critical stress values obtained from the diametral compression tests were combined with data from uniaxial compression tests to suggest a Mohr-Coulomb failure criterion. This study presents different techniques to characterize a self-setting calcium phosphate cement and provides experimental data on porosity, mechanical properties and failure. The investigated material possessed an open porosity in its dried state with negligible residual strains, and its Young's modulus, obtained from micro-indentation tests, increased with hardening time. The failure loci may be described by a Mohr-Coulomb criterion, characteristic of soil and rock materials. Copyright © 2017 Elsevier Ltd. All rights reserved.
Learning random networks for compression of still and moving images
NASA Technical Reports Server (NTRS)
Gelenbe, Erol; Sungur, Mert; Cramer, Christopher
1994-01-01
Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
Stokes Profile Compression Applied to VSM Data
NASA Astrophysics Data System (ADS)
Toussaint, W. A.; Henney, C. J.; Harvey, J. W.
2012-02-01
The practical details of applying the Expansion in Hermite Functions (EHF) method to compression of full-disk full-Stokes solar spectroscopic data from the SOLIS/VSM instrument are discussed in this paper. The algorithm developed and discussed here preserves the 630.15 and 630.25 nm Fe i lines, along with the local continuum and telluric lines. This compression greatly reduces the amount of space required to store these data sets while maintaining the quality of the data, allowing these observations to be archived and made publicly available with limited bandwidth. Applying EHF to the full-Stokes profiles and saving the coefficient files with Rice compression reduces the disk space required to store these observations by a factor of 20 while maintaining data quality, with total compression time only 35% longer than standard gzip (GNU zip) compression.
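A minimal sketch of the projection step behind such an expansion, assuming NumPy and SciPy: a toy line profile is projected onto orthonormal Hermite functions and reconstructed from the leading coefficients. The profile, grid, and mode count are invented for illustration; the VSM pipeline details are not reproduced here.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

x = np.linspace(-6.0, 6.0, 501)
dx = x[1] - x[0]
profile = 1.0 - 0.8 * np.exp(-2.0 * x**2)          # toy absorption-line profile

def hermite_function(n, x):
    """Orthonormal Hermite function psi_n(x)."""
    norm = np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return eval_hermite(n, x) * np.exp(-x**2 / 2.0) / norm

n_modes = 20
basis = np.array([hermite_function(n, x) for n in range(n_modes)])
coeffs = (basis @ (profile - 1.0)) * dx            # project continuum-subtracted profile
recon = 1.0 + coeffs @ basis                       # rebuild from 20 numbers
print("max reconstruction error:", np.abs(recon - profile).max())
```

Only the coefficient vector would be stored, which is where the factor-of-20 size reduction comes from.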
A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.
Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P
2010-10-01
The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians are substantially different from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1-mm thickness slices of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.
High load operation in a homogeneous charge compression ignition engine
Duffy, Kevin P [Metamora, IL; Kieser, Andrew J [Morton, IL; Liechty, Michael P [Chillicothe, IL; Hardy, William L [Peoria, IL; Rodman, Anthony [Chillicothe, IL; Hergart, Carl-Anders [Peoria, IL
2008-12-23
A homogeneous charge compression ignition engine is set up by first identifying combinations of compression ratio and exhaust gas percentages for each speed and load across the engine's operating range. These identified ratios and exhaust gas percentages can then be converted into geometric compression ratio controller settings and exhaust gas recirculation rate controller settings that are mapped against speed and load, and made available to the electronic controller.
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
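The consolidation the abstract describes can be illustrated with a toy transform coder, assuming SciPy: a brick of the volume is transformed and uniformly quantized offline, and the decoder folds dequantization into the inverse transform by pre-scaling the coefficients. The block size and quantization step are arbitrary choices, not those of the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.random((8, 8, 8)).astype(np.float32)   # one brick of the volume

q_step = 0.05                                      # coarser step -> more compression
coeffs = dctn(block, norm="ortho")                 # offline, asymmetric encoder side
codes = np.round(coeffs / q_step)                  # uniform quantization (then entropy-coded)
# Decoder side: dequantization is folded into the inverse transform by
# pre-scaling the coefficients before the fast inverse DCT.
decoded = idctn(codes * q_step, norm="ortho")
print("RMS error:", np.sqrt(np.mean((decoded - block) ** 2)))
```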
NASA Technical Reports Server (NTRS)
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
NASA Astrophysics Data System (ADS)
Pokorný, Jaroslav; Pavlíková, Milena; Medved, Igor; Pavlík, Zbyšek; Zahálková, Jana; Rovnaníková, Pavla; Černý, Robert
2016-06-01
Active silica-containing materials in the sub-micrometer size range are commonly used to modify the strength parameters and durability of cement-based composites. In addition, these materials also help to accelerate cement hydration. In this paper, two types of diatomaceous earth are used as partial cement replacement in cement paste mixtures. For the raw binders, basic physical and chemical properties are studied. The chemical composition of the tested materials is determined using classical chemical analysis combined with the XRD method, which allows assessment of the amorphous SiO2 phase content. For all tested mixtures, initial and final setting times are measured. Basic physical and mechanical properties are measured on hardened paste samples cured for 28 days in water: bulk density, matrix density, total open porosity, and compressive and flexural strength. The relationship between compressive strength and total open porosity is studied using several empirical models. The obtained results give evidence of the high pozzolanic activity of the tested diatomaceous earths. Their application leads to an increase in both initial and final setting times, a decrease in compressive strength, and an increase in flexural strength.
Lee, Bor-Shiunn; Lin, Hong-Ping; Chan, Jerry Chun-Chung; Wang, Wei-Chuan; Hung, Ping-Hsuan; Tsai, Yu-Hsin; Lee, Yuan-Ling
2018-01-01
Mineral trioxide aggregate (MTA) is the most frequently used repair material in endodontics, but the long setting time and reduced mechanical strength in acidic environments are major shortcomings. In this study, a novel sol-gel-derived calcium silicate cement (sCSC) was developed using an initial Ca/Si molar ratio of 3, with the most effective mixing orders of reactants and optimal HNO3 catalyst volumes. A Fourier transform infrared spectrometer, scanning electron microscope with energy-dispersive X-ray spectroscopy, and X-ray powder diffractometer were used for material characterization. The setting time, compressive strength, and microhardness of sCSC after hydration in neutral and pH 5 environments were compared with that of MTA. Results showed that sCSC demonstrated porous microstructures with a setting time of ~30 min, and the major components of sCSC were tricalcium silicate, dicalcium silicate, and calcium oxide. The optimal formula of sCSC was sn200, which exhibited significantly higher compressive strength and microhardness than MTA, irrespective of neutral or pH 5 environments. In addition, both sn200 and MTA demonstrated good biocompatibility because cell viability was similar to that of the control. These findings suggest that sn200 merits further clinical study for potential application in endodontic repair of perforations. PMID:29386894
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
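A minimal sketch of an interframe linear predictor with quantized residuals, in the spirit of (but not identical to) the paper's scheme; the predictor, step size, and synthetic trajectory are illustrative. Note that the encoder tracks the decoder's reconstruction so quantization error cannot drift over frames.

```python
import numpy as np

def encode(frames, step=0.02):
    """Quantize residuals of a linear extrapolation predictor.
    The integer codes would then be passed to an entropy coder."""
    frames = np.asarray(frames, dtype=np.float64)
    codes = np.empty(frames.shape, dtype=np.int32)
    recon = np.empty_like(frames)
    for t in range(len(frames)):
        if t == 0:
            pred = np.zeros_like(frames[0])
        elif t == 1:
            pred = recon[0]
        else:
            pred = 2.0 * recon[t - 1] - recon[t - 2]   # linear extrapolation
        codes[t] = np.round((frames[t] - pred) / step)
        recon[t] = pred + codes[t] * step              # decoder's view; error <= step/2
    return codes

rng = np.random.default_rng(1)
traj = np.cumsum(0.01 * rng.standard_normal((100, 30, 3)), axis=0)  # smooth atom paths
print("max |code|:", np.abs(encode(traj)).max())       # small codes compress well
```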
NRGC: a novel referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2016-11-15
Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost effective but can also be done in a laboratory environment. The state-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting-edge computing technology makes it possible to build genomic sequences from the billions of reads at minimal cost and time. As a consequence, we see an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We are in need of suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable for compressing biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress genomic sequences. We have done rigorous experiments to evaluate NRGC by taking a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes. They can be downloaded from http://www.engr.uconn.edu/~rajasek/NRGC.zip. Contact: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Development of a Biodegradable Bone Cement for Craniofacial Applications
Henslee, Allan M.; Gwak, Dong-Ho; Mikos, Antonios G.; Kasper, F. Kurtis
2015-01-01
This study investigated the formulation of a two-component biodegradable bone cement comprising the unsaturated linear polyester macromer poly(propylene fumarate) (PPF) and crosslinked PPF microparticles for use in craniofacial bone repair applications. A full factorial design was employed to evaluate the effects of formulation parameters such as particle weight percentage, particle size, and accelerator concentration on the setting and mechanical properties of crosslinked composites. It was found that the addition of crosslinked microparticles to PPF macromer significantly reduced the temperature rise upon crosslinking, from a range of 100.3 ± 21.6 to 102.7 ± 49.3 °C for formulations without microparticles to a range of 28.0 ± 2.0 to 65.3 ± 17.5 °C for formulations with microparticles. The main effects of increasing the particle weight percentage from 25 to 50% were to significantly increase the compressive modulus by 37.7 ± 16.3 MPa, increase the compressive strength by 2.2 ± 0.5 MPa, decrease the maximum temperature by 9.5 ± 3.7 °C, and increase the setting time by 0.7 ± 0.3 min. Additionally, the main effects of increasing the particle size range from 0–150 μm to 150–300 μm were to significantly increase the compressive modulus by 31.2 ± 16.3 MPa and the compressive strength by 1.3 ± 0.5 MPa. However, the particle size range did not have a significant effect on the maximum temperature and setting time. Overall, the composites tested in this study were found to have properties suitable for further consideration in craniofacial bone repair applications. PMID:22499285
Havel, Christof; Schreiber, Wolfgang; Trimmel, Helmut; Malzer, Reinhard; Haugk, Moritz; Richling, Nina; Riedmüller, Eva; Sterz, Fritz; Herkner, Harald
2010-01-01
Automated verbal and visual feedback improves quality of resuscitation in out-of-hospital cardiac arrest and was proven to increase short-term survival. Quality of resuscitation may be hampered in more difficult situations like emergency transportation. Currently there is no evidence whether feedback devices can improve resuscitation quality during different modes of transportation. To assess the effect of real time automated feedback on the quality of resuscitation in an emergency transportation setting. Randomised cross-over trial. Medical University of Vienna, Vienna Municipal Ambulance Service and Helicopter Emergency Medical Service Unit (Christophorus Flugrettungsverein) in September 2007. European Resuscitation Council (ERC) certified health care professionals performing CPR in a flying helicopter and in a moving ambulance vehicle on a manikin with human-like chest properties. CPR sessions, with real time automated feedback as the intervention and standard CPR without feedback as control. Quality of chest compression during resuscitation. Feedback resulted in less deviation from the ideal compression rate of 100 min^-1 (9 ± 9 min^-1, p < 0.0001), with this effect becoming steadily larger over time. Applied work was less in the feedback group compared to controls (373 ± 448 cm × compressions; p < 0.001). Feedback did not influence ideal compression depth significantly. There was some indication of a learning effect of the feedback device. Real time automated feedback improves certain aspects of CPR quality in flying helicopters and moving ambulance vehicles. The effect of feedback guidance was most pronounced for chest compression rate. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Image processing using Gallium Arsenide (GaAs) technology
NASA Technical Reports Server (NTRS)
Miller, Warner H.
1989-01-01
The need for increased information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to increasing the efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state of the art of onboard processing was recognized, and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit slice general processor. The reason for choosing the compression technique for the Multi-spectral Linear Array (MLA) instrument is described. Also given is a description of the GaAs integrated circuit chip set, which demonstrates that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
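For illustration, a toy DPCM coder with a non-uniform quantizer is sketched below, assuming NumPy; the 3-bit level table and the fixed predictor seed are invented, not the flight design.

```python
import numpy as np

LEVELS = np.array([-24.0, -12.0, -5.0, -1.0, 1.0, 5.0, 12.0, 24.0])  # 3-bit, non-uniform

def dpcm_encode_row(row):
    """Predict each pixel by its reconstructed left neighbor; emit 3-bit codes."""
    codes = np.empty(row.size, dtype=np.uint8)
    recon = np.empty(row.size, dtype=np.float64)
    prev = 128.0                                   # fixed predictor seed
    for i, pix in enumerate(row.astype(np.float64)):
        codes[i] = np.argmin(np.abs(LEVELS - (pix - prev)))
        recon[i] = np.clip(prev + LEVELS[codes[i]], 0.0, 255.0)
        prev = recon[i]
    return codes, recon                            # 8 bits/pixel in, 3 bits/pixel out

row = (128 + 40 * np.sin(np.linspace(0.0, 6.0, 256))).astype(np.uint8)
codes, recon = dpcm_encode_row(row)
print("max abs error:", np.abs(recon - row).max())
```

The non-uniform spacing spends fine resolution near zero, where prediction residuals of smooth imagery concentrate.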
High-speed and high-ratio referential genome compression.
Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan
2017-11-01
The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 217 to 82 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust to deal with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genome Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use. It can be downloaded from https://github.com/yuansliu/HiRGC. Contact: jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
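The 2-bit encoding layer mentioned above is easy to illustrate; the sketch below (a hypothetical helper, assuming NumPy) packs four bases per byte. HiRGC's main gains come from the greedy hash-table matching against a reference genome, which is not reproduced here.

```python
import numpy as np

_CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_2bit(seq):
    """Pack an ACGT string four bases per byte (2 bits per base)."""
    vals = np.array([_CODE[b] for b in seq], dtype=np.uint8)
    vals = np.concatenate([vals, np.zeros((-len(vals)) % 4, dtype=np.uint8)])  # pad
    quads = vals.reshape(-1, 4)
    packed = quads[:, 0] << 6 | quads[:, 1] << 4 | quads[:, 2] << 2 | quads[:, 3]
    return packed.tobytes(), len(seq)

packed, n_bases = pack_2bit("ACGTACGTAC")
print(len(packed), "bytes for", n_bases, "bases")   # 3 bytes for 10 bases
```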
Fast and efficient compression of floating-point data.
Lindstrom, Peter; Isenburg, Martin
2006-01-01
Large-scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data needed to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
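A minimal sketch of the prediction-plus-residual idea for floating-point streams: predict each value by its predecessor, XOR the raw bit patterns, and entropy-code the residuals. Here zlib stands in for the paper's improved entropy coder, and the simple previous-value predictor stands in for its data-dependent predictors.

```python
import zlib
import numpy as np

def compress_floats(values):
    """Previous-value prediction: XOR-ing each double's bit pattern with its
    predecessor's leaves mostly leading zero bits on smooth data, which a
    general-purpose coder then squeezes well.  Decoding reverses the XOR chain."""
    bits = np.ascontiguousarray(values, dtype=np.float64).view(np.uint64)
    resid = bits.copy()
    resid[1:] ^= bits[:-1]                  # residual vs. previous value
    return zlib.compress(resid.tobytes(), 9)

smooth = np.sin(np.linspace(0.0, 10.0, 100_000))
ratio = len(compress_floats(smooth)) / smooth.nbytes
print(f"compressed to {ratio:.1%} of original size")
```

Because the residuals keep the exact bit patterns, the scheme is lossless, unlike quantizing onto an integer grid.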
Wavelet data compression for archiving high-resolution icosahedral model data
NASA Astrophysics Data System (ADS)
Wang, N.; Bao, J.; Lee, J.
2011-12-01
With the increase of the resolution of global circulation models, it becomes ever more important to develop highly effective solutions to archive the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it can only achieve limited reduction of data size. Wavelet transform based data compression offers significant potential for data size reduction, and it has been shown very effective in transmitting data for remote visualizations. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on the datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments for both summer and winter seasons. An icosahedral grid weather model and highly efficient wavelet data compression software were used for this study. Initial conditions were compressed and input to the model, which was run out to 10 days. The forecast results were then compared to the forecast results from the model run with the original uncompressed initial conditions. Several visual comparisons, as well as statistics of the numerical comparisons, are presented. These results indicate that, with specified minimum accuracy losses, wavelet data compression achieves significant data size reduction while maintaining minimal numerical impact on the datasets. In addition, some issues are discussed to increase archive efficiency while retaining a complete set of metadata for each archived file.
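A sketch of threshold-based wavelet compression of a 2D field, assuming the PyWavelets package as a stand-in for the study's compression software; the wavelet, decomposition level, and tolerance are illustrative.

```python
import numpy as np
import pywt

x, y = np.meshgrid(np.linspace(0, 4, 256), np.linspace(0, 4, 256))
field = np.sin(x) * np.cos(y)                       # toy initial-condition field

coeffs = pywt.wavedec2(field, "db4", level=4)       # 2D multilevel transform
arr, slices = pywt.coeffs_to_array(coeffs)
tol = 1e-4 * np.abs(arr).max()
arr[np.abs(arr) < tol] = 0.0                        # drop negligible coefficients
kept = np.count_nonzero(arr) / arr.size             # sparse array -> small archive
back = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                     "db4")[:256, :256]
print(f"kept {kept:.1%} of coefficients, max error {np.abs(back - field).max():.2e}")
```

The tolerance plays the role of the "specified minimum accuracy loss": tightening it keeps more coefficients and shrinks the error bound.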
Imaging industry expectations for compressed sensing in MRI
NASA Astrophysics Data System (ADS)
King, Kevin F.; Kanwischer, Adriana; Peters, Rob
2015-09-01
Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm concerns include the decision of which algorithms to implement as well as the problem of optimal setting of adjustable parameters. It will take imaging vendors several years to work through these challenges and provide solutions for a wide range of applications.
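The nonlinear reconstruction referred to above can be sketched with a toy iterative soft-thresholding (ISTA) recovery, assuming NumPy. A Gaussian matrix stands in for incoherent k-space sampling and the signal is sparse in the identity basis; clinical reconstructions use Fourier sampling with wavelet or total-variation sparsity, so this is schematic only.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 400, 120, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse truth

A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for incoherent sampling
y = A @ x_true                                  # undersampled measurements

L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    g = x + (A.T @ (y - A @ x)) / L             # gradient step on the data term
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # sparsity-promoting shrink
print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Iteration counts like this are why reconstruction time, flagged above as a workflow constraint, is a central engineering concern.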
Mechanisms of anomalous compressibility of vitreous silica
NASA Astrophysics Data System (ADS)
Clark, Alisha N.; Lesher, Charles E.; Jacobsen, Steven D.; Sen, Sabyasachi
2014-11-01
The anomalous compressibility of vitreous silica has been known for nearly a century, but the mechanisms responsible for it remain poorly understood. Using GHz-ultrasonic interferometry, we measured longitudinal and transverse acoustic wave travel times at pressures up to 5 GPa in vitreous silica with fictive temperatures (Tf) ranging between 985 °C and 1500 °C. The maximum in ultrasonic wave travel times, corresponding to a minimum in acoustic velocities, shifts to higher pressure with increasing Tf for both acoustic waves, with complete reversibility below 5 GPa. These relationships reflect polyamorphism in the supercooled liquid, which results in a glassy state possessing different proportions of domains of high- and low-density amorphous phases (HDA and LDA, respectively). The relative proportion of HDA and LDA is set at Tf and remains fixed on compression below the permanent densification pressure. The bulk material exhibits compression behavior systematically dependent on synthesis conditions that arises from the presence of floppy modes in a mixture of HDA and LDA domains.
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob
2013-11-01
Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
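A compact sketch of the first compression stage, Prony's method for fitting an ARMA filter to a measured impulse response, assuming NumPy and SciPy; the filter orders and the synthetic test response are illustrative, and the Legendre-polynomial stage is omitted.

```python
import numpy as np
from scipy.signal import lfilter

def prony(h, p, q):
    """Fit an ARMA filter (numerator order q, denominator order p) to impulse
    response h.  Requires q >= p so every slice below stays in range."""
    h = np.asarray(h, dtype=np.float64)
    # Linear-prediction step: h[n] ~= -sum_k a_k h[n-k] for n > q
    H = np.array([h[n - 1: n - p - 1: -1] for n in range(q + 1, len(h))])
    a = np.concatenate([[1.0], np.linalg.lstsq(H, -h[q + 1:], rcond=None)[0]])
    b = np.convolve(a, h)[: q + 1]          # numerator from truncated convolution
    return b, a

imp = np.zeros(64)
imp[0] = 1.0
h = lfilter([1.0, 0.5], [1.0, -0.9], imp)   # stand-in for a measured HRIR
b, a = prony(h, p=4, q=4)
print("max fit error:", np.abs(lfilter(b, a, imp) - h).max())
```

Storing only (b, a) instead of the full impulse response is the first source of the large compression ratios reported.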
Tanner, Timo; Antikainen, Osmo; Ehlers, Henrik; Yliruusi, Jouko
2017-06-30
Modern tableting machines produce large numbers of tablets at high output. Consequently, methods to examine powder compression in a high-velocity setting are in demand. In the present study, a novel gravitation-based method was developed to examine powder compression. A steel bar is dropped on a punch to compress microcrystalline cellulose and starch samples inside the die. The position of the bar is read by a high-accuracy laser displacement sensor, which provides a reliable distance-time plot of the bar movement. The in-die height and density of the compact can be seen directly from this data, which can be examined further to obtain information on velocity, acceleration and energy distribution during compression. The energy consumed in compact formation could also be seen. Despite the high vertical compression speed, the method proved to be cost-efficient, accurate and reproducible. Copyright © 2017 Elsevier B.V. All rights reserved.
Jo, Choong Hyun; Cho, Gyu Chong; Lee, Chang Hee
2017-07-01
The purpose of this study was to determine if the over-the-head 2-thumb encircling technique (OTTT) provides better overall quality of cardiopulmonary resuscitation compared with the conventional 2-finger technique (TFT) for a lone rescuer in the setting of infant cardiac arrest in an ambulance. Fifty medical emergency service students were voluntarily recruited to perform lone rescuer infant cardiopulmonary resuscitation for 2 minutes on a manikin simulating a 3-month-old baby in an ambulance. Participants who performed OTTT sat over the head of the manikin to compress the chest using a 2-thumb encircling technique and provide bag-valve mask ventilations, whereas those who performed TFT sat at the side of the manikin to compress using 2 fingers and provide pocket-mask ventilations. Mean hands-off time was not significantly different between OTTT and TFT (7.6 ± 1.1 seconds vs 7.9 ± 1.3 seconds, P = 0.885). The over-the-head 2-thumb encircling technique resulted in a greater depth of compression (42.6 ± 1.4 mm vs 41.0 ± 1.4 mm, P < 0.001) and a faster rate of compressions (114.4 ± 8.0 per minute vs 112.2 ± 8.2 per minute, P = 0.019) than TFT. It also resulted in a smaller fatigue score than TFT (1.7 ± 1.5 vs 2.5 ± 1.6, P < 0.001). In addition, subjects reported that compression, ventilation, and changing from compression to ventilation were easier with OTTT than with TFT. The use of OTTT may be a suitable alternative to TFT in the setting of cardiac arrest of infants during ambulance transfer.
The Cosmology of William Herschel
NASA Astrophysics Data System (ADS)
Hoskin, M.
2009-08-01
William Herschel was an amateur astronomer for half his life, until his discovery of Uranus earned him a royal pension. He then set himself to study "the construction of the heavens" with great reflectors, and discovered over 2,500 nebulae and star clusters. Clusters had clearly formed by the action of gravity, and so scattered clusters would in time become ever more compressed: scattered clusters were young, compressed clusters old. This marked the end of the 'clockwork' universe of Newton and Leibniz.
Simplified dispatch-assisted CPR instructions outperform standard protocol.
Dias, J A; Brown, T B; Saini, D; Shah, R C; Cofield, S S; Waterbor, J W; Funkhouser, E; Terndrup, T E
2007-01-01
Dispatch-assisted chest compressions only CPR (CC-CPR) has gained widespread acceptance, and recent research suggests that increasing the proportion of compression time during CPR may increase survival from out-of-hospital cardiac arrest. We created a simplified CC-CPR protocol to reduce the time to start chest compressions and to increase the proportion of time spent delivering chest compressions. This simplified protocol was compared to a published protocol, Medical Priority Dispatch System (MPDS) Version 11.2, recommended by the National Academies of Emergency Dispatch. Subjects were randomized to the MPDS v11.2 protocol or a simplified protocol. Data were recorded from a Laerdal Resusci Anne Skillreporter manikin. A simulated emergency medical dispatcher, contacted by cell phone, delivered standardized instructions for both protocols. Outcomes included chest compression rate, depth, hand position, full release, overall proportion of compressions without error, time to start of CPR and total hands-off chest time. Proportions were analyzed by Wilcoxon's Rank Sum tests and time variables with Welch ANOVA and Wilcoxon's Rank Sum test. All tests used a two-sided alpha-level of 0.05. One hundred and seventeen subjects were randomized prospectively, 58 to the standard protocol and 59 to the simplified protocol. The average age of subjects in both groups was 25 years. For both groups, the compression rate was equivalent (104 simplified versus 94 MPDS, p = 0.13), as was the proportion with total release (1.0 simplified versus 1.0 MPDS, p = 0.09). The proportion at the correct depth was greater in the simplified protocol (0.31 versus 0.03, p < 0.01), as was the proportion of compressions done without error (0.05 versus 0.0, p = 0.16). Time to start of chest compressions and total hands-off chest time were better in the simplified protocol (start time 60.9 s versus 78.6 s, p < 0.0001; hands-off chest time 69 s versus 95 s, p < 0.0001). The proportion with correct hand position, however, was worse in the simplified protocol (0.35 versus 0.84, p < 0.01). The simplified protocol was as good as, or better than, the MPDS v11.2 protocol in every aspect studied except hand position, and the simplified protocol resulted in significant time savings. The protocol may need modification to ensure correct hand position. Time savings and improved quality of CPR achieved by the new set of instructions could be important in strengthening critical links in the cardiac chain of survival.
The compression–error trade-off for large gridded data sets
Silver, Jeremy D.; Zender, Charles S.
2017-01-27
The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
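The layer-packing idea is straightforward to sketch in NumPy: store one (scale, offset) pair per layer along the strongly varying axis rather than a single scalar pair. The function names and synthetic field below are illustrative; in the netCDF-4 pipeline the lossless deflate/shuffle stage would follow.

```python
import numpy as np

def layer_pack(field, axis=0, nbits=16):
    """Linear packing with one (scale, offset) pair per layer along `axis`."""
    field = np.moveaxis(np.asarray(field, dtype=np.float64), axis, 0)
    other = tuple(range(1, field.ndim))
    lo = field.min(axis=other, keepdims=True)
    hi = field.max(axis=other, keepdims=True)
    scale = (hi - lo) / (2**nbits - 1)
    scale[scale == 0] = 1.0                        # guard against constant layers
    packed = np.round((field - lo) / scale).astype(np.uint16)
    return packed, scale, lo

def layer_unpack(packed, scale, lo):
    return packed * scale + lo

# A field varying by orders of magnitude vertically (e.g. pressure levels):
z = np.logspace(5, 2, 40)[:, None, None]
field = z * (1 + 0.01 * np.random.default_rng(2).random((40, 90, 180)))
packed, scale, lo = layer_pack(field)
err = np.abs(layer_unpack(packed, scale, lo) - field) / field
print("max relative error:", err.max())
```

With a single scalar pair, the quantization step would be set by the largest layer and would wipe out the small-magnitude layers; per-layer parameters avoid exactly that loss.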
Compression of regions in the global advanced very high resolution radiometer 1-km data set
NASA Technical Reports Server (NTRS)
Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.
1994-01-01
The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas, which are identical in each band but vary between data sets; they comprise over 75 percent of this 9.7-gigabyte image. The mask is compressed once and stored separately from the land data, which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.
Generation new MP3 data set after compression
NASA Astrophysics Data System (ADS)
Atoum, Mohammed Salem; Almahameed, Mohammad
2016-02-01
The success of audio steganography techniques depends on ensuring the imperceptibility of the embedded secret message in the stego file and on withstanding any form of intentional or unintentional degradation of the secret message (robustness). Crucial to this is the choice of digital audio file, such as MP3, which comes at different compression rates; research studies have shown that performing steganography on MP3 files after compression is the most suitable approach. Unfortunately, researchers have so far been unable to test and implement their algorithms because no standard data set of MP3 files after compression has been generated. This paper therefore focuses on generating a standard data set with different compression ratios and different genres to help researchers implement their algorithms.
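A hypothetical batch script for building such a data set: re-encode source WAV files at several MP3 bitrates with ffmpeg (assumes ffmpeg is installed; the paths and bitrate list are placeholders, not the authors' actual pipeline).

```python
import subprocess
from pathlib import Path

BITRATES = ["64k", "128k", "192k", "320k"]          # compression-rate variants

for wav in Path("sources").glob("*.wav"):           # one folder per genre works too
    for rate in BITRATES:
        out = Path("dataset") / rate / (wav.stem + ".mp3")
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(wav), "-b:a", rate, str(out)],
            check=True,
        )
```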
Lossless Astronomical Image Compression and the Effects of Random Noise
NASA Technical Reports Server (NTRS)
Pence, William
2009-01-01
In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
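The noise-measurement idea can be sketched as follows, assuming NumPy; the adjacent-pixel-difference estimator and the log2(sigma*sqrt(12)) conversion to equivalent noise bits follow the spirit of the paper, though the published procedure may differ in detail.

```python
import numpy as np

def noise_sigma(img):
    """Robust noise estimate from adjacent-pixel differences: differences of
    Gaussian noise have std sigma*sqrt(2), and the MAD scales to std by ~1.4826."""
    d = np.diff(np.asarray(img, dtype=np.float64), axis=1)
    return 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2.0)

rng = np.random.default_rng(3)
img = 1000.0 + 5.0 * rng.standard_normal((512, 512))     # true sigma = 5 ADU
sigma = noise_sigma(img)
noise_bits = np.log2(sigma * np.sqrt(12.0))              # equivalent noise bits/pixel
print(f"sigma ~ {sigma:.2f} ADU, ~{noise_bits:.1f} incompressible bits/pixel")
```

Since the noise bits are incompressible, the predicted lossless ratio is roughly the pixel bit depth divided by the noise bits plus a small overhead.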
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, which saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method of the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method of the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
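A minimal sketch of the regression step for one image and one codec, with JPEG via Pillow as a stand-in: PSNR is measured at several quality settings, a simple polynomial model is fitted, and the setting needed for a target PSNR is read off the model. The test image and model order are invented for illustration.

```python
import io
import numpy as np
from PIL import Image

img = Image.fromarray(
    (np.random.default_rng(4).random((256, 256)) * 255).astype(np.uint8))

def psnr_at_quality(im, q):
    buf = io.BytesIO()
    im.save(buf, format="JPEG", quality=q)      # compress at this setting
    buf.seek(0)
    dec = np.asarray(Image.open(buf), dtype=np.float64)
    mse = np.mean((dec - np.asarray(im, dtype=np.float64)) ** 2)
    return 10.0 * np.log10(255.0**2 / mse)

qualities = np.arange(10, 100, 10)
psnrs = np.array([psnr_at_quality(img, int(q)) for q in qualities])
model = np.polyfit(psnrs, qualities, 2)         # regress setting against IQ metric
target = 30.0                                    # desired PSNR in dB
print("suggested JPEG quality:", int(np.clip(np.polyval(model, target), 1, 95)))
```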
Luo, Huan-Lin; Lin, Deng-Fong; Chen, Shih-Chieh
2017-07-01
In this study, geopolymer specimens based on calcined oil-contaminated clays (OCCs), metakaolin replacements of OCCs, and blast furnace slag were manufactured with the addition of nano-SiO2 to improve their properties. The effects of adding 0, 1, 2, or 3% nano-SiO2 on the properties and microstructures of the geopolymer specimens were determined using compressive strength tests, flow tests, setting time tests, scanning electron microscopy (SEM), and silicon nuclear magnetic resonance spectroscopy (Si-NMR). The results showed that the setting time and flowability of the geopolymer specimens decreased and the compressive strength increased as the amount of nano-SiO2 increased. These results were supported by the SEM and Si-NMR assays. This study suggests that the addition of nano-SiO2 was beneficial and improved the properties of the geopolymer specimens containing calcined OCC.
Development of real time abdominal compression force monitoring and visual biofeedback system
NASA Astrophysics Data System (ADS)
Kim, Tae-Ho; Kim, Siyong; Kim, Dong-Su; Kang, Seong-Hee; Cho, Min-Seok; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Suh, Tae-Suk
2018-03-01
In this study, we developed and evaluated a system that could monitor abdominal compression force (ACF) in real time and provide a surrogate signal, even under abdominal compression. The system could also provide visual biofeedback (VBF). The real-time ACF monitoring system developed consists of an abdominal compression device, an ACF monitoring unit and a control system including an in-house ACF management program. We anticipated that ACF variation information caused by respiratory abdominal motion could be used as a respiratory surrogate signal. Four volunteers participated in this test to obtain correlation coefficients between ACF variation and tidal volumes. A simulation study with another group of six volunteers was performed to evaluate the feasibility of the proposed system. In the simulation, we investigated the reproducibility of the compression setup and proposed a further enhanced shallow breathing (ESB) technique using VBF by intentionally reducing the amplitude of the breathing range under abdominal compression. The correlation coefficient between the ACF variation caused by the respiratory abdominal motion and the tidal volume signal was evaluated for each volunteer, and R^2 values ranged from 0.79 to 0.84. The ACF variation was similar to a respiratory pattern, and slight variations of ACF ranges were observed among sessions. About 73-77% average ACF control rate (i.e. compliance) over five trials was observed in all volunteer subjects except one (64%) when there was no VBF. The targeted ACF range was intentionally reduced to achieve ESB for the VBF simulation. With VBF, in spite of the reduced target range, the overall ACF control rate improved by about 20% in all volunteers except one (4%), demonstrating the effectiveness of VBF. The developed monitoring system could help reduce the inter-fraction ACF setup error and the intra-fraction ACF variation. With the capability of providing a real-time surrogate signal and VBF under compression, it could improve the quality of respiratory tumor motion management in abdominal compression radiation therapy.
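The surrogate evaluation reduces to a correlation between the ACF trace and the tidal-volume trace; a toy check on synthetic signals (all sampling rates, amplitudes, and phases invented) might look like this:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 60.0, 1500)                     # one minute at 25 Hz
tidal_volume = 0.5 * np.sin(2 * np.pi * t / 4.0)     # ~15 breaths per minute
acf = 30.0 + 8.0 * np.sin(2 * np.pi * t / 4.0 + 0.2) + rng.normal(0.0, 0.5, t.size)

r = np.corrcoef(acf, tidal_volume)[0, 1]             # Pearson correlation
print(f"R^2 = {r**2:.2f}")                           # the paper reports 0.79-0.84
```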
Prediction of compression-induced image interpretability degradation
NASA Astrophysics Data System (ADS)
Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen
2018-04-01
Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.
Compressing Aviation Data in XML Format
NASA Technical Reports Server (NTRS)
Patel, Hemil; Lau, Derek; Kulkarni, Deepak
2003-01-01
Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data used. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and what the optimal settings are to use with an XML compression tool.
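A sketch of the advisory idea using Python's standard-library codecs as stand-ins (Xmill and XMLPPM are external tools not reproduced here): try each candidate compressor and setting on a sample file and report the best. The file name is hypothetical.

```python
import bz2
import lzma
import zlib
from pathlib import Path

sample = Path("sample.xml").read_bytes()        # hypothetical sample data file

candidates = {
    "zlib-6": lambda d: zlib.compress(d, 6),
    "zlib-9": lambda d: zlib.compress(d, 9),
    "bz2-9": lambda d: bz2.compress(d, 9),
    "lzma": lambda d: lzma.compress(d),
}
sizes = {name: len(fn(sample)) for name, fn in candidates.items()}
best = min(sizes, key=sizes.get)
print(f"recommend {best}: {sizes[best] / len(sample):.1%} of original size")
```

An advisory run over representative samples amortizes the weeks of manual experimentation the abstract describes into a single automated sweep.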
Objective assessment of MPEG-2 video quality
NASA Astrophysics Data System (ADS)
Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano
2002-07-01
The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.
Influence of rate of force application during compression on tablet capping.
Sarkar, Srimanta; Ooi, Shing Ming; Liew, Celine Valeria; Heng, Paul Wan Sia
2015-04-01
Root cause and possible processing remediation of tablet capping were investigated using a specially designed tablet press with an air compensator installed above the precompression roll to limit compression force and allow extended dwell time in the precompression event. Using acetaminophen-starch (77.9:22.1) as a model formulation, tablets were prepared by various combinations of precompression and main compression forces, set precompression thickness, and turret speed. The rate of force application (RFA) was the main factor contributing to the tablet mechanical strength and capping. When the target force was above the force required for strong interparticulate bond formation, the resultant high RFA contributed to more pronounced air entrapment, uneven force distribution, and consequently, stratified densification in the compact, together with high viscoelastic recovery. These factors collectively contributed to tablet capping. As extended dwell time assisted particle rearrangement and air escape, a denser and more homogeneous packing in the die could be achieved. Applying a low precompression force with extended dwell time, followed by the main compression force for strong interparticulate bond formation, was the most beneficial option for solving the capping problem. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
Viapiana, R; Flumignan, D L; Guerreiro-Tanomaru, J M; Camilleri, J; Tanomaru-Filho, M
2014-05-01
To evaluate the physicochemical and mechanical properties of Portland cement-based experimental sealers (ES) with different radiopacifying agents (zirconium oxide and niobium oxide micro- and nanoparticles) in comparison with the following conventional sealers: AH Plus, MTA Fillapex and Sealapex. The materials were tested for setting time, compressive strength, flow, film thickness, radiopacity, solubility, dimensional stability and formaldehyde release. Data were subjected to ANOVA and Tukey tests (P < 0.05). MTA Fillapex had the shortest setting time and lowest compressive strength values (P < 0.05) compared with the other materials. The ES had flow values similar to the conventional materials, but higher film thickness (P < 0.05) and lower radiopacity (P < 0.05). Similarly to AH Plus, the ES were associated with dimensional expansion (P > 0.05) and lower solubility when compared with MTA Fillapex and Sealapex (P < 0.05). None of the endodontic sealers evaluated released formaldehyde after mixing. With the exception of radiopacity, the Portland cement-based experimental endodontic sealers presented physicochemical properties in accordance with ANSI/ADA specification No. 57 (ADA Professional Product Review, 2008) and ISO 6876 (Dentistry - Root Canal Sealing Materials, 2012, British Standards Institution, London, UK). The sealers had setting times and flowability that were adequate for clinical use, satisfactory compressive strength and low solubility. Additional studies should be carried out with the purpose of decreasing the film thickness and determining the ideal ratio of radiopacifying agents in Portland cement-based root canal sealers. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
New method for antibiotic release from bone cement (polymethylmethacrylate): Redefining boundaries.
Carbó-Laso, E; Sanz-Ruiz, P; Del Real-Romero, J C; Ballesteros-Iglesias, Y; Paz-Jiménez, E; Arán-Ais, F; Sánchez-Navarro, M; Pérez-Limiñana, M A; López-Torres, I; Vaquero-Martín, J
Increasing antimicrobial resistance is promoting the addition of antibiotics with high antistaphylococcal activity to polymethylmethacrylate (PMMA) for use in cement spacers in periprosthetic joint infection. Linezolid and levofloxacin have already been used in in-vitro studies; rifampicin, however, has been shown to have a deleterious effect on the mechanical properties of PMMA because it inhibits PMMA polymerization. The objective of our study was to isolate rifampicin during the polymerization process using microencapsulation techniques, in order to obtain a PMMA suitable for manufacturing bone cement spacers. Microcapsules of rifampicin were synthesized with alginate and PHBV, using Rifaldin®. The concentration levels of rifampicin were studied by UV-visible spectrophotometry. Compression, hardness and setting time tests were performed with CMW®1 cement samples alone, with non-encapsulated rifampicin and with alginate or PHBV microcapsules. The production yield, efficiency and microencapsulation yield were greater with alginate (P = .0001). The cement with microcapsules demonstrated greater resistance to compression than the cement with rifampicin (91.26±5.13, 91.35±6.29 and 74.04±3.57 MPa for alginate, PHBV and rifampicin, respectively) (P = .0001). The setting time was reduced, and the hardness curve of the cement with alginate microcapsules was similar to that of the control. Microencapsulation with alginate is an appropriate technique for introducing rifampicin into PMMA, preserving compression properties and setting time. This could allow intraoperative manufacturing of bone cement spacers that release rifampicin for the treatment of periprosthetic joint infection. Copyright © 2017 SECOT. Published by Elsevier España, S.L.U. All rights reserved.
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
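To illustrate the block-matching step, here is a serial sketch of an exhaustive minimum-SAD search in Python with NumPy; the paper maps this search onto a parallel architecture, which is not reproduced here, and the block size and search radius below are illustrative defaults.

```python
import numpy as np

def best_match(ref, cur, top, left, block=8, radius=4):
    """Find the displacement of one block of `cur` within +/-radius pixels
    of its position in `ref`, minimising the sum of absolute differences (SAD)."""
    target = cur[top:top+block, left:left+block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block would fall outside the frame
            sad = np.abs(ref[y:y+block, x:x+block].astype(np.int32) - target).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```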
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
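The p-curve construction can be sketched as follows, assuming the Pillow imaging library. Since tetrolet covering is not available in standard libraries, this mock-up substitutes a plain per-block change test for the tetrolet covering index; the recompress-and-compare loop and the local-minimum reading of the curve are the same idea in simplified form.

```python
import io
import numpy as np
from PIL import Image

def p_curve(img, qualities=range(50, 101, 5)):
    """Recompress `img` at each quality factor and record, per 4x4 block,
    the fraction of blocks that changed. A local minimum of the curve near
    some quality factor hints at prior JPEG compression at that quality."""
    ref = np.asarray(img.convert("L"), dtype=np.int16)
    h, w = (d - d % 4 for d in ref.shape)  # trim to whole 4x4 blocks
    curve = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=q)
        rec = np.asarray(Image.open(buf).convert("L"), dtype=np.int16)
        changed, total = 0, 0
        for y in range(0, h, 4):
            for x in range(0, w, 4):
                total += 1
                if np.any(ref[y:y+4, x:x+4] != rec[y:y+4, x:x+4]):
                    changed += 1
        curve.append((q, changed / total))
    return curve
```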
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, Ward; Pearson, Mark A.; Jensen, Wayne A.
2015-09-13
Compression set of solid (non-porous) Dow Corning SE 1700, Sylgard 184, and “new” M9787 siloxane elastomers was measured according to ASTM D395 Method B. Specimens of SE 1700 were made using (1) the manufacturer’s suggested cure of 150°C for 30 min and (2) an extended cure of 60°C for 6 h and 150°C for 1 h followed by a post-cure under nitrogen purge at 125°C for 12 h. Four specimens of each material were aged at 25-27% compressive strain at 70°C under nitrogen purge for 70 h. Final thickness of each specimen was measured after a 30-min cooling/relaxation period, and compression set relative to deflection was calculated. The average compression set relative to deflection was 6.0% for SE 1700 made using the extended cure and post-cure, 11.3% for SE 1700 made using the manufacturer’s suggested cure, 12.1% for Sylgard 184, and 1.9% for M9787. The extended cure and post-cure reduced the amount of compression set in SE 1700.
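For reference, ASTM D395 Method B expresses compression set as a percentage of the applied deflection; a small helper with hypothetical specimen dimensions (not taken from the report) illustrates the arithmetic.

```python
def compression_set_pct(t0, ti, tn):
    """ASTM D395 Method B: C = 100 * (t0 - ti) / (t0 - tn), where t0 is the
    original specimen thickness, ti the recovered thickness after ageing,
    and tn the spacer-bar (compressed) thickness."""
    return 100.0 * (t0 - ti) / (t0 - tn)

# Hypothetical numbers at ~26% strain: a 12.5 mm specimen compressed to
# 9.25 mm that recovers to 12.3 mm shows a set of ~6%, comparable to the
# extended-cure SE 1700 result above.
print(round(compression_set_pct(12.5, 12.3, 9.25), 1))  # 6.2
```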
Performance evaluation of the intra compression in the video coding standards
NASA Astrophysics Data System (ADS)
Abramowski, Andrzej
2015-09-01
The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of the techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using the BD-PSNR and BD-RATE metrics, with H.265/HEVC results as an anchor. Tests are performed on a set of video sequences composed of sequences gathered by the Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by the Ultra Video Group. According to the results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between efficiency and required encoding time.
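The BD-RATE metric used here can be computed with the standard Bjontegaard procedure; a compact NumPy sketch (the cubic-fit variant, which is the common choice):

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Bjontegaard delta-rate: fit cubic polynomials of log-rate as a
    function of PSNR for anchor and test codecs, integrate both over the
    common PSNR interval, and report the average bitrate difference in
    percent (negative means the test codec saves bitrate vs the anchor)."""
    pa = np.polyfit(psnr_anchor, np.log(rates_anchor), 3)
    pt = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_log_diff = (it - ia) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0
```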
el-Briak, Hasna; Durand, Denis; Nurit, Josiane; Munier, Sylvie; Pauvert, Bernard; Boudeville, Phillipe
2002-01-01
By mixing CaHPO(4) x 2H(2)O (DCPD) and CaO with water or sodium phosphate buffers as the liquid phase, a calcium phosphate cement was obtained. Its physical and mechanical properties, such as compressive strength, initial and final setting times, cohesion time, dough time, swelling time, dimensional and thermal behavior, and injectability, were investigated by varying different parameters such as the liquid-to-powder (L/P) ratio (0.35-0.7 ml g(-1)), the molar calcium-to-phosphate (Ca/P) ratio (1.67-2.5), and the pH (4, 7, and 9) and concentration (0-1 M) of the sodium phosphate buffer. The best results were obtained with the pH 7 sodium phosphate buffer at a concentration of 0.75 M. With this liquid phase, the physical and mechanical properties depended on the Ca/P and L/P ratios, varying from 3 to 11 MPa (compressive strength), 6 to 10 min (initial setting time), 11 to 15 min (final setting time), 15 to 30 min (swelling time), and 7 to 20 min (time to 100% injectability). The dough or working time was over 16 min. This cement expanded during its setting (1.2-5% according to the Ca/P and L/P ratios); this would allow a tight filling. Given the mechanical and rheological properties of this new DCPD/CaO-based cement, its use as a root canal sealing material can be considered, as with classical calcium hydroxide or ZnO/eugenol-based pastes, with or without a gutta-percha point. Copyright 2002 Wiley Periodicals, Inc. J Biomed Mater Res (Appl Biomater) 63: 447-453, 2002
NASA Astrophysics Data System (ADS)
Yoshida, Tomonori; Muto, Daiki; Tamai, Tomoya; Suzuki, Shinsuke
2018-04-01
Porous aluminum alloy with aligned unidirectional pores was fabricated by dipping A1050 tubes into A6061 semi-solid slurry. The porous aluminum alloy was processed through Equal-channel Angular Extrusion (ECAE) while preventing cracking and maintaining both the pore size and porosity by setting the insert material and loading back pressure. The specific compressive yield strength of the sample aged after 13 passes of ECAE was approximately 2.5 times higher than that of the solid-solutionized sample without ECAE. Both the energy absorption E V and energy absorption efficiency η V after four passes of ECAE were approximately 1.2 times higher than that of the solid-solutionized sample without ECAE. The specific yield strength was improved via work hardening and precipitation following dynamic aging during ECAE. E V was improved by the application of high compressive stress at the beginning of the compression owing to work hardening via ECAE. η V was improved by a steep increase of stress at low compressive strain and by a gradual increase of stress in the range up to 50 pct of compressive strain. The gradual increase of stress was caused by continuous shear fracture in the metallic part, which was due to the high dislocation density and existence of unidirectional pores parallel to the compressive direction in the structure.
Plasma Studies in the SPECTOR Experiment as Target Development for MTF
NASA Astrophysics Data System (ADS)
Ivanov, Russ; Young, William; the Fusion Team, General
2016-10-01
General Fusion (GF) is developing a Magnetized Target Fusion (MTF) concept in which magnetized plasmas are adiabatically compressed to fusion conditions by the collapse of a liquid metal vortex. To study and optimize the plasma compression process, GF has a field test program in which subscale plasma targets are rapidly compressed with a moving flux conserver. GF has done many field tests to date on plasmas with sufficient thermal confinement but with a compression geometry that is not nearly self-similar. GF has a new design for its subscale plasma injectors called SPECTOR (for SPhErical Compact TORoid) capable of generating and compressing plasmas with a more spherical form factor. SPECTOR forms spherical tokamak plasmas by coaxial helicity injection into a flux conserver (a = 9 cm, R = 19 cm) with a pre-existing toroidal field created by a 0.5 MA current in an axial shaft. The toroidal plasma current of 100 - 300 kA resistively decays over a time period of 1.5 msec. SPECTOR1 has an extensive set of plasma diagnostics including Thomson scattering and polarimetry. MHD stability and lifetime of the plasma were explored in different magnetic configurations with a variable safety factor q(Ψ). Relatively hot (Te >= 350 eV) and dense (~10^20 m^-3) plasmas have achieved energy confinement times τE >= 100 μsec and are now ready for field compression tests. russ.ivanov@generalfusion.com.
Memory hierarchy using row-based compression
Loh, Gabriel H.; O'Connor, James M.
2016-10-25
A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
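A toy model of such a row, with illustrative field names not taken from the patent, might look like this in Python:

```python
import zlib
from dataclasses import dataclass, field

@dataclass
class Row:
    """One cache row holding compressed blocks of non-uniform size plus a
    tag entry per data block, loosely mirroring the layout described above."""
    capacity: int                                 # bytes of compressed storage per row
    tags: list = field(default_factory=list)      # (block_addr, offset, length)
    data: bytearray = field(default_factory=bytearray)

    def insert(self, addr, raw_block):
        blob = zlib.compress(raw_block)           # blocks compress to different sizes
        if len(self.data) + len(blob) > self.capacity:
            return False                          # row full; a real cache would evict
        self.tags.append((addr, len(self.data), len(blob)))
        self.data += blob
        return True

    def lookup(self, addr):
        for a, off, ln in self.tags:
            if a == addr:                         # tag hit: decompress just this block
                return zlib.decompress(bytes(self.data[off:off+ln]))
        return None

row = Row(capacity=4096)
row.insert(0x1000, b"\x00" * 64)
assert row.lookup(0x1000) == b"\x00" * 64
```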
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
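The "integer" in IWT is what makes lossless operation possible: the transform maps integers to integers reversibly. A one-level sketch using the S-transform, a lifting-based integer Haar shown here as a generic example rather than the paper's exact filter:

```python
import numpy as np

def iwt_1d(x):
    """One level of the S-transform: exactly invertible on integers."""
    x = np.asarray(x, dtype=np.int64)
    s = (x[0::2] + x[1::2]) >> 1     # approximation: floor of pairwise mean
    d = x[0::2] - x[1::2]            # detail: pairwise difference
    return s, d

def iiwt_1d(s, d):
    """Inverse of iwt_1d, recovering the original signal exactly."""
    even = s + ((d + 1) >> 1)
    odd = even - d
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([5, 3, 8, 8, 1, 9, 2, 2])
s, d = iwt_1d(x)
assert np.array_equal(iiwt_1d(s, d), x)   # perfect reconstruction
```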
Photon-limited Sensing and Surveillance
2015-01-29
considerable time delay). More specifically, there were four main outcomes from this work: • Improved understanding of the fundamental limitations of...that we design novel cameras for photon-limited settings based on the principles of CS. Most prior theoretical results in compressed sensing and related...inverse problems apply to idealized settings where the noise is i.i.d., and do not account for signal-dependent noise and physical sensing
Vertical Object Layout and Compression for Fixed Heaps
NASA Astrophysics Data System (ADS)
Titzer, Ben L.; Palsberg, Jens
Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
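The difference between the two layouts is easiest to see in miniature. The sketch below contrasts a conventional object-per-record layout with a vertical (column-per-field) layout; it is an illustration of the idea, not Virgil's actual heap encoding.

```python
# Standard ("horizontal") layout: one record per object.
sensors_h = [
    {"id": 0, "reading": 512, "alarm": False},
    {"id": 1, "reading": 497, "alarm": True},
]

# Vertical layout: one column per field, indexed by object number.
# Columns with few distinct values compress well when stored this way,
# which is the intuition behind compressing a fixed heap field-by-field.
sensors_v = {
    "id":      [0, 1],
    "reading": [512, 497],
    "alarm":   [False, True],
}

def get_field(layout, obj, name):
    """Field access under the vertical layout: index the column by object."""
    return layout[name][obj]

assert get_field(sensors_v, 1, "reading") == sensors_h[1]["reading"]
```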
Dataset of producing and curing concrete using domestic treated wastewater
Asadollahfardi, Gholamreza; Delnavaz, Mohammad; Rashnoiee, Vahid; Fazeli, Alireza; Gonabadi, Navid
2015-01-01
We tested the setting time of cement, slump, and compressive and tensile strength of 54 triplicate cubic samples and 9 cylindrical samples of concrete with and without a Superplasticizer admixture. We produced concrete samples made with drinking water and with treated domestic wastewater (taken before chlorination) containing 300 and 400 kg/m3 of cement, and then cured concrete samples made with drinking water and treated wastewater. Second, concrete samples were made with 350 kg/m3 of cement and a Superplasticizer admixture, using drinking water and treated wastewater, and then cured with treated wastewater. The compressive strength of all the concrete samples made with treated wastewater had a high coefficient of determination with the control concrete samples. The 28-day tensile strength of all the samples was 96–100% of the tensile strength of the control samples, and the setting time was reduced by 30 min, which was consistent with the ASTM C191 standard. Producing and curing samples with treated wastewater did not have a significant effect on the water absorption, slump, and surface electrical resistivity tests. However, the compressive strength at 21 days of concrete samples using 300 kg/m3 of cement under rapid freezing and thawing conditions was about 11% lower than that of concrete samples made with drinking water. PMID:26862577
Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter
2017-05-15
The past decade has seen the introduction of new technologies that have steadily lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16% compared to 7-Zip at the Ultra setting. A Windows executable version can be downloaded at https://github.com/tparidae/AFresh . tom.paridaens@ugent.be. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Concretes of low environmental impact obtained by geopolymerization of Metakaolin
NASA Astrophysics Data System (ADS)
Sandoval, D. C.; Montaño, A. M.; González, C. P.; Gutiérrez, J.
2018-04-01
This work shows results of the partial replacement of Portland Type I cement® by geopolymers, obtained through alkaline activation of Metakaolin, in concrete mixtures. Replacement was made with 10%, 20% and 30% of geopolymers at 7, 14, 28 and 90 days of setting. Cement samples were mechanically and electrically tested. The mechanical resistance to compression assay shows that the best replacement percentage is 10% for every setting time; the highest value is 26.75 MPa at 90 days. Nyquist diagrams at different immersion times exhibit the same trend: electrical resistance decreases as the assay time elapses.
van Tulder, R; Roth, D; Krammel, M; Laggner, R; Heidinger, B; Kienbacher, C; Novosad, H; Chwojka, C; Havel, C; Sterz, F; Schreiber, W; Herkner, H
2014-01-01
Compression depth is frequently suboptimal in cardiopulmonary resuscitation (CPR). We investigated the effects of intensified wording and/or repetitive target depth instructions on compression depth in telephone-assisted, protocol driven, bystander CPR on a simulation manikin. Thirty-two volunteers performed 10 min of compression-only CPR in a prospective, investigator-blinded, 4-armed, factorial setting. Participants were randomized either to standard wording ("push down firmly 5 cm"), intensified wording ("it is very important to push down 5 cm every time"), or standard or intensified wording repeated every 20 s. Three dispatchers were randomized to give these instructions. The primary outcome was relative compression depth (absolute compression depth minus leaning depth). Secondary outcomes were absolute distance, hands-off times, as well as the BORG scale and nine-hole peg test (NHPT), pulse rate and blood pressure to reflect physical exertion. We applied a random effects linear regression model. Relative compression depth was 35 ± 10 mm (standard) versus 31 ± 11 mm (intensified wording) versus 25 ± 8 mm (repeated standard) and 31 ± 14 mm (repeated intensified wording). Adjusted for design, body mass index and female sex, intensified wording and repetition led to decreased compression depth of 13 (95%CI -25 to -1) mm (p=0.04) and 9 (95%CI -21 to 3) mm (p=0.13), respectively. Secondary outcomes regarding intensified wording showed significant differences for absolute distance (43 ± 2 versus 20 (95%CI 3-37) mm; p=0.01) and hands-off times (60 ± 40 versus 157 (95%CI 63-251) s; p=0.04). In protocol driven, telephone-assisted, bystander CPR, intensified wording and/or repetitive target depth instructions did not improve compression depth compared to the standard instruction. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Compression of next-generation sequencing quality scores using memetic algorithm
2014-01-01
Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs compression codebook using MA based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
Geometry of generalized depolarizing channels
NASA Astrophysics Data System (ADS)
Burrell, Christian K.
2009-10-01
A generalized depolarizing channel acts on an N-dimensional quantum system to compress the “Bloch ball” in N^2-1 directions; it has a corresponding compression vector. We investigate the geometry of these compression vectors and prove a conjecture of Dixit and Sudarshan [Phys. Rev. A 78, 032308 (2008)], namely, that when N=2^d (i.e., the system consists of d qubits) and we work in the Pauli basis, the set of all compression vectors forms a simplex. We extend this result by investigating the geometry in other bases; in particular we find precisely when the set of all compression vectors forms a simplex.
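For a single qubit the compression vector has a simple closed form. The example below is ours, not from the paper, and uses the standard parametrization of a Pauli channel by error probabilities; sweeping those probabilities over their simplex traces out the tetrahedron of compression vectors that the result describes.

```python
import numpy as np

def compression_vector(pI, pX, pY, pZ):
    """Bloch-ball scale factors of a single-qubit Pauli channel
    rho -> pI*rho + pX*X rho X + pY*Y rho Y + pZ*Z rho Z, with the
    probabilities summing to 1: each axis keeps the Paulis that commute
    with it (+) and flips sign for those that anticommute (-)."""
    lx = pI + pX - pY - pZ
    ly = pI - pX + pY - pZ
    lz = pI - pX - pY + pZ
    return np.array([lx, ly, lz])

# Uniform Pauli noise gives the familiar isotropic depolarizing shrink.
print(compression_vector(0.85, 0.05, 0.05, 0.05))  # [0.8 0.8 0.8]
```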
NASA Astrophysics Data System (ADS)
Mawardi, M.; Deyundha, D.; Zainul, R.; Zalmi P, R.
2018-04-01
A study was conducted to determine the characteristics of Portland composite cement with the addition of napa soil from Sarilamak subdistrict, 50 Kota District, as an alternative additional material at PT. Semen Padang. Napa soil is a natural material rich in silica and alumina minerals, which makes it a candidate constituent in cement production. This study aims to determine the effect of napa soil on the quality of Portland composite cement. Napa soil was used in variation compositions of 0%, 4%, 8%, 12% and 16%; the control cement used 8% pozzolan and 0% napa soil. Cement quality was determined by testing cement characteristics including the Blaine fineness test, sieving, loss on ignition (LOI), insoluble residue, normal consistency, setting time and compressive strength. The cement was characterized using XRF. The fineness of the cement decreased with the addition of napa soil. The loss on ignition of the cement decreased, while the insoluble residue increased, with the addition of napa soil. The normal consistency of the cement increased, as did the initial and final setting times. The resulting 28-day compressive strength decreased with the addition of napa soil: 342, 325, 307, 306 and 300 kg/cm2.
Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression
NASA Astrophysics Data System (ADS)
Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard
2017-09-01
We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes, from radio to gamma rays, to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. The programme's search demands impose a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of the programme, led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image comprises 70 charge-coupled devices saved as a 1.2 gigabyte FITS file. Near real-time data processing and fast transient candidate identification, in minutes for rapid follow-up triggers on other telescopes, requires computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a remote location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise our gain in transfer time and still achieve our science goals, we opt for lossy data compression, keeping in mind that the raw data is archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the Deeper Wider Faster science pipeline and enabling science that was otherwise too difficult with current technology.
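The reported linear relation between compression ratio and transmission speed-up follows directly from the transfer-time model; a worked example with a hypothetical uplink bandwidth (the actual CTIO link speed is not stated above):

```python
def transfer_seconds(size_bytes, bandwidth_bytes_per_s, ratio=1.0):
    """Time to transmit a file compressed at `ratio`:1; the speed-up over
    the uncompressed transfer is the compression ratio by construction."""
    return size_bytes / ratio / bandwidth_bytes_per_s

DECAM_IMAGE = 1.2e9            # one DECam FITS file, ~1.2 GB
LINK = 10e6 / 8                # assumed 10 Mbit/s uplink (illustrative)
print(transfer_seconds(DECAM_IMAGE, LINK))        # ~960 s uncompressed
print(transfer_seconds(DECAM_IMAGE, LINK, 25.0))  # ~38 s at 25:1
```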
Sensitivity Analysis in RIPless Compressed Sensing
2014-10-01
The compressive sensing framework finds a wide range of applications in signal processing and analysis. ...compressed sensing. More specifically, we show that in a noiseless and RIP-less setting [11], the recovery process of a compressed sensing framework is...
Networking of three dimensional sonography volume data.
Kratochwil, A; Lee, A; Schoisswohl, A
2000-09-01
Three-dimensional (3D) sonography enables the examiner to store, instead of copies of single B-scan planes, a volume consisting of 300 scan planes. The volume is displayed on a monitor in the form of three orthogonal planes: longitudinal, axial and coronal. Translation and rotation facilitate anatomical orientation and provide any arbitrary plane within the volume to generate organ-optimized scan planes. Different algorithms allow the extraction of different information, such as surfaces or bone structures by the maximum mode, or fluid-filled structures such as vessels by the minimum mode. The volume may also contain color information of vessels. The digitized information is stored on a magneto-optical disc. This allows virtual scanning in the absence of the patient under the same conditions as when the volume was primarily stored. The volume size depends on different, examiner-controlled settings. A volume may need a storage capacity between 2 and 16 MB of 8-bit gray level information. As such huge data sets are unsuitable for network transfer, data compression is of paramount interest. 100 stored volumes were submitted to JPEG, MPEG, and biorthogonal wavelet compression. The original and compressed volumes were randomly shown on two monitors. In cases of noticeable image degradation, information on the location of the original and compressed volume and the compression ratio was recorded. Numerical measures of compression fidelity, such as pixel error calculation and computation of the square root error, proved unsuitable for evaluating image degradation. The best results in recognizing image degradation were achieved by image experts. The experts disagreed on the ratio at which image degradation became visible in only 4% of the volumes. Wavelet compression ratios of 20:1 or 30:1 could be used without discernible information reduction. The effect of volume compression is reflected both in the reduction of transfer time and in storage capacity. The transmission time for a volume of 6 MB using a normal telephone line with a data flow of 56 kB/s was reduced from 14 min to 28 s at a compression ratio of 30:1. Compression reduced storage requirements from 6 MB uncompressed to 200 kB at a compression ratio of 30:1. This successful compression opens new possibilities for intra- and extra-hospital and global exchange of 3D sonography information. The key to this communication is not only volume compression, but also the fact that the 3D examination can be simulated on any PC by the developed 3D software, much as PACS teleradiology uses digitized radiographs transmitted over standard telephone lines. Systems in combination with the management systems of HIS and RIS are available for archiving, retrieval of images and reports, and for local and global communication. This form of telemedicine will have an impact on cost reduction in hospitals and reduction of transport costs. On this foundation, worldwide education and multi-center studies become possible.
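The quoted timings are consistent if the "56 kB/s" telephone link is read as a 56 kbit/s modem line, an assumption made explicit in this check:

```python
def tx_time_s(size_bytes, kbit_per_s=56.0, ratio=1.0):
    """Transmission time over a dial-up link for a file compressed at
    `ratio`:1, assuming the quoted rate is 56 kbit/s (not kilobytes)."""
    return size_bytes * 8 / (kbit_per_s * 1000) / ratio

print(tx_time_s(6e6) / 60)        # ~14.3 min for 6 MB uncompressed
print(tx_time_s(6e6, ratio=30))   # ~28.6 s at a 30:1 ratio
```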
NASA Astrophysics Data System (ADS)
Orović, Irena; Stanković, Srdjan; Amin, Moeness
2013-05-01
A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
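The L-statistics step amounts to rank-based trimming of the ambiguity-domain samples before reconstruction; a minimal sketch, with an illustrative discard fraction rather than the paper's weighting scheme:

```python
import numpy as np

def l_statistics_trim(samples, discard_frac=0.2):
    """Sort observations by magnitude and discard the largest ones, which
    are the likeliest to carry impulsive noise; the surviving samples then
    serve as the observations for the sparse reconstruction."""
    order = np.argsort(np.abs(samples))
    keep = order[: int(len(samples) * (1 - discard_frac))]
    mask = np.zeros(len(samples), dtype=bool)
    mask[keep] = True
    return mask

rng = np.random.default_rng(0)
clean = rng.normal(size=256)
impulses = (rng.random(256) < 0.05) * rng.normal(scale=50.0, size=256)
mask = l_statistics_trim(clean + impulses)   # observations retained
```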
Self-degradable Cementitious Sealing Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugama, T.; Butcher, T., Lance Brothers, Bour, D.
2010-10-01
A self-degradable alkali-activated cementitious material consisting of a sodium silicate activator, slag, Class C fly ash, and a sodium carboxymethyl cellulose (CMC) additive was formulated as one dry-mix component, and we evaluated its potential in the laboratory for use as a temporary sealing material for Enhanced Geothermal System (EGS) wells. Self-degradation of the alkali-activated cementitious material (AACM) occurred when AACM heated at temperatures of ≥200 °C came in contact with water. We interpreted the mechanism of this water-initiated self-degradation as resulting from in-situ exothermic reactions between the reactants yielded by the dissolution of the non-reacted or partially reacted sodium silicate activator and the thermal degradation of the CMC. The magnitude of self-degradation depended on the CMC content; its effective content in promoting degradation was ≥0.7%. In contrast, no self-degradation was observed for CMC-modified Class G well cement. AACMs without CMC, autoclaved at 200 °C and then heated at temperatures up to 300 °C, had a compressive strength ranging from 5982 to 4945 psi, approximately 3.5-fold higher than that of the commercial Class G well cement; the initial and final setting times of this AACM slurry at 85 °C were approximately 60 and 90 min. Two well-formed crystalline hydration phases, 1.1 nm tobermorite and calcium silicate hydrate (I), were responsible for developing this excellent compressive strength. Although CMC is attractive as a degradation-promoting additive, its addition to both the AACM and the Class G well cement altered some properties of the original cementitious materials, extending their setting times, increasing their porosity, and lowering their compressive strength. Nevertheless, a 0.7% CMC-modified AACM as a self-degradable cementitious material displayed the following properties before its breakdown by water: approximately 120 min initial and 180 min final setting times at 85 °C, and 1825 to 1375 psi compressive strength with 51.2 to 55.0% porosity up to 300 °C.
Jayabalan, M.
2009-01-01
The effect of reinforcement in the cross-linked poly(propylene fumarate-co-caprolactone diol) thermoset composites based on Kevlar fibres and hydroxyapatite was studied. Cross-linked poly(propylene fumarate-co-caprolactone diol) was also studied without any reinforcement for comparison. The reinforcing fibre acts as a barrier for the curing reaction leading to longer setting time and lesser cross-link density. The fibre and HA reinforced composites have almost the same compressive strength. Nonreinforced material undergoes greater degree of swelling. Among the reinforced materials, the hydroxyapatite reinforced composite has a much higher swelling percentage than the fibre reinforced one. The studies on in vitro degradation of the cured materials reveal hydrolytic degradation in Ringer's solution and PBS medium during aging. All the three materials are found to swell initially in Ringer's solution and PBS medium during aging and then undergo gradual degradation. Compression properties of these cross-linked composites increase with aging; HA reinforced composite has the highest compressive strength and compressive modulus, whereas the aged fibre-reinforced composite has the least compressive strength and modulus. PMID:20126578
Guerreiro-Tanomaru, Juliane Maria; Vázquez-García, Fernando Antonio; Bosso-Martelo, Roberta; Bernardi, Maria Inês Basso; Faria, Gisele; Tanomaru, Mario
2016-01-01
Mineral Trioxide Aggregate (MTA) is a calcium silicate cement composed of Portland cement (PC) and bismuth oxide. Hydroxyapatite has been incorporated to enhance mechanical and biological properties of dental materials. This study evaluated physicochemical and mechanical properties and antibiofilm activity of MTA and PC associated with zirconium oxide (ZrO2) and hydroxyapatite nanoparticles (HAn). White MTA (Angelus, Brazil); PC (70%)+ZrO2 (30%); PC (60%)+ZrO2 (30%)+HAn (10%); and PC (50%)+ZrO2 (30%)+HAn (20%) were evaluated. The pH was assessed by a digital pH-meter and solubility by mass loss. Setting time was evaluated by using Gilmore needles. Compressive strength was analyzed by mechanical testing. Samples were radiographed alongside an aluminum step wedge to evaluate radiopacity. For the antibiofilm evaluation, materials were placed in direct contact with E. faecalis biofilm induced on dentine blocks. The number of colony-forming units (CFU mL-1) in the remaining biofilm was evaluated. The results were submitted to ANOVA and the Tukey test, with 5% significance. There was no difference in pH levels of PC+ZrO2, PC+ZrO2+HAn (10%) and PC+ZrO2+HAn (20%) (p>0.05), and these cements presented higher pH levels than MTA (p<0.05). The highest solubility was observed in PC+ZrO2+HAn (10%) and PC+ZrO2+HAn (20%) (p<0.05). MTA had the shortest initial setting time (p<0.05). All the materials showed radiopacity higher than 3 mmAl. PC+ZrO2 and MTA had the highest compressive strength (p<0.05). The materials did not completely neutralize the bacterial biofilm, but the association with HAn provided greater bacterial reduction than MTA and PC+ZrO2 (p<0.05) after the post-manipulation period of 2 days. The addition of HAn to PC associated with ZrO2 impaired the compressive strength and solubility. On the other hand, HAn did not change the pH and the initial setting time, but improved the radiopacity (HAn 10%), the final setting time and the E. faecalis antibiofilm activity of the cement.
A theory of post-stall transients in axial compression systems. I - Development of equations
NASA Technical Reports Server (NTRS)
Moore, F. K.; Greitzer, E. M.
1985-01-01
An approximate theory is presented for post-stall transients in multistage axial compression systems. The theory leads to a set of three simultaneous nonlinear third-order partial differential equations for pressure rise, and average and disturbed values of flow coefficient, as functions of time and angle around the compressor. By a Galerkin procedure, angular dependence is averaged, and the equations become first order in time. These final equations are capable of describing the growth and possible decay of a rotating-stall cell during a compressor mass-flow transient. It is shown how rotating-stall-like and surgelike motions are coupled through these equations, and also how the instantaneous compressor pumping characteristic changes during the transient stall process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiti, A.; Weisgraber, T. H.; Gee, R. H.
M97* and M9763 belong to the M97xx series of cellular silicone materials that have been deployed as stress cushions in some of the LLNL systems. The purpose of these support foams is to distribute the stress between adjacent components, maintain the relative positioning of various components, and mitigate the effects of component size variation due to manufacturing and temperature changes. In service these materials are subjected to a continuous compressive strain over long periods of time. In order to ensure their effectiveness, it is important to understand how their mechanical properties change over time. The properties we are primarily concerned about are: compression set, load retention, and stress-strain response (modulus).
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
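The quoted fidelity criterion (root mean square error of roughly one or two gray levels on an 8-bit scale) is straightforward to compute; a short NumPy example on synthetic data:

```python
import numpy as np

def rms_error(original, reconstructed):
    """Root-mean-square reconstruction error in gray levels (8-bit scale)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
# Stand-in for a compress/reconstruct round trip: small additive error.
recon = np.clip(img + rng.normal(0, 1.5, img.shape), 0, 255).astype(np.uint8)
print(round(rms_error(img, recon), 2))   # on the order of 1-2 gray levels
```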
smallWig: parallel compression of RNA-seq WIG files.
Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica
2016-01-15
We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random access functionalities that enable visualization, summary statistics analysis and fast queries from the compressed files. Our approach results in order-of-magnitude improvements compared with bigWig and ensures compression rates that are only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage space-sensitive applications, the method relies on context mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. To substantiate this claim, we performed a statistical analysis of expression data in different transform domains and developed accompanying entropy coding methods that bridge the gap between theoretical and practical WIG file compression rates. We tested different variants of the smallWig compression algorithm on a number of integer- and real- (floating point) valued RNA-seq WIG files generated by the ENCODE project. The results reveal that, on average, smallWig offers 18-fold compression rate improvements, up to 2.5-fold compression time improvements, and 1.5-fold decompression time improvements when compared with bigWig. On the tested files, the memory usage of the algorithm never exceeded 90 KB. When more elaborate context mixing compressors were used within smallWig, the obtained compression rates were as much as 23 times better than those of bigWig. For smallWig used in the random query mode, which also supports retrieval of the summary statistics, an overhead in the compression rate of roughly 3-17% was introduced depending on the chosen system parameters. An increase in encoding and decoding time of 30% and 55% represents an additional performance loss caused by enabling random data access. We also implemented smallWig using multi-processor programming. This parallelization feature decreases the encoding delay 2-3.4 times compared with that of a single-processor implementation, with the number of processors used ranging from 2 to 8; in the same parameter regime, the decoding delay decreased 2-5.2 times.
The smallWig software can be downloaded from: http://stanford.edu/~zhiyingw/smallWig/smallwig.html, http://publish.illinois.edu/milenkovic/, http://web.stanford.edu/~tsachy/. zhiyingw@stanford.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
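The random-query mode described above rests on block-wise compression plus an index. A toy version of that scheme, unrelated to smallWig's actual container format:

```python
import zlib

def compress_blocks(values, block_size=1000):
    """Compress fixed-size runs of values independently and keep an index,
    so a point query touches one block instead of the whole file."""
    blocks, index = [], []
    for i in range(0, len(values), block_size):
        blob = zlib.compress(",".join(map(str, values[i:i+block_size])).encode())
        index.append((i, len(blob)))
        blocks.append(blob)
    return blocks, index

def query(blocks, position, block_size=1000):
    """Decompress only the block containing `position`."""
    b = position // block_size
    vals = zlib.decompress(blocks[b]).decode().split(",")
    return float(vals[position % block_size])

blocks, index = compress_blocks([i * 0.5 for i in range(10000)])
assert query(blocks, 7321) == 3660.5
```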
Mechanical properties of new dental pulp-capping materials.
Nielsen, Matthew J; Casey, Jeffery A; VanderWeele, Richard A; Vandewalle, Kraig S
2016-01-01
The mechanical properties of pulp-capping materials may affect their resistance to fracture during placement of a final restorative material or while supporting an overlying restoration over time. The purpose of this study was to compare the compressive strength, flexural strength, and flexural modulus of 2 new pulp-capping materials (TheraCal LC and Biodentine), mineral trioxide aggregate (MTA), and calcium hydroxide over time. Specimens were created in molds and tested to failure in a universal testing machine after 15 minutes, 3 hours, and 24 hours. The MTA specimens did not set at 15 minutes. At all time periods, TheraCal LC had the greatest compressive and flexural strengths. After 3 and 24 hours, Biodentine had the greatest flexural modulus. TheraCal LC had greater early strength to potentially resist fracture during immediate placement of a final restorative material. Biodentine had greater stiffness after 3 hours to potentially provide better support of an overlying restoration under function over time.
A compression algorithm for the combination of PDF sets.
Carrazza, Stefano; Latorre, José I; Rojo, Juan; Watt, Graeme
The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
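A much-simplified picture of the compression step: choose a subset of replicas that preserves the ensemble's statistics. The sketch below matches only the mean and standard deviation with a stochastic swap search, whereas the actual algorithm matches a richer set of estimators (correlations, higher moments); all names and parameters here are illustrative.

```python
import numpy as np

def compress_replicas(replicas, target=100, iters=2000, seed=0):
    """Pick `target` of the original MC replicas so that the subset's mean
    and standard deviation stay close to those of the full ensemble."""
    rng = np.random.default_rng(seed)
    full_mu, full_sd = replicas.mean(0), replicas.std(0)

    def loss(idx):
        sub = replicas[idx]
        return (np.sum((sub.mean(0) - full_mu) ** 2)
                + np.sum((sub.std(0) - full_sd) ** 2))

    idx = rng.choice(len(replicas), target, replace=False)
    best = loss(idx)
    for _ in range(iters):                  # simple stochastic swap search
        trial = idx.copy()
        trial[rng.integers(target)] = rng.integers(len(replicas))
        if len(set(trial)) == target and loss(trial) < best:
            idx, best = trial, loss(trial)
    return idx

replicas = np.random.default_rng(1).normal(size=(1000, 50))  # toy ensemble
subset = compress_replicas(replicas)
```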
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. The waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement measured by the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
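The calculation reduces to a one-parameter least-squares fit between the twice-differentiated magnetic waveform and the accelerometer signal. A synthetic sketch under that reading of the method (variable names and the test signal are ours, not from the paper):

```python
import numpy as np

def estimate_depth(magnetic, accel, dt):
    """The magnetic waveform m(t) is proportional to displacement, so its
    second derivative should match the accelerometer signal a(t) up to a
    scale factor w. Fit w by least squares and return w*m(t) as the
    estimated compression-depth waveform."""
    d2m = np.gradient(np.gradient(magnetic, dt), dt)   # second derivative
    w = np.dot(d2m, accel) / np.dot(d2m, d2m)          # least-squares scale
    return w * magnetic

# Synthetic check: 40 mm-amplitude compressions at 2 Hz.
dt = 0.001
t = np.arange(0, 5, dt)
depth = 0.020 * (1 - np.cos(2 * np.pi * 2 * t))        # true displacement, m
magnetic = 500.0 * depth                               # arbitrary sensor gain
accel = np.gradient(np.gradient(depth, dt), dt)        # ideal accelerometer
est = estimate_depth(magnetic, accel, dt)
print(np.max(np.abs(est - depth)) < 0.002)             # within ~2 mm: True
```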
Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions
NASA Technical Reports Server (NTRS)
Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina
2002-01-01
OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between each reading. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic values of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P <.05). After converting the five-point scores to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R(2) = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.
HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads
Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila
2014-01-01
Background and objective: Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods: We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. For the inexactly mapped or unmapped reads, we realigned them against different reference genomes using an adaptive scheme that gradually shortens the read length. For the base quality values, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where a user can adjust the balance between decompression quality and compression rate. Lossless compression is obtained by setting k (the number of clusters) to the number of distinct quality values. Results: The proposed method produced a compression ratio in the range 0.5–0.65, which corresponds to 35–50% storage savings on the experimental datasets. The proposed approach achieved 15% more storage savings than CRAM and a compression ratio comparable to that of Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ under a General Public License (GPL). Limitation: Our method requires multiple reference genomes and prolongs the execution time owing to the additional alignments. Conclusions: The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference-based algorithms. PMID:24368726
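The lossy branch of the quality-value scheme is essentially scalar k-means quantization, and setting k to the number of distinct values makes it lossless. A hedged sketch of that trade-off (using scikit-learn as a stand-in; this is not the HUGO implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_qualities(quals, k):
    """Cluster base quality values into k levels; each quality is replaced
    by its (rounded) cluster centroid. If k equals the number of distinct
    values, every centroid coincides with a value and the step is lossless."""
    q = np.asarray(quals, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(q)
    centers = km.cluster_centers_.ravel()
    return np.rint(centers[km.labels_]).astype(int)

quals = [30, 31, 30, 12, 38, 37, 11, 30]
print(quantize_qualities(quals, k=3))  # coarser levels, higher compression
print(quantize_qualities(quals, k=6))  # k = #distinct values -> lossless
```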
Globalisation, the Challenge of Educational Synchronisation and Teacher Education
ERIC Educational Resources Information Center
Papastephanou, Marianna; Christou, Miranda; Gregoriou, Zelia
2013-01-01
In this article, we set out from the challenge that globalising synchronisation--usually exemplified by Organization for Economic Cooperation and Development and World Bank initiatives--presents for education to argue that the time-space compression effected by globalisation must educationally be dealt with with caution, critical vigilance and a…
An efficient and extensible approach for compressing phylogenetic trees.
Matthews, Suzanne J; Williams, Tiffani L
2011-10-18
Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those achieved on Newick or 7zip-compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.
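The set operations described above act on collections of trees. As a purely illustrative model (TreeZip itself operates on its compressed TRZ representation of shared bipartitions), one can view each tree as a frozenset of its bipartitions and a collection as a set of such trees, after which union, intersection, and set difference are ordinary set operations:

```python
# Illustrative only: a tree as a frozenset of bipartitions (splits),
# a collection as a set of trees.
t1 = frozenset({("A", "B"), ("A", "B", "C")})
t2 = frozenset({("A", "C"), ("A", "B", "C")})
coll1, coll2 = {t1, t2}, {t2}

print(coll1 | coll2)  # union: all unique trees from both collections
print(coll1 & coll2)  # intersection: trees common to both
print(coll1 - coll2)  # set difference: trees only in the first collection
```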
Effect of lower limb compression on blood flow and performance in elite wheelchair rugby athletes
Vaile, Joanna; Stefanovic, Brad; Askew, Christopher D.
2016-01-01
Objective To investigate the effects of compression socks worn during exercise on performance and physiological responses in elite wheelchair rugby athletes. Design In a non-blinded randomized crossover design, participants completed two exercise trials (4 × 8 min bouts of submaximal exercise, each finishing with a timed maximal sprint) separated by 24 hr, with or without compression socks. Setting National Sports Training Centre, Queensland, Australia. Participants Ten national representative male wheelchair rugby athletes with cervical spinal cord injuries volunteered to participate. Interventions Participants wore medical-grade compression socks on both legs during the exercise task (COMP); during the control trial no compression was worn (CON). Outcome Measures The efficacy of the compression socks was determined by assessments of limb blood flow, core body temperature, heart rate, and ratings of perceived exertion, perceived thermal strain, and physical performance. Results While no significant differences between conditions were observed for maximal sprint time, average lap time was better maintained in COMP compared to CON (P < 0.05). Lower limb blood flow increased from pre- to post-exercise by the same magnitude in both conditions (COMP: 2.51 ± 2.34; CON: 2.20 ± 1.85 ml·100 ml⁻¹·min⁻¹), whereas there was a greater increase in upper limb blood flow pre- to post-exercise in COMP (10.77 ± 8.24 ml·100 ml⁻¹·min⁻¹) compared to CON (6.21 ± 5.73 ml·100 ml⁻¹·min⁻¹; P < 0.05). Conclusion These findings indicate that compression socks worn during exercise are an effective intervention for maintaining submaximal performance during wheelchair exercise, and this performance benefit may be associated with an augmentation of upper limb blood flow. PMID:25582434
Fluorosilicone and silicone o-ring aging study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Robert; Gillen, Kenneth T.
2007-10-01
Fluorosilicone o-ring aging studies were performed. These studies examined the compressive force loss of fluorosilicone o-rings at accelerated (elevated) temperatures and were then used to make predictions about force loss at room temperature. The results were non-Arrhenius with evidence for a lowering in Arrhenius activation energies as the aging temperature was reduced. The compression set of these fluorosilicone o-rings was found to have a reasonably linear correlation with the force loss. The aging predictions based on using the observed curvature of the Arrhenius aging plots were validated by field aged o-rings that yielded degradation values reasonably close to the predictions. Compression set studies of silicone o-rings from a previous study resulted in good correlation to the force loss predictions for the fluorosilicone o-rings from this study. This resulted in a preliminary conclusion that an approximately linear correlation exists between compression set and force decay values for typical fluorosilicone and silicone materials, and that the two materials age at similar rates at low temperatures. Interestingly, because of the observed curvature of the Arrhenius plots available from longer-term, lower temperature accelerated exposures, both materials had faster force decay curves (and correspondingly faster buildup of compression set) at room temperature than anticipated from typical high-temperature exposures. A brief study on heavily filled conducting silicone o-rings resulted in data that deviated from the linear relationship, implying that a degree of caution must be exercised about any general statement relating force decay and compression set.
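The accelerated-aging extrapolation referenced above rests on time–temperature superposition: with an Arrhenius activation energy Ea, time t at absolute temperature T maps to an equivalent room-temperature time t·a_T. A worked sketch of the standard constant-Ea shift (the Ea value is purely illustrative; the study's point is precisely that the effective Ea drops at lower temperatures, so this constant-Ea extrapolation underpredicts room-temperature aging):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def shift_factor(Ea, T_aged, T_ref):
    """Arrhenius time-temperature shift: aging for time t at T_aged is
    equivalent to aging for t * a_T at the reference temperature."""
    return math.exp((Ea / R) * (1.0 / T_ref - 1.0 / T_aged))

# Illustrative: 1000 h at 125 C mapped to 25 C with Ea = 90 kJ/mol.
a_T = shift_factor(90e3, T_aged=398.15, T_ref=298.15)
print(f"equivalent room-temperature time: {1000 * a_T:.3g} h")
```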
3D Printed Silicones with Shape Memory
Wu, Amanda S.; Small IV, Ward; Bryson, Taylor M.; ...
2017-07-05
Direct ink writing enables the layer-by-layer manufacture of ordered, porous structures whose mechanical behavior is driven by architecture and material properties. Here, we incorporate two different gas-filled microsphere pore formers to evaluate the effect of shell stiffness and Tg on compressive behavior and compression set in siloxane matrix printed structures. The lower Tg microsphere structures exhibit substantial compression set when heated near and above Tg, with full structural recovery upon reheating without constraint. By contrast, the higher Tg microsphere structures exhibit reduced compression set with no recovery upon reheating. Aside from their role in tuning the mechanical behavior of direct ink write structures, polymer microspheres are good candidates for shape memory elastomers requiring structural complexity, with potential applications toward tandem shape memory polymers.
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial-domain compression, and difference encoding for temporal-domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown, in many cases, reductions as high as 90% in both storage space and inter-frame delay.
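The temporal half of the scheme, difference encoding, stores only the parts that changed between consecutive time steps; unchanged regions are shared, mirroring the merge of identical subtrees. A toy sketch of that idea on flat voxel arrays (the block/octree structure of the real encoder is omitted here as an assumption-simplifying step):

```python
import numpy as np

def diff_encode(frames):
    """Keep the first frame; for each later frame store only the voxels
    that changed, as (flat index, new value) pairs."""
    base = frames[0].copy()
    deltas, prev = [], base
    for f in frames[1:]:
        idx = np.flatnonzero(f != prev)
        deltas.append((idx, f.ravel()[idx]))
        prev = f
    return base, deltas

def diff_decode(base, deltas):
    out, cur = [base.copy()], base.copy()
    for idx, vals in deltas:
        cur = cur.copy()
        cur.ravel()[idx] = vals   # patch only the changed voxels
        out.append(cur)
    return out
```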
Monitoring and diagnosis of Alzheimer's disease using noninvasive compressive sensing EEG
NASA Astrophysics Data System (ADS)
Morabito, F. C.; Labate, D.; Morabito, G.; Palamara, I.; Szu, H.
2013-05-01
The majority of elderly with Alzheimer's Disease (AD) receive care at home from caregivers. In contrast to standard tethered clinical settings, wireless, real-time, body-area smartphone-based remote monitoring of the electroencephalogram (EEG) can be extremely advantageous for home care of those patients. Such wearable tools pave the way to personalized medicine, for example by making it possible to monitor the progression of the disease and the effect of drugs. By applying Compressive Sensing (CS) techniques it is in principle possible to overcome the difficulty raised by the smartphone's spatial-temporal throughput bottleneck. Unfortunately, EEG and other physiological signals are often non-sparse. In this paper, it is instead shown that the EEG of AD patients actually becomes more compressible with the progression of the disease. The EEG of subjects with Mild Cognitive Impairment (MCI) also shows a clear tendency toward enhanced compressibility. This feature favors the use of CS techniques and ultimately the use of telemonitoring with wearable sensors.
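One simple way to quantify "compressibility" in this sense is to ask what fraction of transform coefficients is needed to retain a fixed share of signal energy; fewer coefficients means a sparser, more CS-friendly signal. A hedged sketch using a DCT as the sparsifying transform (the transform choice and energy threshold are assumptions, not the authors' protocol):

```python
import numpy as np
from scipy.fft import dct

def compressibility(signal, energy_frac=0.95):
    """Fraction of DCT coefficients needed to capture `energy_frac` of the
    signal energy; lower values indicate a sparser signal."""
    c = np.sort(np.abs(dct(signal, norm="ortho")))[::-1]
    energy = np.cumsum(c ** 2) / np.sum(c ** 2)
    k = int(np.searchsorted(energy, energy_frac) + 1)
    return k / len(c)
```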
System and method for pre-cooling of buildings
Springer, David A.; Rainer, Leo I.
2011-08-09
A method for nighttime pre-cooling of a building comprises inputting one or more user settings and lowering the indoor temperature reading of the building during nighttime by operating an outside-air ventilation system followed, if necessary, by a vapor-compression cooling system. The method provides for nighttime pre-cooling of a building that maintains indoor temperatures within a comfort range based on the user input settings, calculated operational settings, and predictions of indoor and outdoor temperature trends for a future period of time such as the next day.
Effects of Compression on Speech Acoustics, Intelligibility, and Sound Quality
Souza, Pamela E.
2002-01-01
The topic of compression has been discussed quite extensively in the last 20 years (eg, Braida et al., 1982; Dillon, 1996, 2000; Dreschler, 1992; Hickson, 1994; Kuk, 2000 and 2002; Kuk and Ludvigsen, 1999; Moore, 1990; Van Tasell, 1993; Venema, 2000; Verschuure et al., 1996; Walker and Dillon, 1982). However, the latest comprehensive update by this journal was published in 1996 (Kuk, 1996). Since that time, use of compression hearing aids has increased dramatically, from half of hearing aids dispensed only 5 years ago to four out of five hearing aids dispensed today (Strom, 2002b). Most of today's digital and digitally programmable hearing aids are compression devices (Strom, 2002a). It is probable that within a few years, very few patients will be fit with linear hearing aids. Furthermore, compression has increased in complexity, with greater numbers of parameters under the clinician's control. Ideally, these changes will translate to greater flexibility and precision in fitting and selection. However, they also increase the need for information about the effects of compression amplification on speech perception and speech quality. As evidenced by the large number of sessions at professional conferences on fitting compression hearing aids, clinicians continue to have questions about compression technology and when and how it should be used. How does compression work? Who are the best candidates for this technology? How should adjustable parameters be set to provide optimal speech recognition? What effect will compression have on speech quality? These and other questions continue to drive our interest in this technology. This article reviews the effects of compression on the speech signal and the implications for speech intelligibility, quality, and design of clinical procedures. PMID:25425919
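For readers new to the topic, the heart of a compression hearing aid is its static input–output curve: below a knee point the gain is linear, while above it the output grows by only 1/ratio dB per input dB. A minimal sketch of that curve (all parameter values are illustrative, not clinical prescriptions):

```python
def compressed_output_db(input_db, threshold_db=50.0, ratio=3.0, gain_db=20.0):
    """Static compression curve: linear gain below the knee point; above it,
    each 1 dB of input yields only 1/ratio dB more output."""
    if input_db <= threshold_db:
        return input_db + gain_db
    return threshold_db + gain_db + (input_db - threshold_db) / ratio

for level in (40, 50, 60, 80):
    print(level, "->", compressed_output_db(level))
# 40 -> 60, 50 -> 70, 60 -> 73.33..., 80 -> 80.0
```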
On Chorin's Method for Stationary Solutions of the Oberbeck-Boussinesq Equation
NASA Astrophysics Data System (ADS)
Kagei, Yoshiyuki; Nishida, Takaaki
2017-06-01
Stability of stationary solutions of the Oberbeck-Boussinesq system (OB) and the corresponding artificial compressible system is considered. The latter system is obtained by adding the time derivative of the pressure with small parameter ε > 0 to the continuity equation of (OB), which was proposed by A. Chorin to find stationary solutions of (OB) numerically. Both systems have the same sets of stationary solutions, and the system (OB) is obtained from the artificial compressible one in the singular limit ε → 0. It is proved that if a stationary solution of the artificial compressible system is stable for sufficiently small ε > 0, then it is also stable as a solution of (OB). The converse is proved provided that the velocity field of the stationary solution satisfies some smallness condition.
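Concretely, Chorin's modification replaces the incompressibility constraint by a singularly perturbed continuity equation (standard notation assumed, matching the description rather than the paper's exact symbols):

```latex
\nabla \cdot u = 0
\quad\longrightarrow\quad
\varepsilon \,\partial_t p + \nabla \cdot u = 0 , \qquad \varepsilon > 0 .
```

Any stationary solution has \(\partial_t p = 0\), so both systems share the same stationary set, and (OB) is recovered in the singular limit \(\varepsilon \to 0\).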
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. The most commonly used indexes are variants of B-trees, such as the B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with well-known compression methods such as LZ77 and Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes, such as the B+-tree and B*-tree, have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
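The key WAH idea is to pack runs of identical bits into machine words so that logical operations can run directly on the compressed form. A much-simplified sketch of word-aligned run-length encoding with 31-bit groups, as in a 32-bit WAH word (illustrative only, not FastBit's code):

```python
def wah_encode(bits, w=31):
    """Word-aligned hybrid (simplified): split the bitmap into w-bit groups;
    a maximal run of all-0 or all-1 groups becomes one 'fill' word
    (bit value, run length); any mixed group becomes a 'literal' word."""
    groups = [bits[i:i + w].ljust(w, "0") for i in range(0, len(bits), w)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if g in ("0" * w, "1" * w):          # fill run of identical groups
            j = i
            while j < len(groups) and groups[j] == g:
                j += 1
            words.append(("fill", g[0], j - i))
            i = j
        else:                                # literal (mixed) group
            words.append(("literal", g))
            i += 1
    return words

print(wah_encode("0" * 93 + "1011" + "0" * 27))
# [('fill', '0', 3), ('literal', '1011' + 27 zeros)]
```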
Time-Dependent Testing Evaluation and Modeling for Rubber Stopper Seal Performance.
Zeng, Qingyu; Zhao, Xia
2018-01-01
Sufficient rubber stopper sealing performance throughout the entire sealed product life cycle is essential for maintaining container closure integrity in the parenteral packaging industry. However, prior publications have lacked systematic consideration of the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. In this paper, we report results of an effort to study these effects by applying both compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. By employing both testing evaluations and modeling calculations, an in-depth understanding of the time-dependent effects on rubber stopper sealing force was developed. Both testing and modeling data show good consistency, demonstrating that the sealing force decays exponentially over time and eventually levels off because of the viscoelastic nature of the rubber stoppers. The nonlinearity of the stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. The modeling fit, with its capability to handle actual testing data, can be employed as a tool to calculate the compression stress relaxation and residual seal force throughout the entire sealed product life cycle. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. The present work provides a new, integrated methodology framework and some fresh insights to the parenteral packaging industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. LAY ABSTRACT: Historical publications in the parenteral packaging industry have lacked systematic consideration of the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. This study applied compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. Experimental and modeling data show good consistency, demonstrating that the sealing force decays exponentially over time and eventually levels off. The nonlinearity of the stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. The present work provides a new, integrated methodology framework and some fresh insights to the industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. © PDA, Inc. 2018.
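In its simplest one-branch form, a Maxwell–Wiechert model modified with the KWW stretched exponential gives a sealing force that decays and levels off at a residual value. A sketch of that functional form (parameter values are illustrative, not the paper's fitted values):

```python
import numpy as np

def kww_seal_force(t, F0, F_inf, tau, beta):
    """Stretched-exponential (KWW) decay of sealing force:
    F(t) = F_inf + (F0 - F_inf) * exp(-(t / tau)**beta), 0 < beta <= 1.
    beta < 1 stretches the decay, mimicking the broad spectrum of
    relaxation times of a multi-branch Maxwell-Wiechert network."""
    return F_inf + (F0 - F_inf) * np.exp(-(t / tau) ** beta)

t = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])  # time, arbitrary units
print(kww_seal_force(t, F0=100.0, F_inf=40.0, tau=50.0, beta=0.4))
```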
Numerical solutions of Navier-Stokes equations for a Butler wing
NASA Technical Reports Server (NTRS)
Abolhassani, J. S.; Tiwari, S. N.
1985-01-01
The flow field is simulated on the surface of a given delta wing (Butler wing) at zero incidence in a uniform stream. The simulation is done by integrating a set of flow field equations. This set of equations governs the unsteady, viscous, compressible, heat-conducting flow of an ideal gas. The equations are written in curvilinear coordinates so that the wing surface is represented accurately. These equations are solved by the finite difference method, and results obtained for high-speed freestream conditions are compared with theoretical and experimental results. In this study, the Navier-Stokes equations are solved numerically. These equations are unsteady, compressible, viscous, and three-dimensional, without neglecting any terms. The time dependency of the governing equations allows the solution to progress naturally from an arbitrary initial guess to an asymptotic steady state, if one exists. The equations are transformed from physical coordinates to computational coordinates, allowing the solution of the governing equations in a rectangular parallelepiped domain. The equations are solved by the MacCormack time-split technique, which is vectorized and programmed to run on the CDC VPS 32 computer.
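For readers unfamiliar with the scheme, MacCormack's method is a two-step predictor–corrector for conservation laws; a one-dimensional sketch on the inviscid Burgers equation u_t + (u²/2)_x = 0 illustrates the idea (the full solver applies time-split versions of this to the 3-D Navier–Stokes equations; grid, time step, and boundary treatment here are illustrative assumptions):

```python
import numpy as np

def maccormack_burgers(u, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + f(u)_x = 0 with f = u^2/2
    and periodic boundaries."""
    f = lambda v: 0.5 * v ** 2
    for _ in range(steps):
        # Predictor: forward difference of the flux.
        up = u - dt / dx * (np.roll(f(u), -1) - f(u))
        # Corrector: backward difference on the predicted state, averaged.
        u = 0.5 * (u + up - dt / dx * (f(up) - np.roll(f(up), 1)))
    return u

x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
u = maccormack_burgers(1.0 + 0.5 * np.sin(x), dx=x[1] - x[0], dt=0.01, steps=100)
```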
Lossless compression of image data products on the FIFE CD-ROM series
NASA Technical Reports Server (NTRS)
Newcomer, Jeffrey A.; Strebel, Donald E.
1993-01-01
How do you store enough of the key data sets, from a total of 120 gigabytes of data collected for a scientific experiment, on a collection of CD-ROMs small enough to distribute to a broad scientific community? In such an application, where information loss is unacceptable, lossless compression algorithms are the only choice. Although lossy compression algorithms can provide an order of magnitude improvement in compression ratios over lossless algorithms, the information that is lost is often part of the key scientific precision of the data. Therefore, lossless compression algorithms are and will continue to be extremely important in minimizing archival storage requirements and distribution of large earth and space science (ESS) data sets while preserving the essential scientific precision of the data.
A real-time interferometer technique for compressible flow research
NASA Technical Reports Server (NTRS)
Bachalo, W. D.; Houser, M. J.
1984-01-01
Strengths and shortcomings in the application of interferometric techniques to transonic flow fields are examined and an improved method is elaborated. Such applications have demonstrated the value of interferometry in obtaining data for compressible flow research. With holographic techniques, interferometry may be applied in large-scale facilities without the use of expensive optics or elaborate vibration isolation equipment. Results obtained using holographic interferometry and other methods demonstrate that reliable qualitative and quantitative data can be acquired. Nevertheless, the conventional method can be difficult to set up and apply, and it cannot produce real-time data. A new interferometry technique is investigated that promises to be easier to apply and can provide real-time information. This single-beam technique has the necessary insensitivity to vibration for large-scale wind tunnel operations. Capabilities of the method and preliminary tests on some laboratory-scale flow fields are described.
Force balancing in mammographic compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branderhorst, W., E-mail: w.branderhorst@amc.nl; Groot, J. E. de; Lier, M. G. J. T. B. van
Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast compression, even small changes in the image receptor height can lead to a severe imbalance of the applied forces. This may make the procedure more painful than necessary and, in case the image receptor is set too low, may lead to image quality issues and increased radiation dose due to undercompression. In practice, these effects can be reduced by monitoring the force imbalance and actively adjusting the position of the image receptor throughout the compression.
Short-fibre reinforcement of calcium phosphate bone cement.
Buchanan, F; Gallagher, L; Jack, V; Dunne, N
2007-02-01
Calcium phosphate cement (CPC) sets to form hydroxyapatite, a major component of mineral bone, and is gaining increasing interest in bone repair applications. However, concerns regarding its brittleness and tendency to fragment have limited its widespread use. In the present study, short-fibre reinforcement of an apatitic calcium phosphate has been investigated to improve the fracture behaviour. The fibres used were polypropylene (PP) fibres, 50 microm in diameter and reduced in length by cryogenic grinding. The compressive strength and fracture behaviour were examined. Fibre addition of up to 10 wt % had a significant effect on composite properties, with the energy absorbed during failure being significantly increased, although this tended to be accompanied with a slight drop in compressive strength. The fibre reinforcement mechanisms appeared to be crack bridging and fibre pull-out. The setting time of the CPC with fibre reinforcement was also investigated and was found to increase with fibre volume fraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-01-05
SandiaMCR was developed to identify pure components and their concentrations from spectral data. This software efficiently implements multivariate curve resolution alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD). Version 3.37 also includes the PARAFAC-ALS and Tucker-1 (for trilinear analysis) algorithms. The alternating least squares methods can be used to determine the composition without, or with incomplete, prior information on the constituents and their concentrations. The software allows the specification of numerous preprocessing, initialization, and data selection and compression options for the efficient processing of large data sets, including the definition of equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices, and data compression. The software has been designed to provide a practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.
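At its core, MCR-ALS alternates two constrained least-squares solves for the bilinear model D ≈ C Sᵀ (concentrations times spectra). A bare-bones sketch with non-negativity enforced by clipping (a real implementation such as SandiaMCR adds the weighting, equality constraints, and compression options described above):

```python
import numpy as np

def mcr_als(D, k, iters=200, seed=0):
    """Alternating least squares for D ~ C @ S.T with non-negativity.
    D: (samples x channels), C: (samples x k), S: (channels x k)."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], k))
    for _ in range(iters):
        # Solve for concentrations given spectra, then clip to >= 0.
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
        # Solve for spectra given concentrations, then clip to >= 0.
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    return C, S
```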
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
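In this setting a vector quantizer's codebook is a clustering of the cross-channel pixel vectors, and each pixel then stores only a codebook index. A hedged sketch with k-means standing in for codebook training (the original work used its own training procedure and dedicated hardware):

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_compress(cube, codebook_size=256, seed=0):
    """cube: (rows, cols, channels). Each pixel's cross-channel vector is
    replaced by the index of its nearest codebook vector."""
    h, w, c = cube.shape
    vecs = cube.reshape(-1, c).astype(float)
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=seed).fit(vecs)
    return km.labels_.reshape(h, w), km.cluster_centers_

def vq_reconstruct(indices, codebook):
    # RMS error between original and reconstruction measures the quality loss.
    return codebook[indices]  # (rows, cols, channels)
```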
Effects of the closing speed of stapler jaws on bovine pancreases.
Chikamoto, Akira; Hashimoto, Daisuke; Ikuta, Yoshiaki; Tsuji, Akira; Abe, Shinya; Hayashi, Hiromitsu; Imai, Katsunori; Nitta, Hidetoshi; Ishiko, Takatoshi; Watanabe, Masayuki; Beppu, Toru; Baba, Hideo
2014-01-01
The division of the pancreatic parenchyma using a stapler is important in pancreatic surgery, especially laparoscopic surgery. However, this procedure has not yet been standardized. We analyzed the effects of the closing speed of stapler jaws on bovine pancreases, assigning 10 min to the slow compression method, 5 min to the medium-fast compression method, and 30 s to the rapid compression (RC) method. The time allotted to holding (3 min) and dividing (30 s) was equal under each testing situation. We found that the RC method showed a higher pressure tolerance than the other two groups (rapid, 126 ± 49.0 mmHg; medium-fast, 55.5 ± 25.8 mmHg; slow, 45.0 ± 15.7 mmHg; p < 0.01), although the histological findings of the cut end were similar. The histological findings of the pancreatic capsule and parenchyma after compression by the stapler jaws without firing were also similar. RC may provide an advantage as measured by pressure tolerance. A small series of distal pancreatectomies comparing different stapler jaw closing speeds is required to confirm the feasibility of these results and the advantages of the RC method under various settings.
Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A
2017-08-01
To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact areas between breast and paddle, both capacitively using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides in order to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of -0.0038 dm² for the capacitive measurement (SD 0.0658, 95% limits of agreement [-0.1329, 0.1252]) and -0.0035 dm² for the image processing software [SD 0.0962, 95% limits of agreement (-0.1921, 0.1850)]. The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real-time using the capacitive method, and retrospectively using image processing software. This result is beneficial for scientific research, data analysis and quality control systems that depend on one of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
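The agreement statistics quoted above follow the standard Bland–Altman recipe: the bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 SD. A minimal sketch (the arrays are synthetic stand-ins for the contact-area measurements):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

manual = np.array([1.10, 0.85, 1.40, 0.95])       # dm^2, illustrative values
capacitive = np.array([1.12, 0.84, 1.38, 0.97])
print(bland_altman(manual, capacitive))
```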
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
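The Tucker compression step amounts to a truncated SVD along each mode of the tensor (the HOSVD). A single-node sketch conveys the idea; the paper's contribution is the distributed-memory version, and the ranks here are illustrative:

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated higher-order SVD: one factor matrix per mode plus a small
    core tensor. Storage drops from prod(shape) to prod(ranks) + factors."""
    factors, core = [], X
    for mode, r in enumerate(ranks):
        # Unfold the original tensor along `mode`; keep r left singular vectors.
        M = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        factors.append(U)
        # Contract the core with U^T along this mode.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

X = np.random.rand(32, 32, 32)
core, factors = hosvd(X, ranks=(8, 8, 8))  # 64x fewer core entries than X
```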
Computing interface motion in compressible gas dynamics
NASA Technical Reports Server (NTRS)
Mulder, W.; Osher, S.; Sethian, James A.
1992-01-01
An analysis is conducted of the coupling of Osher and Sethian's (1988) 'Hamilton-Jacobi' level set formulation of the equations of motion for propagating interfaces to a system of conservation laws for compressible gas dynamics, giving attention to both the conservative and nonconservative differencing of the level set function. The capabilities of the method are illustrated in view of the results of numerical convergence studies of the compressible Rayleigh-Taylor and Kelvin-Helmholtz instabilities for air-air and air-helium boundaries.
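In the level set formulation referenced here, the interface is the zero contour of a function φ advected passively by the gas velocity; writing out the standard forms (not quoted from the paper):

```latex
% Interface = { x : \varphi(x,t) = 0 }, advected by the gas velocity u.
% Nonconservative (advection) form:
\partial_t \varphi + u \cdot \nabla \varphi = 0
% Conservative variant, coupled to the mass conservation of the gas:
\partial_t (\rho \varphi) + \nabla \cdot (\rho \varphi\, u) = 0
```

Combined with the continuity equation for ρ, the two forms are analytically equivalent, which is why the choice between conservative and nonconservative differencing is a purely numerical one.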
Pressure Regulators as Valves for Saving Compressed Air and their Influence on System Dynamics
NASA Astrophysics Data System (ADS)
Dvořák, Lukáš; Fojtášek, Kamil
2015-05-01
Pressure regulators in the field of pneumatic mechanisms can be used as valves for saving compressed air. For example, the pressure can be reduced when the piston rod retracts unloaded, saving some energy. However, the problem is that a saving valve can significantly affect the dynamics of the pneumatic system: the lower pressure in the piston rod chamber extends the time needed to retract the piston rod. This article compares experimentally determined and calculated air consumption, measured pressure curves in the cylinder chambers, and piston speed for different settings of the saving valve.
Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach.
Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab
2018-01-16
Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR = 6 and PRD = 1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.
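The two figures of merit used above have standard definitions: CR is original size over compressed size, and PRD is the root-mean-square reconstruction error relative to the signal energy, in percent. A short sketch (conventions for mean removal in PRD vary; this assumes the raw-signal form):

```python
import numpy as np

def compression_ratio(n_original_bits, n_compressed_bits):
    """CR: size of the original bitstream over the compressed bitstream."""
    return n_original_bits / n_compressed_bits

def prd(x, x_rec):
    """Percentage root-mean-square difference between the original ECG x
    and its reconstruction x_rec (raw form, no mean subtraction)."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```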
Accelerated radial Fourier-velocity encoding using compressed sensing.
Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert
2014-09-01
Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI measures only a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of the spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in the acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high-resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying 12.6-fold retrospective undersampling, a data set was generated corresponding to a 3:10 min acquisition time, similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels on the order of the voxel size. Thus, compared to normal Phase Contrast measurements delivering only mean velocities, no additional scan time is necessary to retrieve meaningful velocity spectra in small vessels. Copyright © 2013. Published by Elsevier GmbH.
Kibadi, K
2018-02-01
The author reports the surgical management of a patient with elephantiasis of the leg in the Democratic Republic of Congo. A fasciotomy and lymphangiectomy with skin preservation, combined with compression therapy, resulted in significant cosmetic, functional, and social improvement. Although challenging in a resource-limited setting, development of surgical management may make it possible to reduce beliefs that elephantiasis is incurable or due to witchcraft and may reduce time to consultation.
Distribution to the Astronomy Community of the Compressed Digitized Sky Survey
NASA Astrophysics Data System (ADS)
Postman, Marc
1996-03-01
The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two-volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey, which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.
Lyon, R M; Clarke, S; Gowens, P; Egan, G; Clegg, G R
2010-12-01
Out-of-hospital cardiac arrest (OHCA) is a leading cause of pre-hospital mortality. Chest compressions performed during cardiopulmonary resuscitation aim to provide adequate perfusion to the vital organs during cardiac arrest. Poor resuscitation technique and the quality of pre-hospital CPR influence outcome from OHCA. Transthoracic impedance (TTI) measurement is a useful tool in the assessment of the quality of pre-hospital resuscitation by ambulance crews, but TTI telemetry has not yet been performed in the United Kingdom. We describe a pilot study to implement a data network to collect defibrillator TTI data via telemetry from ambulances. Prospective, observational pilot study over a 5-month period. Modems were fitted to 40 defibrillators on ambulances based in Edinburgh. TTI data were sent to a receiving computer after resuscitation attempts for OHCA. 58 TTI traces were transmitted during the pilot period. Compliance with the telemetry system was high. The mean ratio of chest compressions was 73% (95% CI 69-77%), and the mean chest compression rate was 128 (95% CI 122-134). The mean time interval from chest compression interruption to shock delivery was 27 s (95% CI 22-32 s). Transthoracic impedance analysis is an effective means of recording important measures of resuscitation quality, including hands-on-the-chest time, compression rate, and defibrillation interval time. TTI data transmission via telemetry is straightforward, efficient, and allows resuscitation data to be captured and analysed from a large geographical area. Further research is warranted on the impact of post-resuscitation reporting on the quality of resuscitation delivered by ambulance crews. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Tiny videos: a large data set for nonparametric video retrieval and frame classification.
Karpenko, Alexandre; Aarabi, Parham
2011-03-01
In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed that allow the scores of mixtures of analytes to be determined directly, on the fly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed, using mathematics borrowed from genetic algorithm work, as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from the analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
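A compressive filter in this sense is a binary mask applied inside the spectrometer, so each filter yields a single recorded inner product rather than a full spectrum. A toy sketch using Hadamard-matrix rows as Walsh-like functions and a ±1 fourfold combination (row indices and signs are arbitrary assumptions; the paper selects them by a genetic-algorithm search, and sequency-ordered Walsh functions would be used in practice):

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                        # spectral channels (power of 2)
H = hadamard(n)               # rows act as Walsh-like binary functions
# Illustrative filter: a fourfold +/- combination of four Walsh rows.
filt = H[3] + H[10] - H[21] - H[36]

spectrum = np.random.rand(n)  # stand-in for a measured mixture spectrum
score = filt @ spectrum       # one number recorded, instead of n channels
```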
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R. [Albuquerque, NM]
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
DOT National Transportation Integrated Search
2009-01-01
There is increasing pressure from owners, contractors, and the public to open bridge decks sooner to full : traffic loads. As a result, a set of criteria or guidelines is needed to determine when concrete bridge decks can : safely be opened. Today, c...
GUERREIRO-TANOMARU, Juliane Maria; VÁZQUEZ-GARCÍA, Fernando Antonio; BOSSO-MARTELO, Roberta; BERNARDI, Maria Inês Basso; FARIA, Gisele; TANOMARU, Mario
2016-01-01
Objective: Mineral Trioxide Aggregate (MTA) is a calcium silicate cement composed of Portland cement (PC) and bismuth oxide. Hydroxyapatite has been incorporated to enhance the mechanical and biological properties of dental materials. This study evaluated the physicochemical and mechanical properties and antibiofilm activity of MTA and PC associated with zirconium oxide (ZrO2) and hydroxyapatite nanoparticles (HAn). Material and Methods: White MTA (Angelus, Brazil); PC (70%)+ZrO2 (30%); PC (60%)+ZrO2 (30%)+HAn (10%); and PC (50%)+ZrO2 (30%)+HAn (20%) were evaluated. The pH was assessed by a digital pH-meter and solubility by mass loss. Setting time was evaluated by using Gilmore needles. Compressive strength was analyzed by mechanical test. Samples were radiographed alongside an aluminum step wedge to evaluate radiopacity. For the antibiofilm evaluation, materials were placed in direct contact with E. faecalis biofilm induced on dentine blocks. The number of colony-forming units (CFU mL⁻¹) in the remaining biofilm was evaluated. The results were submitted to ANOVA and the Tukey test, with 5% significance. Results: There was no difference in pH levels of PC+ZrO2, PC+ZrO2+HAn (10%) and PC+ZrO2+HAn (20%) (p>0.05), and these cements presented higher pH levels than MTA (p<0.05). The highest solubility was observed in PC+ZrO2+HAn (10%) and PC+ZrO2+HAn (20%) (p<0.05). MTA had the shortest initial setting time (p<0.05). All the materials showed radiopacity higher than 3 mmAl. PC+ZrO2 and MTA had the highest compressive strength (p<0.05). The materials did not completely neutralize the bacterial biofilm, but the association with HAn provided greater bacterial reduction than MTA and PC+ZrO2 (p<0.05) after the post-manipulation period of 2 days. Conclusions: The addition of HAn to PC associated with ZrO2 impaired the compressive strength and solubility. On the other hand, HAn did not change the pH or the initial setting time, but improved the radiopacity (HAn 10%), the final setting time, and the E. faecalis antibiofilm activity of the cement. PMID:27383700
Epineural adipose-derived stem cell injection in a sciatic rodent model.
Kappos, Elisabeth A; Baenziger-Sieber, Patricia; Tremp, Mathias; Engels, Patricia E; Thommen, Sarah; Sprenger, Lima; Benz, Robyn M; Schaefer, Dirk J; Schaeren, Stefan; Kalbermatten, Daniel Felix
2018-06-19
The aim was to evaluate the regenerative effect of epineural injection of rat ASCs (rASCs) in three different settings of acute and chronic compression in a rat sciatic nerve model. Acute compression (60 s) with a vessel clamp over a distance of 1 mm (group 1) or 10 mm (group 2), as well as chronic compression with a permanently remaining, nonabsorbable polymeric clip over a distance of 1 mm (group 3), was performed. Depending on the group, either 5 × 10⁶ rASCs or the same volume (25 μl) of culture medium (CM) was injected with a 30G needle into the epineurium at the time of compression. Outcome measures were functional gait evaluations, imaging analysis, histomorphometric analyses, and muscle weight. The rats in group 2 had better function than those in group 1 at 1 and especially at 2 weeks. After 4 weeks, however, almost all rats were close to normal function. The muscle weight ratio (MWR) was similar across all groups after 2 weeks, whereas after 4 weeks the MWR in group 3 was lower than in groups 1 and 2. Histomorphometric analysis showed better myelination in groups 1 and 2 compared with group 3 after 4 weeks. ASCs had a beneficial effect on myelin thickness (G-ratio). We successfully evaluated the regenerative effect of epineural injection of rASCs in three different settings of acute and chronic compression. However, there were no significant differences in outcomes between the ASC-treated and control groups. © 2018 The Authors. Brain and Behavior published by Wiley Periodicals, Inc.
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large, and the network may not be able to generalize well from large training sets. To reduce the size of the training sets, the SSME test-firing data are reduced using a learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data for training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using the complete data. The LVQ can also be used as a function approximator; its performance as a function approximator using reduced training sets is presented and compared with that of the backpropagation network.
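To make the reduction step concrete, here is a hedged sketch that shrinks a large training set to a small codebook with a simple competitive-learning update; it is a simplified stand-in for the paper's LVQ technique, and all sizes and rates are invented:

```python
import numpy as np

def quantize_training_set(X, n_codes=32, epochs=5, lr0=0.1, seed=0):
    """Reduce a large training set to a small codebook via competitive
    learning, a simplified stand-in for the LVQ-style reduction described."""
    rng = np.random.default_rng(seed)
    codes = X[rng.choice(len(X), size=n_codes, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # decaying learning rate
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.sum((codes - x) ** 2, axis=1))
            codes[winner] += lr * (x - codes[winner])  # move winner toward sample
    return codes

# Hypothetical sensor vectors: 10,000 samples reduced to 32 prototypes that
# would then train the backpropagation model in place of the raw data.
X = np.random.default_rng(2).normal(size=(10_000, 16))
prototypes = quantize_training_set(X)
print(prototypes.shape)  # (32, 16): a ~300x reduction of the training set
```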
NASA Technical Reports Server (NTRS)
Cambon, C.; Coleman, G. N.; Mansour, N. N.
1992-01-01
The effect of rapid mean compression on compressible turbulence at a range of turbulent Mach numbers is investigated. Rapid distortion theory (RDT) and direct numerical simulation results for the case of axial (one-dimensional) compression are used to illustrate the existence of two distinct rapid compression regimes. These regimes are set by the relationships between the timescales of the mean distortion, the turbulence, and the speed of sound. A general RDT formulation is developed and is proposed as a means of improving turbulence models for compressible flows.
Multiresolution With Super-Compact Wavelets
NASA Technical Reports Server (NTRS)
Lee, Dohyung
2000-01-01
The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing times as well as technical difficulties in analyzing the data. The excessive storage demand incurs a correspondingly huge penalty in I/O time, rendering time, and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which has a rather simple setting, computational field simulation data need more careful treatment when applying the multiresolution technique. While image data sit on a regularly spaced grid, simulation data usually reside on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of approximation. The advantages of the multiresolution algorithm are that no special treatment is required at the boundaries of the interval, and that the application to functions which are only piecewise continuous (internal boundaries) can be implemented efficiently. In this presentation, Beam's supercompact wavelets are generalized to higher dimensions using multidimensional scaling and wavelet functions rather than alternating the directions as in the 1D version. As a demonstration of actual 3D data compression, supercompact wavelet transforms are applied to a 3D data set of wing-tip vortex flow solutions (2.5 million grid points). It is shown that a high compression ratio (around 50:1) can be achieved for both vector and scalar data sets.
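The following hedged sketch illustrates the thresholding idea with ordinary Haar wavelets via PyWavelets on an invented 64^3 field; Beam and Warming's supercompact multi-wavelets, which the work above actually uses, generalize this with higher approximation order:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 64)
# Hypothetical smooth 3D field with mild noise, standing in for a flow solution.
field = (np.sin(4 * np.pi * x)[:, None, None]
         * np.cos(2 * np.pi * x)[None, :, None]
         * np.exp(-x)[None, None, :]
         + 0.01 * rng.normal(size=(64, 64, 64)))

# Multidimensional wavelet transform, then keep only the largest ~2% of
# coefficients, mimicking the ~50:1 ratio reported above.
coeffs = pywt.wavedecn(field, "haar", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
cut = np.quantile(np.abs(arr), 0.98)
arr_sparse = np.where(np.abs(arr) >= cut, arr, 0.0)

recon = pywt.waverecn(
    pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedecn"), "haar")
rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
print(f"relative L2 error at ~50:1 compression: {rel_err:.3e}")
```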
Effect of wastewater on properties of Portland pozzolana cement
NASA Astrophysics Data System (ADS)
Babu, G. Reddy
2017-07-01
This paper presents the effect of wastewaters on the properties of Portland pozzolana cement (PPC). Fourteen water treatment plants were identified in the Narasaraopet municipality region of Guntur district, Andhra Pradesh, India. Each plant sells approximately 3500-4000 L/day of potable water to consumers. All plants extract ground water and treat it by the Reverse Osmosis (RO) process. During treatment, the plants discharge approximately 100,000 L/day of wastewater into side drains in the Narasaraopet municipality. Physical and chemical analyses were carried out on the wastewater from all fourteen plants and on distilled water, following the procedures described in APHA. Based on the constituent concentrations in the wastewater, four representative plants, i.e., Narasaraopeta Engineering College (NECWW), Patan Khasim Charitable Trust (PKTWW), Mahmadh Khasim Charitable Trust (MKTWW) and Amara (ARWW), were considered. The effects of these four wastewaters on the physical properties of PPC, i.e., setting times, compressive strength, and flexural strength, were measured in the laboratory and compared with reference specimens made with distilled water (DW) as mixing water. The initial and final setting times of the wastewater mixes were slightly retarded compared with those of the reference water, but the change was not significant. Almost no change was observed in the 90-day compressive and flexural strengths of the wastewater specimens compared with the reference specimens. The XRD technique was employed to identify the main hydration compounds formed during hydration.
Birkun, Alexei; Glotov, Maksim; Ndjamen, Herman Franklin; Alaiye, Esther; Adeleke, Temidara; Samarin, Sergey
2018-01-01
To assess the effectiveness of telephone chest-compression-only cardiopulmonary resuscitation (CPR) guided by a pre-recorded instructional audio when compared with dispatcher-assisted resuscitation. It was a prospective, blind, randomised controlled study involving 109 medical students without previous CPR training. In a standardized mannequin scenario, after the step of dispatcher-assisted cardiac arrest recognition, the participants performed compression-only resuscitation guided over the telephone by either: (1) the pre-recorded instructional audio (n=57); or (2) verbal dispatcher assistance (n=52). The simulation video records were reviewed to assess the CPR performance using a 13-item checklist. The interval from call reception to the first compression, total number and rate of compressions, and total number and duration of pauses after the first compression were also recorded. There were no significant differences between the recording-assisted and dispatcher-assisted groups based on the overall performance score (5.6±2.2 vs. 5.1±1.9, P>0.05) or individual criteria of the CPR performance checklist. The recording-assisted group demonstrated a significantly shorter interval from call receipt to the first compression (86.0±14.3 vs. 91.2±14.2 s, P<0.05), a higher compression rate (94.9±26.4 vs. 89.1±32.8 min-1) and more compressions provided (170.2±48.0 vs. 156.2±60.7). When provided by untrained persons in the simulated settings, compression-only resuscitation guided by the pre-recorded instructional audio is no less efficient than dispatcher-assisted CPR. Future studies are warranted to further assess the feasibility of using an instructional audio aid as a potential alternative to dispatcher assistance.
Evaluation of a method for enhancing interaural level differences at low frequencies.
Moore, Brian C J; Kolarik, Andrew; Stone, Michael A; Lee, Young-Woo
2016-10-01
A method (called binaural enhancement) for enhancing interaural level differences at low frequencies, based on estimates of interaural time differences, was developed and evaluated. Five conditions were compared, all using simulated hearing-aid processing: (1) Linear amplification with frequency-response shaping; (2) binaural enhancement combined with linear amplification and frequency-response shaping; (3) slow-acting four-channel amplitude compression with independent compression at the two ears (AGC4CH); (4) binaural enhancement combined with four-channel compression (BE-AGC4CH); and (5) four-channel compression but with the compression gains synchronized across ears. Ten hearing-impaired listeners were tested, and gains and compression ratios for each listener were set to match targets prescribed by the CAM2 fitting method. Stimuli were presented via headphones, using virtualization methods to simulate listening in a moderately reverberant room. The intelligibility of speech at ±60° azimuth in the presence of competing speech on the opposite side of the head at ±60° azimuth was not affected by the binaural enhancement processing. Sound localization was significantly better for condition BE-AGC4CH than for condition AGC4CH for a sentence, but not for broadband noise, lowpass noise, or lowpass amplitude-modulated noise. The results suggest that the binaural enhancement processing can improve localization for sounds with distinct envelope fluctuations.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in patient monitoring. Quality control in ECG compression ensures reconstruction quality and clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, was set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT arrhythmia data and with 60 normal and 30 diagnostic ECG data sets from the PTB diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb data 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
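A hedged sketch of the error-control (EC) path: keep the fewest principal components meeting an error bound. The error measure here is a plain percentage RMS difference against the raw signal norm, a simplification of the paper's PRDN, and the beat matrix is synthetic:

```python
import numpy as np

def pca_compress_beats(beats, prd_limit=5.0):
    """Keep the fewest principal components whose reconstruction meets a
    percentage-RMS-difference bound (a simplified error-control criterion)."""
    mean = beats.mean(axis=0)
    U, s, Vt = np.linalg.svd(beats - mean, full_matrices=False)
    for k in range(1, len(s) + 1):
        recon = mean + (U[:, :k] * s[:k]) @ Vt[:k]
        prd = 100 * np.linalg.norm(recon - beats) / np.linalg.norm(beats)
        if prd <= prd_limit:
            return k, recon
    return len(s), beats

# Hypothetical beat matrix: 200 extracted beats x 400 samples, two shapes.
rng = np.random.default_rng(4)
t = np.linspace(0, 2 * np.pi, 400)
modes = np.stack([np.sin(t) ** 3, np.exp(-((t - 2.0) ** 2))])
amps = 1 + 0.2 * rng.normal(size=(200, 2))        # beat-to-beat variation
beats = amps @ modes + 0.01 * rng.normal(size=(200, 400))

k, recon = pca_compress_beats(beats)
print(k)  # only k score vectors plus eigenvectors need encoding and storage
```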
Service Lifetime Estimation of EPDM Rubber Based on Accelerated Aging Tests
NASA Astrophysics Data System (ADS)
Liu, Jie; Li, Xiangbo; Xu, Likun; He, Tao
2017-04-01
Service lifetime of ethylene propylene diene monomer (EPDM) rubber at room temperature (25 °C) was estimated based on accelerated aging tests. The study followed sealing stress loss on compressed cylinder samples by compression stress relaxation methods. The results showed that the EPDM cylinder samples quickly reach physical relaxation equilibrium when the over-compression method is used. Non-Arrhenius behavior occurred at the lowest aging temperature. A significant linear relationship was observed between compression set values and normalized stress decay results, and the relationship did not depend on the ambient aging temperature. It was estimated that, for practical applications, sealing stress loss would occur after around 86.8 years at 25 °C. The estimates at 25 °C based on the non-Arrhenius behavior agreed with compression set data from storage aging tests in the natural environment.
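For the standard Arrhenius part of such an extrapolation, the arithmetic looks like the hedged sketch below; the failure times are invented, and the study's point is precisely that non-Arrhenius behavior at the lowest temperature makes this baseline step insufficient on its own:

```python
import numpy as np

# Hypothetical accelerated-aging results: days for compression set to reach
# a failure threshold at several elevated temperatures (invented numbers).
T_C = np.array([70.0, 85.0, 100.0, 115.0])
t_fail_days = np.array([1200.0, 420.0, 160.0, 65.0])

# Arrhenius model: ln(t_fail) = Ea/(R*T) + const, with T in kelvin.
R = 8.314
invT = 1.0 / (T_C + 273.15)
slope, intercept = np.polyfit(invT, np.log(t_fail_days), 1)
print(f"apparent activation energy ~ {slope * R / 1000:.1f} kJ/mol")

# Extrapolate to the service temperature; non-Arrhenius behavior at the
# lowest aging temperature is exactly what limits this step in practice.
t_25_years = np.exp(intercept + slope / (25.0 + 273.15)) / 365.0
print(f"estimated lifetime at 25 C: {t_25_years:.0f} years")
```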
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
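A hedged sketch of the coefficient-nulling experiment, assuming PyWavelets (db3, whose six filter taps match DAUB6) and a synthetic DEM tile; the derivative check reflects the point above that slope quality, not just elevation error, is what matters:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(5)
# Hypothetical DEM tile: smooth terrain plus fine texture, 256x256 floats.
yy, xx = np.mgrid[0:256, 0:256] / 256.0
dem = 500 + 80 * np.sin(3 * xx) * np.cos(2 * yy) + rng.normal(scale=0.5, size=(256, 256))

# Decompose, then zero the smallest 95 percent of coefficients as in the study.
coeffs = pywt.wavedec2(dem, "db3", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
cut = np.quantile(np.abs(arr), 0.95)
arr[np.abs(arr) < cut] = 0.0
recon = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db3")[:256, :256]

# The relevant quality measure is the derivative, not just the elevation:
gy, gx = np.gradient(dem)
ry, rx = np.gradient(recon)
slope_err = np.hypot(ry - gy, rx - gx)
print(f"elevation RMSE {np.sqrt(np.mean((recon - dem) ** 2)):.3f} m, "
      f"median slope residual {np.median(slope_err):.4f}")
```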
Negim, El-Sayed; Kozhamzharova, Latipa; Gulzhakhan, Yeligbayeva; Khatib, Jamal; Bekbayeva, Lyazzat; Williams, Craig
2014-01-01
This paper investigates the physicomechanical properties of mortar containing a high volume of fly ash (FA) as partial replacement of cement in the presence of copolymer latexes. Portland cement (PC) was partially replaced with 0, 10, 20, 30, 50, and 60% FA. The copolymer latexes used were based on 2-hydroxyethyl acrylate (2-HEA) and 2-hydroxyethyl methacrylate (2-HEMA). Testing included workability, setting time, absorption, chemically combined water content, compressive strength, and scanning electron microscopy (SEM). The addition of FA to mortar as a replacement of PC affected the physicomechanical properties of the mortar. As the content of FA in the mortar increased, the setting times (initial and final) were prolonged. The results obtained at 28 days of curing indicate that the best mortar properties occur at around 30% FA. Beyond 30% FA the properties of the mortar decline, and at 60% FA they are lower than those of the reference mortar without FA. However, the addition of polymer latexes to mortar containing FA improved most of the physicomechanical properties at all curing times. Compressive strength, combined water, and workability of mortar containing FA premixed with latexes are higher than those of mortar containing FA without latexes.
Fischer, Henrik; Neuhold, Stephanie; Hochbrugger, Eva; Steinlechner, Barbara; Koinig, Herbert; Milosevic, Ljubisa; Havel, Christof; Frantal, Sophie; Greif, Robert
2011-04-01
Cardiopulmonary resuscitation (CPR) during flight is challenging and has to be sustained for long periods. In this setting a mechanical resuscitation device (MRD) might improve performance. In this study we compared the quality of resuscitation of trained flight attendants practicing either standard basic life support (BLS) or using an MRD in a cabin simulator. Prospective, open, randomized and crossover simulation study. Study participants, competent in standard BLS, were trained to use the MRD to deliver both chest compressions and ventilation. 39 teams of two rescuers resuscitated a manikin for 12 min in random order, standard BLS or mechanically assisted resuscitation. The primary outcome was "absolute hands-off time" (the sum of all periods during which no hand was placed on the chest, minus ventilation time). Various parameters describing the quality of chest compression and ventilation were analysed as secondary outcome parameters. Use of the MRD led to significantly less absolute hands-off time (164±33 s vs. 205±42 s, p<0.001). The quality of chest compression was comparable between groups, except for a higher compression rate in the standard BLS group (123±14 min-1 vs. 95±11 min-1, p<0.001). Tidal volume was higher in the standard BLS group (0.48±0.14 l vs. 0.34±0.13 l, p<0.001), but we registered fewer gastric inflations in the MRD group (0.4±0.3% vs. 16.6±16.9%, p<0.001). Using the MRD resulted in significantly less absolute hands-off time, but less effective ventilation. Whether a higher chest compression rate translates into better outcome, as shown previously in other studies, has to be investigated in a human outcome study. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
General Equation Set Solver for Compressible and Incompressible Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Sondak, Douglas L.; Dorney, Daniel J.
2002-01-01
Turbomachines for propulsion applications operate with many different working fluids and flow conditions. The flow may be incompressible, such as in the liquid hydrogen pump in a rocket engine, or supersonic, such as in the turbine which may drive the hydrogen pump. Separate codes have traditionally been used for incompressible and compressible flow solvers. The General Equation Set (GES) method can be used to solve both incompressible and compressible flows, and it is not restricted to perfect gases, as are many compressible-flow turbomachinery solvers. An unsteady GES turbomachinery flow solver has been developed and applied to both air and water flows through turbines. It has been shown to be an excellent alternative to maintaining two separate codes.
Compressed Sensing for Metrics Development
NASA Astrophysics Data System (ADS)
McGraw, R. L.; Giangrande, S. E.; Liu, Y.
2012-12-01
Models by their very nature tend to be sparse, in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see, for example, Candes et al., 2006), to the FASTER project's needs for new approaches to model evaluation and metrics development. The CS approach is illustrated for a time series generated using a few-parameter (i.e., sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g., evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
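The recovery experiment can be imitated in a few lines. This hedged sketch uses orthogonal matching pursuit, a greedy stand-in for the convex recovery analyzed by Candes, Romberg, and Tao; the dictionary, sparsity, and sample count are all invented:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, m = 512, 3, 40   # series length, model sparsity, number of samples

# Sparse model: the "time series" is a few sinusoids from a large dictionary.
t = np.arange(n) / n
dictionary = np.stack([np.cos(2 * np.pi * f * t) for f in range(1, 129)], axis=1)
true_idx = rng.choice(128, size=k, replace=False)
signal = dictionary[:, true_idx] @ np.array([1.0, -0.7, 0.4])

# Seemingly incomplete measurements: just m random sampling times.
rows = rng.choice(n, size=m, replace=False)
A, y = dictionary[rows], signal[rows]

# Orthogonal matching pursuit: greedily pick the best-correlated atom,
# refit by least squares, and repeat on the residual.
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
print(sorted(support), sorted(true_idx.tolist()))  # recovered vs true terms
```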
Lifetime Extension Report: Progress on the SAVY-4000 Lifetime Extension Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welch, Cynthia F.; Smith, Paul Herrick; Weis, Eric M.
The 3-year accelerated aging study of the SAVY-4000 O-ring shows very little evidence of significant degradation in samples subjected to aggressive elevated-temperature and radiation conditions. Whole-container thermal aging studies followed by helium leakage testing and compression set measurements were used to establish an estimated failure criterion for O-ring compression set of ≥65%. The whole-container aging studies further show that the air flow and efficiency functions of the filter do not degrade significantly after thermal aging. However, degradation of the water-resistant function leads to water penetration failure after four months at 210°C, but does not cause failure after 10 months at 120°C (130°C is the maximum operating temperature for the PTFE membrane). The thermal aging data for O-ring compression set do not meet the assumptions of standard time-temperature superposition analysis for accelerated aging studies. Instead, the data suggest that multiple degradation mechanisms are operative, with a reversible mechanism operative at low aging temperatures and an irreversible mechanism dominating at high aging temperatures. To distinguish between these mechanisms, we have measured compression set after allowing the sample to physically relax, thereby minimizing the effect of the reversible mechanism. The resulting data were analyzed using two distinct mathematical methods to obtain a lifetime estimate based on chemical degradation alone. Both methods support a lifetime estimate of greater than 150 years at 80°C. Although the role of the reversible mechanism is not fully understood, it is clear that its contribution to the total compression set is small in comparison to that of the chemical degradation mechanism. To better understand the chemical degradation mechanism, thermally aged O-ring samples have been characterized by Fourier transform infrared (FTIR) spectroscopy, electron paramagnetic resonance (EPR), gel permeation chromatography (GPC), and differential scanning calorimetry (DSC). These experiments detect no significant O-ring degradation below 80°C. Furthermore, durometer measurements indicate no significant change in O-ring hardness at any of the aging conditions examined. Therefore, our current conservative lifetime estimate for the O-ring and the filter is 10 years at 80°C. In FY17, we will continue to probe the chemical degradation mechanism using oxygen consumption measurements under accelerated aging conditions to reveal the temperatures at which oxidation occurs, along with any differences in oxidation rate at the low vs. high aging temperatures. We will also refine the failure criteria and finalize the radiation/thermal synergistic studies to determine a final design lifetime.
High-performance 3D compressive sensing MRI reconstruction.
Kim, Daehyun; Trzasko, Joshua D; Smelyanskiy, Mikhail; Haider, Clifton R; Manduca, Armando; Dubey, Pradeep
2010-01-01
Compressive Sensing (CS) is a nascent sampling and reconstruction paradigm that describes how sparse or compressible signals can be accurately approximated using many fewer samples than traditionally believed. In magnetic resonance imaging (MRI), where scan duration is directly proportional to the number of acquired samples, CS has the potential to dramatically decrease scan time. However, the computationally expensive nature of CS reconstructions has so far precluded their use in routine clinical practice; instead, more easily generated but lower-quality images continue to be used. We investigate the development and optimization of a proven inexact quasi-Newton CS reconstruction algorithm on several modern parallel architectures, including CPUs, GPUs, and Intel's Many Integrated Core (MIC) architecture. Our (optimized) baseline implementation on a quad-core Core i7 is able to reconstruct a 256×160×80 volume of the neurovasculature from an 8-channel, 10× undersampled data set within 56 seconds, which is already a significant improvement over existing implementations. The latest six-core Core i7 reduces the reconstruction time further to 32 seconds. Moreover, we show that the CS algorithm benefits from modern throughput-oriented architectures. Specifically, our CUDA-based implementation on an NVIDIA GTX480 reconstructs the same dataset in 16 seconds, while Intel's Knights Ferry (KNF) of the MIC architecture reduces the time further to 12 seconds. This level of performance allows the neurovascular dataset to be reconstructed within a clinically viable time.
Recent advances in lossy compression of scientific floating-point data
NASA Astrophysics Data System (ADS)
Lindstrom, P.
2017-12-01
With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
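As a usage illustration, the sketch below compresses a smooth synthetic field with ZFP's fixed-accuracy mode via the zfpy Python bindings (assumed installed; the call signature follows the zfpy documentation):

```python
import numpy as np
import zfpy  # Python bindings for the ZFP compressor, assumed installed

rng = np.random.default_rng(7)
x = np.linspace(0.0, 4.0 * np.pi, 256)
# Smooth simulation-like field; such data compresses far better than noise.
field = np.sin(x)[:, None] * np.cos(x)[None, :] + 0.001 * rng.normal(size=(256, 256))

# Fixed-accuracy mode: ZFP bounds the maximum absolute reconstruction error.
buf = zfpy.compress_numpy(field, tolerance=1e-4)
recon = zfpy.decompress_numpy(buf)

print(f"compression ratio {field.nbytes / len(buf):.1f}:1, "
      f"max error {np.max(np.abs(recon - field)):.2e}")
```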
ERIC Educational Resources Information Center
Buttermore, John; Baker, Eliott; Culp, David
2014-01-01
Time-compressed courses at state-supported universities have served a variety of purposes. It has been traditional to set scheduling policies that would make summer and winter sessions self-funding. They have not been viewed, however, as potential enrollment generators. This is an opportunity that can no longer be overlooked. This paper describes…
Summary of Research 1997, Department of Mathematics.
1999-01-01
problems. This capability is especially important at the present time when technology in general, and information operations in particular, are changing... compression algorithms, especially the Radiant TIN algorithm and its use on tactical imagery. SUMMARY: Several aspects of this problem were... points are not always the same, especially when bifurcation occurs. The equilibrium sets of control systems and their bifurcations are classified based
Effect of different mixing methods on the physical properties of Portland cement.
Shahi, Shahriar; Ghasemi, Negin; Rahimi, Saeed; Yavari, Hamidreza; Samiei, Mohammad; Jafari, Farnaz
2016-12-01
Portland cement is a hydrophilic cement; as a result, the powder-to-liquid ratio affects the properties of the final mix. In addition, the mixing technique affects hydration. The aim of this study was to evaluate the effect of different mixing techniques (conventional, amalgamator and ultrasonic) on selected physical properties of Portland cement. The physical properties were evaluated according to the ISO 6876:2001 specification. One hundred sixty-two samples of Portland cement were prepared across the three mixing techniques for each physical property (six samples per group). Data were analyzed using descriptive statistics, one-way ANOVA and post hoc Tukey tests. Statistical significance was set at P<0.05. The mixing technique had no significant effect on the compressive strength, film thickness and flow of Portland cement (P>0.05). Dimensional change (shrinkage), solubility and pH increased significantly with the amalgamator and ultrasonic mixing techniques (P<0.05). The ultrasonic technique significantly decreased working time, and the amalgamator and ultrasonic techniques significantly decreased the setting time (P<0.05). The mixing technique exerted no significant effect on the flow, film thickness and compressive strength of Portland cement samples. Key words: Physical properties, Portland cement, mixing methods.
Wang, Yan-Shuai; Dai, Jian-Guo; Wang, Lei; Tsang, Daniel C W; Poon, Chi Sun
2018-01-01
Inorganic binder-based stabilization/solidification (S/S) of Pb-contaminated soil is a commonly used remediation approach. This paper investigates the influence of soluble Pb species on the hydration process of two types of inorganic binders: ordinary Portland cement (OPC) and magnesium potassium phosphate cement (MKPC). The environmental leachability, compressive strength, and setting time of the cement products are assessed as the primary performance indicators. The mechanisms of Pb involvement in the hydration process are analyzed through X-ray diffraction (XRD), hydration heat evolution, and thermogravimetric analyses. Results show that the presence of Pb has an adverse impact on the compressive strength (decreased by 30.4%) and the final setting time (prolonged by 334.7%) of OPC, but it exerts much less influence on those of MKPC. The reduced strength and delayed setting are attributed to the retarded hydration reaction rate of OPC during the induction period. These results suggest that OPC-based S/S of soluble Pb mainly depends on physical encapsulation by calcium-silicate-hydrate (CSH) gels. In contrast, in the case of the MKPC-based S/S process, chemical stabilization with residual phosphate (pyromorphite and lead phosphate precipitation) and physical fixation by cementitious struvite-K are the major mechanisms. Therefore, MKPC is a more efficient and chemically stable inorganic binder for the Pb S/S process. Copyright © 2017 Elsevier Ltd. All rights reserved.
An Injectable Glass Polyalkenoate Cement Engineered for Fracture Fixation and Stabilization
Peel, Sean A. F.; Towler, Mark R.
2017-01-01
Glass polyalkenoate cements (GPCs) have potential as bio-adhesives due to their ease of application, appropriate mechanical properties, radiopacity and chemical adhesion to bone. Aluminium (Al)-free GPCs have been discussed in the literature, but it has proven difficult to balance injectability with mechanical integrity; for example, zinc-based, Al-free GPCs have reported compressive strengths of 63 MPa but set in under 2 min. Here, the authors design injectable GPCs (IGPCs) based on zinc-containing, Al-free silicate compositions containing GeO2 substituted for ZnO in 3% increments through the series. The setting reactions, injectability and mechanical properties of these GPCs were evaluated using both a hand-mix (h) technique, with a spatula for sample preparation and application, and an injection (i) technique, using a 16-gauge needle post mixing for application. The GPCs' ability to act as a carrier for bovine serum albumin (BSA) was also evaluated. Germanium (Ge)- and BSA-containing IGPCs were produced with working times between 26 and 44 min and setting times between 37 and 55 min, the extended handling properties resulting from lower Ge content. The incorporation of BSA into the cement had no effect on the handling and mechanical properties, but compressive strength increased with the addition of Ge, reaching between 27 and 37 MPa after 30 days of maturation. PMID:28678157
Brown, Andrew D; Rodriguez, Francisco A; Portnuff, Cory D F; Goupell, Matthew J; Tollin, Daniel J
2016-10-03
In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users' access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant-vowel-consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD-ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed. © The Author(s) 2016.
Compression based entropy estimation of heart rate variability on multiple time scales.
Baumert, Mathias; Voss, Andreas; Javorka, Michal
2013-01-01
Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than the randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
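A hedged sketch of the framework: coarse-grain the RR series, quantize it to byte symbols, and use the size of the losslessly compressed stream as the entropy estimate. LZMA stands in here for the Lempel-Ziv coder of the study, and the RR series is synthetic:

```python
import lzma
import numpy as np

def compression_entropy(rr_ms, scale=1, n_bins=64):
    """Estimate the entropy of an RR series at one time scale: coarse-grain,
    quantize to byte symbols, and report the lossless compression ratio
    (LZMA here, standing in for the Lempel-Ziv coder of the study)."""
    n = len(rr_ms) // scale * scale
    coarse = np.asarray(rr_ms)[:n].reshape(-1, scale).mean(axis=1)
    edges = np.linspace(coarse.min(), coarse.max(), n_bins)
    symbols = np.digitize(coarse, edges).astype(np.uint8).tobytes()
    return len(lzma.compress(symbols)) / len(symbols)

rng = np.random.default_rng(8)
rr = 800 + np.cumsum(rng.normal(scale=5, size=4000))  # correlated series
surrogate = rng.permutation(rr)                       # randomly shuffled
for s in (1, 2, 4, 8):                                # multiple time scales
    print(s, round(compression_entropy(rr, s), 3),
          round(compression_entropy(surrogate, s), 3))
```

As in the study, the correlated series compresses to a smaller fraction of its raw size than its shuffled surrogate at every scale.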
VLSI chip-set for data compression using the Rice algorithm
NASA Technical Reports Server (NTRS)
Venbrux, J.; Liu, N.
1990-01-01
A full-custom VLSI implementation of a data compression encoder and decoder which implements the lossless Rice data compression algorithm is discussed in this paper. The encoder and decoder reside on single chips. The data rates are to be 5 and 10 megasamples per second for the decoder and encoder, respectively.
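The core of the Rice scheme is Golomb coding with a power-of-two parameter, which is simple enough to sketch in software (the chip-set additionally selects among several such options per block; the unary convention below is one common choice):

```python
def rice_encode(values, k):
    """Rice-code nonnegative integers with parameter k: quotient in unary
    (q ones then a zero), remainder in k bits. A minimal software sketch of
    the lossless scheme the chip-set implements in hardware."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

# Prediction residuals (after sign interleaving) are typically small, so the
# unary quotients stay short and the code length approaches the entropy.
residuals = [0, 1, 3, 2, 0, 5, 1, 0, 2, 1]
code = rice_encode(residuals, k=2)
print(len(code), "bits for", len(residuals), "samples")
```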
Real-time contaminant sensing and control in civil infrastructure systems
NASA Astrophysics Data System (ADS)
Rimer, Sara; Katopodes, Nikolaos
2014-11-01
A laboratory-scale prototype has been designed and implemented to test the feasibility of real-time contaminant sensing and control in civil infrastructure systems. A blower wind tunnel forms the basis of the prototype, with propylene glycol smoke as the ``contaminant.'' A camera sensor and a compressed-air vacuum nozzle system are set up at the test section of the prototype to visually sense and then control the contaminant; a real-time controller is programmed to read data from the camera sensor and administer pressure to the regulators controlling the compressed air that operates the vacuum nozzles. A computational fluid dynamics model is being integrated with the prototype to inform the correct pressure to supply to the regulators in order to optimally control removal of the contaminant from the prototype. The performance of the prototype has been evaluated against the computational fluid dynamics model and is discussed in this presentation, together with the initial performance of the sensor-control system implemented in the test section. NSF-CMMI 0856438.
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary JO; Whyte, Wayne A., Jr.
1989-01-01
The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 bits/pixel. The sampling rate used with this algorithm yields 768 samples over the active portion of each video line and 512 active video lines per frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, a nonuniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
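A hedged, one-dimensional sketch of the DPCM core: previous-sample prediction with a uniform quantizer, where the encoder tracks the decoder's reconstruction so quantization errors do not accumulate. The actual codec uses a nonadaptive 2D predictor, a nonuniform quantizer, and multilevel Huffman coding on top of this loop:

```python
import numpy as np

def dpcm_encode(line, quant_step=4):
    """Minimal DPCM loop: previous-sample predictor plus uniform quantizer.
    The encoder tracks the decoder's reconstruction so quantization errors
    do not accumulate along the line."""
    pred = 0
    symbols, recon = [], []
    for x in line:
        q = int(round((int(x) - pred) / quant_step))  # quantized prediction error
        symbols.append(q)                             # what Huffman would code
        pred = int(np.clip(pred + q * quant_step, 0, 255))
        recon.append(pred)
    return symbols, recon

line = (128 + 60 * np.sin(np.linspace(0, 6, 768))).astype(np.uint8)
symbols, recon = dpcm_encode(line)
print(int(np.max(np.abs(np.array(recon) - line.astype(int)))))  # small, bounded error
```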
NASA Astrophysics Data System (ADS)
Canuto, V. M.
1997-06-01
We present a model to treat fully compressible, nonlocal, time-dependent turbulent convection in the presence of large-scale flows and arbitrary density stratification. The problem is of interest, for example, in stellar pulsation problems, especially since accurate helioseismological data are now available, as well as in accretion disks. Owing to the difficulties in formulating an analytical model, it is not surprising that most of the work has gone into numerical simulations. At present, there are three analytical models: one by the author, which leads to a rather complicated set of equations; one by Yoshizawa; and one by Xiong. The latter two use a Reynolds stress model together with phenomenological relations with adjustable parameters whose determination on the basis of terrestrial flows does not guarantee that they may be extrapolated to astrophysical flows. Moreover, all third-order moments representing nonlocality are taken to be of the down-gradient form (which in the case of the planetary boundary layer yields incorrect results). In addition, correlations among pressure, temperature, and velocities are often neglected or treated as in the incompressible case. To avoid phenomenological relations, we derive the full set of dynamic, time-dependent, nonlocal equations to describe all mean variables and second- and third-order moments. Closures are carried out at the fourth order following standard procedures in turbulence modeling. The equations are collected in an Appendix. Some of the novelties of the treatment are (1) a new flux conservation law that includes the large-scale flow, (2) an increase in the rate of dissipation of turbulent kinetic energy owing to compressibility and thus (3) smaller overshooting, and (4) a new source of mean temperature due to compressibility; moreover, contrary to some phenomenological suggestions, the adiabatic temperature gradient depends only on the thermal pressure, while in the equation for the large-scale flow, the physical pressure is the sum of the thermal plus turbulent pressure.
NASA Astrophysics Data System (ADS)
Atkins, M. Stella; Hwang, Robert; Tang, Simon
2001-05-01
We have implemented a prototype system consisting of a Java-based image viewer and a web server extension component for transmitting Magnetic Resonance Images (MRI) to an image viewer, to test the performance of different image retrieval techniques. We used full-resolution images, and images compressed/decompressed using the Set Partitioning in Hierarchical Trees (SPIHT) image compression algorithm. We examined the SPIHT decompression algorithm using both non-progressive and progressive transmission, focusing on the running times of the algorithm, client memory usage and garbage collection. We also compared the Java implementation with a native C++ implementation of the non-progressive SPIHT decompression variant. Our performance measurements showed that for uncompressed image retrieval using 10 Mbps Ethernet, a film of 16 MR images can be retrieved and displayed almost within interactive times. The native C++ implementation of the client-side decoder is twice as fast as the Java decoder. If the network bandwidth is low, the high communication time for retrieving uncompressed images may be reduced by using SPIHT-compressed images, although image quality is then degraded. To provide diagnostic-quality images, we also investigated the retrieval of up to 3 images on an MR film at full resolution, using progressive SPIHT decompression. The Java-based implementation of progressive decompression performed badly, mainly due to the memory requirements for maintaining the image states and the high cost of executing the Java garbage collector. Hence, in systems where the bandwidth is high, such as a hospital intranet, SPIHT image compression does not provide advantages for image retrieval performance.
Thompson, Douglas; Hallquist, Aaron; Anderson, Hyrum
2017-10-17
The various embodiments presented herein relate to utilizing an operational single-channel radar to collect and process synthetic aperture radar (SAR) and ground moving target indicator (GMTI) imagery from a same set of radar returns. In an embodiment, data is collected by randomly staggering a slow-time pulse repetition interval (PRI) over a SAR aperture such that a number of transmitted pulses in the SAR aperture is preserved with respect to standard SAR, but many of the pulses are spaced very closely enabling movers (e.g., targets) to be resolved, wherein a relative velocity of the movers places them outside of the SAR ground patch. The various embodiments of image reconstruction can be based on compressed sensing inversion from undersampled data, which can be solved efficiently using such techniques as Bregman iteration. The various embodiments enable high-quality SAR reconstruction, and high-quality GMTI reconstruction from the same set of radar returns.
Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data
NASA Technical Reports Server (NTRS)
Bose, Tamal
2000-01-01
A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.
Filetype Identification Using Long, Summarized N-Grams
2011-03-01
compressed or encrypted data. If the algorithm used to compress or encrypt the data can be determined, then it is frequently possible to uncompress... fragments. His implementation utilized the bzip2 library to compress the file fragments. The bzip2 library is based on the Lempel-Ziv-Markov chain... algorithm that uses a dictionary compression scheme to remove repeating data patterns within a set of data. The removed patterns are listed within the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.
Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video, including the frame quality, intra-texture and predicted-texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on the statistic(s) for each data type.
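A hedged sketch of the log-mining step: parse per-frame key:value statistics from a pass-1 rate-control log and flag jumps in predicted-texture bits. The sample lines and field names are invented for illustration; the exact log format varies across avconv/ffmpeg versions:

```python
import re

# Invented pass-1 log lines in a key:value style; real field names and
# layout depend on the avconv/ffmpeg version in use.
sample_log = """\
in:0 out:0 type:0 q:2.0 itex:48211 ptex:0 mv:0 misc:512
in:1 out:1 type:2 q:4.5 itex:0 ptex:3020 mv:880 misc:497
in:2 out:2 type:2 q:4.4 itex:0 ptex:2988 mv:875 misc:490
in:3 out:3 type:2 q:4.6 itex:0 ptex:19750 mv:4100 misc:505
"""

frames = []
for line in sample_log.splitlines():
    stats = {k: float(v) for k, v in re.findall(r"(\w[\w-]*):(-?[\d.]+)", line)}
    frames.append(stats)

# Crude event cue: a sudden jump in predicted-texture bits suggests that
# the scene content changed (e.g., crystal growth between frames).
ptex = [f["ptex"] for f in frames]
for i in range(1, len(ptex)):
    if ptex[i] > 3 * max(ptex[i - 1], 1):
        print(f"possible event at frame {i}")
```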
Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
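A minimal sketch of the factored representation, using a truncated SVD as the PCA-style factorization; the data cube is synthetic and low-rank by construction:

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical hyperspectral image, 100x100 pixels x 500 channels, rank ~5.
pixels = rng.normal(size=(100 * 100, 5)) @ rng.normal(size=(5, 500))
pixels += 0.01 * rng.normal(size=pixels.shape)

# Factored representation via truncated SVD (a PCA-style factorization):
# scores (pixels x k) and loadings (k x channels) replace the full matrix.
U, s, Vt = np.linalg.svd(pixels, full_matrices=False)
k = 5
scores, loadings = U[:, :k] * s[:k], Vt[:k]

# Downstream image analysis operates on the k most significant factors only.
approx = scores @ loadings
print(f"relative error with {k} factors: "
      f"{np.linalg.norm(approx - pixels) / np.linalg.norm(pixels):.2e}")
```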
Kılıç, D; Göksu, E; Kılıç, T; Buyurgan, C S
2018-05-01
The aim of this randomized cross-over study was to compare one-minute and two-minute continuous chest compressions in terms of compression-only CPR quality metrics on a mannequin model in the ED. Thirty-six emergency medicine residents participated in this study. In the 1-minute group, there was no statistically significant difference in the mean compression rate (p=0.83), mean compression depth (p=0.61), good compressions (p=0.31), the percentage of complete release (p=0.07), adequate compression depth (p=0.11) or the percentage of good rate (p=0.51) over the four-minute time period. Only flow time differed significantly among the 1-minute intervals (p<0.001). In the 2-minute group, the mean compression depth (p=0.19), good compressions (p=0.92), the percentage of complete release (p=0.28), adequate compression depth (p=0.96), and the percentage of good rate (p=0.09) did not change significantly over time. In this group, the number of compressions (248±31 vs 253±33, p=0.01), mean compression rates (123±15 vs 126±17, p=0.01) and flow time (p=0.001) differed significantly across the two-minute intervals. There was no statistically significant difference in the mean number of chest compressions per minute, mean chest compression depth, the percentage of good compressions, complete release, adequate chest compression depth or the percentage of good rate between the 1-minute and 2-minute groups. Overall, there was no statistically significant difference in the quality metrics of chest compressions between the 1- and 2-minute compression-only groups. Copyright © 2017 Elsevier Inc. All rights reserved.
Effect of pH on compressive strength of some modification of mineral trioxide aggregate
Saghiri, Mohammad A.; Garcia-Godoy, Franklin; Asatourian, Armen; Lotfi, Mehrdad; Khezri-Boukani, Kaveh
2013-01-01
Objectives: It was recently shown that NanoMTA improves the setting time and promotes a better hydration process, which prevents washout and dislodgment of this novel biomaterial in comparison with WMTA. This study analyzed the compressive strength of ProRoot WMTA (Dentsply), NanoWMTA (Kamal Asgar Research Center), and Bioaggregate (Innovative Bioceramix) after exposure to a range of environmental pH conditions during hydration. Study Design: After mixing the cements under aseptic conditions according to the manufacturers' recommendations, the cements were condensed with moderate force using a plugger into 9 × 6 mm split molds. Each type of cement was then randomly divided into three groups (n=10). Specimens were exposed to environments with pH values of 4.4, 7.4, or 10.4 for 3 days. Cement pellets were compressed using an Instron testing machine. Values were recorded and compared. Data were analyzed using one-way analysis of variance and a post hoc Tukey's test. Results: After 3 days, the samples were solid when probed with an explorer before removal from the molds. The greatest mean compressive strength, 133.19±11.14 MPa, was observed for NanoWMTA after exposure to a pH value of 10.4. The value decreased to 111.41±8.26 MPa after exposure to a pH value of 4.4. Increasing pH had a significant effect on the compressive strength of the experimental groups (p<0.001), and the mean compressive strength of NanoWMTA was statistically higher than that of ProRoot WMTA and Bioaggregate (p<0.001). Conclusion: The compressive strength of NanoWMTA was significantly higher than that of WMTA and Bioaggregate; the more acidic the environmental pH, the lower the compressive strength. Key words: Compressive strength, mineral trioxide aggregate, Nano. PMID:23722137
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods, which replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
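As a rough illustration of the prediction-plus-Huffman idea (the paper optimizes an 8-point geometric predictor via Lagrange multipliers; this sketch substitutes a one-point previous-sample predictor on a 1-D scan, and all values are made up):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    counts = Counter(data)
    if len(counts) == 1:                      # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    heap = [[freq, idx, {sym: ""}] for idx, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    idx = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [f1 + f2, idx, merged])
        idx += 1
    return heap[0][2]

def predict_residuals(elevations):
    """Replace each elevation by its correction from a simple previous-sample
    predictor; residuals cluster near zero, sharpening the code's distribution."""
    return [elevations[0]] + [b - a for a, b in zip(elevations, elevations[1:])]

grid = [100, 102, 103, 103, 105, 110, 112, 111]   # toy elevation scan
residuals = predict_residuals(grid)
code = huffman_code(residuals)
bits = sum(len(code[r]) for r in residuals)
print(f"{bits} bits with prediction + Huffman vs {8 * len(grid)} raw 8-bit samples")
```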
The Effect of Hole Quality on the Fatigue Life of 2024-T3 Aluminum Alloy Sheet
NASA Technical Reports Server (NTRS)
Everett, Richard A., Jr.
2004-01-01
This paper presents the results of a study whose main objective was to determine which type of fabrication process would least affect the fatigue life of an open-hole structural detail. Since the open-hole detail is often the fundamental building block for determining the stress concentration of built-up structural parts, it is important to understand any factor that can affect the fatigue life of an open hole. A test program of constant-amplitude fatigue tests was conducted on five different sets of test specimens, each made using a different hole fabrication process. Three of the sets used different mechanical drilling procedures, while the fourth and fifth sets were mechanically drilled and then chemically polished. Two sets of specimens were also tested under spectrum loading to aid in understanding the effects of residual compressive stresses on fatigue life. Three conclusions were drawn from this study. First, the residual compressive stresses caused by the hole-drilling process increased the fatigue life by two to three times over specimens that were chemically polished after the holes were drilled. Second, the chemical polishing process does not appear to adversely affect the fatigue life. Third, the chemical polishing process will produce a stress state adjacent to the hole that has insignificant machining residual stresses.
When is the Anelastic Approximation a Valid Model for Compressible Convection?
NASA Astrophysics Data System (ADS)
Alboussiere, T.; Curbelo, J.; Labrosse, S.; Ricard, Y. R.; Dubuffet, F.
2017-12-01
Compressible convection is ubiquitous in large natural systems such as planetary atmospheres and stellar and planetary interiors. Its modelling is notoriously more difficult than the case when the Boussinesq approximation applies. One reason for that difficulty was put forward by Ogura and Phillips (1961): the compressible equations generate sound waves with very short time scales, which need to be resolved. This is why they introduced an anelastic model, based on an expansion of the solution around an isentropic hydrostatic profile. How accurate is that anelastic model? What are the conditions for its validity? To answer these questions, we have developed a numerical model for the full set of compressible equations and compared its solutions with those of the corresponding anelastic model. We considered a simple rectangular 2D Rayleigh-Bénard configuration and restricted the analysis to infinite Prandtl numbers. This choice is valid for convection in the mantles of rocky planets and, more importantly, leads to a zero Mach number, which removes the question of the interference of acoustic waves with convection. In that simplified context, we used the entropy balances (that of the full set of equations and that of the anelastic model) to investigate the differences between exact and anelastic solutions. We found that the validity of the anelastic model is dictated by two conditions: first, the superadiabatic temperature difference must be small compared with the adiabatic temperature difference (as expected), ε = ΔT_SA / ΔT_a << 1, and second, the product of ε with the Nusselt number must also be small.
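In the notation of the abstract, with ΔT_SA the superadiabatic temperature difference, ΔT_a the adiabatic temperature difference, and Nu the Nusselt number, the two validity conditions can be written as:

```latex
\varepsilon = \frac{\Delta T_{\mathrm{SA}}}{\Delta T_{a}} \ll 1,
\qquad
\varepsilon \,\mathrm{Nu} \ll 1 .
```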
Dynamic control of a homogeneous charge compression ignition engine
Duffy, Kevin P [Metamora, IL; Mehresh, Parag [Peoria, IL; Schuh, David [Peoria, IL; Kieser, Andrew J [Morton, IL; Hergart, Carl-Anders [Peoria, IL; Hardy, William L [Peoria, IL; Rodman, Anthony [Chillicothe, IL; Liechty, Michael P [Chillicothe, IL
2008-06-03
A homogeneous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter, in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.
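A minimal sketch of the two-time-scale control strategy described above, assuming illustrative signal names, gains, and cycle counts (none of which come from the patent):

```python
def transition(state, target, cycles=50, k_slow=0.05):
    """Schematic two-time-scale transition between speed/load points."""
    # consecutive cycle: fast actuators jump to their transient settings
    state["fuel"] = target["fuel"]
    state["valve"] = target["valve_transient"]
    for _ in range(cycles):
        # subsequent cycles: sluggish actuators creep toward their targets
        state["geom_cr"] += k_slow * (target["geom_cr"] - state["geom_cr"])
        state["egr"] += k_slow * (target["egr"] - state["egr"])
        # as they converge, readjust the valve signal back toward nominal
        state["valve"] += k_slow * (state["valve_nominal"] - state["valve"])
    return state

state = {"fuel": 10.0, "valve": 0.0, "valve_nominal": 0.0, "geom_cr": 14.0, "egr": 0.20}
target = {"fuel": 14.0, "valve_transient": 0.3, "geom_cr": 16.0, "egr": 0.35}
print(transition(state, target))
```

The key point is the separation of time scales: the fast actuators absorb the transition in a single cycle, then relax back toward nominal as the slow actuators take over.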
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozanov, V. B., E-mail: rozanov@sci.lebedev.ru; Vergunova, G. A., E-mail: verg@sci.lebedev.ru
2015-11-15
The possibility of the analysis and interpretation of the reported experiments with the megajoule National Ignition Facility (NIF) laser on the compression of capsules in indirect-irradiation targets by means of the one-dimensional RADIAN program in the spherical geometry has been studied. The problem of the energy balance in a target and the determination of the laser energy that should be used in the spherical model of the target has been considered. The results of the action of pulses differing in energy and time profile (“low-foot” and “high-foot” regimes) have been analyzed. The parameters of the compression of targets with a high-density carbon ablator have been obtained. The results of the simulations are in satisfactory agreement with the measurements and correspond to the range of the observed parameters. The set of compared results can be expanded, in particular, for a more detailed determination of the parameters of a target near the maximum compression of the capsule. The physical foundation of the possibility of using the one-dimensional description is the necessity of the closeness of the last stage of the compression of the capsule to a one-dimensional process. The one-dimensional simulation of the compression of the capsule can be useful in establishing the boundary behind which two-dimensional and three-dimensional simulation should be used.
NASA Astrophysics Data System (ADS)
Takarini, V.; Hasratiningsih, Z.; Karlina, E.; Febrida, R.; Asri, L. A. T. W.; Purwasasmita, BS
2017-02-01
Putty elastomeric material is a viscous, moldable material that can be used as a dental impression to record and duplicate the tooth structure. Commercially available putty materials are hardly found in the Indonesian market. The aim of this work is to develop an alternative putty dental material from glutinous rice with two different gelling agents: sodium alginate and bovine gelatine. A commercially available putty material was used as a control. The length of time required for the putty materials to set (setting time) was evaluated with a compression set test. The results showed that the sodium alginate and bovine gelatine gelling agents produced moldable putty materials that are comparable to the commercial product. Glutinous rice mixed with the sodium alginate gelling agent demonstrated a longer setting time (more than 1 hour) compared to bovine gelatine (6 minutes). This difference may be due to the heat treatment applied to the bovine gelatine, whereas the sodium alginate mixture sets by a chemical reaction once the CaCl2 crosslinking agent is added. The glutinous rice with bovine gelatine mixture is a promising candidate for use as a dental putty material.
Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography
NASA Astrophysics Data System (ADS)
Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung
2006-03-01
Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for Digital Imaging and Communications in Medicine (DICOM). A high compression ratio is considered useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured based on receiver operating characteristic (ROC) analysis; namely, the ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis enables a comparison of the compression ratios obtained with JPEG and JPEG2000 for 3-D US images. Results of this study suggest the bit rates at which JPEG and JPEG2000 can be used for 3-D breast US images.
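As a sketch of how such an evaluation pairs compression settings with measured compression ratios, the snippet below sweeps JPEG quality levels for a single 2-D slice (synthetic data here; the study used 3-D US volumes, JPEG2000 as well, and fed the reconstructions to a CAD system scored by ROC analysis):

```python
import io, os
from PIL import Image  # assumes Pillow is installed

def compression_ratios(img, qualities=(90, 70, 50, 30, 10)):
    """Compress one image at several JPEG quality settings and report the
    compression ratio (uncompressed bytes / compressed bytes) for each."""
    raw = img.width * img.height * len(img.getbands())
    ratios = {}
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        ratios[q] = raw / buf.tell()
    return ratios

# stand-in for one 2-D slice of a 3-D ultrasound volume (synthetic noise here)
img = Image.frombytes("L", (256, 256), os.urandom(256 * 256))
print(compression_ratios(img))
# in the study, each ratio would be paired with the CAD system's ROC performance
```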
Calcium sulfoaluminate (Ye'elimite) hydration in the presence of gypsum, calcite, and vaterite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hargis, Craig W.; Telesca, Antonio; Monteiro, Paulo J.M., E-mail: monteiro@ce.berkeley.edu
Six calcium sulfoaluminate-based cementitious systems composed of calcium sulfoaluminate, calcite, vaterite, and gypsum were cured as pastes and mortars for 1, 7, 28 and 84 days. Pastes were analyzed with X-ray diffraction, thermogravimetric and differential thermal analyses. Mortars were tested for compressive strength, dimensional stability and setting time. Furthermore, pastes with a water/cementitious material mass ratio of 0.80 were tested for heat evolution during the first 48 h by means of isothermal conduction calorimetry. It has been found that: (1) both calcite and vaterite reacted with monosulfoaluminate to give monocarboaluminate and ettringite, with vaterite being more reactive; (2) gypsum lowered the reactivity of both carbonates; (3) expansion was reduced by calcite and vaterite, irrespective of the presence of gypsum; and (4) both carbonates increased compressive strength in the absence of gypsum and decreased compressive strength less in the presence of gypsum, with vaterite's action more effective than that of calcite.
Mid-Cenozoic tectonic and paleoenvironmental setting of the central Arctic Ocean
O'Regan, M.; Moran, K.; Backman, J.; Jakobsson, M.; Sangiorgi, F.; Brinkhuis, Henk; Pockalny, Rob; Skelton, Alasdair; Stickley, Catherine E.; Koc, N.; Brumsack, Hans-Juergen; Willard, Debra A.
2008-01-01
Drilling results from the Integrated Ocean Drilling Program's Arctic Coring Expedition (ACEX) to the Lomonosov Ridge (LR) document a 26 million year hiatus that separates freshwater-influenced biosilica-rich deposits of the middle Eocene from fossil-poor glaciomarine silty clays of the early Miocene. Detailed micropaleontological and sedimentological data from sediments surrounding this mid-Cenozoic hiatus describe a shallow water setting for the LR, a finding that conflicts with predrilling seismic predictions and an initial postcruise assessment of its subsidence history that assumed smooth thermally controlled subsidence following rifting. A review of Cenozoic tectonic processes affecting the geodynamic evolution of the central Arctic Ocean highlights a prolonged phase of basin-wide compression that ended in the early Miocene. The coincidence in timing between the end of compression and the start of rapid early Miocene subsidence provides a compelling link between these observations and similarly accounts for the shallow water setting that persisted more than 30 million years after rifting ended. However, for much of the late Paleogene and early Neogene, tectonic reconstructions of the Arctic Ocean describe a landlocked basin, adding additional uncertainty to reconstructions of paleodepth estimates as the magnitude of regional sea level variations remains unknown.
Chang, Kai-Chun; Chang, Chia-Chieh; Huang, Ying-Chieh; Chen, Min-Hua; Lin, Feng-Huei; Lin, Chun-Pin
2014-01-01
Background/Purpose: Mineral trioxide aggregate (MTA) is widely used as a root-end filling material and for vital pulp therapy. A significant disadvantage of MTA is its prolonged setting time, which has limited its application in endodontic treatments. This study examined the physicochemical properties and biological performance of novel partially stabilized cements (PSCs) prepared to address some of the drawbacks of MTA without compromising its biological properties. PSCs have great potential as vital pulp therapy materials in dentistry. Methods: This study examined three experimental groups consisting of samples that were fabricated using sol-gel processes with C3S/C3A molar ratios of 9/1, 7/3, and 5/5 (denoted PSC-91, PSC-73, and PSC-55, respectively). The comparison group consisted of MTA samples. The setting times, pH variation, compressive strength, morphology and phase composition of hydration products, and ex vivo bioactivity were evaluated. Moreover, biocompatibility was assessed by using lactate dehydrogenase to determine cytotoxicity and a cell proliferation (WST-1) assay kit to determine cell viability. Mineralization was evaluated using Alizarin Red S staining. Results: Crystalline phases determined using X-ray diffraction analysis confirmed that the C3A contents of the material powders differed. The initial setting times of PSC-73 and PSC-55 ranged between 15 and 25 min; these values are significantly (p<0.05, ANOVA and post hoc test) lower than those obtained for MTA (165 min) and PSC-91 (80.5 min). All of the PSCs exhibited ex vivo bioactivity when immersed in simulated body fluid. The biocompatibility results for all of the tested cements were as favorable as those of the negative control, except for PSC-55, which exhibited mild cytotoxicity. Conclusion: PSC-91 is a favorable material for vital pulp therapy because it exhibits optimal compressive strength, a short setting time, and high biocompatibility and bioactivity. PMID:25247808
Retrofit device and method to improve humidity control of vapor compression cooling systems
Roth, Robert Paul; Hahn, David C.; Scaringe, Robert P.
2016-08-16
A method and device for improving the moisture removal capacity of a vapor compression system is disclosed. The vapor compression system is started up with the evaporator blower initially set to a high speed. The relative humidity in the return air stream is measured with the evaporator blower operating at the high speed. If the measured humidity is above the predetermined high relative humidity value, the evaporator blower speed is reduced from the initially set high speed to the lowest possible speed. The device is a control board connected to the blower that uses a predetermined change in measured relative humidity to control the blower motor speed.
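A minimal sketch of the start-up logic described in the abstract, with hypothetical threshold values and speed labels (none taken from the patent):

```python
def set_blower_speed(measured_rh, rh_setpoint, speeds=("low", "medium", "high")):
    """Start-up logic sketched from the abstract: run the evaporator blower at
    high speed, sample return-air relative humidity, and drop to the lowest
    speed when RH exceeds the preset threshold."""
    if measured_rh > rh_setpoint:
        return speeds[0]   # lowest speed: colder coil, more moisture removal
    return speeds[-1]      # otherwise keep high speed for sensible cooling

print(set_blower_speed(measured_rh=62.0, rh_setpoint=55.0))  # -> 'low'
```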
Fife, Caroline E; Davey, Suzanne; Maus, Erik A; Guilliod, Renie; Mayrovitz, Harvey N
2012-12-01
Pneumatic compression devices (PCDs) are used in the home setting as adjunctive treatment for lymphedema after acute treatment in a clinical setting. PCDs range in complexity from simple to technologically advanced. The objective of this prospective, randomized study was to determine whether an advanced PCD (APCD) provides better outcomes as measured by arm edema and tissue water reductions compared to a standard PCD (SPCD) in patients with arm lymphedema after breast cancer treatment. Subjects were randomized to an APCD (Flexitouch system, HCPCS E0652) or SPCD (Bio Compression 2004, HCPCS E0651) used for home treatment 1 h/day for 12 weeks. Pressure settings were 30 mmHg for the SPCD and upper extremity treatment program (UE01) with standard pressure for the APCD. Thirty-six subjects (18 per group) with unilateral upper extremity lymphedema with at least 5% arm edema volume at the time of enrollment, completed treatments over the 12-week period. Arm volumes were determined from arm girth measurements and suitable model calculations, and tissue water was determined based on measurements of the arm tissue dielectric constant (TDC). The APCD-treated group experienced an average of 29% reduction in edema compared to a 16% increase in the SPCD group. Mean changes in TDC values were a 5.8% reduction for the APCD group and a 1.9% increase for the SPCD group. This study suggests that for the home maintenance phase of treatment of arm lymphedema secondary to breast cancer therapy, the adjuvant treatment with an APCD provides better outcomes than with a SPCD.
A FASTQ compressor based on integer-mapped k-mer indexing for biologists.
Zhang, Yeting; Patel, Khyati; Endrawis, Tony; Bowers, Autumn; Sun, Yazhou
2016-03-15
Next generation sequencing (NGS) technologies have gained considerable popularity among biologists. For example, RNA-seq, which provides both genomic and functional information, has been widely used in recent functional and evolutionary studies, especially of non-model organisms. However, storing and transmitting these large data sets (primarily in FASTQ format) have become genuine challenges, especially for biologists with little informatics experience; data compression is thus a necessity. KIC, a FASTQ compressor based on a new integer-mapped k-mer indexing method, was developed (available at http://www.ysunlab.org/kic.jsp). It offers a high compression ratio on sequence data, outstanding user-friendliness with graphical user interfaces, and proven reliability. Evaluated on multiple large RNA-seq data sets from both human and plants, the compression ratio of KIC exceeded that of all major generic compressors and was comparable to those of the latest dedicated compressors. KIC enables researchers with minimal informatics training to take advantage of the latest sequence compression technologies, easily manage large FASTQ data sets, and reduce storage and transmission costs. Copyright © 2015 Elsevier B.V. All rights reserved.
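The general idea behind integer-mapped k-mer indexing can be sketched as packing each base into 2 bits so that a k-mer becomes a single integer key; KIC's exact scheme is not spelled out in the abstract, so the following is only illustrative:

```python
BASE2BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def kmer_to_int(kmer):
    """Map a k-mer to a unique integer at 2 bits per base."""
    code = 0
    for base in kmer:
        code = (code << 2) | BASE2BITS[base]
    return code

def index_read(read, k=8):
    """Slide a window over a read, collecting integer keys for a k-mer index."""
    return [kmer_to_int(read[i:i + k]) for i in range(len(read) - k + 1)]

print(kmer_to_int("ACGT"))            # 0b00011011 == 27
print(index_read("ACGTACGTACGT")[:3])
```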
Individual differences in long-range time representation.
Agostino, Camila S; Caetano, Marcelo S; Balci, Fuat; Claessens, Peter M E; Zana, Yossi
2017-04-01
On the basis of experimental data, long-range time representation has been proposed to follow a highly compressed power function, which has been hypothesized to explain the time inconsistency found in financial discount rate preferences. The aim of this study was to evaluate how well linear and power function models explain empirical data from individual participants tested in different procedural settings. The line paradigm was used in five different procedural variations with 35 adult participants. Data aggregated over the participants showed that fitted linear functions explained more than 98% of the variance in all procedures. A linear regression fit also outperformed a power model fit for the aggregated data. An individual-participant-based analysis showed better fits of a linear model to the data of 14 participants; better fits of a power function with an exponent β > 1 to the data of 12 participants; and better fits of a power function with β < 1 to the data of the remaining nine participants. Of the 35 volunteers, the null hypothesis β = 1 was rejected for 20. The dispersion of the individual β values was approximated well by a normal distribution. These results suggest that, on average, humans perceive long-range time intervals not in a highly compressed, biased manner, but rather in a linear pattern. However, individuals differ considerably in their subjective time scales. This contribution sheds new light on the average and individual psychophysical functions of long-range time representation, and suggests that any attribution of deviation from exponential discount rates in intertemporal choice to the compressed nature of subjective time must entail the characterization of subjective time on an individual-participant basis.
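A sketch of the per-participant model comparison described above, fitting linear and power functions to hypothetical delay/placement data (SciPy assumed; the actual line-paradigm data differ):

```python
import numpy as np
from scipy.optimize import curve_fit  # assumes SciPy is available

def power_fn(t, a, b):
    return a * t ** b

# hypothetical per-participant data: objective delay (days) vs. line placement
delay = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0, 730.0])
placed = np.array([2.0, 12.0, 55.0, 160.0, 310.0, 620.0, 1190.0])

(a, b), _ = curve_fit(power_fn, delay, placed, p0=(1.0, 1.0))
slope, intercept = np.polyfit(delay, placed, 1)

sse_power = np.sum((placed - power_fn(delay, a, b)) ** 2)
sse_linear = np.sum((placed - (slope * delay + intercept)) ** 2)
print(f"beta = {b:.2f}; SSE linear = {sse_linear:.0f}, SSE power = {sse_power:.0f}")
# beta < 1 would indicate a compressed scale; beta near 1 supports linearity
```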
A very efficient RCS data compression and reconstruction technique, volume 4
NASA Technical Reports Server (NTRS)
Tseng, N. Y.; Burnside, W. D.
1992-01-01
A very efficient compression and reconstruction scheme for RCS measurement data was developed. The compression is done by isolating the scattering mechanisms on the target and recording their individual responses in the frequency and azimuth scans. The reconstruction, which is the inverse process of the compression, is guaranteed by the sampling theorem. Two sets of data, for corner reflectors and an F-117 fighter model, were processed, and the results were shown to be convincing. The compression ratio can be as large as several hundred, depending on the target's geometry and scattering characteristics.
A block-based JPEG-LS compression technique with lossless region of interest
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression ratio compared with lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to decrease the range of error diffusion. We provide JPEG-LS lossless compression for the image blocks that include all or part of the region of interest (ROI) and JPEG-LS near-lossless compression for the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
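A schematic of the block-based ROI dispatch, where `encode` stands in for a real JPEG-LS encoder (e.g. a CharLS binding) and is left hypothetical; NEAR=0 denotes lossless and NEAR>0 near-lossless coding:

```python
import numpy as np

def compress_blockwise(img, roi_mask, block=16, encode=None):
    """Partition the image into blocks; blocks touching the ROI get lossless
    coding (NEAR=0), the rest near-lossless (NEAR>0)."""
    streams = []
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            tile = img[y:y + block, x:x + block]
            near = 0 if roi_mask[y:y + block, x:x + block].any() else 2
            streams.append(encode(tile, near=near) if encode else (tile, near))
    return streams  # per-block coding also confines error diffusion to a block

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
roi = np.zeros_like(img, dtype=bool)
roi[20:40, 20:40] = True   # region of interest
print(sum(near == 0 for _, near in compress_blockwise(img, roi)), "lossless blocks")
```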
Aspirin in venous leg ulcer study (ASPiVLU): study protocol for a randomised controlled trial.
Weller, Carolina D; Barker, Anna; Darby, Ian; Haines, Terrence; Underwood, Martin; Ward, Stephanie; Aldons, Pat; Dapiran, Elizabeth; Madan, Jason J; Loveland, Paula; Sinha, Sankar; Vicaretti, Mauro; Wolfe, Rory; Woodward, Michael; McNeil, John
2016-04-11
Venous leg ulceration is a common and costly problem that is expected to worsen as the population ages. Current treatment is compression therapy; however, up to 50% of ulcers remain unhealed after 2 years, and ulcer recurrence is common. New treatments are needed to address those wounds that are more challenging to heal. Targeting the inflammatory processes present in venous ulcers is a possible strategy. Limited evidence suggests that a daily dose of aspirin may be an effective adjunct to aid ulcer healing and reduce recurrence. The Aspirin in Venous Leg Ulcer study (ASPiVLU) will investigate whether 300-mg oral doses of aspirin improve time to healing. This randomised, double-blinded, multicentre, placebo-controlled clinical trial will recruit participants with venous leg ulcers from community settings and hospital outpatient wound clinics across Australia. Two hundred sixty-eight participants with venous leg ulcers will be randomised to receive either aspirin or placebo, in addition to compression therapy, for 24 weeks. The primary outcome is time to healing within 12 weeks. Secondary outcomes are ulcer recurrence, wound pain, quality of life and wellbeing, adherence to study medication, adherence to compression therapy, serum inflammatory markers, hospitalisations, and adverse events at 24 weeks. The ASPiVLU trial will investigate the efficacy and safety of aspirin as an adjunct to compression therapy to treat venous leg ulcers. Study completion is anticipated to occur in December 2018. Australian New Zealand Clinical Trials Registry, ACTRN12614000293662.
Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob
2011-10-01
Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite a lack of evidence of improved outcome. This manikin study evaluates the CPR performance of ambulance crews who had had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure to recognize and correct a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Simulated life-threatening emergency during robot-assisted surgery.
Huser, Anna-Sophia; Müller, Dirk; Brunkhorst, Violeta; Kannisto, Päivi; Musch, Michael; Kröpfl, Darko; Groeben, Harald
2014-06-01
With the increasing use of robot-assisted techniques for urologic and gynecologic surgery in patients with severe comorbidities, the risk of a critical incident during surgery increases. Due to limited access to the patient, the start of effective measures to treat a life-threatening emergency could be delayed. Therefore, we tested the management of an acute emergency in an operating room setting with a full-size simulator in six complete teams. A full-size simulator (ISTAN, Meti, CA), modified to hold five trocars, was placed in a regular operating room and connected to a robotic system. Six teams (each with three nurses, one anesthesiologist, and two urologists or gynecologists) were introduced to the scenario. Subsequently, myocardial fibrillation occurred. Times to first chest compression, removal of the robot, first defibrillation, and stabilization of circulation were obtained. After 7 weeks the simulation was repeated. The times to the start of chest compressions, removal of the robotic system, and first defibrillation were significantly improved at the second simulation. Time for restoration of stable circulation improved from 417 ± 125 seconds to 224 ± 37 seconds (P=0.0054). Unexpected delays occurred during the first simulation because trocars had been removed from the patient but not from the robot, thus preventing the robot from being moved. Following proper training, resuscitation can be started within seconds. A repetition of the simulation significantly improved the time for all steps of resuscitation. An emergency simulation with a multidisciplinary team in a real operating room setting can be strongly recommended.
An efficient compression scheme for bitmap indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie
2004-04-13
When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time is proportional to the index size. This indicates that the compressed bitmap indices are efficient for very large datasets.
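A simplified sketch of word-aligned hybrid encoding (real WAH packs literal and fill words with slightly different conventions; this toy uses 32-bit words, 31 payload bits, and a string-of-bits input):

```python
def wah_encode(bits, w=32):
    """Toy WAH encoder. Each output word is either a literal holding w-1 raw
    bits (top bit clear) or a fill word (top bit set) whose next bit gives the
    fill value and whose remaining bits count the run of identical groups."""
    g = w - 1
    groups = [bits[i:i + g].ljust(g, "0") for i in range(0, len(bits), g)]
    out, i = [], 0
    while i < len(groups):
        if set(groups[i]) <= {"0"} or set(groups[i]) <= {"1"}:
            val, run = groups[i][0], 1
            while i + run < len(groups) and groups[i + run] == groups[i]:
                run += 1                       # extend the run of equal groups
            out.append((1 << (w - 1)) | (int(val) << (w - 2)) | run)  # fill word
            i += run
        else:
            out.append(int(groups[i], 2))      # literal word
            i += 1
    return out

words = wah_encode("0" * 200 + "1011" + "1" * 120)
print([hex(x) for x in words])   # long runs collapse to single fill words
```

Because fill and literal words stay word-aligned, bitwise AND/OR/NOT can operate directly on the compressed form, which is what makes the scheme CPU-friendly.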
Low Complexity Compression and Speed Enhancement for Optical Scanning Holography
Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.
2016-01-01
In this paper we report a low complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into two major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct value of the step size for compressing each row of a hologram depends on the dynamic range of the pixels, which can deviate significantly with the object scene, as well as with OSH systems with different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied in the compression of holograms that are acquired with two different OSH systems, demonstrating a compression ratio of over two orders of magnitude while preserving favorable fidelity in the reconstructed images. PMID:27708410
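A toy version of one-bit delta modulation with a dynamic step size, the second stage described above (the grow/shrink constants are illustrative, not from the paper):

```python
def dm_encode(samples, step0=1.0, grow=1.5, shrink=0.66):
    """One-bit delta modulation with slope-adaptive step size. Each output bit
    says whether the tracked estimate steps up or down; repeated equal bits
    grow the step (steep signal), alternating bits shrink it (flat signal)."""
    bits, est, step, prev = [], 0.0, step0, None
    for s in samples:
        b = 1 if s >= est else 0
        est += step if b else -step            # track the signal
        step = step * grow if b == prev else step * shrink
        step = max(step, 1e-6)                 # keep the step size positive
        bits.append(b)
        prev = b
    return bits

print(dm_encode([0.1 * i * i for i in range(12)]))  # 1 bit per pixel
```

The adaptation removes the need to fix the step size in advance, which is the problem the abstract attributes to earlier DM-based hologram compressors.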
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the characteristics of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and shown to provide more than a two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
Effect of Compression Garments on Physiological Responses After Uphill Running.
Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc
2018-03-01
Limited practical recommendations on wearing compression garments can be drawn from the literature for athletes at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running, with different pressures and distributions of applied compression. In a randomized, double-blinded study, 10 trained male runners undertook three 8-km treadmill runs at a 6% incline at an intensity of 75% VO2max, wearing low-grade compression garments, medium-grade compression garments, or high reverse-grade compression garments. In all the trials, compression garments were worn for 4 hours post run. Creatine kinase, muscle soreness, ankle strength of the plantar/dorsal flexors, and mean performance time were then measured. The best mean performance time was observed with the medium-grade compression garments (medium-grade vs. high reverse-grade compression garments). A positive trend toward increasing peak torque of plantar flexion (60º·s-1, 120º·s-1) was found with the medium-grade compression garments: a difference between 24 and 48 hours post run. The highest pain tolerance shift in the gastrocnemius muscle occurred with the medium-grade compression garments 24 hours post run, the shift being +11.37% for the lateral head and +6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreased muscle soreness within 24 hours post exercise was apparent with medium-grade compression garments.
NASA Astrophysics Data System (ADS)
Johnson, W. N.; Herrick, W. V.; Grundmann, W. J.
1984-10-01
For the first time, VLSI technology is used to compress the full functionality and comparable performance of the VAX 11/780 super-minicomputer into a 1.2 M-transistor microprocessor chip set. There was no subsetting of the 304-instruction set and the 17 data types, nor any reduction in hardware support for the 4-Gbyte virtual memory management architecture. The chip set supports an integral 8-kbyte memory cache, a 13.3-Mbyte/s system bus, and sophisticated multiprocessing. High performance is achieved through microcode optimizations afforded by the large control store, tightly coupled address and data caches, the use of internal and external 32-bit datapaths, the extensive application of both microlevel and macrolevel pipelining, and the use of specialized hardware assists.
Experimental testing of prototype face gears for helicopter transmissions
NASA Technical Reports Server (NTRS)
Handschuh, R.; Lewicki, D.; Bossler, R.
1992-01-01
An experimental program to test the feasibility of using face gears in a high-speed and high-power environment was conducted. Four face gear sets were tested, two sets at a time, in a closed-loop test stand at pinion rotational speeds up to 19,100 rpm and power levels up to 271 kW. The test gear sets were one-half scale of the helicopter design gear set. When tested at one-eighth power, the test gear set had slightly increased bending and compressive stresses compared to the full-scale design. The tests were performed in the LeRC spiral bevel gear test facility. All four sets of gears successfully ran at 100 percent of design torque and speed for 30 million pinion cycles, and two sets successfully ran at 200 percent of torque for an additional 30 million pinion cycles. The results, although limited, demonstrated the feasibility of using face gears for high-speed, high-load applications.
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model, rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
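For reference, the distortion measure the model predicts is PSNR, computed from the mean squared error; a minimal implementation with synthetic data:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.default_rng(0).integers(0, 256, (64, 64))
noisy = np.clip(ref + np.random.default_rng(1).normal(0, 5, ref.shape), 0, 255)
print(f"{psnr(ref, noisy):.1f} dB")   # the model aims to predict this within ~2 dB
```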
Analysis and control of supersonic vortex breakdown flows
NASA Technical Reports Server (NTRS)
Kandil, Osama A.
1990-01-01
Analysis and computation of steady, compressible, quasi-axisymmetric flow of an isolated, slender vortex are considered. The compressible Navier-Stokes equations are reduced to a simpler set by using the slenderness and quasi-axisymmetry assumptions. The resulting set, along with a compatibility equation, is transformed from the diverging physical domain to a rectangular computational domain. After solving for a compatible set of initial profiles and specifying a compatible set of boundary conditions, the equations are solved using a type-differencing scheme. Vortex breakdown locations are detected by the failure of the scheme to converge. Computational examples include isolated vortex flows at different Mach numbers, external axial-pressure gradients, and swirl ratios.
Lok, U-Wai; Li, Pai-Chi
2016-03-01
Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming in easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck for real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
Properties of Powder Composite Polyhydroxybutyrate-Chitosan-Calcium Phosphate System
NASA Astrophysics Data System (ADS)
Medvecky, L.; Stulajterova, R.; Giretova, M.; Faberova, M.
2017-12-01
The prepared powder polyhydroxybutyrate-chitosan-calcium phosphate composite system with 10 wt % of biopolymer component can be utilized as a biocement characterized by a prolonged setting time; it achieves wash-out resistance after 5 minutes of setting. The original powder tetracalcium phosphate/nanomonetite agglomerates were coated with a thin layer of biopolymer, which decelerates both the transformation rate of the calcium phosphates and the hardening process of the composites. The porosity of the hardened composite was around 62%, and the compressive strength (8 MPa) was close to that of trabecular bone. No cytotoxicity of the composite was found in live/dead staining of osteoblasts cultured on the substrates.
Panchal, Ashish R; Meziab, Omar; Stolz, Uwe; Anderson, Wes; Bartlett, Mitchell; Spaite, Daniel W; Bobrow, Bentley J; Kern, Karl B
2014-09-01
Recent studies have demonstrated higher-quality chest compressions (CCs) following a 60 s ultra-brief video (UBV) on compression-only CPR (CO-CPR). However, the effectiveness of UBVs as a CPR teaching tool for lay bystanders in public venues remains unknown. We sought to determine whether a UBV is effective in teaching laypersons CO-CPR in a public setting and whether viewing leads to superior responsiveness and CPR skills. Adult lay bystanders were enrolled in a public shopping mall and randomized to: (1) Control (CTR): sat idle for 60 s; (2) UBV: watched a 60 s UBV on CO-CPR. Subjects were read a scenario detailing a sudden collapse in the mall and asked to do what they "thought was best" on a mannequin. Performance measures were recorded for 2 min: responsiveness (time to call 911 and to first CCs) and CPR quality [CC depth, rate, and hands-off interval (time without CCs after the first CC)]. One hundred subjects were enrolled, and demographics were similar between groups. UBV subjects called 911 more frequently (percent difference: 31%) and initiated CCs sooner in the arrest scenario (median difference (MD): 5 s). The UBV cohort had an increased CC rate (MD: 19 cpm) and a decreased hands-off interval (MD: 27 s). There was no difference in CC depth. Bystanders with UBV training in a shopping mall had significantly improved responsiveness and CC rate, and a decreased hands-off interval. Given the short length of training, UBV may have potential as a ubiquitous intervention for public venues to help improve bystander reaction to arrest and CO-CPR performance. Published by Elsevier Ireland Ltd.
Review: Pressure-Induced Densification of Oxide Glasses at the Glass Transition
NASA Astrophysics Data System (ADS)
Kapoor, Saurabh; Wondraczek, Lothar; Smedskjaer, Morten M.
2017-02-01
Densification of oxide glasses at the glass transition offers a novel route to develop bulk glasses with tailored properties for emerging applications. Such densification can be achieved in the technologically relevant pressure regime of up to 1 GPa. However, the present understanding of the composition-structure-property relationships governing these glasses is limited, with key questions, e.g., those related to the densification mechanism, remaining largely unanswered. Recent advances in structural characterization tools and high-pressure apparatuses have prompted new research efforts. Here, we review this recent progress and the insights gained into the influence of isostatic compression at elevated temperature (so-called hot compression) on the composition-structure-property relationships of oxide glasses. We focus on compression at temperatures at or around the glass transition temperature (Tg), with relevant comparisons made to glasses prepared by pressure quenching and cold compression. We show that permanent densification at 1 GPa sets in at temperatures above 0.7Tg and that the degree of densification increases with increasing compression temperature and time, until attaining an approximately constant value for temperatures above Tg. For glasses compressed under the same temperature/pressure conditions, we demonstrate direct relations between the degree of volume densification and the pressure-induced change in micro-mechanical properties such as hardness, elastic moduli, and the extent of the indentation size effect across a variety of glass families. Furthermore, we summarize results on the relaxation behavior of hot compressed glasses. All the pressure-induced changes in structure and properties exhibit strong composition dependence. The experimental results highlight new opportunities for future investigation and identify research challenges that need to be overcome to advance the field.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data, based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
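The redundancy measure underlying the method is Pearson's correlation between frames; a minimal sketch (the morphology and PSF machinery of the actual method are omitted, and the data are synthetic):

```python
import numpy as np

def pearson_r(frame_a, frame_b):
    """Pearson correlation between two frames, used to judge how much of a
    frame is redundant with its predecessor."""
    a = frame_a.ravel().astype(np.float64); a -= a.mean()
    b = frame_b.ravel().astype(np.float64); b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(0)
frame1 = rng.normal(size=(128, 128))
frame2 = frame1.copy()
frame2[60:70, 60:70] += 5.0   # a bead moved; most of the frame is unchanged
print(f"r = {pearson_r(frame1, frame2):.3f}")   # near 1: store only the change
```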
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" performs best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats and reverse repeats) is also a unique concept introduced in this algorithm, for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
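A toy illustration of the bit-assignment idea: fixed 2-bit base codes give the naive 2 bits/base floor, and substituting a short unique code for a repeated fragment pushes below it (DNABIT Compress discovers repeats itself and uses its own code tables; everything here is illustrative):

```python
CODE = {"A": "00", "C": "01", "G": "10", "T": "11"}

def pack(seq):
    """Pack a DNA string at a fixed 2 bits per base, the naive baseline."""
    return "".join(CODE[b] for b in seq)

def toy_repeat_code(seq, frag="ACGTACGT", mark="1111111"):
    """Replace occurrences of one known exact-repeat fragment with a short
    marker code; the saving grows with repeat length and frequency."""
    out, i = [], 0
    while i < len(seq):
        if seq.startswith(frag, i):
            out.append(mark); i += len(frag)
        else:
            out.append(CODE[seq[i]]); i += 1
    return "".join(out)

s = "ACGTACGTGGAACGTACGT"
print(len(pack(s)) / len(s), len(toy_repeat_code(s)) / len(s))  # bits per base
```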
The deformation of gum metal under nanoindentation and sub-micron pillar compression
NASA Astrophysics Data System (ADS)
Withey, Elizabeth Ann
Reaching ideal strength has proven to be difficult in most materials. Dislocation slip, phase transformations, twinning, and fracture all tend to occur at stresses well below the ideal strength of a material; only on very small scales has it been possible to approach ideal strength. Thus, it was of great interest when a set of beta-Ti alloys known as Gum Metal was found to have a bulk yield strength close to half its ideal strength. However, some recent studies have questioned the reliability of this claim: several have suggested that Gum Metal deforms by dislocation slip, and others have suggested the possibility of transformation-induced plasticity. The present study was undertaken to help clarify whether and how Gum Metal can reach ideal strength. Two different experiments, ex situ nanoindentation and quantitative in situ nanopillar compression in a transmission electron microscope correlated with real-time deformation behavior, were performed on a single composition of Gum Metal, Ti-23Nb-0.7Ta-2Zr-1.2O (at. %), obtained from Toyota Central R&D Laboratories. Nanoindented specimens were thinned from the bottom surface until the pits of multiple indentations became electron-transparent, allowing for qualitative analysis of the deformation microstructure in both fully cold-worked and solution-treated specimens. Real-time load-displacement behavior from the nanopillar compression tests was correlated with real-time video recorded during each compression to determine both the compressive strength of each pillar and the timing and strengths of the different deformation behaviors observed. Combining the results from both experiments provided several important conclusions. First, Gum Metal approaches and can attain ideal strength in nanopillars regardless of processing condition. While dislocations exist in Gum Metal, they can be tightly pinned by obstacles with spacing less than ~20 nm, which should inhibit their motion at stresses below the ideal shear strength. The plastic deformation of Gum Metal is not controlled by giant faults or by stress-induced phase transformations; both of these phenomena, while active, are not the source of plasticity in Gum Metal.
Biomechanics of Sports-Induced Axial-Compression Injuries of the Neck
Ivancic, Paul C.
2012-01-01
Context Head-first sports-induced impacts cause cervical fractures and dislocations and spinal cord lesions. In previous biomechanical studies, researchers have vertically dropped human cadavers, head-neck specimens, or surrogate models in inverted postures. Objective To develop a cadaveric neck model to simulate horizontally aligned, head-first impacts with a straightened neck and to use the model to investigate biomechanical responses and failure mechanisms. Design Descriptive laboratory study. Setting Biomechanics research laboratory. Patients or Other Participants Five human cadaveric cervical spine specimens. Intervention(s) The model consisted of the neck specimen mounted horizontally to a torso-equivalent mass on a sled and carrying a surrogate head. Head-first impacts were simulated at 4.1 m/s into a padded, deformable barrier. Main Outcome Measure(s) Time-history responses were determined for head and neck loads, accelerations, and motions. Average occurrence times of the compression force peaks at the impact barrier, occipital condyles, and neck were compared. Results The first local compression force peaks at the impact barrier (3070.0 ± 168.0 N at 18.8 milliseconds), occipital condyles (2868.1 ± 732.4 N at 19.6 milliseconds), and neck (2884.6 ± 910.7 N at 25.0 milliseconds) occurred earlier than all global compression peaks, which reached 7531.6 N in the neck at 46.6 milliseconds (P < .001). Average peak head motions relative to the torso were 6.0 cm in compression, 2.4 cm in posterior shear, and 6.4° in flexion. Neck compression fractures included occipital condyle, atlas, odontoid, and subaxial comminuted burst and facet fractures. Conclusions Neck injuries due to excessive axial compression occurred within 20 milliseconds of impact and were caused by abrupt deceleration of the head and continued forward torso momentum before simultaneous rebound of the head and torso. Improved understanding of neck injury mechanisms during sports-induced impacts will increase clinical awareness and immediate care and ultimately lead to improved protective equipment, reducing the frequency and severity of neck injuries and their associated societal costs. PMID:23068585
Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun
2013-07-01
Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. The duty cycle represents the fraction of each compression cycle spent in the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following the guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p<0.001 for both). A linear increase of average chest compression depth with increasing chest compression rate was observed only with the normal down-stroke pattern (p=0.004). A shorter compression phase is thus correlated with deeper chest compression during metronome-guided cardiopulmonary resuscitation.
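Since the duty cycle is just the compression-phase fraction of each cycle, the relationship the study manipulates can be made concrete in a few lines (the millisecond values below are illustrative, not measurements from the study):

```python
def duty_cycle(compression_phase_ms, cycle_ms):
    """Duty cycle = compression phase as a fraction of the full cycle."""
    return compression_phase_ms / cycle_ms

# at 120 compressions/min the full cycle lasts 500 ms; a faster down-stroke
# shortens the compression phase and therefore lowers the duty cycle
for phase_ms in (250, 180, 120):
    print(phase_ms, "ms ->", f"{duty_cycle(phase_ms, 500):.2f}")
```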
Method to produce American Thoracic Society flow-time waveforms using a mechanical pump.
Hankinson, J L; Reynolds, J S; Das, M K; Viola, J O
1997-03-01
The American Thoracic Society (ATS) recently adopted a new set of 26 standard flow-time waveforms for use in testing both diagnostic and monitoring devices. Some of these waveforms have a higher frequency content than is present in the ATS-24 standard volume-time waveforms, which, when produced by a mechanical pump, may result in a pump flow output that is less than the desired flow due to gas compression losses within the pump. To investigate the effects of gas compression, a mechanical pump was used to generate the necessary flows to test mini-Wright and Assess peak expiratory flow (PEF) meters. Flow output from the pump was measured by two independent methods: a pneumotachometer, and a method based on piston displacement and pressure measured within the pump. The piston displacement and pressure method was validated using a pneumotachometer and a mini-Wright PEF meter and found to measure pump output accurately; it also introduces less resistance (lower back-pressure) and dead space volume than placing a pneumotachometer in series with the meter under test. Pump output flow was found to be lower than the desired flow with both the mini-Wright and Assess meters (for waveform No. 26, PEFs 7.1% and 10.9% lower, respectively). To compensate for losses due to gas compression, we have developed a method of deriving new input waveforms which, when used to drive a commercially available mechanical pump, accurately and reliably produces the 26 ATS flow-time waveforms, even those with the fastest rise times.
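A sketch of the piston-displacement-and-pressure flow estimate, assuming isothermal ideal-gas behavior inside the pump (the paper's exact correction may differ, and the waveform below is synthetic):

```python
import numpy as np

def output_flow(V, P_abs, t, P0=101.325):
    """Pump output flow from piston volume V(t) [L] and absolute internal
    pressure P_abs(t) [kPa]: for an isothermal ideal gas, the gas leaving per
    unit time is -d(P_abs*V)/dt, expressed here at outlet pressure P0."""
    return -np.gradient(P_abs * V, t) / P0

t = np.linspace(0.0, 1.0, 201)
V = 3.0 - 2.5 * t                                      # piston sweeps 2.5 L in 1 s
P = 101.325 + 4.0 * np.exp(-((t - 0.1) / 0.05) ** 2)   # brief pressure rise at onset
naive = -np.gradient(V, t)                             # ignores gas compression
corrected = output_flow(V, P, t)
print(f"peak flow: naive {naive.max():.2f} L/s vs corrected {corrected.max():.2f} L/s")
# while pressure builds, part of the piston stroke compresses gas instead of
# delivering flow, which is why pump output falls short of the desired flow
```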
ERIC Educational Resources Information Center
Shi, Lu-Feng; Doherty, Karen A.
2008-01-01
Purpose: The purpose of the current study was to assess the effect of fast and slow attack/release times (ATs/RTs) on aided perception of reverberant speech in quiet. Method: Thirty listeners with mild-to-moderate sensorineural hearing loss were tested monaurally with a commercial hearing aid programmed in 3 AT/RT settings: linear, fast (AT = 9…
Experimental Study on Feasibility of Non Potable Water with Lime on Properties of Ppc
NASA Astrophysics Data System (ADS)
Reddy Babu, G.; Madhusudana Reddy, B.; Ramana, N. V.; Sudharshan Reddy, B.
2017-08-01
This research aimed to investigate the feasibility of using water treatment plant outlet water and lime water on the properties of Portland pozzolana cement (PPC). Twenty water treatment plants were identified in the Bhimavaram municipality region of West Godavari district, Andhra Pradesh, India; each plant supplies approximately 4000 to 5000 L/day of potable water. All plants extract ground water and treat it by the Reverse Osmosis (RO) process, and at the outlet a huge quantity of reject water is discharged into side drains in Bhimavaram municipality. One typical treatment plant was selected, its outlet water was collected, and physical and chemical analyses were carried out as per the procedure described in APHA. The effects of plant outlet water (POW), lime water (LW), and plant outlet water with lime (POWL) on the physical properties of PPC, i.e., setting times, compressive strength, and flexural strength, were studied in the laboratory and compared with reference specimens made with distilled water (DW) as mixing water. No significant change was observed in initial and final setting times with POW, LW, and POWL compared with the reference specimens made with distilled water (DW). Compressive strength increased significantly with the LW and POWL specimens compared to the reference specimens. The XRD technique was employed for mineralogical analysis.
Gamma Radiation Aging Study of a Dow Corning SE 1700 Porous Structure Made by Direct Ink Writing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, Ward; Alviso, Cindy T.; Metz, Tom R.
Dow Corning SE 1700 (reinforced polydimethylsiloxane) porous structures were made by direct ink writing (DIW). The specimens (~50% porosity) were subjected to a compressive strain of ~25% while exposed to a gamma radiation dose of 1, 5, or 10 Mrad under vacuum. Compression set and load retention of the aged specimens were measured after a ~24 h relaxation period. Compression set (relative to deflection) increased with radiation dose: 11, 35, and 51% after 1, 5, and 10 Mrad, respectively. Load retention was 96-97% for the doses tested. The SE 1700 compared favorably to M9763 cellular silicone tested under the same conditions.
Memory as Perception of the Past: Compressed Time in Mind and Brain.
Howard, Marc W
2018-02-01
In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Role of Efficient XML Interchange (EXI) in Navy Wide-Area Network (WAN) Optimization
2015-03-01
compress, and re-encrypt data to continue providing optimization through compression; however, that capability requires careful consideration of... optimization of encrypted data requires a careful analysis and comparison of performance improvements and IA vulnerabilities. It is important... Contained EXI capitalizes on multiple techniques to improve compression, and they vary depending on a set of EXI options passed to the codec
Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.
Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao
2017-07-01
In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. To improve compression efficiency, the GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data from 10 groups of P. ginseng from Northeast China, and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code, were prepared for conversion. The conversion comprised six steps: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of these data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed with Zlib, an open source data compression algorithm. Finally, the compressed data were rendered as a two-dimensional code called a quick response code (QR code). Through this conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via filtration and digital merger compression algorithm processing; the overall compression ratio thus exceeds 99.36%. The capacity of the resulting QR code is around 0.5 KB, and it can easily and reliably be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can form a QR code after data processing, and the QR code can therefore serve as a carrier of P. ginseng authenticity and quality information. This study provides a theoretical basis for the development of a quality traceability system for traditional Chinese medicine based on two-dimensional codes.
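A compact sketch of the pipeline's last three steps (combine, Zlib-compress, render as QR), assuming the third-party `qrcode` package; the record format, field names, and stand-in data below are invented for illustration:

```python
import base64
import zlib

import qrcode  # third-party: pip install qrcode[pil]

# hypothetical stand-ins for the precompressed inputs
its2_sequence = "ATCGGGCTTAAACCGGTTACGATC"   # DNA (ITS2) code
fingerprint = "12.3,15.8,21.4,33.0"          # HPLC characteristic data

# step 4: combine in a fixed record format; step 5: compress with zlib
record = f"ITS2:{its2_sequence}|HPLC:{fingerprint}".encode("ascii")
packed = zlib.compress(record, level=9)

# step 6: QR payloads are text, so base64-encode the compressed bytes
qrcode.make(base64.b64encode(packed).decode("ascii")).save("ginseng_qr.png")
print(f"{len(record)} bytes -> {len(packed)} bytes compressed")
```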
Martin, Philip; Theobald, Peter; Kemp, Alison; Maguire, Sabine; Maconochie, Ian; Jones, Michael
2013-08-01
European and Advanced Paediatric Life Support training courses. Sixty-nine certified CPR providers. CPR providers were randomly allocated to a 'no-feedback' or 'feedback' group, performing two-thumb and two-finger chest compressions on a "physiological", instrumented resuscitation manikin. Baseline data were recorded without feedback, before chest compressions were repeated with one group receiving feedback. Indices were calculated that defined chest compression quality, based upon comparison of the chest wall displacement to the targets of four internationally recommended parameters: chest compression depth, release force, chest compression rate and compression duty cycle. Baseline data were consistent with other studies, with <1% of chest compressions performed by providers simultaneously achieving the targets of the four internationally recommended parameters. During the 'experimental' phase, 34 CPR providers benefitted from the provision of 'real-time' feedback which, on analysis, coincided with a statistically significant improvement in compression rate, depth and duty cycle quality across both compression techniques (all measures: p<0.001). Feedback enabled providers to simultaneously achieve the four targets in 75% (two-finger) and 80% (two-thumb) of chest compressions. Real-time feedback produced a dramatic increase in the quality of chest compression (i.e. from <1% to 75-80%). If these results transfer to a clinical scenario, this technology could, for the first time, support providers in consistently performing accurate chest compressions during infant CPR and thus potentially improve clinical outcomes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Evaluation of Ceramic Honeycomb Core Compression Behavior at Room Temperature
NASA Technical Reports Server (NTRS)
Bird, Richard K.; Lapointe, Thomas S.
2013-01-01
Room temperature flatwise compression tests were conducted on two varieties of ceramic honeycomb core specimens that have potential for high-temperature structural applications. One set of specimens was fabricated using strips of a commercially-available thin-gage "ceramic paper" sheet molded into a hexagonal core configuration. The other set was fabricated by machining honeycomb core directly from a commercially available rigid insulation tile material. This paper summarizes the results from these tests.
Rudder/Fin Seal Investigations for the X-38 Re-Entry Vehicle
NASA Technical Reports Server (NTRS)
Dunlap, Patrick H., Jr.; Steinetz, Bruce M.; Curry, Donald M.
2000-01-01
NASA is currently developing the X-38 vehicle that will be used to demonstrate the technologies required for a crew return vehicle (CRV) for the International Space Station. The X-38 control surfaces require high temperature seals to limit hot gas ingestion and transfer of heat to underlying low-temperature structures to prevent over-temperature of these structures and possible loss of the vehicle. This paper presents results for thermal analyses and flow and compression tests conducted on as-received and thermally exposed seals for the rudder/fin location of the X-38. A thermal analysis of the rudder/fin dual seal assembly based on representative heating rates on the windward surface of the rudder/fin area predicted a peak seal temperature of 1900 F. The temperature-exposed seals were heated in a compressed state at 1900 F corresponding to the predicted peak temperature. Room temperature compression tests were performed to determine load versus linear compression, preload, contact area, stiffness, and resiliency characteristics for the as-received and temperature-exposed seals. Temperature exposure resulted in permanent set and loss of resiliency in these seals. Unit loads and contact pressures for the seals were below the 5 lb/in. and 10 psi limits set to limit the loads on the Shuttle thermal tiles that the seals seal against in the rudder/fin location. Measured seal flow rates for a double seal were about 4.5 times higher than the preliminary seal flow goal. The seal designs examined in this study are expected to be able to endure the high temperatures that they will be exposed to for a single-use life. Tests performed herein combined with future analyses, arc jet tests, and scrubbing tests will be used to select the final seal design for this application.
Two-thumb technique is superior to two-finger technique during lone rescuer infant manikin CPR.
Udassi, Sharda; Udassi, Jai P; Lamb, Melissa A; Theriaque, Douglas W; Shuster, Jonathan J; Zaritsky, Arno L; Haque, Ikram U
2010-06-01
Infant CPR guidelines recommend two-finger chest compression with a lone rescuer and two-thumb with two rescuers. Two-thumb provides better chest compression but is perceived to be associated with increased ventilation hands-off time. We hypothesized that lone rescuer two-thumb CPR is associated with increased ventilation cycle time, decreased ventilation quality and fewer chest compressions compared to two-finger CPR in an infant manikin model. Crossover observational study randomizing 34 healthcare providers to perform 2 min CPR at a compression rate of 100 min⁻¹ using a 30:2 compression:ventilation ratio comparing two-thumb vs. two-finger techniques. A Laerdal Baby ALS Trainer manikin was modified to digitally record compression rate, compression depth, compression pressure and ventilation cycle time (two mouth-to-mouth breaths). Manikin chest rise with breaths was video recorded and later reviewed by two blinded CPR instructors for percent effective breaths. Data (mean ± SD) were analyzed using a two-tailed paired t-test. Significance was defined as p ≤ 0.05. Mean % effective breaths were 90 ± 18.6% in two-thumb and 88.9 ± 21.1% in two-finger, p=0.65. Mean time (s) to deliver two mouth-to-mouth breaths was 7.6 ± 1.6 in two-thumb and 7.0 ± 1.5 in two-finger, p<0.0001. Mean delivered compressions per minute were 87 ± 11 in two-thumb and 92 ± 12 in two-finger, p=0.0005. Two-thumb resulted in significantly higher compression depth and compression pressure compared to the two-finger technique. Healthcare providers required 0.6 s longer to deliver two breaths during two-thumb lone rescuer infant CPR, but there was no significant difference in percent effective breaths delivered between the two techniques. Two-thumb CPR delivered 4 fewer compressions per minute, which may be offset by far more effective compression depth and compression pressure compared to the two-finger technique. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of total cost in data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and data volumes may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
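A toy version of the core idea, using an SVD to build the data-dependent orthogonal basis for one clip and uniformly quantizing the coefficients. The quantization step, clip size, and synthetic data are assumptions, and the paper's entropy-coding stage is omitted:

```python
import numpy as np

STEP = 0.01  # uniform quantizer step (assumed)

def compress_clip(clip, k):
    """Transform-code one mocap clip (frames x channels): subtract the
    mean, compute an SVD basis, keep k components, quantize."""
    mean = clip.mean(axis=0)
    u, s, vt = np.linalg.svd(clip - mean, full_matrices=False)
    coeffs = u[:, :k] * s[:k]                       # transform coefficients
    return np.round(coeffs / STEP).astype(np.int32), vt[:k], mean

def decompress_clip(q, basis, mean):
    return (q * STEP) @ basis + mean

# synthetic clip: 120 frames x 60 channels of smooth-ish motion
rng = np.random.default_rng(0)
clip = rng.standard_normal((120, 60)).cumsum(axis=0)
q, basis, mean = compress_clip(clip, k=8)
err = np.abs(decompress_clip(q, basis, mean) - clip).max()
print(f"max reconstruction error: {err:.3f}")
```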
Boroujeni, Nariman Mansoori; Zhou, Huan; Luchini, Timothy J F; Bhaduri, Sarit B
2013-10-01
In this study, we present results of our research on biodegradable monetite (DCPA, CaHPO4) cement with surface-modified multi-walled carbon nanotubes (mMWCNTs) as a potential bone defect repair material. The cement pastes showed desirable handling properties and a setting time suitable for use in a surgical setting. The incorporation of mMWCNTs shortened the setting time of DCPA and increased the compressive strength of DCPA cement from 11.09±1.85 MPa to 21.56±2.47 MPa. The cytocompatibility of the materials was investigated in vitro using the preosteoblast cell line MC3T3-E1. An increase in cell numbers was observed on both DCPA and DCPA-mMWCNTs. Scanning electron microscopy (SEM) results also revealed obvious cell growth on the surface of the cements. Based on these results, DCPA-mMWCNT composite cements can be considered potential bone defect repair materials. © 2013.
Compressed air injection technique to standardize block injection pressures.
Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J
2006-11-01
Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below a 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression was estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
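The 50% compression rule is a direct consequence of Boyle's law; a worked check, assuming the aspirated air starts at one atmosphere absolute (≈760 mmHg):

$$P_2 = \frac{P_1 V_1}{V_2} = \frac{760\ \text{mmHg}\cdot V_1}{V_1/2} = 1520\ \text{mmHg (absolute)},\qquad P_{\text{net}} = P_2 - P_1 \approx 760\ \text{mmHg},$$

in line with the 744.8 mmHg estimated from the regression and well below the 1293 mmHg injury-associated threshold.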
Cardiopulmonary resuscitation duty cycle in out-of-hospital cardiac arrest.
Johnson, Bryce V; Johnson, Bryce; Coult, Jason; Fahrenbruch, Carol; Blackwood, Jennifer; Sherman, Larry; Kudenchuk, Peter; Sayre, Michael; Rea, Thomas
2015-02-01
Duty cycle is the portion of time spent in compression relative to total time of the compression-decompression cycle. Guidelines recommend a 50% duty cycle based largely on animal investigation. We undertook a descriptive evaluation of duty cycle in human resuscitation, and whether duty cycle correlates with other CPR measures. We calculated the duty cycle, compression depth, and compression rate during EMS resuscitation of 164 patients with out-of-hospital ventricular fibrillation cardiac arrest. We captured force recordings from a chest accelerometer to measure ten-second CPR epochs that preceded rhythm analysis. Duty cycle was calculated using two methods. Effective compression time (ECT) is the time from beginning to end of compression divided by the total period for that compression-decompression cycle. Area duty cycle (ADC) is the ratio of the area under the force curve to the total area of one compression-decompression cycle. We evaluated the compression depth and compression rate according to duty cycle quartiles. There were 369 ten-second epochs among 164 patients. The median duty cycle was 38.8% (SD=5.5%) using ECT and 32.2% (SD=4.3%) using ADC. A relatively shorter compression phase (lower duty cycle) was associated with greater compression depth (test for trend <0.05 for ECT and ADC) and slower compression rate (test for trend <0.05 for ADC). Sixty-one of 164 patients (37%) survived to hospital discharge. Duty cycle was below the 50% recommended guideline, and was associated with compression depth and rate. These findings provide rationale to incorporate duty cycle into research aimed at understanding optimal CPR metrics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
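The two duty-cycle definitions above can be computed directly from a single-cycle force trace. In the sketch below, the start and end of compression are inferred from a small force threshold, which is an assumption rather than the authors' accelerometer-based segmentation:

```python
import numpy as np

def ect_and_adc(force, fs):
    """Effective compression time (ECT) and area duty cycle (ADC) for
    one compression-decompression cycle.

    force: 1-D force trace (N) for a single cycle, sampled at fs (Hz).
    ECT: fraction of the cycle with force above 5% of peak (assumed
         proxy for 'beginning to end of compression').
    ADC: area under the force curve / area of the bounding rectangle,
         which reduces to mean force over peak force.
    """
    ect = (force > 0.05 * force.max()).mean()
    adc = force.mean() / force.max()
    return ect, adc

# toy half-sine compression occupying ~40% of a 0.6 s cycle
fs = 1000
cycle = np.zeros(600)
cycle[:240] = 400 * np.sin(np.linspace(0.0, np.pi, 240))  # ~400 N peak
ect, adc = ect_and_adc(cycle, fs)
print(f"ECT = {ect:.2f}, ADC = {adc:.2f}")
```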
Multiresolution Distance Volumes for Progressive Surface Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, D E; Bertram, M; Duchaineau, M A
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
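A toy analogue of the approach: sample a signed-distance volume, take a multiresolution wavelet decomposition, and hard-threshold the detail coefficients. It uses the third-party PyWavelets package and an analytic sphere as stand-in data; it illustrates wavelet-compressed distance volumes only, not the paper's codec (no distance transform, zero-set initialization, or entropy coding):

```python
import numpy as np
import pywt  # third-party: PyWavelets

# signed-distance volume of a sphere on a 64^3 grid (zero set = surface)
n = 64
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.6

# multiresolution decomposition, then hard-threshold detail coefficients
coeffs = pywt.wavedecn(sdf, "db2", level=3)
for detail in coeffs[1:]:            # one dict of detail bands per level
    for key in detail:
        detail[key] = pywt.threshold(detail[key], 0.02, mode="hard")

recon = pywt.waverecn(coeffs, "db2")[:n, :n, :n]
kept = sum(int((d[k] != 0).sum()) for d in coeffs[1:] for k in d)
print(f"nonzero detail coefficients: {kept}, "
      f"max error: {np.abs(recon - sdf).max():.4f}")
```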
Synthetic optimization of air turbine for dental handpieces.
Shi, Z Y; Dong, T
2014-01-01
A synthetic optimization of the Pelton air turbine in dental handpieces, addressing power output, compressed air consumption and rotation speed simultaneously, is implemented by employing a standard design procedure and variable limits drawn from practical dentistry. The Pareto optimal solution sets acquired with the Normalized Normal Constraint method are mainly comprised of two piecewise continuous parts. On the Pareto frontier, the supply air stagnation pressure stalls at the lower boundary of the design space and the rotation speed is constant within the range recommended in the literature; the blade tip clearance is insensitive to the objectives, while the nozzle radius increases with power output and mass flow rate of compressed air, to which the remaining geometric dimensions show an opposite trend within their respective "pieces" compared to the nozzle radius.
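For readers unfamiliar with Pareto optimality: a design is kept only if no other design is at least as good in every objective and strictly better in one. The generic non-dominated filter below illustrates that definition (the paper itself generates the front with the Normalized Normal Constraint method, which this sketch does not implement; the objectives and values are invented):

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of an (n, m) array of objective
    vectors, with every objective to be minimized."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        if not keep[i]:
            continue
        # rows dominated by p: >= p everywhere and > p somewhere
        dominated = np.all(pts >= p, axis=1) & np.any(pts > p, axis=1)
        keep[dominated] = False
    return pts[keep]

# toy designs: minimize (-power output [W], air consumption [g/s])
designs = [[-12.0, 8.0], [-10.0, 6.0], [-9.0, 9.0], [-12.0, 7.5]]
print(pareto_front(designs))   # keeps [-10, 6] and [-12, 7.5]
```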
Grazziotin-Soares, R; Nekoofar, M H; Davies, T E; Bafail, A; Alhaddar, E; Hübler, R; Busato, A L S; Dummer, P M H
2014-06-01
To assess the effect of bismuth oxide (Bi2O3) on the chemical characterization and physical properties of White mineral trioxide aggregate (MTA) Angelus. Commercially available White MTA Angelus and White MTA Angelus without Bi2O3, provided by the manufacturer especially for this study, were subjected to the following tests: Rietveld X-ray diffraction analysis (XRD), energy-dispersive X-ray analysis (EDX), scanning electron microscopy (SEM), compressive strength, Vickers microhardness test and setting time. Chemical analysis data were reported descriptively, and physical properties were expressed as means and standard deviations. Data were analysed using Student's t-test and Mann-Whitney U test (P = 0.05). Calcium silicate peaks were reduced in the diffractograms of both hydrated materials. Bismuth particles were found on the surface of White MTA Angelus, and a greater amount of particles characterized as calcium hydroxide was observed by visual examination on White MTA without Bi2O3. The material without Bi2O3 had the shortest final setting time (38.33 min, P = 0.002), the highest Vickers microhardness mean value (72.35 MPa, P = 0.000) and similar compressive strength results (P = 0.329) when compared with the commercially available White MTA Angelus containing Bi2O3. The lack of Bi2O3 was associated with an increase in Vickers microhardness, a reduction in final setting time, absence of Bi2O3 peaks in diffractograms, as well as a large amount of calcium and a morphology characteristic of calcium hydroxide in EDX/SEM analysis. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Evaluation of amorphous magnesium phosphate (AMP) based non-exothermic orthopedic cements.
Babaie, Elham; Lin, Boren; Goel, Vijay K; Bhaduri, Sarit B
2016-10-07
This paper reports for the first time the development of a biodegradable, non-exothermic, self-setting orthopedic cement composition based on amorphous magnesium phosphate (AMP). The occurrence of undesirable exothermic reactions was avoided by using AMP as the solid precursor. Self-setting with optimum rheology is achieved by incorporating a water-soluble biocompatible/biodegradable polymer, polyvinyl alcohol (PVA). Additionally, PVA enables controlled growth of the final phase via a biomimetic process. The AMP powder was synthesized using a precipitation method. The powder, when in contact with the aqueous PVA solution, forms a putty resulting in a nanocrystalline magnesium phosphate phase of cattiite. The as-prepared cement compositions were evaluated for setting times, exothermicity, compressive strength, biodegradation, microstructural features before and after soaking in SBF, and in vitro cytocompatibility. Since cattiite is relatively unexplored in the literature, a first-time evaluation reveals that it is cytocompatible, like the other phases in the MgO-P2O5 (Mg-P) system. The cement composition prepared with 15% PVA in an aqueous medium achieved clinically relevant setting times, mechanical properties, and biodegradation. Simulated body fluid (SBF) soaking resulted in coating of bobierrite onto the cement particle surfaces.
Farruggia, Andrea; Gagie, Travis; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni
2018-05-01
Suffix trees are one of the most versatile data structures in stringology, with many applications in bioinformatics. Their main drawback is their size, which can be tens of times larger than the input sequence. Much effort has been put into reducing the space usage, leading ultimately to compressed suffix trees. These compressed data structures can efficiently simulate the suffix tree, while using space proportional to a compressed representation of the sequence. In this work, we take a new approach to compressed suffix trees for repetitive sequence collections, such as collections of individual genomes. We compress the suffix trees of individual sequences relative to the suffix tree of a reference sequence. These relative data structures provide competitive time/space trade-offs, being almost as small as the smallest compressed suffix trees for repetitive collections, and competitive in time with the largest and fastest compressed suffix trees.
Compressing DNA sequence databases with coil.
White, W Timothy J; Hendy, Michael D
2008-05-20
Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
COMPRESSIVE FATIGUE IN TITANIUM DENTAL IMPLANTS SUBMITTED TO FLUORIDE IONS ACTION
Ribeiro, Ana Lúcia Roselino; Noriega, Jorge Roberto; Dametto, Fábio Roberto; Vaz, Luís Geraldo
2007-01-01
The aim of this study was to assess the influence of a fluoridated medium on the mechanical properties of an internal hexagon implant-abutment set, by means of compression, mechanical cycling and metallographic characterization by scanning electron microscopy. Five years of regular oral hygiene with a sodium fluoride solution containing 1500 ppm were simulated by immersing the samples in this medium for 184 hours, with the solutions changed every 12 hours. Data were analyzed at a 95% confidence level with Fisher's exact test. After the action of fluoride ions, a negative influence was observed in the mechanical cycling test, performed in a servohydraulic machine (Material Test System-810) set to a frequency of 15 Hz with 100,000 cycles and programmed to 60% of the maximum resistance measured in the static compression test. The sets tended to fracture by compression at the screw, characterized by mixed ruptures with a predominance of fragile fracture, as observed by microscopy. Evidence of pitting corrosion on the sample surfaces was found after the action of fluoride ions. It may be concluded that prolonged contact with fluoride ions is harmful to the mechanical properties of commercially pure titanium structures. PMID:19089148
NASA Astrophysics Data System (ADS)
Alay, E.; Skotak, M.; Misistia, A.; Chandra, N.
2018-01-01
Dynamic loads on specimens in live-fire conditions as well as at different locations within and outside compressed-gas-driven shock tubes are determined by both static and total blast overpressure-time pressure pulses. The biomechanical loading on the specimen is determined by surface pressures that combine the effects of static, dynamic, and reflected pressures and specimen geometry. Surface pressure is both space and time dependent; it varies as a function of size, shape, and external contour of the specimens. In this work, we used two sets of specimens: (1) anthropometric dummy head and (2) a surrogate rodent headform instrumented with pressure sensors and subjected them to blast waves in the interior and at the exit of the shock tube. We demonstrate in this work that while inside the shock tube the biomechanical loading as determined by various pressure measures closely aligns with live-fire data and shock wave theory, significant deviations are found when tests are performed outside.
Effects of nano-SiO(2) and different ash particle sizes on sludge ash-cement mortar.
Lin, K L; Chang, W C; Lin, D F; Luo, H L; Tsai, M C
2008-09-01
The effects of nano-SiO(2) on three ash particle sizes in mortar were studied by replacing a portion of the cement with incinerated sewage sludge ash. Results indicate that the amount of water needed at standard consistency increased as more nano-SiO(2) was added. Moreover, a reduction in setting time became noticeable for smaller ash particle sizes. The compressive strength of the ash-cement mortar increased as more nano-SiO(2) was added. Additionally, with 2% nano-SiO(2) added and a curing period of 7 days, the compressive strength of the ash-cement mortar with 1 μm ash particle size was about 1.5 times that of the 75 μm particle size. Further, nano-SiO(2) functioned to fill pores in ash-cement mortar with different ash particle sizes. However, the effects of this pore-filling varied with ash particle size. Higher amounts of nano-SiO(2) had a greater influence on ash-cement mortar with larger ash particle sizes.
ERIC Educational Resources Information Center
Dailey, K. Anne
Time-compressed speech (also called compressed speech, speeded speech, or accelerated speech) is an extension of the normal recording procedure for reproducing the spoken word. Compressed speech can be used to achieve dramatic reductions in listening time without significant loss in comprehension. The implications of such temporal reductions in…
Mechanical and optical response of [100] lithium fluoride to multi-megabar dynamic pressures
NASA Astrophysics Data System (ADS)
Davis, Jean-Paul; Knudson, Marcus D.; Shulenburger, Luke; Crockett, Scott D.
2016-10-01
An understanding of the mechanical and optical properties of lithium fluoride (LiF) is essential to its use as a transparent tamper and window for dynamic materials experiments. In order to improve models for this material, we applied iterative Lagrangian analysis to ten independent sets of data from magnetically driven planar shockless compression experiments on single crystal [100] LiF to pressures as high as 350 GPa. We found that the compression response disagreed with a prevalent tabular equation of state for LiF that is commonly used to interpret shockless compression experiments. We also present complementary data from ab initio calculations performed using the diffusion quantum Monte Carlo method. The agreement between these two data sets lends confidence to our interpretation. In order to aid in future experimental analysis, we have modified the tabular equation of state to match the new data. We have also extended knowledge of the optical properties of LiF via shock-compression and shockless compression experiments, refining the transmissibility limit, measuring the refractive index to ˜300 GPa, and confirming the nonlinear dependence of the refractive index on density. We present a new model for the refractive index of LiF that includes temperature dependence and describe a procedure for correcting apparent velocity to true velocity for dynamic compression experiments.
A large-scale video codec comparison of x264, x265 and libvpx for practical VOD applications
NASA Astrophysics Data System (ADS)
De Cock, Jan; Mavlankar, Aditya; Moorthy, Anush; Aaron, Anne
2016-09-01
Over the last years, we have seen exciting improvements in video compression technology, due to the introduction of HEVC and royalty-free coding specifications such as VP9. The potential compression gains of HEVC over H.264/AVC have been demonstrated in different studies, and are usually based on the HM reference software. For VP9, substantial gains over H.264/AVC have been reported in some publications, whereas others reported less optimistic results. Differences in configurations between these publications make it more difficult to assess the true potential of VP9. Practical open-source encoder implementations such as x265 and libvpx (VP9) have matured, and are now showing high compression gains over x264. In this paper, we demonstrate the potential of these encoder implementations, with settings optimized for non-real-time random access, as used in a video-on-demand encoding pipeline. We report results from a large-scale video codec comparison test, which includes x264, x265 and libvpx. A test set consisting of a variety of titles with varying spatio-temporal characteristics from our catalog is used, resulting in tens of millions of encoded frames, hence larger than test sets previously used in the literature. Results are reported in terms of PSNR, SSIM, MS-SSIM, VIF and the recently introduced VMAF quality metric. BD-rate calculations show that using x265 and libvpx vs. x264 can lead to significant bitrate savings for the same quality. x265 outperforms libvpx in most cases, but the performance gap narrows (or even reverses) at the higher resolutions.
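BD-rate, used for the savings figures above, fits log-bitrate as a cubic in quality for each codec and integrates the gap over the overlapping quality range. A sketch with the classic four rate-quality points per codec (the RD numbers below are invented):

```python
import numpy as np

def bd_rate(rate_a, psnr_a, rate_b, psnr_b):
    """Bjontegaard-delta rate of codec B vs. A in percent
    (negative = B needs less bitrate for the same quality)."""
    fit_a = np.polyfit(psnr_a, np.log10(rate_a), 3)
    fit_b = np.polyfit(psnr_b, np.log10(rate_b), 3)
    lo = max(min(psnr_a), min(psnr_b))      # overlapping quality range
    hi = min(max(psnr_a), max(psnr_b))
    int_a = np.diff(np.polyval(np.polyint(fit_a), [lo, hi]))[0]
    int_b = np.diff(np.polyval(np.polyint(fit_b), [lo, hi]))[0]
    return (10 ** ((int_b - int_a) / (hi - lo)) - 1) * 100

# toy RD points (kbps, dB): codec B reaches similar quality at ~70% rate
r_a, q_a = [1000, 2000, 4000, 8000], [34.0, 37.0, 40.0, 43.0]
r_b, q_b = [700, 1400, 2800, 5600], [34.1, 37.2, 40.1, 43.2]
print(f"BD-rate: {bd_rate(r_a, q_a, r_b, q_b):+.1f}%")
```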
Eichhorn, S; Mendoza Garcia, A; Polski, M; Spindler, J; Stroh, A; Heller, M; Lange, R; Krane, M
2017-06-01
The provision of sufficient chest compression is among the most important factors influencing patient survival during cardiopulmonary resuscitation (CPR). One approach to optimize the quality of chest compressions is to use mechanical-resuscitation devices. The aim of this study was to compare a new device for chest compression (corpuls cpr) with an established device (LUCAS II). We used a mechanical thorax model consisting of a chest with variable stiffness and an integrated heart chamber which generated blood flow dependent on the compression depth and waveform. The method of blood-flow generation could be changed between direct cardiac-compression mode and thoracic-pump mode. Different chest-stiffness settings and compression modes were tested to generate various blood-flow profiles. Additionally, an endurance test at high stiffness was performed to measure overall performance and compression consistency. Both resuscitation machines were able to compress the model thorax with a frequency of 100/min and a depth of 5 cm, independent of the chosen chest stiffness. Both devices passed the endurance test without difficulty. The corpuls cpr device was able to generate about 10-40% more blood flow than the LUCAS II device, depending on the model settings. In most scenarios, the corpuls cpr device also generated a higher blood pressure than the LUCAS II. The peak compression forces during CPR were about 30% higher with the corpuls cpr device than with the LUCAS II. In this study, the corpuls cpr device produced better blood flow and pressure outcomes than the LUCAS II device. Further examination in an animal model is required to confirm the findings of this preliminary study.
The prevalence of chest compression leaning during in-hospital cardiopulmonary resuscitation
Fried, David A.; Leary, Marion; Smith, Douglas A.; Sutton, Robert M.; Niles, Dana; Herzberg, Daniel L.; Becker, Lance B.; Abella, Benjamin S.
2011-01-01
Objective Successful resuscitation from cardiac arrest requires the delivery of high-quality chest compressions, encompassing parameters such as adequate rate, depth, and full recoil between compressions. The lack of compression recoil ("leaning" or "incomplete recoil") has been shown to adversely affect hemodynamics in experimental arrest models, but the prevalence of leaning during actual resuscitation is poorly understood. We hypothesized that leaning varies across resuscitation events, possibly due to rescuer and/or patient characteristics, and may worsen over time from rescuer fatigue during continuous chest compressions. Methods This was an observational clinical cohort study at one academic medical center. Data were collected from adult in-hospital and Emergency Department arrest events using monitor/defibrillators that record chest compression characteristics and provide real-time feedback. Results We analyzed 112,569 chest compressions from 108 arrest episodes from 5/2007 to 2/2009. Leaning was present in 98/108 (91%) cases; 12% of all compressions exhibited leaning. Leaning varied widely across cases: 41/108 (38%) of arrest episodes exhibited <5% leaning, yet 20/108 (19%) demonstrated >20% compression leaning. When evaluating blocks of continuous compressions (>120 sec), only 4/33 (12%) had an increase in leaning over time and 29/33 (88%) showed a decrease (p<0.001). Conclusions Chest compression leaning was common during resuscitation care and exhibited a wide distribution, with most leaning within a subset of resuscitations. Leaning decreased over time during continuous chest compression blocks, suggesting either that leaning may not be a function of rescuer fatigue, or that it may have been mitigated by automated feedback provided during resuscitation episodes. PMID:21482010
NASA Astrophysics Data System (ADS)
Zdanowicz, E.; Guarino, V.; Konrad, C.; Williams, B.; Capatina, D.; D'Amico, K.; Arganbright, N.; Zimmerman, K.; Turneaure, S.; Gupta, Y. M.
2017-06-01
The Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS), located at Argonne National Laboratory (ANL), has a diverse set of dynamic compression drivers to obtain time resolved x-ray data in single event, dynamic compression experiments. Because the APS x-ray beam direction is fixed, each driver at DCS must have the capability to move through a large range of linear and angular motions with high precision to accommodate a wide variety of scientific needs. Particularly challenging was the design and implementation of the motion control system for the two-stage light gas gun, which rests on a 26' long structure and weighs over 2 tons. The target must be precisely positioned in the x-ray beam while remaining perpendicular to the gun barrel axis to ensure one-dimensional loading of samples. To accommodate these requirements, the entire structure can pivot through 60° of angular motion and move tens of inches along four independent linear directions with 0.01° and 10 μm resolution, respectively. This presentation will provide details of how this system was constructed, how it is controlled, and provide examples of the wide range of x-ray/sample geometries that can be accommodated. Work supported by DOE/NNSA.
Multiple Compressions in the Middle Energy Plasma Focus Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yousefi, H. R.; Ejiri, Y.; Ito, H.
This paper reports some of the results of an investigation of neutron emission from a middle-energy Mather-type plasma focus. The results indicate that as the pressure increases, the compression time increases, but there is no direct relation between compression time and neutron yield. It also appears that a multiple-compression regime occurs at low pressure, while a single compression appears at higher pressure, which is favorable to neutron production.
Adaptive efficient compression of genomes
2012-01-01
Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
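The referential idea reduces to encoding the input as reference matches plus literals. The greedy sketch below shows the principle only; production tools use suffix arrays or FM-indexes instead of this quadratic scan, and the opcode format is invented:

```python
def referential_compress(target, reference, min_match=20):
    """Encode target as ('ref', pos, length) matches against reference,
    with unmatched stretches stored as ('lit', text). Naive O(n*m)
    search; for illustration only."""
    ops, literal, i = [], [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        for j in range(len(reference)):   # longest match starting at i
            k = 0
            while (i + k < len(target) and j + k < len(reference)
                   and target[i + k] == reference[j + k]):
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= min_match:
            if literal:
                ops.append(("lit", "".join(literal)))
                literal = []
            ops.append(("ref", best_pos, best_len))
            i += best_len
        else:
            literal.append(target[i])
            i += 1
    if literal:
        ops.append(("lit", "".join(literal)))
    return ops

ref = "ACGT" * 50
tgt = ref[:120] + "TTTT" + ref[120:180]   # one short insertion
print(referential_compress(tgt, ref))
# [('ref', 0, 120), ('lit', 'TTTT'), ('ref', 0, 60)]
```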
NASA Astrophysics Data System (ADS)
Kurokawa, A. K.; Miwa, T.; Okumura, S.; Uesugi, K.
2017-12-01
After an ash-dominated Strombolian eruption, a considerable amount of ash falls back into the volcanic conduit, forming a dense near-surface region compacted by its own weight and that of other fallback clasts (Patrick et al., 2007). Gas accumulation below this dense cap causes a substantial increase in pressure within the conduit, shifting the volcanic activity to the preliminary stages of a forthcoming eruption (Del Bello et al., 2015). Under such conditions, the rheology of the fallback ash plays an important role because it controls whether the fallback ash can form the cap. However, little attention has been given to this point. We examined the rheology of ash collected at Stromboli volcano via intermittent compression experiments, varying temperature and compression time/rate. The ash was deformed at a constant rate during the compression process and then held without further deformation during the rest process. The compression and rest processes were repeated during each experiment to track rheological variations with the progression of compaction. Viscoelastic changes during the experiment were estimated with a Maxwell model. The results show that both elasticity and viscosity increase with decreasing porosity. The elasticity shows a strong rate dependence in both the compression and rest processes, while the viscosity depends mainly on temperature, although the compression rate also affects the viscosity in the compression process. Thus, the ash behaves either elastically or viscously depending on the experimental process, temperature, and compression rate/time. These viscoelastic characteristics can be explained by the magnitude relationships between the characteristic relaxation times and the durations of the compression and rest processes. This indicates that the balance of these time scales is key to the rheological characteristics, and whether the ash behaves elastically or viscously may control cyclic Strombolian eruptions.
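The relaxation-time argument can be made concrete with the Maxwell model used for the estimates: a spring (modulus $E$) and a dashpot (viscosity $\eta$) in series,

$$\dot{\varepsilon} = \frac{\dot{\sigma}}{E} + \frac{\sigma}{\eta}, \qquad \tau = \frac{\eta}{E},$$

so that under constant strain the stress relaxes as $\sigma(t) = \sigma_0 e^{-t/\tau}$: the ash responds elastically when the compression or rest time is much shorter than $\tau$ and viscously when it is much longer. (These are standard Maxwell-model relations, consistent with but not quoted from the abstract.)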
Roth, Robert Paul; Hahn, David C.; Scaringe, Robert P.
2015-12-08
A device and method are provided to improve performance of a vapor compression system using a retrofittable control board to start up the vapor compression system with the evaporator blower initially set to a high speed. A baseline evaporator operating temperature with the evaporator blower operating at the high speed is recorded, and then the device detects if a predetermined acceptable change in evaporator temperature has occurred. The evaporator blower speed is reduced from the initially set high speed as long as there is only a negligible change in the measured evaporator temperature and therefore a negligible difference in the compressor's power consumption so as to obtain a net increase in the Coefficient of Performance.
A Semianalytical Analysis of Compressible Electrophoretic Cake Formation
NASA Astrophysics Data System (ADS)
Kambham, Kiran K. R.; Tuncay, Kagan; Corapcioglu, M. Yavuz
1995-05-01
Leaks in geomembrane liners of waste landfills and liquid impoundments cause chemical contaminants to leak into the subsurface environment. A mathematical model is presented to simulate electrophoretic sealing of impoundment leaks. The model describes the formation of a compressible clay cake because of electrical and gravitational forces. The model includes mass balance equations for the solid particles and liquid phase, modified Darcy's law in an electrical field, and Terzaghi's definition of effective stress. The formulation is presented in the Eulerian coordinates. The resulting second-order, nonlinear partial differential equation and the lower boundary condition are linearized to obtain an analytical solution for time-dependent settlement. After discretizing in time, the analytical solution is applied to simulate compression of an accreting sediment. In the simulation of an accreting sediment, solid fluxes on either side of the suspension/sediment interface are coupled using a no-jump condition. The velocity of a discrete particle in the suspension zone is assumed to be equal to the algebraic sum of the electrophoretic and Stokes' settling velocities. An empirical relationship available in the literature is used to account for the effect of concentration on the velocity of solid particles in the suspension zone. The validity of the semianalytical approach is partially verified using an exact steady state solution for self-weight consolidation. The simulation results obtained for a set of material parameters are presented graphically. It is noted that the electrokinetic consolidation of sediment continues even after the completion of electrophoretic settling of all clay particles. An analysis reveals that the electrophoretic cake formation process is quite sensitive to voltage gradient and the coefficient of compressibility.
Musatti, Alida; Manzoni, Matilde; Rollini, Manuela
2013-01-25
The study was aimed at investigating the best biotransformation conditions to increase intracellular glutathione (GSH) levels in samples of baker's yeast (Saccharomyces cerevisiae), using either of the commercially available compressed or dried forms. Glucose, the GSH precursor amino acids, and other cofactors were dissolved in a biotransformation solution and yeast cells were added (5%dcw). Two response surface central composite designs (RSCCDs) were performed in sequence: in the first step the influence of amino acid composition (cysteine, glycine, glutamic acid and serine) on GSH accumulation was investigated; once their formulation was set up, the influence of the other components was studied. The initial GSH content was 0.53 and 0.47%dcw for the compressed and dried forms, respectively. The GSH accumulation ability of baker's yeast in compressed form was higher at the beginning of shelf life, that is, in the first week, and a maximum of 2.04%dcw was obtained. The performance of yeast in dried form was not satisfactory, as the maximum GSH level was 1.18%dcw. When cysteine is absent from the reaction solution, yeast cells do not accumulate GSH. With dried yeast, the highest GSH yields occurred when cysteine was set at 3 g/L, glycine and glutamic acid at least at 4 g/L, and serine omitted. With compressed yeast, the highest GSH yields occurred when cysteine and glutamic acid were set at 2-3 g/L and glycine and serine above 2 g/L. These results allowed an optimal and feasible procedure to be established for obtaining GSH-enriched yeast biomass, with up to a threefold increase over the initial content. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tomo, H. S. S.; Ujianto, O.; Rizal, R.; Pratama, Y.
2017-07-01
A thermoplastic composite was prepared from polypropylene granules as the matrix, kenaf fiber as the reinforcement, and maleic anhydride grafted polypropylene as the coupling agent. Composite products were produced as sandwich structures using compression molding. This research aimed to observe the influence of the number of plies, temperature, pressure, and compression time using a factorial design. The effects of these variables on tensile and flexural strength were analyzed. Experimental results showed that tensile and flexural strength were influenced by degradation, fiber compaction, and matrix-fiber interaction mechanisms. Flexural strength was significantly affected by the number of plies and its interaction with the other process parameters (temperature, pressure, and compression time), but the process parameters had no significant effect on tensile strength. The highest tensile strength (62.0 MPa) was produced at 3 plies, 210 °C, 50 bar, and 3 min compression time (low, high, high, low), while the highest flexural strength (80.3 MPa) was produced at 3 plies, 190 °C, 50 bar, and 3 min compression time (low, low, high, low).
Novel Data Reduction Based on Statistical Similarity
Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...
2016-07-18
Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements and transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures essential features while reducing the storage requirement. In this paper, we report our design and implementation of such a compression method named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression methods while maintaining the quality of the compressed data. Finally, in these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
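A toy realization of the exchangeability idea: store a block only when it is statistically distinguishable from everything already stored, otherwise emit a reference. Here similarity is judged with a two-sample Kolmogorov-Smirnov test, which is one plausible stand-in for IDEALEM's actual test; the block size and threshold are assumptions:

```python
import numpy as np
from scipy import stats

def reduce_blocks(data, block=100, alpha=0.05):
    """Replace each block with a reference to a statistically similar
    stored block when possible; otherwise store the block itself."""
    stored, ops = [], []
    for start in range(0, len(data) - block + 1, block):
        b = data[start:start + block]
        for idx, s in enumerate(stored):
            if stats.ks_2samp(b, s).pvalue > alpha:  # indistinguishable
                ops.append(("ref", idx))
                break
        else:
            ops.append(("blk", len(stored)))
            stored.append(b)
    return stored, ops

rng = np.random.default_rng(1)
# steady signal with a short 'extraordinary event' near the end
signal = np.concatenate([rng.normal(0, 1, 4000), rng.normal(5, 1, 400)])
stored, ops = reduce_blocks(signal)
print(f"{len(ops)} blocks -> {len(stored)} stored; event blocks kept verbatim")
```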
Shi, Jianyong; Qian, Xuede; Liu, Xiaodong; Sun, Long; Liao, Zhiqiang
2016-09-01
The total compression of municipal solid waste (MSW) consists of primary, secondary, and decomposition compressions. It is usually difficult to distinguish between these three parts of the compression. In this study, the oedometer test was used to distinguish between the primary and secondary compressions and to determine the primary and secondary compression coefficients. In addition, the ending time of the primary compression was proposed based on municipal solid waste compression tests under a degradation-inhibited condition achieved by adding vinegar. The amount of secondary compression occurring in the primary compression stage accounts for a relatively high percentage of both the total compression and the total secondary compression. The relationship between the degradation ratio and time was obtained from the tests independently. Furthermore, a combined compression calculation method for municipal solid waste covering all three parts of the compression, including organics degradation, is proposed based on a one-dimensional compression method. The relationship between the methane generation potential L0 of the LandGEM model and the degradation compression index is also discussed in the paper. A special column compression apparatus system, which can be used to simulate the whole compression process of municipal solid waste in China, was designed. According to the results obtained from the 197-day column compression test, the new combined calculation method for municipal solid waste compression was analyzed. The degradation compression is the main part of the compression of MSW in the middle of the test period. Copyright © 2015 Elsevier Ltd. All rights reserved.
Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo
2014-06-01
In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals into which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy with respect to the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
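A hedged sketch of this kind of cost function is given below: a candidate scheme is scored by a weighted sum of the class entropy within groups of identical discrete strings, the number of distinct strings, and the length of the representation. The piecewise-mean discretization, the weights, and the demo data are assumptions; an evolutionary programming loop would mutate the cut points and segment count and keep lower-cost schemes.

```python
# Sketch of a three-criterion cost for a discretization scheme:
# (i) class entropy within identical strings, (ii) number of distinct
# strings, (iii) length of the discrete representation.
import math
from collections import Counter, defaultdict

def discretize(series, cut_points, n_segments):
    # Piecewise-mean compression in time, then amplitude binning.
    seg_len = max(1, len(series) // n_segments)
    means = [sum(series[i:i+seg_len]) / len(series[i:i+seg_len])
             for i in range(0, len(series), seg_len)]
    symbol = lambda v: sum(v > c for c in cut_points)
    return tuple(symbol(m) for m in means)

def cost(dataset, labels, cut_points, n_segments, w=(1.0, 0.01, 0.01)):
    strings = [discretize(s, cut_points, n_segments) for s in dataset]
    by_string = defaultdict(list)
    for s, y in zip(strings, labels):
        by_string[s].append(y)
    ent = 0.0   # size-weighted mean class entropy per identical string
    for ys in by_string.values():
        counts, n = Counter(ys), len(ys)
        ent += -sum(c/n * math.log2(c/n) for c in counts.values()) * n / len(labels)
    return w[0]*ent + w[1]*len(by_string) + w[2]*n_segments

data = [[0, 1, 2, 3, 4, 5, 6, 7], [7, 6, 5, 4, 3, 2, 1, 0]]
labels = [0, 1]
print(cost(data, labels, cut_points=[3.5], n_segments=4))
```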
Lee, Bin-Na; Chun, Soo-Ji; Chang, Hoon-Sang; Hwang, Yun-Chan; Hwang, In-Nam; Oh, Won-Mann
2017-01-01
Methylcellulose (MC) is a chemical compound derived from cellulose. MTA mixed with MC has a reduced setting time and increased plasticity. This study assessed the influence of MC as an anti-washout ingredient and CaCl2 as a setting time accelerator on the physical and biological properties of MTA. Test materials were divided into 3 groups: Group 1 (control), distilled water; Group 2, 1% MC/CaCl2; Group 3, 2% MC/CaCl2. Compressive strength, pH, flowability, and cell viability were tested. The gene expression of bone sialoprotein (BSP) was detected by RT-PCR and real-time PCR. The expression of alkaline phosphatase (ALP) and mineralization behavior were evaluated using ALP staining and alizarin red staining. Compressive strength, pH, and cell viability of MTA mixed with MC/CaCl2 were not significantly different from the control group. The flowability of MTA with MC/CaCl2 decreased significantly compared to the control (p<.05). The mRNA level of BSP increased significantly in MTA with MC/CaCl2 compared to the control (p<.05). This study revealed higher expression of ALP and mineralization in cells exposed to MTA mixed with water and MTA mixed with MC/CaCl2 compared to the control (p<.05). MC decreased the flowability of MTA but did not impair the physical and biological effects of MTA, suggesting that these cements may be useful as a root-end filling material.
Maximal compression of the redshift-space galaxy power spectrum and bispectrum
NASA Astrophysics Data System (ADS)
Gualdi, Davide; Manera, Marc; Joachimi, Benjamin; Lahav, Ofer
2018-05-01
We explore two methods of compressing the redshift-space galaxy power spectrum and bispectrum with respect to a chosen set of cosmological parameters. Both methods involve reducing the dimension of the original data vector (e.g. 1000 elements) to the number of cosmological parameters considered (e.g. seven) using the Karhunen-Loève algorithm. In the first case, we run MCMC sampling on the compressed data vector in order to recover the 1D and 2D posterior distributions. The second option, approximately 2000 times faster, works by orthogonalizing the parameter space through diagonalization of the Fisher information matrix before the compression, obtaining the posterior distributions without the need of MCMC sampling. Using these methods for future spectroscopic redshift surveys like DESI, Euclid, and PFS would drastically reduce the number of simulations needed to compute accurate covariance matrices with minimal loss of constraining power. We consider a redshift bin of a DESI-like experiment. Using the power spectrum combined with the bispectrum as a data vector, both compression methods on average recover the 68 per cent credible regions to within 0.7 per cent and 2 per cent of those resulting from standard MCMC sampling, respectively. These confidence intervals are also smaller than the ones obtained using only the power spectrum by 81 per cent, 80 per cent, and 82 per cent, respectively, for the bias parameter b1, the growth rate f, and the scalar amplitude parameter As.
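The following numpy sketch shows the generic Karhunen-Loève (MOPED-style) step such methods build on: one weight vector per parameter, built from the inverse data covariance and the model derivatives and orthogonalized so that the compressed numbers are uncorrelated. All inputs here are random placeholders, an assumption for illustration, not the survey likelihood of the paper.

```python
# MOPED-style Karhunen-Loeve compression: one number per parameter.
# C is the data covariance; mu_derivs holds d(mu)/d(theta_i) per parameter.
import numpy as np

def kl_weights(C, mu_derivs):
    Cinv = np.linalg.inv(C)
    B = []
    for dmu in mu_derivs:                       # one weight vector per parameter
        b = Cinv @ dmu
        for prev in B:                          # Gram-Schmidt in the C metric:
            b = b - (prev @ C @ b) * prev       # makes compressed numbers uncorrelated
        b = b / np.sqrt(b @ C @ b)              # unit variance
        B.append(b)
    return np.array(B)                          # shape (n_params, n_data)

rng = np.random.default_rng(1)
n_data, n_params = 1000, 7
C = np.eye(n_data)                              # placeholder covariance
mu_derivs = rng.normal(size=(n_params, n_data)) # placeholder derivatives
B = kl_weights(C, mu_derivs)
data = rng.normal(size=n_data)
compressed = B @ data                           # 1000 numbers -> 7 numbers
print(compressed.shape)
```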
Stress relaxation in quasi-two-dimensional self-assembled nanoparticle monolayers
NASA Astrophysics Data System (ADS)
Boucheron, Leandra S.; Stanley, Jacob T.; Dai, Yeling; You, Siheng Sean; Parzyck, Christopher T.; Narayanan, Suresh; Sandy, Alec R.; Jiang, Zhang; Meron, Mati; Lin, Binhua; Shpyrko, Oleg G.
2018-05-01
We experimentally probed the stress relaxation of a monolayer of iron oxide nanoparticles at the water-air interface. Upon drop-casting onto a water surface, the nanoparticles self-assembled into islands of two-dimensional hexagonally close packed crystalline domains surrounded by large voids. When compressed laterally, the voids gradually disappeared as the surface pressure increased. After the compression was stopped, the surface pressure (as measured by a Wilhelmy plate) evolved as a function of the film aging time with three distinct timescales. These aging dynamics were intrinsic to the stressed state built up during the non-equilibrium compression of the film. Utilizing x-ray photon correlation spectroscopy, we measured the characteristic relaxation time (τ) of in-plane nanoparticle motion as a function of the aging time through both second-order and two-time autocorrelation analysis. Compressed and stretched exponential fitting of the intermediate scattering function yielded exponents (β) indicating different relaxation mechanisms of the films under different compression stresses. For a monolayer compressed to a lower surface pressure (between 20 mN/m and 30 mN/m), the relaxation time (τ) decreased continuously as a function of the aging time, as did the fitted exponent, which transitioned from being compressed (>1) to stretched (<1), indicating that the monolayer underwent a stress release through crystalline domain reorganization. However, for a monolayer compressed to a higher surface pressure (around 40 mN/m), the relaxation time increased continuously and the compressed exponent varied very little from a value of 1.6, suggesting that the system may have been highly stressed and jammed. Despite the interesting stress relaxation signatures seen in these samples, the structural ordering of the monolayer remained the same over the sample lifetime, as revealed by grazing incidence x-ray diffraction.
Method for data compression by associating complex numbers with files of data values
Feo, J.T.; Hanks, D.C.; Kraay, T.A.
1998-02-10
A method for compressing data for storage or transmission is disclosed. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file. 4 figs.
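A minimal sketch of the root-generated-data-file idea follows: every grid point in the complex plane is iterated to a root by Newton's method, and the value attached to that root becomes the reconstructed entry. The cubic polynomial, the grid, and the value map are illustrative assumptions, not the patent's search algorithm.

```python
# Reconstruct data entries from grid coordinates alone: each point is
# driven to a polynomial root by Newton's method, and the value assigned
# to that root is the entry.
import numpy as np

coeffs = [1, 0, 0, -1]                  # z^3 - 1: three roots
deriv = np.polyder(coeffs)
roots = np.roots(coeffs)
value_map = {0: 10, 1: 20, 2: 30}       # value assigned to each root (assumed)

def entry_for_point(z, iters=50):
    for _ in range(iters):              # Newton iteration toward a root
        z = z - np.polyval(coeffs, z) / np.polyval(deriv, z)
    k = int(np.argmin(np.abs(roots - z)))   # index of the nearest root
    return value_map[k]

grid = [complex(x, y) for x in (-1.0, 0.2, 1.0) for y in (-1.0, 0.5, 1.0)]
print([entry_for_point(z) for z in grid])   # a small "file" of values
```

In the decompression direction only the polynomial, the value map, and the grid convention need to be transmitted, which is the source of the claimed compression.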
NASA Astrophysics Data System (ADS)
Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing
2014-07-01
Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), widely used for imaging soft samples thanks to many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by going beyond the Shannon sampling theorem, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional unique benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and providing real-time display in SICM imaging.
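The sketch below shows the block-measurement step that block compressive sensing rests on: the image is cut into small blocks that share one random Gaussian measurement matrix, so each block can be acquired and later reconstructed independently, which is what permits progressive, near-real-time display. The block size and sampling ratio are assumptions; per-block reconstruction would use a sparse solver, which is omitted here.

```python
# Block compressive sensing measurement: y_i = Phi x_i, block by block,
# with one shared random Gaussian measurement matrix Phi.
import numpy as np

rng = np.random.default_rng(0)
B = 16                                        # block side length
n = B * B                                     # pixels per block
m = n // 4                                    # measurements per block (4x fewer)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # shared measurement matrix

def measure(image):
    h, w = image.shape
    ys = []
    for r in range(0, h, B):
        for c in range(0, w, B):
            x = image[r:r+B, c:c+B].reshape(n)
            ys.append(Phi @ x)                # compressed samples for this block
    return ys

image = rng.random((64, 64))
measurements = measure(image)
print(len(measurements), "blocks,", measurements[0].shape[0], "samples each")
```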
Real-Time Aggressive Image Data Compression
1990-03-31
Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression, implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates…
Time-Varying Distortions of Binaural Information by Bilateral Hearing Aids
Rodriguez, Francisco A.; Portnuff, Cory D. F.; Goupell, Matthew J.; Tollin, Daniel J.
2016-01-01
In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users’ access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant–vowel–consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD–ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed. PMID:27698258
Changes In the Pickup Ion Cutoff Under Variable Solar Wind Conditions
NASA Astrophysics Data System (ADS)
Bower, J.; Moebius, E.; Taut, A.; Berger, L.; Drews, C.; Lee, M. A.; Farrugia, C. J.
2017-12-01
We present the first systematic analysis to determine pickup ion (PUI) cutoff speed variations, both during compression regions, identified by their structure, and during times of highly variable solar wind (SW) speed or magnetic field strength. This study is motivated by the attempt to remove or correct these effects in the determination of the longitude of the interstellar neutral gas flow from the flow-pattern-related variation of the PUI cutoff with ecliptic longitude. At the same time, this study sheds light on the physical mechanisms that lead to energy transfer between the SW and the embedded PUI population. Using 2007-2014 STEREO A PLASTIC observations we identify compression regions in the solar wind and analyze the PUI velocity distribution function (VDF). We developed a routine to identify stream interaction regions and CIRs, by identifying the stream interface and the successive velocity increase in the solar wind speed and density. Characterizing these individual compression events and combining them in a superposed epoch analysis allows us to analyze the PUI population under similar conditions and find the local cutoff shift with adequate statistics. The result of this method yields cutoff shifts for compression regions with large solar wind speed gradients. Additionally, by sorting the entire set of PUI VDFs at high time resolution we obtain a noticeable correlation of the cutoff shift with gradients in the SW speed and interplanetary magnetic field strength. We will discuss implications for the understanding of the PUI VDF evolution and the PUI cutoff analysis of the interstellar gas flow.
Motion-compensated compressed sensing for dynamic imaging
NASA Astrophysics Data System (ADS)
Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali
2010-08-01
The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
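A hedged sketch of the support-prediction step described above: the previous frame is warped by the estimated motion, and its largest transform coefficients define the "known" part of the support for recovering the current frame. The integer-pixel shift, the DCT, and the 5% support size are stand-in assumptions for real motion compensation and for whatever sparsifying transform a given reconstruction uses.

```python
# Predict the sparse support of the next frame by motion-compensating the
# previous frame and keeping its largest transform coefficients.
import numpy as np
from scipy.fft import dctn

def predicted_support(prev_frame, motion, keep_frac=0.05):
    dy, dx = motion
    warped = np.roll(prev_frame, shift=(dy, dx), axis=(0, 1))  # crude MC
    coeffs = dctn(warped, norm='ortho')
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep_frac)      # top 5%
    return np.abs(coeffs) >= thresh      # boolean mask: "known" support

prev = np.zeros((64, 64)); prev[20:30, 20:30] = 1.0
support = predicted_support(prev, motion=(3, -2))
print(support.sum(), "coefficients assumed known for the next frame")
```

A partially-known-support solver would then penalize only the coefficients outside this mask, which is what lets frames with substantial motion still benefit from the previous frame.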
Modeling turbulent energy behavior and sudden viscous dissipation in compressing plasma turbulence
Davidovits, Seth; Fisch, Nathaniel J.
2017-12-21
Here, we present a simple model for the turbulent kinetic energy behavior of subsonic plasma turbulence undergoing isotropic three-dimensional compression, which may exist in various inertial confinement fusion experiments or astrophysical settings. The plasma viscosity depends on both the temperature and the ionization state, for which many possible scalings with compression are possible. For example, in an adiabatic compression the temperature scales as 1/L^2, with L the linear compression ratio, but if thermal energy loss mechanisms are accounted for, the temperature scaling may be weaker. As such, the viscosity has a wide range of net dependencies on the compression. The model presented here, with no parameter changes, agrees well with numerical simulations for a range of these dependencies. This model permits the prediction of the partition of injected energy between thermal and turbulent energy in a compressing plasma.
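As an illustrative worked scaling (our assumption for concreteness, extending only the quoted adiabatic example): with γ = 5/3, a fixed charge state Z, and the unmagnetized Braginskii ion viscosity μ ∝ T^{5/2}/Z^4, the kinematic viscosity grows as the compression proceeds:

```latex
% Worked example under assumed adiabatic, fixed-Z conditions.
\begin{aligned}
\rho &\propto L^{-3}, \qquad T \propto \rho^{\gamma-1} \propto L^{-2},\\
\mu  &\propto \frac{T^{5/2}}{Z^{4}} \propto L^{-5}
  \quad \text{(unmagnetized Braginskii ion viscosity)},\\
\nu  &= \mu/\rho \propto L^{-5}\cdot L^{3} = L^{-2}.
\end{aligned}
```

This unbounded growth of ν as L decreases is what can make the viscous dissipation of the turbulent energy sudden; weaker temperature scalings, as noted in the abstract, soften or remove this growth.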
Muzíková, J; Páleník, L
2005-05-01
The paper studies the tensile strength and disintegration time of compacts from the mixed dry binder MicroceLac 100. Tensile strength and disintegration time of tablets were tested in connection with the following factors: compression force, compression rate, addition of magnesium stearate, addition of ascorbic acid, and the model active principle. The compression forces employed were 5, 6, and 7 kN; the compression rates, 20 and 40 mm/min; the stearate concentrations, 0, 0.4, and 0.8%; and the ascorbic acid concentrations, 25 and 50%. With increasing addition of the stearate, the strength of compacts from MicroceLac 100 decreased for both compression rates, but at the higher rate and a stearate concentration of 0.4%, the decrease in strength was more marked. Disintegration time increased with compression force and the addition of the stearate, but in all cases it was very short. Increased addition of ascorbic acid further intensified the decrease in the strength of compacts and decreased the disintegration time and the effect of the stearate on it. The disintegration time of compacts with ascorbic acid in a concentration of 50% did not increase with compression force.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of large genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
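For orientation, the sketch below shows the simplest fixed-width bit coding for DNA, 2 bits per base, i.e. a 2.0 bits/base baseline before any repeat handling. It only illustrates the bit-assignment principle; DNABIT Compress itself assigns distinct bit codes to exact-repeat and reverse-repeat fragments to push below this baseline, and those codes are not reproduced here.

```python
# Classic 2-bit packing of A/C/G/T: four bases per byte.
CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):            # 4 bases per byte
        byte = 0
        for ch in seq[i:i+4]:
            byte = (byte << 2) | CODE[ch]
        byte <<= 2 * (4 - len(seq[i:i+4]))     # pad a short tail
        out.append(byte)
    return bytes(out)

def unpack(data, n_bases):
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(byte >> shift) & 0b11])
    return ''.join(seq[:n_bases])

s = "ACGTACGTTTGA"
assert unpack(pack(s), len(s)) == s            # 12 bases -> 3 bytes
```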
The Unsupervised Acquisition of a Lexicon from Continuous Speech.
1995-11-01
… Communication, 2(1):57-89, 1982. [42] J. Ziv and A. Lempel. Compression of individual sequences by variable rate coding. IEEE Transactions on … parameters of the compression algorithm, in a never-ending attempt to identify and eliminate the predictable. They lead us to a class of grammars in … the first 10 sentences of the test set, previously unseen by the algorithm. Vertical bars indicate word boundaries. 7.1 Text Compression and Language …
Nonlinear frequency compression: effects on sound quality ratings of speech and music.
Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas
2013-03-01
Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality.
NASA Astrophysics Data System (ADS)
Santos, Serge Dos; Farova, Zuzana; Kus, Vaclav; Prevorovsky, Zdenek
2012-05-01
This paper examines possibilities of using Nonlinear Elastic Wave Spectroscopy (NEWS) methods in dental investigations. The main task consisted of imaging cracks or other degradation signatures located in dentin close to the Enamel-Dentine Junction (EDJ). The NEWS approach was investigated experimentally with a new bi-modal acousto-optic set-up based on chirp-coded nonlinear ultrasonic time reversal (TR) concepts. The complex internal structure of the tooth is analyzed by the TR-NEWS procedure adapted to tomography-like imaging of tooth damage. Ultrasonic instrumentation with 10 MHz bandwidth was assembled, including a laser vibrometer used to detect the responses of the tooth to excitation by a contact piezoelectric transducer. Bi-modal TR-NEWS images of the tooth were created before and after focusing, which resulted from the time compression. The polar B-scan of the tooth realized with the TR-NEWS procedure is suggested as a new echodentography imaging method.
NASA Technical Reports Server (NTRS)
Tseng, K.; Morino, L.
1975-01-01
A general theory for steady, oscillatory, or fully unsteady potential compressible aerodynamics around complex configurations is presented. Using the finite-element method to discretize the space problem, one obtains a set of differential-delay equations in time relating the potential to its normal derivative, which is expressed in terms of the generalized coordinates of the structure. For oscillatory flow, the motion consists of sinusoidal oscillations around a steady, subsonic or supersonic flow. For fully unsteady flow, the motion is assumed to consist of constant subsonic or supersonic speed for time t ≤ 0 and of small perturbations around the steady state for time t > 0.
Evaluation of different fast melting disintegrants by means of a central composite design.
Di Martino, Piera; Martelli, Sante; Wehrlé, Pascal
2005-01-01
Fast-disintegration technologies have attracted increased interest from industry in the past decades. In order to guide formulators in the choice of the best disintegrating agent, the most common disintegrants were selected and their ability to quickly disintegrate directly compressed tablets was evaluated. For this study, a central composite design was used. The main factors included were the concentration of disintegrant (X1) and the compression force (X2). These factors were studied for tablets containing either Zeparox or Pearlitol 200 as soluble diluents and six different disintegrants: L-HPC LH11 and LH31, Lycatab PGS, Vivasol, Kollidon CL, and Explotab. Their micromeritic properties were determined beforehand. The response variables were disintegration time (Y1), tensile strength (Y2), and porosity (Y3). Whatever the diluent, the longest disintegration time is obtained with Vivasol as the disintegrant, while Kollidon CL leads to the shortest disintegration times. Except for Lycatab PGS and L-HPC LH11, formulations with Pearlitol 200 disintegrate faster. Almost the same results are obtained with porosity: no relevant effect of disintegrant concentration is observed, since porosity is mainly correlated with the compression force. In particular, the highest values are obtained with Zeparox as the diluent when compared to Pearlitol 200, and as far as the type of disintegrant is concerned, no difference is observed. The tensile strength models have all been statistically validated and are all highly dependent on the compression force. Lycatab PGS concentration does not affect disintegration time, which is mainly increased by increasing the compression pressure. When Pearlitol 200 is used with Vivasol, disintegration time is more influenced by the disintegrant concentration than by the compression pressure, an increase in concentration leading to a significant and relevant increase of the disintegration time. With Zeparox, the interaction between the two controlled variables is more complex: there is no effect of compression force on the disintegration time for a small amount of disintegrant, but a significant increase for higher concentrations. With Kollidon CL, the main factor influencing the disintegration time is the compression force, rather than the disintegrant concentration. Increasing both the compression force and the disintegrant concentration leads to an increase of the disintegration time. For lower Kollidon CL percentages, the compression pressure dramatically increases tablet disintegration. With Explotab, whatever the compression force, increasing the disintegrant concentration increases the disintegration time. According to Student's t-test, only the compression force significantly and strongly influences the disintegration time when Pearlitol 200 is used. A slight interaction and some trends nevertheless appear: above 150 MPa, increasing the disintegrant concentration shortens the disintegration time, while below this limit the opposite effect is observed.
Compressive passive millimeter wave imager
Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C
2015-01-27
A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
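A minimal numpy sketch of the measurement model follows: each acquisition is the inner product of the scene with one Hadamard pattern, and only a subset of the patterns is used. The scene size, the 25% sampling, and the adjoint-based quick estimate are assumptions; a real reconstruction would use a compressive sensing solver.

```python
# Compressive imaging with a subset of Hadamard patterns.
import numpy as np
from scipy.linalg import hadamard

n = 64                                        # scene with n pixels (flattened)
H = hadamard(n).astype(float)                 # +/-1 mask patterns
scene = np.zeros(n); scene[10:18] = 1.0       # simple 1D "object"

rng = np.random.default_rng(0)
rows = rng.choice(n, size=n // 4, replace=False)   # 25% of acquisitions
y = H[rows] @ scene                           # measured intensities

# Because H is orthogonal (H @ H.T = n*I), a quick estimate applies the
# adjoint to the sampled rows; a real system would use a sparse solver.
estimate = H[rows].T @ y / n
print(np.round(estimate[8:20], 2))
```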
Evaluation of a Conductive Elastomer Seal for Spacecraft
NASA Technical Reports Server (NTRS)
Daniels, C. C.; Mather, J. L.; Oravec, H. A.; Dunlap, P. H., Jr.
2016-01-01
An electrically conductive elastomer was evaluated as a material candidate for a spacecraft seal. The elastomer used electrically conductive constituents as a means to reduce the resistance between mating interfaces of a sealed joint to meet spacecraft electrical bonding requirements. The compound's outgassing levels were compared against published NASA requirements. The compound was formed into a hollow O-ring seal and its compression set was measured. The O-ring seal was placed into an interface and the electrical resistance and leak rate were quantified. The amount of force required to fully compress the test article in the sealing interface and the force needed to separate the joint were also measured. The outgassing and resistance measurements were below the maximum allowable levels. The room temperature compression set and leak rates were fairly high when compared against other typical spacecraft seal materials, but were not excessive. The compression and adhesion forces were desirably low. Overall, the performance of the elastomer compound was sufficient to be considered for future spacecraft seal applications.
Evaluation of a newly developed infant chest compression technique
Smereka, Jacek; Bielski, Karol; Ladny, Jerzy R.; Ruetzler, Kurt; Szarpak, Lukasz
2017-01-01
Background: Providing adequate chest compression is essential during infant cardio-pulmonary resuscitation (CPR) but has been reported to be performed poorly. The "new 2-thumb technique" (nTTT), which consists of using 2 thumbs directed at an angle of 90° to the chest while closing the fingers of both hands in a fist, was recently introduced. Therefore, the aim of this study was to compare 3 chest compression techniques, namely, the 2-finger technique (TFT), the 2-thumb technique (TTHT), and the nTTT in a randomized infant-CPR manikin setting. Methods: A total of 73 paramedics with at least 1 year of clinical experience performed 3 CPR settings with a chest compression:ventilation ratio of 15:2, according to current guidelines. Chest compression was performed with 1 of the 3 chest compression techniques in a randomized sequence. Chest compression rate and depth, chest decompression, and adequate ventilation after chest compression served as outcome parameters. Results: The chest compression depth was 29 (IQR, 28–29) mm in the TFT group, 42 (40–43) mm in the TTHT group, and 40 (39–40) mm in the nTTT group (TFT vs TTHT, P < 0.001; TFT vs nTTT, P < 0.001; TTHT vs nTTT, P < 0.01). The median compression rate with TFT, TTHT, and nTTT varied and amounted to 136 (IQR, 133–144) min–1 versus 117 (115–121) min–1 versus 111 (109–113) min–1. There was a statistically significant difference in the compression rate between TFT and TTHT (P < 0.001), TFT and nTTT (P < 0.001), as well as TTHT and nTTT (P < 0.001). Incorrect decompressions after chest compression were significantly increased in the TTHT group compared with the TFT (P < 0.001) and the nTTT (P < 0.001) groups. Conclusions: The nTTT provides adequate chest compression depth and rate and was associated with adequate chest decompression and the possibility to adequately ventilate the infant manikin. Further clinical studies are necessary to confirm these initial findings. PMID:28383397
Compressed normalized block difference for object tracking
NASA Astrophysics Data System (ADS)
Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge
2018-04-01
Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers were based on compressed Haar-like features, and how to compress other, more expressive high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively, the high-dimensional normalized pixel difference (NPD) feature is extended from two pixels in the original NPD formula to two blocks, giving a normalized block difference feature. A CNBD feature is then obtained by compressing the normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR, and Precision.
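The sketch below illustrates the construction as described: the pixel-pair NPD feature f(x, y) = (x - y)/(x + y) is lifted to block means, a large set of such features is formed, and a sparse random Gaussian matrix compresses it. The block size, feature count, and sparsity level are assumptions.

```python
# Normalized block difference features compressed by a sparse random
# Gaussian measurement matrix.
import numpy as np

def block_mean(img, r, c, b):
    return img[r:r+b, c:c+b].mean()

def nbd_features(img, pairs, b=4):
    feats = []
    for (r1, c1), (r2, c2) in pairs:
        x, y = block_mean(img, r1, c1, b), block_mean(img, r2, c2, b)
        feats.append(0.0 if x + y == 0 else (x - y) / (x + y))
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
pairs = [((rng.integers(0, 28), rng.integers(0, 28)),
          (rng.integers(0, 28), rng.integers(0, 28))) for _ in range(500)]
high_dim = nbd_features(img, pairs)               # 500-D NBD feature
# Sparse random Gaussian matrix: Gaussian entries kept with prob. 0.1.
M = rng.normal(size=(50, 500)) * (rng.random((50, 500)) < 0.1)
compressed = M @ high_dim                         # 50-D CNBD feature
print(compressed.shape)
```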
Kim, Dong-Ae; Abo-Mosallam, Hany; Lee, Hye-Young; Lee, Jung-Hwan; Kim, Hae-Won; Lee, Hae-Hyoung
2015-01-01
Some weaknesses of conventional glass ionomer cement (GIC) as a dental material, for instance the lack of bioactive potential and poor mechanical properties, remain unsolved. Objective: The purpose of this study was to investigate the effects of the partial replacement of CaO with MgO or ZnO on the mechanical and biological properties of experimental glass ionomer cements. Material and Methods: Calcium fluoro-alumino-silicate glass was prepared for an experimental glass ionomer cement by the melt quenching technique. The glass composition was modified by partial replacement (10 mol%) of CaO with MgO or ZnO. Net setting time, compressive and flexural properties, and in vitro rat dental pulp stem cell (rDPSC) viability were examined for the prepared GICs and compared to a commercial GIC. Results: The experimental GICs set more slowly than the commercial product, but their extended setting times are still within the maximum limit (8 min) specified in ISO 9917-1. Compressive strength of the experimental GIC was not increased by the partial substitution of CaO with either MgO or ZnO, but was comparable to the commercial control. For flexural properties, although there was no significant difference between the base and the modified glass, all prepared GICs showed a statistically higher flexural strength (p<0.05) and a modulus comparable to the control. The modified cements showed increased cell viability for rDPSCs. Conclusions: The experimental GICs modified with MgO or ZnO can be considered bioactive dental materials.
Satellite on-board real-time SAR processor prototype
NASA Astrophysics Data System (ADS)
Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François
2017-11-01
A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically applying dedicated Fourier transformations. This, however, can also be performed optically in real-time. Originally the first SAR images were optically processed. The optical Fourier processor architecture provides inherent parallel computing capabilities allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel yielding real-time performances, i.e. without resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.
An image compression survey and algorithm switching based on scene activity
NASA Technical Reports Server (NTRS)
Hart, M. M.
1985-01-01
Data compression techniques are presented. A description of these techniques is provided along with a performance evaluation. The complexity of the hardware resulting from their implementation is also addressed. The compression effect on channel distortion and the applicability of these algorithms to real-time processing are presented. Also included is a proposed new direction for an adaptive compression technique for real-time processing.
Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias
2012-10-11
Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic, compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning e.g. the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.
Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro
2015-01-01
Recent medical information systems are striving towards real-time monitoring models to care for patients anytime and anywhere through ECG signals. However, there are several limitations, such as data distortion and limited bandwidth, in wireless communications. In order to overcome such limitations, this research focuses on compression. Few studies have developed a compression algorithm specialized for ECG data transmission in real-time monitoring wireless networks, and the algorithms of recent studies are not appropriate for ECG signals. Therefore this paper presents a further developed algorithm, EDLZW, for efficient ECG data transmission. Results showed that the EDLZW compression ratio was 8.66, a performance roughly 4 times better than other compression methods widely used today.
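Since EDLZW extends the dictionary coder LZW, a minimal classic LZW encoder is sketched below for orientation. This is the baseline scheme only, under the assumption that readers want the dictionary-coding idea; the ECG-specific adaptations of EDLZW are not reproduced here.

```python
# Minimal classic LZW encoder for byte strings.
def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}   # initial dictionary
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # extend the current phrase
        else:
            out.append(table[w])                  # emit code for known phrase
            table[wc] = len(table)                # learn the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"abababababab")
print(codes)   # repeated content collapses to a few dictionary codes
```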
Managment oriented analysis of sediment yield time compression
NASA Astrophysics Data System (ADS)
Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed
2016-04-01
The understanding of inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield is often highly dependent on the occurrence of a few large events which produce the majority of sediments, such as the Mediterranean. This phenomenon is referred to as time compression, and the relevance of its consideration grows with the increase in magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, which provide high resolution but demand extensive analysis (in terms of data amount, required data precision, and methods). In order to provide an alternative simplified approach, the monthly and yearly time compressions were evaluated in eight Mediterranean catchments (of the R-OSMed network), representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27100 Mg•km-2•a-1, and the monthly sediment yield between 0 and ~11600 Mg•km-2•month-1. The catchments' sediment yield was unequally distributed at inter- and intra-annual scales, and large differences were observed between the catchments. Two types of time compression were distinguished: (i) inter-annual (based on annual values) and (ii) intra-annual (based on monthly values). Four different rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff, or sediment yield, (ii) low time compression of rainfall and runoff, but high compression of sediment yield, (iii) low compression of rainfall and high compression of runoff and sediment yield, and (iv) low, medium, and high compression of rainfall, runoff, and sediment yield, respectively. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the latter two were present. This implies that high sediment yields occurred in particular months, even in catchments with low or no inter-annual time compression. The analysis of seasonality of time compression showed that in most of the catchments large sediment yields were more likely to occur between October and January, while in two catchments it was in summer (June and July). The appropriate sediment yield management measure, whether enhancement of soil properties, (dis)connectivity measures, or vegetation cover, should therefore be selected with regard to the type of inter-annual time compression, the properties of the individual catchments, and the magnitudes of sediment yield. To increase the effectiveness and lower the costs of the applied measures, management in the months or periods when large sediment yields are most likely to occur should be prioritized. The analysis of monthly time compression can be used for their identification in areas where no event datasets are available. The R-OSMed network of Mediterranean erosion research catchments was funded by "SicMed-Mistrals" grants from 2011 to 2014. Anna Smetanová has received the support of the European Union, in the framework of the Marie-Curie FP7 COFUND People Programme, through the award of an AgreenSkills' fellowship (under grant agreement n° 267196). João Pedro Nunes has received support from the European Union (in the framework of the European Social Fund) and the Portuguese Government under a post-doctoral fellowship (SFRH/BPD/87571/2012).
Advanced End-to-end Simulation for On-board Processing (AESOP)
NASA Technical Reports Server (NTRS)
Mazer, Alan S.
1994-01-01
Developers of data compression algorithms typically use their own software together with commercial packages to implement, evaluate and demonstrate their work. While convenient for an individual developer, this approach makes it difficult to build on or use another's work without intimate knowledge of each component. When several people or groups work on different parts of the same problem, the larger view can be lost. What's needed is a simple piece of software to stand in the gap and link together the efforts of different people, enabling them to build on each other's work, and providing a base for engineers and scientists to evaluate the parts as a cohesive whole and make design decisions. AESOP (Advanced End-to-end Simulation for On-board Processing) attempts to meet this need by providing a graphical interface to a developer-selected set of algorithms, interfacing with compiled code and standalone programs, as well as procedures written in the IDL and PV-Wave command languages. As a proof of concept, AESOP is outfitted with several data compression algorithms integrating previous work on different processors (AT&T DSP32C, TI TMS320C30, SPARC). The user can specify at run-time the processor on which individual parts of the compression should run. Compressed data is then fed through simulated transmission and uncompression to evaluate the effects of compression parameters, noise and error correction algorithms. The following sections describe AESOP in detail. Section 2 describes fundamental goals for usability. Section 3 describes the implementation. Sections 4 through 5 describe how to add new functionality to the system and present the existing data compression algorithms. Sections 6 and 7 discuss portability and future work.
NASA Astrophysics Data System (ADS)
Carvalho, Jorge M. F.
2018-05-01
The Maciço Calcário Estremenho (MCE) is an uplifted Jurassic limestone massif unit of the Lusitanian Basin, Portugal, where five main joint sets trending NNE-SSW, WSW-ENE, WNW-ESE, NW-SE, and NNW-SSE are recognized. Except for the NNW-SSE set, all the other sets host calcite veins and barren joints, evidencing a multistage development by several deformation episodes, including shear reactivation. Orthogonal patterns defined by the NNE-SSW/WNW-ESE and NNW-SSE/WSW-ENE systems are characteristic of some tectonostratigraphic units of the MCE, but the sets of each one of the systems are genetically independent. They result from specific deformation episodes undergone by the studied area in the course of its Meso-Cenozoic evolution. NNE-SSW calcite veins were the first to form during Middle Jurassic fault-controlled subsidence. A renewal of this set as barren joints took place during the Eocene Pyrenean compressive phase. The WSW-ENE and WNW-ESE sets have a restricted spatial distribution and relate to transient compressive episodes of the Middle - Late Jurassic and Jurassic - Cretaceous transitions, respectively. The NW-SE set, also characteristic of a specific region, formed during the Late Jurassic rifting and is related to local NE-SW tension dependent on block tilting towards a major NW-SE fault. The Miocene Betic compressive phase is responsible for the formation of the NNW-SSE set, which is widespread throughout the MCE.
Adult-like processing of time-compressed speech by newborns: A NIRS study.
Issard, Cécile; Gervain, Judit
2017-06-01
Humans can adapt to a wide range of variations in the speech signal, maintaining an invariant representation of the linguistic information it contains. Among these, adaptation to rapid or time-compressed speech has been well studied in adults, but the developmental origin of this capacity remains unknown. Does this ability depend on experience with speech (and if so, as heard in utero or as heard postnatally), with sounds in general, or is it experience-independent? Using near-infrared spectroscopy, we show that the newborn brain can discriminate between three different compression rates: normal, i.e. 100% of the original duration; moderately compressed, i.e. 60% of the original duration; and highly compressed, i.e. 30% of the original duration. Even more interestingly, responses to normal and moderately compressed speech are similar, showing a canonical hemodynamic response in the left temporoparietal, right frontal and right temporal cortex, while responses to highly compressed speech are inverted, showing a decrease in oxyhemoglobin concentration. These results mirror those found in adults, who readily adapt to moderately compressed, but not to highly compressed speech, showing that adaptation to time-compressed speech requires little or no experience with speech, and happens at an auditory, rather than a more abstract linguistic, level. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Properties of microcement mortar with nano particles
NASA Astrophysics Data System (ADS)
Alimeneti, Narasimha Reddy
Carbon nanotubes (CNT) and carbon nanofibers (CNF) are presently among the toughest and stiffest materials in the world, with extreme properties, in terms of elastic modulus and tensile strength, still being explored. Owing to these advanced properties, they are used at the nanolevel in almost all fields of science and have recently been used in the construction industry for the improvement of material properties. Microcement is finely ground cement which has half the particle size of ordinary Portland cement. In this research, the behavior of microcement mortar with the addition of nanoparticles is studied. Due to the high aspect ratio and strong van der Waals forces between the CNT and CNF particles, they agglomerate and form bundles when mixed with water; the sonication method is therefore used to mix the nanoparticles with a few drops of surfactant and superplasticizer. Mechanical properties such as compressive strength and flexural strength of the CNT and CNF composites are examined and compared with control samples. In this research, 0.1% and 0.05% of nanoparticles (both CNT and CNF) by weight of cement were used, along with 0.8% of superplasticizer by weight of cement and water-cement ratios of 0.4, 0.45, and 0.50 for the compression test specimens. The compressive strength results are not fully satisfactory, as there was no consistent increase in strength across all the composites; however, the strength of a few nanocomposites increased by a good percentage. The 0.5 water-cement ratio mortar had a compressive strength of 7.15 ksi (49.3 MPa), whereas the sample with 0.1% CNT showed 8.38 ksi (57.8 MPa), a 17% increase in strength after 28 days. The same trend was followed at the 0.4 water-cement ratio: the compressive strength of the control sample was 8.89 ksi (61.3 MPa), and with 0.05% CNT the strength increased to 10.90 ksi (75.2 MPa), a 23% increase. A 0.4 water-cement ratio was used for the flexural tests, including 0.1% and 0.05% of CNT and 0.1% and 0.05% of CNF with a 0.008 ratio of superplasticizer. Results showed that there was a significant increase in strength initially, but it gradually decreased with time and showed reduced strength at 28 days when compared to the control samples. The flow cone results are quite satisfying, as the flow is significantly increased with the addition of nanoparticles. The time of efflux of the control sample was 16.22 s, whereas the specimen with CNT had a time of efflux of 12.67 s and the sample with CNF showed 13.65 s. The setting time test was carried out at a 0.4 water-cement ratio. Composites with nanoparticles exhibited faster setting when compared to the control sample. Bleeding was not observed with the nanoparticles in the cement mortar. The shrinkage test was conducted on samples with a 0.4 water-cement ratio with 0.05% of CNT and CNF. Shrinkage was very small in the samples with nanoparticles.
46 CFR 112.50-7 - Compressed air starting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING EMERGENCY LIGHTING AND POWER SYSTEMS Emergency Diesel and Gas Turbine Engine Driven Generator Sets § 112.50-7 Compressed... emergency power source. If this compressor supplies other auxiliaries, there must be a non-return valve at...
The effect of an external mechanical compression on in vivo optical properties of human skin
NASA Astrophysics Data System (ADS)
Nakhaeva, I. A.; Mohammed, M. R.; Zyuryukina, O. A.; Sinichkin, Yu. P.
2014-09-01
We have studied the influence of an external mechanical compression on the diffuse reflection spectra of skin tissue under in vivo conditions. An analysis of these spectra, based on the diffusion approximation of radiation transfer theory, shows that applying external compression weakens the absorbing and scattering properties of the skin tissue. After the compression is removed, the recovery time of the skin tissue (on the order of 1 h) considerably exceeds the stabilization time of its parameters after the compression is applied (several minutes). Immediately after the removal of the compression, the blood content of the vessels and the degree of oxygenation of blood hemoglobin in the skin tissue increase considerably compared with normal skin.
Study of on-board compression of earth resources data
NASA Technical Reports Server (NTRS)
Habibi, A.
1975-01-01
The current literature on image bandwidth compression was surveyed and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data was then analyzed statistically and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail by all of the above criteria. Finally, merits and deficiencies were summarized and a number of recommendations for future NASA activities in data compression proposed.
Mechanical and optical response of [100] lithium fluoride to multi-megabar dynamic pressures
Davis, Jean-Paul; Knudson, Marcus D.; Shulenburger, Luke; ...
2016-10-26
An understanding of the mechanical and optical properties of lithium fluoride (LiF) is essential to its use as a transparent tamper and window for dynamic materials experiments. In order to improve models for this material, we applied iterative Lagrangian analysis to ten independent sets of data from magnetically driven planar shockless compression experiments on single crystal [100] LiF to pressures as high as 350 GPa. We found that the compression response disagreed with a prevalent tabular equation of state for LiF that is commonly used to interpret shockless compression experiments. We also present complementary data from ab initio calculations performed using the diffusion quantum Monte Carlo method. The agreement between these two data sets lends confidence to our interpretation. In order to aid in future experimental analysis, we have modified the tabular equation of state to match the new data. We have also extended knowledge of the optical properties of LiF via shock-compression and shockless compression experiments, refining the transmissibility limit, measuring the refractive index to ~300 GPa, and confirming the nonlinear dependence of the refractive index on density. Lastly, we present a new model for the refractive index of LiF that includes temperature dependence and describe a procedure for correcting apparent velocity to true velocity for dynamic compression experiments.
Properties of Chemically Combusted Calcium Carbide Residue and Its Influence on Cement Properties.
Sun, Hongfang; Li, Zishanshan; Bai, Jing; Memon, Shazim Ali; Dong, Biqin; Fang, Yuan; Xu, Weiting; Xing, Feng
2015-02-13
Calcium carbide residue (CCR) is a waste by-product of acetylene gas production. The main component of CCR is Ca(OH)₂, which can react with siliceous materials through pozzolanic reactions, yielding a product similar to that obtained from the cement hydration process. Thus, it is possible to use CCR as a substitute for Portland cement in concrete. In this research, we synthesized CCR and silica fume through a chemical combustion technique to produce a new reactive cementitious powder (RCP). The properties of paste and mortar in fresh and hardened states (setting time, shrinkage, and compressive strength) with 5% cement replacement by RCP were evaluated. The hydration of RCP and OPC (Ordinary Portland Cement) pastes was also examined through SEM (scanning electron microscopy). Test results showed that, compared with the control OPC mix, the hydration products of the RCP mix took longer to form. The initial and final setting times were prolonged, while the drying shrinkage was significantly reduced. The compressive strength at the age of 45 days for the RCP mortar mix was found to be higher than that of OPC mortar and OPC mortar with silica fume by 10% and 8%, respectively. Therefore, the synthesized RCP proved to be a sustainable active cementitious powder for the strength enhancement of building materials, which can divert significant quantities of this by-product from landfills.
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) have worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
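The coder structure (prediction, quantization of the prediction error, entropy coding) can be sketched compactly. The Python toy below assumes a simple 1-D previous-sample predictor and a uniform quantizer in place of the non-adaptive 2-D predictor, non-uniform quantizer, and multilevel Huffman coder described above; it only illustrates why DPCM error stays bounded when the encoder tracks the decoder's reconstruction.

```python
import numpy as np

def dpcm_encode(line, quant_step=4):
    """Toy intra-line DPCM: previous-sample predictor + uniform quantizer."""
    pred, out = 0, []
    for s in line:
        e = int(s) - pred
        q = int(round(e / quant_step))   # quantized prediction error (to be entropy coded)
        out.append(q)
        pred += q * quant_step           # track the decoder's reconstruction
    return out

def dpcm_decode(codes, quant_step=4):
    pred, out = 0, []
    for q in codes:
        pred += q * quant_step
        out.append(pred)
    return out

line = (128 + 60 * np.sin(np.arange(720) / 40)).astype(np.uint8)
recon = dpcm_decode(dpcm_encode(line))
print(max(abs(int(a) - b) for a, b in zip(line, recon)))  # bounded by ~quant_step/2
```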
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
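As a concrete illustration of the first (band-independent) approach, here is a minimal Python sketch assuming the PyWavelets package; hard thresholding stands in for the paper's quantize-and-entropy-code stage, and the cube dimensions are arbitrary stand-ins.

```python
import numpy as np
import pywt

def compress_band(band, wavelet="db4", level=3, keep=0.05):
    """Wavelet-transform-code one spectral band as an independent 2-D image."""
    coeffs = pywt.wavedec2(band, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr = pywt.threshold(arr, thresh, mode="hard")   # stand-in for quantization
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

cube = np.random.rand(64, 64, 100)   # (x, y, band): toy hyperspectral cube
recon = np.stack([compress_band(cube[:, :, b]) for b in range(cube.shape[2])], axis=2)
print(float(np.mean((cube - recon) ** 2)))
```

The second and third approaches would respectively insert a spectral decorrelating transform before this per-band loop, or replace it with a single 3-D wavelet transform.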
Dataset on predictive compressive strength model for self-compacting concrete.
Ofuyatan, O M; Edeki, S O
2018-04-01
The determination of compressive strength is affected by many variables, such as the water-cement (WC) ratio, the superplasticizer (SP), the aggregate combination, and the binder combination. In this dataset article, 7-, 28-, and 90-day compressive strength models are derived using statistical analysis. The response surface methodology is used to investigate the effect of the parameters (varying percentages of ash, cement, WC, and SP) on the hardened property of compressive strength at 7, 28, and 90 days. The levels of the independent parameters are determined based on preliminary experiments. The experimental values for compressive strength at 7, 28, and 90 days and modulus of elasticity under different treatment conditions are also discussed and presented. This dataset can effectively be used for modelling and prediction in concrete production settings.
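A response-surface fit of this kind is essentially a second-order polynomial regression on the mix parameters. The sketch below, assuming scikit-learn and entirely invented mix data (the article's actual dataset is not reproduced here), shows the shape of such a model for 28-day strength.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical mix-design matrix: ash fraction, cement content (kg/m3),
# water-cement ratio, superplasticizer dosage (%). Values are illustrative.
X = np.array([[0.10, 450, 0.40, 1.5],
              [0.20, 430, 0.42, 1.8],
              [0.30, 410, 0.45, 2.0],
              [0.15, 440, 0.38, 1.6],
              [0.25, 420, 0.44, 1.9]])
y28 = np.array([48.0, 45.5, 41.0, 50.2, 43.7])   # 28-day strength, MPa (made up)

# Degree-2 polynomial features give the quadratic response surface.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y28)
print(model.predict([[0.18, 435, 0.41, 1.7]]))   # predicted strength for a new mix
```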
NASA Astrophysics Data System (ADS)
Singh, Mamta; Gupta, D. N.
2018-01-01
The inclusion of laser absorption in plasmas plays an important role in laser-plasma interactions. In this work, laser pulse compression in weakly relativistic plasmas is revisited by incorporating collision-based laser absorption effects. By considering the role of laser absorption in plasmas, a set of coupled nonlinear equations is derived to describe the evolution of pulse compression. The laser pulse compression is reduced by collisional absorption in the plasma. Faster dispersion is also observed with increasing absorption coefficient, owing to the strong energy attenuation in the plasma. Using our theoretical model, the involvement and importance of a particular absorption mechanism for pulse compression in plasmas are analyzed.
Cortegiani, Andrea; Russotto, Vincenzo; Montalto, Francesca; Iozzo, Pasquale; Meschis, Roberta; Pugliesi, Marinella; Mariano, Dario; Benenati, Vincenzo; Raineri, Santi Maurizio; Gregoretti, Cesare; Giarratano, Antonino
2017-01-01
High-quality chest compressions are pivotal to improving survival from cardiac arrest, and basic life support training of school students is an international priority. The aim of this trial was to assess the effectiveness of real-time training software (Laerdal QCPR®) compared with standard instructor-based feedback for chest compression skill acquisition in secondary school students. After an interactive frontal lesson about basic life support and high-quality chest compressions, 144 students were randomized to two types of chest compression training: 1) using Laerdal QCPR® (QCPR group, 72 students) for real-time feedback during chest compressions, with the guidance of an instructor who used the software data to correct students; 2) standard instructor-based feedback (SF group, 72 students). Both groups had a training session of at least 2 minutes of chest compressions, and students were required to reach a minimum technical skill level before the evaluation. We evaluated all students 7 days after the training with a 2-minute chest compression session. The primary outcome was the compression score, an overall measure of chest compression quality calculated by the software and expressed as a percentage. 125 students were present at the evaluation session (60 from the QCPR group and 65 from the SF group). Students in the QCPR group had a significantly higher compression score (median 90%, IQR 81.9-96.0) compared with the SF group (median 67%, IQR 27.7-87.5), p = 0.0003. Students in the QCPR group performed a significantly higher percentage of fully released chest compressions (71% [IQR 24.5-99.0] vs 24% [IQR 2.5-88.2]; p = 0.005) and a better chest compression rate (117.5/min [IQR 106-123.5] vs 125/min [115-135.2]; p = 0.001). In secondary school students, chest compression training based on real-time feedback software (Laerdal QCPR®) guided by an instructor is superior to instructor-based feedback training in terms of chest compression technical skill acquisition. Australian New Zealand Clinical Trials Registry ACTRN12616000383460.
Tibiotalocalcaneal Arthrodesis Nails: A Comparison of Nails With and Without Internal Compression.
Taylor, James; Lucas, Douglas E; Riley, Aimee; Simpson, G Alex; Philbin, Terrence M
2016-03-01
Hindfoot arthrodesis with tibiotalocalcaneal (TTC) intramedullary nails is used commonly when treating ankle and subtalar arthritis and other hindfoot pathology. Adequate compression is paramount to avoid nonunion and fatigue fracture of the hardware. Arthrodesis systems with internal compression have demonstrated superior compression to systems relying on external methods. This study examined the speed of union with TTC fusion nails with internal compression over nails without internal compression. A retrospective review was performed identifying nail type and time to union of the subtalar joint (STJ) and tibiotalar joint (TTJ). A total of 198 patients were included from 2003 to 2011. The median time to STJ fusion without internal compression was 104 days compared to 92 days with internal compression (P = .044). The median time to TTJ fusion without internal compression was 111 days compared to 93 days with internal compression (P = .010). Adjusting for diabetes, there was no significant difference in fusion speed with or without internal compression for the STJ (P = .561) or TTJ (P = .358). Nonunion rates were 24.5% for the STJ and 17.0% for the TTJ with internal compression, and 43.4% for the STJ and 42.1% for the TTJ without internal compression. This difference remained statistically significant after adjusting for diabetes for the TTJ (P = .001) but not for the STJ (P = .194). The intramedullary hindfoot arthrodesis nail was a viable treatment option in degenerative joint disease of the TTC joint. There appeared to be an advantage using systems with internal compression; however, there was no statistically significant difference after controlling for diabetes. Level III, retrospective comparative series. © The Author(s) 2015.
Mechanism and preparation of an alkali-free liquid setting accelerator for shotcrete
NASA Astrophysics Data System (ADS)
Qiu, Ying; Ding, Bei; Gan, Jiezhong; Guo, Zhaolai; Zheng, Chunyang; Jiang, Haidong
2017-03-01
A new alkali-free liquid accelerator for shotcrete was prepared through a normal-temperature drop process using nano activated alumina and modified alcohol amine as the main raw materials. The effects of the alkali-free liquid accelerator on cement setting time and on the mechanical properties of mortar, as well as the effect of penetration strength on shotcrete rebound, were investigated. The accelerating mechanism of the as-prepared alkali-free liquid accelerator was also analyzed via XRD and SEM characterization. The experimental results indicated that the hydration of C3A was accelerated by the polyamine complexation of the accelerator, forming a large amount of acicular ettringite and reducing the amount of Ca(OH)2 crystals, which did not affect the later hydration of the cement. When the content of alkali-free liquid accelerator was 6%, the initial and final setting times were less than 3 min and 8 min, respectively, and the 1 d and 28 d compressive strength ratios reached 207.6% and 114.2%, respectively; besides that, the shotcrete rebound was very low because of the high penetration strength within 30 min.
ERIC Educational Resources Information Center
Pastore, Raymond S.
2009-01-01
The purpose of this study was to examine the effects of visual representations and time-compressed instruction on learning and learners' perceptions of cognitive load. Time-compressed instruction refers to instruction that has been increased in speed without sacrificing quality. It was anticipated that learners would be able to gain a conceptual…
5 CFR 610.406 - Holiday for employees on compressed work schedules.
Code of Federal Regulations, 2010 CFR
2010-01-01
... number of hours of the compressed work schedule on that day. (b) If a part-time employee is relieved or... of the compressed work schedule on that day. When a holiday falls on a nonworkday of a part-time... work schedules. (a) If a full-time employee is relieved or prevented from working on a day designated...
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different part of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
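The first scheme is simple enough to sketch. Below is a hedged Python illustration using numpy and zlib; the 4-bit color reduction and the zlib level are illustrative choices, not the TEEVE project's exact parameters.

```python
import numpy as np
import zlib

def compress_frame(color, depth, color_bits=4):
    """Scheme 1 from the abstract: reduce color precision, then compress
    color and depth together with a general-purpose byte compressor."""
    reduced = (color >> (8 - color_bits)).astype(np.uint8)  # drop low bits/channel
    payload = reduced.tobytes() + depth.astype(np.uint16).tobytes()
    return zlib.compress(payload, level=6)

color = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.randint(0, 4096, (480, 640), dtype=np.uint16)
blob = compress_frame(color, depth)
# Compression ratio is ~1 on random data; real captured frames, with their
# spatial coherence, compress far better.
print((color.nbytes + depth.nbytes) / len(blob))
```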
42 CFR 84.141 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Supplied-Air...) Compressed, gaseous breathing air shall meet the applicable minimum grade requirements for Type I gaseous air set forth in the Compressed Gas Association Commodity Specification for Air, G-7.1, 1966 (Grade D or...
46 CFR 112.50-7 - Compressed air starting.
Code of Federal Regulations, 2011 CFR
2011-10-01
... AND POWER SYSTEMS Emergency Diesel and Gas Turbine Engine Driven Generator Sets § 112.50-7 Compressed... emergency generator room and a handcranked, diesel-powered air compressor for recharging the air receiver..., and energy storing devices must be in the emergency generator room, except for the main or auxiliary...
Data Reorganization for Optimal Time Series Data Access, Analysis, and Visualization
NASA Astrophysics Data System (ADS)
Rui, H.; Teng, W. L.; Strub, R.; Vollmer, B.
2012-12-01
The way data are archived is often not optimal for their access by many user communities (e.g., hydrological), particularly if the data volumes and/or number of data files are large. The number of data records of a non-static data set generally increases with time. Therefore, most data sets are commonly archived by time steps, one step per file, often containing multiple variables. However, many research and application efforts need time series data for a given geographical location or area, i.e., a data organization that is orthogonal to the way the data are archived. The retrieval of a time series of the entire temporal coverage of a data set for a single variable at a single data point, in an optimal way, is an important and longstanding challenge, especially for large science data sets (i.e., with volumes greater than 100 GB). Two examples of such large data sets are the North American Land Data Assimilation System (NLDAS) and the Global Land Data Assimilation System (GLDAS), archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC; Hydrology Data Holdings Portal, http://disc.sci.gsfc.nasa.gov/hydrology/data-holdings). To date, the NLDAS data set, hourly at 0.125x0.125° from Jan. 1, 1979 to present, has a total volume greater than 3 TB (compressed). The GLDAS data set, 3-hourly and monthly at 0.25x0.25° and 1.0x1.0° from Jan. 1948 to present, has a total volume greater than 1 TB (compressed). Both data sets are accessible, in the archived time step format, via several convenient methods, including Mirador search and download (http://mirador.gsfc.nasa.gov/), the GrADS Data Server (GDS; http://hydro1.sci.gsfc.nasa.gov/dods/), direct FTP (ftp://hydro1.sci.gsfc.nasa.gov/data/s4pa/), and Giovanni Online Visualization and Analysis (http://disc.sci.gsfc.nasa.gov/giovanni). However, users who need long time series currently have no efficient way to retrieve them. Continuing a longstanding tradition of facilitating data access, analysis, and visualization that contribute to knowledge discovery from large science data sets, the GES DISC recently began a NASA ACCESS-funded project to, in part, optimally reorganize selected large data sets for access and use by the hydrological user community. This presentation discusses the following aspects of the project: (1) explorations of approaches, such as database and file system; (2) findings for each approach, such as limitations and concerns, and pros and cons; (3) implementation of reorganizing data via the file system approach, including data processing (parameter and spatial subsetting), metadata and file structure of reorganized time series data (true "Data Rods": single variable, single grid point, and entire data range per file), and production and quality control. The reorganized time series data will be integrated into several broadly used data tools, such as NASA Giovanni and those provided by CUAHSI HIS (http://his.cuahsi.org/) and EPA BASINS (http://water.epa.gov/scitech/datait/models/basins/), as well as being accessible via direct FTP, along with documentation and sample reading software. The data reorganization is initially, as part of the project, applied to selected popular hydrology-related parameters, with other parameters to be added as resources permit.
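The essence of the "data rod" reorganization is a transpose from a time-major archive to point-major records. A minimal in-memory Python sketch (an assumption: the archive is modeled here as a list of per-time-step 2-D grids; real NLDAS/GLDAS granules would be read with netCDF4 or xarray, and each rod written to its own small file):

```python
import numpy as np

n_steps, ny, nx = 240, 112, 232   # made-up step count on an NLDAS-like subgrid
archive = [np.random.rand(ny, nx).astype(np.float32) for _ in range(n_steps)]

# Archived orientation: a time series at one grid point touches every "file".
def slow_rod(j, i):
    return np.array([step[j, i] for step in archive])

# Reorganized orientation: one contiguous (y, x, time) array, so a grid
# point's full temporal record (a "data rod") is a single contiguous slice.
cube = np.stack(archive)                              # (time, y, x)
rods = np.ascontiguousarray(cube.transpose(1, 2, 0))  # (y, x, time)

assert np.array_equal(slow_rod(50, 100), rods[50, 100])
```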
Cheremkhin, Pavel A; Kurbatova, Ekaterina A
2018-01-01
Compression of digital holograms can significantly help with the storage and transmission of 2D and 3D object data and with its reconstruction. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, applying wavelets directly does not yield high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms for compression of off-axis digital holograms is considered. The combined technique is based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstruction of images from the compressed holograms are performed, together with a comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients, from which optimum compression parameters can be estimated. The size of the holographic information was decreased by up to 190 times.
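A minimal Python sketch of the coefficient stage for one spectrum component (amplitude or phase), assuming PyWavelets; the keep-fraction and the 8-bit uniform quantizer are illustrative parameters, and the preceding zero-/twin-order elimination is assumed to have been done already.

```python
import numpy as np
import pywt

def compress_component(component, wavelet="db8", level=4, keep=0.02, qbits=8):
    """Wavelet transform, hard thresholding, then uniform quantization."""
    coeffs, slices = pywt.coeffs_to_array(pywt.wavedec2(component, wavelet, level=level))
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs = pywt.threshold(coeffs, thresh, mode="hard")
    scale = np.abs(coeffs).max() / (2 ** (qbits - 1) - 1) or 1.0
    q = np.round(coeffs / scale).astype(np.int16)   # quantized coefficients to store
    return q, scale, slices

def reconstruct(q, scale, slices, wavelet="db8"):
    coeffs = pywt.array_to_coeffs(q.astype(float) * scale, slices,
                                  output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

holo = np.random.rand(128, 128)   # stand-in for an amplitude component
q, scale, slices = compress_component(holo)
restored = reconstruct(q, scale, slices)
```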
Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo
2016-01-01
Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools of South Korea. Participants 42 senior students of a medical school and two paramedic schools were enrolled, but five dropped out due to physical constraints. Intervention Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Results Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). Conclusions The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. PMID:26873050
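Friedman's test on the three paired training conditions can be reproduced with SciPy. In the sketch below the depth numbers are invented for illustration only; the study's raw data are not reproduced here.

```python
from scipy.stats import friedmanchisquare

# Hypothetical per-participant mean compression depths (mm) during the
# 120/min test, after training at 100, 120 and 140 compressions/min.
after_100 = [52, 49, 55, 51, 48, 53, 50]
after_120 = [47, 45, 50, 46, 44, 48, 47]
after_140 = [45, 44, 48, 43, 42, 46, 44]

stat, p = friedmanchisquare(after_100, after_120, after_140)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```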
NASA Technical Reports Server (NTRS)
Hurst, Victor, IV; West, Sarah; Austin, Paul; Branson, Richard; Beck, George
2006-01-01
Astronaut crew medical officers (CMO) aboard the International Space Station (ISS) receive 40 hours of medical training during the 18 months preceding each mission. Part of this training includes two-person cardiopulmonary resuscitation (CPR) per training guidelines from the American Heart Association (AHA). Recent studies concluded that the use of metronomic tones improves the coordination of CPR by trained clinicians. Similar data for bystander or "trained lay people" (e.g., CMO) performance of CPR (BCPR) have been limited. The purpose of this study was to evaluate whether the use of timing devices, such as audible metronomic tones, would improve BCPR performance by trained bystanders. Twenty pairs of bystanders trained in two-person BCPR performed BCPR for 4 minutes on a simulated cardiopulmonary arrest patient using three interventions: 1) BCPR with no timing devices, 2) BCPR plus metronomic tones for coordinating compression rate only, 3) BCPR with a timing device and metronome for coordinating ventilation and compression rates, respectively. Bystanders were evaluated on their ability to meet international and AHA CPR guidelines. Bystanders failed to provide the recommended number of breaths and number of compressions in the absence of a timing device and in the presence of audible metronomic tones for coordinating compression rate only. Bystanders using timing devices to coordinate both components of BCPR provided the recommended number of breaths and were closer to providing the recommended number of compressions compared with the other interventions. Survey results indicated that bystanders preferred to use a metronome for delivery of compressions during BCPR. BCPR performance is improved by timing devices that coordinate both compressions and breaths.
Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto
2015-06-01
To describe the quality of chest compressions performed by secondary-school students trained with a real-time audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved, and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years to achieve quality chest compressions on a mannequin.
A survey of the state-of-the-art and focused research in range systems, task 1
NASA Technical Reports Server (NTRS)
Omura, J. K.
1986-01-01
This final report presents the latest research activity in voice compression. We have designed a non-real-time simulation system built around the IBM-PC, in which the IBM-PC is used as a speech workstation for data acquisition and analysis of voice samples. A real-time implementation is also proposed. This real-time Voice Compression Board (VCB) is built around the Texas Instruments TMS-3220. The voice compression algorithm investigated here was described in an earlier report by the author, titled Low Cost Voice Compression for Mobile Digital Radios, and we assume the reader is familiar with the algorithm discussed in that report. The VCB compresses speech waveforms at data rates ranging from 4.8 kbps to 16 kbps. The board interfaces to the IBM-PC 8-bit bus and plugs into a single expansion slot on the motherboard.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
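The sample-selection problem the abstract describes can be written as a small dynamic program. The Python sketch below is a deliberately simple O(k·n³) version of the idea (endpoints forced, linear interpolation between retained samples); the paper's network-model formulation is considerably more efficient, so treat this only as an illustration of the exact-optimisation principle.

```python
import numpy as np

def seg_error(x, i, j):
    """Squared error of linearly interpolating x between samples i and j."""
    t = np.arange(i, j + 1)
    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(((x[i:j + 1] - line) ** 2).sum())

def best_subset(x, k):
    """Exact DP: choose k samples minimizing total interpolation error."""
    n = len(x)
    INF = float("inf")
    cost = [[INF] * n for _ in range(k)]
    back = [[0] * n for _ in range(k)]
    cost[0][0] = 0.0                      # first retained sample is index 0
    for m in range(1, k):
        for j in range(m, n):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + seg_error(x, i, j)
                if c < cost[m][j]:
                    cost[m][j], back[m][j] = c, i
    idx, j = [n - 1], n - 1               # last retained sample is index n-1
    for m in range(k - 1, 0, -1):
        j = back[m][j]
        idx.append(j)
    return idx[::-1]

ecg = np.sin(np.linspace(0, 4 * np.pi, 120)) + 0.05 * np.random.randn(120)
print(best_subset(ecg, 10))   # indices of the 10 retained samples
```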
Cardiopulmonary resuscitation by chest compression alone or with mouth-to-mouth ventilation.
Hallstrom, A; Cobb, L; Johnson, E; Copass, M
2000-05-25
Despite extensive training of citizens of Seattle in cardiopulmonary resuscitation (CPR), bystanders do not perform CPR in almost half of witnessed cardiac arrests. Instructions in chest compression plus mouth-to-mouth ventilation given by dispatchers over the telephone can require 2.4 minutes. In experimental studies, chest compression alone is associated with survival rates similar to those with chest compression plus mouth-to-mouth ventilation. We conducted a randomized study to compare CPR by chest compression alone with CPR by chest compression plus mouth-to-mouth ventilation. The setting of the trial was an urban, fire-department-based, emergency-medical-care system with central dispatching. In a randomized manner, telephone dispatchers gave bystanders at the scene of apparent cardiac arrest instructions in either chest compression alone or chest compression plus mouth-to-mouth ventilation. The primary end point was survival to hospital discharge. Data were analyzed for 241 patients randomly assigned to receive chest compression alone and 279 assigned to chest compression plus mouth-to-mouth ventilation. Complete instructions were delivered in 62 percent of episodes for the group receiving chest compression plus mouth-to-mouth ventilation and 81 percent of episodes for the group receiving chest compression alone (P=0.005). Instructions for compression required 1.4 minutes less to complete than instructions for compression plus mouth-to-mouth ventilation. Survival to hospital discharge was better among patients assigned to chest compression alone than among those assigned to chest compression plus mouth-to-mouth ventilation (14.6 percent vs. 10.4 percent), but the difference was not statistically significant (P=0.18). The outcome after CPR with chest compression alone is similar to that after chest compression with mouth-to-mouth ventilation, and chest compression alone may be the preferred approach for bystanders inexperienced in CPR.
Analysis and testing of axial compression in imperfect slender truss struts
NASA Technical Reports Server (NTRS)
Lake, Mark S.; Georgiadis, Nicholas
1990-01-01
The axial compression of imperfect slender struts for large space structures is addressed. The load-shortening behavior of struts with initially imperfect shapes and eccentric compressive end loading is analyzed using linear beam-column theory and results are compared with geometrically nonlinear solutions to determine the applicability of linear analysis. A set of developmental aluminum clad graphite/epoxy struts sized for application to the Space Station Freedom truss are measured to determine their initial imperfection magnitude, load eccentricity, and cross sectional area and moment of inertia. Load-shortening curves are determined from axial compression tests of these specimens and are correlated with theoretical curves generated using linear analysis.
NASA Technical Reports Server (NTRS)
Richardson, R. M.; Solomon, S. C.; Sleep, N. H.
1979-01-01
In the present paper, the basic set of global intraplate stress orientation data is plotted and tabulated. Although the global intraplate stress field is complicated, several large-scale patterns can be seen. Much of stable North America is characterized by an E-W to NE-SW trend for the maximum compressive stress. South American lithosphere beneath the Andes, and perhaps farther east in the stable interior, has horizontal compressive stresses trending E-W to NW-SE. Western Europe north of the Alps is characterized by a NW-SE trending maximum horizontal compression, while Asia has the maximum horizontal compressive stress trending more nearly N-S, especially near the Himalayan front.
2014-01-01
Background According to the guidelines for cardiopulmonary resuscitation (CPR), the rotation time for chest compression should be about 2 min. The quality of chest compressions is related to the physical fitness of the rescuer, but this was not considered when determining rotation time. The present study aimed to clarify associations between body weight and the quality of chest compression and physical fatigue during CPR performed by 18 registered nurses (10 male and 8 female) assigned to light and heavy groups according to the average weight for each sex in Japan. Methods Five-minute chest compressions were then performed on a manikin that was placed on the floor. Measurement parameters were compression depth, heart rate, oxygen uptake, integrated electromyography signals, and rating of perceived exertion. Compression depth was evaluated according to the ratio (%) of adequate compressions (at least 5 cm deep). Results The ratio of adequate compressions decreased significantly over time in the light group. Values for heart rate, oxygen uptake, muscle activity defined as integrated electromyography signals, and rating of perceived exertion were significantly higher for the light group than for the heavy group. Conclusion Chest compression caused increased fatigue among the light group, which consequently resulted in a gradual fall in the quality of chest compression. These results suggested that individuals with a lower body weight should rotate at 1-min intervals to maintain high quality CPR and thus improve the survival rates and neurological outcomes of victims of cardiac arrest. PMID:24957919
NASA Astrophysics Data System (ADS)
Burgisser, Alain; Alletti, Marina; Scaillet, Bruno
2015-06-01
Modeling magmatic degassing, i.e., how the volatile distribution between gas and melt changes as pressure varies, is a complex task that involves a large number of thermodynamical relationships and requires dedicated software. This article presents the software D-Compress, which computes the gas and melt volatile composition of five element sets in magmatic systems (O-H, S-O-H, C-S-O-H, C-S-O-H-Fe, and C-O-H). It has been calibrated to simulate the volatiles coexisting with three common types of silicate melts (basalt, phonolite, and rhyolite). Operational temperatures depend on melt composition and range from 790 to 1400 °C. A specificity of D-Compress is the calculation of volatile composition as pressure varies along a (de)compression path between atmospheric pressure and 3000 bars. The software was designed to maximize versatility by proposing different sets of input parameters. In particular, whenever new solubility laws for specific melt compositions become available, the model parameters can easily be tuned to run the code on that composition. Parameter gaps were minimized by including sets of chemical species for which calibration data were available over a wide range of pressure, temperature, and melt composition. A brief description of the model rationale is followed by a presentation of the software capabilities. Examples of use are then presented, with output comparisons between D-Compress and other currently available thermodynamical models. The compiled software and the source code are available as electronic supplementary materials.
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Reports from the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computational demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast and may be integrated into a real-time imaging device.
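The CCR idea can be illustrated with any off-the-shelf compressor. The sketch below uses zlib in place of JPEG (an assumption; the paper's exact CCR formula is not reproduced here) to show why a mixed probe+gallery image compresses best when the two faces match: the compressor exploits the redundancy between the two halves.

```python
import zlib
import numpy as np

def clen(buf):
    return len(zlib.compress(buf, 9))

def ccr_match(probe, gallery):
    """Return the index of the gallery image with the highest
    compression-based similarity to the probe."""
    p = clen(probe.tobytes())
    scores = []
    for g in gallery:
        mixed = np.concatenate([probe, g], axis=1)   # side-by-side mixed image
        scores.append((p + clen(g.tobytes())) / clen(mixed.tobytes()))
    return int(np.argmax(scores))

probe = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
gallery = [np.roll(probe, 1, axis=0),                              # near-match
           np.random.randint(0, 256, (64, 64), dtype=np.uint8)]    # non-match
print(ccr_match(probe, gallery))   # expected: 0, the shifted copy of the probe
```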
Laboratory Characterization of Cemented Rock Fill for Underhand Cut and Fill Method of Mining
NASA Astrophysics Data System (ADS)
Kumar, Dinesh; Singh, Upendra Kumar; Singh, Gauri Shankar Prasad
2016-10-01
Backfilling with controlled specifications is employed for improved ground support and pillar recovery in underground metalliferous mine workings. This paper reports the results of a laboratory study to characterise various mechanical properties of cemented rock fill (CRF) formulations for different compaction levels and cement contents for use in the underhand cut and fill method of mining. Laboratory test setups and procedures are described for conducting compressive and bending tests of CRF block samples. A three-dimensional numerical modelling study has also been carried out to overcome the limitations arising from the non-standard dimensions of the test blocks used in the flexural loading test and the test setup devised for this purpose. Based on these studies, specific relations have been established between the compressive and the flexural properties of the CRF. The flexural strength of the wire-mesh-reinforced CRF is also correlated with its residual strength and its Young's modulus of elasticity under flexural loading. The test results for flexural strength, residual flexural strength, and modulus show almost linear relations with cement content in CRF. The compressive strength of the CRF block samples is estimated as seven times the flexural strength, whereas the compressive modulus is four times the flexural modulus. The strengths of CRF with low compaction and no compaction are 75% and 60%, respectively, of that of the medium-compaction CRF. The relation between the strength and the unit weight of CRF obtained in this study is significantly important for the design and quality control of CRF during its large-scale application in underhand cut and fill stopes.
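The reported ratios translate directly into design arithmetic. A tiny worked example in Python, with hypothetical flexural inputs (only the 7x, 4x, 75%, and 60% factors come from the study):

```python
flexural_strength_mpa = 0.8    # hypothetical CRF flexural strength
flexural_modulus_gpa = 1.2     # hypothetical flexural modulus

compressive_strength_mpa = 7 * flexural_strength_mpa    # study: ~7x flexural
compressive_modulus_gpa = 4 * flexural_modulus_gpa      # study: ~4x flexural
low_compaction_strength = 0.75 * compressive_strength_mpa   # 75% of medium compaction
no_compaction_strength = 0.60 * compressive_strength_mpa    # 60% of medium compaction
print(compressive_strength_mpa, compressive_modulus_gpa,
      low_compaction_strength, no_compaction_strength)
```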
Quasiconservation laws for compressible three-dimensional Navier-Stokes flow.
Gibbon, J D; Holm, D D
2012-10-01
We formulate the quasi-Lagrangian fluid transport dynamics of mass density ρ and the projection q=ω·∇ρ of the vorticity ω onto the density gradient, as determined by the three-dimensional compressible Navier-Stokes equations for an ideal gas, although the results apply for an arbitrary equation of state. It turns out that the quasi-Lagrangian transport of q cannot cross a level set of ρ. That is, in this formulation, level sets of ρ (isopycnals) are impermeable to the transport of the projection q.
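The diagnostic q is straightforward to evaluate on gridded velocity and density fields. A minimal numpy sketch, assuming a uniform grid with axis order (x, y, z), central differences, and no special boundary treatment:

```python
import numpy as np

def grad(f, dx):
    """Gradient of a scalar field on a uniform grid, axes ordered (x, y, z)."""
    return np.array(np.gradient(f, dx, dx, dx))

def q_field(u, v, w, rho, dx):
    """q = omega . grad(rho): vorticity projected onto the density gradient."""
    du, dv, dw = grad(u, dx), grad(v, dx), grad(w, dx)
    omega = np.array([dw[1] - dv[2],    # w_y - v_z
                      du[2] - dw[0],    # u_z - w_x
                      dv[0] - du[1]])   # v_x - u_y
    return np.einsum("i...,i...->...", omega, grad(rho, dx))

n, dx = 32, 0.1
u, v, w, rho = (np.random.rand(n, n, n) for _ in range(4))
print(q_field(u, v, w, rho, dx).shape)
```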
Schober, P; Krage, R; Lagerburg, V; Van Groeningen, D; Loer, S A; Schwarte, L A
2014-04-01
Current cardiopulmonary resuscitation (CPR) guidelines recommend an increased chest compression depth and rate compared with previous guidelines, and the use of automatic feedback devices is encouraged. However, it is unclear whether this compression depth can be maintained at an increased frequency. Moreover, the underlying surface may influence the accuracy of feedback devices. We investigated compression depths over time and evaluated the accuracy of a feedback device on different surfaces. Twenty-four volunteers performed four two-minute blocks of CPR targeting current guideline recommendations on different surfaces (floor, mattress, 2 backboards) on a patient simulator. Participants rested for 2 minutes between blocks. Influences of time and different surfaces on chest compression depth (ANOVA, mean [95% CI]) and accuracy of a feedback device to determine compression depth (Bland-Altman) were assessed. Mean compression depth did not reach the recommended depth and decreased over time during all blocks (first block: from 42 mm [39-46 mm] to 39 mm [37-42 mm]). A two-minute resting period was insufficient to restore compression depth to baseline. No differences in compression depth were observed on different surfaces. The feedback device slightly underestimated compression depth on the floor (bias -3.9 mm), but markedly overestimated on the mattress (bias +12.6 mm). This overestimation was eliminated after correcting compression depth with a second sensor between manikin and mattress. Strategies are needed to improve chest compression depth, and more than two providers should alternate with chest compressions. The underlying surface does not necessarily adversely affect CPR performance but influences the accuracy of feedback devices. Accuracy is improved by a second, posterior, sensor.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.
Working Characteristics of Variable Intake Valve in Compressed Air Engine
Yu, Qihui; Shi, Yan; Cai, Maolin
2014-01-01
A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536
Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.
ERIC Educational Resources Information Center
Sticht, Thomas G.
The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…
30 CFR 57.13020 - Use of compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
Khashaba, Rania M.; Moussa, Mervet; Koch, Christopher; Jurgensen, Arthur R.; Missimer, David M.; Rutherford, Ronny L.; Chutkan, Norman B.; Borke, James L.
2011-01-01
Aim. The physicochemical, mechanical, and in vitro biological properties of novel formulations of polymeric calcium phosphate cements (CPCs) were investigated. Methods. Monocalcium phosphate, calcium oxide, and synthetic hydroxyapatite were combined with either modified polyacrylic acid, light-activated polyalkenoic acid, or polymethyl vinyl ether maleic acid to obtain Types I, II, and III CPCs. The setting time and the compressive and diametral strengths of the CPCs were compared with zinc polycarboxylate cement (control). Specimens were characterized using X-ray diffraction, scanning electron microscopy, and infrared spectroscopy. The in vitro cytotoxicity of the CPCs and control was assessed. Results. X-ray diffraction analysis showed hydroxyapatite, monetite, and brushite. The acid-base reaction was confirmed by the appearance of stretching peaks in the IR spectra of the set cements. SEM revealed rod-like crystals and platy crystals. The setting time of the cements was 5-12 min. Type III showed significantly higher strength values compared with the control and yielded high biocompatibility. Conclusions. Type III CPCs show promise for dental applications. PMID:21941551
NASA Astrophysics Data System (ADS)
Shahriari, Mohammadreza
2016-06-01
The time-cost tradeoff problem is one of the most important and applicable problems in project scheduling. Many factors can force managers to crash the project time: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the project finishing time, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are influenced, and the time value of money comes into play. When the starting activities of a project are crashed, the extra investment is tied up until the end date of the project; when the final activities are crashed, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model for balancing project time compression against activity delays, providing a suitable decision tool for managers constrained by available facilities and project due dates. The model is also drawn closer to real-world conditions by considering a nonlinear objective function and the time value of money. The presented problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
Rieke, Horst; Rieke, Martin; Gado, Samkon K; Nietert, Paul J; Field, Larry C; Clark, Carlee A; Furse, Cory M; McEvoy, Matthew D
2013-11-01
Quality chest compressions (CC) are the most important factor in successful cardiopulmonary resuscitation. Adjustment of CC based upon an invasive arterial blood pressure (ABP) display would be theoretically beneficial. Additionally, having one compressor present for longer than a 2-min cycle with an ABP display may allow for a learning process to further maximize CC. Accordingly, we tested the hypothesis that CC can be improved with a real-time display of invasively measured blood pressure and with an unchanged, physically fit compressor. A manikin was attached to an ABP display derived from a hemodynamic model responding to parameters of CC rate, depth, and compression-decompression ratio. The area under the blood pressure curve over time (AUC) was used for data analysis. Each participant (N=20) performed 4 CPR sessions: (1) no ABP display, exchange of compressor every 2 min; (2) ABP display, exchange of compressor every 2 min; (3) no ABP display, no exchange of the compressor; (4) ABP display, no exchange of the compressor. Data were analyzed by ANOVA. Significance was set at a p-value<0.05. The average AUC for cycles without ABP display was 5201 mm Hg·s (95% confidence interval (CI): 4804-5597 mm Hg·s), and for cycles with ABP display 6110 mm Hg·s (95% CI: 5715-6507 mm Hg·s) (p<0.0001). The average AUC increase with ABP display for each participant was 20.2±17.4% (p<0.0001). Our study confirms the hypothesis that a real-time display of simulated ABP during CPR that responds to participant performance improves achieved and sustained ABP. However, without any real-time visual feedback, even fit compressors demonstrated degradation of CC quality. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Meyer, O; Bucher, M; Schröder, J
2016-03-01
The laryngeal tube (LT) is a recommended alternative to endotracheal intubation during advanced life support (ALS). Its insertion is relatively simple; therefore, it may also serve as an alternative to bag mask ventilation (BMV) for untrained personnel performing basic life support (BLS). Data support the influence of LT on the no-flow time (NFT) compared with BMV during ALS in manikin studies. We performed a manikin study to investigate the effect of using the LT for ventilation instead of BMV on the NFT during BLS in a prospective, randomized, single-rescuer study. All 209 participants were trained in BMV, but were inexperienced in using LT; each participant performed BLS during a 4-min time period. No significant difference in total NFT (LT: mean 81.1 ± 22.7 s; BMV: mean 83.2 ± 13.1 s, p = 0.414) was found; however, significant differences in the later periods of the scenario were identified. While ventilating with the LT, the proportion of chest compressions increased significantly from 67.2 to 73.2%, whereas the proportion of chest compressions increased only marginally when performing BMV. The quality of the chest compressions and the associated ventilation rate did not differ significantly. The mean tidal volume and mean minute volume were significantly lower when performing BMV. The NFT was significantly shorter in the later periods in a single-rescuer, cardiac arrest scenario when using an LT without previous training compared with BMV with previous training. A possible explanation for this result may be the complexity and workload of alternating tasks (e.g., time loss when reclining the head and positioning the mask for each ventilation during BMV).
Jinpeng, Zhang; Limin, Liu; Futao, Zhang; Junzhi, Cao
2018-04-04
With cement, bentonite, water glass, J85 accelerator, retarder, and water as raw materials, a new composite grouting material for sealing groundwater inflow and reinforcing wall rock in deep fractured rock mass was developed in this paper. Based on the reaction mechanisms of the raw materials, the pumpable time, stone rate, initial setting time, plastic strength, and unconfined compressive strength of multiple grout proportions were tested by orthogonal experiment. The optimum proportion of the composite grouting material was then selected and applied to grouting engineering for sealing groundwater inflow and reinforcing wall rock in a mine shaft lining. The results show that the mixing proportions giving the maximum pumpable time, maximum stone rate, and minimum initial setting time of the grout are AK4BK1CK4DK2, AK3BK1CK1DK4, and AK3BK3CK4DK1, respectively, and those giving the maximum plastic strength and unconfined compressive strength of the grout concretion bodies are AK1BK1CK1DK3 and AK1BK1CK1DK1, respectively. Balancing these five indicators, the optimum proportion was determined as: bentonite-cement ratio of 1.0, water-solid ratio of 3.5, accelerator content of 2.9%, and retarder content of 1.45%. This new composite grouting material performed well in grouting engineering for sealing groundwater inflow and reinforcing wall rock in deep fractured rock mass.
Progressive compressive imager
NASA Astrophysics Data System (ADS)
Evladov, Sergei; Levi, Ofer; Stern, Adrian
2012-06-01
We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of needed measurements, is unknown. We have developed an iterative algorithm, OSO (Ordered Sets Optimization), which employs our sampling scheme for the creation of nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstruction from compressed data at ratios of 1:20.
CURRENT CONCEPTS AND TREATMENT OF PATELLOFEMORAL COMPRESSIVE ISSUES.
Mullaney, Michael J; Fukunaga, Takumi
2016-12-01
Patellofemoral disorders, commonly encountered in sports and orthopedic rehabilitation settings, may result from dysfunction in patellofemoral joint compression. Osseous and soft tissue factors, as well as the mechanical interaction of the two, contribute to increased patellofemoral compression and pain. Treatment of patellofemoral compressive issues is based on identification of contributory impairments. Use of reliable tests and measures is essential in detecting impairments in hip flexor, quadriceps, iliotibial band, hamstrings, and gastrocnemius flexibility, as well as in joint mobility, myofascial restrictions, and proximal muscle weakness. Once relevant impairments are identified, a combination of manual techniques, instrument-assisted methods, and therapeutic exercises are used to address the impairments and promote functional improvements. The purpose of this clinical commentary is to describe the clinical presentation, contributory considerations, and interventions to address patellofemoral joint compressive issues.
Fuel Effects on Ignition and Their Impact on Advanced Combustion Engines (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.; Li, H.; Neill, S.
The objective of this report is to develop a pathway to use easily measured ignition properties as metrics for characterizing fuels in advanced combustion engine research--correlating IQT™-measured parameters with engine data. In HCCI engines, ignition timing depends on the reaction rates throughout the compression stroke: there is a need to understand sensitivity to T, P, and [O2]; to rank fuels based on more than one set of conditions; and to understand how fuel composition (molecular species) affects ignition properties.
Novel portable press for synchrotron time-resolved 3-D micro-imaging under extreme conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philippe, J.; Le Godec, Y., E-mail: yann.legodec@impmc.upmc.fr; Bergame, F.
Here we present an instrumental development extending synchrotron X-ray microtomography techniques to in situ studies under static compression (HP), shear stress, or both, at high temperature (HT). To achieve this, a new rotating tomography Paris-Edinburgh cell (rotoPEc) has been developed. This ultra-compact portable device, easily and successfully adapted to various multi-modal synchrotron experimental set-ups at ESRF, SOLEIL, and DIAMOND, is described in detail.
The effect of water binder ratio and fly ash on the properties of foamed concrete
NASA Astrophysics Data System (ADS)
Saloma, Hanafiah, Urmila, Dea
2017-11-01
Foamed concrete is a lightweight concrete composed of cement, water, fine aggregate, and evenly distributed foam. Foamed concrete is produced by adding foam to the mixture. The function of the foam is to create air voids in the mixture, so the weight of the concrete becomes lighter. The foaming agent is diluted in water and then given air pressure by a foam generator to produce foam. This research utilizes a coal-combustion by-product, fly ash, as a cementitious material at percentages of 0%, 10%, 15%, and 20%. The purpose of the research is to examine the effect of water-binder ratios of 0.425, 0.450, 0.475, and 0.500 with fly ash on the properties of foamed concrete. Fresh concrete tests include slump flow and setting time, while hardened concrete tests include density and compressive strength. The maximum slump flow is 59.50 cm for the FC-20-0.500 mixture with w/b = 0.500 and 20% fly ash. The setting time tests indicate the fastest initial and final setting times are 335 and 720 minutes, respectively, for the FC-0-0.425 mixture with w/b = 0.425 without fly ash. The lowest density is 978.344 kg/m3 for the FC-20-0.500 mixture with w/b = 0.500 and 20% fly ash. The maximum compressive strength is 4.510 MPa at 28 days for the FC-10-0.450 mixture with w/b = 0.450 and 10% fly ash.
Hesaraki, S; Moztarzadeh, F; Nezafati, N
2009-12-01
In this study, a nanocomposite of 50 wt% calcium sulfate and 50 wt% nanocrystalline apatite was produced, and its biocompatibility and physical and structural properties were compared with a pure calcium sulfate (CS) cement. Indomethacin (IM), a non-steroidal anti-inflammatory drug, was also loaded on both CS and nanocomposite cements and its in vitro release was evaluated over a period of time. The effect of the loaded IM on the basic properties of the cements was also investigated. Biocompatibility tests showed partial cytotoxicity of the CS cement, indicated by a reduced number of viable mouse fibroblast L929 cells in contact with the samples as well as spherical morphologies of the cells. However, no cytotoxic effect was observed for the nanocomposite cement, and no significant difference was found between the number of cells seeded in contact with these specimens and the culture-plate control. Other results showed that the setting time and injectability of the nanocomposite cement were much higher than those of the CS cement, whereas the reverse was found for compressive strength. In addition, incorporation of IM into the compositions slightly increased the initial setting time and injectability of the cements and did not change their compressive strength. While a fast IM release was observed from the CS cement, with about 97% of the loaded drug released within 48 h, the nanocomposite cement showed sustained release, with 80% of the loaded IM liberated after 144 h. Thus, the nanocomposite can be a more appropriate carrier than CS for controlled release of IM in bone defect treatments.
Self-Similar Compressible Free Vortices
NASA Technical Reports Server (NTRS)
vonEllenrieder, Karl
1998-01-01
Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependencies of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.
Sparsity based target detection for compressive spectral imagery
NASA Astrophysics Data System (ADS)
Boada, David Alberto; Arguello Fuentes, Henry
2016-09-01
Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, contrary to traditional architectures that need a complete set of measurements of the data cube for image acquisition, thus addressing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm in order to be processed, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
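For illustration, the residual-comparison idea behind such detectors can be sketched in a few lines. Everything here is an assumption for demonstration: the orthogonal matching pursuit solver stands in for the sparse-recovery step, and the random sensing matrix Phi, the two small dictionaries, and the sparsity level k are toy stand-ins rather than the optical architecture or trained dictionaries described above.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A to explain y."""
    residual, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        idx.append(j)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    return idx, coef

rng = np.random.default_rng(0)
n_bands, m = 64, 24                          # spectral bands, compressed samples
D_target = rng.normal(size=(n_bands, 8))     # hypothetical target dictionary
D_bg = rng.normal(size=(n_bands, 8))         # hypothetical background dictionary
Phi = rng.normal(size=(m, n_bands)) / np.sqrt(m)  # compressive sensing matrix

a = np.zeros(8)
a[[1, 5]] = rng.normal(size=2)               # pixel is sparse in the target dictionary
y = Phi @ (D_target @ a)                     # what the compressive sensor records

# Detect by comparing sparse-approximation residuals in the compressed domain.
idx_t, c_t = omp(Phi @ D_target, y, k=4)
idx_b, c_b = omp(Phi @ D_bg, y, k=4)
r_t = np.linalg.norm(y - (Phi @ D_target)[:, idx_t] @ c_t)
r_b = np.linalg.norm(y - (Phi @ D_bg)[:, idx_b] @ c_b)
print("target" if r_t < r_b else "background")
```

The pixel is assigned to whichever dictionary reconstructs its compressed measurement with the smaller residual, which is the essence of sparse-representation detection carried over to compressed data.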
Hunt, Elizabeth A; Vera, Kimberly; Diener-West, Marie; Haggerty, Jamie A; Nelson, Kristen L; Shaffner, Donald H; Pronovost, Peter J
2009-07-01
The quality of life support delivered during cardiopulmonary resuscitation affects outcomes. However, little data exist regarding the quality of resuscitation delivered to children and factors associated with adherence to American Heart Association (AHA) resuscitation guidelines. Pediatric residents from an academic, tertiary care hospital. Prospective, observational cohort study of residents trained in the AHA PALS 2000 guidelines managing a high-fidelity mannequin simulator programmed to develop pulseless ventricular tachycardia (PVT). Proportion of residents who: (1) started compressions in ≤1 min from onset of PVT, (2) defibrillated in ≤3 min and (3) factors associated with time to defibrillation. Seventy of eighty (88%) residents participated. Forty-six of seventy (66%) failed to start compressions within 1 min of pulselessness and 23/70 (33%) never started compressions. Only 38/70 (54%) residents defibrillated the mannequin in ≤3 min of onset of PVT. There was no significant difference in time elapsed between onset of PVT and defibrillation by level of post-graduate training. However, residents who had previously discharged a defibrillator on either a patient or a simulator, compared to those who had not, were 87% more likely to successfully defibrillate the mannequin at any point in time (hazard ratio 1.87, 95% CI: 1.08-3.21, p=0.02). Pediatric residents do not meet performance standards set by the AHA. Future curricula should focus training on identified defects including: (1) equal emphasis on "airway and breathing" and "circulation" and (2) hands-on training with using and discharging a defibrillator in order to improve safety and outcomes.
Dynamical complexity of short and noisy time series. Compression-Complexity vs. Shannon entropy
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi
2017-07-01
Shannon entropy has been extensively used for characterizing the complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures which are based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such Compression-Complexity Measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a higher number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady state value faster than LZ. Compression-Complexity measures are promising for applications which involve short and noisy time series.
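As a rough, self-contained illustration of the compression-complexity idea (not the ETC measure, which the paper describes separately), the sketch below counts Lempel-Ziv (1976) phrases in a symbol sequence using the standard Kaspar-Schuster scheme; any normalization against sequence length is left out.

```python
import numpy as np

def lz76_complexity(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of sequence s,
    counted with the Kaspar-Schuster scheme."""
    n = len(s)
    if n < 2:
        return n
    i, k, l = 0, 1, 1          # history pointer, match length, phrase start
    c, k_max = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:      # matched through the end: final phrase
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:         # history exhausted: a new phrase begins
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

rng = np.random.default_rng(1)
noisy = ''.join(rng.choice(list('01'), size=200))
periodic = '01' * 100
print(lz76_complexity(noisy), '>', lz76_complexity(periodic))
```

A random binary sequence parses into many phrases while a periodic one parses into very few, which is why such counts can separate stochastic from regular dynamics even in short sequences.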
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
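A toy version of the layered decomposition can make the model concrete. In this sketch, zlib stands in for a lossless binary-mask coder (such as JBIG), 8x subsampling stands in for a lossy continuous-tone coder (such as JPEG), and the foreground is reduced to a single flat color; real MRC codecs and foreground layers are considerably richer.

```python
import numpy as np
import zlib

def mrc_encode(img, thresh=128):
    """Split a grayscale page into MRC-style layers, coding each differently."""
    mask = img < thresh                             # True where dark "text" lies
    fg = np.uint8(img[mask].mean()) if mask.any() else np.uint8(0)
    bg = img.copy()
    if mask.any():                                  # fill text holes with paper color
        bg[mask] = np.uint8(img[~mask].mean())
    bg_small = bg[::8, ::8]                         # crude "lossy" background layer
    mask_z = zlib.compress(np.packbits(mask).tobytes())  # "lossless" mask layer
    return mask_z, fg, bg_small, img.shape

def mrc_decode(mask_z, fg, bg_small, shape):
    bits = np.unpackbits(np.frombuffer(zlib.decompress(mask_z), np.uint8))
    mask = bits[:shape[0] * shape[1]].reshape(shape).astype(bool)
    bg = np.kron(bg_small, np.ones((8, 8), np.uint8))[:shape[0], :shape[1]]
    out = bg.copy()
    out[mask] = fg                                  # paint text through the mask
    return out

page = np.full((64, 64), 220, np.uint8)             # light paper background
page[20:28, 8:56] = 30                              # a dark "text" stripe
rec = mrc_decode(*mrc_encode(page))
print(rec.shape, int(rec[24, 30]))                  # text pixels restored near 30
```

The point of the model is visible even in this toy: the mask layer keeps text edges crisp under lossless coding, while the smooth background tolerates aggressive lossy coding, giving each layer the rate-distortion treatment it needs.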
C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.
Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram
2018-01-01
This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity, while achieving compression ratio, CR, values as high as ~5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity, leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.
Determination of stresses in RC eccentrically compressed members using optimization methods
NASA Astrophysics Data System (ADS)
Lechman, Marek; Stachurski, Andrzej
2018-01-01
The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in rectangular cross-sections are derived by integrating the equilibrium equations of the cross-section, taking account of the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists in solving the set of derived equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments have shown the existence of many points satisfying the equations with very good accuracy. Therefore, some techniques from global optimization were included: starting fmincon from many points and clustering the results. The model is verified on a set of data encountered in engineering practice.
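A hedged Python analogue of this solution strategy is sketched below: scipy.optimize.minimize with bounds plays the role of fmincon, and multi-start plus simple clustering reproduces the global-optimization additions. The residual function is a placeholder with the same structure (two nonlinear equations in two strains), not the paper's actual cross-section equilibrium equations.

```python
import numpy as np
from scipy.optimize import minimize

def residual(x):
    """Stand-in squared residual of two equilibrium-type equations in the
    extreme-fibre strains (per mille); the real objective would come from
    integrating the Eurocode 2 stress block over the cross-section."""
    e1, e2 = x
    f1 = e1**2 + e2 - 1.2          # placeholder equation 1 = 0
    f2 = e1 - 3.0 * e2**2 + 0.4    # placeholder equation 2 = 0
    return f1**2 + f2**2

bounds = [(-3.5, 3.5)] * 2          # box constraints on the strains
rng = np.random.default_rng(0)
roots = []
for _ in range(50):                 # multi-start from random points
    x0 = rng.uniform(-3.5, 3.5, size=2)
    res = minimize(residual, x0, bounds=bounds)
    if res.fun < 1e-9:              # keep only near-exact roots
        roots.append(res.x)

clusters = []                       # crude clustering of distinct roots
for r in roots:
    if all(np.linalg.norm(r - c) > 1e-4 for c in clusters):
        clusters.append(r)
print(len(clusters), "distinct solution(s) found")
```

Running many bounded local searches and then clustering the converged points is exactly how multiple equally valid solution points, as reported in the paper's numerical experiments, would surface.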
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
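To make the evolutionary loop concrete, the sketch below uses a much simpler (1+1) evolution strategy with a 1/5th-success-rule flavor rather than CMA-ES, and a generic filter/quantizer/signal in place of the evolved 9/7-like transform pair; it only illustrates the perturb-score-select cycle, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.normal(size=256)

def score(h):
    """MSE after analysis -> coarse quantization -> synthesis with filter h.
    The time-reversed filter is a crude stand-in for a matched inverse."""
    coeffs = np.convolve(signal, h, mode="same")
    q = np.round(coeffs * 4) / 4                  # quantization error source
    rec = np.convolve(q, h[::-1], mode="same")
    return float(np.mean((signal - rec) ** 2))

h = np.array([0.5, 1.0, 0.5]) / np.sqrt(1.5)      # initial coefficient vector
best, sigma = score(h), 0.05
for _ in range(400):                              # (1+1)-ES main loop
    child = h + sigma * rng.normal(size=h.size)
    s = score(child)
    if s < best:
        h, best, sigma = child, s, sigma * 1.1    # success: accept, widen search
    else:
        sigma *= 0.98                             # failure: narrow search
print("final reconstruction MSE:", best)
```

CMA-ES replaces the isotropic mutation above with an adapted full covariance matrix, which is what lets it navigate the strongly coupled coefficient space of matched forward/inverse transforms.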
Imposition of physical parameters in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Mai-Duy, N.; Phan-Thien, N.; Tran-Cong, T.
2017-12-01
In mesoscale simulations by dissipative particle dynamics (DPD), the motion of a fluid is modelled by a set of particles interacting in a pairwise manner, and it has been shown to be governed by the Navier-Stokes equation, with its physical properties, such as viscosity, Schmidt number, isothermal compressibility, and relaxation and inertia time scales, in fact its whole rheology, resulting from the choice of the DPD model parameters. In this work, we explore the response of a DPD fluid with respect to its parameter space, where the model input parameters can be chosen in advance so that (i) the ratio between the relaxation and inertia time scales is fixed; (ii) the isothermal compressibility of water at room temperature is enforced; and (iii) the viscosity and Schmidt number can be specified as inputs. These impositions are possible with some extra degrees of freedom in the weighting functions for the conservative and dissipative forces. Numerical experiments show an improvement in solution quality over conventional DPD parameters/weighting functions, particularly for the number density distribution and computed stresses.
Shielding techniques tackle EMI excesses. V - EMI shielding
NASA Astrophysics Data System (ADS)
Grant, P.
1982-10-01
The utilization of shielding gaskets in EMI design is presented in terms of seam design, gasket design, groove design, and fastener spacing. The main function of seam design is to minimize the coupling efficiency of a seam; for effective shielding, seam design should include mating surfaces which are as flat as possible, and a flange width at least five times the maximum anticipated separation between mating surfaces. Seam surface contact with a gasket should be firm, continuous, and uniform. Gasket height, closure pressure, and compression set as functions of applied pressure are determined using compression/deflection curves. Environmental seal requirements are given; the most common materials used are neoprene, silicone, butadiene-acrylonitrile, and natural rubber. Groove design is also discussed, considering gasket heights and cross-sectional areas. Finally, fastener spacing is considered by examining deflection as a percentage of gasket height.
MedTxting: Learning based and Knowledge Rich SMS-style Medical Text Contraction
Liu, Feifan; Moosavinasab, Soheil; Houston, Thomas K.; Yu, Hong
2012-01-01
In mobile health (M-health), the Short Message Service (SMS) has been shown to improve disease-related self-management and health service outcomes, leading to enhanced patient care. However, the hard limit on character count for each message limits the full value of exploring SMS communication in health care practices. To overcome this problem and improve the efficiency of clinical workflow, we developed an innovative system, MedTxting (available at http://medtxting.askhermes.org), which is a learning-based but knowledge-rich system that compresses medical texts in an SMS style. Evaluations on clinical questions and discharge summary narratives show that MedTxting can effectively compress medical texts with reasonable readability and noticeable size reduction. Findings in this work reveal the potential of MedTxting in clinical settings, allowing for real-time and cost-effective communication, such as patient condition reporting, medication consulting, and connecting physicians to share expertise to improve point of care. PMID:23304328
Enhanced protocol for real-time transmission of echocardiograms over wireless channels.
Cavero, Eva; Alesanco, Alvaro; García, Jose
2012-11-01
This paper presents a methodology for transmitting clinical video over wireless networks in real time. A 3-D set partitioning in hierarchical trees compression prior to transmission is proposed. In order to guarantee the clinical quality of the compressed video, a clinical evaluation specific to each video modality has to be made. This evaluation indicates the minimal transmission rate necessary for an accurate diagnosis. However, channel conditions produce errors and distort the video. A reliable application protocol is therefore proposed using a hybrid solution in which either retransmission alone or retransmission combined with forward error correction (FEC) techniques is used, depending on the channel conditions. In order to analyze the proposed methodology, the 2-D mode of an echocardiogram has been assessed. A bandwidth of 200 kbps is necessary to guarantee its clinical quality. Transmission using the proposed solution, and using retransmission and FEC techniques working separately, has been simulated and compared in high-speed uplink packet access (HSUPA) and worldwide interoperability for microwave access (WiMAX) networks. The proposed protocol guarantees clinical quality at bit error rates higher than the other protocols tolerate; at a mobile speed of 60 km/h, the tolerated rate is up to 3.3 times higher for HSUPA and 10 times higher for WiMAX.
Fast and predictable video compression in software: design and implementation of an H.261 codec
NASA Astrophysics Data System (ADS)
Geske, Dagmar; Hess, Robert
1998-09-01
The use of software codecs for video compression is becoming commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound for the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamic reduction of the number of processed macroblocks in peak situations. Beyond predictability, absolute speed is also a point of interest. The question of whether and how software compression of high-quality video is feasible on today's desktop computers is examined.
NASA Astrophysics Data System (ADS)
Vatandoost, Hossein; Norouzi, Mahmood; Masoud Sajjadi Alehashem, Seyed; Smoukov, Stoyan K.
2017-06-01
Tension-compression operation of MR elastomers (MREs) offers both the most compact design and superior stiffness in many vertical load-bearing applications, such as MRE bearing isolators in bridges and buildings, suspension systems and engine mounts in cars, and vibration control equipment. It suffers, however, from a lack of good computational models to predict device performance; as a result, shear-mode MREs are widely used in industry despite their low stiffness and load-bearing capacity. We start with a comprehensive review of the modeling of MREs and their dynamic characteristics, showing that previous studies have mostly focused on the dynamic behavior of MREs in shear mode, even though MRE strength and the MR effect decrease greatly at high strain amplitudes due to the increasing distance between the magnetic particles. Moreover, the characteristic parameters of current models assume that frequency, strain, or magnetic field is constant; hence, new model parameters must be recalculated for new loading conditions. This is an experimentally time-consuming and computationally expensive task, and no models capture the full dynamic behavior of MREs at all loading conditions. In this study, we present an experimental setup to test MREs in a coupled tension-compression mode, as well as a novel phenomenological model which fully predicts the stress-strain material behavior as a function of magnetic flux density, loading frequency, and strain. We use a training set of experiments to find the model parameters, from which we can predict, by interpolation, the MRE behavior over a relatively large continuous range of frequency, strain, and magnetic field. We also challenge the model to make extrapolating predictions and compare them to additional experiments outside the training data set, with good agreement. Further development of this model would allow the design and control of engineering structures equipped with tension-compression MREs and all the advantages they offer.
Perceptual Learning of Time-Compressed Speech: More than Rapid Adaptation
Banai, Karen; Lavner, Yizhar
2012-01-01
Background Time-compressed speech, a form of rapidly presented speech, is harder to comprehend than natural speech, especially for non-native speakers. Although it is possible to adapt to time-compressed speech after a brief exposure, it is not known whether additional perceptual learning occurs with further practice. Here, we ask whether multiday training on time-compressed speech yields more learning than that observed during the initial adaptation phase and whether the pattern of generalization following successful learning is different than that observed with initial adaptation only. Methodology/Principal Findings Two groups of non-native Hebrew speakers were tested on five different conditions of time-compressed speech identification in two assessments conducted 10–14 days apart. Between those assessments, one group of listeners received five practice sessions on one of the time-compressed conditions. Between the two assessments, trained listeners improved significantly more than untrained listeners on the trained condition. Furthermore, the trained group generalized its learning to two untrained conditions in which different talkers presented the trained speech materials. In addition, when the performance of the non-native speakers was compared to that of a group of naïve native Hebrew speakers, performance of the trained group was equivalent to that of the native speakers on all conditions on which learning occurred, whereas performance of the untrained non-native listeners was substantially poorer. Conclusions/Significance Multiday training on time-compressed speech results in significantly more perceptual learning than brief adaptation. Compared to previous studies of adaptation, the training induced learning is more stimulus specific. Taken together, the perceptual learning of time-compressed speech appears to progress from an initial, rapid adaptation phase to a subsequent prolonged and more stimulus specific phase. These findings are consistent with the predictions of the Reverse Hierarchy Theory of perceptual learning and suggest constraints on the use of perceptual-learning regimens during second language acquisition. PMID:23056592
Leaching of heavy metals from solidified waste using Portland cement and zeolite as a binder.
Napia, Chuwit; Sinsiri, Theerawat; Jaturapitakkul, Chai; Chindaprasirt, Prinya
2012-07-01
This study investigated the properties of solidified waste using ordinary Portland cement (OPC) containing synthesized zeolite (SZ) and natural zeolite (NZ) as a binder. Natural and synthesized zeolites were used to partially replace the OPC at rates of 0%, 20%, and 40% by weight of the binder. Plating sludge was used as contaminated waste to replace the binder at rates of 40%, 50% and 60% by weight. A water to binder (w/b) ratio of 0.40 was used for all of the mixtures. The setting time and compressive strength of the solidified waste were investigated, while the leachability of the heavy metals was determined by TCLP. Additionally, XRD, XRF, and SEM were performed to investigate the fracture surface, while the pore size distribution was analyzed with MIP. The results indicated that the setting time of the binders marginally increased as the amount of SZ and NZ increased in the mix. The compressive strengths of the pastes containing 20 and 40 wt.% of NZ were higher than those containing SZ. The compressive strengths at 28 days of the SZ solidified waste mixes were 1.2-31.1 MPa and those of the NZ solidified waste mixes were 26.0-62.4 MPa, as compared to 72.9 MPa for the control mix at the same age. The quality of the solidified waste containing zeolites was better than that with OPC alone in terms of effectiveness in reducing leachability. The concentrations of heavy metals in the leachates were within the limits specified by the US EPA. SEM and MIP revealed that the replacement of Portland cement by zeolites increased the total porosity but decreased the average pore size, resulting in better containment of heavy metal ions in the solidified waste. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pompili, R.; Anania, M. P.; Bellaveglia, M.; Biagioni, A.; Castorina, G.; Chiadroni, E.; Cianchi, A.; Croia, M.; Di Giovenale, D.; Ferrario, M.; Filippi, F.; Gallo, A.; Gatti, G.; Giorgianni, F.; Giribono, A.; Li, W.; Lupi, S.; Mostacci, A.; Petrarca, M.; Piersanti, L.; Di Pirro, G.; Romeo, S.; Scifo, J.; Shpakov, V.; Vaccarezza, C.; Villa, F.
2016-08-01
The generation of ultra-short electron bunches with ultra-low timing-jitter relative to the photo-cathode (PC) laser has been experimentally proved for the first time at the SPARC_LAB test-facility (INFN-LNF, Frascati) exploiting a two-stage hybrid compression scheme. The first stage employs RF-based compression (velocity-bunching), which shortens the bunch and imprints an energy chirp on it. The second stage is performed in a non-isochronous dogleg line, where the compression is completed resulting in a final bunch duration below 90 fs (rms). At the same time, the beam arrival timing-jitter with respect to the PC laser has been measured to be lower than 20 fs (rms). The reported results have been validated with numerical simulations.
Stress direction history of the western United States and Mexico since 85 Ma
NASA Astrophysics Data System (ADS)
Bird, Peter
2002-06-01
A data set of 369 paleostress direction indicators (sets of dikes, veins, or fault slip vectors) is collected from previous compilations and the geologic literature. Like contemporary data, these stress directions show great variability, even over short distances. Therefore statistical methods are helpful in deciding which apparent variations in space or in time are significant. First, the interpolation technique of Bird and Li [1996] is used to interpolate stress directions to a grid of evenly spaced points in each of seventeen 5-m.y. time steps since 85 Ma. Then, a t test is used to search for stress direction changes between pairs of time windows whose sense can be determined with some minimum confidence. Available data cannot resolve local stress provinces, and only the broadest changes affecting country-sized regions are reasonably certain. During 85-50 Ma, the most compressive horizontal stress azimuth $\hat{\sigma}_{1H}$ was fairly constant at ~68° (United States) to 75° (Mexico). During 50-35 Ma, both counterclockwise stress changes (in the Pacific Northwest) and clockwise stress changes (from Nevada to New Mexico) are seen, but only locally and with about 50% confidence. A major stress azimuth change by ~90° occurred at 33 +/- 2 Ma in Mexico and at 30 +/- 2 Ma in the western United States. This was probably an interchange between $\hat{\sigma}_1$ and $\hat{\sigma}_3$ caused by a decrease in horizontal compression and/or an increase in vertical compression. The most likely cause was the rollback of horizontally subducting Farallon slab from under the southwestern United States and northwest Mexico, which was rapid during 35-25 Ma. After this transition, a clockwise rotation of principal stress axes by 36°-48° occurred more gradually since 22 Ma, affecting the region between latitudes 28°N and 41°N. This occurred as the lengthening Pacific/North America transform boundary gradually added dextral shear on northwest striking planes to the previous stress field of SW-NE extension.
Statistical Compression for Climate Model Output
NASA Astrophysics Data System (ADS)
Hammerling, D.; Guinness, J.; Soh, Y. J.
2017-12-01
Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus is it important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset, one year of daily mean temperature data, particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
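A toy version of the compress/decompress cycle might look as follows, assuming the stored summaries are just per-location means and standard deviations plus one lag-1 temporal correlation; the paper's treatment of spatial nonstationarity is far richer than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
data = 15.0 + 0.1 * np.cumsum(rng.normal(size=(365, 50)), axis=0)  # mock daily temps

# --- compress: keep only small summary statistics ---
mu, sd = data.mean(axis=0), data.std(axis=0)
z = (data - mu) / sd
rho = float(np.mean(np.sum(z[1:] * z[:-1], axis=0) / (len(z) - 1)))  # lag-1 corr

# --- decompress ---
cond_expectation = np.tile(mu, (365, 1))       # best estimate, but oversmooth
eps = rng.normal(size=(365, 50))
sim = np.empty_like(eps)
sim[0] = eps[0]
for t in range(1, 365):                        # AR(1) conditional simulation
    sim[t] = rho * sim[t - 1] + np.sqrt(1.0 - rho**2) * eps[t]
cond_simulation = mu + sd * sim                # realistic roughness restored

print(f"stored {mu.size + sd.size + 1} numbers instead of {data.size}")
```

The conditional expectation minimizes expected error but smooths away weather-scale variability; the conditional simulation matches the stored variance and temporal correlation, which is the sense in which the decompressed fields are neither too smooth nor too rough.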
Han, Hyeon; Kim, Donghoon; Chu, Kanghyun; Park, Jucheol; Nam, Sang Yeol; Heo, Seungyang; Yang, Chan-Ho; Jang, Hyun Myung
2018-01-17
Ferroelectric photovoltaics (FPVs) are being extensively investigated by virtue of switchable photovoltaic responses and anomalously high photovoltages of ∼10^4 V. However, FPVs suffer from extremely low photocurrents due to their wide band gaps (Eg). Here, we present a promising FPV based on a hexagonal YbFeO3 (h-YbFO) thin-film heterostructure by exploiting its narrow Eg. More importantly, we demonstrate enhanced FPV effects by suitably exploiting the substrate-induced film strain in these h-YbFO-based photovoltaics. A compressive-strained h-YbFO/Pt/MgO heterojunction device shows ∼3 times enhanced photovoltaic efficiency compared with a tensile-strained h-YbFO/Pt/Al2O3 device. We have shown that the enhanced photovoltaic efficiency mainly stems from the enhanced photon absorption over a wide range of the photon energy, coupled with the enhanced polarization under a compressive strain. Density functional theory studies indicate that the compressive strain reduces Eg substantially and enhances the strength of d-d transitions. This study will set a new standard for determining substrates toward thin-film photovoltaics and optoelectronic devices.
NASA Technical Reports Server (NTRS)
Lee, Jeffrey M.
1999-01-01
This study establishes a consistent set of differential equations for use in describing the steady secondary flows generated by periodic compression and expansion of an ideal gas in pulse tubes. Also considered is heat transfer between the gas and the tube wall of finite thickness. A small-amplitude series expansion solution in the inverse Strouhal number is proposed for the two-dimensional axisymmetric mass, momentum and energy equations. The anelastic approach applies when shock and acoustic energies are small compared with the energy needed to compress and expand the gas. An analytic solution to the ordered series is obtained in the strong temperature limit where the zeroth-order temperature is constant. The solution shows steady velocities increase linearly for small Valensi number and can be of order 1 for large Valensi number. A conversion of steady work flow to heat flow occurs whenever temperature, velocity or phase angle gradients are present. Steady enthalpy flow is reduced by heat transfer and is scaled by the product of the Prandtl and Valensi numbers. Particle velocities from a smoke-wire experiment were compared with predictions for the basic and orifice pulse tube configurations. The theory accurately predicted the observed steady streaming.
Logarithmic compression methods for spectral data
Dunham, Mark E.
2003-01-01
A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
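A minimal sketch of this compress/expand cycle is shown below, using a single frequency-domain log-Gabor weighting and a magnitude threshold set relative to the spectral peak; the filter parameters and threshold are illustrative assumptions, and the expansion recovers the filtered signal rather than the raw input.

```python
import numpy as np

def compress(x, thresh_db=-20.0, f0=0.1, sr=0.55):
    """Log-Gabor-weighted spectrum -> keep (index, log-magnitude, phase) only
    where the log-magnitude clears a threshold below the spectral peak."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x))
    w = np.zeros_like(f)
    nz = f > 0                                   # log-Gabor has no DC response
    w[nz] = np.exp(-np.log(f[nz] / f0) ** 2 / (2 * np.log(sr) ** 2))
    Xw = X * w
    logmag = np.log10(np.abs(Xw) + 1e-12)
    keep = logmag > logmag.max() + thresh_db / 20.0   # threshold selection
    return len(x), np.flatnonzero(keep), logmag[keep], np.angle(Xw[keep])

def expand(n, idx, logmag, phase):
    """Rebuild the filtered signal from the transmitted triples."""
    X = np.zeros(n // 2 + 1, dtype=complex)
    X[idx] = 10.0 ** logmag * np.exp(1j * phase)
    return np.fft.irfft(X, n)

t = np.arange(2048)
x = np.sin(2 * np.pi * 0.09 * t) + 0.05 * np.random.default_rng(0).normal(size=2048)
n, idx, lm, ph = compress(x)
print(f"kept {len(idx)} of {n // 2 + 1} spectral samples")
y = expand(n, idx, lm, ph)
```

Because only above-threshold magnitudes and their phases are transmitted, the payload shrinks with the sparsity of the spectrum, which is the core of the method summarized above.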
Wanner, Gregory K; Osborne, Arayel; Greene, Charlotte H
2016-11-29
Cardiopulmonary resuscitation (CPR) training has traditionally involved classroom-based courses or, more recently, home-based video self-instruction. These methods typically require preparation and a purchase fee, which can dissuade many potential bystanders from receiving training. This study aimed to evaluate the effectiveness of teaching compression-only CPR to previously untrained individuals using our 6-min online CPR training video and skills practice on a homemade mannequin, reproduced by viewers with commonly available items (towel, toilet paper roll, t-shirt). Participants viewed the training video and practiced with the homemade mannequin. This was a parallel-design study with pre- and post-training evaluations of CPR skills (compression rate, depth, hand position, release) and hands-off time (time without compressions). CPR skills were evaluated using a sensor-equipped mannequin, and two blinded CPR experts observed testing of participants. Twenty-four participants were included: 12 never trained and 12 currently certified in CPR. Comparing pre- and post-training performance, the never-trained group had improvements in average compression rate per minute (64.3 to 103.9, p = 0.006), compressions with correct hand position in 1 min (8.3 to 54.3, p = 0.002), and correct compression release in 1 min (21.2 to 76.3, p < 0.001). The CPR-certified group had adequate pre- and post-test compression rates (>100/min), but an improved number of compressions with correct release (53.5 to 94.7, p < 0.001). Both groups had significantly reduced hands-off time after training. Achieving adequate compression depths (>50 mm) remained problematic in both groups. Comparisons between groups indicated significantly greater improvements in compression depth, hand position, and hands-off time in never-trained compared with CPR-certified participants. Inter-rater agreement values were also calculated between the CPR experts and the sensor-equipped mannequin. A brief internet-based video coupled with skill practice on a homemade mannequin improved compression-only CPR skills, especially in the previously untrained participants. This training method allows for widespread compression-only CPR training with a tactile learning component, without fees or advance preparation.
Kinetics of the B1-B2 phase transition in KCl under rapid compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chuanlong; Smith, Jesse S.; Sinogeikin, Stanislav V.
2016-01-28
Kinetics of the B1-B2 phase transition in KCl has been investigated under various compression rates (0.03–13.5 GPa/s) in a dynamic diamond anvil cell using time-resolved x-ray diffraction and fast imaging. Our experimental data show that the volume fraction across the transition generally gives sigmoidal curves as a function of pressure during rapid compression. Based upon classical nucleation and growth theories (Johnson-Mehl-Avrami-Kolmogorov theories), we propose a model that is applicable for studying kinetics at the compression rates studied. The fit of the experimental volume fraction as a function of pressure provides information on effective activation energy and average activation volume at a given compression rate. The resulting parameters are successfully used for interpreting several experimental observables that are compression-rate dependent, such as the transition time, grain size, and over-pressurization. The effective activation energy (Q_eff) is found to decrease linearly with the logarithm of compression rate. When Q_eff is applied to the Arrhenius equation, this relationship can be used to interpret the experimentally observed linear relationship between the logarithm of the transition time and the logarithm of the compression rate. The decrease of Q_eff with increasing compression rate results in a decrease of the nucleation rate, which is qualitatively in agreement with the observed change of grain size with compression rate. The observed over-pressurization is also well explained by the model when an exponential relationship between the average activation volume and the compression rate is assumed.
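The sigmoidal fraction-vs-pressure curves invite an Avrami-type fit. The sketch below fits a plausible JMAK-style form X(p) = 1 - exp(-((p - p0)/dp)^n) to synthetic data; both the parameterization and the data are illustrative stand-ins, not the paper's actual model or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(p, p0, dp, n):
    """JMAK-style sigmoid: zero below onset pressure p0, then saturating."""
    z = np.clip((p - p0) / dp, 0.0, None)
    return 1.0 - np.exp(-z**n)

rng = np.random.default_rng(0)
p = np.linspace(1.5, 3.0, 40)                                   # pressure grid (GPa)
x = avrami(p, 1.9, 0.35, 2.5) + 0.02 * rng.normal(size=p.size)  # mock fractions

popt, _ = curve_fit(avrami, p, x, p0=[1.8, 0.3, 2.0])
print("onset %.2f GPa, width %.2f GPa, exponent %.2f" % tuple(popt))
```

Repeating such a fit at each compression rate yields rate-dependent kinetic parameters, mirroring how the paper extracts an effective activation energy and activation volume per rate.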
Compressed air massage hastens healing of the diabetic foot.
Mars, M; Desai, Y; Gregory, M A
2008-02-01
The management of diabetic foot ulcers remains a problem. A treatment modality that uses compressed air massage has been developed as a supplement to standard surgical and medical treatment. Compressed air massage is thought to improve local tissue oxygenation around ulcers. The aim of this study was to determine whether the addition of compressed air massage influences the rate of healing of diabetic ulcers. Sixty consecutive patients with diabetes, admitted to one hospital for urgent surgical management of diabetic foot ulcers, were randomized into two groups. Both groups received standard medical and surgical management of their diabetes and ulcer. In addition, one group received 15-20 min of compressed air massage, at 1 bar pressure, daily, for 5 days a week, to the foot and the tissue around the ulcer. Healing time was calculated as the time from admission to the time of re-epithelialization. Fifty-seven patients completed the trial; 28 received compressed air massage. There was no difference in the mean age, Wagner score, ulcer size, pulse status, or peripheral sensation in the two groups. The time to healing in the compressed air massage group was significantly reduced: 58.1 +/- 22.3 days (95% confidence interval: 49.5-66.6) versus 82.7 +/- 30.7 days (95% confidence interval: 70.0-94.3) (P = 0.001). No adverse effects in response to compressed air massage were noted. The addition of compressed air massage to standard medical and surgical management of diabetic ulcers appears to enhance ulcer healing. Further studies with this new treatment modality are warranted.
2012-01-01
Background As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to their enormous size. This is and will be a frequent problem encountered every day by researchers working on genetic data. There are some options available for compressing and storing such data, such as general-purpose compression software, the PBAT/PLINK binary format, etc. However, these currently available methods either do not offer sufficient compression rates, or require a great amount of CPU time for decompression and loading every time the data is accessed. Results Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundred, which potentially allows SNP data of hundreds of Gigabytes to be stored in hundreds of Megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than the commonly used compression methods, and the data-loading process takes less time. Also, the C++ library provides direct-data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions The SpeedGene algorithm enables the storage and analysis of next generation sequencing data in current hardware environments, making system upgrades unnecessary. PMID:22591016
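The algorithm itself is not given in the abstract; as a hedged illustration of the general approach to compact SNP storage with direct access, the sketch below packs biallelic genotypes into 2 bits each and retrieves single entries without decompressing the array. This fixed 4x packing is far weaker than the factors of 16 to several hundred reported for SpeedGene, which presumably also exploits the rarity of minor alleles.

```python
import numpy as np

def pack_genotypes(g):
    """Pack genotypes (0, 1, 2; 3 = missing) into 2 bits each, 4 per byte."""
    g = np.asarray(g, dtype=np.uint8)
    pad = (-len(g)) % 4
    g = np.concatenate([g, np.full(pad, 3, np.uint8)])   # pad with 'missing'
    b = g.reshape(-1, 4)
    packed = b[:, 0] | (b[:, 1] << 2) | (b[:, 2] << 4) | (b[:, 3] << 6)
    return packed.astype(np.uint8), len(g) - pad

def get_genotype(packed, i):
    """Random access to entry i without decompressing the whole array."""
    return (packed[i // 4] >> (2 * (i % 4))) & 0b11

rng = np.random.default_rng(0)
geno = rng.choice([0, 1, 2], size=1_000_003, p=[0.81, 0.18, 0.01]).astype(np.uint8)
packed, n = pack_genotypes(geno)
print(f"{geno.nbytes} bytes -> {packed.nbytes} bytes")
for i in rng.integers(0, n, size=100):                   # spot-check direct access
    assert get_genotype(packed, i) == geno[i]
```

The direct-access property, analogous to the direct-data-retrieving functions the authors describe for their C++ library, is what spares downstream analyses a full decompression pass.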
[Real-time feedback systems for improvement of resuscitation quality].
Lukas, R P; Van Aken, H; Engel, P; Bohn, A
2011-07-01
The quality of chest compression is a determinant of survival after cardiac arrest. Therefore, the European Resuscitation Council (ERC) 2010 guidelines on resuscitation strongly focus on compression quality. Despite its impact on survival, observational studies have shown that guideline-compliant chest compression quality is often not achieved, even by professional rescue teams. Real-time feedback devices for resuscitation can measure chest compression during an ongoing resuscitation attempt through a sternal sensor equipped with a motion and pressure detection system. In addition to the electrocardiogram (ECG), ventilation can be detected by transthoracic impedance monitoring. In cases of quality deviation, such as shallow chest compression depth or hyperventilation, feedback systems produce visual or acoustic alarms. Rescuers can thereby be supported and guided to the required quality of chest compression and ventilation. Feedback technology is currently available both as a so-called stand-alone device and as an integrated feature in a monitor/defibrillator unit. Multiple studies have demonstrated sustained improvement in resuscitation education due to the use of real-time feedback technology. There is evidence that real-time feedback for resuscitation, combined with training and debriefing strategies, can improve both resuscitation quality and patient survival. Chest compression quality is an independent predictor of survival in resuscitation and should therefore be measured and documented in further clinical multicenter trials.
Effects of heat treatment on shape-setting and non-linear mechanical properties of Nitinol stent
NASA Astrophysics Data System (ADS)
Liu, Xiaopeng; Wang, Yinong; Qi, Min; Yang, Dazhi
2007-07-01
NiTi shape memory alloy is a temperature-sensitive material with non-linear mechanical properties and good biocompatibility, which can be used for medical devices such as stents, catheter guide wires, and orthodontic wires. The majority of nitinol stents are of the self-expanding type based on superelasticity: they are shape-set into the open condition, then compressed and inserted into the delivery catheter. Additionally, the shape-setting treatment can be used as a tool to accurately tune the transformation temperatures and mechanical properties. In this study, different heat treatments were performed on Ti-50.7 at.%Ni alloy wires, and the shape-setting behavior, austenite transformation finish temperature, and non-linear mechanical properties at body temperature were investigated. The experimental results show that the shape-setting temperature should be chosen between 450 and 550 °C. The shape-setting results were stable when the NiTi wires were constraint-treated at 500 and 550 °C with ageing times longer than 10 minutes. The austenite finish temperatures increased with ageing time, and increased first and then decreased with ageing temperature, with peak values obtained at 400 °C. When heat treatments were performed at the same temperature, both the upper and lower plateau stresses decreased with ageing time. Most of the treated nitinol wires showed good recovery ability at body temperature, with permanent sets of less than 0.05% after short-time ageing at 500 °C.
Experimental Investigation of Elastomer Docking Seal Compression Set, Adhesion, and Leakage
NASA Technical Reports Server (NTRS)
Daniels, Christopher C.; Oswald, Jay J.; Bastrzyk, Marta B.; Smith, Ian; Dunlap, Patrick H., Jr.; Steinetz, Bruce M.
2008-01-01
A universal docking and berthing system is being developed by the National Aeronautics and Space Administration (NASA) to support all future space exploration missions to low-Earth orbit (LEO), to the Moon, and to Mars. An investigation of the compression set of two seals mated in a seal-on-seal configuration and the force required to separate the two seals after periods of mating was conducted. The leakage rates of seals made from two silicone elastomer compounds, S0383-70 and S0899-50, configured in seal-on-seal mating were quantified. The test specimens were sub-scale seals with representative cross-sections and a 12 inch outside diameter. The leakage rate of the seals manufactured from S0899-50 was higher than that of the seals made from S0383-70 by a factor of 1.8. Similarly, the adhesion of the 50 durometer elastomer was significantly higher than that of the 70 durometer compound. However, the compression set values of the S0899-50 material were observed to be significantly lower than those for the S0383-70.
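For reference, the compression set reported in such tests quantifies the fraction of the applied deflection that the elastomer fails to recover. The standard ASTM D395 (Method B) form is assumed below; the seal-on-seal test article may define the quantities slightly differently.

```latex
% t_0: original thickness, t_i: thickness after release and recovery,
% t_n: spacer (compressed) thickness during the test
C_B = \frac{t_0 - t_i}{t_0 - t_n} \times 100\%
```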
Effect of crosslinking UHMWPE on its tensile and compressive creep performance.
Lewis, G; Carroll, M
2001-01-01
The in vitro quasi-static tensile and compressive creep properties of three sets of GUR 1050 ultra-high-molecular-weight polyethylene (UHMWPE) specimens were obtained. These sets were: control (as-received stock); "low-gamma" (specimens were crosslinked using gamma radiation, with a minimum dose of 5 Mrad); and "high-gamma" (specimens were crosslinked using gamma radiation, with a minimum dose of 15 Mrad). The % crystallinity (%C) and crosslink density (rho(x)) of the specimens in the three sets were also obtained. It was found that, in both tension and compression, crosslinking resulted in a significant depreciation in the creep properties, relative to control. The trend in the creep results is explained in terms of the impact of crosslinking on the polymer's %C and rho(x). The present results are in contrast to literature reports that show that crosslinking enhances the wear resistance of the polymer. The implications of the present results, taken together with the aforementioned literature results, are fully discussed vis-a-vis the use of crosslinked UHMWPE for fabricating articular components for arthroplasties.
JP3D compressed-domain watermarking of volumetric medical data sets
NASA Astrophysics Data System (ADS)
Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian
2010-01-01
Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images show that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data-embedding rate while keeping distortion relatively low.
Compressed multi-block local binary pattern for object tracking
NASA Astrophysics Data System (ADS)
Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao
2018-04-01
Both robustness and real-time performance are important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirements of tracking. Compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector extracted from the multi-block local binary pattern was compressed via a sparse random Gaussian matrix as the measurement matrix. The experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
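The compression step described here reduces a very high-dimensional pattern feature to a short vector with a single matrix multiply. A minimal numpy sketch follows; the very sparse ±√s/0 matrix (an Achlioptas-style construction, one common choice) and all dimensions are illustrative assumptions, and the MB-LBP feature extraction itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_matrix(m: int, n: int, s: float = 3.0) -> np.ndarray:
    """Very sparse random projection: entries are +sqrt(s), 0, -sqrt(s)
    with probabilities 1/(2s), 1-1/s, 1/(2s)."""
    probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(m, n), p=probs)

n_features = 10_000          # high-dimensional MB-LBP-style feature vector
m_compressed = 50            # compressed dimension fed to the classifier
R = sparse_random_matrix(m_compressed, n_features)

x = rng.random(n_features)   # stand-in for a real feature vector
v = R @ x                    # compressed feature
print(v.shape)               # (50,)
```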
Temporal compression in episodic memory for real-life events.
Jeunehomme, Olivier; Folville, Adrien; Stawarczyk, David; Van der Linden, Martial; D'Argembeau, Arnaud
2018-07-01
Remembering an event typically takes less time than experiencing it, suggesting that episodic memory represents past experience in a temporally compressed way. Little is known, however, about how the continuous flow of real-life events is summarised in memory. Here we investigated the nature and determinants of temporal compression by directly comparing memory contents with the objective timing of events as measured by a wearable camera. We found that episodic memories consist of a succession of moments of prior experience that represent events with varying compression rates, such that the density of retrieved information is modulated by goal processing and perceptual changes. Furthermore, the results showed that temporal compression rates remain relatively stable over one week and increase after a one-month delay, particularly for goal-related events. These data shed new light on temporal compression in episodic memory and suggest that compression rates are adaptively modulated to maintain current goal-relevant information.
Real-time community detection in full social networks on a laptop
Chamberlain, Benjamin Paul; Levy-Kramer, Josh; Humby, Clive
2018-01-01
For a broad range of research and practical applications it is important to understand the allegiances, communities and structure of key players in society. One promising direction towards extracting this information is to exploit the rich relational data in digital social networks (the social graph). As global social networks (e.g., Facebook and Twitter) are very large, most approaches make use of distributed computing systems for this purpose. Distributing graph processing requires solving many difficult engineering problems, which has led some researchers to look at single-machine solutions that are faster and easier to maintain. In this article, we present an approach for analyzing full social networks on a standard laptop, allowing for interactive exploration of the communities in the locality of a set of user specified query vertices. The key idea is that the aggregate actions of large numbers of users can be compressed into a data structure that encapsulates the edge weights between vertices in a derived graph. Local communities can be constructed by selecting vertices that are connected to the query vertices with high edge weights in the derived graph. This compression is robust to noise and allows for interactive queries of local communities in real-time, which we define to be less than the average human reaction time of 0.25s. We achieve single-machine real-time performance by compressing the neighborhood of each vertex using minhash signatures and facilitate rapid queries through Locality Sensitive Hashing. These techniques reduce query times from hours using industrial desktop machines operating on the full graph to milliseconds on standard laptops. Our method allows exploration of strongly associated regions (i.e., communities) of large graphs in real-time on a laptop. It has been deployed in software that is actively used by social network analysts and offers another channel for media owners to monetize their data, helping them to continue to provide free services that are valued by billions of people globally. PMID:29342158
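The compression idea is easy to sketch: each vertex's neighbourhood is summarised by a short minhash signature, and the fraction of agreeing signature slots estimates the Jaccard similarity between neighbourhoods. Seeded CRC32 stands in for the hash family below, and the LSH index used for fast lookup is omitted; both are assumptions, not the authors' implementation.

```python
import zlib

def minhash_signature(items: set[str], n_hashes: int = 64) -> list[int]:
    """For each of n seeded hash functions, keep the minimum hash value
    over the vertex's neighbour set."""
    return [min(zlib.crc32(f"{seed}:{it}".encode()) for it in items)
            for seed in range(n_hashes)]

def estimate_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {"user3", "user5", "user9", "user12"}
b = {"user3", "user5", "user9", "user40"}
print(estimate_jaccard(minhash_signature(a), minhash_signature(b)))  # ≈ 0.6
```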
Fast lossless compression via cascading Bloom filters.
Rozov, Roye; Shamir, Ron; Halperin, Eran
2014-01-01
Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly. PMID:25252952
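A toy sketch of the encode/decode idea, assuming a textbook Bloom filter with CRC32-based hashing (the BARCODE implementation and its cascade of filters differ): reads are hashed into the filter at encoding time, and decoding slides a read-length window along the reference and keeps the windows the filter accepts.

```python
import zlib

class BloomFilter:
    def __init__(self, n_bits: int = 1 << 20, n_hashes: int = 4):
        self.bits = bytearray(n_bits // 8)
        self.n_bits, self.n_hashes = n_bits, n_hashes

    def _positions(self, key: str):
        for seed in range(self.n_hashes):
            yield zlib.crc32(f"{seed}:{key}".encode()) % self.n_bits

    def add(self, key: str):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

# Encode: insert every read into the filter (toy reads stand in for NGS data).
bf = BloomFilter()
reads = ["ACGTACGTAC", "GGGTTTACGT", "ACGTACGGAC"]
for r in reads:
    bf.add(r)

# Decode: slide a read-length window over the reference and query the filter.
reference = "TTACGTACGTACGGACGGGTTTACGTAA"
k = 10
recovered = {reference[i:i + k] for i in range(len(reference) - k + 1)
             if reference[i:i + k] in bf}
print(recovered)  # the reads that occur in the reference (rare false positives possible)
```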
Richmond, N D; Pilling, K E; Peedell, C; Shakespeare, D; Walker, C P
2012-01-01
Stereotactic body radiotherapy for early-stage non-small cell lung cancer is an emerging treatment option in the UK. Since relatively few high-dose ablative fractions are delivered to a small target volume, the consequences of a geometric miss are potentially severe. This paper presents the results of treatment delivery set-up data collected using Elekta Synergy (Elekta, Crawley, UK) cone-beam CT imaging for 17 patients immobilised using the Bodyfix system (Medical Intelligence, Schwabmuenchen, Germany). Images were acquired on the linear accelerator at initial patient treatment set-up, following any position correction adjustments, and post-treatment. These were matched to the localisation CT scan using the Elekta XVI software. In total, 71 fractions were analysed for patient set-up errors. The mean vector error at initial set-up was calculated as 5.3±2.7 mm, which was significantly reduced to 1.4±0.7 mm following image guided correction. Post-treatment, the corresponding value was 2.1±1.2 mm. The use of the Bodyfix abdominal compression plate on 5 patients to reduce the range of tumour excursion during respiration produced mean longitudinal set-up corrections of −4.4±4.5 mm compared with −0.7±2.6 mm without compression for the remaining 12 patients. The use of abdominal compression led to a greater variation in set-up errors and a shift in the mean value. PMID:22665927
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aiming at low energy consumption in the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
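A minimal sketch of the scheme's two linear stages, under assumed sizes: encoding is one measurement-matrix product per flattened 8×8 block, and decoding is one product with a projection matrix fitted offline on training blocks. Plain least squares is used below as a stand-in for the paper's MMSE-learned matrix, and the adaptive per-block measurement allocation is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                       # 8x8 blocks flattened to length 64
m = 16                       # measurements per block (subrate 0.25)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement matrix

# "Training": fit a linear decoder D minimising ||x - D y||^2 over a
# training set of blocks (least-squares stand-in for the MMSE criterion).
X_train = rng.standard_normal((n, 5000)).cumsum(axis=0)  # smooth-ish blocks
Y_train = Phi @ X_train
D = X_train @ np.linalg.pinv(Y_train)

# Encoder: one matrix-vector product per block; decoder: one more.
x = rng.standard_normal(n).cumsum()
y = Phi @ x                  # low-complexity compressive encoding
x_hat = D @ y                # real-time linear reconstruction
print(f"relative error: {np.linalg.norm(x - x_hat) / np.linalg.norm(x):.3f}")
```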
Quality evaluation of motion-compensated edge artifacts in compressed video.
Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R
2007-04-01
Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
Comparison of two SVD-based color image compression schemes
Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli
2017-01-01
Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C from the red, green, and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with a quaternion compression scheme that performs the quaternion SVD using a real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR, and CR. The experimental results show that, with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows clear advantages in both operation time and PSNR. PMID:28257451
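The real compression scheme can be sketched directly with numpy. Stacking the three channels vertically into the rectangular matrix C is an assumption for the example (the abstract does not fix the arrangement), as are the image size and the number k of retained singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the red, green, blue channels of an H x W image in [0, 1].
H, W = 64, 64
red, green, blue = (rng.random((H, W)) for _ in range(3))

C = np.vstack([red, green, blue])         # (3H, W) real rectangular matrix

U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 12                                    # number of singular values kept
C_k = (U[:, :k] * s[:k]) @ Vt[:k, :]      # rank-k approximation

mse = np.mean((C - C_k) ** 2)
psnr = 10 * np.log10(1.0 / mse)           # peak value 1.0 for [0, 1] channels
cr = C.size / (k * (C.shape[0] + C.shape[1] + 1))  # stored-numbers ratio
print(f"PSNR = {psnr:.2f} dB, CR = {cr:.1f}")
```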
Ding, Zhengwen; Li, Hong; Wei, Jie; Li, Ruijiang; Yan, Yonggang
2018-06-01
Considering that phospholipids and glycerophosphoric acid are basic materials throughout the metabolism of the whole life period, and that bone is composed of organic polymer collagen and inorganic mineral apatite, a novel self-setting composite of magnesium glycerophosphate (MG) and di-calcium silicate (C2S)/tri-calcium silicate (C3S) was developed as a bio-cement for bone repair, reconstruction, and regeneration. The composite was prepared by mixing the MG, C2S, and C3S in certain ratios, using deionized water and phosphoric acid solution as the mixing liquid. The combination and formation of the composites were characterized by FTIR, XPS, and XRD. The physicochemical properties were studied via setting time, compressive strength, pH value, weight loss in PBS, and surface change by SEM-EDX. The biocompatibility was evaluated by cell culture in the leaching solution of the composites. The preliminary results showed that when di- and tri-calcium silicate contact water, a large amount of Ca(OH)2 is generated, raising the pH of the solution above 9, which favors the formation of hydroxyapatite (HA), the main mineral of bone. The new organic-inorganic self-setting bio-cements showed initial setting times ranging from 20 min to 85 min, and the compressive strength reached 30 MPa by the 7th day, suitable for bone fillers. The weight loss was 20% in the first week and 25% by the 4th week. Meanwhile, new HA precipitated on the composite surface during incubation in SBF, demonstrating bioactivity. Cells cultured in the leaching liquid of the composite showed high proliferation, indicating that the new bio-cement has good biocompatibility. Copyright © 2018 Elsevier B.V. All rights reserved.
An equivalent-time-lines model for municipal solid waste based on its compression characteristics.
Gao, Wu; Bian, Xuecheng; Xu, Wenjie; Chen, Yunmin
2017-10-01
Municipal solid waste (MSW) demonstrates a noticeable time-dependent stress-strain behavior, which contributes greatly to the settlement of landfills and therefore influences both the storage capacity of landfills and the integrity of internal structures. Long-term compression tests on MSW under different biodegradation conditions were analyzed; they showed that primary compression can affect secondary compression owing to biodegradation and mechanical creep. Based on the time-lines model for clays and the compression characteristics of MSW, relationships between MSW's viscous strain rate and equivalent time were established, the viscous strain functions of MSW under different biodegradation conditions were deduced, and an equivalent-time-lines model for MSW settlement was developed for two biodegradation conditions: the Type I model for the enhanced biodegradation condition and the Type II model for the normal biodegradation condition. The simulated compression results of laboratory and field compression tests under different biodegradation conditions were consistent with the measured data, which showed the reliability of both types of the equivalent-time-lines model for MSW. In addition, investigations of the long-term settlement of landfills from the literature indicated that the Type I model is suitable for predicting settlement in MSW landfills with a distinct biodegradation progress of MSW, a high content of organics in MSW, a short fill age, or an enhanced biodegradation environment, while the Type II model is better at predicting settlement in MSW landfills with a distinct progress of mechanical creep compression, a low content of organics in MSW, a long fill age, or a normal biodegradation condition. Furthermore, relationships between model parameters and the fill age of landfills were summarized. Finally, the similarities and differences between the equivalent-time-lines model for MSW and the stress-biodegradation model for MSW were discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
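For orientation, the classical secondary-compression (creep) law that time-lines models build on expresses strain as growing with the logarithm of time after primary compression; the equivalent-time-lines model extends this with biodegradation-dependent terms not shown here. The notation below is the standard geotechnical one, not necessarily the authors'.

```latex
% \varepsilon_s : secondary (creep) strain
% C_\alpha      : secondary compression index
% t_p           : time at the end of primary compression
\varepsilon_s(t) = C_\alpha \, \log_{10}\!\left(\frac{t}{t_p}\right), \qquad t > t_p
```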
Real-time video compressing under DSP/BIOS
NASA Astrophysics Data System (ADS)
Chen, Qiu-ping; Li, Gui-ju
2009-10-01
This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The programming framework for video compression is constructed using a TMS320C6416 microprocessor, a TDS510 simulator, and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks, and interrupts, realizing real-time video compression. To address data transfer within the system, based on the architecture of the C64x DSP, double buffering and the EDMA data transfer controller are used to move data from external to internal memory, so that data transfer and processing proceed at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling and achieves high-speed transfer of large volumes of data. Experimental results show the encoder can realize real-time encoding of 768×576, 25 frames/s video images.
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1997-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality-control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.
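The mechanism DCTune optimizes is ordinary DCT quantization: each 8×8 block is transformed and every coefficient is divided by the matching entry of a quantization matrix and rounded. The sketch below uses the standard JPEG luminance table purely as a placeholder; DCTune's whole point is to replace such a table with a perceptually optimized one.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (placeholder only).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def quantize_block(block: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """DCT an 8x8 pixel block and quantize each coefficient by Q."""
    coeffs = dctn(block - 128.0, norm="ortho")
    return np.round(coeffs / Q)

def dequantize_block(q: np.ndarray, Q: np.ndarray) -> np.ndarray:
    return idctn(q * Q, norm="ortho") + 128.0

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
q = quantize_block(block, Q)
print(f"nonzero coefficients: {int(np.count_nonzero(q))} of 64")
```

Coarser entries in Q zero out more high-frequency coefficients, which is where the bit-rate savings come from.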
Fabrication and evaluation of cold/formed/weldbrazed beta-titanium skin-stiffened compression panels
NASA Technical Reports Server (NTRS)
Royster, D. M.; Bales, T. T.; Davis, R. C.; Wiant, H. R.
1983-01-01
The room temperature and elevated temperature buckling behavior of cold formed beta titanium hat shaped stiffeners joined by weld brazing to alpha-beta titanium skins was determined. A preliminary set of single stiffener compression panels were used to develop a data base for material and panel properties. These panels were tested at room temperature and 316 C (600 F). A final set of multistiffener compression panels were fabricated for room temperature tests by the process developed in making the single stiffener panels. The overall geometrical dimensions for the multistiffener panels were determined by the structural sizing computer code PASCO. The data presented from the panel tests include load shortening curves, local buckling strengths, and failure loads. Experimental buckling loads are compared with the buckling loads predicted by the PASCO code. Material property data obtained from tests of ASTM standard dogbone specimens are also presented.
NASA Technical Reports Server (NTRS)
Estep, L.; Davis, B.
2001-01-01
A remote sensing campaign was conducted over a U.S. Department of Agriculture test farm at Shelton, Nebraska. An experimental field was divided into plots that were differentially treated with anhydrous ammonia. Four replicates of 0-kg/ha to 200-kg/ha plots, in 50-kg/ha increments, were laid out in a random block design. Low-altitude (GSD of 3 m) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral data were collected over the site in 224 bands. Simultaneously, ground data were collected to support the airborne imagery. In an effort to reduce data load while maintaining or enhancing algorithm performance for vegetation stress detection, band-moment compression and analysis were applied to the AVIRIS image cube. The results indicated that band-moment techniques compress the AVIRIS dataset significantly while retaining the capability of detecting environmentally induced vegetation stress.
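The abstract does not define the band moments; one plausible reading, assumed in the sketch below, treats each pixel's normalized spectrum as a distribution over wavelength and keeps its low-order moments, collapsing 224 bands to a handful of values per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((50, 50, 224))           # (rows, cols, bands) AVIRIS-like
wavelengths = np.linspace(400, 2500, 224)  # nm; nominal AVIRIS range

def band_moments(cube: np.ndarray, wl: np.ndarray) -> np.ndarray:
    """Compress each pixel's spectrum to a few spectral moments, treating
    the normalised spectrum as a distribution over wavelength."""
    p = cube / cube.sum(axis=2, keepdims=True)      # normalise per pixel
    mean = (p * wl).sum(axis=2)                     # 1st moment (centroid)
    var = (p * (wl - mean[..., None]) ** 2).sum(axis=2)
    skew = (p * (wl - mean[..., None]) ** 3).sum(axis=2) / var ** 1.5
    return np.stack([mean, var, skew], axis=2)      # 224 bands -> 3 values

moments = band_moments(cube, wavelengths)
print(cube.shape, "->", moments.shape)     # ~75x fewer values per pixel
```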
Generalised power graph compression reveals dominant relationship patterns in complex networks
Ahnert, Sebastian E.
2014-01-01
We introduce a framework for the discovery of dominant relationship patterns in complex networks, by compressing the networks into power graphs with overlapping power nodes. When paired with enrichment analysis of node classification terms, the most compressible sets of edges provide a highly informative sketch of the dominant relationship patterns that define the network. In addition, this procedure also gives rise to a novel, link-based definition of overlapping node communities in which nodes are defined by their relationships with sets of other nodes, rather than through connections within the community. We show that this completely general approach can be applied to undirected, directed, and bipartite networks, yielding valuable insights into the large-scale structure of real-world networks, including social networks and food webs. Our approach therefore provides a novel way in which network architecture can be studied, defined and classified. PMID:24663099
Lossless data compression for improving the performance of a GPU-based beamformer.
Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi
2015-04-01
The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. Data compression methods (e.g., Joint Photographic Experts Group (JPEG)) are available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm that enables parallel compression and decompression of data. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field-programmable gate array. In addition, the time to transfer data from CPU to GPU with the parallel decoding process improved threefold compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement for transferring data from the hardware front end to the software system, but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.
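The two properties the abstract emphasises, a cheap lossless encoder and a decoder that parallelises, can be imitated in miniature by delta-coding the RF samples and deflating independent blocks. Both the delta+deflate choice and the block layout are assumptions for illustration; the article's actual codec targets an FPGA encoder and GPU-side decoding.

```python
import zlib
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rf = (1000 * np.sin(np.arange(1 << 20) * 0.01)).astype(np.int16)  # toy RF data

def compress_block(block: np.ndarray) -> bytes:
    # Delta-encode first: neighbouring RF samples are correlated, so the
    # differences have lower entropy and deflate better.
    delta = np.diff(block, prepend=np.zeros(1, dtype=block.dtype))
    return zlib.compress(delta.tobytes())

def decompress_block(buf: bytes) -> np.ndarray:
    d = np.frombuffer(zlib.decompress(buf), dtype=np.int16)
    return np.cumsum(d).astype(np.int16)

blocks = np.array_split(rf, 8)     # independent blocks -> parallel decode
with ThreadPoolExecutor() as pool:
    payload = list(pool.map(compress_block, blocks))
    restored = np.concatenate(list(pool.map(decompress_block, payload)))

assert np.array_equal(restored, rf)  # lossless round trip
print(f"ratio: {rf.nbytes / sum(len(b) for b in payload):.2f}")
```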
Real-time heart rate measurement for multi-people using compressive tracking
NASA Astrophysics Data System (ADS)
Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng
2017-09-01
The rise of the aging population has created demand for inexpensive, unobtrusive, automated health care solutions. Image PhotoPlethysmoGraphy (IPPG) aids in the development of these solutions by allowing the extraction of physiological signals from video data. However, the main deficiencies of recent IPPG methods are that they are non-automated, non-real-time, and susceptible to motion artifacts (MA). In this paper, a real-time heart rate (HR) detection method for multiple subjects simultaneously is proposed and realized using the open computer vision (OpenCV) library. It consists of automatically capturing facial video of multiple subjects through a webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate with our improved Adaboost algorithm, reducing MA with our improved compressive tracking (CT) algorithm, a wavelet noise-suppression algorithm for denoising, and multi-threading for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the maximum average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and the processing speed nearly reaches real time: experiments with video recordings of ten subjects at a pixel resolution of 600 × 800 show an average detection speed of about 17 frames per second (fps).
Degradable borate glass polyalkenoate cements.
Shen, L; Coughlan, A; Towler, M; Hall, M
2014-04-01
Glass polyalkenoate cements (GPCs) containing aluminum-free borate glasses having the general composition Ag2O-Na2O-CaO-SrO-ZnO-TiO2-B2O3 were evaluated in this work. An initial screening study of sixteen compositions was used to identify regions of glass formation and cement compositions with promising rheological properties. The results of the screening study were used to develop four model borate glass compositions for further study. A second round of rheological experiments was used to identify a preferred GPC formulation for each model glass composition. The model borate glasses containing higher levels of TiO2 (7.5 mol %) tended to have longer working times and shorter setting times. Dissolution behavior of the four model GPC formulations was evaluated by measuring ion release profiles as a function of time. All four GPC formulations showed evidence of incongruent dissolution behavior when considering the relative release profiles of sodium and boron, although the exact dissolution profile of the glass was presumably obscured by the polymeric cement matrix. Compression testing was undertaken to evaluate cement strength over time during immersion in water. The cements containing the borate glass with 7.5 mol % TiO2 had the highest initial compressive strength, ranging between 20 and 30 MPa. No beneficial aging effect was observed; instead, the strength of all four model GPC formulations was found to degrade with time.
NASA Astrophysics Data System (ADS)
Hultberg, Tim; August, Thomas; Lenti, Flavia
2017-09-01
Principal Component (PC) compression is the method of choice to achieve bandwidth reduction for dissemination of hyperspectral (HS) satellite measurements and will become increasingly important with the advent of future HS missions (such as IASI-NG and MTG-IRS) with ever higher data rates. It is a linear transformation defined by a truncated set of the leading eigenvectors of the covariance of the measurements, as well as the mean of the measurements. We discuss the strategy for generation of the eigenvectors, based on the operational experience made with IASI. To compute the covariance and mean, a so-called training set of measurements is needed, which ideally should include all relevant spectral features. For the dissemination of IASI PC scores, a global static training set consisting of a large sample of measured spectra covering all seasons and all regions is used. This training set was updated once after the start of the dissemination of IASI PC scores in April 2010, by adding spectra from the 2010 Russian wildfires, in which spectral features not captured by the previous training set were identified. An alternative approach, which has sometimes been proposed, is to compute the eigenvectors on the fly from a local training set, for example consisting of all measurements in the current processing granule. It might naively be thought that this local approach would improve the compression rate by reducing the number of PC scores needed to represent the measurements within each granule. This false belief is apparently confirmed if the reconstruction score (root mean square of the reconstruction residuals) is used as the sole criterion for choosing the number of PC scores to retain, which overlooks the fact that the decrease in reconstruction score (for the same number of PCs) is achieved only by retaining an increased amount of random noise. We demonstrate that local eigenvectors retain a higher amount of noise and a lower amount of atmospheric signal than global eigenvectors. Local eigenvectors do not increase the compression rate, but increase the amount of atmospheric loss, and should be avoided. Only extremely rare situations, resulting in spectra with features which have not been observed previously, can lead to problems for the global approach. To cope with such situations we investigate a hybrid approach, which first applies the global eigenvectors and then applies local compression to the residuals, in order to identify and disseminate, in addition, any directions in the local signal that are orthogonal to the subspace spanned by the global eigenvectors.
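A compact sketch of the global-training-set approach: the mean and leading eigenvectors are computed once from a training set, and each spectrum is then disseminated as its truncated PC scores. All sizes below are assumptions on synthetic data; IASI spectra have 8461 channels and operational truncation lengths differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_train = 300, 5000          # shrunk stand-ins for IASI sizes

# Synthetic "training set": correlated structure plus instrument-like noise.
A = rng.standard_normal((n_channels, 30))
train = A @ rng.standard_normal((30, n_train)) \
        + 0.05 * rng.standard_normal((n_channels, n_train))

mean = train.mean(axis=1, keepdims=True)
cov = np.cov(train)                       # channel-by-channel covariance
eigvals, eigvecs = np.linalg.eigh(cov)    # ascending order
E = eigvecs[:, ::-1][:, :40]              # leading 40 "global" eigenvectors

def compress(spectrum: np.ndarray) -> np.ndarray:
    return E.T @ (spectrum - mean[:, 0])  # PC scores (the disseminated data)

def reconstruct(scores: np.ndarray) -> np.ndarray:
    return mean[:, 0] + E @ scores

x = train[:, 0]
res = x - reconstruct(compress(x))
print(f"reconstruction score (RMS residual): {np.sqrt(np.mean(res**2)):.4f}")
```

The same mechanics apply to local eigenvectors; the paper's point is that fitting E per granule mostly fits noise rather than buying compression.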
Multiscale Simulation of Moist Global Atmospheric Flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabowski, Wojciech W.; Smolarkiewicz, P. K.
The overarching goal of this award was to include phase changes of the water substance and the accompanying latent heating and precipitation processes in the all-scale nonhydrostatic atmospheric dynamics EUlerian/LAGrangian (EULAG) model. The model includes a fluid flow solver that is based on either an unabbreviated set of the governing equations (i.e., compressible dynamics) or a simplified set of equations without sound waves (i.e., sound-proof, either anelastic or pseudo-incompressible). The latter set has been used in small-scale dynamics for decades, but its application to all-scale dynamics (from small-scale to planetary) had never been studied in practical implementations. The highlight of the project is the development of a moist implicit compressible model that can be run with time steps as long as those of the anelastic model, limited only by the computational stability of the fluid flow and not by the speed of sound waves, which limits the stability of explicit compressible models. Applying various versions of the EULAG model within the same numerical framework allows for an unprecedented comparison of solutions obtained with various sets of the governing equations and straightforward evaluation of the impact of various physical parameterizations on the model solutions. The main outcomes of this study are reported in three papers, two published and one currently under review. These papers include comparisons between model solutions for idealized moist problems across the range of scales from small to planetary. These tests include: moist thermals rising in a stably stratified environment (following Grabowski and Clark, J. Atmos. Sci. 1991) and in a moist-neutral environment (after Bryan and Fritsch, Mon. Wea. Rev. 2002), moist flows over mesoscale topography (as in Grabowski and Smolarkiewicz, Mon. Wea. Rev. 2002), deep convection in a sheared environment (following Weisman and Klemp, Mon. Wea. Rev. 1982), a moist extension of the baroclinic wave on the sphere of Jablonowski and Williamson (Q. J. R. Met. Soc. 2006), and a moist extension of the Held-Suarez idealized climate benchmark (Held and Suarez, Bull. Amer. Met. Soc., 1994).
Compression of high-density EMG signals for trapezius and gastrocnemius muscles.
Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto
2014-03-10
New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals, as well as their differences in time, were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. CONCLUSIONS: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles. PMID:24612604
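The lossless finding, that differencing the signals in time before compression helps, is easy to reproduce in miniature. The synthetic "EMG" below and the use of deflate instead of a dedicated image codec are assumptions for illustration only.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp = 64, 2048                   # 2-D electrode grid flattened to rows
t = np.arange(n_samp)
emg = (200 * np.sin(t * 0.05)[None, :] * rng.random((n_ch, 1))
       + 2 * rng.standard_normal((n_ch, n_samp))).astype(np.int16)

image = emg                                # channels as image rows
diff_image = np.diff(image, axis=1,
                     prepend=np.zeros((n_ch, 1), dtype=np.int16))

raw = len(zlib.compress(image.tobytes()))
dif = len(zlib.compress(diff_image.tobytes()))
print(f"FSR without differencing: {100 * (1 - raw / image.nbytes):.1f}%")
print(f"FSR with differencing:    {100 * (1 - dif / image.nbytes):.1f}%")
```

Temporal differences concentrate the signal energy into small values with near-constant high bytes, which is what the entropy coder exploits.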
Astrometric Search Method for Individually Resolvable Gravitational Wave Sources with Gaia
NASA Astrophysics Data System (ADS)
Moore, Christopher J.; Mihaylov, Deyan P.; Lasenby, Anthony; Gilmore, Gerard
2017-12-01
Gravitational waves (GWs) cause the apparent position of distant stars to oscillate with a characteristic pattern on the sky. Astrometric measurements (e.g., those made by Gaia) provide a new way to search for GWs. The main difficulty facing such a search is the large size of the data set; Gaia observes more than one billion stars. In this Letter the problem of searching for GWs from individually resolvable supermassive black hole binaries using astrometry is addressed for the first time; it is demonstrated how the data set can be compressed by a factor of more than 10^6, with a loss of sensitivity of less than 1%. This technique was successfully used to recover artificially injected GW signals from mock Gaia data and to assess the GW sensitivity of Gaia. Throughout the Letter the complementarity of Gaia and pulsar timing searches for GWs is highlighted.
Influence of Ultrafine 2CaO·SiO₂ Powder on Hydration Properties of Reactive Powder Concrete.
Sun, Hongfang; Li, Zishanshan; Memon, Shazim Ali; Zhang, Qiwu; Wang, Yaocheng; Liu, Bing; Xu, Weiting; Xing, Feng
2015-09-17
In this research, we assessed the influence of an ultrafine 2CaO·SiO₂ powder on the hydration properties of a reactive powder concrete system. The ultrafine powder was manufactured through a chemical combustion method. The morphology of the ultrafine powder and the development of hydration products in cement paste prepared with the ultrafine powder were investigated by scanning electron microscopy (SEM), the mineralogical composition was determined by X-ray diffraction, and the heat release characteristics up to the age of 3 days were investigated by calorimetry. Moreover, the properties of the cementitious system in the fresh and hardened states (setting time, drying shrinkage, and compressive strength), with 5% of the ordinary Portland cement replaced by the ultrafine powder, were evaluated. From SEM micrographs, the particle size of the ultrafine powder was found to be up to several hundred nanometers. Hydration products began to form at the age of 3 days owing to the slow-reacting nature of belitic 2CaO·SiO₂. The initial and final setting times were prolonged, and no significant difference in drying shrinkage was observed when 5% of the ordinary Portland cement was replaced by the ultrafine powder. Moreover, in comparison to the control reactive powder concrete, the reactive powder concrete containing the ultrafine powder showed improvement in compressive strength at and beyond 7 days of testing. Based on the above, it can be concluded that the manufactured ultrafine 2CaO·SiO₂ powder has the potential to improve the performance of a reactive powder cementitious system. PMID:28793560
Clinical efficacy of telemedicine in emergency radiotherapy for malignant spinal cord compression.
Hashimoto, S; Shirato, H; Kaneko, K; Ooshio, W; Nishioka, T; Miyasaka, K
2001-09-01
The authors developed a Telecommunication-HElped Radiotherapy Planning and Information SysTem (THERAPIST), then evaluated its clinical benefit for radiotherapy in district hospitals where consultation with the university hospital was required. The system consists of a personal computer with an image scanner and a digital camera, set up in district hospitals and directly connected via ISDN to an image server, and a treatment planning device set up in a university hospital. Image data and consultative reports are sent to the server. Radiation oncologists at the university hospital determine a treatment schedule and verify actual treatment fields. From 1998 to 1999, 12 patients with malignant spinal cord compression (MSCC) were treated by emergency radiotherapy with the help of this system. Image quality, transmission time, and cost-benefit were also satisfactory for clinical use. The mean time between the onset of symptoms and the start of radiotherapy was reduced significantly from 7.1 days to 0.8 days (P < .05) by the introduction of the system. Five of 6 nonambulant patients became ambulant after the introduction of THERAPIST, compared with 2 of 8 before its introduction. The treatment outcome was significantly better after the introduction of the system (P < .05) and was suggested to exceed the international standard. The telecommunication-helped radiotherapy planning and information system was useful in emergency radiotherapy in district hospitals for patients with MSCC for whom consultation with experienced radiation oncologists at a university hospital was required.
Shimokochi, Yohei; Kuwano, Satoshi; Yamaguchi, Taichi; Abutani, Hiroyuki; Shima, Norihiro
2017-10-01
This study aimed to investigate the effects of wearing a compression garment (CG) during night sleep on muscle fatigue recovery after high-intensity eccentric and concentric knee extensor exercises. Seventeen male college students participated in 2 experimental sessions under CG and non-CG (NCG) wearing conditions. Before night sleep under CG or NCG wearing conditions, the subjects performed a fatiguing protocol consisting of 10 sets of 10 repetitions of maximal isokinetic eccentric and concentric knee extensor contractions, with 30-second rest intervals between the sets. Immediately before and after and 24 hours after the fatiguing protocol, maximum voluntary isometric contraction (MVIC) force for knee extensor muscles was measured; surface electromyographic data from the vastus medialis and rectus femoris were also measured. A 2-way repeated-measure analysis of variance followed by Bonferroni pairwise comparisons were used to analyze the differences in each variable. Paired-sample t-tests were used to analyze the mean differences between the conditions at the same time points for each variable. The MVIC 24 hours after the fatiguing protocol was approximately 10% greater in the CG than in the NCG condition (p = 0.033). Changes in the electromyographic variables over time did not significantly differ between the conditions. Thus, it was concluded that wearing a CG during night sleep may promote localized muscle fatigue recovery but does not influence neurological factors after the fatiguing exercise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, K.C.; Noel, D.; Hechler, J.-J.
Samples of Narmco Rigidite 5208/WC3000 carbon-epoxy composite prepreg were exposed to ambient temperature and 50 percent relative humidity for different periods up to 66 days. The aging has a significant effect on prepreg physical properties such as tack, volatiles content, and gel time. A set of four-ply laminates made from aged prepreg was subjected to tensile testing, ultrasonic inspection, and optothermal inspection. No relationship could be discerned between laminate properties and prepreg aging time. However, variations in panel homogeneity were observed, and these correlated with thermal diffusivity and tensile modulus measurements, but not with ultimate tensile strength or elongation. A set of six-ply laminates was used to measure compressive properties, interlaminar shear strength, and physical properties. These panels also showed variations in porosity, again unrelated to aging, but in addition, the fiber-resin ratio was observed to decrease with aging time. Both factors were found to affect mechanical properties. The implications concerning the importance of monitoring the aging by physicochemical methods are discussed. 30 refs.
Medical evacuation (MEDEVAC) transport time on combat mortality in patients with non-compressible torso injury and traumatic amputations
2018-12-28
... increased morbidity and mortality. Limited data exists on the influence of transport time on patient outcomes with specific injury types. The ... treatment facility on morbidity and mortality in casualties with traumatic extremity amputation and non-compressible torso injury (NCTI). Methods: We ...
A source-specific model for lossless compression of global Earth data
NASA Astrophysics Data System (ADS)
Kess, Barbara Lynne
A Source Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-Km Advanced Very High Resolution Radiometer (AVHRR) data, which contains 662 Megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters for the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows of data enables SSM-GED to improve the compression rate by increasing the number of parameters and maintaining a small model cost for each subwindow of data. This lossless compression method is applicable to other large volumes of image data such as video.
Safety and Efficacy of Defibrillator Charging During Ongoing Chest Compressions: A Multicenter Study
Edelson, Dana P.; Robertson-Dick, Brian J.; Yuen, Trevor C.; Eilevstjønn, Joar; Walsh, Deborah; Bareis, Charles J.; Vanden Hoek, Terry L.; Abella, Benjamin S.
2013-01-01
BACKGROUND Pauses in chest compressions during cardiopulmonary resuscitation have been shown to correlate with poor outcomes. In an attempt to minimize these pauses, the American Heart Association recommends charging the defibrillator during chest compressions. While simulation work suggests decreased pause times using this technique, little is known about its use in clinical practice. METHODS We conducted a multicenter, retrospective study of defibrillator charging at three US academic teaching hospitals between April 2006 and April 2009. Data were abstracted from CPR-sensing defibrillator transcripts. Pre-shock pauses and total hands-off time preceding the defibrillation attempts were compared among techniques. RESULTS A total of 680 charge-cycles from 244 cardiac arrests were analyzed. The defibrillator was charged during ongoing chest compressions in 448 (65.9%) instances with wide variability across the three sites. Charging during compressions correlated with a decrease in median pre-shock pause [2.6 (IQR 1.9–3.8) vs 13.3 (IQR 8.6–19.5) s; p < 0.001] and total hands-off time in the 30 s preceding defibrillation [10.3 (IQR 6.4–13.8) vs 14.8 (IQR 11.0–19.6) s; p < 0.001]. The improvement in hands-off time was most pronounced when rescuers charged the defibrillator in anticipation of the pause, prior to any rhythm analysis. There was no difference in inappropriate shocks when charging during chest compressions (20.0 vs 20.1%; p=0.97) and there was only one instance noted of inadvertent shock administration during compressions, which went unnoticed by the compressor. CONCLUSIONS Charging during compressions is underutilized in clinical practice. The technique is associated with decreased hands-off time preceding defibrillation, with minimal risk to patients or rescuers. PMID:20807672
The time to remember: Temporal compression and duration judgements in memory for real-life events.
Jeunehomme, Olivier; D'Argembeau, Arnaud
2018-05-01
Recent studies suggest that the continuous flow of information that constitutes daily life events is temporally compressed in episodic memory, yet the characteristics and determinants of this compression mechanism remain unclear. This study examined this question using an experimental paradigm incorporating wearable camera technology. Participants experienced a series of real-life events and were later asked to mentally replay various event sequences that were cued by pictures taken during the original events. Estimates of temporal compression (the ratio of the time needed to mentally re-experience an event to the actual event duration) showed that events were replayed, on average, about eight times faster than the original experiences. This compression mechanism seemed to operate by representing events as a succession of moments or slices of prior experience separated by temporal discontinuities. Importantly, however, rates of temporal compression were not constant and were lower for events involving goal-directed actions. The results also showed that the perceived duration of events increased with the density of recalled moments of prior experience. Taken together, these data extend our understanding of the mechanisms underlying the temporal compression and perceived duration of real-life events in episodic memory.
Xu, Hui; Gong, Weiliang; Syltebo, Larry; Lutze, Werner; Pegg, Ian L
2014-08-15
The binary furnace slag-metakaolin DuraLith geopolymer waste form, which has been considered one of the candidate waste forms for immobilization of certain Hanford secondary wastes (HSW) from the vitrification of nuclear wastes at the Hanford Site, Washington, was extended to a ternary fly ash-furnace slag-metakaolin system to improve workability, reduce hydration heat, and evaluate high HSW waste loading. A concentrated HSW simulant, consisting of more than 20 chemicals with a sodium concentration of 5 mol/L, was employed to prepare the alkaline activating solution. Fly ash was incorporated at up to 60 wt% of the binder materials, whereas metakaolin was kept constant at 26 wt%. The fresh waste form pastes were subjected to isothermal calorimetry and setting time measurement, and the cured samples were further characterized by compressive strength and TCLP leach tests. This study is the first to establish quantitative linear relationships between hydration heat and both initial and final setting times; such relationships had not previously been reported for any cementitious waste form or geopolymeric material. These correlations between setting times and hydration heat may make it possible to efficiently design and optimize cementitious waste forms and industrial-waste-based geopolymers using limited testing results. Copyright © 2014 Elsevier B.V. All rights reserved.
The mathematical theory of signal processing and compression-designs
NASA Astrophysics Data System (ADS)
Feria, Erlan H.
2006-05-01
The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, also known as source coding. Source coding is concerned with signal source memory space compression, while processor coding deals with signal processor computational time compression. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.
Buléon, Clément; Delaunay, Julie; Parienti, Jean-Jacques; Halbout, Laurent; Arrot, Xavier; Gérard, Jean-Louis; Hanouz, Jean-Luc
2016-09-01
Chest compressions require physical effort, leading to increased fatigue and rapid degradation in the quality of cardiopulmonary resuscitation over time. Despite the harmful effect of interrupting chest compressions, current guidelines recommend that rescuers switch every 2 minutes. The impact on the quality of chest compressions during extended cardiopulmonary resuscitation has yet to be assessed. We conducted a randomized crossover study on a manikin (ResusciAnne; Laerdal). After randomization, 60 professional emergency rescuers performed 2 × 10 minutes of continuous chest compressions with and without a feedback device (CPRmeter). The efficient compression rate (primary outcome) was defined as the proportion of compressions that met the rate, depth, and leaning targets simultaneously (recorded continuously). The 10-minute mean efficient compression rate was significantly better in the feedback group: 42% vs 21% (P < .001). There was no significant difference between the first (43%) and the tenth minute (36%; P = .068) with feedback. Conversely, a significant difference was evident from the second minute without feedback (35% initially vs 27%; P < .001). The efficient compression rate difference with and without feedback was significant every minute, from the second minute onwards. CPRmeter feedback significantly improved chest compression depth from the first minute, leaning from the second minute, and rate from the third minute. A real-time feedback device delivers longer effective, steadier chest compressions over time. An extrapolation of these results from simulation suggests that rescuer switches could be carried out beyond the currently recommended 2 minutes when a feedback device is used. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fambri, Francesco; Dumbser, Michael; Zanotti, Olindo
2017-11-01
This paper presents an arbitrary high-order accurate ADER Discontinuous Galerkin (DG) method on space-time adaptive meshes (AMR) for the solution of two important families of non-linear, time-dependent partial differential equations for compressible dissipative flows: the compressible Navier-Stokes equations and the equations of viscous and resistive magnetohydrodynamics in two and three space dimensions. The work continues a recent series of papers concerning the development and application of a proper a posteriori subcell finite volume limiting procedure suitable for discontinuous Galerkin methods (Dumbser et al., 2014, Zanotti et al., 2015 [40,41]). It is well known that a major weakness of high order DG methods lies in the difficulty of limiting discontinuous solutions, which generate spurious oscillations, namely the so-called 'Gibbs phenomenon'. In the present work, a nonlinear stabilization of the scheme is sequentially and locally introduced only for troubled cells on the basis of a novel a posteriori detection criterion, i.e. the MOOD approach. The main benefits of the MOOD paradigm, i.e. the computational robustness even in the presence of strong shocks, are preserved, and the numerical diffusion is considerably reduced also for the limited cells by resorting to a proper sub-grid. In practice the method first produces a so-called candidate solution by using a high order accurate unlimited DG scheme. Then, a set of numerical and physical detection criteria is applied to the candidate solution, namely: positivity of pressure and density, absence of floating point errors, and satisfaction of a discrete maximum principle in the sense of polynomials. In those cells where at least one of these criteria is violated, the computed candidate solution is flagged as troubled and locally rejected. Subsequently, a more reliable numerical solution is recomputed a posteriori by employing a more robust but still very accurate ADER-WENO finite volume scheme on the subgrid averages within that troubled cell. Finally, a high order DG polynomial is reconstructed back from the evolved subcell averages. We apply the whole approach for the first time to the equations of compressible gas dynamics and magnetohydrodynamics in the presence of viscosity, thermal conductivity and magnetic resistivity, therefore extending our family of adaptive ADER-DG schemes to cases for which the numerical fluxes also depend on the gradient of the state vector. The distinctive high-resolution properties of the presented numerical scheme are demonstrated on a wide range of non-trivial test cases, both for the compressible Navier-Stokes equations and for the viscous and resistive magnetohydrodynamics equations. The present results show clearly that the shock-capturing capability of the new schemes is significantly enhanced within a cell-by-cell Adaptive Mesh Refinement (AMR) implementation together with time-accurate local time stepping (LTS).
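To make the a posteriori MOOD cycle concrete, the sketch below applies the same candidate-detect-recompute logic to a 1D scalar advection toy problem. It is only an illustration of the limiting loop: the detection uses floating-point validity and a discrete maximum principle (the positivity checks on pressure and density do not apply to a scalar), and the function names are hypothetical stand-ins for the unlimited ADER-DG step and the ADER-WENO subcell fallback.

```python
import numpy as np

def high_order_update(u, c, dt, dx):
    # Unlimited (oscillatory) Lax-Wendroff step for linear advection u_t + c u_x = 0,
    # standing in for the high-order candidate solution of the unlimited DG scheme.
    nu = c * dt / dx
    return (u - 0.5 * nu * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * nu**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

def robust_fv_update(u, c, dt, dx):
    # Robust first-order upwind step, standing in for the ADER-WENO fallback.
    nu = c * dt / dx
    return u - nu * (u - np.roll(u, 1))

def mood_step(u, c, dt, dx):
    candidate = high_order_update(u, c, dt, dx)
    # Detection: floating-point validity plus a discrete maximum principle (DMP)
    # against the local neighborhood of the previous solution.
    lo = np.minimum(np.minimum(np.roll(u, 1), u), np.roll(u, -1))
    hi = np.maximum(np.maximum(np.roll(u, 1), u), np.roll(u, -1))
    eps = 1e-12
    troubled = (~np.isfinite(candidate)) | (candidate < lo - eps) | (candidate > hi + eps)
    # Locally reject and recompute only the troubled cells with the robust scheme.
    fallback = robust_fv_update(u, c, dt, dx)
    candidate[troubled] = fallback[troubled]
    return candidate

x = np.linspace(0.0, 1.0, 200)
u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)  # square wave with two shocks
for _ in range(100):
    u = mood_step(u, c=1.0, dt=0.002, dx=1.0 / 200)  # stays oscillation-free
```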
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
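The progressive bit-plane stage can be illustrated compactly. The sketch below is not the CCSDS coder itself (which adds entropy coding and segment structure); it only shows the embedded-bitstream idea, assuming integer transform coefficients: planes are emitted from most to least significant, so truncating the stream early yields a coarser, lossy reconstruction.

```python
import numpy as np

def bitplane_encode(coeffs, num_planes=8):
    # Split quantized transform coefficients into sign bits plus magnitude
    # bit-planes, most significant plane first. Truncating 'planes' after k
    # entries gives an embedded lossy code with error below 2**(num_planes-k).
    mags = np.abs(coeffs).astype(np.int64)
    signs = (coeffs < 0).astype(np.uint8)
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(num_planes - 1, -1, -1)]
    return signs, planes

def bitplane_decode(signs, planes, num_planes=8):
    # Rebuild magnitudes from however many planes were received.
    mags = np.zeros(planes[0].shape, dtype=np.int64)
    for i, plane in enumerate(planes):
        mags |= plane.astype(np.int64) << (num_planes - 1 - i)
    return np.where(signs == 1, -mags, mags)

c = np.array([37, -5, 0, 120])                 # toy wavelet coefficients
signs, planes = bitplane_encode(c)
assert (bitplane_decode(signs, planes) == c).all()        # all planes: lossless
coarse = bitplane_decode(signs, planes[:4], num_planes=8)  # 4 planes: lossy
```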
Wavelet compression of noisy tomographic images
NASA Astrophysics Data System (ADS)
Kappeler, Christian; Mueller, Stefan P.
1995-09-01
3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
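A minimal sketch of the transform-and-threshold step, using the PyWavelets package and the Daubechies-12 wavelet named above; the comparison against the Hanning-filtered reference and the choice of threshold are the study's actual subject and are not reproduced here. The `keep` fraction and the placeholder image are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def threshold_compress(image, wavelet="db12", level=3, keep=0.03):
    # Wavelet-transform the image, keep only the largest 'keep' fraction of
    # coefficients (the sparse set that would actually be stored), zero the
    # rest, and invert to inspect the induced image modification.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    coeffs_thr = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_thr, wavelet)

image = np.random.default_rng(0).normal(size=(128, 128))  # placeholder image
recon = threshold_compress(image)  # keep=0.03: roughly a factor-of-thirty
                                   # reduction before quantization and coding
```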
SCALCE: boosting sequence compression algorithms using locally consistent encoding.
Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk
2012-12-01
High throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, again exploiting the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to 3.34 times better compression while improving the running time by a factor of 1.26. Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary data are available at Bioinformatics online.
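The boosting principle, reorder reads so that similar ones become neighbors and then apply any off-the-shelf compressor, can be sketched in a few lines. The bucketing below keys each read on its lexicographically smallest k-mer, a hypothetical stand-in for SCALCE's Locally Consistent Parsing cores, so the sketch shows the reordering idea rather than the published algorithm.

```python
import gzip

def boosted_compress(reads, k=12):
    # Group reads by a representative substring so near-identical reads end up
    # adjacent in the stream, then hand the reordered block to a generic
    # compressor (gzip here). The min-k-mer key is an illustrative stand-in
    # for the Locally Consistent Parsing cores used by SCALCE.
    def core(read):
        return min(read[i:i + k] for i in range(max(1, len(read) - k + 1)))
    reordered = sorted(reads, key=core)
    return gzip.compress("\n".join(reordered).encode())

reads = ["ACGTACGTACGTAAAT", "TACGTACGTACGTAAA", "GGGCCCGGGCCCGGGA"]
plain = gzip.compress("\n".join(reads).encode())
boosted = boosted_compress(reads)  # typically smaller on large read sets
```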
Chan, Lung Sang; Gao, Jian-Feng
2017-01-01
The Cathaysia Block is located in the southeastern part of South China, in the western Pacific subduction zone. It is thought to have undergone a compression-extension transition of the continental crust during the Mesozoic-Cenozoic, as the Pacific Plate subducted beneath the Eurasian Plate, resulting in extensive magmatism, extensional basins, and reactivation of fault systems. Although mechanisms such as trench roll-back have been proposed for the compression-extension transition, the timing and progress of the transition under a convergent setting remain ambiguous due to the lack of suitable geological records and overprinting by later tectonic events. In this study, a numerical thermo-dynamical program was employed to evaluate how variable slab angles, thermal gradients of the lithospheres, and convergence velocities would give rise to the change of crustal stress in a convergent subduction zone. Model results show that a higher slab dip angle, lower convergence velocity, and higher lithospheric thermal gradient facilitate the subduction process. The modeling results reveal that the continental crust stress is dominated by horizontal compression during the early stage of the subduction, which can revert to horizontal extension in the back-arc region, combined with roll-back of the subducting slab and development of mantle upwelling. The parameters facilitating the subduction process also favor the compression-extension transition in the upper plate of the subduction zone. These results corroborate the geology of the Cathaysia Block: the initiation of the extensional regime in the Cathaysia Block was probably triggered by roll-back of the slowly subducting slab. PMID:28182640
Compressive sampling by artificial neural networks for video
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt
2011-06-01
We describe a smart surveillance strategy for handling novelty changes. Current sensors tend to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves, and is thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. We illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS transconductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weights by Hebbian outer products; "read" by inner product and pointwise nonlinear threshold), to localize and track the threat targets.
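A minimal software sketch of the frame-differencing front end described above: only pixels whose intensity changed beyond a threshold are reported, so a static scene produces an organized-sparse output. The threshold value is an illustrative assumption.

```python
import numpy as np

def change_mask(prev, curr, threshold=12):
    # Report only pixels whose intensity changed by more than 'threshold',
    # mimicking the retina-like scheme above: a stagnant edge produces no
    # output, and the sparse set of changed locations doubles as an index.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > threshold        # organized-sparse set of "ones"
    events = np.argwhere(mask)     # only changed locations are kept/stored
    return mask, events

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
curr = prev.copy()
curr[10:14, 20:24] += 40           # a small moving target
mask, events = change_mask(prev, curr)  # events lists only the 4x4 change
```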
The Efficacy of LUCAS in Prehospital Cardiac Arrest Scenarios: A Crossover Mannequin Study.
Gyory, Robert A; Buchle, Scott E; Rodgers, David; Lubin, Jeffrey S
2017-04-01
High-quality cardiopulmonary resuscitation (CPR) is critical for successful cardiac arrest outcomes. Mechanical devices may improve CPR quality. We simulated a prehospital cardiac arrest, including patient transport, and compared the performance of the LUCAS™ device, a mechanical chest compression-decompression system, to manual CPR. We hypothesized that because of the movement involved in transporting the patient, LUCAS would provide chest compressions more consistent with high-quality CPR guidelines. We performed a crossover-controlled study in which a recording mannequin was placed on the second floor of a building. An emergency medical services (EMS) crew responded, defibrillated, and provided either manual or LUCAS CPR. The team transported the mannequin through hallways and down stairs to an ambulance and drove to the hospital with CPR in progress. Critical events were manually timed while the mannequin recorded data on compressions. Twenty-three EMS providers participated. Median time to defibrillation was not different for LUCAS compared to manual CPR (p=0.97). LUCAS had a lower median number of compressions per minute (112/min vs. 125/min; IQR = 102-128 and 102-126, respectively; p<0.002), which was more consistent with current American Heart Association CPR guidelines, and a higher percentage of compressions delivered at an adequate rate (71% vs. 40%; IQR = 21-93 and 12-88, respectively; p<0.002). In addition, LUCAS had a higher percent adequate depth (52% vs. 36%; IQR = 25-64 and 29-39, respectively; p<0.007) and lower percent total hands-off time (15% vs. 20%; IQR = 10-22 and 15-27, respectively; p<0.005). LUCAS performed no differently than manual CPR in median compression release depth, percent fully released compressions, median time hands off, or percent correct hand position. In our simulation, LUCAS had a higher rate of adequate compressions and decreased total hands-off time as compared to manual CPR. Chest compression quality may be better when using a mechanical device during patient movement in prehospital cardiac arrest patients.
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands in the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
Scan-Line Methods in Spatial Data Systems
1990-09-04
algorithms in detail to show some of the implementation issues. Data Compression: Storage and transmission times can be reduced by using compression ... goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding, in which each scan line can be ... independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of
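Run-length coding of a single scan line, as mentioned above, is simple enough to state exactly; because each line is coded with no context from its neighbors, lines can be compressed and decompressed in parallel. A minimal sketch:

```python
def rle_encode_line(scanline):
    # Run-length code one scan line independently: each maximal run of equal
    # pixel values becomes a (value, count) pair. No other line is consulted,
    # which is what makes the scheme suitable for a parallel scan-line system.
    runs = []
    i = 0
    while i < len(scanline):
        j = i
        while j < len(scanline) and scanline[j] == scanline[i]:
            j += 1
        runs.append((scanline[i], j - i))
        i = j
    return runs

def rle_decode_line(runs):
    # Inverse: expand each (value, count) pair back into pixels.
    return [value for value, count in runs for _ in range(count)]

line = [0, 0, 0, 7, 7, 1]
assert rle_decode_line(rle_encode_line(line)) == line  # [(0,3), (7,2), (1,1)]
```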
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Mankbadi, Reda R.
2002-01-01
An analysis of the nonlinear development of the large-scale structures or instability waves in compressible round jets was conducted using the integral energy method. The equations of motion were decomposed into two sets of equations; one set governing the mean flow motion and the other set governing the large-scale structure motion. The equations in each set were then combined to derive kinetic energy equations that were integrated in the radial direction across the jet after the boundary-layer approximations were applied. Following the application of further assumptions regarding the radial shape of the mean flow and the large structures, equations were derived that govern the nonlinear, streamwise development of the large structures. Using numerically generated mean flows, calculations show the energy exchanges and the effects of the initial amplitude on the coherent structure development in the jet.
Collected Data of The Boreal Ecosystem and Atmosphere Study (BOREAS)
NASA Technical Reports Server (NTRS)
Newcomer, J. (Editor); Landis, D. (Editor); Conrad, S. (Editor); Curd, S. (Editor); Huemmrich, K. (Editor); Knapp, D. (Editor); Morrell, A. (Editor); Nickerson, J. (Editor); Papagno, A. (Editor); Rinker, D. (Editor)
2000-01-01
The Boreal Ecosystem-Atmosphere Study (BOREAS) was a large-scale international interdisciplinary climate-ecosystem interaction experiment in the northern boreal forests of Canada. Its goal was to improve our understanding of the boreal forests -- how they interact with the atmosphere, how much CO2 they can store, and how climate change will affect them. BOREAS wanted to learn to use satellite data to monitor the forests, and to improve computer simulation and weather models so scientists can anticipate the effects of global change. This BOREAS CD-ROM set is a set of 12 CD-ROMs containing the finalized point data sets and compressed image data from the BOREAS Project. All point data are stored in ASCII text files, and all image and GIS products are stored as binary images, compressed using GZip. Additional descriptions of the various data sets on this CD-ROM are available in other documents in the BOREAS series.
NASA Astrophysics Data System (ADS)
Kim, Sang-Young; Shim, Chun Sik; Sturtevant, Caleb; Kim, Dave (Dae-Wook); Song, Ha Cheol
2014-09-01
Glass Fiber Reinforced Plastic (GFRP) structures are primarily manufactured using hand lay-up or vacuum infusion techniques, which are cost-effective for the construction of marine vessels. This paper aims to investigate the mechanical properties and failure mechanisms of hybrid GFRP composites, formed by applying a hand lay-up processed exterior and a vacuum infusion processed interior layup, providing benefits for structural performance and ease of manufacturing. The hybrid GFRP composites contain one, two, and three vacuum infusion processed layer sets with consistent sets of hand lay-up processed layers. Mechanical properties assessed in this study include tensile, compressive, and in-plane shear properties. Hybrid composites with three sets of vacuum infusion layers showed the highest tensile properties, while those with two sets had the highest compressive properties. The batch homogeneity of the GFRP fabrication processes is evaluated using the experimentally obtained mechanical properties.
Loaded delay lines for future RF pulse compression systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.M.; Wilson, P.B.; Kroll, N.M.
1995-05-01
The peak power delivered by the klystrons in the NLCTA (Next Linear Collider Test Accelerator) now under construction at SLAC is enhanced by a factor of four in a SLED-II type of RF pulse compression system (pulse width compression ratio of six). To achieve the desired output pulse duration of 250 ns, a delay line constructed from a 36 m length of circular waveguide is used. Future colliders, however, will require even higher peak power and larger compression factors, which favors a more efficient binary pulse compression approach. Binary pulse compression, however, requires a line whose delay time is approximately proportional to the compression factor. To reduce the length of these lines to manageable proportions, periodically loaded delay lines are being analyzed using a generalized scattering matrix approach. One issue under study is the possibility of propagating two TE{sub o} modes, one with a high group velocity and one with a group velocity of the order 0.05c, for use in a single-line binary pulse compression system. Particular attention is paid to time domain pulse degradation and to Ohmic losses.
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archive. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to the ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to quickly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
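The stream-separation step is easy to picture: each four-line FASTQ record contributes its header to a metadata stream, its bases to a read stream, and its quality string to a quality stream, and each stream is packed separately. The sketch below shows only that separation, with LZMA standing in for all three back ends; LW-FQZip's incremental metadata coding, run-length-limited quality coding, and reference mapping of the reads are omitted.

```python
import lzma

def split_and_pack(fastq_text):
    # Parse each 4-line FASTQ record (header, bases, '+', qualities) into three
    # data streams and compress each independently, so each compressor sees a
    # homogeneous stream with its own kind of redundancy.
    lines = fastq_text.strip().split("\n")
    meta, reads, quals = [], [], []
    for i in range(0, len(lines), 4):
        meta.append(lines[i])       # '@' header / metadata
        reads.append(lines[i + 1])  # nucleotide sequence
        quals.append(lines[i + 3])  # quality score string
    return tuple(lzma.compress("\n".join(s).encode())
                 for s in (meta, reads, quals))

fq = "@r1\nACGTACGT\n+\nIIIIHHHH\n@r2\nACGTACGA\n+\nIIIIHHHG"
meta_z, reads_z, quals_z = split_and_pack(fq)
```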
Kelly, Terri-Ann N; Roach, Brendan L; Weidner, Zachary D; Mackenzie-Smith, Charles R; O'Connell, Grace D; Lima, Eric G; Stoker, Aaron M; Cook, James L; Ateshian, Gerard A; Hung, Clark T
2013-07-26
The tensile modulus of articular cartilage is much larger than its compressive modulus. This tension-compression nonlinearity enhances interstitial fluid pressurization and decreases the frictional coefficient. The current set of studies examines the tensile and compressive properties of cylindrical chondrocyte-seeded agarose constructs over different developmental stages through a novel method that combines osmotic loading, video microscopy, and uniaxial unconfined compression testing. This method was previously used to examine tension-compression nonlinearity in native cartilage. Engineered cartilage, cultured under free-swelling (FS) or dynamically loaded (DL) conditions, was tested in unconfined compression in hypertonic and hypotonic salt solutions. The apparent equilibrium modulus decreased with increasing salt concentration, indicating that increasing the bath solution osmolarity shielded the fixed charges within the tissue, shifting the measured moduli along the tension-compression curve and revealing the intrinsic properties of the tissue. With this method, we were able to measure the tensile (401 ± 83 kPa for FS and 678 ± 473 kPa for DL) and compressive (161 ± 33 kPa for FS and 348 ± 203 kPa for DL) moduli of the same engineered cartilage specimens. These moduli are comparable to values obtained from traditional methods, validating this technique for measuring the tensile and compressive properties of hydrogel-based constructs. This study shows that engineered cartilage exhibits tension-compression nonlinearity reminiscent of the native tissue, and that dynamic deformational loading can yield significantly higher tensile properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
Quantum autoencoders for efficient compression of quantum data
NASA Astrophysics Data System (ADS)
Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan
2017-12-01
Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.
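As a point of reference for the compression task, the sketch below trains a purely classical linear autoencoder by gradient descent: data lying on a 2-dimensional subspace of a 4-dimensional space is mapped through a 2-dimensional latent code and recovered. The quantum autoencoder replaces the encoder/decoder matrices with a parametrized circuit whose recovery fidelity is optimized classically; the objective is analogous, but nothing below is specific to the paper's circuits.

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 4))
basis /= np.linalg.norm(basis, axis=1, keepdims=True)
X = rng.normal(size=(256, 2)) @ basis      # data confined to a 2-D subspace
E = rng.normal(scale=0.5, size=(4, 2))     # encoder: 4 -> 2 (the compression)
D = rng.normal(scale=0.5, size=(2, 4))     # decoder: 2 -> 4
lr = 1e-2
for _ in range(5000):
    Z = X @ E                              # compress to the latent space
    R = Z @ D - X                          # reconstruction residual
    gD = 2 * Z.T @ R / len(X)              # gradients of mean squared error
    gE = 2 * X.T @ (R @ D.T) / len(X)
    E -= lr * gE
    D -= lr * gD
loss = np.mean((X @ E @ D - X) ** 2)       # shrinks toward 0: the data really
                                           # fits in the smaller dimension
```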
Automatic Summarization as a Combinatorial Optimization Problem
NASA Astrophysics Data System (ADS)
Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki
We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. The analysis of the oracle revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper proposes a summarization method formulated as a combinatorial optimization: selecting, from a pool consisting of sentences at various compression rates, the set of sentences that maximizes the sum of the sentence scores, subject to length constraints. The score of a sentence is defined by its compression rate, content words, and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of the oracles and those of the candidates. The results obtained on the TSC-2 corpus showed that our method outperformed previous systems with statistical significance.
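The selection step is a multiple-choice knapsack: each source sentence offers several compressed variants, each with a length and a score; at most one variant per sentence may enter the summary, and the total length must respect the budget. A minimal dynamic-programming sketch, with made-up lengths and scores (the paper scores variants from compression rate, content words, and position):

```python
def select_summary(candidates, budget):
    # Multiple-choice knapsack by DP over total length used. 'candidates' maps
    # sentence id -> list of (length, score) variants; for each sentence we
    # either skip it or take exactly one of its variants.
    best = {0: (0.0, [])}  # length used -> (best score, chosen (sent, variant) pairs)
    for sent_id, variants in candidates.items():
        new_best = dict(best)  # the "skip this sentence" option
        for used, (score, chosen) in best.items():
            for v_id, (length, v_score) in enumerate(variants):
                u = used + length
                cand = (score + v_score, chosen + [(sent_id, v_id)])
                if u <= budget and (u not in new_best or new_best[u][0] < cand[0]):
                    new_best[u] = cand
        best = new_best
    return max(best.values(), key=lambda t: t[0])

cands = {0: [(10, 3.0), (6, 2.2)], 1: [(8, 2.5), (5, 1.4)], 2: [(12, 2.8)]}
score, picks = select_summary(cands, budget=15)  # picks one variant per sentence
```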
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: (1) Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated (in a variant of this procedure, the masking is omitted); various combinations of shifting, masking, and/or offsets are tried until solutions are found, and from the set of solutions the one that provides the greatest compression for the representation and executes in the minimum amount of time is selected. (2) Given a list of numbers, try to find one or more solutions in which, if each number is reduced modulo some value, a unique value is generated.
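A minimal sketch of the first type of subalgorithm: exhaustively try shift/mask combinations until every key in the static set hashes to a distinct value, then use the resulting pair for a constant-time, collision-free membership test. The search order and parameter bounds here are illustrative, not the synthesizer's actual strategy.

```python
def synthesize_hash(keys, max_shift=32, max_mask_bits=8):
    # Search shift/mask combinations until each key maps to a unique value.
    # The returned (shift, mask) pair defines a perfect hash for this static
    # set: no collisions, hence no secondary hashing or table search.
    for shift in range(max_shift):
        for width in range(1, max_mask_bits + 1):
            mask = (1 << width) - 1
            if len({(k >> shift) & mask for k in keys}) == len(keys):
                return shift, mask  # prefer small width: greatest compression
    return None

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
shift, mask = synthesize_hash(keys)          # finds (0, 7) for this set
table = {(k >> shift) & mask: k for k in keys}

def is_member(x):
    # Constant-time membership test: one shift, one mask, one lookup.
    return table.get((x >> shift) & mask) == x
```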
NASA Astrophysics Data System (ADS)
Johnson, J. R.; Bell, J. F., III; Hayes, A.; Deen, R. G.; Godber, A.; Arvidson, R. E.; Lemmon, M. T.
2015-12-01
The Mastcam imaging system on the Curiosity rover continued acquisition of multispectral images of the same terrain at multiple times of day at three new rover locations between sols 872 and 1003. These data sets will be used to investigate the light scattering properties of rocks and soils along the Curiosity traverse using radiative transfer models. Images were acquired by the Mastcam-34 (M-34) camera on Sols 872-892 at 8 times of day (Mojave drill location), Sols 914-917 (Telegraph Peak drill location) at 9 times of day, and Sols 1000-1003 at 8 times of day (Stimson-Murray Formation contact near Marias Pass). Data sets were acquired using filters centered at 445, 527, 751, and 1012 nm, and the images were JPEG-compressed. Data sets typically were pointed ~east and ~west to provide phase angle coverage from near 0° to 125-140° for a variety of rocks and soils. Also acquired on Sols 917-918 at the Telegraph Peak site was a multiple time-of-day Mastcam sequence pointed southeast using only the broadband Bayer filters that provided losslessly compressed images with phase angles ~55-129°. Navcam stereo images were also acquired with each data set to provide broadband photometry and terrain measurements for computing surface normals and local incidence and emission angles used in photometric modeling. On Sol 1028, the MAHLI camera was used as a goniometer to acquire images at 20 arm positions, all centered at the same location within the work volume from a near-constant distance of 85 cm from the surface. Although this experiment was run at only one time of day (~15:30 LTST), it provided phase angle coverage from ~30° to ~111°. The terrain included the contact between the uppermost portion of the Murray Formation and the Stimson sandstones, and was the first acquisition of both Mastcam and MAHLI photometry images at the same rover location. The MAHLI images also allowed construction of a 3D shape model of the Stimson-Murray contact region. The attached figure shows a phase color composite of the western Stimson area, created using phase angles of 8°, 78°, and 130° at 751 nm. The red areas correspond to highly backscattering materials that appear to concentrate along linear fractures throughout this area. The blue areas correspond to more forward scattering materials dispersed through the stratigraphic sequence.
Design Space Approach in Optimization of Fluid Bed Granulation and Tablets Compression Process
Djuriš, Jelena; Medarević, Djordje; Krstić, Marko; Vasiljević, Ivana; Mašić, Ivana; Ibrić, Svetlana
2012-01-01
The aim of this study was to optimize fluid bed granulation and tablet compression processes using a design space approach. Type of diluent, binder concentration, temperature during mixing, granulation and drying, spray rate, and atomization pressure were recognized as critical formulation and process parameters. They were varied in the first set of experiments in order to estimate their influence on critical quality attributes, that is, granule characteristics (size distribution, flowability, bulk density, tapped density, Carr's index, Hausner's ratio, and moisture content) using a Plackett-Burman experimental design. Type of diluent and atomization pressure were selected as the most important parameters. In the second set of experiments, a design space for process parameters (atomization pressure and compression force) and its influence on tablet characteristics was developed. The percentage of paracetamol released and tablet hardness were determined as critical quality attributes. Artificial neural networks (ANNs) were applied in order to determine the design space. The ANN models showed that atomization pressure mostly influences the dissolution profile, whereas compression force mainly affects tablet hardness. Based on the obtained ANN models, it is possible to predict tablet hardness and the paracetamol release profile for any combination of the analyzed factors. PMID:22919295
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption are essential for secure real-time video transmission. Applying both techniques simultaneously is challenging, since both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm utilizes the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms achieve high compression and acceptable quality, and resist statistical and brute-force attacks with low computational processing.
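The chaotic component is the standard logistic map, x_{k+1} = r x_k (1 - x_k), which for r near 4 is chaotic and extremely sensitive to the initial value x0 (the key). The sketch below derives a keystream from the orbit and XORs it with data; it illustrates only the logistic-map half of the scheme, not the paper's combined A5 construction, and is not cryptographically vetted.

```python
import numpy as np

def logistic_keystream(x0, n, r=3.99):
    # Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k) and quantize each
    # state to one byte. With r near 4 the orbit is chaotic, so nearby keys x0
    # diverge rapidly and produce unrelated keystreams.
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for k in range(n):
        x = r * x * (1.0 - x)
        out[k] = int(x * 256) & 0xFF
    return out

coeffs = np.array([17, 250, 3, 128], dtype=np.uint8)  # e.g. quantized parameters
ks = logistic_keystream(x0=0.3141592, n=coeffs.size)
cipher = coeffs ^ ks   # XOR encryption of the significant parameters
plain = cipher ^ ks    # XOR with the same keystream decrypts
```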
Khosravan, Shahla; Mohammadzadeh-Moghadam, Hossein; Mohammadzadeh, Fatemeh; Fadafen, Samane Ajam Khames; Gholami, Malihe
2017-01-01
Breast engorgement affects lactation. The present study was conducted to determine the effect of hollyhock combined with warm and cold compresses on improving breast engorgement in lactating women. Participants included 40 women with breast engorgement divided into intervention and control groups. Participants in both groups received routine interventions, a warm compress before nursing, and a cold compress after nursing; the intervention group additionally received a hollyhock compress. Both groups received these treatments 6 times during 2 days. The data collected were analyzed in SPSS-16 using a generalized estimating equation. According to the results, a significant difference was observed in overall breast engorgement severity in the intervention group (P < .001). The severity of breast engorgement was also found to have a significant relationship with time (P < .001). According to the findings, hollyhock leaf compress combined with routine interventions for breast engorgement can improve breast engorgement. © The Author(s) 2015.
Schröder, J; Bucher, M; Meyer, O
2016-09-01
Intubation with a laryngeal tube (LT) is a recommended alternative to endotracheal intubation during advanced life support (ALS). LT insertion is easy; therefore, it may also be an alternative to bag-mask ventilation (BMV) for untrained personnel performing basic life support (BLS). Data from manikin studies support an influence of the LT on no-flow time (NFT) during ALS. We performed a prospective, randomized manikin study using a two-rescuer model to compare the effects of ventilation using an LT and BMV on NFT during BLS. Participants were trained in BMV and were inexperienced in the use of an LT. There was no significant difference in total NFT with the use of an LT and BMV (LT: mean 83.1 ± 37.3 s; BMV: mean 78.7 ± 24.5 s; p = 0.313), but we found significant differences in the progression of the scenario: in the BLS scenario, the proportion of time spent performing chest compressions was higher when BMV was used than when an LT was used. The quality of chest compressions and the ventilation rate did not differ significantly between the two groups. The mean tidal volume and mean minute volume were significantly larger with the use of an LT compared with the use of BMV. In conclusion, in a two-rescuer BLS scenario, NFT is longer with the use of an LT (without prior training) than with the use of BMV (with prior training). The probable reason for this result is that higher tidal volumes with the use of an LT lead to longer interruptions without chest compressions.
NASA Astrophysics Data System (ADS)
Wason, H.; Herrmann, F. J.; Kumar, R.
2016-12-01
Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high-resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--one that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criterion. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step beyond multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme where single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and faster acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. We reconstruct conventional seismic data from the jittered data and demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps in controlling the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms. We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable surveys.
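The jittered-sampling idea can be stated in a few lines: partition the regular source grid into bins and fire one shot at a random position within each bin, so the subsampling looks irregular to a sparsifying transform while the largest gap stays bounded by roughly two bin widths. A minimal sketch with illustrative parameters:

```python
import numpy as np

def jittered_subsample(n_points, factor, seed=0):
    # One shot per bin of 'factor' grid points, at a random offset inside the
    # bin: the average rate is 1/factor, as with random subsampling, but the
    # largest gap between consecutive shots stays below 2*factor, which
    # localized sparsifying transforms require for reliable reconstruction.
    rng = np.random.default_rng(seed)
    bins = np.arange(0, n_points, factor)
    return bins + rng.integers(0, factor, size=bins.size)

shots = jittered_subsample(n_points=100, factor=4)  # 25 jittered shot positions
gaps = np.diff(shots)  # bounded, unlike fully random subsampling of 25 shots
```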
Roberts, Jonathan S; Niu, Jianli; Pastor-Cervantes, Juan A
2017-10-01
Hemostasis following transradial access (TRA) is usually achieved by mechanical compression. We investigated use of the QuikClot Radial hemostasis pad (Z-Medica) compared with the TR Band (Terumo Medical) to shorten hemostasis after TRA. Thirty patients undergoing TRA coronary angiography and/or percutaneous coronary intervention were randomized into three cohorts post TRA: 10 patients received mechanical compression with the TR Band, 10 patients received 30 min of compression with the QuikClot Radial pad, and 10 patients received 60 min of compression with the QuikClot Radial pad. Times to hemostasis and access-site complications were recorded. Radial artery patency was evaluated 1 hour after hemostasis by the reverse Barbeau's test. There were no differences in patient characteristics, mean dose of heparin (7117 ± 1054 IU), or mean activated clotting time value (210 ± 50 sec) at the end of procedure among the three groups. Successful hemostasis was achieved in 100% of patients with both the 30-min and 60-min compression groups using the QuikClot pad. Hemostasis failure occurred in 50% of patients when the TR Band was initially weaned at the protocol-driven time (40 min after sheath removal). Mean compression time for hemostasis with the TR Band was 149.4 min compared with 30.7 min and 60.9 min for the 30-min and 60-min QuikClot groups, respectively. No radial artery occlusion occurred in any subject at the end of the study. Use of the QuikClot Radial pad following TRA in this pilot trial significantly shortened hemostasis times when compared with the TR Band, with no increased complications noted.
Ferragina, Paolo; Giancarlo, Raffaele; Greco, Valentina; Manzini, Giovanni; Valiente, Gabriel
2007-07-13
Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity, and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available, and no comparison of USM with existing methods, both based on alignments and not, seems to be available. We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, which naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC (Receiver Operating Curve) analysis, aims at assessing the intrinsic ability of the methodology to discriminate and classify biological sequences and structures. A second set of experiments aims at assessing how well two commonly available classification algorithms, UPGMA (Unweighted Pair Group Method with Arithmetic Mean) and NJ (Neighbor Joining), can use the methodology to perform their task, their performance being evaluated against gold standards and with the use of well known statistical indexes, i.e., the F-measure and the partition distance. Based on the experiments, several conclusions can be drawn and, from them, novel and valuable guidelines derived for the use of USM on biological data. The main ones are reported next. UCD and NCD are indistinguishable, i.e., they yield nearly the same values of the statistical indexes we have used, across experiments and data sets, while CD is almost always worse than both. UPGMA seems to yield better classification results than NJ, i.e., better values of the statistical indexes (10% difference or above), on a substantial fraction of experiments, compressors and USM approximation choices. The compression program PPMd, based on PPM (Prediction by Partial Matching), for generic data, and Gencompress for DNA, are the best performers among the compression algorithms we have used, although the difference in performance, as measured by statistical indexes, between them and the other algorithms depends critically on the data set and may not be as large as expected.
PPMd used with UCD or NCD and UPGMA is very close in performance to the alignment methods on sequence data, although slightly worse (less than a 2% difference on the F-measure). Yet, it scales well with data set size and it can work on data other than sequences. In summary, our quantitative analysis naturally complements the rich theory behind USM and supports the conclusion that the methodology is worth using because of its robustness, flexibility, scalability, and competitiveness with existing techniques. In particular, the methodology applies to all biological data in textual format. The software and data sets are available under the GNU GPL at the supplementary material web page. PMID:17629909
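The compression-based dissimilarities being compared all reduce to a few compressor calls. For instance, NCD approximates the Universal Similarity Metric as NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(s) is the compressed size of s. A minimal sketch, with bz2 standing in for the 25 compressors tested:

```python
import bz2

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Dissimilarity: a real compressor stands in for
    # the uncomputable Kolmogorov complexity. Values near 0 indicate similar
    # strings (compressing the concatenation adds little), values near 1
    # indicate unrelated strings.
    cx = len(bz2.compress(x))
    cy = len(bz2.compress(y))
    cxy = len(bz2.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

ncd(b"ACGT" * 200, b"ACGT" * 199 + b"TTTT")  # small: near-duplicate sequences
ncd(b"ACGT" * 200, b"QWERTY" * 150)          # larger: unrelated strings
```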
SU-F-T-91: Development of Real Time Abdominal Compression Force (ACF) Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T; Kim, D; Kang, S
Purpose: Hard-plate based abdominal compression is known to be effective, but no explicit method exists to quantify abdominal compression force (ACF) and maintain the proper ACF through the whole procedure. In addition, even with compression, 4D CT is necessary to manage residual motion, but is often not possible due to reduced surrogating sensitivity. In this study, we developed and evaluated a system that both monitors ACF in real time and provides a surrogating signal even under compression. The system can also provide visual biofeedback. Methods: The system consists of a compression plate, an ACF monitoring unit, and a visual-biofeedback device. The ACF monitoring unit contains a thin air balloon the size of the compression plate and a gas pressure sensor. The unit is attached to the bottom of the plate and is thus placed between the plate and the patient when compression is applied, where it detects compression pressure. For the reliability test, three volunteers were directed to take several different breathing patterns, and the ACF variation was compared with the respiratory flow and the external respiratory signal to confirm that the system provides corresponding behavior. In addition, guiding waveforms were generated based on free breathing and then applied to evaluate the effectiveness of visual biofeedback. Results: We could monitor ACF variation in real time and confirmed that the data correlated with both the respiratory flow data and the external respiratory signal. Even under abdominal compression, it was possible to have the subjects successfully follow the guide patterns using the visual-biofeedback system. Conclusion: The developed real-time ACF monitoring system was found to be functional as intended and consistent. With the capability of both providing a real-time surrogating signal under compression and enabling visual biofeedback, the system should improve the quality of respiratory motion management in radiation therapy. This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)
Comanche Helmet-Mounted Display Heading-Tape Simulation
NASA Technical Reports Server (NTRS)
Turpin, Terry; Dowell, Susan; Atencio, Adolph
2006-01-01
The Aeroflightdynamics Directorate (AMRDEC) conducted a simulation to assess the performance associated with a Contact Analog, world-referenced heading tape as implemented on the Comanche Helmet Integrated Display Sight System (HIDSS), compared with a Compressed heading tape similar to that specified by the former Military Standard (MIL-STD) 1295. Six experienced pilots flew three modified Aeronautical Design Standards (ADS)-33 maneuvers (Hover Turn, Bob-up, Transient Turn) and a precision traffic pattern in the NASA Vertical Motion Simulator (VMS). Analysis of the pilot objective performance data and subjective handling qualities ratings (HQRs) showed the following. Compressed symbology in the Velocity Stabilization (VelStab) flight mode generally produced more precise performance than Contact Analog symbology with respect to the heading, altitude, position, and time criteria specified for the maneuvers tested. VelStab outperformed the Automatic Flight Control System (AFCS) on all maneuvers, achieving desired performance on most maneuvers for both symbol sets. Performance in the AFCS mode was generally in the desired-to-adequate range for heading and altitude but did not meet adequate standards for hover position and time for the Hover Turn and Bob-up maneuvers. VelStab and AFCS performance were nearly the same for the Transient Turn. Pilot comments concerning the Contact Analog heading-tape implementation were generally unfavorable in spite of the achieved levels of performance. HQRs showed that Compressed symbology in the VelStab flight mode produced the lowest mean HQR, encompassing mixed ratings of satisfactory handling and handling needing improvement. All other symbology/flight-mode combinations yielded higher HQRs, reflecting opinions that deficiencies in aircraft handling due to HMD symbology would need improvement. The Contact Analog heading tape and other symbology require improvement, especially when operating in the AFCS mode. NASA-TLX rated Compressed symbology in the VelStab flight mode as the least demanding on resources, closely followed by ratings for Contact Analog in the VelStab mode. In a similar pattern, TLX ratings for maneuvers completed in the AFCS mode yielded a higher level of resource demand, with even smaller differences between the Contact Analog and Compressed symbology sets. Further research should be conducted where objective data and subjective HQRs indicate a need for improvement. The areas requiring attention are those where the symbology implementation, the flight control system, or a combination of both caused workload to reach an objectionable level where adequate performance was either difficult to achieve or unachievable. These areas are clearly identified in this report. Symbology that received negative HQR comments from a majority of pilots should also be examined. A summary of pilot comments can be found in appendix A. Additional simulation trials should be considered to address the identified issues.
Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M
2014-01-01
Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either left unprocessed before hearing-aid input or were given different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast, multichannel WDRC often leads to poor music quality, whereas linear processing or slow WDRC is generally preferred. Furthermore, the effect of WDRC is more important for music preferences than music-industry CL applied to signals before the hearing-aid input stage. Variability in hearing-aid users' perceptions of music quality may be partially explained by frequency resolution abilities.
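To make the compression parameters above concrete, the sketch below implements a single-channel WDRC gain stage with a one-pole level estimator, where the release constant plays the role of the 50 msec versus 1,000 msec settings compared in the study. This is a minimal illustration of the general technique, not the simulator used by the authors; all parameter values (threshold, ratio, sample rate) are invented.

```python
import numpy as np

def wdrc_gain(x, fs, threshold_db=50.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Single-channel WDRC sketch: envelope follower + static compression curve.

    Levels above `threshold_db` are compressed by `ratio`; the level estimate
    rises with the attack constant and decays with the release constant, so a
    1,000 ms release tracks the envelope far more slowly than a 50 ms release.
    """
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 1e-6
    for n, sample in enumerate(np.abs(x)):
        coeff = a_att if sample > level else a_rel
        level = coeff * level + (1.0 - coeff) * sample
        env[n] = level
    level_db = 20.0 * np.log10(np.maximum(env, 1e-6)) + 94.0  # arbitrary dB offset
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)   # static input/output compression curve
    return x * 10.0 ** (gain_db / 20.0)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t) * np.where(t < 0.5, 0.05, 0.8)  # soft then loud
fast = wdrc_gain(tone, fs, release_ms=50.0)     # "fast" condition
slow = wdrc_gain(tone, fs, release_ms=1000.0)   # "slow" condition
```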
Compression socks and functional recovery following marathon running: a randomized controlled trial.
Armstrong, Stuart A; Till, Eloise S; Maloney, Stephen R; Harris, Gregory A
2015-02-01
Compression socks have become a popular recovery aid for distance running athletes. Although some physiological markers have been shown to be influenced by wearing these garments, scant evidence exists on their effects on functional recovery. This research aims to shed light on whether wearing compression socks for 48 hours after marathon running can improve functional recovery, as measured by a timed treadmill test to exhaustion 14 days following the marathon. Athletes (n = 33; age 38.5 ± 7.2 years) participating in the 2012 Melbourne, 2013 Canberra, or 2013 Gold Coast marathons were recruited and randomized into the compression sock or placebo group. A graded treadmill test to exhaustion was performed 2 weeks before and 2 weeks after each marathon. Time to exhaustion and average and maximum heart rates were recorded. Participants were asked to wear their socks for 48 hours immediately after completion of the marathon. The change in treadmill time (seconds) was recorded for each participant. Thirty-three participants completed the treadmill protocols. In the compression group, average treadmill run-to-exhaustion time 2 weeks after the marathon increased by 2.6% (52 ± 103 seconds). In the placebo group, run-to-exhaustion time decreased by 3.4% (-62 ± 130 seconds), P = 0.009. This shows a significant beneficial effect of compression socks on recovery compared with placebo. Wearing below-knee compression socks for 48 hours after marathon running improved functional recovery as measured by a graded treadmill test to exhaustion 2 weeks after the event.
Time domain SAR raw data simulation using CST and image focusing of 3D objects
NASA Astrophysics Data System (ADS)
Saeed, Adnan; Hellwich, Olaf
2017-10-01
This paper presents the use of a general-purpose electromagnetic simulator, CST, to simulate realistic synthetic aperture radar (SAR) raw data of three-dimensional objects. The raw data are later focused in MATLAB using the range-Doppler algorithm. Within CST Microwave Studio, a replica of the TerraSAR-X chirp signal is incident upon a modeled corner reflector (CR) whose design and material properties are identical to those of the real one. After defining the mesh and other appropriate settings, the reflected wave is measured at several distant points along a line parallel to the viewing direction. This is analogous to an array antenna and is synthesized to create a long aperture for SAR processing. The time domain solver in CST is based on the solution of the differential form of Maxwell's equations. Data exported from CST are arranged into a 2D matrix with range and azimuth axes. A Hilbert transform is applied to convert the real signal to complex data with phase information. Range compression, range cell migration correction (RCMC), and azimuth compression are applied in the time domain to obtain the final SAR image. This simulation can provide valuable information for clarifying which real-world objects produce SAR images suitable for high-accuracy identification.
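As a rough illustration of the range-compression step described above, the following sketch builds a chirp replica, recovers the analytic signal with a Hilbert transform, and matched-filters a delayed echo in the frequency domain. It is not the CST/TerraSAR-X pipeline itself; the pulse parameters and echo geometry are invented for the example.

```python
import numpy as np
from scipy.signal import hilbert

fs, T, B = 200e6, 5e-6, 50e6          # sample rate, pulse width, bandwidth (illustrative)
K = B / T                              # chirp rate
t = np.arange(int(fs * T)) / fs
chirp = np.cos(2 * np.pi * (0.5 * K * t ** 2))   # real-valued transmit replica

# Simulated raw range line: one echo delayed by 2 us, plus noise
n = int(fs * 20e-6)
raw = np.zeros(n)
d = int(fs * 2e-6)
raw[d:d + chirp.size] += 0.5 * chirp
raw += 0.01 * np.random.randn(n)

# Hilbert transform -> complex data with phase, then matched filtering
raw_c = hilbert(raw)
ref_c = hilbert(chirp)
Nfft = int(2 ** np.ceil(np.log2(n + chirp.size)))
compressed = np.fft.ifft(np.fft.fft(raw_c, Nfft) * np.conj(np.fft.fft(ref_c, Nfft)))
peak = np.argmax(np.abs(compressed))   # peak index recovers the echo delay in samples
print(peak, d)
```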
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zylstra, A. B.; Frenje, J. A.; Séguin, F. H.
2014-11-03
The effects of shock dynamics on compressibility of indirect-drive ignition-scale surrogate implosions, CH shells filled with D3He gas, have been studied using charged-particle spectroscopy. Spectral measurements of D3He protons produced at the shock-bang time probe the shock dynamics and in-flight characteristics of an implosion. The proton shock yield is found to vary by over an order of magnitude. A simple model relates the observed yield to the incipient hot-spot adiabat, suggesting that implosions with a rapid radiation-power increase during the main drive pulse may have a 2x higher hot-spot adiabat, potentially reducing compressibility. A self-consistent 1-D implosion model was used to infer the areal density (ρR) and the shell center-of-mass radius (Rcm) from the downshift of the shock-produced D3He protons. The observed ρR at shock-bang time is substantially higher for implosions where the laser drive is on until near the compression-bang time ('short-coast'), while longer-coasting implosions have lower ρR. This corresponds to a much larger temporal difference between the shock- and compression-bang times in the long-coast implosions (~800 ps) than in the short-coast ones (~400 ps); this will be verified with a future direct bang-time diagnostic. This model-inferred differential bang time contradicts radiation-hydrodynamic simulations, which predict a constant 700-800 ps differential independent of coasting time. This result is potentially explained by uncertainties in modeling the late-time ablation drive on the capsule. In an ignition experiment, an earlier shock-bang time would result in an earlier onset of shell deceleration, potentially reducing compression and, thus, fuel ρR.
Interdisciplinary ICU Cardiac Arrest Debriefing Improves Survival Outcomes
Wolfe, Heather; Zebuhr, Carleen; Topjian, Alexis A.; Nishisaki, Akira; Niles, Dana E.; Meaney, Peter A.; Boyle, Lori; Giordano, Rita T.; Davis, Daniela; Priestley, Margaret; Apkon, Michael; Berg, Robert A.; Nadkarni, Vinay M.; Sutton, Robert M.
2014-01-01
Objective In-hospital cardiac arrest is an important public health problem. High-quality resuscitation improves survival but is difficult to achieve. Our objective is to evaluate the effectiveness of a novel, interdisciplinary, postevent quantitative debriefing program to improve survival outcomes after in-hospital pediatric chest compression events. Design, Setting, and Patients Single-center prospective interventional study of children who received chest compressions between December 2008 and June 2012 in the ICU. Interventions Structured, quantitative, audiovisual, interdisciplinary debriefing of chest compression events with front-line providers. Measurements and Main Results Primary outcome was survival to hospital discharge. Secondary outcomes included survival of event (return of spontaneous circulation for ≥ 20 min) and favorable neurologic outcome. Primary resuscitation quality outcome was a composite variable, termed “excellent cardiopulmonary resuscitation,” prospectively defined as a chest compression depth ≥ 38 mm, rate ≥ 100/min, ≤ 10% of chest compressions with leaning, and a chest compression fraction > 90% during a given 30-second epoch. Quantitative data were available only for patients aged 8 years or older. There were 119 chest compression events (60 control and 59 interventional). The intervention was associated with a trend toward improved survival to hospital discharge on both univariate analysis (52% vs 33%, p = 0.054) and after controlling for confounders (adjusted odds ratio, 2.5; 95% CI, 0.91–6.8; p = 0.075), and it significantly increased survival with favorable neurologic outcome on both univariate (50% vs 29%, p = 0.036) and multivariable analyses (adjusted odds ratio, 2.75; 95% CI, 1.01–7.5; p = 0.047). Cardiopulmonary resuscitation epochs for patients aged 8 years or older during the debriefing period were 5.6 times more likely to meet targets of excellent cardiopulmonary resuscitation (95% CI, 2.9–10.6; p < 0.01). Conclusion Implementation of an interdisciplinary, postevent quantitative debriefing program was significantly associated with improved cardiopulmonary resuscitation quality and survival with favorable neurologic outcome. (Crit Care Med 2014; XX:00–00) PMID:24717462
Time-compressed speech test in the elderly.
Arceno, Rayana Silva; Scharlach, Renata Coelho
2017-09-28
The present study aimed to evaluate the performance of elderly people on the time-compressed speech test according to the variables of ear and order of presentation, and to analyze the types of errors presented by the volunteers. This is an observational, descriptive, quantitative, analytical, and primary cross-sectional study involving 22 elderly adults with normal hearing or mild sensorineural hearing loss between the ages of 60 and 80. The elderly were submitted to the time-compressed speech test with a compression ratio of 60%, using the electromechanical time-compression method. A list of 50 disyllables was applied to each ear, and the starting side was chosen at random. Regarding test performance, the elderly fell short relative to adults, and there was no statistical difference between the ears. There was, however, statistical evidence of better performance for the second ear tested. The most frequently mistaken words were those beginning with the phonemes /p/ and /d/. The presence of a consonant cluster in a word also increased the occurrence of mistakes. The elderly show worse performance in the auditory closure ability when assessed by the time-compressed speech test compared to adults. This result suggests that elderly people have difficulty recognizing speech when it is pronounced at faster rates. Therefore, strategies must be used to facilitate the communicative process, regardless of the presence of hearing loss.
5 CFR 610.404 - Requirement for time-accounting method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS HOURS OF DUTY Flexible and Compressed Work Schedules § 610.404 Requirement for time-accounting method. An agency that authorizes a flexible work schedule or a compressed work schedule under this...
Image Segmentation, Registration, Compression, and Matching
NASA Technical Reports Server (NTRS)
Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina
2011-01-01
A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at the rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.
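The core property behind an affine invariant parameter space is that the coefficients of an affine combination of feature points are unchanged by any affine map. A minimal numerical check of this fact (our own construction, not the reported toolkit) is:

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.random((3, 2))            # three non-collinear feature points
q = rng.random(2)                     # a fourth image point

# Solve q = a*p1 + b*p2 + c*p3 with a + b + c = 1 (affine combination)
M = np.vstack([basis.T, np.ones(3)])  # 3x3 linear system
coeffs = np.linalg.solve(M, np.append(q, 1.0))

# Apply an arbitrary affine transform x -> A x + t to all points
A = np.array([[1.3, 0.4], [-0.2, 0.9]])
t = np.array([5.0, -2.0])
basis_T = basis @ A.T + t
q_T = q @ A.T + t

coeffs_T = np.linalg.solve(np.vstack([basis_T.T, np.ones(3)]), np.append(q_T, 1.0))
assert np.allclose(coeffs, coeffs_T)  # the parameters live in an affine-invariant space
```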
Techniques for information extraction from compressed GPS traces : final report.
DOT National Transportation Integrated Search
2015-12-31
Developing techniques for extracting information requires a good understanding of the methods used to compress the traces. Many techniques for compressing trace data consisting of position (i.e., latitude/longitude) and time values have been developed....
Structural efficiencies of various aluminum, titanium, and steel alloys at elevated temperatures
NASA Technical Reports Server (NTRS)
Heimerl, George J; Hughes, Philip J
1953-01-01
Efficient temperature ranges are indicated for two high-strength aluminum alloys, two titanium alloys, and three steels for some short-time compression-loading applications at elevated temperatures. Only the effects of constant temperatures and short exposure to temperature are considered, and creep is assumed not to be a factor. The structural efficiency analysis is based upon preliminary results of short-time elevated-temperature compressive stress-strain tests of the materials. The analysis covers strength under uniaxial compression, elastic stiffness, column buckling, and the buckling of long plates in compression or in shear.
Exploring compression techniques for ROOT IO
NASA Astrophysics Data System (ADS)
Zhang, Z.; Bockelman, B.
2017-10-01
ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
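The level-versus-CPU tradeoff described above is easy to reproduce outside ROOT. A sketch using Python's zlib (standing in for ROOT's DEFLATE path) on synthetic low-entropy data; it assumes nothing about ROOT's internals:

```python
import time
import zlib
import numpy as np

# ~10 MB of low-entropy bytes as a stand-in for event data
data = np.random.default_rng(1).integers(0, 50, 10_000_000, dtype=np.uint8).tobytes()

for level in (1, 6, 9):               # fast ... default ... maximum compression
    blob = zlib.compress(data, level)
    t0 = time.perf_counter()
    zlib.decompress(blob)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(blob)/len(data):.3f} of original size, "
          f"decompress {dt*1e3:.1f} ms")
```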
Variable valve timing in a homogenous charge compression ignition engine
Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.
2004-08-03
The present invention relates generally to the field of homogenous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.
Comparative data compression techniques and multi-compression results
NASA Astrophysics Data System (ADS)
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.
2013-12-01
Data compression is essential in business data processing because of the cost savings it offers and the large volumes of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the more time is saved. In data communication, we always want to transmit data efficiently and free of noise. This paper presents several techniques for lossless compression of text-type data and compares the results of multiple and single compression, which helps to identify the better compression output and to guide the development of compression algorithms.
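A quick experiment in the spirit of the paper's single-versus-multiple comparison (a toy of our own, using zlib and bz2) shows why recompressing already-compressed output rarely pays:

```python
import bz2
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 20_000

once = zlib.compress(text, 9)
twice = zlib.compress(once, 9)   # second pass over high-entropy compressed bytes
mixed = bz2.compress(once, 9)    # different second-stage algorithm

print(len(text), len(once), len(twice), len(mixed))
# Typical outcome: the first pass shrinks the text dramatically; a second
# pass barely helps (or even grows the data), because compressed output is
# close to incompressible random bytes.
```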
Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.
Heikal, A A; Wachowicz, K; Fallone, B G
2016-10-01
To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI and four-times-undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm in which the sampling ratios are constrained to adhere to the PDFs. Strong visual correlation as well as high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.
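To illustrate the kind of PDF-driven pattern whose effect the study quantifies, one can draw an undersampling mask from a radially decaying density and inspect the realized per-frequency sampling ratio, the factor that the reported correlation multiplies against the Nyquist-sampled MTF. The construction below is illustrative, not the authors' generator:

```python
import numpy as np

N, accel = 32, 4                       # grid size, 4x undersampling
ky, kx = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2, indexing="ij")
r = np.sqrt(kx ** 2 + ky ** 2) / (N / np.sqrt(2))   # normalized k-space radius

pdf = (1.0 - r) ** 2                   # polynomial variable-density PDF
pdf *= (N * N // accel) / pdf.sum()    # scale so expected sample count = N^2/accel
pdf = np.clip(pdf, 0.0, 1.0)

rng = np.random.default_rng(7)
mask = rng.random((N, N)) < pdf        # Bernoulli draw per k-space location

# Realized sampling ratio versus radius: the frequency-dependent factor
# that modulates the Nyquist-sampled MTF
for lo, hi in [(0.0, 0.25), (0.25, 0.5), (0.5, 1.0)]:
    sel = (r >= lo) & (r < hi)
    print(f"radius {lo:.2f}-{hi:.2f}: sampling ratio {mask[sel].mean():.2f}")
```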
Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon
2008-06-01
The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
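The attenuation mechanism is essentially averaging: per-slice artifacts that are at least partly independent cancel when thin sections are combined into an AIP slab. A toy numerical check, with synthetic noise standing in for JPEG 2000 artifacts and invented image content:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 256)
clean = np.tile(np.outer(np.sin(8 * np.pi * x), np.cos(6 * np.pi * x)) * 100 + 120,
                (8, 1, 1))                        # 8 thin sections of one 5 mm slab
noisy = clean + rng.normal(0, 10, clean.shape)    # stand-in for compression artifacts

thin_psnr = psnr(clean[0], noisy[0])
aip_psnr = psnr(clean.mean(axis=0), noisy.mean(axis=0))
print(f"thin section: {thin_psnr:.1f} dB, AIP: {aip_psnr:.1f} dB")
# Averaging 8 sections suppresses independent artifact noise by ~sqrt(8),
# i.e. roughly a 9 dB PSNR gain in this idealized setting.
```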
Compression of the Global Land 1-km AVHRR dataset
Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.
1996-01-01
Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas, which comprise 80 percent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 percent of the bytes they occupy in the test image and the land areas are compressed to 33.2 percent of their original size. The entire image is compressed hierarchically to 6.72 percent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first-order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP, respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.
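A compact way to see why a quadtree suits an image that is 80 percent mask is that large uniform blocks collapse to single leaves. A minimal recursive sketch of the idea (ours, not the production coder):

```python
import numpy as np

def quadtree(mask, y=0, x=0, size=None, out=None):
    """Encode a binary mask: '0'/'1' for uniform blocks, 'S' to split."""
    if size is None:
        size, out = mask.shape[0], []
    block = mask[y:y + size, x:x + size]
    if block.all() or not block.any():
        out.append('1' if block[0, 0] else '0')     # uniform leaf
    else:
        out.append('S')                             # split into four quadrants
        h = size // 2
        for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
            quadtree(mask, y + dy, x + dx, h, out)
    return out

# Toy "land mask": a disc of land in a sea of masked pixels (power-of-2 size)
n = 256
yy, xx = np.mgrid[:n, :n]
mask = (yy - n // 2) ** 2 + (xx - n // 2) ** 2 < (n // 4) ** 2
code = quadtree(mask)
print(len(code), "symbols for", n * n, "pixels")    # large uniform areas collapse
```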
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information but have the disadvantage of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different-size data files are graphically presented and discussed in the paper. Adjustments needed for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
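For reference, here is a bare-bones encoder for LZW, a member of the Lempel-Ziv family evaluated in the paper; this is a textbook sketch, not the authors' implementation:

```python
def lzw_encode(data: bytes) -> list[int]:
    """Classic LZW: grow a dictionary of seen strings, emit dictionary codes."""
    table = {bytes([i]): i for i in range(256)}   # start with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest known prefix
            table[wc] = len(table)      # learn the new string
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), "codes for 24 bytes")  # repetition shrinks the code stream
```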
Effects of roughness and compressibility of flooring on cow locomotion.
Rushen, J; de Passillé, A M
2006-08-01
We examined the effects of roughness and degree of compressibility of flooring on the locomotion of dairy cows. We observed 16 cows walking down specially constructed walkways with materials that differed in surface roughness and degree of compressibility. Use of a commercially available soft rubber flooring material decreased slipping, number of strides, and time to traverse the corridor. These effects were most apparent at difficult sections of the corridor, such as at the start, at a right-angle turn, and across a gutter. Covering the walkway with a thin layer of slurry increased frequency of slipping, number of strides, and time taken to traverse the walkway. Effects of adding slurry were not overcome by increasing surface roughness or compressibility. Placing more compressible materials under a slip-resistant material reduced the time and number of steps needed to traverse the corridor but did not reduce slips, and the effects on cow locomotion varied nonlinearly with the degree of compressibility of the floor. Use of commercially available rubber floors improved cow locomotion compared with concrete floors. However, standard engineering measures of the floor properties may not predict effects of the floor on cow behavior well. Increasing compressibility of the flooring on which cows walk, independently of the roughness of the surface, can improve cow locomotion.
Fuzzy Relational Compression Applied on Feature Vectors for Infant Cry Recognition
NASA Astrophysics Data System (ADS)
Reyes-Galaviz, Orion Fausto; Reyes-García, Carlos Alberto
Data compression is always advisable when it comes to handling and processing information quickly and efficiently. There are two main problems to solve when handling data: storing information in less space and processing it in the shortest possible time. When it comes to infant cry analysis (ICA), there is always a need to construct large sound repositories from crying babies. These samples have to be analyzed and used to train and test pattern recognition algorithms, making this a time-consuming task when working with uncompressed feature vectors. In this work, we show a simple but efficient method that uses the Fuzzy Relational Product (FRP) to compress the information inside a feature vector, building a compressed matrix that helps us recognize two kinds of pathologies in infants: asphyxia and deafness. We describe the sound analysis, which consists of the extraction of Mel Frequency Cepstral Coefficients to generate vectors that are later compressed using FRP. There is also a description of the infant cry database used in this work, along with the training and testing of a Time Delay Neural Network on the compressed features, which shows a performance of 96.44% with our proposed feature vector compression.
A hybrid data compression approach for online backup service
NASA Astrophysics Data System (ADS)
Wang, Hua; Zhou, Ke; Qin, MingKang
2009-08-01
With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Given the large number of backup users, reducing the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: data-stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data, so neither alone meets the efficiency needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data-stream compression technology to remove intra-file redundancy. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms in different situations is also analyzed. The performance analysis shows that a great improvement is made through the hybrid compression policy.
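The two-level idea can be prototyped in a few lines: content hashes provide global, cross-user de-duplication, and a stream compressor handles whatever each unique chunk still contains. The sketch below makes obvious simplifications (fixed-size chunks, an in-memory store) and uses invented example data:

```python
import hashlib
import zlib

store = {}                              # global chunk store shared by all users

def backup(blob: bytes, chunk=4096):
    """Return a recipe of chunk hashes; store each unique chunk compressed."""
    recipe = []
    for i in range(0, len(blob), chunk):
        piece = blob[i:i + chunk]
        key = hashlib.sha256(piece).hexdigest()
        if key not in store:            # global level: skip chunks already known
            store[key] = zlib.compress(piece, 6)   # block level: stream compression
        recipe.append(key)
    return recipe

def restore(recipe):
    return b"".join(zlib.decompress(store[k]) for k in recipe)

user_a = b"report header " * 1000 + b"edits by A"
user_b = b"report header " * 1000 + b"edits by B"   # mostly duplicates user A
r_a, r_b = backup(user_a), backup(user_b)
assert restore(r_a) == user_a and restore(r_b) == user_b
print(len(store), "unique chunks stored for", len(r_a) + len(r_b), "referenced")
```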
SCALCE: boosting sequence compression algorithms using locally consistent encoding
Hach, Faraz; Numanagić, Ibrahim; Sahinalp, S Cenk
2012-01-01
Motivation: The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a ‘boosting’ scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, exploiting the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to a 3.34-fold improvement in the compression rate and a 1.26-fold improvement in running time. Availability: Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. Contact: fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047557
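The boosting effect is easy to demonstrate at toy scale: grouping reads that share similar content makes repeats local, which a small-window compressor like gzip can then exploit. The stand-in below buckets reads by a minimizer rather than by Locally Consistent Parsing, so it is a highly simplified caricature of SCALCE, not its algorithm:

```python
import gzip
import random

random.seed(42)
genome = "".join(random.choice("ACGT") for _ in range(5000))
reads = []
for _ in range(2000):                   # overlapping reads from random positions
    p = random.randrange(len(genome) - 100)
    reads.append(genome[p:p + 100])

def minimizer(read, k=12):
    """Lexicographically smallest k-mer; similar reads share minimizers."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

plain = gzip.compress("\n".join(reads).encode(), 6)
boosted = gzip.compress("\n".join(sorted(reads, key=minimizer)).encode(), 6)
print(len(plain), "->", len(boosted))   # reordered reads compress noticeably better
```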
Heterogeneous Compression of Large Collections of Evolutionary Trees.
Matthews, Suzanne J
2015-01-01
Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data; for example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
2012-01-01
Background Transcript profiling of differentiating secondary xylem has allowed us to draw a general picture of the genes involved in wood formation. However, our knowledge is still limited about the regulatory mechanisms that coordinate and modulate the different pathways providing substrates during xylogenesis. The development of compression wood in conifers constitutes an exceptional model for these studies. Although differential expression of a few genes in differentiating compression wood compared to normal or opposite wood has been reported, the broad range of features that distinguish this reaction wood suggests that the expression of a larger set of genes would be modified. Results By combining the construction of different cDNA libraries with microarray analyses, we identified a total of 496 genes in maritime pine (Pinus pinaster, Ait.) that change in expression during differentiation of compression wood (331 up-regulated and 165 down-regulated compared to opposite wood). Samples from different provenances collected in different years and geographic locations were integrated into the analyses to mitigate the effects of multiple sources of variability. This strategy allowed us to define a group of genes that are consistently associated with compression wood formation. Correlating with the deposition of the thicker secondary cell wall that characterizes compression wood development, the expression of a number of genes involved in the synthesis of cellulose, hemicellulose, lignin, and lignans was up-regulated. Further analysis of a set of these genes involved in S-adenosylmethionine metabolism, ammonium recycling, and lignin and lignan biosynthesis showed changes in expression levels in parallel to the levels of lignin accumulation in cells undergoing xylogenesis in vivo and in vitro. Conclusions The comparative transcriptomic analysis reported here has revealed a broad spectrum of coordinated transcriptional modulation of genes involved in the biosynthesis of different cell wall polymers associated with within-tree variations in pine wood structure and composition. In particular, we demonstrate the coordinated modulation at the transcriptional level of a gene set involved in S-adenosylmethionine synthesis and ammonium assimilation with the increased demand for coniferyl alcohol for lignin and lignan synthesis, enabling a better understanding of the metabolic requirements in cells undergoing lignification. PMID:22747794
Holmqvist, Kristian; Svensson, Mats Y; Davidsson, Johan; Gutsche, Andreas; Tomasch, Ernst; Darok, Mario; Ravnik, Dean
2016-02-01
The chest response of the human body has been studied for several load conditions but is not well known for steering wheel rim-to-chest impacts in heavy goods vehicle frontal collisions. The aim of this study was to determine the response of the human chest in a set of simulated steering wheel impacts. PMHS tests were carried out and analysed. The steering wheel load pattern was represented by a rigid pendulum with a straight bar-shaped front; a crash test dummy chest calibration pendulum was utilised for comparison. In this study, a set of rigid bar impacts was directed at various heights of the chest, spanning approximately 120 mm around the fourth intercostal space. The impact energy was set below a level estimated to cause rib fracture. The analysis evaluated how impactor shape and impact height affected the compression and viscous criterion chest injury responses. The results showed that the bar impacts consistently produced smaller scaled chest compressions than the hub; the middle bar responses were around 90% of the hub responses. A superior bar impact produced less chest compression, with an average response 86% of the middle bar response, whereas for inferior bar impacts the chest compression response was 116% of that in the middle. The damping properties of the chest caused the compression in high-speed bar impacts to decrease to 88% of that in low-speed impacts. From the analysis it could be concluded that the bar impact shape produces lower chest criterion responses than the hub, and that the bar responses depend on the impact location on the chest. Inertial and viscous effects of the upper body affect the responses. The results can be used to assess the responses of human substitutes such as anthropomorphic test devices and finite element human body models, which will benefit the development of heavy goods vehicle safety systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
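For readers unfamiliar with the two injury metrics, the compression criterion is chest deflection normalized by torso depth, and the viscous criterion is its product with deflection velocity. Below is a sketch of how both would be computed from a measured deflection trace; the trace and the torso depth are invented for illustration:

```python
import numpy as np

fs = 10_000                              # sampling rate of the deflection channel, Hz
t = np.arange(0, 0.06, 1 / fs)
deflection = 0.045 * np.sin(np.pi * t / 0.06) ** 2   # synthetic trace, metres

chest_depth = 0.23                       # assumed initial torso depth, metres
C = deflection / chest_depth             # compression criterion C(t)
V = np.gradient(deflection, 1 / fs)      # deflection velocity, m/s
VC = V * C                               # viscous criterion VC(t), m/s

print(f"Cmax = {C.max():.2f}, VCmax = {VC.max():.2f} m/s")
```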
NASA Astrophysics Data System (ADS)
Evje, Steinar; Wang, Wenjun; Wen, Huanyao
2016-09-01
In this paper, we consider a compressible two-fluid model with constant viscosity coefficients and unequal pressure functions P⁺ ≠ P⁻. As mentioned in the seminal work by Bresch, Desjardins, et al. (Arch Rational Mech Anal 196:599-629, 2010) on the compressible two-fluid model, where a common pressure P⁺ = P⁻ is used and capillarity effects are accounted for in terms of a third-order derivative of density, the case of constant viscosity coefficients cannot be handled in their setting. Besides, their analysis relies on a special choice of density-dependent viscosity [refer also to another reference (Commun Math Phys 309:737-755, 2012) by Bresch, Huang and Li for a study of the same model in one dimension but without capillarity effects]. In this work, we obtain the global solution and its optimal decay rate (in time) with constant viscosity coefficients and some smallness assumptions. In particular, capillary pressure is taken into account in the sense that ΔP = P⁺ − P⁻ = f ≠ 0, where the difference function f is assumed to be strictly decreasing near the equilibrium relative to the fluid corresponding to P⁻. This assumption plays a key role in the analysis and appears to have an essential stabilization effect on the model in question.
Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistently lower errors and faster computational times.
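A stripped-down version of the cross-validation idea: sweep the LASSO regularization weight, solve with plain ISTA, and keep the value that minimizes the held-out residual. This is an illustrative single split with a generic solver, not the paper's solver suite or its stop-sampling heuristic:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, p, k = 80, 200, 8                       # underdetermined: 80 samples, 200 basis terms
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

train, val = np.arange(60), np.arange(60, 80)   # single split; use k-fold in practice
best = min((np.linalg.norm(A[val] @ ista(A[train], y[train], lam) - y[val]), lam)
           for lam in np.logspace(-4, -1, 10))
print("chosen regularization weight:", best[1])
```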
3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response
Maiti, A.; Small, W.; Lewicki, J.; ...
2016-04-27
3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams, we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography and performed finite-element analysis of the mechanical response within these microstructures. This analysis indicates a wider stress variation in the stochastic foam, with points of more extreme local stress, as compared to the 3D printed material, which might explain the latter's improved long-term stability and mechanical performance.
Shockwave compression of Ar gas at several initial densities
NASA Astrophysics Data System (ADS)
Dattelbaum, Dana M.; Goodwin, Peter M.; Garcia, Daniel B.; Gustavsen, Richard L.; Lang, John M.; Aslam, Tariq D.; Sheffield, Stephen A.; Gibson, Lloyd L.; Morris, John S.
2017-01-01
Experimental data on the principal Hugoniot locus of variable-density gas-phase noble and molecular gases are rare. The majority of shock Hugoniot data come either from shock tube experiments on low-pressure gases or from plate impact experiments on cryogenic, liquefied gases. In both cases, physics regarding shock compressibility, thresholds for the onset of shock-driven ionization, and even dissociation chemistry are difficult to infer for gases at intermediate densities. We have developed an experimental target design for gas gun-driven plate impact experiments on noble gases at initial pressures between 200 and 1000 psi. Using optical velocimetry, we are able to directly determine both the shock and particle velocities of the gas on the principal Hugoniot locus, as well as clearly differentiate ionization thresholds. The target design also results in multiply shocking the gas in a quasi-isentropic fashion, yielding off-Hugoniot compression data. We describe the results of a series of plate impact experiments on Ar with starting densities between 0.02-0.05 g/cm³ at room temperature. Furthermore, by coupling optical fibers to the targets, we have measured the time-resolved optical emission from the shocked gas using a spectrometer coupled to an optical streak camera to spectrally resolve the emission, and a 5-color optical pyrometer for temperature determination.
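Given a measured shock velocity and particle velocity, the corresponding Hugoniot state follows directly from the Rankine-Hugoniot jump conditions. A sketch with invented, Ar-like numbers (not measured values from this work):

```python
def hugoniot_state(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot jump conditions for a single steady shock.

    rho0  initial density [g/cm^3]
    us    shock velocity [km/s]
    up    particle velocity [km/s]
    Returns shocked pressure [GPa] and density [g/cm^3] (units work out
    because g/cm^3 * (km/s)^2 = GPa)."""
    p = p0 + rho0 * us * up              # momentum conservation: P = rho0*Us*up
    rho = rho0 * us / (us - up)          # mass conservation
    return p, rho

# Illustrative values for a compressed Ar gas target (not data from the paper)
p, rho = hugoniot_state(rho0=0.035, us=4.0, up=3.2)
print(f"P = {p:.2f} GPa, rho = {rho:.3f} g/cm^3, compression = {rho/0.035:.1f}x")
```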