Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
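The "lossy plus residual coding" guarantee can be illustrated in a few lines. This is a minimal numpy sketch, not the paper's algorithm: a rank-k SVD stands in for the matrix/tensor coder in the lossy layer, the entropy coding of the quantized residual is omitted, and the rank and error bound are arbitrary illustrative values. Quantizing the residual with step 2ε bounds the reconstruction error by ε.

```python
import numpy as np

def near_lossless_compress(x, rank, eps):
    # Lossy layer: low-rank (SVD) approximation of the channel x time matrix.
    # (A real coder would store only the truncated factors, not the matrix.)
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    lossy = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    # Residual layer: uniform quantization with step 2*eps => |error| <= eps.
    q = np.round((x - lossy) / (2 * eps)).astype(np.int64)
    return lossy, q

def near_lossless_decompress(lossy, q, eps):
    return lossy + q * (2 * eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 256)).cumsum(axis=1)   # synthetic multichannel signal
lossy, q = near_lossless_compress(x, rank=4, eps=0.05)
x_hat = near_lossless_decompress(lossy, q, eps=0.05)
```

The key property is the specifiable maximum absolute error: whatever the lossy layer does, the quantized residual caps the per-sample error at eps.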
A novel ECG data compression method based on adaptive Fourier decomposition
NASA Astrophysics Data System (ADS)
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach that converges quickly and can therefore reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not require any preprocessing before compression. Huffman coding is employed for further compression. Validated on 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves an average compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% with N = 8 decomposition iterations, along with a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
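The Huffman stage mentioned above is standard and easy to sketch. This is an illustrative Python implementation of Huffman code construction (not the authors' code): frequent symbols receive shorter bit strings, which is what provides the "further compression" after the lossy stage.

```python
import heapq
from collections import Counter

def huffman_code(data):
    # Build a prefix-free code: frequent symbols get shorter bit strings.
    freq = Counter(data)
    if len(freq) == 1:                     # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (count, tiebreak, partial codebook).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, i, merged))
        i += 1
    return heap[0][2]

codebook = huffman_code("abracadabra")
bits = "".join(codebook[s] for s in "abracadabra")
```

For "abracadabra", 'a' (5 of 11 symbols) gets a 1-bit codeword and the whole string encodes in 23 bits versus 88 bits at 8 bits/symbol.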
Influences of operational practices on municipal solid waste landfill storage capacity.
Li, Yu-Chao; Liu, Hai-Long; Cleall, Peter John; Ke, Han; Bian, Xue-Cheng
2013-03-01
The quantitative effects of three operational factors, that is initial compaction, decomposition condition and leachate level, on municipal solid waste (MSW) landfill settlement and storage capacity are investigated in this article via consideration of a hypothetical case. The implemented model for calculating landfill compression displacement is able to consider decreases in compressibility induced by biological decomposition and load dependence of decomposition compression for the MSW. According to the investigation, a significant increase in storage capacity can be achieved by intensive initial compaction, adjustment of decomposition condition and lowering of leachate levels. The quantitative investigation presented aims to encourage landfill operators to improve management to enhance storage capacity. Furthermore, improving initial compaction and creating a preferential decomposition condition can also significantly reduce operational and post-closure settlements, respectively, which helps protect leachate and gas management infrastructure and monitoring equipment in modern landfills.
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications that hybridizes an adaptive Fourier decomposition (AFD) algorithm with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD performs efficient lossy compression with high fidelity; the second-stage SS provides lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing compression performance into a previously unexploited region. As such, this paper provides an attractive ECG compression candidate for pervasive e-health applications.
Density-dependent liquid nitromethane decomposition: molecular dynamics simulations based on ReaxFF.
Rom, Naomi; Zybin, Sergey V; van Duin, Adri C T; Goddard, William A; Zeiri, Yehuda; Katz, Gil; Kosloff, Ronnie
2011-09-15
The decomposition mechanism of hot liquid nitromethane at various compressions was studied using reactive force field (ReaxFF) molecular dynamics simulations. A competition between two different initial thermal decomposition schemes is observed, depending on compression. At low densities, unimolecular C-N bond cleavage is the dominant route, producing CH(3) and NO(2) fragments. As density and pressure rise, approaching the Chapman-Jouguet detonation conditions (∼30% compression, >2500 K), the dominant mechanism switches to the formation of the CH(3)NO fragment via H-transfer and/or N-O bond rupture. The change in the decomposition mechanism of hot liquid NM leads to different kinetic and energetic behavior, as well as a different product distribution. The calculated density dependence of the enthalpy change correlates with the change in the initial decomposition reaction mechanism and can be used as a convenient and useful global parameter for detecting the reaction dynamics. Atomic averaged local diffusion coefficients are shown to be sensitive to the reaction dynamics and can be used to distinguish time periods in which chemical reactions occur from diffusion-dominated, nonreactive time periods. © 2011 American Chemical Society
Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.
Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian
2017-11-08
It is generally known that the states of network nodes are stable and strongly correlated in a linear network system. We find that, without a control input, compressed sensing cannot reconstruct complex networks in which the node states are generated by the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition, and the measurement matrix is constructed with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments on four model networks and six real networks show that the proposed method reconstructs them more accurately and efficiently than compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks but also dense ones.
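The compressed-sensing step above — recovering a sparse vector from few linear measurements — can be sketched with a greedy solver. This is a minimal illustration using orthogonal matching pursuit (a stand-in; the paper's solver and its QR preprocessing are not shown), with a Gaussian measurement matrix as the abstract describes.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: repeatedly pick the column of A most
    # correlated with the residual, then re-fit on the chosen support.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)   # Gaussian measurement matrix
x_true = np.zeros(100)
x_true[[3, 27, 71]] = [1.5, -2.0, 0.8]          # 3-sparse unknown
x_rec = omp(A, A @ x_true, k=3)
```

With 40 measurements of a 3-sparse, 100-dimensional vector, recovery is exact with overwhelming probability.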
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
Intelligent transportation systems data compression using wavelet decomposition technique.
DOT National Transportation Integrated Search
2009-12-01
Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing....
Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition
NASA Astrophysics Data System (ADS)
Li, Jin; Liu, Zilong
2017-12-01
Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, the NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images, based on a pair-wise multilevel grouping approach that overcomes the high computational cost of the NTD. The proposed method achieves low complexity at the price of a slight decrease in coding performance compared to the conventional NTD. Experiments confirm that the method requires less processing time while still achieving better coding performance than compression without the NTD. The proposed approach has potential application in the lossy compression of hyper-spectral and multi-spectral images.
Image compression using singular value decomposition
NASA Astrophysics Data System (ADS)
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so data compression techniques are applied to reduce the storage space consumed by an image. One approach is to apply singular value decomposition (SVD) to the image matrix. SVD refactors the digital image into three matrices; a truncated set of singular values is then used to re-form the image, so that the image is represented with a smaller set of values, reducing the required storage space. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
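The rank-k truncation just described fits in a few lines of numpy. This is an illustrative sketch, not the authors' implementation; the synthetic "image" and the choice of k are arbitrary. Storing the truncated factors costs k(m + n + 1) values instead of mn.

```python
import numpy as np

def svd_compress(img, k):
    # Keep only the k largest singular values and their vectors.
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k reconstruction
    m, n = img.shape
    cr = (m * n) / (k * (m + n + 1))              # stored-value compression ratio
    mse = float(np.mean((img - approx) ** 2))
    return approx, cr, mse

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))  # exactly rank-8 "image"
approx, cr, mse = svd_compress(img, k=8)
```

Because the test matrix has rank 8, keeping k = 8 components reproduces it to machine precision; for natural images one trades k against MSE.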
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kowal, Grzegorz; Lazarian, A., E-mail: kowal@astro.wisc.ed, E-mail: lazarian@astro.wisc.ed
We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.
Shan, Tzu-Ray; van Duin, Adri C T; Thompson, Aidan P
2014-02-27
We have developed a new ReaxFF reactive force field parametrization for ammonium nitrate. Starting with an existing nitramine/TATB ReaxFF parametrization, we optimized it to reproduce electronic structure calculations for dissociation barriers, heats of formation, and crystal structure properties of ammonium nitrate phases. We have used it to predict the isothermal pressure-volume curve and the unreacted principal Hugoniot states. The predicted isothermal pressure-volume curve for phase IV solid ammonium nitrate agreed with electronic structure calculations and experimental data within 10% error for the considered range of compression. The predicted unreacted principal Hugoniot states were approximately 17% stiffer than experimental measurements. We then simulated thermal decomposition during heating to 2500 K. Thermal decomposition pathways agreed with experimental findings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dreger, Zbigniew A.; Tao, Yuchuan; Gupta, Yogendra M.
2016-05-10
The high pressure-high temperature (HP-HT) phase diagram and decomposition of FOX-7, central to understanding its stability and reactivity, were determined using optical spectroscopy and imaging measurements in hydrostatically compressed and heated single crystals. Boundaries between various FOX-7 phases (α, α', β, γ, and ε) and melting/decomposition curves were established up to 10 GPa and 750 K. Main findings are: (i) a triple point is observed between the α, β, and γ phases at ~0.6 GPa and ~535 K, (ii) the previously suggested δ phase is not a new phase but partly decomposed γ phase, (iii) the α-α' transition takes place along an isobar, whereas the α'-ε transition pressure decreases with increasing temperature, and (iv) melting/decomposition temperatures increase rapidly with pressure, with an increase in slope at the onset of the α'-ε transition. Our results differ from the recently reported HP-HT phase diagram for nonhydrostatically compressed polycrystalline FOX-7. In addition, the observed interplay between melting and decomposition suggests the suppression of melting with pressure. Our FTIR measurements at different pressures up to 3.5 GPa showed similar decomposition products, suggesting similar decomposition pathways irrespective of pressure. Lastly, the present results provide new insights into the structural and chemical stability of an important insensitive high explosive (IHE) crystal under well-defined HP-HT conditions.
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-05-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
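The scalar quantization at the heart of the WSQ pipeline is simple to illustrate. This is a generic midtread uniform quantizer sketch (the actual standard's per-subband bin widths and dead zone are not reproduced here); the point is that a quantizer with step size Δ bounds the reconstruction error per coefficient by Δ/2.

```python
import numpy as np

def quantize(band, step):
    # Midtread uniform scalar quantizer: map coefficients to integer bins.
    return np.round(band / step).astype(int)

def dequantize(idx, step):
    # Reconstruct each coefficient at its bin center.
    return idx * step

band = np.array([-1.3, -0.2, 0.0, 0.4, 2.7])   # toy subband coefficients
idx = quantize(band, step=0.5)                  # integers passed to the entropy coder
rec = dequantize(idx, step=0.5)
```

In a WSQ-style coder the integer indices, not the floats, are what the Huffman stage compresses.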
Reactive decomposition of low density PMDI foam subject to shock compression
NASA Astrophysics Data System (ADS)
Alexander, Scott; Reinhart, William; Brundage, Aaron; Peterson, David
Low density polymethylene diisocyanate (PMDI) foam with a density of 5.4 pounds per cubic foot (0.087 g/cc) was tested to determine the equation of state properties under shock compression over the pressure range of 0.58 - 3.4 GPa. This pressure range encompasses a region approximately 1.0-1.2 GPa within which the foam undergoes reactive decomposition resulting in significant volume expansion of approximately three times the volume prior to reaction. This volume expansion has a significant effect on the high pressure equation of state. Previous work on similar foam was conducted only up to the region where volume expansion occurs and extrapolation of that data to higher pressure results in a significant error. It is now clear that new models are required to account for the reactive decomposition of this class of foam. The results of plate impact tests will be presented and discussed including details of the unique challenges associated with shock compression of low density foams. Sandia National Labs is a multi-program lab managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Brodie, Eoin
2018-04-26
Eoin Brodie of Berkeley Lab on "Succession of phylogeny and function during plant litter decomposition" at the 8th Annual Genomics of Energy & Environment Meeting on March 27, 2013 in Walnut Creek, CA.
Adiabatic Compression Sensitivity of Liquid Fuels and Monopropellants
NASA Technical Reports Server (NTRS)
Ismail, Ismail M. K.; Hawkins, Tom W.
2000-01-01
Liquid rocket propellants can be sensitive to rapid compression. Such liquids may undergo decomposition and their handling may be accompanied with risk. Decomposition produces small gas bubbles in the liquid, which upon rapid compression may cause catastrophic explosions. The rapid compression can result from mechanical shocks applied on the tank containing the liquid or from rapid closure of the valves installed on the lines. It is desirable to determine the conditions that may promote explosive reactions. At Air Force Research Laboratory (AFRL), we constructed an apparatus and established a safe procedure for estimating the sensitivity of propellant materials towards mechanical shocks (Adiabatic Compression Tester). A sample is placed on a stainless steel U-tube, held isothermally at a temperature between 20 and 150 C then exposed to an abrupt mechanical shock of nitrogen gas at a pressure between 6.9 and 20.7 MPa (1000 to 3000 psi). The apparatus is computer interfaced and is driven with LABTECH NOTEBOOK-pro (registered) Software. In this presentation, the design of the apparatus is shown, the operating procedure is outlined, and the safety issues are addressed. The results obtained on different energetic materials are presented.
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that the WT outperforms JPEG in the high-compression-ratio region and that the reconstructed fingerprint images yield proper classification.
The pointwise estimates of diffusion wave of the compressible micropolar fluids
NASA Astrophysics Data System (ADS)
Wu, Zhigang; Wang, Weike
2018-09-01
The pointwise estimates for the compressible micropolar fluids in dimension three are given, which exhibit the generalized Huygens' principle for the fluid density and fluid momentum as in the compressible Navier-Stokes equations, while the micro-rotational momentum behaves like the fluid momentum of the Euler equations with damping. To circumvent the complexity of the 7 × 7 Green's matrix, we use the decomposition of the momentums into fluid and electromagnetic parts to study three smaller Green's matrices. A consequence of this decomposition is that we must deal with a new problem: the nonlinear terms contain nonlocal operators. We solve it by exploiting the natural match between these new Green's functions and the nonlinear terms. Moreover, to derive pointwise estimates for the different unknown variables such that the estimate of each unknown variable agrees with its Green's function, we develop some new estimates on the nonlinear interplay between different waves.
1987-10-01
[Fragment; OCR-damaged.] The recoverable citations are M. A. Schroeder, "Critical Analysis of Nitramine Decomposition Data," Proceedings of the 16th JANNAF Combustion Meeting, Sept. 1979, Vol. II, pp. 13-34, and Proceedings of the 19th JANNAF Combustion Meeting, Oct. 1982. The surviving text states that decomposition occurs at the surface of the propellant, consistent with the decomposition mechanism considered by Boggs [48] and Schroeder [43].
Compressibility Effects on the Passive Scalar Flux Within Homogeneous Turbulence
NASA Technical Reports Server (NTRS)
Blaisdell, G. A.; Mansour, N. N.; Reynolds, W. C.
1994-01-01
Compressibility effects on turbulent transport of a passive scalar are studied within homogeneous turbulence using a kinematic decomposition of the velocity field into solenoidal and dilatational parts. It is found that the dilatational velocity does not produce a passive scalar flux, and that all of the passive scalar flux is due to the solenoidal velocity.
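The solenoidal/dilatational split used in this study (and the Helmholtz decomposition mentioned in the MHD turbulence abstract above) can be sketched for a periodic field by Fourier projection. This is a minimal 2-D illustration under assumed periodic boundary conditions, not the 3-D homogeneous-turbulence machinery of the paper: the dilatational part is the projection of the velocity onto the wavevector direction.

```python
import numpy as np

def helmholtz_split(u, v):
    # Split a periodic 2-D velocity field (u, v) into a solenoidal
    # (divergence-free) part and a dilatational (curl-free) part via FFT.
    n = u.shape[0]
    kx = np.fft.fftfreq(n)[None, :] * n
    ky = np.fft.fftfreq(n)[:, None] * n
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                        # avoid dividing the mean mode by zero
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * uh + ky * vh               # k . u_hat (the i factor cancels below)
    ud = np.real(np.fft.ifft2(kx * div / k2))   # dilatational component
    vd = np.real(np.fft.ifft2(ky * div / k2))
    return u - ud, v - vd, ud, vd

n = 32
x = np.arange(n) * 2 * np.pi / n
X, Y = np.meshgrid(x, x)
u = np.cos(X) + np.sin(Y)                 # mixed rotational/compressive field
v = np.sin(X) * np.cos(Y)
us, vs, ud, vd = helmholtz_split(u, v)

# Spectral divergence of the solenoidal part should vanish.
kx = np.fft.fftfreq(n)[None, :] * n
ky = np.fft.fftfreq(n)[:, None] * n
div_s = kx * np.fft.fft2(us) + ky * np.fft.fft2(vs)
```

A kinematic decomposition of this form is what lets one ask, as the abstract does, which part of the velocity carries the passive scalar flux.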
Subband/transform functions for image processing
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
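The block-transform step described above can be sketched with the Walsh-Hadamard transform the text names. This is an illustrative Python version (the original functions were MATLAB; the subband-reordering permutation is omitted): the first coefficient is the unnormalized mean, i.e. the low-frequency "subband" value, while the remaining coefficients carry the higher-frequency content.

```python
import numpy as np

def walsh_hadamard(x):
    # In-place fast Walsh-Hadamard transform; length must be a power of two.
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

x = np.array([4.0, 2.0, 2.0, 4.0])
X = walsh_hadamard(x)        # X[0] = sum of samples (low-frequency term)
```

Applying the transform twice and dividing by the length recovers the input, which is why the same function serves for the inverse.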
Non-US data compression and coding research. FASAC Technical Assessment Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, R.M.; Cohn, M.; Craver, L.W.
1993-11-01
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min
2016-04-13
In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of original grid-wise time series to speed up the computation of MEEMD. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data by one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
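The truncated PCA/EOF compression idea described above can be sketched in a few lines. The synthetic field, mode count, and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pca_compress(field, n_modes):
    """Lossy compression of a (time x grid) field by truncated PCA/EOF analysis.

    Storage drops from T*G values to n_modes*(T + G) plus the mean field.
    """
    mean = field.mean(axis=0)
    anom = field - mean
    U, S, Vt = np.linalg.svd(anom, full_matrices=False)
    pcs = U[:, :n_modes] * S[:n_modes]   # principal components (time series)
    eofs = Vt[:n_modes]                  # spatial patterns (EOFs)
    return mean, pcs, eofs

def pca_reconstruct(mean, pcs, eofs):
    return mean + pcs @ eofs

# Synthetic field dominated by two coherent spatio-temporal modes plus weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
x = np.linspace(0, 1, 400)
field = (np.outer(np.sin(t), np.cos(2 * np.pi * x))
         + 0.5 * np.outer(np.cos(t), np.sin(2 * np.pi * x))
         + 0.01 * rng.standard_normal((200, 400)))

mean, pcs, eofs = pca_compress(field, n_modes=2)
recon = pca_reconstruct(mean, pcs, eofs)
rel_err = np.linalg.norm(field - recon) / np.linalg.norm(field)
```

Here two retained modes reduce 200 x 400 = 80,000 values to roughly 1,600 while losing only the incoherent noise.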
Flour, Mieke; Clark, Michael; Partsch, Hugo; Mosti, Giovanni; Uhl, Jean-Francois; Chauveau, Michel; Cros, Francois; Gelade, Pierre; Bender, Dean; Andriessen, Anneke; Schuren, Jan; Cornu-Thenard, André; Arkans, Ed; Milic, Dragan; Benigni, Jean-Patrick; Damstra, Robert; Szolnoky, Gyozo; Schingale, Franz
2013-10-01
The International Compression Club (ICC) is a partnership between academics, clinicians and industry focused upon understanding the role of compression in the management of different clinical conditions. The ICC meets regularly and from these meetings has produced a series of eight consensus publications on topics ranging from evidence-based compression to compression trials for arm lymphoedema. All of the current consensus documents can be accessed on the ICC website (http://www.icc-compressionclub.com/index.php). In May 2011, the ICC met in Brussels during the European Wound Management Association (EWMA) annual conference. With almost 50 members in attendance, the day-long ICC meeting challenged a series of dogmas and myths that exist when considering compression therapies. In preparation for a discussion on beliefs surrounding compression, a forum was established on the ICC website where presenters were able to display a summary of their thoughts upon each dogma to be discussed during the meeting. Members of the ICC could then provide comments on each topic, thereby widening the discussion to the entire membership of the ICC rather than simply those who were attending the EWMA conference. This article presents an extended report of the issues that were discussed, with each dogma covered in a separate section. The ICC discussed 12 'dogmas', with areas 1 through 7 dedicated to materials and application techniques used to apply compression and the remaining topics (8 through 12) related to the indications for using compression. © 2012 The Authors. International Wound Journal © 2012 John Wiley & Sons Ltd and Medicalhelplines.com Inc.
A review of lossless audio compression standards and algorithms
NASA Astrophysics Data System (ADS)
Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.
2017-09-01
Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and higher storage demand. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing specifically on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared to verify this. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
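The wavelet/scalar quantization principle (transform, then uniformly quantize each subband) can be illustrated with a one-level 2-D Haar transform. This is a minimal sketch, assuming a simple averaging-normalized Haar filter pair; it is not the FBI-standard filter bank or quantizer design:

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar transform: LL, LH, HL, HH subbands."""
    a = (x[0::2] + x[1::2]) / 2.0        # row averages
    d = (x[0::2] - x[1::2]) / 2.0        # row details
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,   # LL
            (a[:, 0::2] - a[:, 1::2]) / 2.0,   # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,   # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)   # HH

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction without quantization)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def quantize(band, step):
    """Uniform scalar quantization: snap each coefficient to a multiple of step."""
    return np.round(band / step) * step

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
ll, lh, hl, hh = haar2d(img)
exact = ihaar2d(ll, lh, hl, hh)
step = 0.25
lossy = ihaar2d(quantize(ll, step), quantize(lh, step),
                quantize(hl, step), quantize(hh, step))
```

With this normalization each pixel is a signed sum of four subband coefficients, so the reconstruction error is bounded by 2*step.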
NASA Astrophysics Data System (ADS)
Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.
2010-03-01
Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through the segmentation of lesions. The algorithm distinguishes non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals: schemes with decomposition by filter banks or by wavelet packet transforms, seeking the best compression, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods, and that quantization adapted to the dynamic range significantly enhances quality.
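A quantizer whose step size adapts to each band's dynamic range, as proposed above, might look like the following sketch; the band contents and bit depth are illustrative assumptions:

```python
import numpy as np

def quantize_band(band, n_bits):
    """Uniform quantizer whose step size is matched to the band's dynamic range."""
    lo, hi = float(band.min()), float(band.max())
    step = (hi - lo) / (2 ** n_bits - 1) if hi > lo else 1.0
    idx = np.round((band - lo) / step).astype(np.int64)   # integers to encode
    return idx, lo, step

def dequantize_band(idx, lo, step):
    return lo + idx * step

rng = np.random.default_rng(1)
# A low-frequency band with wide dynamic range, a detail band with a narrow one.
bands = {"low": 50.0 * rng.standard_normal(1024),
         "detail": 0.5 * rng.standard_normal(1024)}
recon, steps = {}, {}
for name, band in bands.items():
    idx, lo, step = quantize_band(band, n_bits=8)
    steps[name] = step
    recon[name] = dequantize_band(idx, lo, step)
```

Because the step scales with each band's range, the quiet detail band is quantized much more finely than the wide-range band at the same bit budget.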
Explosive decomposition of hydrazine by rapid compression of a gas volume
NASA Technical Reports Server (NTRS)
Bunker, R. L.; Baker, D. L.; Lee, J. H. S.
1991-01-01
In the present investigation of the initiation mechanism and the explosion mode of hydrazine decomposition, a 20 cm-long column of liquid hydrazine was accelerated into a column of gaseous nitrogen, from which it was separated by a thin Teflon diaphragm, in a close-ended cylindrical chamber. Video data obtained reveal the formation of a froth generated by the acceleration of hydrazine into nitrogen at the liquid hydrazine-gaseous nitrogen interface. The explosive hydrazine decomposition had as its initiation mechanism the formation of a froth at a critical temperature; the explosion mode of hydrazine is a confined thermal runaway reaction.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the discrete cosine transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
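At its core, vector quantization of subband blocks reduces to nearest-codeword search. A minimal sketch follows; the tiny codebook of edge-like patterns is arbitrary, standing in for the paper's multiresolution directional codebook:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codeword (squared error)."""
    dist = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dist.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruction is just a codebook lookup."""
    return codebook[indices]

# Flattened 2x2 blocks acting as horizontal/vertical edge codewords (illustrative).
codebook = np.array([[1.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0],
                     [1.0, 0.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0, 1.0]])
noisy = codebook + 0.05 * np.random.default_rng(2).standard_normal(codebook.shape)
indices = vq_encode(noisy, codebook)
```

Only the indices need to be transmitted; the decoder holds the same codebook.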
A novel method to detect ignition angle of diesel
NASA Astrophysics Data System (ADS)
Li, Baofu; Peng, Yong; Huang, Hongzhong
2018-04-01
This paper is based on the combustion signal collected by a piezomagnetic combustion sensor, taking the determination of the start of diesel combustion as its starting point. It analyzes the operating principle and pressure response of the combustion sensor, the compression peak signal of the diesel engine during the compression stroke, and several common detection methods. The authors put forward a new idea: ignition angle timing can be determined more accurately by the compression peak decomposition method. The method is then compared with several common methods.
Delos Reyes, Arthur P; Partsch, Hugo; Mosti, Giovanni; Obi, Andrea; Lurie, Fedor
2014-10-01
The International Compression Club, a collaboration of medical experts and industry representatives, was founded in 2005 to develop consensus reports and recommendations regarding the use of compression therapy in the treatment of acute and chronic vascular disease. During the recent meeting of the International Compression Club, member presentations were focused on the clinical application of intermittent pneumatic compression in different disease scenarios as well as on the use of inelastic and short stretch compression therapy. In addition, several new compression devices and systems were introduced by industry representatives. This article summarizes the presentations and subsequent discussions and provides a description of the new compression therapies presented. Copyright © 2014 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Subband/Transform MATLAB Functions For Processing Images
NASA Technical Reports Server (NTRS)
Glover, D.
1995-01-01
SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions can be cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems; for example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of the ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be very easily implemented with VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs further compresses the already compressed image by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
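The parent-child zerotree test at the heart of ZT coding can be sketched as a recursive check over pyramid levels. The 2x-coordinate child rule below is the usual dyadic convention, assumed here rather than taken from the HSP paper:

```python
import numpy as np

def is_zerotree_root(levels, l, i, j):
    """True if the coefficient at (i, j) on pyramid level l is zero and all of
    its descendants on finer levels are zero as well (a zerotree).

    levels[0] is the coarsest level; each coefficient has four children at
    (2i+di, 2j+dj) on the next finer level.
    """
    if levels[l][i, j] != 0:
        return False
    if l + 1 == len(levels):
        return True
    return all(is_zerotree_root(levels, l + 1, 2 * i + di, 2 * j + dj)
               for di in (0, 1) for dj in (0, 1))

# Quantized residue pyramid: 2x2 coarse, 4x4, and 8x8 fine levels, all zero
# except one significant fine-level coefficient.
levels = [np.zeros((2, 2)), np.zeros((4, 4)), np.zeros((8, 8))]
levels[2][1, 1] = 3.0
```

A coder emits a single zerotree-root symbol for any coefficient where this test passes, suppressing its entire branch of zeros.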
Adiabatic Compression Sensitivity of AF-M315E
2015-07-01
The current work is to expand the knowledge base from previous experiments completed at AFRL for AF-M315E in stainless steel U-tubes at room ... addressed, to some degree, with the use of clamps and a large stainless steel plate to dissipate any major vibrations. A large preheated bath of 50:50 v/v ... autocatalytic chain decomposition in the propellant. This exothermic decomposition decreases the fume-off initiation temperature of the propellant and its
NASA Technical Reports Server (NTRS)
Gatski, Thomas B. (Editor); Sarkar, Sutanu (Editor); Speziale, Charles G. (Editor)
1992-01-01
Various papers on turbulence are presented. Individual topics addressed include: modeling the dissipation rate in rotating turbulent flows, mapping closures for turbulent mixing and reaction, understanding turbulence in vortex dynamics, models for the structure and dynamics of near-wall turbulence, complexity of turbulence near a wall, proper orthogonal decomposition, propagating structures in wall-bounded turbulent flows. Also discussed are: constitutive relations in compressible turbulence, compressible turbulence and shock waves, direct simulation of compressible turbulence in a shear flow, structural genesis in wall-bounded turbulent flows, vortex lattice structure of turbulent shear flows, etiology of shear layer vortices, trilinear coordinates in fluid mechanics.
Wavelet/scalar quantization compression standard for fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
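A generic dynamic-programming segmentation of this kind, with a per-segment modeling cost, can be sketched as follows. The squared-error-about-the-mean cost below is an illustrative stand-in for the TD model's spectral criterion:

```python
def optimal_segmentation(x, k):
    """Dynamic programming placing k segments over x so that the total
    within-segment squared error (each segment modeled by its mean) is minimal.

    Returns (total cost, list of segment start indices).
    """
    n = len(x)

    def cost(i, j):                      # modeling error of one segment x[i:j]
        seg = x[i:j]
        mean = sum(seg) / len(seg)
        return sum((v - mean) ** 2 for v in seg)

    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):            # best[j][m]: cost of x[:j] in m segments
        for m in range(1, k + 1):
            for i in range(m - 1, j):    # i is the start of the last segment
                c = best[i][m - 1] + cost(i, j)
                if c < best[j][m]:
                    best[j][m], back[j][m] = c, i
    starts, j, m = [], n, k
    while m > 0:                         # backtrack the optimal boundaries
        j = back[j][m]
        starts.append(j)
        m -= 1
    return best[n][k], starts[::-1]
```

Unlike greedy event localization, the DP guarantees that the chosen boundaries are globally optimal for the given cost.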
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
Ke, Dongxu; Dernell, William; Bandyopadhyay, Amit; Bose, Susmita
2015-01-01
Tricalcium phosphate (TCP) is a bioceramic that is widely used in orthopedic and dental applications. TCP structures show excellent biocompatibility as well as biodegradability. In this study, porous β-TCP scaffolds were prepared by thermal decomposition of naphthalene. Scaffolds with 57.64 ± 3.54% density and a maximum pore size around 100 μm were fabricated by removing 30% naphthalene at 1150°C. The compressive strength of these scaffolds was 32.85 ± 1.41 MPa. Furthermore, by mixing in 1 wt % SrO and 0.5 wt % SiO2, pore interconnectivity improved, but the compressive strength decreased to 22.40 ± 2.70 MPa. However, after addition of polycaprolactone (PCL) coating layers, the compressive strength of doped scaffolds increased to 29.57 ± 3.77 MPa. Porous scaffolds were implanted in rabbit femur defects to evaluate their biological properties. The addition of dopants triggered osteoinduction by enhancing osteoid formation, osteocalcin expression and bone regeneration, especially at the interface of the scaffold and host bone. This study showed processing flexibility to make interconnected porous scaffolds with different pore size and volume fraction porosity with high compressive mechanical strength and better bioactivity. Results show that SrO/SiO2-doped porous TCP scaffolds have excellent potential to be used in bone tissue engineering applications. PMID:25504889
X-Ray Thomson Scattering Without the Chihara Decomposition
NASA Astrophysics Data System (ADS)
Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration
X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be classified as either bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first-principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using a finite-temperature real-time time-dependent density functional theory applied for the first time in these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption have become essential in secure real-time video transmission. Applying both techniques simultaneously is a challenge when both size and quality are important in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of the wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm utilizes the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms have the following features: high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
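The logistic-map keystream component can be sketched as follows. The seed, map parameter, and byte-extraction rule are illustrative assumptions, and the A5 cipher stage of the proposed scheme is omitted:

```python
def logistic_keystream(x0, r, n):
    """Byte keystream from iterating the chaotic logistic map x -> r*x*(1-x)."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 1_000_000) % 256)   # illustrative byte extraction
    return bytes(out)

def xor_cipher(data, key):
    """XOR masking; applying it twice with the same keystream recovers the data."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"wavelet coefficients"
key = logistic_keystream(x0=0.3141, r=3.99, n=len(message))
ciphertext = xor_cipher(message, key)
plaintext = xor_cipher(ciphertext, key)
```

The chaotic map makes the keystream extremely sensitive to the seed, which here plays the role of the secret key; a production design would combine this with a standard cipher, as the paper does with A5.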
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
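The Tucker idea (per-mode factor matrices plus a small core tensor) can be sketched via the truncated higher-order SVD. This is a serial toy version under illustrative ranks, not the distributed-memory implementation described above:

```python
import numpy as np

def unfold(t, mode):
    """Matricize a tensor along one mode."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def multi_mode_product(t, mats):
    """Multiply tensor t by a matrix along every mode."""
    for mode, m in enumerate(mats):
        t = np.moveaxis(np.tensordot(m, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t

def hosvd(t, ranks):
    """Truncated higher-order SVD: per-mode factor matrices plus a small core."""
    factors = [np.linalg.svd(unfold(t, mode), full_matrices=False)[0][:, :r]
               for mode, r in enumerate(ranks)]
    core = multi_mode_product(t, [u.T for u in factors])
    return core, factors

# A tensor with exact multilinear rank (2, 2, 2), so truncated HOSVD is exact.
rng = np.random.default_rng(0)
core_true = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((n, 2)) for n in (8, 9, 10))
tensor = multi_mode_product(core_true, [A, B, C])

core, factors = hosvd(tensor, ranks=(2, 2, 2))
recon = multi_mode_product(core, factors)
```

The compressed representation stores the (2, 2, 2) core and three thin factors instead of all 8 x 9 x 10 entries.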
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eggert, J.; Ryan, S. J.; Ramesh, K. T.
2009-12-28
The following article contains the summary of the discussion held at the Shock Compression of Condensed Matter Town Hall Meeting. This was held on Tuesday afternoon of the meeting and attracted 100+ attendees. This meeting, chaired by John Eggert, was planned to introduce challenges in selected topics relevant to shock wave science. The three subjects and speakers were: space research introduced by Shannon Ryan, nanotechnology presented by Kaliat T. Ramesh, and compression tools delivered by Dave Funk. After each presentation there were a number of questions.
A Tensor-Train accelerated solver for integral equations in complex geometries
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Rahimian, Abtin; Zorin, Denis
2017-04-01
We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low rank matrices and discuss its relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
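The tensor-train construction underlying QTT can be sketched with the standard TT-SVD sweep. This is plain TT on a small dense tensor; the quantization of mode sizes and all of the solver machinery are omitted:

```python
import numpy as np

def tt_svd(t, eps=1e-12):
    """Tensor-Train decomposition by sweeping truncated SVDs over the modes."""
    shape = t.shape
    cores, r = [], 1
    mat = t.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int((S > eps * S[0]).sum()))     # TT rank at this bond
        cores.append(U[:, :keep].reshape(r, shape[k], keep))
        r = keep
        mat = (S[:keep, None] * Vt[:keep]).reshape(r * shape[k + 1], -1)
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of 3-way cores back into a full tensor."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
tensor = rng.standard_normal((4, 5, 6))
cores = tt_svd(tensor)
recon = tt_reconstruct(cores)
```

With a looser eps, the bond ranks shrink and the cores become a lossy compressed representation; with a tight tolerance, as here, the reconstruction is exact.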
Code of Federal Regulations, 2014 CFR
2014-10-01
... using liquefied petroleum gas (LPG) and compressed natural gas (CNG) must meet the following... design, installation and testing of each CNG system must meet ABYC A-22, “Marine Compressed Natural Gas (CNG) Systems,” Chapter 6 of NFPA 302, or other standard specified by the Commandant. (c) Cooking...
Kumar, Ranjeet; Kumar, A; Singh, G K
2016-06-01
In the biomedical field, it is necessary to reduce data quantity due to storage limitations in real-time ambulatory and telemedicine systems. Research has long been underway toward the development of an efficient and simple technique with long-term benefits. This paper presents an algorithm based on singular value decomposition (SVD) and embedded zerotree wavelet (EZW) techniques for ECG signal compression, which deals with the huge data volume of ambulatory systems. The proposed method utilizes a low-rank matrix for initial compression of the two-dimensional (2-D) ECG data array using SVD, and then EZW is applied for final compression. Construction of the 2-D array is the key pre-processing issue for the proposed technique. Here, three different beat segmentation approaches have been exploited for 2-D array construction, using segmented beat alignment that exploits beat correlation. The proposed algorithm has been tested on MIT-BIH arrhythmia records, and it was found to be very efficient in compressing different types of ECG signal with low signal distortion according to different fidelity assessments. The evaluation results illustrate that the proposed algorithm achieves a compression ratio of 24.25:1 with excellent quality of signal reconstruction, a percentage root-mean-square difference (PRD) of 1.89% for ECG record 100, consuming only 162 bps instead of 3960 bps for the uncompressed data. The proposed method is efficient and flexible across different types of ECG signal and controls the quality of reconstruction. Simulation results clearly illustrate that the proposed method can play a significant role in saving memory space at health data centres as well as bandwidth in telemedicine-based healthcare systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
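The initial low-rank stage can be sketched with a plain truncated SVD of the beat-aligned array, together with the PRD fidelity measure used above. The synthetic beats and chosen rank are illustrative, and the EZW stage is omitted:

```python
import numpy as np

def svd_compress(beats, rank):
    """Rank-r approximation of the beat-aligned 2-D ECG array (initial lossy stage)."""
    U, S, Vt = np.linalg.svd(beats, full_matrices=False)
    return U[:, :rank], S[:rank], Vt[:rank]

def svd_reconstruct(U, S, Vt):
    return (U * S) @ Vt

def prd(x, y):
    """Percentage root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

# Synthetic "beats": one template pulse with varying amplitude plus mild noise.
rng = np.random.default_rng(0)
n = np.arange(200)
template = np.exp(-0.5 * ((n - 100) / 8.0) ** 2)      # QRS-like pulse
amps = 1.0 + 0.1 * rng.standard_normal(64)
beats = np.outer(amps, template) + 0.005 * rng.standard_normal((64, 200))

U, S, Vt = svd_compress(beats, rank=1)
recon = svd_reconstruct(U, S, Vt)
```

Because aligned beats are highly correlated, even rank 1 captures most of the energy, which is exactly what makes the 2-D array construction worthwhile.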
Stress-induced activation of decomposition of organic explosives: a simple way to understand.
Zhang, Chaoyang
2013-01-01
We provide a very simple way to understand the stress-induced activation of decomposition of organic explosives by taking the simplest explosive molecule, nitromethane (NM), as a prototype and constraining one or two NM molecules in a shell to represent the condensed phase of NM under the stress caused by tension and compression, sliding and rotational shear, and imperfection. The results show that the stress loaded on an NM molecule can always reduce the barriers of its decomposition. We think the origin of this stress-induced activation is the increased repulsive intra- and/or inter-molecular interaction potentials in explosives resulting from the stress, whose release acts to accelerate the decomposition. Besides, with these models, we can understand that explosives in the gaseous state are easier to analyze than those in the condensed state, and that the voids in condensed explosives make them more sensitive to external stimuli relative to the perfect crystals.
Scalar/Vector potential formulation for compressible viscous unsteady flows
NASA Technical Reports Server (NTRS)
Morino, L.
1985-01-01
A scalar/vector potential formulation for unsteady viscous compressible flows is presented. The formulation is based on the classical Helmholtz decomposition of any vector field into the sum of an irrotational field and a solenoidal field, and is derived from fundamental principles of mechanics and thermodynamics. The governing equations for the scalar potential and vector potential are obtained without restrictive assumptions on the equation of state or on the constitutive relations for the stress tensor and the heat flux vector.
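In standard notation, the Helmholtz decomposition underlying the formulation reads:

```latex
\mathbf{v} = \nabla \phi + \nabla \times \boldsymbol{\psi},
\qquad \nabla \cdot \boldsymbol{\psi} = 0,
```

where $\phi$ is the scalar potential carrying the irrotational (curl-free) part of the velocity field and $\boldsymbol{\psi}$ is the vector potential carrying the solenoidal (divergence-free) part; the gauge condition $\nabla \cdot \boldsymbol{\psi} = 0$ makes the split unique up to harmonic fields.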
Shi, Jianyong; Qian, Xuede; Liu, Xiaodong; Sun, Long; Liao, Zhiqiang
2016-09-01
The total compression of municipal solid waste (MSW) consists of primary, secondary, and decomposition compression. It is usually difficult to distinguish between these three parts. In this study, the oedometer test was used to separate primary from secondary compression and to determine the primary and secondary compression coefficients. In addition, the ending time of primary compression was proposed based on MSW compression tests under a degradation-inhibited condition achieved by adding vinegar. The secondary compression occurring during the primary compression stage accounts for a relatively high percentage of both the total compression and the total secondary compression. The relationship between the degradation ratio and time was obtained independently from the tests. Furthermore, a combined compression calculation method for MSW covering all three parts of compression, including organic degradation, is proposed based on a one-dimensional compression method. The relationship between the methane generation potential L0 of the LandGEM model and the degradation compression index is also discussed. A special column compression apparatus, which can be used to simulate the whole compression process of MSW in China, was designed. According to the results obtained from a 197-day column compression test, the new combined calculation method for MSW compression was analyzed. Degradation compression is the main part of MSW compression over the medium term of the test. Copyright © 2015 Elsevier Ltd. All rights reserved.
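In the spirit of the combined method, a one-dimensional settlement calculation summing the three parts might be sketched as follows. The log-stress primary term, log-time secondary term, and the linear dependence of the decomposition term on the degraded organic fraction are illustrative assumptions, not the paper's formulation, and all coefficient values are hypothetical:

```python
import math

def total_compression(H0, Cc, sigma0, sigma, Ca, t_p, t, Cd, degraded_fraction):
    """Combined 1-D compression of an MSW layer of initial height H0 (illustrative).

    primary: stress-driven consolidation, log of the stress ratio
    secondary: creep, log of time past the end of primary compression t_p
    decomposition: taken proportional to the degraded organic fraction
    """
    primary = Cc * H0 * math.log10(sigma / sigma0)
    secondary = Ca * H0 * math.log10(t / t_p) if t > t_p else 0.0
    decomposition = Cd * H0 * degraded_fraction
    return primary + secondary + decomposition

# Hypothetical numbers for a 10 m layer over the 197-day test duration.
settlement = total_compression(H0=10.0, Cc=0.2, sigma0=50.0, sigma=200.0,
                               Ca=0.03, t_p=30.0, t=197.0,
                               Cd=0.15, degraded_fraction=0.4)
```

The point of the structure is that the three contributions can be computed separately and summed, mirroring how the paper separates them experimentally.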
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... DEPARTMENT OF ENERGY Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops AGENCY: Fuel Cell Technologies Program, Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of meeting. SUMMARY: The Systems Integration group of...
Wavelet domain textual coding of Ottoman script images
NASA Astrophysics Data System (ADS)
Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.
1996-02-01
Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different there: typically, one has to deal with compound structures consisting of a group of letters, so the matching criterion must be defined on those compound structures. Furthermore, for Ottoman scripts the text images are gray-tone or color images, for reasons described in the paper. In our method the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to a linear subband decomposition, we also used a nonlinear subband decomposition whose filters preserve edges in the low-resolution subband image.
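The subband decomposition used above can be illustrated with a one-level separable Haar transform: matching compound structures in the quarter-size LL subband is what shrinks the 2-D search space. This is a generic NumPy sketch, not the authors' implementation (which additionally uses a nonlinear, edge-preserving subband filter).

```python
import numpy as np

def haar2d(img):
    """One-level separable 2-D Haar decomposition into LL, LH, HL, HH subbands
    (averaging/differencing pairs along rows, then along columns)."""
    def split(a, axis):
        even = a.take(np.arange(0, a.shape[axis], 2), axis)
        odd = a.take(np.arange(1, a.shape[axis], 2), axis)
        return (even + odd) / 2, (even - odd) / 2
    L, H = split(img, 0)
    LL, LH = split(L, 1)
    HL, HH = split(H, 1)
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)  # toy stand-in for a script image
LL, LH, HL, HH = haar2d(img)
print(LL.shape)  # (4, 4): each level quarters the area to be searched
```

Template matching can then run first in LL and only be refined at full resolution for candidate locations.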
He, Zheng-Hua; Chen, Jun; Ji, Guang-Fu; Liu, Li-Min; Zhu, Wen-Jun; Wu, Qiang
2015-08-20
Despite extensive efforts to study the decomposition mechanism of HMX under extreme conditions, an intrinsic understanding of the mechanical and chemical response processes that induce the initial chemical reaction has not yet been achieved. In this work, the microscopic dynamic response and initial decomposition of β-HMX with a (1 0 0) surface and a molecular vacancy under shock conditions were explored by means of the self-consistent-charge density-functional tight-binding method (SCC-DFTB) in conjunction with the multiscale shock technique (MSST). The evolution of various bond lengths and charge transfers was analyzed to understand the initial reaction mechanism of HMX. Our results revealed that the C-N bond close to the major axis had lower compression sensitivity and higher stretch activity. During the early compression process, charge was transferred mainly from the N-NO2 group along the minor axis and from H atoms to C atoms. The first reaction of HMX was primarily initiated by fission of the molecular ring at the C-N bond close to the major axis. Further breaking of the molecular ring enhanced intermolecular interactions and promoted the cleavage of C-H and N-NO2 bonds. More significantly, the dynamic response behavior clearly depended on the angle between a chemical bond and the shock direction.
Lagrangian statistics in compressible isotropic homogeneous turbulence
NASA Astrophysics Data System (ADS)
Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi
2011-11-01
In this work we conducted direct numerical simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, i.e., the statistics are computed along the trajectories of passive tracers. The numerical method combines the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. Lagrangian probability density functions (p.d.f.'s) were then calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to split the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect shows up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities are also discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
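For a periodic field, the Helmholtz decomposition used here reduces to a pointwise projection in Fourier space: the compressive (curl-free) part is the projection of the velocity onto the wavevector, and the solenoidal (divergence-free) part is the remainder. A minimal 2-D NumPy sketch, assuming a periodic domain (this is not the authors' solver):

```python
import numpy as np

def helmholtz_decompose(u, v):
    """Split a periodic 2-D velocity field into solenoidal (divergence-free)
    and compressive (curl-free) parts via spectral projection."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n                  # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                             # avoid dividing by zero at the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    proj = (KX * uh + KY * vh) / k2            # (k . u_hat) / |k|^2
    uc, vc = KX * proj, KY * proj              # compressive (longitudinal) part
    us, vs = uh - uc, vh - vc                  # solenoidal remainder
    return (np.fft.ifft2(us).real, np.fft.ifft2(vs).real,
            np.fft.ifft2(uc).real, np.fft.ifft2(vc).real)

# sanity check: a pure shear flow u = sin(y), v = 0 is entirely solenoidal
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(Y), np.zeros_like(Y)
us, vs, uc, vc = helmholtz_decompose(u, v)
print(np.allclose(uc, 0, atol=1e-10))  # True
```

The same projection, applied mode by mode in 3-D, is the standard way DNS codes separate the two velocity components.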
Projection decomposition algorithm for dual-energy computed tomography via deep neural network.
Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei
2018-03-15
Dual-energy computed tomography (DECT) has been widely used to improve the identification of substances from different spectral information. Decomposition of mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimating that decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net, with a two-layer structure, fits the nonlinear transform between energy projections and basis-material thickness. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases mixed with photon noise. Moreover, the DNN takes only 0.4 s to generate a decomposition solution at the 360 × 512 scale, about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrate the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solving the nonlinear problem in DECT.
NASA Technical Reports Server (NTRS)
Hurst, Victor, IV; West, Sarah; Austin, Paul; Branson, Richard; Beck, George
2005-01-01
Astronaut crew medical officers (CMOs) aboard the International Space Station (ISS) receive 40 hours of medical training over 18 months before each mission, including two-person cardiopulmonary resuscitation (2CPR) as recommended by the American Heart Association (AHA). Recent studies have concluded that the use of metronomic tones improves the coordination of 2CPR by trained clinicians, but 2CPR performance data for minimally trained caregivers have been limited. The goal of this study was to determine whether use of a metronome by minimally trained caregivers (CMO analogues) would improve 2CPR performance. Twenty pairs of minimally trained caregivers certified in 2CPR per AHA guidelines performed 2CPR for 4 minutes on an instrumented manikin using 3 interventions: 1) standard 2CPR without a metronome [NONE], 2) standard 2CPR plus a metronome coordinating compression rate only [MET], 3) standard 2CPR plus a metronome coordinating both compression rate and ventilation rate [BOTH]. Caregivers were evaluated for their ability to meet the AHA guideline of 32 breaths and 240 compressions in 4 minutes. All (100%) caregivers using the BOTH intervention provided the required number of ventilation breaths, compared with 10% of NONE caregivers and 0% of MET caregivers. For compressions, 97.5% of the BOTH caregivers did not meet the AHA compression guideline; however, an average of 238 of the desired 240 compressions were completed. None of the caregivers met the compression guideline using the NONE and MET interventions. This study demonstrates that use of metronomic tones by minimally trained caregivers to coordinate both compressions and breaths improves 2CPR performance. Meeting the breath guideline is important to minimize air entering the stomach, thus decreasing the likelihood of gastric aspiration.
These results suggest that manifesting a metronome for the ISS may augment the performance of 2CPR on orbit and thus may increase the level of care.
46 CFR 112.50-7 - Compressed air starting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...
The role of bulk viscosity on the decay of compressible, homogeneous, isotropic turbulence
NASA Astrophysics Data System (ADS)
Johnsen, Eric; Pan, Shaowu
2016-11-01
The practice of neglecting bulk viscosity in studies of compressible turbulence is widespread. While this is exact for monatomic gases and unlikely to strongly affect the dynamics of fluids whose bulk-to-shear viscosity ratio is small, or of weakly compressible turbulence, the assumption is not justifiable for compressible turbulent flows of gases whose bulk viscosity is orders of magnitude larger than their shear viscosity (e.g., CO2). To understand the mechanisms by which bulk viscosity and the associated phenomena affect compressible turbulence, we conduct DNS of freely decaying compressible, homogeneous, isotropic turbulence for bulk-to-shear viscosity ratios ranging from 0 to 1000. Our simulations demonstrate that bulk viscosity increases the decay rate of turbulent kinetic energy; while enstrophy exhibits little sensitivity to bulk viscosity, dilatation is reduced by an order of magnitude within two eddy turnover times. Via a Helmholtz decomposition of the flow, we determined that bulk viscosity damps the dilatational velocity and reduces dilatational-solenoidal exchanges, as well as pressure-dilatation coupling. In short, bulk viscosity renders compressible turbulence effectively incompressible by reducing energy transfer between translational and internal modes.
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed sensing (CS) has been applied in dynamic magnetic resonance imaging (MRI) to accelerate data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and various matrix/vector transforms are then used to explore the image sparsity; such methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the higher-order singular value decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable to low-rank matrix recovery methods and outperformed conventional sparse recovery methods. PMID:24901331
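The HOSVD named above can be sketched directly in NumPy: each factor matrix holds the left singular vectors of one mode-n unfolding, and the core tensor is obtained by projecting the tensor onto all of them. A minimal sketch of the transform itself, not of the authors' CS reconstruction pipeline:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode becomes the rows, all other modes the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Higher-order SVD: factor matrices from each unfolding, plus the core."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
    core = T
    for m, Um in enumerate(U):
        core = mode_multiply(core, Um.conj().T, m)
    return core, U

# without truncation the HOSVD reconstructs the tensor exactly
T = np.random.rand(6, 5, 4)
core, U = hosvd(T)
R = core
for m, Um in enumerate(U):
    R = mode_multiply(R, Um, m)
print(np.allclose(R, T))  # True
```

Sparsity (and compression) comes from keeping only the large-magnitude core entries, which couple spatial and temporal modes simultaneously rather than independently.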
NASA Technical Reports Server (NTRS)
Houseman, John (Inventor); Voecks, Gerald E. (Inventor)
1986-01-01
A flow-through catalytic reactor that selectively decomposes methanol into a soot-free, hydrogen-rich product gas, using engine exhaust at 200 to 650 C to provide the heat for vaporizing and decomposing the methanol, is described. The reactor is combined with either a spark-ignited or compression-ignited internal combustion engine or a gas turbine to form a combustion engine system. The system may be fueled entirely by the hydrogen-rich gas produced in the methanol decomposition reactor, or it may be operated on mixed fuels for transient power gain and for cold start of the engine system. The reactor includes a decomposition zone formed by a plurality of elongated cylinders containing a body of vapor-permeable methanol decomposition catalyst, preferably a shift catalyst such as copper-zinc.
Dual domain watermarking for authentication and compression of cultural heritage images.
Zhao, Yang; Campisi, Patrizio; Kundur, Deepa
2004-03-01
This paper proposes an approach for the combined image authentication and compression of color images by making use of a digital watermarking and data hiding framework. The digital watermark is comprised of two components: a soft-authenticator watermark for authentication and tamper assessment of the given image, and a chrominance watermark employed to improve the efficiency of compression. The multipurpose watermark is designed by exploiting the orthogonality of various domains used for authentication, color decomposition and watermark insertion. The approach is implemented as a DCT-DWT dual domain algorithm and is applied for the protection and compression of cultural heritage imagery. Analysis is provided to characterize the behavior of the scheme under ideal conditions. Simulations and comparisons of the proposed approach with state-of-the-art existing work demonstrate the potential of the overall scheme.
Real-Time Mobile Device-Assisted Chest Compression During Cardiopulmonary Resuscitation.
Sarma, Satyam; Bucuti, Hakiza; Chitnis, Anurag; Klacman, Alex; Dantu, Ram
2017-07-15
Prompt administration of high-quality cardiopulmonary resuscitation (CPR) is a key determinant of survival from cardiac arrest, so strategies to improve CPR quality at the point of care could improve resuscitation outcomes. We tested whether a low-cost, scalable mobile phone- or smart watch-based solution could provide accurate measures of compression depth and rate during simulated CPR. Fifty health care providers (58% intensive care unit nurses) performed simulated CPR on a calibrated training manikin (Resusci Anne, Laerdal) while wearing both devices, receiving real-time audiovisual feedback from each device sequentially. The primary outcome was accuracy of compression depth and rate compared with the calibrated training manikin. The secondary outcome was improvement in CPR quality, defined as meeting both the guideline-recommended compression depth (5 to 6 cm) and rate (100 to 120/minute). Compared with the training manikin, typical error for compression depth was <5 mm (smart phone 4.6 mm, 95% CI 4.1 to 5.3 mm; smart watch 4.3 mm, 95% CI 3.8 to 5.0 mm). Compression rates were similarly accurate (smart phone Pearson's R = 0.93; smart watch R = 0.97). There was no difference in improved CPR quality, defined as the number of sessions meeting both the guideline-recommended compression depth (50 to 60 mm) and rate (100 to 120 compressions/minute), with mobile device feedback (60% vs 50%; p = 0.3). Sessions that did not meet guideline recommendations failed primarily because of inadequate compression depth (46 ± 2 mm). In conclusion, mobile device application-guided CPR can accurately track compression depth and rate during simulation in a practice environment in accordance with resuscitation guidelines. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons
NASA Astrophysics Data System (ADS)
Kweon, Jae Ryong
2017-03-01
In this paper we study the compressible Stokes equations with a no-slip boundary condition on non-convex polygons and show a best regularity result that the solution can have without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition u = w + ∇φ_R with div w = 0 and a potential φ_R. Here w is the solution of the incompressible Stokes problem, and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at the non-convex vertices.
NASA Astrophysics Data System (ADS)
Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei
2017-12-01
Superelastic graphene aerogels with ultra-high compressibility show promise for compression-tolerant supercapacitor electrodes, but their specific capacitance is too low for practical application. Herein, we deposited polyaniline (PANI) into a superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. A graphene/PANI aerogel with an optimized PANI mass content of 63 wt% shows an improved specific capacitance of 713 F g^-1 in a three-electrode system, and retains a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. All-solid-state supercapacitors were assembled to demonstrate the compression tolerance of the graphene/PANI electrodes. The gravimetric capacitance of the graphene/PANI electrodes reaches 424 F g^-1 and retains 96% of that value even at 90% compressive strain, and a volumetric capacitance of 65.5 F cm^-3 is achieved, much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to raise the overall output voltage, suggesting the potential to meet practical applications.
Inviscid criterion for decomposing scales
NASA Astrophysics Data System (ADS)
Zhao, Dongxiao; Aluie, Hussein
2018-05-01
The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 247, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has recently been proved [Aluie, Physica D 247, 54 (2013), 10.1016/j.physd.2012.12.009] that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable for studying inertial-range dynamics in variable-density and compressible turbulence. Our results have a practical modeling implication: viscous terms in large eddy simulations do not need to be modeled and can be neglected.
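The Favre decomposition mentioned above replaces the conventional mean ū = ⟨u⟩ with the density-weighted mean ũ = ⟨ρu⟩/⟨ρ⟩. The toy example below, using made-up sample data, shows that the two averages differ exactly when density and velocity fluctuations are correlated, which is the compressible-flow situation the criterion addresses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 1.0 + 0.3 * rng.random(n)          # fluctuating density samples
u = 2.0 + (rho - rho.mean())             # velocity correlated with density

u_reynolds = u.mean()                    # conventional (Reynolds) average
u_favre = (rho * u).mean() / rho.mean()  # density-weighted (Favre) average
print(u_favre - u_reynolds)              # positive here: the averages differ
```

The gap equals cov(ρ, u)/⟨ρ⟩, so for uncorrelated fluctuations the two means coincide and the distinction disappears, as in incompressible flow.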
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The Compressed Continuous Computation (C3) library enables continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, and integrating multidimensional functions.
Structure of peat soils and implications for biogeochemical processes and hydrological flow
NASA Astrophysics Data System (ADS)
Rezanezhad, F.; McCarter, C. P. R.; Gharedaghloo, B.; Kleimeier, C.; Milojevic, T.; Liu, H.; Weber, T. K. D.; Price, J. S.; Quinton, W. L.; Lenartz, B.; Van Cappellen, P.
2017-12-01
Permafrost peatlands contain globally important amounts of soil organic carbon and play major roles in global water, nutrient, and biogeochemical cycles. The structure of peatland soils (i.e., peat) is highly complex, with unique physical and hydraulic properties; significant, and only partially reversible, shrinkage occurs during dewatering (including water table fluctuations), compression, and/or decomposition. These distinct physical and hydraulic properties control water flow, which in turn affects reactive and non-reactive solute transport (such as sorption or degradation) and biogeochemical functions. Additionally, peat further attenuates solute migration through molecular diffusion into the inactive pores of Sphagnum-dominated peat. These slow, diffusion-limited solute exchanges between pore regions may give rise to pore-scale chemical gradients and heterogeneous distributions of microbial habitats and activity in peat soils. Permafrost peat plateaus have the same essential subsurface characteristics as other organic-soil-covered peatlands, where the hydraulic conductivity is related to the degree of decomposition and soil compression. Increasing levels of decomposition correspond to a reduction of effective pore diameter and consequently restrict water and solute flow (hydraulic conductivity drops by several orders of magnitude between the ground surface and a depth of 50 cm). In this presentation, we summarize the current knowledge of key physical and hydraulic properties related to the structure of peat soils worldwide and discuss their implications for water storage, flow, and the migration of solutes.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low-rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory required to calculate a low-rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF fast imaging with steady-state precession sequence and more than 15-fold for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary-fitting methods are memory-efficient low-rank approximation methods that can benefit the use of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as those considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
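A randomized SVD of the kind used above for dictionary compression can be sketched in a few lines of NumPy, following the usual Halko-Martinsson-Tropp recipe: sketch the range with a Gaussian test matrix, orthonormalize, then take an exact SVD of the small projected matrix. The matrix sizes and ranks below are illustrative, not those of an actual MRF dictionary:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Randomized SVD: never forms the full SVD of A, only of a small sketch."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(Y)                      # orthonormal range estimate of A
    Uh, sv, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :rank], sv[:rank], Vt[:rank]

# a synthetic low-rank "dictionary": 2000 atoms of length 400, true rank 50
rng = np.random.default_rng(1)
D = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 400))
U, sv, Vt = randomized_svd(D, rank=50)
err = np.linalg.norm(U * sv @ Vt - D) / np.linalg.norm(D)
print(err)  # near machine precision for an exactly rank-50 matrix
```

The memory savings come from only ever holding the tall sketch Y and the small matrix Q.T @ A, rather than the dense factors of a full SVD.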
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) compressing the significance map with a variable-length code based on run-length encoding and 2) representing the significant coefficients in direct binary form. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
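The threshold-plus-significance-map step can be sketched as follows. The one-level Haar transform, the 99% energy target, and the sinusoidal test signal are illustrative stand-ins for the multi-level DWT, per-group thresholds, and preprocessing described in the paper:

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar DWT: approximation then detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def run_length_encode(bits):
    """Encode a binary significance map as (value, run-length) pairs."""
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

x = np.sin(np.linspace(0, 8 * np.pi, 256))  # stand-in for a preprocessed ECG segment
c = haar_dwt(x)
# keep the smallest set of coefficients holding 99% of the signal energy
order = np.argsort(np.abs(c))[::-1]
cum = np.cumsum(c[order] ** 2) / np.sum(c ** 2)
keep = order[: int(np.searchsorted(cum, 0.99)) + 1]
sig_map = np.zeros(len(c), dtype=int)
sig_map[keep] = 1
print(len(run_length_encode(sig_map)), "runs for", sig_map.sum(), "significant coefficients")
```

Only the run-length pairs and the quantized significant coefficients need to be stored, which is where the compression comes from.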
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberger, David; Evlyukhin, Egor; Cifligu, Petrika; Wang, Yonggang; Pravica, Michael
2017-09-28
We report measurements of the X-ray-induced decomposition of crystalline strontium oxalate (SrC2O4) as a function of energy and high pressure in two separate experiments. SrC2O4 at ambient conditions was irradiated with monochromatic synchrotron X-rays ranging in energy from 15 to 28 keV. A broad resonance of the decomposition yield was observed, with a clear maximum when irradiating with ~20 keV X-rays at ambient pressure. Little or no decomposition was observed at 15 keV, which is below the Sr K-shell energy of 16.12 keV, suggesting that excitation of core electrons may play an important role in the destabilization of the C2O4(2-) anion. A second experiment investigated the high-pressure dependence of the X-ray-induced decomposition of strontium oxalate at fixed energy. SrC2O4 was compressed in a diamond anvil cell (DAC) over the pressure range 0 to 7.6 GPa in 1 GPa increments and irradiated in situ with 20 keV X-rays. A marked pressure dependence of the decomposition yield was observed, with a maximum at around 1 GPa, suggesting that different crystal structures of the material play an important role in the decomposition process. This may be due in part to a phase transition observed near this pressure.
Compressible Turbulent Channel Flows: DNS Results and Modeling
NASA Technical Reports Server (NTRS)
Huang, P. G.; Coleman, G. N.; Bradshaw, P.; Rai, Man Mohan (Technical Monitor)
1994-01-01
The present paper addresses some topical issues in modeling compressible turbulent shear flows. The work is based on direct numerical simulation of two supersonic fully developed channel flows between very cold isothermal walls. Detailed decomposition and analysis of terms appearing in the momentum and energy equations are presented. The simulation results are used to provide insight into differences between conventional time- and Favre-averaging of the mean-flow and turbulent quantities. Study of the turbulence energy budget for the two cases shows that the compressibility effects due to turbulent density and pressure fluctuations are insignificant. In particular, the dilatational dissipation and the mean product of the pressure and dilatation fluctuations are very small, contrary to the results of simulations for sheared homogeneous compressible turbulence and to recent proposals for models of general compressible turbulent flows. This provides a possible explanation of why the Van Driest density-weighted transformation is so successful in correlating compressible boundary-layer data. Finally, it is found that the DNS data do not support the strong Reynolds analogy. A more general representation of the analogy is analyzed and shown to match the DNS data very well.
The History of the APS Topical Group on Shock Compression of Condensed Matter
NASA Astrophysics Data System (ADS)
Forbes, Jerry W.
2002-07-01
In order to provide broader scientific recognition and to advance the science of shock-compressed condensed matter, a group of American Physical Society (APS) members worked within the Society to make this field an active part of the APS. Individual papers were presented at APS meetings starting in the 1940s, and shock wave sessions were organized starting with the 1967 Pasadena meeting. Shock wave topical conferences began in 1979 in Pullman, WA. In 1984, signatures were obtained on a petition from a balanced cross-section of the shock wave community to form an APS Topical Group (TG). The APS Council officially accepted the formation of the Shock Compression of Condensed Matter (SCCM) TG at its October 1984 meeting, firmly aligning the shock wave field with a major physical-science organization. Most early topical conferences were sanctioned by the APS, while those held after 1992 were official APS meetings. The topical group organizes a shock wave topical conference in odd-numbered years and participates in shock wave/high pressure sessions at APS general meetings in even-numbered years.
A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging
Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.
2012-01-01
Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, "CS+GRAPPA," to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets. Then, we reconstructed each subset using CS and averaged the results to get a final CS k-space reconstruction. We used both a standard CS and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
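The decomposition step described above, splitting a coherent, equidistant k-space sampling pattern into random subsets that each look incoherent to a CS solver, can be sketched as follows. This is an illustrative sketch only: the index layout, subset count, and shuffling scheme are assumptions, and the actual CS and GRAPPA reconstructions are out of scope.

```python
# Hypothetical sketch of the CS+GRAPPA decomposition step: equidistant
# (coherent) k-space sample positions are split into random subsets; each
# subset would be reconstructed with CS and the results averaged.
import random

def decompose(sample_indices: list[int], n_subsets: int, seed: int = 0) -> list[list[int]]:
    """Split a list of sample positions into n_subsets random subsets."""
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = sample_indices[:]
    rng.shuffle(shuffled)
    return [sorted(shuffled[i::n_subsets]) for i in range(n_subsets)]

equidistant = list(range(0, 256, 4))           # every 4th k-space line
subsets = decompose(equidistant, n_subsets=2)  # two subsets, as the study found optimal
assert sorted(subsets[0] + subsets[1]) == equidistant
```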
Liu, Zhichao; Wu, Qiong; Zhu, Weihua; Xiao, Heming
2015-04-28
Density functional theory with dispersion correction (DFT-D) was employed to study the effects of vacancies and pressure on the structure and initial decomposition of crystalline 5-nitro-2,4-dihydro-3H-1,2,4-triazol-3-one (β-NTO), a high-energy insensitive explosive. A comparative analysis of the chemical behavior of NTO in the ideal bulk crystal and in vacancy-containing crystals under applied hydrostatic compression was carried out. Our calculated formation energies, vacancy interaction energies, electron density differences, and frontier orbitals reveal that the stability of NTO can be effectively manipulated by changing the molecular environment. Bimolecular hydrogen transfer is suggested as a potential initial chemical reaction in the vacancy-containing NTO solid at 50 GPa, taking precedence over the C-NO2 bond dissociation that initiates decomposition in the gas phase. The vacancy defects introduced into the ideal bulk NTO crystal can produce a localized site where the initial decomposition is preferentially accelerated and then promotes further decomposition. Our results may shed some light on the influence of molecular environments on the initial decomposition pathways in molecular explosives.
Global Solutions to Repulsive Hookean Elastodynamics
NASA Astrophysics Data System (ADS)
Hu, Xianpeng; Masmoudi, Nader
2017-01-01
The global existence of classical solutions to the three-dimensional repulsive Hookean elastodynamics around an equilibrium is considered. By linearization and Hodge's decomposition, the compressible part of the velocity, the density, and the compressible part of the transpose of the deformation gradient satisfy Klein-Gordon equations with speed √2, while the incompressible parts of the velocity and of the transpose of the deformation gradient satisfy wave equations with speed one. The space-time resonance method combined with the vector field method is used in a novel way to obtain the decay of the solution and hence global existence.
Formation and decomposition of CO2-filled ice.
Massani, B; Mitterdorfer, C; Loerting, T
2017-10-07
Recently it was shown that CO2-filled ice is formed upon compression of CO2-clathrate hydrate. Here we show two alternative routes of its formation, namely, by decompression of CO2/ice VI mixtures at 250 K and by isobaric heating of CO2/high-density amorphous ice mixtures at 0.5-1.0 GPa above 200 K. Furthermore, we show that filled ice may either transform into the clathrate at an elevated pressure or decompose to "empty" hexagonal ice at ambient pressure and low temperature. This complements the literature studies in which decomposition to ice VI was favoured at high pressures and low temperatures.
Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition
NASA Astrophysics Data System (ADS)
Alouges, François; Aussal, Matthieu; Parolin, Emile
2017-07-01
This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we also compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.
Shock Melting of Iron Silicide as Determined by In Situ X-ray Diffraction.
NASA Astrophysics Data System (ADS)
Newman, M.; Kraus, R. G.; Wicks, J. K.; Smith, R.; Duffy, T. S.
2016-12-01
The equation of state of core alloys at pressures and temperatures near the solid-liquid coexistence curve is important for understanding the dynamics at the inner core boundary of the Earth and super-Earths. Here, we present a series of laser-driven shock experiments on textured polycrystalline Fe-15Si. These experiments were conducted at the Omega and Omega EP laser facilities. Particle velocities in the Fe-15Si samples were measured using a line VISAR and were used to infer the thermodynamic state of the shocked samples. In situ x-ray diffraction measurements were used to probe the melting transition and investigate the potential decomposition of Fe-15Si into hcp and B2 structures. This work examines the kinetic effects of decomposition due to the short time scale of dynamic compression experiments. In addition, the thermodynamic data collected in these experiments add to a limited body of information regarding the equation of state of Fe-15Si, which is a candidate composition for Earth's outer core. Our experimental results show a highly textured solid phase upon shock compression to pressures ranging from 170 to 300 GPa. Below 320 GPa, we observe diffraction peaks consistent with decomposition of the D03 starting material into an hcp and a cubic (potentially B2) structure. Upon shock compression above 320 GPa, the intense and textured solid diffraction peaks give way to diffuse scattering and loss of texture, consistent with melting along the Hugoniot. Comparing these results to those for pure iron, we can ascertain that the addition of 15 wt% silicon either increases the equilibrium melting temperature significantly or significantly increases the metastability of the solid phase relative to the liquid. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Lu, Zhipeng; Zeng, Qun; Xue, Xianggui; Zhang, Zengming; Nie, Fude; Zhang, Chaoyang
2017-08-30
Performance and behavior under high-temperature, high-pressure conditions are fundamental for many materials. In the present work we study the pressure effect on the thermal decomposition of a new energetic ionic salt (EIS), TKX-50, by confining samples in a diamond anvil cell and using Raman spectroscopy measurements and ab initio simulations. We find a quadratic increase in the decomposition temperature (Td) of TKX-50 with increasing pressure P (Td = 6.28P^2 + 12.94P + 493.33, with Td in K and P in GPa; R^2 = 0.995), and that the decomposition under various pressures is initiated by an intermolecular H-transfer reaction (a bimolecular reaction). Surprisingly, this finding is contrary to a general observation about the pressure effect on the decomposition of common energetic materials (EMs) composed of neutral molecules: increasing pressure will impede the decomposition if it starts from a bimolecular reaction. Our results also demonstrate that increasing pressure impedes the H-transfer via the enhanced long-range electrostatic repulsion between the H+δ atoms of neighboring NH3OH+ ions, with blue shifts of the intermolecular H-bonds. The subsequent decomposition of the H-transferred intermediates is also suppressed, because it proceeds from a bimolecular reaction to a unimolecular one, which is generally prevented by compression. These two factors are the root cause of the retarded decomposition of TKX-50 with increasing pressure. Our finding therefore overturns the previously proposed concept that, for condensed materials, increasing pressure will accelerate thermal decomposition initiated by bimolecular reactions, and reveals a distinct mechanism of the pressure effect on thermal decomposition: increasing pressure does not always promote the decay of condensed materials initiated through bimolecular reactions. Moreover, such a mechanism may apply to other EISs due to their similar intermolecular interactions.
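The fitted quadratic pressure dependence of the decomposition temperature reported above can be evaluated directly. A minimal sketch (the function name is ours; the coefficients are taken from the abstract's fit):

```python
# Hypothetical sketch: evaluating the fitted pressure dependence of the
# TKX-50 decomposition temperature reported in the abstract,
# T_d = 6.28*P**2 + 12.94*P + 493.33 (T_d in K, P in GPa, R^2 = 0.995).

def decomposition_temperature(p_gpa: float) -> float:
    """Fitted decomposition temperature of TKX-50 (K) at pressure p_gpa (GPa)."""
    return 6.28 * p_gpa**2 + 12.94 * p_gpa + 493.33

# At ambient pressure the fit gives ~493 K; at 5 GPa it rises to ~715 K.
for p in (0.0, 1.0, 5.0):
    print(f"P = {p:4.1f} GPa -> T_d = {decomposition_temperature(p):.1f} K")
```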
Niles, Dana E; Duval-Arnould, Jordan; Skellett, Sophie; Knight, Lynda; Su, Felice; Raymond, Tia T; Sweberg, Todd; Sen, Anita I; Atkins, Dianne L; Friess, Stuart H; de Caen, Allan R; Kurosawa, Hiroshi; Sutton, Robert M; Wolfe, Heather; Berg, Robert A; Silver, Annemarie; Hunt, Elizabeth A; Nadkarni, Vinay M
2018-05-01
Pediatric in-hospital cardiac arrest cardiopulmonary resuscitation quality metrics have been reported in few children less than 8 years old. Our objective was to characterize chest compression fraction, rate, depth, and compliance with 2015 American Heart Association guidelines across multiple pediatric hospitals. Retrospective observational study of data from a multicenter resuscitation quality collaborative from October 2015 to April 2017. Twelve pediatric hospitals across the United States, Canada, and Europe. In-hospital cardiac arrest patients (age < 18 yr) with quantitative cardiopulmonary resuscitation data recordings. None. There were 112 events yielding 2,046 evaluable 60-second epochs of cardiopulmonary resuscitation (196,669 chest compressions). Event cardiopulmonary resuscitation metric summaries (median [interquartile range]) by age: less than 1 year (38/112): chest compression fraction 0.88 (0.61-0.98), chest compression rate 119/min (110-129), and chest compression depth 2.3 cm (1.9-3.0 cm); for 1 to less than 8 years (42/112): chest compression fraction 0.94 (0.79-1.00), chest compression rate 117/min (110-124), and chest compression depth 3.8 cm (2.9-4.6 cm); for 8 to less than 18 years (32/112): chest compression fraction 0.94 (0.85-1.00), chest compression rate 117/min (110-123), chest compression depth 5.5 cm (4.0-6.5 cm). "Compliance" with guideline targets for 60-second chest compression "epochs" was predefined: chest compression fraction greater than 0.80, chest compression rate 100-120/min, and chest compression depth: greater than or equal to 3.4 cm in less than 1 year, greater than or equal to 4.4 cm in 1 to less than 8 years, and 4.5 to less than 6.6 cm in 8 to less than 18 years.
Proportion of less than 1 year, 1 to less than 8 years, and 8 to less than 18 years events with greater than or equal to 60% of 60-second epochs meeting compliance (respectively): chest compression fraction was 53%, 81%, and 78%; chest compression rate was 32%, 50%, and 63%; chest compression depth was 13%, 19%, and 44%. For all events combined, total compliance (meeting all three guideline targets) was 10% (11/112). Across an international pediatric resuscitation collaborative, we characterized the landscape of pediatric in-hospital cardiac arrest chest compression quality metrics and found that they often do not meet 2015 American Heart Association guidelines. Guideline compliance for rate and depth in children less than 18 years is poor, with the greatest difficulty in achieving chest compression depth targets in younger children.
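The predefined per-epoch compliance targets above translate directly into a simple check. An illustrative sketch, not the study's code (the function name and age handling are assumptions; ages in years, depth in cm):

```python
# Illustrative sketch: checking one 60-second CPR epoch against the
# guideline targets predefined in the study: chest compression fraction
# (CCF) > 0.80, rate 100-120/min, and age-dependent depth targets
# (>= 3.4 cm for < 1 yr, >= 4.4 cm for 1 to < 8 yr,
#  4.5 to < 6.6 cm for 8 to < 18 yr).

def epoch_compliant(age_years: float, ccf: float, rate: float, depth_cm: float) -> bool:
    if not ccf > 0.80:
        return False
    if not 100 <= rate <= 120:
        return False
    if age_years < 1:
        return depth_cm >= 3.4
    if age_years < 8:
        return depth_cm >= 4.4
    if age_years < 18:
        return 4.5 <= depth_cm < 6.6
    return False  # adult targets are out of scope for this pediatric study

# Example: the median 1-to-<8-yr epoch (CCF 0.94, 117/min, 3.8 cm) fails on depth.
print(epoch_compliant(5, 0.94, 117, 3.8))  # False
print(epoch_compliant(5, 0.94, 117, 4.6))  # True
```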
Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D
2013-02-01
Fetal ECG (FECG) telemonitoring is an important branch of telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings, such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction does not destroy the interdependence relation among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with many fewer nonzero entries to compress recordings; in particular, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce CPU execution in the data compression stage.
NASA Astrophysics Data System (ADS)
Lv, Peng; Tang, Xun; Yuan, Jiajiao; Ji, Chenglong
2017-11-01
Highly compressible electrodes are in high demand in volume-restricted energy storage devices. Superelastic reduced graphene oxide (rGO) aerogels with attractive characteristics are proposed as promising skeletons for compressible electrodes. Herein, a ternary aerogel was prepared by successively electrodepositing polypyrrole (PPy) and MnO2 into a superelastic rGO aerogel. In the rGO/PPy/MnO2 aerogel, the rGO aerogel provides the continuous conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; and the middle PPy layer not only reduces the interface resistance between rGO and MnO2 but also further enhances the mechanical strength of the rGO backbone. The synergistic effect of the three components leads to excellent performance, including high specific capacitance, reversible compressibility, and extreme durability. The gravimetric capacitance of the compressible rGO/PPy/MnO2 aerogel electrode reaches 366 F g-1 and retains 95.3% of this value even under 95% compressive strain. A volumetric capacitance of 138 F cm-3 is achieved, which is much higher than that of other rGO-based compressible electrodes. 85% of this volumetric capacitance is preserved after 3500 charge/discharge cycles under various compression conditions. This work paves the way for advanced applications of compressible energy-storage devices that meet the requirements of limited space.
NASA Astrophysics Data System (ADS)
Nazri, Fadzli Mohamed; Shahidan, Shahiron; Khaida Baharuddin, Nur; Beddu, Salmia; Hisyam Abu Bakar, Badorul
2017-11-01
This study investigates the effects of high temperature, with five different heating durations, on the residual properties of 30 MPa normal concrete. Concrete cubes were heated up to 600°C, following the ISO 834 standard temperature-time curve, and the temperature was then held constant for 30, 60, 90, 120, or 150 minutes. After heating, the specimens were left to cool in the furnace and then removed. After cooling to ambient temperature, the residual mass and residual compressive strength were measured. The results show that the compressive strength of the concrete decreases as the heating duration increases. This influence of heating duration likely reflects the loss of free water and the decomposition of hydration products in the concrete. As the heating duration increases, the amount of water evaporated also increases, leading to a loss of concrete mass. In conclusion, the percentage losses of mass and compressive strength increased as the heating duration increased.
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis functions and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
High-temperature catalyst for catalytic combustion and decomposition
NASA Technical Reports Server (NTRS)
Mays, Jeffrey A. (Inventor); Lohner, Kevin A. (Inventor); Sevener, Kathleen M. (Inventor); Jensen, Jeff J. (Inventor)
2005-01-01
A robust, high-temperature mixed metal oxide catalyst for propellant decomposition, including high-concentration hydrogen peroxide, and catalytic combustion, including methane-air mixtures. Applications include target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The catalyst system requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. Start-up transients of less than 1 second have been demonstrated with catalyst bed and propellant temperatures as low as 50 degrees Fahrenheit. The catalyst system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life over multiple test articles.
Ge, Ni-Na; Wei, Yong-Kai; Zhao, Feng; Chen, Xiang-Rong; Ji, Guang-Fu
2014-07-01
The electronic structure and initial decomposition of the high explosive HMX under shock loading are examined. The simulation is performed using quantum molecular dynamics in conjunction with the multi-scale shock technique (MSST); a self-consistent charge density-functional tight-binding (SCC-DFTB) method is adopted. The results show that the N-N-C angle changes drastically under shock-wave compression along lattice vector b at a shock velocity of 11 km/s, which is the main cause of the insulator-to-metal transition in the HMX system. The metallization pressure (about 130 GPa) of condensed-phase HMX is predicted for the first time. We also detect the formation of several key products of condensed-phase HMX decomposition, such as NO2, NO, N2, N2O, H2O, CO, and CO2, all of which have been observed in previous experimental studies. Moreover, the initial decomposition products include H2, due to C-H bond breaking as a primary reaction pathway under extreme conditions, which provides a new insight into the initial decomposition mechanism of HMX under shock loading at the atomistic level.
Simulated pressure denaturation thermodynamics of ubiquitin.
Ploetz, Elizabeth A; Smith, Paul E
2017-12-01
Simulations of protein thermodynamics are generally difficult to perform and provide limited information. It is desirable to increase the degree of detail provided by simulation and thereby the potential insight into the thermodynamic properties of proteins. In this study, we outline how to analyze simulation trajectories to decompose conformation-specific, parameter-free, thermodynamically defined protein volumes into residue-based contributions. The total volumes are obtained using established methods from Fluctuation Solution Theory, while the volume decomposition is new and is performed using a simple proximity method. Native and fully extended ubiquitin are used as the test conformations. Changes in the protein volumes are then followed as a function of pressure, allowing conformation-specific protein compressibility values to also be obtained. Residue volume and compressibility values indicate significant contributions to protein denaturation thermodynamics from nonpolar and coil residues, together with a general negative compressibility exhibited by acidic residues.
Real-time filtering and detection of dynamics for compression of HDTV
NASA Technical Reports Server (NTRS)
Sauer, Ken D.; Bauer, Peter
1991-01-01
The preprocessing of video sequences for data compression is discussed. The end goal is a compression system for HDTV capable of transmitting perceptually lossless sequences at under one bit per pixel. Two subtopics were emphasized to prepare the video signal for more efficient coding: (1) nonlinear filtering to remove noise and shape the signal spectrum to take advantage of the insensitivities of human viewers; and (2) segmentation of each frame into temporally dynamic/static regions for conditional frame replenishment. The latter technique operates best under the assumption that the sequence can be modeled as a superposition of an active foreground and a static background. The considerations were restricted to monochrome data, since the standard luminance/chrominance decomposition, which concentrates most of the bandwidth requirements in the luminance, was expected to be used. Similar methods may be applied to the two chrominance signals.
Hydrogen bond asymmetric local potentials in compressed ice.
Huang, Yongli; Ma, Zengsheng; Zhang, Xi; Zhou, Guanghui; Zhou, Yichun; Sun, Chang Q
2013-10-31
A combination of the Lagrangian mechanics of oscillator vibration, molecular dynamics decomposition of volume evolution, and Raman spectroscopy of phonon relaxation has enabled us to resolve the asymmetric, local, short-range potentials pertaining to the hydrogen bond (O:H-O) in compressed ice. Results show that both oxygen atoms in the O:H-O bond initially shift outwardly with respect to the coordination origin (H), lengthening the O-O distance by 0.0136 nm, from 0.2597 to 0.2733 nm, due to Coulomb repulsion between electron pairs on the adjacent oxygen atoms. Under compression, both oxygen atoms then move to the right along the O:H-O bond by different amounts, the two segments approaching an identical length of 0.11 nm. The van der Waals potential VL(r) for the O:H noncovalent bond reaches a valley at -0.25 eV, and the minimum of the exchange potential VH(r) for the H-O polar-covalent bond is -3.97 eV.
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
NASA Astrophysics Data System (ADS)
Kou, Jiaqing; Le Clainche, Soledad; Zhang, Weiwei
2018-01-01
This study proposes an improvement in the performance of reduced-order models (ROMs) based on dynamic mode decomposition to model the flow dynamics of the attractor from a transient solution. By combining higher order dynamic mode decomposition (HODMD) with an efficient mode selection criterion, the HODMD with criterion (HODMDc) ROM is able to identify dominant flow patterns with high accuracy. This helps us to develop a more parsimonious ROM structure, allowing better predictions of the attractor dynamics. The method is tested in the solution of a NACA0012 airfoil buffeting in a transonic flow, and its good performance in both the reconstruction of the original solution and the prediction of the permanent dynamics is shown. In addition, the robustness of the method has been successfully tested using different types of parameters, indicating that the proposed ROM approach is a promising tool for use with both numerical simulations and experimental data.
Polystyrene Foam EOS as a Function of Porosity and Fill Gas
NASA Astrophysics Data System (ADS)
Mulford, Roberta; Swift, Damian
2009-06-01
An accurate EOS for polystyrene foam is necessary for the analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic-to-gas ratios vary among foam samples, according to the density and cell size of the foam. A matrix of compositions has been investigated, allowing prediction of the foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, nitrogen-blown, and CO2-blown foams are investigated to estimate the importance of allowing the air to react with plastic products during decomposition. Results differ somewhat from the conventional EOS, which are generated from values for plastic extrapolated to low densities.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
Sparse Representation for Color Image Restoration (PREPRINT)
2006-10-01
as a universal denoiser of images, which learns the posterior from the given image in a way inspired by the Lempel-Ziv universal compression ... such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In ... describe the data source. Such a model becomes paramount when developing algorithms for processing these signals. In this context, Markov-Random-Field
Compressive sensing in medical imaging
Graff, Christian G.; Sidky, Emil Y.
2015-01-01
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
A hybrid data compression approach for online backup service
NASA Astrophysics Data System (ADS)
Wang, Hua; Zhou, Ke; Qin, MingKang
2009-08-01
With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Given the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: for example, data stream compression can only realize intra-file compression, de-duplication eliminates only inter-file redundant data, and the compression efficiency of either alone cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file compression. Several compression algorithms were tested to measure compression ratio and CPU time, and the suitability of different algorithms to particular situations is analyzed. The performance analysis shows that great improvement is achieved through the hybrid compression policy.
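The two-level idea above, global inter-file de-duplication plus per-block stream compression, can be sketched as follows. This is a hedged toy illustration, not the paper's implementation: fixed-size chunking, SHA-256 content hashes, and zlib stand in for whatever chunking scheme and codecs the actual system uses.

```python
# Toy sketch of a two-level hybrid backup compressor:
# level 1 ("global compression") de-duplicates identical chunks across
# files and users by content hash; level 2 ("block compression") applies
# stream compression (zlib) to each unique chunk.
import hashlib
import zlib

CHUNK = 4096  # fixed-size chunking is a simplification

def backup(files: dict[str, bytes], store: dict[str, bytes]) -> dict[str, list[str]]:
    """Return a per-file manifest of chunk hashes; store maps hash -> compressed chunk."""
    manifests = {}
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in store:                   # global dedup across files/users
                store[h] = zlib.compress(chunk)  # per-chunk stream compression
            hashes.append(h)
        manifests[name] = hashes
    return manifests

def restore(manifest: list[str], store: dict[str, bytes]) -> bytes:
    return b"".join(zlib.decompress(store[h]) for h in manifest)

store: dict[str, bytes] = {}
files = {"a.txt": b"hello world" * 1000, "b.txt": b"hello world" * 1000}
m = backup(files, store)
assert restore(m["a.txt"], store) == files["a.txt"]
print(len(store))  # identical files share chunks, so few unique chunks are stored
```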
High-pressure high-temperature stability of hcp-Ir xOs 1-x (x = 0.50 and 0.55) alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yusenko, Kirill V.; Bykova, Elena; Bykov, Maxim
2016-12-23
Hcp-Ir0.55Os0.45 and hcp-Ir0.50Os0.50 alloys were synthesised by thermal decomposition of single-source precursors in a hydrogen atmosphere. Both alloys correspond to a miscibility gap in the Ir-Os binary phase diagram and are therefore metastable at ambient conditions. In situ powder X-ray diffraction was used to monitor the formation of the hcp-Ir0.55Os0.45 alloy from the (NH4)2[Ir0.55Os0.45Cl6] precursor. A crystalline intermediate compound and nanoscale metallic particles with a large concentration of defects were found to be key intermediates in the thermal decomposition process in hydrogen flow. The high-temperature stability of the titled hcp-structured alloys was investigated upon compression up to 11 GPa using a multi-anvil press and up to 80 GPa using laser-heated diamond-anvil cells, to obtain a phase separation into an fcc + hcp mixture. Compressibility curves at room temperature, as well as thermal expansion at ambient pressure and under compression up to 80 GPa, were collected to obtain thermal expansion coefficients and bulk moduli. The hcp-Ir0.55Os0.45 alloy shows a bulk modulus B0 = 395 GPa. Thermal expansion coefficients were estimated as α = 1.6·10-5 K-1 at ambient pressure and α = 0.3·10-5 K-1 at 80 GPa. The high-pressure, high-temperature data obtained allowed us to construct the first model for a pressure-dependent Ir-Os phase diagram.
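Bulk moduli such as the B0 = 395 GPa quoted above are conventionally extracted by fitting compression curves to an equation of state. A hedged sketch using the standard third-order Birch-Murnaghan form (the pressure derivative B0' = 4 is our assumption; the abstract does not report it):

```python
# Hedged illustration: third-order Birch-Murnaghan equation of state,
# the standard form used to extract bulk moduli from compression curves.
# B0 = 395 GPa is taken from the abstract; B0' = 4 is an assumed default.

def birch_murnaghan(v_over_v0: float, b0_gpa: float = 395.0, b0p: float = 4.0) -> float:
    """Pressure (GPa) at compression ratio V/V0."""
    x = v_over_v0 ** (-1.0 / 3.0)  # (V0/V)^(1/3)
    return 1.5 * b0_gpa * (x**7 - x**5) * (1.0 + 0.75 * (b0p - 4.0) * (x**2 - 1.0))

# Zero pressure at V = V0; pressure rises steeply for such a stiff alloy.
print(round(birch_murnaghan(1.0), 6))  # 0.0
print(birch_murnaghan(0.9) > 0)        # True
```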
Some results on numerical methods for hyperbolic conservation laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang Huanan.
1989-01-01
This dissertation contains some results on the numerical solution of hyperbolic conservation laws. (1) The author introduces an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, a second-order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and centered difference schemes into self-adjusting hybrid schemes, called localized ENO schemes. At or near jumps, he uses the ENO schemes with field-by-field decompositions; otherwise he simply uses the centered difference schemes without the field-by-field decompositions. The method involves a new interpolation analysis. In numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method; in this way, he dramatically improves the resolution of contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time-dependent problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, Edward J.; Binns, Jack; Peña Alvarez, Miriam
The observation of high-temperature superconductivity in hydrogen sulfide (H2S) at high pressures has generated considerable interest in compressed hydrogen-rich compounds. High-pressure hydrogen selenide (H2Se) has also been predicted to be superconducting at high temperatures; however, its behaviour and stability upon compression remain unknown. In this study, we synthesize H2Se in situ from elemental Se and molecular H2 at pressures of 0.4 GPa and temperatures of 473 K. On compression at 300 K, we observe the high-pressure solid phase sequence (I-I'-IV) of H2Se through Raman spectroscopy and x-ray diffraction measurements, before dissociation into its constituent elements. Through the compression of H2Se in H2 media, we also observe the formation of a host-guest structure, (H2Se)2H2, which is stable at the same conditions as H2Se with respect to decomposition. These measurements show that the behaviour of H2Se is remarkably similar to that of H2S and provide further understanding of the hydrogen chalcogenides under pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamoto, Makoto; Lazarian, Alexandre, E-mail: mtakamoto@eps.s.u-tokyo.ac.jp, E-mail: alazarian@facstaff.wisc.edu
2016-11-10
In this Letter, we report compressible mode effects on relativistic magnetohydrodynamic (RMHD) turbulence in Poynting-dominated plasmas using three-dimensional numerical simulations. We decomposed fluctuations in the turbulence into 3 MHD modes (fast, slow, and Alfvén) following the mode-decomposition procedure of Cho and Lazarian, and analyzed their energy spectra and structure functions separately. We also analyzed the ratio of compressible-mode to Alfvén-mode energy with respect to its Mach number. We found that the compressible-mode ratio increases not only with the Alfvén Mach number, but also with the background magnetization, which indicates a strong coupling between the fast and Alfvén modes. It also signifies the appearance of a new regime of RMHD turbulence in Poynting-dominated plasmas in which the fast and Alfvén modes are strongly coupled and, unlike the non-relativistic MHD regime, cannot be treated separately. This finding will affect particle acceleration efficiencies obtained by assuming Alfvénic critical-balance turbulence and can change the resulting photon spectra emitted by non-thermal electrons.
Launch Safety, Toxicity, and Environmental Effects of the High Performance Oxidizer ClF(5)
1994-03-31
Pentafluoride," J. Phys. Chem. 74, 1183 (1970). 7. J. A. Blauer, H. G. McMath, F. C. Jaye, and V. S. Engleman, "Decomposition Kinetics of Chlorine Trifluoride ...similar. A greater concern is propellant release in the stratosphere. Fluorine atoms lead to catalytic decomposition of O3 at rates similar to chlorine ...Propulsion Meeting - Publication 550, 3, 447 (1990). 4. R. F. Sawyer, E. T. McMullen, and P. Purgalis, "The Reaction of Hydrazine and Chlorine Pentafluoride
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... browser interface; (8) LED luminaires for roadway illumination with customized filter application to meet... customized filter application to meet specific lighting requirements of Mauna Kea observatory; (9) Compressed...
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.
2017-06-01
Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from Macro-Block losses due to either packet dropping or fading-motivated bit errors. The robust performance of 3D-MVV transmission schemes over wireless channels has therefore become a considerable research issue, due to restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot by several cameras around a single object simultaneously. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD-watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It is used to reduce the channel effects on the transmitted bit streams and it also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD-watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD-watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.
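Chaotic Baker interleaving of the kind mentioned above is commonly built on the discretized (Fridrich-style) generalized Baker map, which scrambles an N x N block of data under a secret key of strip widths. The sketch below is a generic illustration of that map, not the authors' exact scheme; the key and block size are arbitrary assumptions:

```python
def baker_permute(block, key):
    """Apply the discretized (generalized) Baker map once to an N x N block.

    block: N x N list of lists of values (e.g. bits or pixel bytes).
    key:   sequence of strip widths n_i that sum to N; each n_i must divide N.
    The map sends each vertical strip of width n_i to a horizontal strip,
    stretching horizontally and contracting vertically -- a bijection, so the
    receiver can deinterleave by inverting it with the same key.
    """
    N = len(block)
    assert sum(key) == N and all(N % n == 0 for n in key), "invalid key"
    out = [[None] * N for _ in range(N)]
    Ni = 0                          # left edge of the current vertical strip
    for n in key:
        q = N // n                  # contraction factor for this strip
        for x in range(Ni, Ni + n):
            for y in range(N):
                new_x = q * (x - Ni) + (y % q)
                new_y = (y - (y % q)) // q + Ni
                out[new_y][new_x] = block[y][x]
        Ni += n
    return out
```

Because the map is bijective, every element of the block appears exactly once in the output; repeated application with a secret key gives the scrambling/encryption effect exploited before modulation.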
2015-01-01
AFRL-RY-WP-TR-2014-0230: Influence of Spectral Transfer Processes in Compressible Low Frequency Plasma Turbulence on Scattering and Refraction of Electromagnetic Signals. ...research is to analyze influence of plasma turbulence on hypersonic sensor systems and NGOTHR applications and to meet the Air Force's ever-increasing
Network Monitoring Traffic Compression Using Singular Value Decomposition
2014-03-27
Shootouts." Workshop on Intrusion Detection and Network Monitoring. 1999. [12] Goodall, John R. "Visualization is better! a comparative evaluation..." Visualization for Cyber Security, 2009. VizSec 2009. 6th International Workshop on IEEE, 2009. [13] Goodall, John R., and Mark Sowul. "VIAssist...Viruses and Log Visualization." In Australian Digital Forensics Conference. Paper 54, 2008. [30] Tesone, Daniel R., and John R. Goodall. "Balancing
NASA Technical Reports Server (NTRS)
Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.
2017-01-01
In this work, the fiber kinking phenomenon, which is known as the failure mechanism that takes place when a fiber reinforced polymer is loaded under longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) and error control (EC), were set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb data 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
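The core PCA step above — representing each extracted beat by a few principal-component scores plus the shared eigenvectors — can be sketched with NumPy's SVD. This is a generic illustration on synthetic beats; it omits the paper's pre-processing, quantization, and delta/Huffman coding stages:

```python
import numpy as np

def pca_compress(beats, k):
    """Compress a matrix of aligned ECG beats (one beat per row) by keeping
    the top-k principal components. Returns (mean, components, scores):
    storing these is much cheaper than storing every beat when k is small."""
    mean = beats.mean(axis=0)
    centered = beats - mean
    # SVD of the centered data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:k]                 # k eigenvectors (k x n_samples)
    scores = centered @ components.T    # k coefficients per beat
    return mean, components, scores

def pca_reconstruct(mean, components, scores):
    """Rebuild the beats from their low-dimensional representation."""
    return scores @ components + mean
```

When the beats truly lie near a k-dimensional subspace, the reconstruction error is tiny, which is the property a BRC/EC criterion would trade off against bit rate.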
Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao
2016-05-19
Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm consisting of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced by the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
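The OMP building block used in the first phase of vOMMP greedily selects dictionary columns and re-fits a least-squares problem on the growing support. A minimal NumPy sketch of plain OMP follows — not the hardware-oriented vOMMP-MIF variant, and using an explicit least-squares solve in place of the QR-based matrix-inversion-free step:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a sparse vector x
    such that Phi @ x ~= y, where Phi is an m x n sensing matrix (m < n)."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # greedy step: pick the column most correlated with the residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # "orthogonal" step: re-fit all selected coefficients by least squares
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

With enough measurements relative to the sparsity level, OMP recovers the true support exactly, after which the least-squares step makes the residual vanish.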
Spectroscopic studies of fly ash-based geopolymers
NASA Astrophysics Data System (ADS)
Rożek, Piotr; Król, Magdalena; Mozgawa, Włodzimierz
2018-06-01
In the present work, fly-ash based geopolymers with different contents of alkali activator and water were prepared. Alkali activation was conducted with sodium hydroxide (NaOH) at SiO2/Na2O molar ratios of 3, 4, and 5. Water content was at ratios of 30, 40, and 50 wt% with respect to the weight of the fly ash. Structural and microstructural characterization (FT-IR spectroscopy, 29Si and 27Al MAS NMR, X-ray diffraction, SEM) of the specimens as well as compressive strength and apparent density measurements were carried out. The obtained geopolymers are mainly amorphous due to the presence of disordered aluminosilicate phases. However, hydroxysodalite has been identified as a crystalline product of geopolymerization. The major band in the mid-infrared spectra (at about 1000 cm⁻¹) is related to Si–O(Si,Al) asymmetric stretching vibrations and is an indicator of geopolymeric network formation. Several component bands in this region can be noticed after the decomposition process. Decomposition of the band at 1450 cm⁻¹ (vibrations of C–O bonds in the bicarbonate group) has also been conducted. Higher NaOH content favors carbonation, inasmuch as the intensity of this band then increases. Both water and alkaline activator contents have an influence on the compressive strength and microstructure of the obtained fly-ash based geopolymers.
Shock chemistry in SX358 foams
NASA Astrophysics Data System (ADS)
Maerzke, Katie; Coe, Joshua; Fredenburg, Anthony; Lang, John; Dattelbaum, Dana
2017-06-01
We have developed new equation of state models for SX358, a cross-linked PDMS polymer. Recent experiments on SX358 over a range of initial densities (0-65% porous) have yielded new data that allow for a more thorough calibration of the equations of state. SX358 chemically decomposes under shock compression, as evidenced by a cusp in the shock locus. We therefore treat this material using two equations of state: a SESAME model for the unreacted material and a free-energy minimization assuming full chemical and thermodynamic equilibrium for the decomposition products. The shock locus of porous SX358 is found to be "anomalous" in that the decomposition reaction causes a volume expansion, rather than a volume collapse. Similar behavior has been observed in other polymer foams, notably polyurethane.
42 CFR 84.141 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Supplied-Air...) Compressed, gaseous breathing air shall meet the applicable minimum grade requirements for Type I gaseous air set forth in the Compressed Gas Association Commodity Specification for Air, G-7.1, 1966 (Grade D or...
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock-capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.
NASA Technical Reports Server (NTRS)
Lohner, Kevin A. (Inventor); Mays, Jeffrey A. (Inventor); Sevener, Kathleen M. (Inventor)
2004-01-01
A method for designing and assembling a high performance catalyst bed gas generator for use in decomposing propellants, particularly hydrogen peroxide propellants, for use in target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The gas generator utilizes a sectioned catalyst bed system, and incorporates a robust, high temperature mixed metal oxide catalyst. The gas generator requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. The high performance catalyst bed gas generator system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
NASA Astrophysics Data System (ADS)
Wang, Xiaochen; Shao, Yun; Tian, Wei; Li, Kun
2018-06-01
This study explored different methodologies using a C-band RADARSAT-2 quad-polarized Synthetic Aperture Radar (SAR) image acquired over China's Yellow Sea to investigate polarization decomposition parameters for identifying mixed floating pollutants against a complex ocean background. It was found that a single polarization decomposition did not meet the demand for detecting and classifying multiple floating pollutants, even with a quad-polarized SAR image. Furthermore, considering that Yamaguchi decomposition is sensitive to vegetation and the algal variety Enteromorpha prolifera, while H/A/alpha decomposition is sensitive to oil spills, a combination of parameters deduced from these two decompositions was proposed for marine environmental monitoring of mixed floating sea-surface pollutants. A combination of volume scattering, surface scattering, and scattering entropy was the best indicator for classifying mixed floating pollutants against a complex ocean background. The Kappa coefficients for Enteromorpha prolifera and oil spills were 0.7514 and 0.8470, respectively, evidence that the composite polarized parameters based on quad-polarized SAR imagery proposed in this research are an effective monitoring method for complex marine pollution.
The increase of compressive strength of natural polymer modified concrete with Moringa oleifera
NASA Astrophysics Data System (ADS)
Susilorini, Rr. M. I. Retno; Santosa, Budi; Rejeki, V. G. Sri; Riangsari, M. F. Devita; Hananta, Yan's. Dianaga
2017-03-01
Polymer modified concrete is one of several concrete technology innovations to meet the need for strong and durable concrete. Previous research found that Moringa oleifera can be applied as a natural polymer modifier in mortars, and natural polymer modified mortar using Moringa oleifera is proven to increase compressive strength significantly. In this research, Moringa oleifera seeds were ground and added into the concrete mix for natural polymer modified concrete, based on the optimum composition of the previous research. The research investigated the increase of compressive strength of polymer modified concrete with Moringa oleifera as a natural polymer modifier. There were 3 compositions of natural polymer modified concrete with Moringa oleifera, referring to the optimum compositions of the previous research. Several cylindrical specimens of 10 cm x 20 cm were produced and tested for compressive strength at ages of 7, 14, and 28 days. The research reached the following conclusions: (1) Natural polymer modified concrete with Moringa oleifera, with and without skin, has higher compressive strength compared to natural polymer modified mortar with Moringa oleifera and also to control specimens; (2) The best result for natural polymer modified concrete with Moringa oleifera without skin is achieved by specimens containing Moringa oleifera at 0.2% of cement weight; and (3) The compressive strength increase of natural polymer modified concrete with Moringa oleifera without skin is about 168.11-221.29% compared to control specimens.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B
2011-01-01
We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent, structureless random part. POD is based on the singular value decomposition and decomposes the flow into basis functions that are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher-order statistics.
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis
2016-11-01
Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
Numerical simulation of a compressible homogeneous, turbulent shear flow. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Feiereisen, W. J.; Reynolds, W. C.; Ferziger, J. H.
1981-01-01
A direct, low-Reynolds-number numerical simulation was performed on a homogeneous turbulent shear flow. The full compressible Navier-Stokes equations were used in a simulation on the ILLIAC IV computer with a 64,000-point mesh. The flow fields generated by the code are used as an experimental data base to examine the behavior of the Reynolds stresses in this simple, compressible flow. The variation of the structure of the stresses and their dynamic equations as the character of the flow changed is emphasized. The structure of the stress tensor is more heavily dependent on the shear number and less on the fluctuating Mach number. The pressure-strain correlation tensor in the dynamic equations is directly calculated in this simulation. These correlations are decomposed into several parts, as contrasted with the traditional incompressible decomposition into two parts. The performance of existing models for the conventional terms is examined, and a model is proposed for the 'mean fluctuating' part.
SVD compression for magnetic resonance fingerprinting in the time domain.
McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A
2014-12-01
Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, one desires a more efficient method to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition, which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm, by a factor of between 3.4-4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
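The time-domain SVD compression described above can be illustrated in a few lines of NumPy: the dictionary of simulated signal evolutions is projected onto its leading right singular vectors, and matching then proceeds by normalized inner products in the low-dimensional space. The dictionary here (sampled sinusoids) and the chosen rank are toy assumptions standing in for Bloch-simulated fingerprints:

```python
import numpy as np

def compress_dictionary(D, rank):
    """Low-rank time-domain compression of a fingerprinting dictionary
    D (entries x T). Returns the compressed dictionary (entries x rank)
    and the projection basis Vr (T x rank)."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Vr = Vt[:rank].T            # leading right singular vectors of D
    return D @ Vr, Vr

def match(D_small, Vr, signal):
    """Project an observed time-domain signal onto the basis and match by
    maximal normalized inner product against the compressed dictionary."""
    proj = signal @ Vr
    scores = (D_small @ proj) / (np.linalg.norm(D_small, axis=1)
                                 * np.linalg.norm(proj))
    return int(np.argmax(scores))
```

Because the inner products are computed over `rank` coefficients instead of T time points, the pattern-recognition step is proportionally cheaper, which is the source of the reported speed-up.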
Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration
NASA Astrophysics Data System (ADS)
Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel
2017-11-01
In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
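The multi-rate idea — macro-steps for the slow physics, micro-steps for the fast physics — can be sketched with the first-order member of the Adams-Bashforth family (forward Euler). This is a schematic illustration only: the MRAB schemes discussed above use higher-order AB formulas and coupling/interpolation strategies that this sketch omits (here the slow state is simply frozen over each macro-step):

```python
def multirate_euler(f_slow, f_fast, y_slow, y_fast, t_end, H, m):
    """Two-rate integrator sketch using forward Euler (first-order
    Adams-Bashforth). The slow component takes macro-steps of size H while
    the fast component takes m micro-steps of size h = H / m."""
    t, h = 0.0, H / m
    while t < t_end - 1e-12:
        # advance the slow variable with one macro-step
        y_slow_new = y_slow + H * f_slow(y_slow, y_fast)
        # substep the fast variable, holding the slow state frozen
        for _ in range(m):
            y_fast += h * f_fast(y_slow, y_fast)
        y_slow = y_slow_new
        t += H
    return y_slow, y_fast
```

The payoff is that the expensive small step is only applied where the fast timescale lives, instead of globally as a single-rate integrator would require.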
49 CFR 393.68 - Compressed natural gas fuel containers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... containers. (a) Applicability. The rules in this section apply to compressed natural gas (CNG) fuel... auxiliary equipment installed on, or used in connection with commercial motor vehicles. (b) CNG containers... equipped with a CNG fuel tank must meet the CNG container requirements of FMVSS No. 304 (49 CFR 571.304) in...
A Unified View of Global Instabilities of Compressible Flow Over Open Cavities
2005-06-30
the early work of Rossiter [3], have treated the shear-layer emanating from the upstream comer of the cavity in isolation ( using parallel flow... using a domain-decomposition method. The code has optional equation sets to solve either (i) nonlinear Navier-Stokes, (ii) Navier-Stokes equations...early experments of Maull and East [15]. They used oil flow visualization of surface streamlines on the cavity bottom to show the existence, under certain
Defect inspection using a time-domain mode decomposition technique
NASA Astrophysics Data System (ADS)
Zhu, Jinlong; Goddard, Lynford L.
2018-03-01
In this paper, we propose a technique called time-varying frequency scanning (TVFS) to meet the challenges in killer defect inspection. The proposed technique enables the dynamic monitoring of defects by checking the hopping in the instantaneous frequency data and the classification of defect types by comparing the difference in frequencies. The TVFS technique utilizes the bidimensional empirical mode decomposition (BEMD) method to separate the defect information from the sea of system errors. This significantly improves the signal-to-noise ratio (SNR) and, moreover, potentially enables reference-free defect inspection.
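As a rough 1-D illustration of the frequency-hopping idea (the paper works on 2-D images via BEMD, which is not reproduced here), a Hilbert-transform instantaneous-frequency estimate flags the point where a signal's frequency jumps; all signal parameters are made up.

```python
import numpy as np
from scipy.signal import hilbert

# 1-D stand-in for TVFS-style monitoring: a hop in instantaneous
# frequency marks the anomalous region.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.where(t < 0.5, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))

phase = np.unwrap(np.angle(hilbert(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, one sample shorter than x

f_first = np.median(inst_freq[:450])            # away from the transition
f_second = np.median(inst_freq[550:])
```

The medians sit near 5 Hz and 20 Hz, so a simple threshold on the jump detects the "hop".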
Retained energy-based coding for EEG signals.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-09-01
The use of long-term records in electroencephalography is becoming more frequent due to their diagnostic potential and the growth of novel signal processing methods that deal with these types of recordings. In these cases, the considerable volume of data to be managed makes compression necessary to reduce the bit rate for transmission and storage applications. In this paper, a new compression algorithm specifically designed to encode electroencephalographic (EEG) signals is proposed. Cosine modulated filter banks are used to decompose the EEG signal into a set of subbands well adapted to the frequency bands characteristic of the EEG. Given that no regular pattern may be easily extracted from the signal in the time domain, a thresholding-based method is applied for quantizing samples. The method of retained energy is designed for efficiently computing the threshold in the decomposition domain which, at the same time, allows the quality of the reconstructed EEG to be controlled. The experiments are conducted over a large set of signals taken from two public databases available at Physionet, and the results show that the compression scheme yields better compression than other reported methods. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
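A minimal sketch of retained-energy thresholding, with a plain DCT standing in for the paper's cosine-modulated filter bank; the 99% retained-energy target and the toy EEG-like signal are assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy EEG-like signal: two rhythms plus noise (illustrative only).
rng = np.random.default_rng(1)
t = np.arange(2048) / 256.0
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 3 * t) \
      + 0.1 * rng.standard_normal(t.size)

# Decompose (DCT stands in for the cosine-modulated filter bank),
# then keep the fewest coefficients that retain 99% of the energy.
c = dct(eeg, norm='ortho')
energy = c ** 2
order = np.argsort(energy)[::-1]
cum = np.cumsum(energy[order])
n_keep = int(np.searchsorted(cum, 0.99 * cum[-1]) + 1)

kept = np.zeros_like(c)
kept[order[:n_keep]] = c[order[:n_keep]]
recon = idct(kept, norm='ortho')    # reconstruction error bounded by design
```

By Parseval's relation for the orthonormal transform, discarding at most 1% of coefficient energy bounds the reconstruction error, which is how the threshold controls quality.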
Multidimensional NMR inversion without Kronecker products: Multilinear inversion
NASA Astrophysics Data System (ADS)
Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos
2016-08-01
Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
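The memory argument can be demonstrated directly: for separable 2-D kernels, the multilinear forward model K1 F K2^T produces exactly the same data as the Kronecker-product model, without ever forming the large matrix. Sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
K1 = rng.standard_normal((30, 50))   # kernel along dimension 1 (toy size)
K2 = rng.standard_normal((40, 60))   # kernel along dimension 2 (toy size)
F = rng.standard_normal((50, 60))    # 2-D distribution to invert for

# Kronecker route: dense (30*40) x (50*60) matrix.
big = np.kron(K1, K2)
m_kron = big @ F.reshape(-1)

# Multilinear route: same forward model, no Kronecker product formed.
# (A least-squares gradient uses the same trick:
#  grad = K1.T @ (K1 @ F @ K2.T - M) @ K2.)
m_multi = (K1 @ F @ K2.T).reshape(-1)
```

The two measurement vectors agree to machine precision, while the multilinear route stores only K1 and K2 (a few thousand entries) instead of the millions in the Kronecker matrix.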
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
In 1998, GKN Sinter Metals completed a successful compressed air system improvement project at its Salem, Indiana manufacturing facility. The project was performed after GKN undertook a survey of its system in order to solve air quality problems and to evaluate whether the capacity of their compressed air system would meet their anticipated plant expansion. Once the project was implemented, the plant was able to increase production by 31% without having to add any additional compressor capacity.
A practical material decomposition method for x-ray dual spectral computed tomography.
Hu, Jingjing; Zhao, Xing
2016-03-17
X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual-energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
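For intuition, the image-based variant (which the abstract notes is only approximative) reduces to solving a small linear system per pixel; the attenuation numbers below are made up for illustration, not measured spectra.

```python
import numpy as np

# Illustrative effective attenuation values for two basis materials at
# two spectra (made-up numbers, not physical data).
A = np.array([[0.35, 0.55],    # low-kVp:  water, bone
              [0.20, 0.40]])   # high-kVp: water, bone

true = np.array([[0.9, 0.1],   # pixel 1: mostly water
                 [0.3, 0.7],   # pixel 2: mostly bone
                 [1.0, 0.0]])  # pixel 3: pure water

meas = true @ A.T                        # simulated dual-energy measurements
coeffs = np.linalg.solve(A, meas.T).T    # recover material fractions per pixel
```

With noiseless, geometrically consistent data the 2x2 solve recovers the fractions exactly; the paper's contribution is producing consistent rawdata so the more accurate rawdata-based decomposition applies.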
Nano and micro U1-xThxO2 solid solutions: From powders to pellets
NASA Astrophysics Data System (ADS)
Balice, Luca; Bouëxière, Daniel; Cologna, Marco; Cambriani, Andrea; Vigier, Jean-François; De Bona, Emanuele; Sorarù, Gian Domenico; Kübel, Christian; Walter, Olaf; Popa, Karin
2018-01-01
Nuclear fuel production, structural materials, separation techniques, and waste management may all benefit from extensive knowledge of nano-nuclear technology. Along this line, we present the production of U1-xThxO2 (x = 0 to 1) mixed-oxide nanocrystals (NCs) through the hydrothermal decomposition of the oxalates in hot compressed water at 250 °C. Particles of spherical shape and a size of about 5.5-6 nm are obtained during the hydrothermal decomposition process. The powdery nanocrystalline products were consolidated by spark plasma sintering into homogeneous mixed-oxide pellets with grain sizes in the 0.4 to 5.5 μm range. Grain growth and mechanical properties were studied as a function of composition and size. No grain size effect was observed on the hardness or elastic modulus.
A polygon soup representation for free viewpoint video
NASA Astrophysics Data System (ADS)
Colleu, T.; Pateux, S.; Morin, L.; Labit, C.
2010-02-01
This paper presents a polygon soup representation for multiview data. Starting from a sequence of multi-view video plus depth (MVD) data, the proposed representation takes into account, in a unified manner, different issues such as compactness, compression, and intermediate view synthesis. The representation is built in two steps. First, a set of 3D quads is extracted using a quadtree decomposition of the depth maps. Second, a selective elimination of the quads is performed in order to reduce inter-view redundancies and thus provide a compact representation. Moreover, the proposed methodology for extracting the representation reduces ghosting artifacts. Finally, an adapted compression technique is proposed that limits coding artifacts. The results presented on two real sequences show that the proposed representation provides a good trade-off between rendering quality and data compactness.
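The first step, quadtree decomposition of a depth map, can be sketched as follows; the split criterion (depth range within a block) and the tolerance are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def quadtree(depth, x, y, size, tol, out):
    """Split a square block until the depth inside is nearly flat,
    then emit one quad (x, y, size, mean depth) per leaf."""
    block = depth[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:
        out.append((x, y, size, float(block.mean())))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(depth, x + dx, y + dy, h, tol, out)

# Toy depth map: a flat background with a nearer square object.
depth = np.full((64, 64), 10.0)
depth[16:32, 16:32] = 5.0
quads = []
quadtree(depth, 0, 0, 64, 0.5, quads)
```

Flat regions collapse into a few large quads while depth discontinuities force refinement, which is what makes the extracted quad set compact.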
The History of the APS Shock Compression of Condensed Matter Topical Group
NASA Astrophysics Data System (ADS)
Forbes, Jerry W.
2001-06-01
To provide broader scientific recognition and to advance the science of shock-compressed condensed matter, a group of APS members worked within the Society to make this technical field an active part of APS. Individual papers were given at APS meetings starting in the 1950s, and later whole sessions were organized, starting at the 1967 Pasadena meeting. Topical conferences began in 1979 in Pullman, WA, where George Duvall and Dennis Hayes were co-chairs. Almost all early topical conferences were sanctioned by the APS, while those held after 1985 were official APS meetings. In 1984, after consulting with a number of people in the shock wave field, Robert Graham circulated a petition to form an APS topical group. He obtained signatures from a balanced cross-section of the community. William Havens, the executive secretary of APS, informed Robert Graham by letter on November 28, 1984 that the APS Council had officially accepted the formation of this topical group at its October 28, 1984 meeting. The first election occurred July 23, 1985, where Robert Graham was elected chairman, William Nellis vice-chairman, and Jerry Forbes secretary/treasurer. The topical group remains viable today by holding a topical conference in odd-numbered years and shock wave sessions at APS general meetings in even-numbered years. A major benefit of being an official unit of APS is the allotment of APS fellows every year. The APS shock compression award, established in 1987, has also provided broad recognition of many major scientific accomplishments in this field.
JANNAF 18th Propulsion Systems Hazards Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Cocchiaro, James E. (Editor); Gannaway, Mary T. (Editor)
1999-01-01
This volume, the first of two, is a compilation of 18 unclassified/unlimited-distribution technical papers presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 18th Propulsion Systems Hazards Subcommittee (PSHS) meeting held jointly with the 36th Combustion Subcommittee (CS) and 24th Airbreathing Propulsion Subcommittee (APS) meetings. The meeting was held 18-21 October 1999 at NASA Kennedy Space Center and The DoubleTree Oceanfront Hotel, Cocoa Beach, Florida. Topics covered at the PSHS meeting include: shaped charge jet and kinetic energy penetrator impact vulnerability of gun propellants; thermal decomposition and cookoff behavior of energetic materials; violent reaction and detonation phenomena of solid energetic materials subjected to shock and impact stimuli; and hazard classification, insensitive munitions, and propulsion systems safety.
Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew
2009-01-01
Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition, and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
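To illustrate the flavor of adaptive predictive lossless coding (this is a toy sign-LMS sketch, not the actual JPL Fast Lossless algorithm), a weight adapted on the fly predicts the current band from the previous one, leaving small integer residuals for the entropy coder. The synthetic bands and adaptation rate are assumptions.

```python
import numpy as np

# Toy "hyperspectral" lines: adjacent bands share spatial structure.
rng = np.random.default_rng(3)
base = np.cumsum(rng.integers(-2, 3, size=4096))         # shared structure
band_prev = base + rng.integers(-1, 2, size=base.size)   # band i-1
band_cur = base + rng.integers(-1, 2, size=base.size)    # band i

w, mu = 1.0, 1e-4              # predictor weight and adaptation rate (assumed)
residuals = np.empty(base.size)
for i in range(base.size):
    pred = w * band_prev[i]
    e = band_cur[i] - pred
    residuals[i] = np.rint(e)                      # what would be entropy-coded
    w += mu * np.sign(e) * np.sign(band_prev[i])   # sign-LMS update
```

The residuals are far less spread out than the raw band values, so they cost far fewer bits to code, and decoding is exact because the decoder can run the identical predictor.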
Thermal decomposition of high-nitrogen energetic compounds: TAGzT and GUzT
NASA Astrophysics Data System (ADS)
Hayden, Heather F.
The U.S. Navy is exploring high-nitrogen compounds as burning-rate additives to meet the growing demands of future high-performance gun systems. Two high-nitrogen compounds investigated as potential burning-rate additives are bis(triaminoguanidinium) 5,5'-azobitetrazolate (TAGzT) and bis(guanidinium) 5,5'-azobitetrazolate (GUzT). Small-scale tests showed that formulations containing TAGzT exhibit significant increases in the burning rates of RDX-based gun propellants. However, when GUzT, a similarly structured molecule, was incorporated into the formulation, there was essentially no effect on the burning rate of the propellant. Through the use of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry methods, an investigation of the underlying chemical and physical processes that control the thermal decomposition behavior of TAGzT and GUzT, alone and in the presence of RDX, was conducted. The objective was to determine why GUzT is not as good a burning-rate enhancer in RDX-based gun propellants as TAGzT. The results show that TAGzT is an effective burning-rate modifier in the presence of RDX because the decomposition of TAGzT alters the initial stages of the decomposition of RDX. Hydrazine, formed in the decomposition of TAGzT, reacts faster with RDX than RDX can decompose itself. The reactions occur at temperatures below the melting point of RDX, and thus the TAGzT decomposition products react with RDX in the gas phase. Although there is no hydrazine formed in the decomposition of GUzT, amines formed in the decomposition of GUzT react with aldehydes, formed in the decomposition of RDX, resulting in an increased reaction rate of RDX in the presence of GUzT. However, GUzT is not an effective burning-rate modifier because its decomposition does not alter the initial gas-phase decomposition of RDX.
The decomposition of GUzT occurs at temperatures above the melting point of RDX. Therefore, the decomposition of GUzT affects reactions that are dominant in the liquid phase of RDX. Although GUzT is not an effective burning-rate modifier, the reaction between amines formed in its decomposition and aldehydes formed in the decomposition of RDX may have implications from an insensitive-munitions perspective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciatti, Stephen A.
The history, present and future of the compression ignition engine is a fascinating story that spans over 100 years, from the time of Rudolf Diesel to the highly regulated and computerized engines of the 21st Century. The development of these engines provided inexpensive, reliable and high power density machines to allow transportation, construction and farming to be more productive with less human effort than in any previous period of human history. The concept that fuels could be consumed efficiently and effectively with only the ignition of pressurized and heated air was a significant departure from the previous coal-burning architecture of the 1800s. Today, the compression ignition engine is undergoing yet another revolution. The equipment that provides transport, builds roads and infrastructure, and harvests the food we eat needs to meet more stringent requirements than ever before. How successfully 21st Century engineers are able to make compression ignition engine technology meet these demands will be of major influence in assisting developing nations (with over 50% of the world’s population) achieve the economic and environmental goals they seek.
Dynamic Factorization in Large-Scale Optimization
1993-03-12
variable production charges, distribution via multiple modes, taxes, duties and duty drawback, and inventory charges. See Harrison, Arntzen, and Brown... Decomposition," presented at CORS/TIMS/ORSA meeting, Vancouver, British Columbia, Canada, May. Harrison, T. P., Arntzen, B. C., and Brown, G. G. 1992
The 17th JANNAF Combustion Meeting, Volume 2
NASA Technical Reports Server (NTRS)
Eggleston, D. S. (Editor)
1980-01-01
Combustion of gun and nitramine propellants are discussed. Topics include gun charge designs, flame spreading in granular and stick charges, muzzle flash, ignition and combustion of liquid propellants for guns, laminar flames, decomposition and combustion of nitramine ingredients and nitramine propellant development.
Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition
NASA Astrophysics Data System (ADS)
Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa
2012-02-01
We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g., sweeping procedures), which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of n_svd small subcluster vectors using singular value decomposition. For low entanglement entropy S_ee (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-n_svd^(1/S_ee)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007
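The compression step can be sketched as follows: a state vector over a bipartitioned lattice is reshaped into a matrix and truncated by SVD. The toy state below is built from two product terms, so a small n_svd reconstructs it exactly; dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
dA, dB = 64, 64          # Hilbert-space dims of the two subclusters (toy sizes)

# A weakly entangled toy state: a product state plus a small correction.
a1, b1 = rng.standard_normal(dA), rng.standard_normal(dB)
a2, b2 = rng.standard_normal(dA), rng.standard_normal(dB)
psi = (np.outer(a1, b1) + 0.1 * np.outer(a2, b2)).reshape(-1)
psi /= np.linalg.norm(psi)

# Compress: reshape over the bipartition, keep n_svd singular pairs.
M = psi.reshape(dA, dB)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
n_svd = 2
approx = (U[:, :n_svd] * s[:n_svd]) @ Vt[:n_svd]
err = np.linalg.norm(M - approx)
```

Storage drops from dA*dB amplitudes to n_svd*(dA+dB) subcluster amplitudes, which is the source of the memory saving when the entanglement across the cut is low.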
NASA Astrophysics Data System (ADS)
Guo, Feng; Zhang, Hong; Hu, Hai-Quan; Cheng, Xin-Lu; Zhang, Li-Yan
2015-11-01
We investigate the Hugoniot curve, shock-particle velocity relations, and Chapman-Jouguet conditions of the hot dense system through molecular dynamics (MD) simulations. The detailed pathways from crystal nitromethane to the reacted state under shock compression are simulated. The phase transition of the N2 and CO mixture is found at about 10 GPa, mainly because dissociation of the C-O bond and formation of the C-C bond start at 10.0-11.0 GPa. The unreacted-state simulations of nitromethane are consistent with shock Hugoniot data. The complete pathway from unreacted to reacted state is discussed. Through chemical species analysis, we find that C-N bond breaking is the main event of the shock-induced nitromethane decomposition. Project supported by the National Natural Science Foundation of China (Grant No. 11374217) and the Shandong Provincial Natural Science Foundation, China (Grant No. ZR2014BQ008).
NASA Technical Reports Server (NTRS)
Bergan, Andrew C.; Leone, Frank A., Jr.
2016-01-01
A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on a Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
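The wavefront-ordering idea, treating the triangular matrix as a dependency graph and grouping unknowns into levels that can be solved in parallel, can be sketched with a dense toy matrix (a real implementation would use a sparse format):

```python
import numpy as np

def wavefront_levels(L):
    """Level scheduling for a lower-triangular solve: unknown i depends on
    every j < i with L[i, j] != 0; unknowns in the same level have no
    mutual dependencies and can be solved simultaneously."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0]
        if deps:
            level[i] = 1 + max(level[j] for j in deps)
    return level

# 4 unknowns: x2 depends on x0, x3 depends on x2; x1 is independent.
L = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 0., 1., 1.]])
levels = wavefront_levels(L)    # x0 and x1 form the first wavefront
```

Each level is one "wavefront": all unknowns in it are updated in parallel before moving to the next level.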
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used as it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system, which is very important for steganographic schemes that must remain undetectable by the human visual system (HVS). The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions as it can be molded with respect to the human perception of different frequencies in an image. The evaluation of the imperceptibility of the steganographic scheme with respect to the color perception of the HVS is done using standard methods such as the structural similarity (SSIM) and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
JANNAF 19th Propulsion Systems Hazards Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Cocchiaro, James E. (Editor); Kuckels, Melanie C. (Editor)
2000-01-01
This volume, the first of two, is a compilation of 25 unclassified/unlimited-distribution technical papers presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 19th Propulsion Systems Hazards Subcommittee (PSHS) meeting held jointly with the 37th Combustion Subcommittee (CS), 25th Airbreathing Propulsion Subcommittee (APS), and 1st Modeling and Simulation Subcommittee (MSS) meetings. The meeting was held 13-17 November 2000 at the Naval Postgraduate School and Hyatt Regency Hotel, Monterey, California. Topics covered at the PSHS meeting include: impact and thermal vulnerability of gun propellants; thermal decomposition and cookoff behavior of energetic materials; violent reaction and detonation phenomena of solid energetic materials subjected to shock and impact loading; and hazard classification and insensitive munitions testing of propellants and propulsion systems.
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
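A toy stand-in for the adaptive mode decision: the real protocol chooses between syndrome coding and hash coding (distributed source codes, not reproduced here), but the same variation-driven switch can be shown with a cheap position-difference mode and a raw fallback. The sequences and the threshold are made up.

```python
# Toy variation-driven mode switch (NOT the paper's actual DSC codes):
# when a block is close to the reference, code only the differences;
# when variation is high, fall back to sending the block as-is.
def encode_block(block, ref_block, max_diffs=4):
    diffs = [(i, b) for i, (b, r) in enumerate(zip(block, ref_block)) if b != r]
    if len(diffs) <= max_diffs:
        return ("diff", diffs)     # low variation: cheap mode
    return ("raw", block)          # high variation: raw fallback

ref = "ACGTACGTACGTACGT"
near = "ACGTACGAACGTACGT"          # one substitution vs. the reference
far = "TTTTTTTTTTTTTTTT"           # mostly different from the reference
mode_near = encode_block(near, ref)
mode_far = encode_block(far, ref)
```

The decoder, which also holds the reference, applies the recorded differences; the point is that the code length adapts to how far each subsequence strays from the reference.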
Electrical resistivity of fluid methane multiply shock compressed to 147 GPa
NASA Astrophysics Data System (ADS)
Wang, Yi-Gao; Liu, Fu-Sheng; Liu, Qi-Jun; Wang, Wen-Peng
2018-01-01
Shock wave experiments were carried out to measure the electrical resistivity of fluid methane. The pressure range of 89-147 GPa and the temperature range from 1800 to 2600 K were achieved with a two-stage light-gas gun. We obtained a minimum electrical resistivity value of 4.5 × 10^-2 Ω cm at a pressure and temperature of 147 GPa and 2600 K, which is two orders of magnitude higher than that of hydrogen under similar conditions. The data are interpreted in terms of a continuous transition from an insulator to a semiconductor state. One possible reason is chemical decomposition of methane during shock compression. As density and temperature increase with Hugoniot pressure, dissociation of fluid methane increases continuously, forming a H2-rich fluid.
Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids
NASA Astrophysics Data System (ADS)
Ma, Xinrong; Duan, Zhijian
2018-04-01
High-order discontinuous Galerkin finite element methods (DGFEM) are known to be good methods for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand considerable computational resources. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme is used to improve the computational efficiency of DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. To keep the load balanced across processors, the domain decomposition method is employed. Numerical experiments were performed for inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicate that the parallel algorithm improves speedup and efficiency significantly and is well suited to computing complex flows.
2012-05-01
molten salts can be employed over a wide range of applications, which include solvents, electrolytes, pharmaceuticals and therapeutics, and...waxy, hygroscopic solid at room temperature, where the additional products in the HP series exist as liquids at room temperature. In general...compressed aluminum pans. Melting and decomposition points for solids were measured by DSC from 40 to 400 °C at a scan rate of 5 °C/min. IR spectra
Shock initiation of explosives: High temperature hot spots explained
NASA Astrophysics Data System (ADS)
Bassett, Will P.; Johnson, Belinda P.; Neelakantan, Nitin K.; Suslick, Kenneth S.; Dlott, Dana D.
2017-08-01
We investigated the shock initiation of energetic materials with a tabletop apparatus that uses km/s laser-driven flyer plates to initiate tiny explosive charges and obtains complete temperature histories with a high dynamic range. By comparing various microstructured formulations, including a pentaerythritol tetranitrate (PETN) based plastic explosive (PBX) denoted XTX-8003, we determined that micron-scale pores were needed to create high hot spot temperatures. In charges where micropores (i.e., micron-sized pores) were present, a hot spot temperature of 6000 K was observed; when the micropores were pre-compressed to nm scale, however, the hot spot temperature dropped to ˜4000 K. By comparing XTX-8003 with an analog that replaced PETN by nonvolatile silica, we showed that the high temperatures require gas in the pores, that the high temperatures were created by adiabatic gas compression, and that the temperatures observed can be controlled by the choice of ambient gases. The hot spots persist in shock-compressed PBXs even in vacuum because the initially empty pores became filled with gas created in-situ by shock-induced chemical decomposition.
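The adiabatic gas-compression mechanism implies a simple back-of-envelope estimate, T = T0 (V0/V)^(gamma-1), which also shows why the ambient gas matters. The compression ratio below is an assumed illustrative number, not a value measured in the paper.

```python
# Back-of-envelope adiabatic pore-collapse estimate (illustrative numbers):
# T = T0 * (V0/V)**(gamma - 1), so the hot-spot temperature depends
# strongly on the heat-capacity ratio gamma of the gas in the pore.
T0 = 300.0        # ambient temperature, K
ratio = 1000.0    # assumed volume compression of the collapsing micropore
temps = {}
for gas, gamma in [("argon (monatomic)", 5.0 / 3.0), ("N2 (diatomic)", 7.0 / 5.0)]:
    temps[gas] = T0 * ratio ** (gamma - 1.0)
```

A monatomic gas (larger gamma) heats far more than a diatomic one under the same compression, consistent with the observation that hot-spot temperatures can be tuned by the choice of ambient gas.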
Exact Theory of Compressible Fluid Turbulence
NASA Astrophysics Data System (ADS)
Drivas, Theodore; Eyink, Gregory
2017-11-01
We obtain exact results for compressible turbulence with any equation of state, using coarse-graining/filtering. We find two mechanisms of turbulent kinetic energy dissipation: scale-local energy cascade and ``pressure-work defect'', or pressure-work at viscous scales exceeding that in the inertial-range. Planar shocks in an ideal gas dissipate all kinetic energy by pressure-work defect, but the effect is omitted by standard LES modeling of pressure-dilatation. We also obtain a novel inverse cascade of thermodynamic entropy, injected by microscopic entropy production, cascaded upscale, and removed by large-scale cooling. This nonlinear process is missed by the Kovasznay linear mode decomposition, treating entropy as a passive scalar. For small Mach number we recover the incompressible ``negentropy cascade'' predicted by Obukhov. We derive exact Kolmogorov 4/5th-type laws for energy and entropy cascades, constraining scaling exponents of velocity, density, and internal energy to sub-Kolmogorov values. Although precise exponents and detailed physics are Mach-dependent, our exact results hold at all Mach numbers. Flow realizations at infinite Reynolds number are ``dissipative weak solutions'' of compressible Euler equations, similarly as Onsager proposed for incompressible turbulence.
NASA Astrophysics Data System (ADS)
Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.
2008-12-01
Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.
Dynamic Factorization in Large-Scale Optimization
1994-01-01
and variable production charges, distribution via multiple modes, taxes, duties and duty draw- back, and inventory charges. See Harrison, Arntzen and...34 Capital allocation and project selection via decomposition:’ presented at CORS/TIMS/ORSA meeting. Vancouver. Be ( 1989). T.P. Harrison. B.C. Arntzen and
JPEG 2000-based compression of fringe patterns for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter
2014-12-01
With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.
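The claim that off-axis fringe patterns concentrate energy in high-frequency bands can be checked numerically on a synthetic hologram; the carrier tilt, object wave, and band split below are arbitrary assumptions.

```python
import numpy as np

# Synthetic off-axis hologram: intensity |R + O|^2 with a tilted
# reference wave acting as a high-frequency carrier.
n = 256
y, x = np.mgrid[0:n, 0:n]
obj = np.exp(-(((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 30.0 ** 2)))
carrier = 2 * np.pi * (0.3 * x + 0.25 * y)        # assumed off-axis tilt
fringes = 1 + obj ** 2 + 2 * obj * np.cos(carrier)

# Spectral energy split: low-frequency disk vs. the rest of the plane.
F = np.fft.fftshift(np.fft.fft2(fringes - fringes.mean()))
P = np.abs(F) ** 2
r = np.hypot(x - n / 2, y - n / 2)
low = P[r < n / 8].sum()
high = P[r >= n / 8].sum()
```

Most of the energy sits in the carrier sidebands far from DC, the opposite of the 1/f^2 decay that conventional image coders assume, which motivates the arbitrary decomposition styles in the proposed framework.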
NASA Astrophysics Data System (ADS)
Wang, Yi; Trouvé, Arnaud
2004-09-01
A pseudo-compressibility method is proposed to modify the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma², where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term in the balance equation for total energy. This term is proportional to flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al (1985 Pressure gradient scaling method for fluid flow with nearly uniform pressure J. Comput. Phys. 58 361-76). It achieves gains in computational efficiency similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented into a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.
Edge compression techniques for visualization of dense directed graphs.
Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher
2013-12-01
We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules'-or groups of nodes-such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition which permits internal structure in modules and allows them to be nested; and Power Graph Analysis which further allows edges to cross module boundaries. These techniques all have the same goal--to compress the set of edges that need to be rendered to fully convey connectivity--but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by--and discuss in particular--the application to software dependency analysis.
A simple and efficient algorithm operating with linear time for MCEEG data compression.
Titus, Geevarghese; Sudhakar, M S
2017-09-01
Popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantification of system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC), to achieve lossy to near lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has a good recovery performance with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n) with an average encoding and decoding time per sample of 0.3 ms and 0.04 ms respectively. The performance of the algorithm is comparable with recent methods like fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validated the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain computer interfacing systems.
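The integer/fractional split at the core of SPC can be sketched as follows. This is a minimal illustration under assumed details: the normalization scale and what the spatial-domain encoder and pseudo-integer coder do downstream are placeholders, not the authors' implementation.

```python
import numpy as np

# Toy multichannel EEG block (channels x samples); values are arbitrary.
mceeg = np.array([[412.3, 415.8, 409.1],
                  [398.7, 401.2, 403.9]])

# Normalize so every sample maps into a fixed range (a per-block scale is
# assumed here; the paper's exact normalization may differ).
scale = np.max(np.abs(mceeg))
normalized = mceeg / scale * 10.0   # keep an integer part worth coding

# Split into integer and fractional streams, processed in parallel:
int_part = np.floor(normalized).astype(int)    # -> spatial-domain encoder
frac_part = normalized - int_part              # -> coded as pseudo-integers

# Recombination check: the two streams together reconstruct the input.
reconstructed = (int_part + frac_part) / 10.0 * scale
assert np.allclose(reconstructed, mceeg)
```

The point of the split is that the integer stream is highly redundant across channels (good for spatial coding), while the fractional stream is bounded in [0, 1) and can be coded compactly.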
Progressive Precision Surface Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M; Joy, KJ
2002-01-11
We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
Mangal, Sharad; Meiser, Felix; Morton, David; Larson, Ian
2015-01-01
Tablets represent the preferred and most commonly dispensed pharmaceutical dosage form for administering active pharmaceutical ingredients (APIs). Minimizing the cost of goods and improving manufacturing output efficiency have motivated companies to use direct compression as a preferred method of tablet manufacturing. Excipients dictate the success of direct compression, notably by optimizing powder formulation compactability and flow; thus there has been a surge in creating excipients specifically designed to meet these needs for direct compression. Greater scientific understanding of tablet manufacturing coupled with effective application of the principles of material science and particle engineering has resulted in a number of improved direct compression excipients. Despite this, significant practical disadvantages of direct compression remain relative to granulation, and this is partly due to the limitations of direct compression excipients. For instance, in formulating high-dose APIs, a much higher level of excipient is required relative to wet or dry granulation and so tablets are much bigger. Creating excipients to enable direct compression of high-dose APIs requires knowledge of the relationship between fundamental material properties and excipient functionalities. In this paper, we review the current understanding of the relationship between fundamental material properties and excipient functionality for direct compression.
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
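The layered imaging model behind MRC can be sketched with a toy compound page: a binary mask layer selects between a foreground (text) layer and a background (image) layer, and each layer can then be handed to a coder suited to its statistics. The global-threshold segmentation below is an assumption for illustration only, not part of the MRC model itself.

```python
import numpy as np

# Toy compound "page": smooth background with a dark text-like stroke.
page = np.full((4, 6), 200, dtype=np.uint8)
page[1, 1:5] = 10          # a dark horizontal stroke (text content)

# Mask layer: records which pixels belong to text (1) vs image (0).
# (A simple global threshold is assumed here; real segmenters are smarter.)
mask = (page < 128).astype(np.uint8)

# Foreground carries the text pixels, background the continuous-tone data;
# each layer would go to a different coder (e.g. a binary coder for the
# mask, a JPEG/wavelet coder for the background).
foreground = np.where(mask == 1, page, 0)
background = np.where(mask == 0, page, 0)

# The imaging model recombines the layers exactly.
recombined = np.where(mask == 1, foreground, background)
assert np.array_equal(recombined, page)
```

Because each layer is far more homogeneous than the original compound image, per-layer coders can outperform any single coder applied to the whole page.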
46 CFR 25.45-2 - Cooking systems on vessels carrying passengers for hire.
Code of Federal Regulations, 2014 CFR
2014-10-01
... stated therein, and liquefied petroleum gas (LPG), or compressed natural gas (CNG). (b) Cooking systems using LPG or CNG must meet the following requirements: (1) The design, installation, and testing of each... of each CNG system must meet ABYC A-22-78 or chapter 6 of NFPA 302. (3) Cooking systems using chapter...
Experiment to Capture Gaseous Products from Shock-Decomposed Materials
NASA Astrophysics Data System (ADS)
Holt, William; Mock, Willis, Jr.
2001-06-01
Recent gas gun experiments have been performed in which initially porous polytetrafluoroethylene (PTFE) powder specimens were shock compressed inside a closed steel container and soft recovered^1,2. Although a powder decomposition residue was produced in the container and analyzed in situ, no attempt was made to recover any gaseous decomposition products for analysis. The purpose of the present experiment is to extend these earlier studies to include the capture of gaseous products. The specimen container is constructed from two metal flanges and a metal gasket, held together by high-strength bolts. A cavity between the flanges contains a porous powder specimen of material to be shock-decomposed, and is connected to a gas sample cylinder via a metal tube and a valve. The system is evacuated prior to the experiment. A gas-gun-accelerated metal disk impacts the flat surface of one of the flanges. On impact, a stress wave passes through the flange and into the specimen material. If gaseous products are formed, they can be collected in the sample cylinder for subsequent analyses by mass spectrometry. Results will be presented for PTFE powder specimens. Work supported by the NSWCDD Independent Research Office. ^1W. Mock, Jr., W. H. Holt, and G. I. Kerley, in Shock Compression of Condensed Matter - 1997, S. C. Schmidt, D. P. Dandekar, and J. W. Forbes (AIP, New York, 1998), p. 671. ^2W. H. Holt, W. Mock, Jr., and F. Santiago, J. Appl. Phys. 88, 5485 (2000).
Columbia County Kindergarten Center Environmental Study Area Guide.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Office of Environment Education.
The guide lists seven program objectives and 15 activities guides for meeting the objectives. Included in each activity is an introduction, outdoor activity, classroom activity, and evaluation. Sample activities are: Animals Use Natural Materials to Provide Food and Shelter, Differences in Soil, Decomposition, Man-made or Natural Objects, Food…
40 CFR 267.111 - What general standards must I meet when I stop operating the unit?
Code of Federal Regulations, 2011 CFR
2011-07-01
... to protect human health and the environment, post-closure escape of hazardous waste, hazardous constituents, leachate, contaminated run-off, or hazardous waste decomposition products to the ground or... PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE...
40 CFR 267.111 - What general standards must I meet when I stop operating the unit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... to protect human health and the environment, post-closure escape of hazardous waste, hazardous constituents, leachate, contaminated run-off, or hazardous waste decomposition products to the ground or... PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE...
Multifunctional foaming agent to prepare aluminum foam with enhanced mechanical properties
NASA Astrophysics Data System (ADS)
Li, Xun; Liu, Ying; Ye, Jinwen; An, Xuguang; Ran, Huaying
2018-03-01
In this paper, CuSO4 was used as a foaming agent to prepare closed-cell aluminum foam (Al foam) in the temperature range of 680 °C ∼ 758 °C for the first time. The results show that CuSO4 is multifunctional: it acts as a foaming agent, increases viscosity, and reinforces the Al matrix. It has a wide decomposition temperature range of 641 °C ∼ 816 °C, and its sustained-release time is 5.5 min at 758 °C. The compression stress and energy absorption of CuSO4-Al foam are 6.89 MPa and 4.82 × 10⁶ J m⁻³ (at 50% compression strain), which are 77.12% and 99.17% higher than those of TiH2-Al foam at the same porosity (76%), due to the reinforcement of the Al matrix and uniform pore dispersion.
Compressed-sensing wavenumber-scanning interferometry
NASA Astrophysics Data System (ADS)
Bai, Yulei; Zhou, Yanzhou; He, Zhaoshui; Ye, Shuangli; Dong, Bo; Xie, Shengli
2018-01-01
The Fourier transform (FT), the nonlinear least-squares algorithm (NLSA), and eigenvalue decomposition algorithm (EDA) are used to evaluate the phase field in depth-resolved wavenumber-scanning interferometry (DRWSI). However, because the wavenumber series of the laser's output is usually accompanied by nonlinearity and mode-hop, FT, NLSA, and EDA, which are only suitable for equidistant interference data, often lead to non-negligible phase errors. In this work, a compressed-sensing method for DRWSI (CS-DRWSI) is proposed to resolve this problem. By using the randomly spaced inverse Fourier matrix and solving the underdetermined equation in the wavenumber domain, CS-DRWSI determines the nonuniform sampling and spectral leakage of the interference spectrum. Furthermore, it can evaluate interference data without prior knowledge of the object. The experimental results show that CS-DRWSI improves the depth resolution and suppresses sidelobes. It can replace the FT as a standard algorithm for DRWSI.
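The core idea of CS-DRWSI, solving an underdetermined system built from nonuniformly sampled Fourier-type rows under a sparsity prior, can be sketched as follows. This is a simplified real-valued analogue using ISTA; the paper's actual sensing matrix, regularization, and solver are not specified in the abstract, so all sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse depth profile: two reflectors out of n depth bins.
n, m = 128, 64                      # n unknowns, m nonuniform samples
x_true = np.zeros(n)
x_true[[17, 60]] = [1.0, 0.6]

# Nonuniformly spaced Fourier-type sensing matrix: rows at irregular
# wavenumbers, mimicking a nonlinear (mode-hopping) wavenumber scan.
k = np.sort(rng.uniform(0, 1, m))
A = np.cos(2 * np.pi * np.outer(k, np.arange(n)))
y = A @ x_true                      # recorded interference data

# Recover x from the underdetermined system y = A x by ISTA (l1 prior).
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
lam = 0.01                          # sparsity weight (illustrative)
x = np.zeros(n)
for _ in range(3000):
    g = x + A.T @ (y - A @ x) / L              # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print(np.flatnonzero(np.abs(x) > 0.2))  # indices of recovered reflectors
```

Despite having only 64 equations for 128 unknowns, the sparsity prior pins down the two reflector locations and amplitudes, which is why nonuniform sampling need not degrade depth resolution.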
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-01-05
SandiaMCR was developed to identify pure components and their concentrations from spectral data. This software efficiently implements multivariate curve resolution alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD). Version 3.37 also includes the PARAFAC-ALS and Tucker-1 (trilinear analysis) algorithms. The alternating least squares methods can be used to determine the composition without, or with incomplete, prior information on the constituents and their concentrations. It allows the specification of numerous preprocessing, initialization, data selection, and compression options for the efficient processing of large data sets. The software includes numerous options, including the definition of equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices, and data compression. The software has been designed to provide a practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.
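The alternating-least-squares core of MCR-type factorizations can be sketched in a few lines: factor the data matrix D ≈ C Sᵀ (concentrations times spectra), alternately solving for each factor. Non-negativity is enforced here by simple clipping for brevity; production implementations typically use NNLS plus the equality and normalization constraints mentioned above. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic mixture data: D (samples x wavelengths) = C_true @ S_true.T
C_true = rng.uniform(0, 1, (20, 2))    # concentrations, 2 pure components
S_true = rng.uniform(0, 1, (50, 2))    # pure-component spectra
D = C_true @ S_true.T

# ALS: alternately fit spectra given concentrations, then vice versa,
# clipping to keep both factors non-negative.
C = rng.uniform(0, 1, (20, 2))         # random initialization
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
assert residual < 0.1   # the bilinear model closely reproduces the data
```

Note that without extra constraints the recovered C and S are only determined up to scaling and permutation of the components, which is exactly why SandiaMCR exposes normalization and equality constraints.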
A guided wave dispersion compensation method based on compressed sensing
NASA Astrophysics Data System (ADS)
Xu, Cai-bin; Yang, Zhi-bo; Chen, Xue-feng; Tian, Shao-hua; Xie, Yong
2018-03-01
Ultrasonic guided waves have emerged as a promising tool for structural health monitoring (SHM) and nondestructive testing (NDT) due to their capability to propagate over long distances with minimal loss and their sensitivity to both surface and subsurface defects. The dispersion effect degrades the temporal and spatial resolution of guided waves. A novel ultrasonic guided wave processing method for both single-mode and multi-mode guided wave dispersion compensation is proposed in this work based on compressed sensing, in which a dispersion signal dictionary is built by utilizing the dispersion curves of the guided wave modes in order to sparsely decompose the recorded dispersive guided waves. Dispersion-compensated guided waves are obtained by utilizing a non-dispersion signal dictionary and the results of sparse decomposition. Numerical simulations and experiments are implemented to verify the effectiveness of the developed method for both single-mode and multi-mode guided waves.
Fast Plasma Instrument for MMS: Simulation Results
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.
2008-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers. Each analyzer has a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis using as a seed re-processed Cluster/PEACE electron measurements. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method.
Our preliminary moment calculations show remarkable agreement, within the uncertainties of the measurements, with the results obtained by the Cluster/PEACE electron spectrometers. The analyzed data were selected because they represent a potential reconnection event, as currently published.
42 CFR 84.79 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Self... respiratory tract irritating compounds. (c) Compressed, gaseous breathing air shall meet the applicable...
42 CFR 84.79 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Self... respiratory tract irritating compounds. (c) Compressed, gaseous breathing air shall meet the applicable...
42 CFR 84.79 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Self... respiratory tract irritating compounds. (c) Compressed, gaseous breathing air shall meet the applicable...
42 CFR 84.79 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Self... respiratory tract irritating compounds. (c) Compressed, gaseous breathing air shall meet the applicable...
42 CFR 84.79 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Self... respiratory tract irritating compounds. (c) Compressed, gaseous breathing air shall meet the applicable...
Invariant Functional Forms for K(r,P) Type Equations of State for Hydrodynamically Driven Flow
NASA Astrophysics Data System (ADS)
Hrbek, George
2001-06-01
At the 11th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter, group theoretic methods, as defined by Lie, were applied to the problem of temperature-independent, hydrodynamic shock in a Birch-Murnaghan continuum. (1) Group parameter ratios were linked to the physical quantities (i.e., KT, K'T, and K''T) specified for the various order Birch-Murnaghan approximations. This technique has now been generalized to provide a mathematical formalism applicable to a wide class of forms (i.e., K(r,P)) for the equation of state. Variations in material expansion and resistance (i.e., counter pressure) are shown to be functions of compression and material variation ahead of the expanding front. Illustrative examples include the Birch-Murnaghan, Vinet, Brennan-Stacey, Shanker, Tait, Poirier, and Jones-Wilkins-Lee (JWL) forms. The results of this study will allow the various equations of state, and their respective fitting coefficients, to be compared with experiments. To do this, one must introduce the group ratios into a numerical simulation for the flow and generate the density, pressure, and particle velocity profiles as the shock moves through the material. (2) (1) Hrbek, G. M., Invariant Functional Forms For The Second, Third, And Fourth Order Birch-Murnaghan Equation of State For Materials Subject to Hydrodynamic Shock, Proceedings of the 11th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 99), Snowbird, Utah (2) Hrbek, G. M., Physical Interpretation of Mathematically Invariant K(r,P) Type Equations Of State For Hydrodynamically Driven Flows, Submitted to the 12th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 01), Atlanta, Georgia
NASA Astrophysics Data System (ADS)
Asilah Khairi, Nor; Bahari Jambek, Asral
2017-11-01
An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary consumer of power, some researchers have proposed several compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms from these papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as a project prototype due to its high potential for meeting the project requirements. Besides that, it also performs better in terms of energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in wireless sensor networks (WSNs). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.
New single-layer compression bandage system for chronic venous leg ulcers.
Lee, Gillian; Rajendran, Subbiyan; Anand, Subhash
A new single-layer bandage system for the treatment of venous leg ulcers has been designed and developed at the University of Bolton. This three-dimensional (3D) knitted spacer fabric structure has been designed by making use of mathematical modelling and Laplace's law. The sustained graduated compression of the developed 3D knitted spacer bandages was tested and characterized, and compared with that of commercially available compression bandages. It was observed that the developed 3D single-layer bandage meets the ideal criteria stipulated for compression therapy. The laboratory results were verified by carrying out a pilot user study incorporating volunteers from different age groups. This article provides insight into the design and development of the new 3D knitted spacer bandage, and briefly discusses the issues of compression therapy systems intended for the treatment of venous leg ulcers.
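The Laplace's-law reasoning behind graduated compression can be illustrated with a toy calculation. A generic form of the law is assumed here (interface pressure proportional to tension and layer count, inversely proportional to limb radius and bandage width); the paper's actual model and constants are not given in the abstract.

```python
def sub_bandage_pressure(tension_N, layers, limb_radius_m, bandage_width_m):
    """Generic Laplace's-law estimate: interface pressure (Pa) rises with
    bandage tension and layer count, and falls with limb radius and
    bandage width. Units and values are illustrative, not the paper's."""
    return (tension_N * layers) / (limb_radius_m * bandage_width_m)

# The same bandage applied at constant tension up a leg of increasing
# radius produces naturally graduated pressure:
ankle = sub_bandage_pressure(3.0, 1, 0.04, 0.10)   # smaller radius
calf = sub_bandage_pressure(3.0, 1, 0.06, 0.10)    # larger radius

assert ankle > calf   # highest pressure at the ankle, as therapy requires
print(round(ankle), round(calf))
```

This is why a correctly designed single-layer structure can deliver graduated compression without the layer-stacking of traditional multi-layer systems: the limb's own geometry does part of the work.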
Atomistic Simulations of Chemical Reactivity of TATB Under Thermal and Shock Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manaa, M R; Reed, E J; Fried, L E
2009-09-23
The study of chemical transformations that occur at the reactive shock front of energetic materials provides important information for the development of predictive models at the grain-and continuum scales. A major shortcoming of current high explosives models is the lack of chemical kinetics data of the reacting explosive in the high pressure and temperature regimes. In the absence of experimental data, long-time scale atomistic molecular dynamics simulations with reactive chemistry become a viable recourse to provide an insight into the decomposition mechanism of explosives, and to obtain effective reaction rate laws. These rates can then be incorporated into thermo-chemical-hydro codes (such as Cheetah linked to ALE3D) for accurate description of the grain and macro scales dynamics of reacting explosives. In this talk, I will present quantum simulations of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) crystals under thermal decomposition (high density and temperature) and shock compression conditions. This is the first time that condensed phase quantum methods have been used to study the chemistry of insensitive high explosives. We used the quantum-based, self-consistent charge density functional tight binding method (SCC{_}DFTB) to calculate the interatomic forces for reliable predictions of chemical reactions, and to examine electronic properties at detonation conditions for a relatively long time-scale on the order of several hundreds of picoseconds. For thermal decomposition of TATB, we conducted constant volume-temperature simulations, ranging from 0.35 to 2 nanoseconds, at {rho} = 2.87 g/cm{sup 3} at T = 3500, 3000, 2500, and 1500 K, and {rho} = 2.9 g/cm{sup 3} and 2.72 g/cm{sup 3}, at T = 3000 K. We also simulated crystal TATB's reactivity under steady overdriven shock compression using the multi-scale shock technique.
We conducted shock simulations with specified shock speeds of 8, 9, and 10 km/s for up to 0.43 ns duration, enabling us to track the reactivity of TATB well into the formation of several stable gas products, such as H{sub 2}O, N{sub 2}, and CO{sub 2}. Although complex chemical transformations are occurring continuously in the dynamical, high temperature, reactive environment of our simulations, a simple overall scheme for the decomposition of TATB emerges: Water is the earliest decomposition product to form, followed by a polymerization (or condensation) process in which several remaining TATB fragments are joined together, initiating the early step in the formation of high-nitrogen clusters, along with stable products such as N{sub 2} and CO{sub 2}. Remarkably, these clusters with high concentration of carbon and nitrogen (and little oxygen) remain dynamically stable for the remaining period of the simulations. Our simulations thus reveal a hitherto unidentified region of high concentrations of nitrogen-rich heterocyclic clusters in reacting TATB, whose persistence impedes further reactivity towards final products of fluid N{sub 2} and solid carbon. These simulations also predict significant populations of charged species such as NCO{sup -}, H{sup +}, OH{sup -}, H{sub 3}O{sup +}, and O{sup -2}, the first such observation in a reacting explosive. Finally, a reduced four-step global reaction mechanism with Arrhenius kinetic rates for the decomposition of TATB, along with comparative Cheetah decomposition kinetics at various temperatures, has been constructed and will be discussed.
Code of Federal Regulations, 2011 CFR
2011-07-01
... I am an owner or operator of a stationary internal combustion engine using special fuels? 60.4217... Compression Ignition Internal Combustion Engines Special Requirements § 60.4217 What emission standards must I meet if I am an owner or operator of a stationary internal combustion engine using special fuels? (a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... I am an owner or operator of a stationary internal combustion engine using special fuels? 60.4217... Compression Ignition Internal Combustion Engines Special Requirements § 60.4217 What emission standards must I meet if I am an owner or operator of a stationary internal combustion engine using special fuels...
Code of Federal Regulations, 2014 CFR
2014-07-01
... I am an owner or operator of a stationary internal combustion engine using special fuels? 60.4217... Compression Ignition Internal Combustion Engines Special Requirements § 60.4217 What emission standards must I meet if I am an owner or operator of a stationary internal combustion engine using special fuels...
Code of Federal Regulations, 2010 CFR
2010-07-01
... I am an owner or operator of a stationary internal combustion engine using special fuels? 60.4217... Compression Ignition Internal Combustion Engines Special Requirements § 60.4217 What emission standards must I meet if I am an owner or operator of a stationary internal combustion engine using special fuels? (a...
Code of Federal Regulations, 2012 CFR
2012-07-01
... I am an owner or operator of a stationary internal combustion engine using special fuels? 60.4217... Compression Ignition Internal Combustion Engines Special Requirements § 60.4217 What emission standards must I meet if I am an owner or operator of a stationary internal combustion engine using special fuels...
Spectral Data Reduction via Wavelet Decomposition
NASA Technical Reports Server (NTRS)
Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)
2002-01-01
The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, the new larger data volumes from such hyperspectral sensors, however, present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition could be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms that preserves high- and low-frequency features during the signal decomposition, therefore preserving peaks and valleys found in typical spectra. When compared to the most widespread dimension reduction technique, principal component analysis (PCA), at the same compression rate, wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as a maximum likelihood method.
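The wavelet-reduction idea, keeping low-pass approximation coefficients so that spectral peaks and valleys survive dimension reduction, can be sketched with a Haar transform. The Haar filter is a minimal stand-in here; the wavelet family actually used in the paper is not specified in the abstract.

```python
import numpy as np

def haar_reduce(spectrum, levels=1):
    """Reduce a spectrum's dimension by keeping only the Haar low-pass
    (approximation) coefficients at each decomposition level."""
    s = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        if len(s) % 2:                         # pad odd lengths
            s = np.append(s, s[-1])
        s = (s[0::2] + s[1::2]) / np.sqrt(2)   # Haar approximation band
    return s

# A 16-band toy "pixel spectrum" with an absorption-like peak near band 6.
spectrum = np.exp(-0.5 * ((np.arange(16) - 6) / 1.5) ** 2)

reduced = haar_reduce(spectrum, levels=2)      # 16 bands -> 4 bands
print(len(reduced), int(np.argmax(reduced)))
```

Each level halves the number of bands while the peak's location and relative prominence survive in the approximation coefficients, which is the property that keeps spectral signatures separable after reduction.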
NASA Astrophysics Data System (ADS)
Teal, Paul D.; Eccles, Craig
2015-04-01
The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
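The truncated-SVD step shared by both algorithms is easy to demonstrate; the grids and tolerance below are illustrative choices, not those of the paper. The singular values of an exponential (Laplace-type) kernel decay very rapidly, which is what makes aggressive truncation viable.

```python
import numpy as np

# Exponential kernel K[i, j] = exp(-t_i / T_j), as in NMR relaxation inversion
t = np.linspace(0.01, 1.0, 200)      # measurement times (illustrative)
T = np.logspace(-2, 0, 100)          # candidate relaxation times (illustrative)
K = np.exp(-np.outer(t, 1.0 / T))

U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))     # numerical rank: singular values decay fast
K_r = (U[:, :r] * s[:r]) @ Vt[:r]    # rank-r reconstruction of the kernel
```

Despite the kernel being 200 x 100, only a few dozen singular values exceed the tolerance, and the rank-r factors reproduce the kernel to near machine precision.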
JANNAF 17th Propulsion Systems Hazards Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Cocchiaro, James E. (Editor); Gannaway, Mary T. (Editor); Rognan, Melanie (Editor)
1998-01-01
Volume 1, the first of two volumes is a compilation of 16 unclassified/unlimited technical papers presented at the 17th meeting of the Joint Army-Navy-NASA-Air Force (JANNAF) Propulsion Systems Hazards Subcommittee (PSHS) held jointly with the 35th Combustion Subcommittee (CS) and Airbreathing Propulsion Subcommittee (APS). The meeting was held on 7 - 11 December 1998 at Raytheon Systems Company and the Marriott Hotel, Tucson, AZ. Topics covered include projectile and shaped charge jet impact vulnerability of munitions; thermal decomposition and cookoff behavior of energetic materials; damage and hot spot initiation mechanisms with energetic materials; detonation phenomena of solid energetic materials; and hazard classification, insensitive munitions, and propulsion systems safety.
Galerkin Method for Nonlinear Dynamics
NASA Astrophysics Data System (ADS)
Noack, Bernd R.; Schlegel, Michael; Morzynski, Marek; Tadmor, Gilead
A Galerkin method is presented for control-oriented reduced-order models (ROMs). This method generalizes linear approaches elaborated by M. Morzyński et al. for the nonlinear Navier-Stokes equation. These ROMs are used as plants for control design in the chapters by G. Tadmor et al., S. Siegel, and R. King in this volume. Focus is placed on empirical ROMs which compress flow data in the proper orthogonal decomposition (POD). The chapter shall provide a complete description for construction of straightforward ROMs as well as the physical understanding and tests.
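The POD compression step underlying such empirical ROMs can be sketched on synthetic data (not an actual flow): the leading left singular vectors of a snapshot matrix form the empirical basis, and projecting onto them recovers low-rank data exactly.

```python
import numpy as np

# Build rank-3 synthetic "flow snapshots": 100 grid points, 40 time samples
rng = np.random.default_rng(1)
n, m, r_true = 100, 40, 3
modes_true = np.linalg.qr(rng.standard_normal((n, r_true)))[0]
snapshots = modes_true @ rng.standard_normal((r_true, m))

# POD: leading left singular vectors of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
pod_modes = U[:, :r]                 # orthonormal reduced basis
a = pod_modes.T @ snapshots          # Galerkin/POD expansion coefficients
reconstruction = pod_modes @ a       # rank-r reconstruction of the data
```

Because the synthetic data are exactly rank 3, three POD modes reproduce every snapshot; for real turbulence data one instead truncates at an energy threshold.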
Vibrational Softening of a Protein on Ligand Binding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balog, Erica; Perahia, David; Smith, Jeremy C
2011-01-01
Neutron scattering experiments have demonstrated that binding of the cancer drug methotrexate softens the low-frequency vibrations of its target protein, dihydrofolate reductase (DHFR). Here, this softening is fully reproduced using atomic detail normal-mode analysis. Decomposition of the vibrational density of states demonstrates that the largest contributions arise from structural elements of DHFR critical to stability and function. Mode-projection analysis reveals an increase of the breathing-like character of the affected vibrational modes consistent with the experimentally observed increased adiabatic compressibility of the protein on complexation.
Partsch, Hugo; Clark, Michael; Bassez, Sophie; Benigni, Jean-Patrick; Becker, Francis; Blazek, Vladimir; Caprini, Joseph; Cornu-Thénard, André; Hafner, Jürg; Flour, Mieke; Jünger, Michael; Moffatt, Christine; Neumann, Martino
2006-02-01
Interface pressure and stiffness characterizing the elastic properties of the material are the parameters determining the dosage of compression treatment and should therefore be measured in future clinical trials. To provide some recommendations regarding the use of suitable methods for this indication. This article was formulated based on the results of an international consensus meeting between a group of medical experts and representatives from the industry held in January 2005 in Vienna, Austria. Proposals are made concerning methods for measuring the interface pressure and for assessing the stiffness of a compression device in an individual patient. In vivo measurement of interface pressure is encouraged when clinical and experimental outcomes of compression treatment are to be evaluated.
Silver-palladium catalysts for the direct synthesis of hydrogen peroxide
NASA Astrophysics Data System (ADS)
Khan, Zainab; Dummer, Nicholas F.; Edwards, Jennifer K.
2017-11-01
A series of bimetallic silver-palladium catalysts supported on titania were prepared by wet impregnation and assessed for the direct synthesis of hydrogen peroxide, and its subsequent side reactions. The addition of silver to a palladium catalyst was found to significantly decrease hydrogen peroxide productivity and hydrogenation, but crucially increase the rate of decomposition. The decomposition product, which is predominantly hydroxyl radicals, can be used to decrease bacterial colonies. The interaction between silver and palladium was characterized using scanning electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy (XPS) and temperature programmed reduction (TPR). The results of the TPR and XPS indicated the formation of a silver-palladium alloy. The optimal 1% Ag-4% Pd/TiO2 bimetallic catalyst was able to produce approximately 200 ppm of H2O2 in 30 min. The findings demonstrate that AgPd/TiO2 catalysts are active for the synthesis of hydrogen peroxide and its subsequent decomposition to reactive oxygen species. The catalysts are promising for use in wastewater treatment as they combine the disinfectant properties of silver, hydrogen peroxide production and subsequent decomposition. This article is part of a discussion meeting issue 'Providing sustainable catalytic solutions for a rapidly changing world'.
76 FR 8772 - Government in the Sunshine Act Meeting Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-15
...-TA-587 (Remand) (Certain Connecting Devices (``Quick Clamps'') for Use with Modular Compressed Air Conditioning Units, Including Filters, Regulators, and Lubricators (``FRL's'') That are Part of Larger...
NASA Astrophysics Data System (ADS)
Vishnukumar, S.; Wilscy, M.
2017-12-01
In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, low-resolution (LR) image is treated as the compressed version of high-resolution (HR) image. Dictionary Training and Sparse Recovery are the two phases of the method. K-Singular Value Decomposition (K-SVD) method is used for dictionary training and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training purpose and thereby the structural self similarity inherent in the LR image is exploited. In the sparse recovery phase the sparse representation coefficients with respect to the trained dictionary for LR image patches are derived using Improved TV Minimization method. HR image can be reconstructed by the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.
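The paper's recovery phase uses improved TV minimization; as a simpler stand-in for the same CS sparse-recovery step, the sketch below uses orthogonal matching pursuit against a random dictionary. All sizes, seeds, and values are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ≈ D @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the enlarged support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)       # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.5, -2.0, 0.7]
y = D @ x_true                       # "compressed" measurements
x_hat = omp(D, y, k=3)
```

With 64 measurements and only 3 active atoms, the greedy recovery matches the true sparse code, mirroring how the HR patch coefficients are recovered from the LR image.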
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images which optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT [1] along with correlation of insignificant wavelet coefficients has been proposed to further exploit redundancy at high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image.
The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth the conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands in the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
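The claim that high-frequency subbands of smooth microscopy images are very sparse can be checked with a one-level 2-D Haar decomposition; the Gaussian "frame" below is a toy stand-in for a real fluorescence image.

```python
import numpy as np

def haar2d(img):
    """One level of 2-D Haar decomposition -> (LL, (LH, HL, HH))."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2.0)      # row lowpass
    d = (img[0::2] - img[1::2]) / np.sqrt(2.0)      # row highpass
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)   # approximation subband
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2.0)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2.0)
    return LL, (LH, HL, HH)

# Smooth synthetic frame: a bright Gaussian spot on a dark field
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)
LL, details = haar2d(img)
detail_energy = sum(np.sum(d ** 2) for d in details)
total_energy = np.sum(img ** 2)
```

The transform is orthogonal (energy is conserved across subbands), yet almost all of the energy lands in LL; the three high-frequency subbands are nearly empty, which is exactly the redundancy a SPIHT-style significance coder exploits.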
Polystyrene Foam Products Equation of State as a Function of Porosity and Fill Gas
NASA Astrophysics Data System (ADS)
Mulford, R. N.; Swift, D. C.
2009-12-01
An accurate EOS for polystyrene foam is necessary for analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic to gas ratios vary between various samples of foam, according to the density and cell-size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, Ar-blown, and CO2-blown foams are investigated, to estimate the importance of allowing air to react with products of polystyrene decomposition. O2-blown foams are included in some comparisons, to amplify any consequences of reaction with oxygen in air. He-blown foams are included in some comparisons, to provide an extremum of density. Product pressures are slightly higher for oxygen-containing fill gases than for non-oxygen-containing fill gases. Examination of product species indicates that CO2 decomposes at high temperatures.
Turbulence and vorticity in Galaxy clusters generated by structure formation
NASA Astrophysics Data System (ADS)
Vazza, F.; Jones, T. W.; Brüggen, M.; Brunetti, G.; Gheller, C.; Porter, D.; Ryu, D.
2017-01-01
Turbulence is a key ingredient for the evolution of the intracluster medium, whose properties can be predicted with high-resolution numerical simulations. We present initial results on the generation of solenoidal and compressive turbulence in the intracluster medium during the formation of a small-size cluster using highly resolved, non-radiative cosmological simulations, with a refined monitoring in time. In this first of a series of papers, we closely look at one simulated cluster whose formation was distinguished by a merger around z ˜ 0.3. We separate laminar gas motions, turbulence and shocks with dedicated filtering strategies and distinguish the solenoidal and compressive components of the gas flows using Hodge-Helmholtz decomposition. Solenoidal turbulence dominates the dissipation of turbulent motions (˜95 per cent) in the central cluster volume at all epochs. The dissipation via compressive modes is found to be more important (˜30 per cent of the total) only at large radii (≥0.5rvir) and close to merger events. We show that enstrophy (vorticity squared) is a good proxy for solenoidal turbulence. All terms ruling the evolution of enstrophy (i.e. baroclinic, compressive, stretching and advective terms) are found to be significant, but in amounts that vary with time and location. Two important trends for the growth of enstrophy in our simulation are identified: first, enstrophy is continuously accreted into the cluster from the outside, and most of that accreted enstrophy is generated near the outer accretion shocks by baroclinic and compressive processes. Secondly, in the cluster interior, vortex stretching is dominant, although the other terms also contribute substantially.
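A Hodge-Helmholtz split of the kind used here can be sketched with FFTs on a periodic box. The test field below (a Taylor-Green-like solenoidal part plus a pure gradient) is illustrative; in Fourier space the compressive part is simply the projection of each velocity mode onto its wavevector.

```python
import numpy as np

def helmholtz_2d(vx, vy):
    """Split a periodic 2-D velocity field into solenoidal + compressive parts."""
    n = vx.shape[0]
    k = np.fft.fftfreq(n) * n                    # integer wavenumbers on [0, 2*pi)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid dividing the mean mode by zero
    vxh, vyh = np.fft.fft2(vx), np.fft.fft2(vy)
    div = kx * vxh + ky * vyh                    # k · v-hat per mode
    cx, cy = kx * div / k2, ky * div / k2        # projection onto k: curl-free part
    comp_x = np.real(np.fft.ifft2(cx))
    comp_y = np.real(np.fft.ifft2(cy))
    return vx - comp_x, vy - comp_y, comp_x, comp_y

n = 32
xg = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xg, xg, indexing="ij")
# divergence-free cell flow plus a compressive (gradient) component cos(x) = d/dx sin(x)
vx = np.sin(X) * np.cos(Y) + np.cos(X)
vy = -np.cos(X) * np.sin(Y)
sol_x, sol_y, comp_x, comp_y = helmholtz_2d(vx, vy)
```

For this band-limited field the split is exact: the recovered compressive part is (cos x, 0) and the remainder is the divergence-free cell flow.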
High strength yttria-reinforced HA scaffolds fabricated via honeycomb ceramic extrusion.
Elbadawi, M; Shbeh, M
2018-01-01
The present study investigated the effects of hydroxyapatite (HA) reinforced with yttria on porous scaffolds fabricated via honeycomb ceramic extrusion. Yttria was selected as it has been demonstrated to toughen other ceramics. Moreover, yttria has been surmised to suppress dehydroxylation in HA, a characteristic that prefigures decomposition thereof during sintering into mechanically weaker phases. However, the compressive strength of yttria-reinforced hydroxyapatite (Y-HA) porous scaffolds has hitherto not been reported. Y-HA was synthesised by calcining a commercially available HA with 10wt% yttria at 1000°C. Y-HA was then fabricated into porous scaffolds using an in-house honeycomb extruder, and subsequently sintered at 1200 and 1250°C. The results were compared to the uncalcined as-received commercial powder (AR-HA) and calcined pure HA powder at 1000°C (C-HA). It was discovered that calcination alone caused marked improvements to the stoichiometry, thermal stability, porosity and compressive strength of scaffolds. The improvements were ascribed to the calcined powders with less susceptibility to both agglomeration and enhanced densification. Still, differences were observed between C-HA and Y-HA at 1250°C. The compressive strength increased from 105.9 to 127.3MPa, a larger microporosity was descried and the HA matrix in Y-HA was more stoichiometric. The latter was confirmed by XRD and EDS analyses. Therefore, it was concluded that the reinforcing of hydroxyapatite with yttria improved the compressive strength and suppressed dehydroxylation of porous HA scaffolds. In addition, the compressive strength achieved demonstrated great potential for load-bearing application. Copyright © 2017 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-07-01
... am an owner or operator of a stationary CI internal combustion engine subject to this subpart? 60... Compression Ignition Internal Combustion Engines Fuel Requirements for Owners and Operators § 60.4207 What fuel requirements must I meet if I am an owner or operator of a stationary CI internal combustion...
Code of Federal Regulations, 2010 CFR
2010-07-01
... am an owner or operator of a stationary CI internal combustion engine subject to this subpart? 60... Compression Ignition Internal Combustion Engines Fuel Requirements for Owners and Operators § 60.4207 What fuel requirements must I meet if I am an owner or operator of a stationary CI internal combustion...
Code of Federal Regulations, 2012 CFR
2012-07-01
... am an owner or operator of a stationary CI internal combustion engine subject to this subpart? 60... Compression Ignition Internal Combustion Engines Fuel Requirements for Owners and Operators § 60.4207 What fuel requirements must I meet if I am an owner or operator of a stationary CI internal combustion...
Code of Federal Regulations, 2013 CFR
2013-07-01
... am an owner or operator of a stationary CI internal combustion engine subject to this subpart? 60... Compression Ignition Internal Combustion Engines Fuel Requirements for Owners and Operators § 60.4207 What fuel requirements must I meet if I am an owner or operator of a stationary CI internal combustion...
Code of Federal Regulations, 2014 CFR
2014-07-01
... am an owner or operator of a stationary CI internal combustion engine subject to this subpart? 60... Compression Ignition Internal Combustion Engines Fuel Requirements for Owners and Operators § 60.4207 What fuel requirements must I meet if I am an owner or operator of a stationary CI internal combustion...
Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique
NASA Technical Reports Server (NTRS)
Li, Lihua; Coon, Michael; McLinden, Matthew
2013-01-01
Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted, rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short pulsed radars, such as no need of high-power RF circuitry, no need of high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars since surface returns detected by range sidelobes may mask the returns from a nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. The traditional tube-based radars use high-voltage power supply/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique to achieve ultra-low range sidelobes so that this technique can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements.
By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression technique could bring significant impact on future radar development. The novel feature of this innovation is the non-linear FM (NLFM) waveform design. The traditional linear FM has the limit (-20 log BT -3 dB) for achieving ultra-low-range sidelobe in pulse compression. For this study, a different combination of 20- or 40-microsecond chirp pulse width and 2- or 4-MHz chirp bandwidth was used. These are typical operational parameters for airborne or spaceborne weather radars. The NLFM waveform design was then implemented on a FPGA board to generate a real chirp signal, which was then sent to the radar transceiver simulator. The final results have shown significant improvement on sidelobe performance compared to that obtained using a traditional linear FM chirp.
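The basic pulse-compression gain is easy to demonstrate with the chirp parameters quoted above (20 µs pulse, 2 MHz LFM bandwidth). The sketch below uses a plain linear-FM chirp and matched filtering by autocorrelation, i.e. the baseline the article's NLFM design improves upon; the sample rate is an assumed value.

```python
import numpy as np

fs = 20e6                    # sample rate (assumed, well above the chirp bandwidth)
T = 20e-6                    # 20 microsecond pulse, as in the text
B = 2e6                      # 2 MHz chirp bandwidth, as in the text
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)      # complex baseband LFM chirp

# Matched filter output: correlate the received chirp with itself
compressed = np.abs(np.correlate(chirp, chirp, mode="full"))
peak = compressed.max()
# Duration of the -3 dB mainlobe of the compressed pulse, roughly 0.886/B
mainlobe = np.sum(compressed > peak / np.sqrt(2)) / fs
```

The 20 µs transmitted pulse compresses to a mainlobe of under half a microsecond, while the first range sidelobe of a plain LFM sits near -13 dB; suppressing those sidelobes further is exactly what the NLFM waveform design targets.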
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. 
Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
The Speech multi features fusion perceptual hash algorithm based on tensor decomposition
NASA Astrophysics Data System (ADS)
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data are prone to corruption by noise or malicious tampering. To give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm obtains a wavelet packet decomposition of each speech component, and the LPCC, LSP and ISP features of each component are extracted to constitute a speech feature tensor. Speech authentication is performed by generating hash values through mid-value quantification of the feature matrix. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and is able to resist common background noise. The algorithm is also computationally efficient, meeting the real-time requirements of speech communication and completing speech authentication quickly.
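The mid-value quantification step can be sketched as follows. The feature matrix here is random stand-in data, not actual LPCC/LSP/ISP features: each entry is binarized against the median, and authentication compares hashes by Hamming distance.

```python
import numpy as np

def perceptual_hash(features):
    """Binarize a feature matrix against its median (mid-value quantification)."""
    flat = features.ravel()
    return (flat > np.median(flat)).astype(np.uint8)

def hamming(h1, h2):
    """Number of differing hash bits; small distance -> same content."""
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(3)
features = rng.standard_normal((16, 12))     # stand-in feature matrix
h_orig = perceptual_hash(features)
# Small perturbation (e.g. mild noise) flips few bits...
h_noisy = perceptual_hash(features + 0.01 * rng.standard_normal(features.shape))
# ...while unrelated content yields a hash that differs in roughly half the bits
h_other = perceptual_hash(rng.standard_normal((16, 12)))
```

Thresholding the Hamming distance then separates content-preserving operations from tampering or different speech altogether.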
Compressive strength of concrete and mortar containing fly ash
Liskowitz, J.W.; Wecharatana, M.; Jaturapitakkul, C.; Cerkanowicz, A.E.
1997-04-29
The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention includes a method for predicting the compressive strength of such a hardenable mixture, which is very important for planning a project. The invention also relates to hardenable mixtures comprising cement and fly ash which can achieve greater compressive strength than hardenable mixtures containing only concrete over the time period relevant for construction. In a specific embodiment, a formula is provided that accurately predicts compressive strength of concrete containing fly ash out to 180 days. In other specific examples, concrete and mortar containing about 15% to 25% fly ash as a replacement for cement, which are capable of meeting design specifications required for building and highway construction, are provided. Such materials can thus significantly reduce construction costs. 33 figs.
Compressive strength of concrete and mortar containing fly ash
Liskowitz, J.W.; Wecharatana, M.; Jaturapitakkul, C.; Cerkanowicz, A.E.
1998-12-29
The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention includes a method for predicting the compressive strength of such a hardenable mixture, which is very important for planning a project. The invention also relates to hardenable mixtures comprising cement and fly ash which can achieve greater compressive strength than hardenable mixtures containing only concrete over the time period relevant for construction. In a specific embodiment, a formula is provided that accurately predicts compressive strength of concrete containing fly ash out to 180 days. In other specific examples, concrete and mortar containing about 15% to 25% fly ash as a replacement for cement, which are capable of meeting design specifications required for building and highway construction, are provided. Such materials can thus significantly reduce construction costs. 33 figs.
Compressive strength of concrete and mortar containing fly ash
Liskowitz, John W.; Wecharatana, Methi; Jaturapitakkul, Chai; Cerkanowicz, deceased, Anthony E.
1997-01-01
The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention includes a method for predicting the compressive strength of such a hardenable mixture, which is very important for planning a project. The invention also relates to hardenable mixtures comprising cement and fly ash which can achieve greater compressive strength than hardenable mixtures containing only concrete over the time period relevant for construction. In a specific embodiment, a formula is provided that accurately predicts compressive strength of concrete containing fly ash out to 180 days. In other specific examples, concrete and mortar containing about 15% to 25% fly ash as a replacement for cement, which are capable of meeting design specifications required for building and highway construction, are provided. Such materials can thus significantly reduce construction costs.
Compressive strength of concrete and mortar containing fly ash
Liskowitz, John W.; Wecharatana, Methi; Jaturapitakkul, Chai; Cerkanowicz, deceased, Anthony E.
1998-01-01
The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention includes a method for predicting the compressive strength of such a hardenable mixture, which is very important for planning a project. The invention also relates to hardenable mixtures comprising cement and fly ash which can achieve greater compressive strength than hardenable mixtures containing only concrete over the time period relevant for construction. In a specific embodiment, a formula is provided that accurately predicts compressive strength of concrete containing fly ash out to 180 days. In other specific examples, concrete and mortar containing about 15% to 25% fly ash as a replacement for cement, which are capable of meeting design specifications required for building and highway construction, are provided. Such materials can thus significantly reduce construction costs.
46 CFR 153.520 - Special requirements for carbon disulfide.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CARGOES SHIPS CARRYING BULK LIQUID, LIQUEFIED GAS, OR COMPRESSED GAS HAZARDOUS MATERIALS Design and... carrying carbon disulfide must meet the following: (a) Each cargo pump must be of the intank type and...
Model and Data Reduction for Control, Identification and Compressed Sensing
NASA Astrophysics Data System (ADS)
Kramer, Boris
This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples, a mass spring damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online. 
Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed sensing based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes, as well as a Boussinesq flow application.
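The dynamic mode decomposition step used above to extract a basis from simulation data can be sketched on synthetic linear snapshots (all matrices and sizes are illustrative): the reduced operator obtained from an SVD of the snapshot matrix recovers the eigenvalues of the true propagator.

```python
import numpy as np

# Synthetic data: a 3-mode linear system observed through a random 8-D lifting
rng = np.random.default_rng(4)
A = np.diag([0.9, 0.5, -0.3])                    # true dynamics with known spectrum
P = rng.standard_normal((8, 3))                  # observation map
x0 = rng.standard_normal(3)
states = np.stack([np.linalg.matrix_power(A, k) @ x0 for k in range(20)], axis=1)
Y = P @ states                                   # snapshot matrix, shape (8, 20)

# Standard DMD: shifted snapshot pair, truncated SVD, reduced operator
X1, X2 = Y[:, :-1], Y[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 3
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
A_tilde = Ur.T @ X2 @ Vr / sr                    # r x r reduced operator
dmd_eigs = np.linalg.eigvals(A_tilde)
```

The eigenvalues of the reduced operator match the spectrum {0.9, 0.5, -0.3} of the hidden dynamics, which is the information the classification basis in the chapter builds on.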
Advanced technologies impact on compressor design and development: A perspective
NASA Technical Reports Server (NTRS)
Ball, Calvin L.
1989-01-01
A historical perspective of the impact of advanced technologies on compression system design and development for aircraft gas turbine applications is presented. A bright view of the future is projected in which further advancements in compression system technologies will be made. These advancements will have a significant impact on the ability to meet the ever-more-demanding requirements being imposed on the propulsion system for advanced aircraft. Examples are presented of advanced compression system concepts now being studied. The status and potential impact of transitioning from an empirically derived design system to a computationally oriented system are highlighted. A current NASA Lewis Research Center program to enhance this transitioning is described.
The compression and storage method of the same kind of medical images: DPCM
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong
2006-09-01
Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of memory storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of the medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method based on differential pulse code modulation (DPCM) is presented.
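As a sketch of the core idea (not the paper's implementation), first-order DPCM transmits each sample's difference from its predecessor; the smooth regions typical of medical images make these residuals small and highly compressible, and the scheme is exactly invertible:

```python
import numpy as np

def dpcm_encode(x):
    """First-order DPCM: transmit x[0] plus successive differences."""
    x = np.asarray(x, dtype=np.int64)
    residual = np.diff(x, prepend=x[0])
    residual[0] = x[0]          # first sample is sent as-is
    return residual

def dpcm_decode(residual):
    """Invert the encoder by cumulative summation."""
    return np.cumsum(residual)

row = np.array([100, 102, 104, 104, 103, 105])  # one image row (illustrative)
code = dpcm_encode(row)
print(code)                     # first sample, then small differences
print(dpcm_decode(code))        # reconstructs the row exactly
```

The small residuals would then feed an entropy coder; a lossless pipeline keeps the residuals exact, while a near-lossless one would quantize them.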
Option pricing from wavelet-filtered financial series
NASA Astrophysics Data System (ADS)
de Almeida, V. T. X.; Moriconi, L.
2012-10-01
We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study and working with the Haar basis, we find that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-Gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
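A minimal illustration of this kind of low-pass Haar filtering: each level pairwise-averages the series (the orthonormal Haar transform differs only by √2 factors per level) and discards the detail coefficients. The series below is synthetic, not FTSE100 data.

```python
import numpy as np

def haar_lowpass(x, levels):
    """Keep only the coarse (approximation) Haar component of a series.

    Each level halves the length by pairwise averaging, retaining the
    large-time-scale component; detail coefficients are discarded.
    """
    x = np.asarray(x, dtype=float)
    for _ in range(levels):
        x = 0.5 * (x[0::2] + x[1::2])
    return x

# 1024 "prices": a slow trend plus fast oscillation; 8 levels keep
# 1024 / 2**8 = 4 coarse values, i.e. ~99.6% of coefficients are dropped.
t = np.linspace(0, 1, 1024)
series = t + 0.01 * np.sin(400 * t)
coarse = haar_lowpass(series, levels=8)
print(coarse.shape)   # (4,)
```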
High-pressure synthesis, amorphization, and decomposition of silane.
Hanfland, Michael; Proctor, John E; Guillaume, Christophe L; Degtyareva, Olga; Gregoryanz, Eugene
2011-03-04
By compressing elemental silicon and hydrogen in a diamond anvil cell, we have synthesized polymeric silicon tetrahydride (SiH(4)) at 124 GPa and 300 K. In situ synchrotron x-ray diffraction reveals that the compound forms the insulating I4(1)/a structure previously proposed from ab initio calculations for the high-pressure phase of silane. From a series of high-pressure experiments at room and low temperature on silane itself, we find that its tetrahedral molecules break up as silane undergoes pressure-induced amorphization above 60 GPa, recrystallizing at 90 GPa into polymeric crystal structures.
NASA Astrophysics Data System (ADS)
Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Borisov, Semyon P.; Shershnev, Anton A.
2017-10-01
In the present work a computer code RCFS for numerical simulation of chemically reacting compressible flows on hybrid CPU/GPU supercomputers is developed. It solves the 3D unsteady Euler equations for multispecies chemically reacting flows in general curvilinear coordinates using shock-capturing TVD schemes. Time advancement is carried out using explicit Runge-Kutta TVD schemes. The program implementation uses the CUDA application programming interface to perform GPU computations. Data are distributed between GPUs via a domain decomposition technique. The developed code is verified on a number of test cases, including supersonic flow over a cylinder.
NASA Technical Reports Server (NTRS)
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques, which guarantee full reconstruction of the data, and lossy techniques, which generally give higher data compaction ratios but incur some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. When transmitting data obtained by any lossless data compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, the a posteriori probability (APP) must be computed for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques.
During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems, and (4) a tree based approach for data compression using dynamic programming.
Hardware Implementation of 32-Bit High-Speed Direct Digital Frequency Synthesizer
Ibrahim, Salah Hasan; Ali, Sawal Hamid Md.; Islam, Md. Shabiul
2014-01-01
The design and implementation of a high-speed direct digital frequency synthesizer are presented. A modified Brent-Kung parallel adder is combined with a pipelining technique to improve the speed of the system. A gated clock technique is proposed to reduce the number of registers in the phase accumulator design. The quarter-wave symmetry technique is used to store only one quarter of the sine wave. The ROM lookup table (LUT) is partitioned into three 4-bit sub-ROMs based on an angular decomposition technique and a trigonometric identity. Exploiting sine-cosine symmetry together with XOR logic gates, one sub-ROM block can be removed from the design. These techniques compressed the ROM to 368 bits, a compression ratio of 534.2:1, using only two adders, two multipliers, and XOR gates, with a high frequency resolution of 0.029 Hz. These techniques make the direct digital frequency synthesizer an attractive candidate for wireless communication applications. PMID:24991635
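The quarter-wave symmetry idea can be sketched independently of the hardware: store the sine over [0, π/2] only and recover the full period by index mirroring and sign flips. This is a software model with an illustrative 64-entry table, not the paper's partitioned LUT; real hardware typically mirrors by bit complement (Q-1-idx), which introduces a half-step offset, whereas the (Q+1)-entry table here makes the reconstruction exact.

```python
import math

QUARTER = 64                                   # LUT entries per quadrant
LUT = [math.sin(math.pi / 2 * i / QUARTER) for i in range(QUARTER + 1)]

def sine_from_quarter(phase):
    """Full-period sine from a quarter-wave LUT.

    phase is an integer phase-accumulator value in [0, 4*QUARTER); its two
    top bits select the quadrant, the rest index (possibly mirrored) into
    the quarter-wave table.
    """
    quadrant, idx = divmod(phase % (4 * QUARTER), QUARTER)
    if quadrant in (1, 3):                     # falling quadrants: mirror index
        idx = QUARTER - idx
    value = LUT[idx]
    return -value if quadrant >= 2 else value  # second half-period: negate

max_err = max(abs(sine_from_quarter(p) - math.sin(math.pi / 2 * p / QUARTER))
              for p in range(4 * QUARTER))
print(max_err)   # exact up to floating-point rounding
```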
Thermal Stability and Flammability of Styrene-Butadiene Rubber-Based (SBR) Ceramifiable Composites
Anyszka, Rafał; Bieliński, Dariusz M.; Pędzich, Zbigniew; Rybiński, Przemysław; Imiela, Mateusz; Siciński, Mariusz; Zarzecka-Napierała, Magdalena; Gozdek, Tomasz; Rutkowski, Paweł
2016-01-01
Ceramifiable styrene-butadiene (SBR)-based composites containing low-softening-point-temperature glassy frit promoting ceramification, precipitated silica, one of four thermally stable refractory fillers (halloysite, calcined kaolin, mica or wollastonite) and a sulfur-based curing system were prepared. Kinetics of vulcanization and basic mechanical properties were analyzed and added as Supplementary Materials. Combustibility of the composites was measured by means of cone calorimetry. Their thermal properties were analyzed by means of thermogravimetry and specific heat capacity determination. Activation energy of thermal decomposition was calculated using the Flynn-Wall-Ozawa method. Finally, the compression strength of the composites after ceramification was measured and their micromorphology was studied by scanning electron microscopy. The addition of a ceramification-facilitating system lowered the combustibility and significantly improved the thermal stability of the composites. Moreover, the compression strength of the mineral structure formed after ceramification is considerable. The most promising refractory fillers for SBR-based ceramifiable composites are mica and halloysite. PMID:28773726
NASA Technical Reports Server (NTRS)
Han, Shin-Chan; Sauber, Jeanne; Pollitz, Fred
2015-01-01
The 2012 Indian Ocean earthquake sequence (M(sub w) 8.6, 8.2) is a rare example of great strike-slip earthquakes in an intra-oceanic setting. With over a decade of GRACE data, we were able to measure and model the unanticipated large co- and post-seismic gravity changes of these events. Using the approach of normal mode decomposition and spatial localization, we computed the gravity changes corresponding to five moment tensor components. Our analysis revealed that the gravity changes are produced predominantly by coseismic compression and dilatation within the oceanic crust and upper mantle and by post-seismic vertical motion. Our results suggest that the post-seismic positive gravity and the post-seismic uplift measured with GPS within the coseismic compressional quadrant are best fit by ongoing uplift associated with viscoelastic mantle relaxation. Our study demonstrates that the GRACE data are suitable for analyzing strike-slip earthquakes as small as M(sub w) 8.2 with the noise characteristics of this region.
Tensor network method for reversible classical computation
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.
2018-03-01
We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
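The encoding described above, a gate's truth table as a 0/1 tensor, with solution counting as a full contraction, can be illustrated on a toy two-gate circuit. This sketch uses a plain `einsum` contraction and does not implement the paper's iterative compression-decimation scheme.

```python
import numpy as np

def gate_tensor(fn):
    """Truth table of a 2-in/1-out gate as a 0/1 tensor T[a, b, c]."""
    T = np.zeros((2, 2, 2))
    for a in range(2):
        for b in range(2):
            T[a, b, fn(a, b)] = 1.0
    return T

XOR = gate_tensor(lambda a, b: a ^ b)
AND = gate_tensor(lambda a, b: a & b)

# Circuit: w = (a XOR b) AND c.  Contracting the shared wire m and summing
# over all free wires counts the consistent assignments of the network.
count = np.einsum('abm,mcw->', XOR, AND)
print(count)      # 8.0: each input triple (a, b, c) fixes m and w uniquely

# Clamping the output wire to w = 1 counts inputs with (a XOR b) AND c == 1.
count_w1 = np.einsum('abm,mc->', XOR, AND[:, :, 1])
print(count_w1)   # 2.0: (a, b, c) in {(0, 1, 1), (1, 0, 1)}
```

On a lattice of such tensors the same contraction counts solutions of the full vertex model; the ICD scheme exists to keep the intermediate tensors small.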
Processing and characterization of epoxy composites reinforced with short human hair
NASA Astrophysics Data System (ADS)
Prasad Nanda, Bishnu; Satapathy, Alok
2017-02-01
Human hair is a biological fiber with a well characterized microstructure. It has many unique properties, such as high tensile strength, thermal insulation, unique chemical composition, elastic recovery and a scaly surface. But due to its slow decomposition, it creates many environmental problems. Although a number of utilization avenues are already in place, hair is still considered a biological waste. In view of this, the present work makes an attempt to explore the possibility of fabricating a class of polymer composites reinforced with short human hair fibers. Epoxy composites with different proportions of hair fiber (0, 2, 4, 6 and 8 wt.%) were prepared by a simple hand lay-up technique. Mechanical properties such as tensile, flexural and compressive strengths were evaluated by conducting tests as per ASTM standards. It was found that the tensile and flexural strengths of the composite increased significantly with fiber content, while the compressive strength improved only marginally. Scanning electron microscopy was performed on these samples to observe the microstructural features.
The Outer Loop bioreactor: a case study of settlement monitoring and solids decomposition.
Abichou, Tarek; Barlaz, Morton A; Green, Roger; Hater, Gary
2013-10-01
The Outer Loop landfill bioreactor (OLLB) located in Louisville, KY, USA has been in operation since 2000 and represents an opportunity to evaluate long-term bioreactor monitoring data at a full-scale operational landfill. Three types of landfill units were studied: a Control cell; a new landfill area that had a piping network installed as waste was being placed to support leachate recirculation (As-Built cell); and a conventional landfill that was modified to allow for liquid recirculation (Retrofit cell). The objective of this study is to summarize the results of settlement data and assess how these data relate to solids decomposition monitoring at the OLLB. The Retrofit cells started to settle as soon as liquids were introduced. The cumulative settlement during the 8 years of monitoring varied from 60 to 100 cm. These results suggest that liquid recirculation in the Retrofit cells caused a 5-8% reduction in the thickness of the waste column. The average long-term settlement in the As-Built and Control cells was about 37% and 19%, respectively. The modified compression index (Cα') was 0.17 for the Control cells and 0.2-0.48 for the As-Built cells. While the As-Built cells exhibited greater settlement than the Control cells, the data do not support biodegradation as the only explanation. The increased settlement in the As-Built bioreactor cell appeared to be associated with liquid movement and not with biodegradation, because both chemical (biochemical methane potential) and physical (moisture content) indicators of decomposition were similar in the Control and As-Built cells. The solids data are consistent with the concept that bioreactor operations accelerate the rate of decomposition, but not necessarily the cumulative loss of anaerobically degradable solids. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Kiremidjian, Anne S.
2011-04-01
This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements of multiple locations, it is necessary to transmit long threads of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse vectors of the coefficients need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.
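The sparse-coding half of this pipeline can be sketched with a bare-bones orthogonal matching pursuit. This is a generic OMP over a random dictionary, not the K-SVD-trained dictionary of the paper; dimensions and the 2-sparse test signal are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y in dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual...
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # ...then re-fit y on the whole support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Overcomplete dictionary (unit-norm columns); recover a 2-sparse code.
rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(64)
x_true[[5, 40]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, k=2)
print(float(np.linalg.norm(y - D @ x_hat)))   # near-zero residual
```

In the transmission scheme, only the trained dictionary plus these few (index, coefficient) pairs per segment cross the wireless link.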
NASA Astrophysics Data System (ADS)
Giri, Ashutosh; Hopkins, Patrick E.
2017-12-01
Fullerene condensed-matter solids can possess thermal conductivities below their minimum glassy limit while theorized to be stiffer than diamond when crystallized under pressure. These seemingly disparate extremes in thermal and mechanical properties raise questions into the pressure dependence on the thermal conductivity of C60 fullerite crystals, and how the spectral contributions to vibrational thermal conductivity changes under applied pressure. To answer these questions, we investigate the effect of strain on the thermal conductivity of C60 fullerite crystals via pressure-dependent molecular dynamics simulations under the Green-Kubo formalism. We show that the thermal conductivity increases rapidly with compressive strain, which demonstrates a power-law relationship similar to their stress-strain relationship for the C60 crystals. Calculations of the density of states for the crystals under compressive strains reveal that the librational modes characteristic in the unstrained case are diminished due to densification of the molecular crystal. Over a large compression range (0-20 GPa), the Leibfried-Schlömann equation is shown to adequately describe the pressure dependence of thermal conductivity, suggesting that low-frequency intermolecular vibrations dictate heat flow in the C60 crystals. A spectral decomposition of the thermal conductivity supports this hypothesis.
A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings
Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng
2016-01-01
The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform based on decomposing the analyzed signals into transient impact components and high oscillation components is utilized in this work. The former become sparser than the raw signals with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with interested frequencies are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples. PMID:27657063
Design of light-small high-speed image data processing system
NASA Astrophysics Data System (ADS)
Yang, Jinbao; Feng, Xue; Li, Fei
2015-10-01
A light-small high speed image data processing system was designed in order to meet the request of image data processing in aerospace. System was constructed of FPGA, DSP and MCU (Micro-controller), implementing a video compress of 3 million pixels@15frames and real-time return of compressed image to the upper system. Programmable characteristic of FPGA, high performance image compress IC and configurable MCU were made best use to improve integration. Besides, hard-soft board design was introduced and PCB layout was optimized. At last, system achieved miniaturization, light-weight and fast heat dispersion. Experiments show that, system's multifunction was designed correctly and worked stably. In conclusion, system can be widely used in the area of light-small imaging.
20th JANNAF Propulsion Systems Hazards Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Cocchiaro, James E. (Editor); Eggleston, Debra S. (Editor); Gannaway, Mary T. (Editor); Inzar, Jeanette M. (Editor)
2002-01-01
This volume, the first of two volumes, is a collection of 24 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 20th Propulsion Systems Hazards Subcommittee (PSHS), 38th Combustion Subcommittee (CS), 26th Airbreathing Propulsion Subcommittee (APS), and 21 Modeling and Simulation Subcommittee meeting. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics covered include: insensitive munitions and hazard classification testing of solid rocket motors and other munitions; vulnerability of gun propellants to impact stimuli; thermal decomposition and cookoff properties of energetic materials; burn-to-violent reaction phenomena in energetic materials; and shock-to-detonation properties of solid propellants and energetic materials.
McCreery, Ryan W.; Venediktov, Rebecca A.; Coleman, Jaumeiko J.; Leech, Hillary M.
2013-01-01
Purpose Two clinical questions were developed: one addressing the comparison of linear amplification with compression limiting to linear amplification with peak clipping, and the second comparing wide dynamic range compression with linear amplification for outcomes of audibility, speech recognition, speech and language, and self- or parent report in children with hearing loss. Method Twenty-six databases were systematically searched for studies addressing a clinical question and meeting all inclusion criteria. Studies were evaluated for methodological quality, and effect sizes were reported or calculated when possible. Results The literature search resulted in the inclusion of 8 studies. All 8 studies included comparisons of wide dynamic range compression to linear amplification, and 2 of the 8 studies provided comparisons of compression limiting versus peak clipping. Conclusions Moderate evidence from the included studies demonstrated that audibility was improved and speech recognition was either maintained or improved with wide dynamic range compression as compared with linear amplification. No significant differences were observed between compression limiting and peak clipping on outcomes (i.e., speech recognition and self-/parent report) reported across the 2 studies. Preference ratings appear to be influenced by participant characteristics and environmental factors. Further research is needed before conclusions can confidently be drawn. PMID:22858616
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at a comparable compression ratio could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
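The DCT-plus-quantization core of such a scheme can be sketched on a single 8x8 block. The quantization matrix below is made up for illustration (coarser steps at high frequencies, loosely mimicking falling contrast sensitivity); it is not one of the paper's three HVS-derived matrices, and no Huffman stage is shown.

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: C @ block @ C.T is the 2-D DCT of an 8x8 block.
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0] /= np.sqrt(2.0)

# Illustrative quantization matrix: larger steps at higher frequencies.
Q = 8.0 + 4.0 * (k + n)

block = np.tile(np.arange(N) * 16.0, (N, 1))      # smooth luminance gradient
coeffs = np.round(C @ block @ C.T / Q)            # quantize: most entries -> 0
recon = C.T @ (coeffs * Q) @ C                    # dequantize + inverse DCT
print(int(np.count_nonzero(coeffs)), float(np.abs(recon - block).max()))
```

Only the handful of nonzero quantized coefficients need entropy coding, which is where the compression comes from; the reconstruction error stays within a few gray levels for this smooth block.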
NASA Technical Reports Server (NTRS)
Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve
2004-01-01
The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining a similar code structure between the whole domain and the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). It also requires minimal changes to the original code, so it is easily modified and/or managed by model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo regime, which is overlaid on the partial domains. The halo regime requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (fast Fourier transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computations than the compressible version; 3) equal or approximately equal numbers of slices between the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation for computation.
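The halo regime described above can be emulated serially in a few lines. In this toy, plain arrays stand in for MPI tasks, a periodic 3-point average stands in for the model physics, and the exchange phase plays the role of the MPI halo update; it is an illustration of the pattern, not the GCE code.

```python
import numpy as np

def halo_step(subdomains):
    """One stencil update on 1-D subdomains with single-cell halos.

    First exchange edge cells into the neighbours' halos (the MPI data
    exchange in the real code), then each task updates its interior cells
    using only local data - no cross-task reads during computation.
    """
    n = len(subdomains)
    # Exchange phase (periodic boundaries).
    for i, d in enumerate(subdomains):
        d[0] = subdomains[(i - 1) % n][-2]   # left halo <- neighbour's edge
        d[-1] = subdomains[(i + 1) % n][1]   # right halo <- neighbour's edge
    # Compute phase: 3-point average on interior cells only.
    return [np.concatenate(([d[0]],
                            (d[:-2] + d[1:-1] + d[2:]) / 3.0,
                            [d[-1]])) for d in subdomains]

# Whole domain of 8 cells split over 2 "tasks", each padded with 2 halo cells.
field = np.arange(8.0)
parts = [np.empty(6), np.empty(6)]
parts[0][1:-1], parts[1][1:-1] = field[:4], field[4:]
parts = halo_step(parts)
interior = np.concatenate([p[1:-1] for p in parts])
print(interior)   # matches the periodic 3-point average of the whole field
```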
NASA Astrophysics Data System (ADS)
Pandey, Anil; Niwa, Syunta; Morii, Yoshinari; Ikezawa, Shunjiro
2012-10-01
In order to decompose CO2 and NOx [1], we have developed a large-flow atmospheric microwave plasma (LAMP) [2]. Applying it to industrial innovation is important, so we have studied its application to motorcars. The characteristics of the developed LAMP are that the apparatus is inexpensive and the CO2 and NOx decomposition efficiencies are high. A vertical configuration between the exhaust gas pipe and the waveguide was shown to be suitable [2]. The system was set up in the car body with a battery and an inverter; the battery is shared between the engine and the inverter. In the motorcar application the flow is large, so the LAMP, which has the merits of large flow, highly efficient decomposition, and inexpensive apparatus, is well suited. [1] H. Barankova, L. Bardos, ISSP 2011, Kyoto. [2] S. Ikezawa, S. Parajulee, S. Sharma, A. Pandey, ISSP 2011, Kyoto (2011) pp. 28-31; S. Ikezawa, S. Niwa, Y. Morii, JJAP meeting 2012, March 16, Waseda U. (2012).
Polystyrene foam products equation of state as a function of porosity and fill gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mulford, Roberta N; Swift, Damian C
2009-01-01
An accurate EOS for polystyrene foam is necessary for analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic-to-gas ratios vary between various samples of foam, according to the density and cell size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, Ar-blown, and CO{sub 2}-blown foams are investigated, to estimate the importance of allowing air to react with products of polystyrene decomposition. O{sub 2}-blown foams are included in some comparisons, to amplify any consequences of reaction with oxygen in air. He-blown foams are included in some comparisons, to provide an extremum of density. Product pressures are slightly higher for oxygen-containing fill gases than for non-oxygen-containing fill gases. Examination of product species indicates that CO{sub 2} decomposes at high temperatures.
Direct ink writing of silica-bonded calcite scaffolds from preceramic polymers and fillers.
Fiocco, L; Elsayed, H; Badocco, D; Pastore, P; Bellucci, D; Cannillo, V; Detsch, R; Boccaccini, A R; Bernardo, E
2017-05-11
Silica-bonded calcite scaffolds have been successfully 3D-printed by direct ink writing, starting from a paste comprising a silicone polymer and calcite powders, calibrated in order to match a SiO2/CaCO3 weight balance of 35/65. The scaffolds, fabricated with two slightly different geometries, were first cross-linked at 350 °C, then fired at 600 °C, in air. The low temperature adopted for the conversion of the polymer into amorphous silica, by thermo-oxidative decomposition, prevented the decomposition of calcite. The obtained silica-bonded calcite scaffolds featured open porosity of about 56%-64% and compressive strength of about 2.9-5.5 MPa, depending on the geometry. Dissolution studies in SBF and preliminary cell culture tests, with bone marrow stromal cells, confirmed the in vitro bioactivity of the scaffolds and their biocompatibility. The seeded cells were found to be alive, well anchored and spread on the samples surface. The new silica-calcite composites are expected to be suitable candidates as tissue-engineering 3D scaffolds for regeneration of cancellous bone defects.
Geometric decompositions of collective motion
NASA Astrophysics Data System (ADS)
Mischiati, Matteo; Krishnaprasad, P. S.
2017-04-01
Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes, including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.
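A drastically simplified two-mode version of this decomposition, rigid translation versus everything else, already shows the key property that the modal kinetic energies add up exactly because the components are orthogonal. The four-agent configuration below is illustrative, not from the pigeon dataset, and the paper's full mode family (rotations, inertia tensor transformations, expansions) is not separated out here.

```python
import numpy as np

def energy_split(V):
    """Split collective motion into rigid translation + residual.

    V is an (N, 2) array of agent velocities (unit masses). The mean
    velocity spans the rigid-translation mode; the residual contains
    rotations, expansions and shape changes. The two energies sum to
    the total because the components are orthogonal.
    """
    v_mean = V.mean(axis=0)
    V_trans = np.tile(v_mean, (len(V), 1))
    V_rest = V - V_trans
    ke = lambda W: 0.5 * np.sum(W ** 2)
    return ke(V_trans), ke(V_rest), ke(V)

# Four agents translating right while rotating about their centroid.
pos = np.array([[1.0, 0], [0, 1], [-1, 0], [0, -1]])
omega = 2.0
V = np.array([3.0, 0.0]) + omega * np.stack([-pos[:, 1], pos[:, 0]], axis=1)
e_t, e_r, e_tot = energy_split(V)
print(e_t, e_r, e_tot)   # 18.0 8.0 26.0
```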
NASA Astrophysics Data System (ADS)
Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng
2018-03-01
A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.
Geometric decompositions of collective motion
Krishnaprasad, P. S.
2017-01-01
Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes—including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots. PMID:28484319
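The energy-based mode allocation described in this abstract can be illustrated with a minimal sketch (not the paper's full fibre-bundle construction): split each velocity snapshot into a rigid-translation component and its orthogonal residual, and weight each part by its share of the total kinetic energy. Unit masses and the function name are assumptions of this sketch; rotations, inertia-tensor transformations, expansions and compressions require the full geometric machinery.

```python
import numpy as np

def kinetic_mode_fractions(vel):
    """Split agent velocities (n_agents x dim) into a rigid-translation
    mode and an orthogonal residual; return each mode's fraction of the
    total kinetic energy (unit masses assumed)."""
    v_trans = vel.mean(axis=0)               # rigid-translation mode
    residual = vel - v_trans                 # orthogonal remainder
    ke_total = 0.5 * np.sum(vel ** 2)
    ke_trans = 0.5 * len(vel) * np.sum(v_trans ** 2)
    ke_resid = 0.5 * np.sum(residual ** 2)
    return ke_trans / ke_total, ke_resid / ke_total
```

Because the residual sums to zero, the two components are orthogonal and the fractions add to one, mirroring the Pythagorean splitting of kinetic energy across modes.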
X-ray Thomson Scattering in Warm Dense Matter without the Chihara Decomposition.
Baczewski, A D; Shulenburger, L; Desjarlais, M P; Hansen, S B; Magyar, R J
2016-03-18
X-ray Thomson scattering is an important experimental technique used to measure the temperature, ionization state, structure, and density of warm dense matter (WDM). The fundamental property probed in these experiments is the electronic dynamic structure factor. In most models, this is decomposed into three terms [J. Chihara, J. Phys. F 17, 295 (1987)] representing the response of tightly bound, loosely bound, and free electrons. Accompanying this decomposition is the classification of electrons as either bound or free, which is useful for gapped and cold systems but becomes increasingly questionable as temperatures and pressures increase into the WDM regime. In this work we provide unambiguous first principles calculations of the dynamic structure factor of warm dense beryllium, independent of the Chihara form, by treating bound and free states under a single formalism. The computational approach is real-time finite-temperature time-dependent density functional theory (TDDFT) being applied here for the first time to WDM. We compare results from TDDFT to Chihara-based calculations for experimentally relevant conditions in shock-compressed beryllium.
JANNAF 36th Combustion Subcommittee Meeting. Volume 2
NASA Technical Reports Server (NTRS)
Fry, Ronald S. (Editor); Gannaway, Mary T. (Editor)
1999-01-01
Volume II, the second of three volumes, is a compilation of 33 unclassified/unlimited-distribution technical papers presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 36th Combustion Subcommittee meeting, held jointly with the 24th Airbreathing Propulsion Subcommittee and 18th Propulsion Systems Hazards Subcommittee meetings. The meeting was held on 18-21 October 1999 at NASA Kennedy Space Center and the DoubleTree Oceanfront Hotel, Cocoa Beach, Florida. Topics covered include gun solid propellant ignition and combustion, Electrothermal Chemical (ETC) propulsion phenomena, liquid propellant gun combustion and barrel erosion, gas phase propellant combustion, kinetic and decomposition phenomena, and liquid and hybrid propellant combustion behavior.
Long term mechanical properties of alkali activated slag
NASA Astrophysics Data System (ADS)
Zhu, J.; Zheng, W. Z.; Xu, Z. Z.; Leng, Y. F.; Qin, C. Z.
2018-01-01
This article reports a study of the microstructural and long-term mechanical properties of alkali-activated slag up to 180 days, with cement paste studied as the comparison. The mechanical properties analyzed include compressive strength, flexural strength, axial tensile strength and splitting tensile strength. The results showed that the alkali-activated slag had higher compressive and tensile strength. The slag is activated by potassium silicate (K2SiO3) and sodium hydroxide (NaOH) solutions to attain a silicate modulus of 1, using 12% potassium silicate and 5.35% sodium hydroxide; the volume dosage of water is 35% and 42%. The results indicate that alkali-activated slag is a rapid-hardening, early-strength cementitious material with excellent long-term mechanical properties. The compressive strengths of single-row-of-holes blocks, single-hole blocks and standard solid bricks basically meet engineering requirements. The microstructure of alkali-activated slag is studied by X-ray diffraction (XRD). The hydration products of alkali-activated slag are confirmed to be hydrated calcium silicate and hydrated calcium aluminate.
Partsch, H; Stout, N; Forner-Cordero, I; Flour, M; Moffatt, C; Szuba, A; Milic, D; Szolnoky, G; Brorson, H; Abel, M; Schuren, J; Schingale, F; Vignes, S; Piller, N; Döller, W
2010-10-01
A mainstay of lymphedema management involves the use of compression therapy. Compression therapy application is variable at different levels of disease severity. Evidence is scant to direct clinicians in best practice regarding compression therapy use. Further, compression clinical trials are fragmented and poorly extrapolable to the greater population. An ideal construct for conducting clinical trials with regard to compression therapy would promote parallel global initiatives based on a standard research agenda. The purpose of this article is to review current evidence in practice regarding compression therapy for BCRL management and, based on this evidence, offer an expert consensus recommendation for a research agenda and prescriptive trials. Recommendations herein focus solely on compression interventions. This document represents the proceedings of a session organized by the International Compression Club (ICC) in June 2009 in Ponzano (Veneto, Italy). The purpose of the meeting was to enable a group of experts to discuss the existing evidence for compression treatment in breast cancer related lymphedema (BCRL), concentrating on areas where randomized controlled trials (RCTs) are lacking. The current body of research suggests efficacy of compression interventions in the treatment and management of lymphedema. However, studies to date have failed to adequately address various forms of compression therapy and their optimal application in BCRL. We offer recommendations for standardized compression research trials for prophylaxis of arm lymphedema and for the management of chronic BCRL. Suggestions are also made regarding inclusion and exclusion criteria, measurement methodology, and additional variables of interest for researchers to capture.
This document should inform future research trials in compression therapy and serve as a guide to clinical researchers, industry researchers and lymphologists regarding the strengths, weaknesses and shortcomings of the current literature. By providing this construct for research trials, the authors aim to support evidence-based therapy interventions, promote a cohesive, standardized and informative body of literature to enhance clinical outcomes, improve the quality of future research trials, inform industry innovation and guide policy related to BCRL.
EPA Protocol Gas Verification Program - Presented at NIST Gas Panel Meeting
Accurate compressed gas calibration standards are needed to calibrate continuous emission monitors (CEMs) and ambient air quality monitors that are being used for regulatory purposes. US Environmental Protection Agency (EPA) established its traceability protocol to ensure that c...
Vanraes, Patrick; Willems, Gert; Daels, Nele; Van Hulle, Stijn W H; De Clerck, Karen; Surmont, Pieter; Lynen, Frederic; Vandamme, Jeroen; Van Durme, Jim; Nikiforov, Anton; Leys, Christophe
2015-04-01
In recent decades, several types of persistent substances have been detected in the aquatic environment at very low concentrations. Unfortunately, conventional water treatment processes are not able to remove these micropollutants. As such, advanced treatment methods are required to meet both current and anticipated maximum allowable concentrations. Plasma discharge in contact with water is a promising new technology, since it produces a wide spectrum of oxidizing species. In this study, a new type of reactor is tested, in which decomposition by atmospheric pulsed dielectric barrier discharge (pDBD) plasma is combined with micropollutant adsorption on a nanofiber polyamide membrane. Atrazine is chosen as the model micropollutant with an initial concentration of 30 μg/L. While the H2O2 and O3 production in the reactor is not influenced by the presence of the membrane, there is a significant increase in atrazine decomposition when the membrane is added: 85% atrazine removal can be obtained with the membrane, compared to only 61% without it, at the same experimental parameters. The by-products of atrazine decomposition identified by HPLC-MS are deethylatrazine and ammelide. Formation of these by-products is more pronounced when the membrane is added. These results indicate a synergetic effect of plasma discharge and pollutant adsorption, which is attractive for future water treatment applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.
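The "lossy plus residual" approach named in this survey is easy to sketch: a lossy layer approximates the image, the exact residual is kept, and both layers are entropy-coded. In this hypothetical illustration a crude coarse quantizer stands in for the lossy coder; the function names are assumptions of the sketch.

```python
import numpy as np

def lossy_plus_residual_split(x):
    """Lossless 'lossy plus residual' layering: a coarse lossy layer
    (here a crude quantizer standing in for any lossy coder) plus the
    exact integer residual; each layer is entropy-coded in practice."""
    lossy = (x // 8) * 8          # stand-in lossy layer
    residual = x - lossy          # small-magnitude residual, cheap to entropy-code
    return lossy, residual

def reconstruct(lossy, residual):
    # Adding the exact residual back makes the scheme lossless.
    return lossy + residual
```

Compression comes from the residual having far lower entropy than the raw samples, so an arithmetic or Markov-based coder spends few bits on it.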
Xia, J.; Xu, Y.; Miller, R.D.; Chen, C.
2006-01-01
A Gibson half-space model (a non-layered Earth model) has the shear modulus varying linearly with depth in an inhomogeneous elastic half-space. In a half-space of sedimentary granular soil under a geostatic state of initial stress, the density and the Poisson's ratio do not vary considerably with depth. In such an Earth body, the dynamic shear modulus is the parameter that mainly affects the dispersion of propagating waves. We have estimated shear-wave velocities in the compressible Gibson half-space by inverting Rayleigh-wave phase velocities. An analytical dispersion law of Rayleigh-type waves in a compressible Gibson half-space is given in an algebraic form, which makes our inversion process extremely simple and fast. The convergence of the weighted damping solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Calculation efficiency is achieved by reconstructing a weighted damping solution using singular value decomposition techniques. The main advantage of this algorithm is that only three parameters define the compressible Gibson half-space model. Theoretically, to determine the model by the inversion, only three Rayleigh-wave phase velocities at different frequencies are required. This is useful in practice where Rayleigh-wave energy is only developed in a limited frequency range or at certain frequencies, as with data acquired at man-made structures such as dams and levees. Two real examples are presented and verified by borehole S-wave velocity measurements. The results of these real examples are also compared with the results of the layered-Earth model. © Springer 2006.
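A damped solution built from the singular value decomposition, as used in such inversions, can be sketched as a generic Levenberg-Marquardt update from the Jacobian's SVD; this is an illustration of the technique, not the authors' exact implementation.

```python
import numpy as np

def damped_svd_step(J, r, damping):
    """Weighted-damping (Levenberg-Marquardt) model update: solves
    (J^T J + damping * I) dx = J^T r via the SVD of the Jacobian J,
    avoiding explicit formation of the normal equations."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    filt = s / (s ** 2 + damping)   # damped inverse singular values
    return Vt.T @ (filt * (U.T @ r))
```

Increasing the damping factor shrinks the step along directions with small singular values, which is what stabilizes convergence of the weighted damping solution.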
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, a research hot spot nowadays in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to bring problems such as data storage shortage and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and can still reconstruct the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients can be obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients need to be specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
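The two fusion rules named in the abstract (energy weighting for low-frequency bands, absolute-maximum selection for high-frequency bands) reduce to a few lines. This sketch collapses the "region" to the whole band for brevity and uses assumed function names; it is not the authors' exact implementation.

```python
import numpy as np

def fuse_high(c_ir, c_vis):
    """Absolute-maximum selection: per coefficient, keep whichever
    source band has the larger magnitude."""
    return np.where(np.abs(c_ir) >= np.abs(c_vis), c_ir, c_vis)

def fuse_low(a_ir, a_vis):
    """Energy-weighted averaging of the low-frequency (approximation)
    bands; the regional rule computes the weight per local window."""
    e_ir, e_vis = np.sum(a_ir ** 2), np.sum(a_vis ** 2)
    w = e_ir / (e_ir + e_vis)
    return w * a_ir + (1.0 - w) * a_vis
```

In the paper's pipeline these rules are applied to NSCT coefficients (after compressive measurement for the high-frequency bands) rather than directly to pixels.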
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
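The three-dimensional decomposition idea can be illustrated with a single separable level of the Haar transform (a much simpler filter than the wavelets ICER-3D actually uses): transforming along both spatial axes and the spectral axis splits the cube into eight subbands, so spectral correlation is exploited alongside spatial correlation. Function names and the Haar filter choice are assumptions of this sketch.

```python
import numpy as np

def haar_step(x, axis):
    """One Haar analysis step along one even-length axis:
    pairwise averages in the first half, differences in the second."""
    a = x.take(range(0, x.shape[axis], 2), axis=axis)
    b = x.take(range(1, x.shape[axis], 2), axis=axis)
    return np.concatenate([(a + b) / 2.0, (a - b) / 2.0], axis=axis)

def dwt3d_level(cube):
    """One separable 3D decomposition level over a
    (rows, cols, bands) hyperspectral cube, yielding 8 subbands."""
    for ax in range(3):
        cube = haar_step(cube, ax)
    return cube
```

For spectrally smooth data nearly all the energy lands in the low-low-low subband, which is what makes the transformed cube cheap to entropy-code.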
DWT-Based High Capacity Audio Watermarking
NASA Astrophysics Data System (ADS)
Fallahpour, Mehdi; Megías, David
This letter suggests a novel high-capacity robust audio watermarking algorithm that uses the high frequency band of the wavelet decomposition, to which the human auditory system (HAS) is not very sensitive. The main idea is to divide the high frequency band into frames and then, for embedding, change the wavelet samples based on the average of the relevant frame. The experimental results show that the method has very high capacity (about 5.5 kbps), without significant perceptual distortion (ODG in [-1, 0] and SNR about 33 dB), and provides robustness against common audio signal processing such as added noise, filtering, echo and MPEG compression (MP3).
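The frame-average embedding idea can be sketched with a quantization-index-modulation (QIM) style stand-in: each frame of the high-frequency wavelet band is shifted so that its mean lands in a bit-dependent quantization cell. This only illustrates the principle; the paper's exact embedding formula, frame length and step size differ.

```python
import numpy as np

def embed_bits(highband, bits, frame_len, delta=0.1):
    """QIM-style stand-in for the frame-average rule: shift each frame
    of the high-frequency wavelet band so its mean encodes one bit."""
    out = highband.astype(float).copy()
    for i, bit in enumerate(bits):
        frame = out[i * frame_len:(i + 1) * frame_len]
        m = frame.mean()
        offset = 0.75 if bit else 0.25   # bit-dependent cell position
        frame += (np.floor(m / delta) + offset) * delta - m
    return out

def extract_bits(highband, n_bits, frame_len, delta=0.1):
    """Recover bits from the fractional position of each frame's mean."""
    means = [highband[i * frame_len:(i + 1) * frame_len].mean()
             for i in range(n_bits)]
    return [0 if (m / delta) % 1.0 < 0.5 else 1 for m in means]
```

Because only the frame mean is constrained, each sample moves by at most delta, keeping the perceptual distortion of the high band small.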
Precision spectral manipulation of optical pulses using a coherent photon echo memory.
Buchler, B C; Hosseini, M; Hétet, G; Sparkes, B M; Lam, P K
2010-04-01
Photon echo schemes are excellent candidates for high efficiency coherent optical memory. They are capable of high-bandwidth multipulse storage, pulse resequencing and have been shown theoretically to be compatible with quantum information applications. One particular photon echo scheme is the gradient echo memory (GEM). In this system, an atomic frequency gradient is induced in the direction of light propagation leading to a Fourier decomposition of the optical spectrum along the length of the storage medium. This Fourier encoding allows precision spectral manipulation of the stored light. In this Letter, we show frequency shifting, spectral compression, spectral splitting, and fine dispersion control of optical pulses using GEM.
Detecting objects in radiographs for homeland security
NASA Astrophysics Data System (ADS)
Prasad, Lakshman; Snyder, Hans
2005-05-01
We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (scalable vector graphics) format that is proposed by the world wide web consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, our methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With users' increasing demand for extracting information from remote sensing images, there is an urgent need to significantly enhance the whole system's imaging quality and imaging capability through integrated design, achieving a compact structure, low mass and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The volume, weight and power consumption of these two units are relatively large, which fails to meet the requirements of a highly mobile remote sensing camera. Based on the technical requirements of such a camera, this paper designs a space-borne integrated signal processing and compression circuit, drawing on several technologies: high-speed, high-density mixed analog-digital PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for the research of the high-mobility remote sensing camera.
Evaluation of a Conductive Elastomer Seal for Spacecraft
NASA Technical Reports Server (NTRS)
Daniels, C. C.; Mather, J. L.; Oravec, H. A.; Dunlap, P. H., Jr.
2016-01-01
An electrically conductive elastomer was evaluated as a material candidate for a spacecraft seal. The elastomer used electrically conductive constituents as a means to reduce the resistance between mating interfaces of a sealed joint to meet spacecraft electrical bonding requirements. The compound's outgassing levels were compared against published NASA requirements. The compound was formed into a hollow O-ring seal and its compression set was measured. The O-ring seal was placed into an interface and the electrical resistance and leak rate were quantified. The amount of force required to fully compress the test article in the sealing interface and the force needed to separate the joint were also measured. The outgassing and resistance measurements were below the maximum allowable levels. The room temperature compression set and leak rates were fairly high when compared against other typical spacecraft seal materials, but were not excessive. The compression and adhesion forces were desirably low. Overall, the performance of the elastomer compound was sufficient to be considered for future spacecraft seal applications.
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate, and at the same time simplest, models of both the corresponding sub-systems and the system as a whole. In recent works, two new methods of decomposition of the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delayed correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions, which are the "structural material" for linear spatio-temporal modes, is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls much more sharply [5-7]. However, neglecting time-lag correlations introduces an error in mode selection that is uncontrolled and increases with the mode's time scale. In this report we combine the two methods in such a way that the resulting algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9].
We compare efficiency of different methods of decomposition and discuss the abilities of nonlinear spatio-temporal modes for construction of adequate and concurrently simplest ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7. Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. 
http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
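The linear (MSSA) step of the first method can be sketched as delay embedding followed by an SVD; the decay of the resulting singular-value spectrum is the diagnostic the authors use to judge how well the modes separate. This is a generic illustration with assumed names, not the code used in [1-3].

```python
import numpy as np

def mssa_spectrum(X, window):
    """MSSA sketch: delay-embed an (n_time, n_channels) series into a
    trajectory matrix and return its singular values, whose decay shows
    how well linear spatio-temporal modes capture the variance."""
    n = X.shape[0]
    rows = n - window + 1
    # Stack lagged copies of all channels side by side.
    traj = np.hstack([X[i:i + rows] for i in range(window)])
    return np.linalg.svd(traj, compute_uv=False)
```

A single oscillatory mode occupies exactly two singular directions of the trajectory matrix, which is how MSSA isolates it from the rest of the variability.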
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-05
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l 1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. 
Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
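The SVD acceleration can be sketched in a few lines. Here the constrained LP is replaced by an unconstrained least-squares stand-in for brevity: the influence matrix is truncated to its dominant rank-k part, the solve happens in the reduced space, and the result is back-projected onto beam weights. Names and the least-squares simplification are assumptions of this sketch.

```python
import numpy as np

def svd_reduced_solve(A, d, k):
    """Rank-k SVD compression of the influence matrix A: solve the
    fitting problem in the k-dimensional reduced space, then
    back-project to recover the full beam-weight vector."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    y = (U[:, :k].T @ d) / s[:k]   # reduced-dimension solve
    return Vt[:k].T @ y            # back-projection to beam weights
```

Because the influence matrix is nearly degenerate (many beams deposit similar dose patterns), k can be far smaller than the number of beams, which is where the reported speed-up comes from.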
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Technical Reports Server (NTRS)
Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.
2008-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mbs(exp -1) of electron data while the DIS generates 1.1-Mbs(exp -1) of ion data, yielding an FPI total data rate of 7.6-Mbs(exp -1). The FPI electron/ion data is collected by the IDPU then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mbs(exp -1). Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements.
Topics to be discussed include: review of compression algorithm; data quality; data formatting/organization; and implications for data/matrix pruning. To conclude, a presentation of the base-lined FPI data compression approach is provided.
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Astrophysics Data System (ADS)
Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viňas, A. F.; Simpson, D. G.; Moore, T. E.
2008-12-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s-1 of electron data while the DIS generates 1.1 Mb s-1 of ion data, yielding an FPI total data rate of 7.6 Mb s-1. The FPI electron/ion data are collected by the IDPU, then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements.
Topics to be discussed include: (i) Review of compression algorithm; (ii) Data quality; (iii) Data formatting/organization; (iv) Compression optimization; and (v) Implications for data/matrix pruning. We conclude with a presentation of the base-lined FPI data compression approach.
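The rate budget quoted above pins down the minimum compression the IDPU must deliver. A back-of-envelope sketch (rates taken directly from the abstract; variable names are ours):

```python
# FPI telemetry budget: DES + DIS output vs. the CIDP allocation.
des_rate_mbps = 6.5          # DES electron data rate, Mb/s
dis_rate_mbps = 1.1          # DIS ion data rate, Mb/s
cidp_allocation_mbps = 1.5   # FPI allocation to the CIDP, Mb/s

total_mbps = des_rate_mbps + dis_rate_mbps
required_ratio = total_mbps / cidp_allocation_mbps

print(f"FPI total rate: {total_mbps:.1f} Mb/s")                 # 7.6 Mb/s
print(f"minimum average compression: {required_ratio:.2f}:1")   # about 5.07:1
```

So the CCSDS 122.0-B-1 coder must average a little better than 5:1 across burst-mode data for the allocation to hold.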
Venous leg ulcer management: single use negative pressure wound therapy.
Dowsett, Caroline; Grothier, Lorraine; Henderson, Valerie; Leak, Kathy; Milne, Jeanette; Davis, Lynn; Bielby, Alistair; Timmons, John
2013-06-01
A number of leg ulcer and tissue viability specialists from across the UK were invited to evaluate PICO (Smith and Nephew, Hull) as a treatment for venous leg ulcers, used in conjunction with a variety of compression bandages and garments. Patients across 5 sites had PICO applied in conjunction with compression therapy. The treating clinicians were then asked to give feedback on the outcomes of the patients on whom they had used the new device. All feedback was recorded at a meeting, and this was used to create a guideline for use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby J. Baumgard; Richard E. Winsor
2009-12-31
The objectives of the reported work were: to apply the stoichiometric compression ignition (SCI) concept to a 9.0 liter diesel engine; to obtain engine-out NO{sub x} and PM exhaust emissions so that the engine can meet 2010 on-highway emission standards by applying a three-way catalyst for NO{sub x} control and a particulate filter for PM control; and to simulate and optimize the engine and air system to approach 50% thermal efficiency using variable valve actuation and electric turbo compounding. The work demonstrated that an advanced diesel engine can be operated at stoichiometric conditions with reasonable particulate and NO{sub x} emissions at full power and peak torque conditions; calculated that the SCI engine will operate at 42% brake thermal efficiency without advanced hardware, turbocompounding, or waste heat recovery; and determined that EGR is not necessary for this advanced concept engine, which greatly simplifies the concept.
Noncatalytic hydrazine thruster development - 0.050 to 5.0 pounds thrust
NASA Technical Reports Server (NTRS)
Murch, C. K.; Sackheim, R. L.; Kuenzly, J. D.; Callens, R. A.
1976-01-01
Noncatalytic (thermal-decomposition) hydrazine thrusters can operate in both the pulsing and steady-state modes to meet the propulsive requirements of long-life spacecraft. The thermal decomposition mode yields higher specific impulse than is characteristic of catalytic thrusters at similar thrust levels. This performance gain is the result of higher temperature operation and a lower fraction of ammonia dissociation. Some life-limiting factors of catalytic thrusters are eliminated.
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Astrophysics Data System (ADS)
Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.
2009-12-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s-1 of electron data while the DIS generates 1.1 Mb s-1 of ion data, yielding an FPI total data rate of 7.6 Mb s-1. The FPI electron/ion data are collected by the IDPU, then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data.
Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements and Cluster/CIS ion measurements. Topics to be discussed include: (i) Review of compression algorithm; (ii) Data quality; (iii) Data formatting/organization; (iv) Compression optimization; (v) Investigation of pseudo-log precompression; and (vi) Analysis of compression effectiveness for burst mode as well as fast survey mode data packets for both electron and ion data. We conclude with a presentation of the current base-lined FPI data compression approach.
NASA Astrophysics Data System (ADS)
Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping
2014-10-01
Air-coupled ultrasonic testing (ACUT) has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. The utilisation of signal-processing techniques in non-destructive testing is therefore highly valuable. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and pulse compression is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to acquire a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
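The pulse-compression stage rests on the autocorrelation property of Barker codes: a sharp main lobe with unit-magnitude sidelobes. A minimal pure-Python sketch for the 13-bit code mentioned above (the paper pairs this with wavelet pre-filtering, which is omitted here):

```python
# Barker-13 biphase code and its aperiodic autocorrelation, i.e. the
# matched-filter output for a noise-free echo of the code.
BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorrelation(code):
    """Aperiodic autocorrelation of the code at every lag."""
    n = len(code)
    return [sum(code[i] * code[i + lag] for i in range(n) if 0 <= i + lag < n)
            for lag in range(-(n - 1), n)]

acf = autocorrelation(BARKER_13)
main_lobe = max(acf)                                         # 13, at zero lag
side_lobe = max(abs(v) for v in acf if abs(v) < main_lobe)   # 1 for any Barker code
print(main_lobe, side_lobe)  # 13 1
```

The 13:1 main-to-side-lobe ratio (about 22.3 dB) is why longer Barker codes give cleaner compressed pulses, consistent with the 5-13 bit comparison in the abstract.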
An inventory of bispectrum estimators for redshift space distortions
NASA Astrophysics Data System (ADS)
Regan, Donough
2017-12-01
In order to best improve constraints on cosmological parameters and on models of modified gravity using current and future galaxy surveys, it is necessary to maximally exploit the available data. As redshift-space distortions mean statistical translation invariance is broken for galaxy observations, this will require measurement of the monopole, quadrupole and hexadecapole of not just the galaxy power spectrum, but also the galaxy bispectrum. A recent (2015) paper by Scoccimarro demonstrated how the standard bispectrum estimator may be expressed in terms of Fast Fourier Transforms (FFTs) to afford an extremely efficient algorithm, allowing the bispectrum multipoles on all scales and triangle shapes to be measured in comparable time to those of the power spectrum. In this paper we present a suite of alternative proxies to measure the three-point correlation multipoles. In particular, we describe a modal (or plane wave) decomposition to capture the information in each multipole in a series of basis coefficients, and also describe three compressed estimators formed using the skew-spectrum, the line correlation function and the integrated bispectrum, respectively. As well as each of the estimators offering a different measurement channel, and thereby a robustness check, it is expected that some (especially the modal estimator) will offer a vast data compression, and so a much reduced covariance matrix. This compression may be vital to reduce the computational load involved in extracting the available three-point information.
Three-dimensional dictionary-learning reconstruction of (23)Na MRI data.
Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2016-04-01
To reduce noise and artifacts in (23)Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial (23)Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo (23)Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo (23)Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled (23)Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.
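PSNR, the first quality metric used above, is easy to reproduce (SSIM is considerably more involved and omitted). A minimal sketch with made-up numbers:

```python
import math

def psnr_db(reference, reconstruction, peak):
    """Peak signal-to-noise ratio in dB between two equal-length signals."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [100.0, 100.0, 100.0, 100.0]
rec = [90.0, 110.0, 90.0, 110.0]      # every sample off by 10 -> MSE = 100
print(psnr_db(ref, rec, peak=100.0))  # 20.0 dB
```

For scale, the 5.1 dB PSNR gain reported above corresponds to roughly a factor 1.8 reduction in RMS error (10^(5.1/20) ≈ 1.8).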
Khuan, L Y; Bister, M; Blanchfield, P; Salleh, Y M; Ali, R A; Chan, T H
2006-06-01
Increased inter-equipment connectivity coupled with advances in Web technology allows ever escalating amounts of physiological data to be produced, far too much to be displayed adequately on a single computer screen. The consequence is that large quantities of insignificant data will be transmitted and reviewed. This carries an increased risk of overlooking vitally important transients. This paper describes a technique to provide an integrated solution based on a single algorithm for the efficient analysis, compression and remote display of long-term physiological signals with infrequent, short-duration yet vital events, to effect a reduction in data transmission and display cluttering and to facilitate reliable data interpretation. The algorithm analyses data at the server end and flags significant events. It produces a compressed version of the signal at a lower resolution that can be satisfactorily viewed in a single screen width. This reduced set of data is initially transmitted together with a set of 'flags' indicating where significant events occur. Subsequent transmissions need only involve transmission of flagged data segments of interest at the required resolution. Efficient processing and code protection with decomposition alone is novel. The fixed transmission length method ensures clutter-less display, irrespective of the data length. The flagging of annotated events in arterial oxygen saturation, electroencephalogram and electrocardiogram illustrates the generic property of the algorithm. Data reduction of 87% to 99% and improved displays are demonstrated.
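The server-side reduction described above can be sketched as a min/max binning pass to a fixed screen width, with a flag per bin marking significant excursions (the threshold, bin scheme and names here are illustrative, not the paper's):

```python
# Reduce a long signal to screen_width (min, max) bins and flag bins whose
# range exceeds a threshold; only flagged segments need later full-resolution
# transmission.
def reduce_and_flag(signal, screen_width, threshold):
    n = len(signal)
    bins, flags = [], []
    for b in range(screen_width):
        seg = signal[b * n // screen_width:(b + 1) * n // screen_width]
        lo, hi = min(seg), max(seg)
        bins.append((lo, hi))              # min/max pair preserves transients
        flags.append(hi - lo > threshold)  # "significant event" marker
    return bins, flags

# 10,000 flat samples with one short spike; display width of 100 bins.
data = [0.0] * 10_000
data[5_432] = 5.0
bins, flags = reduce_and_flag(data, screen_width=100, threshold=1.0)
print(flags.index(True))  # bin 54 contains the event
```

Whatever the signal length, only `screen_width` bins plus the flag vector are transmitted initially, which is what makes the display clutter-free.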
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that the processing of ISAR imaging can be denoted mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that utilizes random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. In order to handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D by a Kronecker product, which increases the size of the dictionary and the computational cost sharply. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of the image. It is proved that 2D-SL0 can achieve results equivalent to other 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present the results of simulation experiments demonstrating the effectiveness and feasibility of our method.
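A sketch of the random sub-sampling step in both dimensions, plus the dimensionality argument against the Kronecker route (grid sizes and the 25% rate are made up for illustration):

```python
import random

# Illustrative 2D random sub-sampling mask over (range x azimuth) cells.
random.seed(0)
n_range, n_azimuth, rate = 64, 64, 0.25
mask = [[1 if random.random() < rate else 0 for _ in range(n_azimuth)]
        for _ in range(n_range)]

kept = sum(map(sum, mask))
print(f"samples kept: {kept} of {n_range * n_azimuth}")

# Why 2D reconstruction matters: vectorising via a Kronecker product turns a
# 64 x 64 problem into one whose combined dictionary is 4096 x 4096, which is
# the blow-up in dictionary size and memory that 2D-SL0 avoids.
print(n_range * n_azimuth, "x", n_range * n_azimuth)
```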
Survey and analysis of multiresolution methods for turbulence data
Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; ...
2015-11-10
This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, to capture the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both the velocity and active scalar (density) fields and their derivatives, spectra, and the properties of constant density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for large-dataset handling, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
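A toy version of the coefficient-truncation idea using a single-level Haar decomposition (pure Python; the study itself compares far richer bases such as Daubechies wavelets, curvelets and surfacelets, and the signal and threshold here are made up):

```python
# One level of a Haar wavelet transform with small-coefficient truncation.
def haar_forward(x):
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

x = [4.0, 4.1, 8.0, 7.9, 1.0, 1.1, 5.0, 5.2]
approx, detail = haar_forward(x)
detail_truncated = [d if abs(d) > 0.08 else 0.0 for d in detail]  # drop small details
x_hat = haar_inverse(approx, detail_truncated)
max_err = max(abs(a - b) for a, b in zip(x, x_hat))
print(f"max reconstruction error after truncation: {max_err:.3f}")
```

Keeping all coefficients reconstructs the signal exactly; discarding the small details trades a bounded pointwise error for fewer stored coefficients, which is the trade-off the survey quantifies for turbulence fields.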
Simulating squeeze flows in multiaxial laminates using an improved TIF model
NASA Astrophysics Data System (ADS)
Ibañez, R.; Abisset-Chavanne, Emmanuelle; Chinesta, Francisco
2017-10-01
Thermoplastic composites are widely considered in structural parts. In this paper attention is paid to the squeeze flow of continuous fiber laminates. In the case of unidirectional prepregs, the ply constitutive equation is modeled as a transversally isotropic fluid that must satisfy both fiber inextensibility and fluid incompressibility. When the laminate is squeezed, the flow kinematics exhibits a complex dependency along the laminate thickness, requiring a detailed velocity description through the thickness. In a former work, the solution making use of an in-plane-out-of-plane separated representation within the PGD (Proper Generalized Decomposition) framework was successfully accomplished when both kinematic constraints (inextensibility and incompressibility) were introduced using a penalty formulation to circumvent the LBB constraints. However, such a formulation makes the calculation of fiber tractions and compression forces difficult, the latter being required in rheological characterizations. In this paper the former penalty formulation is replaced by a mixed formulation that makes use of two Lagrange multipliers while addressing the LBB stability conditions within the separated representation framework, questions not addressed until now.
Fabrication and Characterization of Porous MgAl₂O₄ Ceramics via a Novel Aqueous Gel-Casting Process.
Yuan, Lei; Liu, Zongquan; Liu, Zhenli; He, Xiao; Ma, Beiyue; Zhu, Qiang; Yu, Jingkun
2017-11-30
A novel aqueous gel-casting process has been successfully developed to fabricate porous MgAl₂O₄ ceramics using hydratable alumina and MgO powders as raw materials and deionized water as the hydration agent. The effects of different amounts of deionized water on the hydration properties, apparent porosity, bulk density, microstructure, pore size distribution and compressive strength of the samples were investigated. The results indicated that the porosity and the microstructure of the porous MgAl₂O₄ ceramics were governed by the amount of deionized water added. The porous structure was formed by the liberation of physisorbed water and the decomposition of hydration products such as bayerite, brucite and boehmite. With suitable additions of deionized water, the fabricated porous MgAl₂O₄ ceramics had a high apparent porosity (52.5-65.8%), a small average pore size (around 1-3 μm) and a relatively high compressive strength (12-28 MPa). The aqueous gel-casting process, which is easy to implement, is expected to be a promising candidate for the preparation of Al₂O₃-based porous ceramics.
NASA Technical Reports Server (NTRS)
Gajjar, J. S. B.
1993-01-01
The nonlinear stability of an oblique mode propagating in a two-dimensional compressible boundary layer is considered under the long-wavelength approximation. The growth rate of the wave is assumed to be small so that the concept of unsteady nonlinear critical layers can be used. It is shown that the spatial/temporal evolution of the mode is governed by a pair of coupled unsteady nonlinear equations for the disturbance vorticity and density. Expressions for the linear growth rate show clearly the effects of wall heating and cooling, and in particular how heating destabilizes the boundary layer for these long-wavelength inviscid modes at O(1) Mach numbers. A generalized expression for the linear growth rate is obtained and is shown to compare very well, for a range of frequencies and wave angles at moderate Mach numbers, with full numerical solutions of the linear stability problem. The numerical solution of the nonlinear unsteady critical layer problem using a novel method based on Fourier decomposition and Chebyshev collocation is discussed and some results are presented.
Calcium leaching behavior of cementitious materials in hydrochloric acid solution.
Yang, Huashan; Che, Yujun; Leng, Faguang
2018-06-11
The calcium leaching behavior of cement paste and silica fume modified calcium hydroxide paste, exposed to hydrochloric acid solution, is reported in this paper. The kinetics of degradation were assessed by the changes in pH of the hydrochloric acid solution with time. The changes in compressive strength of the specimens in hydrochloric acid with time were also tested. Hydration products of leached specimens were analyzed by X-ray diffraction (XRD), differential scanning calorimetry (DSC), thermogravimetry (TG), and atomic force microscopy (AFM). Test results show that there is a dynamic equilibrium in the supply and consumption of calcium hydroxide in hydrochloric acid solution, which governs the stability of hydration products such as calcium silicate hydrate (C-S-H). The decrease in compressive strength indicates that C-S-H is decomposed when the concentration of calcium hydroxide in the pore solution is lower than the equilibrium concentration of the hydration products. Furthermore, the hydration of unhydrated clinker delayed the decomposition of C-S-H in hydrochloric acid solution due to the increase of calcium hydroxide in the pore solution of the cementitious materials.
An attempt to perform water balance in a Brazilian municipal solid waste landfill.
São Mateus, Maria do Socorro Costa; Machado, Sandro Lemos; Barbosa, Maria Cláudia
2012-03-01
This paper presents an attempt to model the water balance in the metropolitan center landfill (MCL) in Salvador, Brazil. Aspects such as the municipal solid waste (MSW) initial water content, mass loss due to decomposition, MSW liquid expelling due to compression and those related to weather conditions, such as the amount of rainfall and evaporation are considered. Superficial flow and infiltration were modeled considering the waste and the hydraulic characteristics (permeability and soil-water retention curves) of the cover layer and simplified uni-dimensional empirical models. In order to validate the modeling procedure, data from one cell at the landfill were used. Monthly waste entry, volume of collected leachate and leachate level inside the cell were monitored. Water balance equations and the compressibility of the MSW were used to calculate the amount of leachate stored in the cell and the corresponding leachate level. Measured and calculated values of the leachate level inside the cell were similar and the model was able to capture the main trends of the water balance behavior during the cell operational period. Copyright © 2011 Elsevier Ltd. All rights reserved.
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse model is then replaced by the signal matrix, giving a new, more concise sparse model in which both the scale of the localization problem and the noise level are reduced; this new model is solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The proposed algorithm can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as is proved in this paper.
Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach
Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab
2018-01-01
Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR=6 and PRD=1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring. PMID:29337892
Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach.
Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab
2018-01-16
Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR = 6 and PRD = 1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.
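Both figures of merit quoted above (CR and PRD) have standard definitions, which the abstract does not restate; a sketch with made-up sample values:

```python
import math

def compression_ratio(original_bits, compressed_bits):
    """CR: original size over compressed size."""
    return original_bits / compressed_bits

def prd_percent(x, x_hat):
    """Percentage root-mean-square difference between original and reconstruction."""
    num = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    den = sum(a * a for a in x)
    return 100.0 * math.sqrt(num / den)

print(compression_ratio(6600, 1100))    # 6.0, matching the CR = 6 quoted above
x = [10.0, 12.0, 9.0, 11.0]
x_hat = [10.1, 11.9, 9.1, 10.9]
print(round(prd_percent(x, x_hat), 3))  # 0.947
```

Note that PRD conventions vary (some normalize after removing the signal mean), so reported values are only comparable when the same definition is used.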
3D Multifunctional Ablative Thermal Protection System
NASA Technical Reports Server (NTRS)
Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken
2015-01-01
NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical realtime solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
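The adaptive Golomb coding stage above can be illustrated with its Rice special case (power-of-two parameter). This sketch is ours, not the flight implementation: the predictor and the adaptive selection of k are omitted, and k is fixed.

```python
# Golomb-Rice coding of a non-negative residual: quotient in unary, then the
# low k bits of the remainder. Small residuals get short codewords.
def rice_encode(value, k):
    q = value >> k                  # quotient, coded as q ones and a zero
    r = value & ((1 << k) - 1)      # remainder, coded in k binary bits
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")             # length of the unary run
    return (q << k) | int(bits[q + 1:q + 1 + k], 2)

print(rice_encode(9, 2))            # '11001': q=2 -> '110', r=1 -> '01'
```

In an adaptive coder, k is chosen from the recent magnitude of prediction residuals, so the code length tracks the local statistics of each spectral band.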
Site dependent factors affecting the economic feasibility of solar powered absorption cooling
NASA Technical Reports Server (NTRS)
Bartlett, J. C.
1978-01-01
A procedure was developed to evaluate the cost effectiveness of combining an absorption cycle chiller with a solar energy system. A basic assumption of the procedure is that a solar energy system exists for meeting the heating load of the building, and that the building must be cooled. The decision to be made is to either cool the building with a conventional vapor compression cycle chiller or to use the existing solar energy system to provide a heat input to the absorption chiller. Two methods of meeting the cooling load not supplied by solar energy were considered. In the first method, heat is supplied to the absorption chiller by a boiler using fossil fuel. In the second method, the load not met by solar energy is met by a conventional vapor compression chiller. In addition, the procedure can consider waste heat as another form of auxiliary energy. Commercial applications of solar cooling with an absorption chiller were found to be more cost effective than residential applications. In general, it was found that the larger the chiller, the more economically feasible it would be. Also, it was found that a conventional vapor compression chiller is a viable alternative for the auxiliary cooling source, especially for the larger chillers. The results of the analysis give a relative rating of the sites considered as to the economic feasibility of solar cooling.
Pressure profiles of sport compression stockings.
Reich-Schupke, Stefanie; Surhoff, Stefan; Stücker, Markus
2016-05-01
While sport compression stockings (SCS) have become increasingly popular, there is no regulatory norm as exists for medical compression stockings (MCS). The objective of this pilot study was to compare five SCS with respect to their pressure profiles ex vivo and in vivo, and in relation to German standards for MCS (RAL norm). In vivo (10 competitive athletes; standardized procedure using the Kikuhime pressure monitor) and ex vivo (tested at the Hohenstein Institute) pressure profiles were tested for the following products: CEP Running Progressive Socks, Falke Running Energizing, Sigvaris Performance, X-Socks Speed Metal Energizer, and 2XU Compression Race Socks. Ex vivo ankle pressures of CEP (25.6 mmHg) and 2XU (23.2 mmHg) corresponded to class 2 MCS; that of Sigvaris (20.8 mmHg), to class 1 MCS. The remaining SCS achieved lower pressure values. The pressure gradients showed marked differences, and did not meet MCS standards. Average in vivo pressures were higher for 2XU, CEP, and Sigvaris than for Falke and X-Socks. However, in vivo values for all SCS were below those of class 1 MCS. None of the SCS showed the decreasing pressure gradient (from distal to proximal) required for MCS. In vivo and ex vivo pressure profiles of all SCS examined showed marked heterogeneity, and did not meet MCS standards. Consequently, the clinical and practical effects of SCS cannot be compared, either. It would therefore be desirable to establish a classification that allows for the categorization and comparison of various SCS as well as their selection based on individual preferences and needs (high vs. low pressure, progressive vs. degressive profile). © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
High-speed and high-ratio referential genome compression.
Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan
2017-11-01
The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 217 to 82 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust to deal with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use. It can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
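The 2-bit encoding stage mentioned above stores each of the four bases in two bits before referential matching. A stdlib-only sketch of such packing, assuming a clean ACGT alphabet (the real tool must also handle ambiguity codes such as N and lowercase masking; the function names are illustrative):

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack_2bit(seq):
    """Pack an ACGT string into bytes, 4 bases per byte (last group padded with 'A')."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4].ljust(4, "A")
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        out.append(b)
    return bytes(out)

def unpack_2bit(data, n):
    """Recover the first n bases from the packed representation."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(byte >> shift) & 3])
    return "".join(seq[:n])
```

This alone gives a fixed 4:1 reduction over one-byte-per-base text; the large ratios reported come from the subsequent greedy matching against the reference.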
Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anant, K.S.
1997-06-01
In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results.
The methods developed in this thesis; the feature identification methods of Chapter 3, the compression methods of Chapter 4, as well as the wavelet design methods of Chapter 5, are general enough to be easily applied to other transient signals.
Chen, Jing-Yin; Kim, Minseob; Yoo, Choong-Shik; Dattelbaum, Dana M; Sheffield, Stephen
2010-06-07
We have studied the pressure-induced phase transition and chemical decomposition of hydrogen peroxide and its mixtures with water to 50 GPa, using confocal micro-Raman spectroscopy and synchrotron x-ray diffraction. The x-ray results indicate that pure hydrogen peroxide crystallizes into a tetragonal structure (P4(1)2(1)2), the same structure previously found in 82.7% H(2)O(2) at high pressures and in pure H(2)O(2) at low temperatures. The tetragonal phase (H(2)O(2)-I) is stable to 15 GPa, above which it transforms into an orthorhombic structure (H(2)O(2)-II) over a relatively large pressure range between 13 and 18 GPa. Inferring from the splitting of the nu(s)(O-O) stretching mode, the phase I-to-II transition pressure decreases in diluted H(2)O(2) to around 7 GPa for the 41.7% H(2)O(2) and 3 GPa for the 9.5%. Above 18 GPa H(2)O(2)-II gradually decomposes to a mixture of H(2)O and O(2), which completes at around 40 GPa for pure and 45 GPa for the 9.5% H(2)O(2). Upon pressure unloading, H(2)O(2) also decomposes to H(2)O and O(2) mixtures across the melts, occurring at 2.5 GPa for pure and 1.5 GPa for the 9.5% mixture. At H(2)O(2) concentrations below 20%, decomposed mixtures form oxygen hydrate clathrates at around 0.8 GPa--just after H(2)O melts. The compression data of pure H(2)O(2) and the stability data of the mixtures seem to indicate that the high-pressure decomposition is likely due to the pressure-induced densification, whereas the low-pressure decomposition is related to the heterogeneous nucleation process associated with H(2)O(2) melting.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a way that is easily accessed by everyone. However, a problem appears when someone else claims a creation as their own property or modifies some part of it. This makes copyright protection necessary; one example is the watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables total invisibility of the watermark when it is inserted into a carrier image: the carrier image does not undergo any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off occurs between the invisibility and the robustness of the image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in the low-frequency band is robust to Gaussian blur, rescaling, and JPEG compression attacks, whereas embedding in the high-frequency band is robust to Gaussian noise.
A novel image watermarking method based on singular value decomposition and digital holography
NASA Astrophysics Data System (ADS)
Cai, Zhishan
2016-10-01
According to the information optics theory, a novel watermarking method based on Fourier-transformed digital holography and singular value decomposition (SVD) is proposed in this paper. First of all, a watermark image is converted to a digital hologram using the Fourier transform. After that, the original image is divided into many non-overlapping blocks. All the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are then embedded into the singular value components of each block using an addition principle. Finally, SVD inverse transformation is carried out on the blocks and hologram to generate the watermarked image. The watermark information embedded in each block is extracted at first when the watermark is extracted. After that, an averaging operation is carried out on the extracted information to generate the final watermark information. Finally, the algorithm is simulated. Furthermore, to test the encrypted image's resistance performance against attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cut, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
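The "addition principle" embedding described above can be sketched with NumPy: the watermark's singular values are scaled and added to each host block's singular values, and extraction inverts the addition. This is a simplified sketch; the Fourier-transform hologram step is omitted, the function names are illustrative, and alpha is an assumed strength factor:

```python
import numpy as np

def embed_svd(host_block, wm_sv, alpha=0.05):
    """Add scaled watermark singular values to the host block's singular values
    (the abstract's 'addition principle'); alpha trades robustness vs. visibility."""
    U, S, Vt = np.linalg.svd(host_block, full_matrices=False)
    S_marked = S + alpha * wm_sv
    return U @ np.diag(S_marked) @ Vt

def extract_svd(marked_block, host_block, alpha=0.05):
    """Recover the embedded singular values by differencing against the host."""
    S_m = np.linalg.svd(marked_block, compute_uv=False)
    S_h = np.linalg.svd(host_block, compute_uv=False)
    return (S_m - S_h) / alpha
```

Because only singular values are perturbed, the spatial distortion is spread across the whole block, which is what gives SVD-based schemes their robustness to noise and compression.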
Anhydrite EOS and Phase Diagram in Relation to Shock Decomposition
NASA Technical Reports Server (NTRS)
Ivanov, B. A.; Langenhorst, F.; Deutsch, A.; Hornemann, U.
2004-01-01
In the context of the Chicxulub impact, it became recently obvious that experimental and theoretical research on the shock behavior of sulfates is essential for an assessment of the role of shock-released gases in the K/T mass extinction. The Chicxulub crater is the most important large impact structure where the bolide penetrated a sedimentary layer with large amounts of interbedded anhydrite (Haughton also has significant anhydrite in the target). The sulfuric gas production by shock compression/decompression of anhydrite is an important issue, even if the Chicxulub crater is only half as large as so far assumed. The comparison of experimental data for anhydrite, shocked with different techniques at various laboratories, reveals large differences in the threshold pressures for melting and decomposition. To gain insight into this issue, we have made a theoretical investigation of the thermodynamic properties of anhydrite. The project includes the review of data published in the last 40 years - reasons to study anhydrite cover a wide field of interests: from industrial problems of cement and ceramic production to the analysis of nuclear underground explosions in salt domes, conducted in the USA and USSR in the 1970s.
The ARO Working Group Meeting on Ignition Processes, June 1978.
1980-03-01
great variety of products are formed from HMX and RDX, including several which cannot be readily explained by the propellant molecules simply breaking...nascent product from HMX is N2O, which indicates that some chemistry has taken place somewhere (Figure 1 shows the HMX and RDX molecules for reference)...who described his research into the gas phase unimolecular decomposition of molecules used as explosives (TNT, HMX, RDX). The purpose of this research
49 CFR 173.307 - Exceptions for compressed gases.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... For transportation by air, tires and tire assemblies must meet the conditions in § 175.8(b)(4) of this subchapter. (3) Balls used for sports. (4) Refrigerating machines, including dehumidifiers and air conditioners, and components thereof, such as precharged tubing containing: (i) 12 kg (25 pounds) or less of a...
46 CFR 153.808 - Examination required for a Certificate of Compliance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... DANGEROUS CARGOES SHIPS CARRYING BULK LIQUID, LIQUEFIED GAS, OR COMPRESSED GAS HAZARDOUS MATERIALS Design and Equipment Testing and Inspection § 153.808 Examination required for a Certificate of Compliance... Officer in Charge, Marine Inspection, determines whether or not the vessel meets the requirements of this...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-11
... and MSPs, trade associations, public interest groups, traders, and other interested parties. In... for the Proposed Rules The Working Group of Commercial Energy Firms (The Working Group) [[Page 55905... CEA. The Working Group believes that the Commission could meet its statutory mandate by publishing...
Microstructures and Mechanical Properties of Two-Phase Alloys Based on NbCr(2)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cady, C.M.; Chen, K.C.; Kotula, P.G.
A two-phase, Nb-Cr-Ti alloy (bcc + C15 Laves phase) has been developed using several alloy design methodologies. In an effort to understand processing-microstructure-property relationships, different processing routes were employed. The resulting microstructures and mechanical properties are discussed and compared. Plasma arc-melted samples served to establish baseline as-cast properties. In addition, a novel processing technique, involving decomposition of a supersaturated and metastable precursor phase during hot isostatic pressing (HIP), was used to produce a refined, equilibrium two-phase microstructure. Quasi-static compression tests as a function of temperature were performed on both alloy types. Different deformation mechanisms were encountered based upon temperature and microstructure.
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
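The cut-detection idea can be illustrated with a much simpler stand-in for the paper's Bayesian classifier: flag a frame as a likely cut when its mean motion-vector magnitude deviates sharply from the statistics of the preceding frames. The function name, z-score test, and threshold below are illustrative assumptions, not the paper's method:

```python
import statistics

def detect_cuts(frame_mv_mags, z_thresh=3.0):
    """Flag likely scene cuts where a frame's mean motion-vector magnitude
    is a statistical outlier relative to all preceding frames.
    (Simplified stand-in for a Bayesian classifier on MPEG-1 motion vectors.)"""
    cuts = []
    for i in range(2, len(frame_mv_mags)):
        history = frame_mv_mags[:i]
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history) or 1e-9  # avoid division by zero
        if abs(frame_mv_mags[i] - mu) / sigma > z_thresh:
            cuts.append(i)
    return cuts
```

Working directly on compressed-domain motion vectors, as the paper does, avoids decoding pixels entirely, which is where the processing-speed advantage comes from.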
VDT microplane model with anisotropic effectiveness and plasticity
NASA Astrophysics Data System (ADS)
Benelfellah, Abdelkibir; Gratton, Michel; Caliez, Michael; Frachon, Arnaud; Picart, Didier
2018-03-01
The opening-closing state of the microcracks is a kinematic phenomenon usually modeled using a set of damage effectiveness variables, which results in different elastic responses for the same damage level. In this work, the microplane model with volumetric, deviatoric and tangential decomposition, denoted VDT, is modified. The influence of the confining pressure is taken into account in the evolution laws of the damage variables. For a better understanding of the mechanisms introduced into the model, the damage rosettes are presented for a given strain level. The model is validated through comparisons of the simulations with the experimental results of monotonic and cyclic tensile and compressive testing at different levels of confining pressure.
Canonical forms of multidimensional steady inviscid flows
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1993-01-01
Canonical forms and canonical variables for inviscid flow problems are derived. In these forms the components of the system governed by different types of operators (elliptic and hyperbolic) are separated. Both the incompressible and compressible cases are analyzed, and their similarities and differences are discussed. The canonical forms obtained are block upper triangular operator form in which the elliptic and non-elliptic parts reside in different blocks. The full nonlinear equations are treated without using any linearization process. This form enables a better analysis of the equations as well as better numerical treatment. These forms are the analog of the decomposition of the one dimensional Euler equations into characteristic directions and Riemann invariants.
Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...
2015-07-15
GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.
Discrete wavelet transform: a tool in smoothing kinematic data.
Ismail, A R; Asfour, S S
1999-03-01
Motion analysis systems typically introduce noise to the displacement data recorded. Butterworth digital filters have been used to smooth the displacement data in order to obtain smoothed velocities and accelerations. However, this technique does not yield satisfactory results, especially when dealing with complex kinematic motions that occupy the low- and high-frequency bands. The use of the discrete wavelet transform, as an alternative to digital filters, is presented in this paper. The transform passes the original signal through two complementary low- and high-pass FIR filters and decomposes the signal into an approximation function and a detail function. Further decomposition of the signal results in transforming the signal into a hierarchical set of orthogonal approximation and detail functions. A reverse process is employed to perfectly reconstruct the signal (inverse transform) back from its approximation and detail functions. The discrete wavelet transform was applied to the displacement data recorded by Pezzack et al., 1977. The smoothed displacement data were twice differentiated and compared to Pezzack et al.'s acceleration data in order to choose the most appropriate filter coefficients and decomposition level on the basis of maximizing the percentage of retained energy (PRE) and minimizing the root mean square error (RMSE). Daubechies wavelet of the fourth order (Db4) at the second decomposition level showed better results than both the biorthogonal and Coiflet wavelets (PRE = 97.5%, RMSE = 4.7 rad s-2). The Db4 wavelet was then used to compress complex displacement data obtained from a noisy mathematically generated function. Results clearly indicate superiority of this new smoothing approach over traditional filters.
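The decompose / suppress-details / reconstruct pipeline described above can be sketched compactly with the Haar wavelet instead of the Db4 used in the study (Haar keeps the sketch stdlib-only; the function name and the zero-detail smoothing rule are illustrative assumptions):

```python
def haar_smooth(x, levels=1):
    """Multi-level Haar DWT smoothing: keep the approximation coefficients,
    zero the detail coefficients, then reconstruct.
    Assumes len(x) is divisible by 2**levels."""
    if levels == 0:
        return list(x)
    # analysis step: pairwise averages are the (unnormalized) approximation
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    smooth_approx = haar_smooth(approx, levels - 1)
    # synthesis step with detail coefficients set to zero
    out = []
    for a in smooth_approx:
        out.extend([a, a])
    return out
```

Each extra level removes a further octave of high-frequency content, which parallels the study's search over decomposition levels to balance retained energy against error.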
Experimental study on vertical static stiffnesses of polycal wire rope isolators
NASA Astrophysics Data System (ADS)
Balaji, P. S.; Moussa, Leblouba; Khandoker, Noman; Yuk Shyh, Ting; Rahman, M. E.; Hieng Ho, Lau
2017-07-01
Wire rope isolators are among the most effective isolation systems for attenuating vibration disturbances and shocks during the operation of machinery. This paper presents the results of an investigation of the static elastic stiffnesses (both in tension and in compression) of the Polycal Wire Rope Isolator (PWRI) under quasi-static monotonic loading conditions. It also examines the effect of variations in the height and width of the PWRI on its static stiffnesses. A suitable experimental setup was designed and manufactured to meet the test conditions. The results show that the elastic stiffnesses under both tension and compression loading are highly influenced by the geometric dimensions. The compressive stiffness was found to decrease by 55% for a 20% increase in the height-to-width ratio. Therefore, the stiffness of the PWRI can be fine-tuned by controlling its dimensions according to the requirements of the application.
NASA Astrophysics Data System (ADS)
Bardhan, Abheek; Mohan, Nagaboopathy; Chandrasekar, Hareesh; Ghosh, Priyadarshini; Sridhara Rao, D. V.; Raghavan, Srinivasan
2018-04-01
The bending and interaction of threading dislocations are essential to reduce their density for applications involving III-nitrides. Bending of dislocation lines also relaxes the compressive growth stress that is essential to prevent cracking on cooling down due to tensile thermal expansion mismatch stress while growing on Si substrates. It is shown in this work that surface roughness plays a key role in dislocation bending. Dislocations only bend and relax compressive stresses when the lines intersect a smooth surface. These films then crack. In rough films, dislocation lines which terminate at the bottom of the valleys remain straight. Compressive stresses are not relaxed and the films are relatively crack-free. The reasons for this difference are discussed in this work along with the implications on simultaneously meeting the requirements of films being smooth, crack free and having low defect density for device applications.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Zhou, Jun; Wang, Chao
2017-01-01
Intelligent sensing is drastically changing our everyday life including healthcare by biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates tremendous data volume and demands significant wireless transmission power, which imposes a big challenge for wearable healthcare sensors usually powered by batteries. Efficient compression engine design to reduce wireless transmission data rate with ultra-low power consumption is essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption while achieving a large compression ratio (CR) and good quality of reconstructed signal. Near-threshold design technique is adopted to further reduce the power consumption on the circuit level. Besides, the angle threshold for compression can be adaptively tuned according to the error between the original signal and reconstructed signal to address the variation of signal characteristics from person to person or from channel to channel to meet the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average CR of 71.08% and a percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems to sense and collect healthcare data for remote data analytics. PMID:28783079
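Turning-angle feature extraction keeps only the samples where the signal's trajectory bends sharply and discards the rest. A minimal sketch of the idea (the function name, angle threshold, and unit sample spacing are illustrative assumptions; the paper's engine adapts the threshold to the reconstruction error):

```python
import math

def turning_points(signal, angle_thresh_deg=20.0):
    """Keep (index, value) pairs where the turning angle between the incoming
    and outgoing segments exceeds a threshold; endpoints are always kept.
    The retained points form a piecewise-linear approximation of the signal."""
    kept = [(0, signal[0])]
    for i in range(1, len(signal) - 1):
        # slopes of the segments entering and leaving sample i (unit spacing)
        a1 = math.atan2(signal[i] - signal[i - 1], 1.0)
        a2 = math.atan2(signal[i + 1] - signal[i], 1.0)
        if abs(math.degrees(a2 - a1)) > angle_thresh_deg:
            kept.append((i, signal[i]))
    kept.append((len(signal) - 1, signal[-1]))
    return kept
```

Slowly varying baseline segments collapse to their endpoints, while sharp features such as QRS peaks are preserved, which is how a large CR coexists with a low PRD.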
NASA Technical Reports Server (NTRS)
Hurst, Victor, IV; West, Sarah; Austin, Paul; Branson, Richard; Beck, George
2006-01-01
Astronaut crew medical officers (CMO) aboard the International Space Station (ISS) receive 40 hours of medical training during the 18 months preceding each mission. Part of this training includes two-person cardiopulmonary resuscitation (CPR) per training guidelines from the American Heart Association (AHA). Recent studies concluded that the use of metronomic tones improves the coordination of CPR by trained clinicians. Similar data for bystander or "trained lay people" (e.g. CMO) performance of CPR (BCPR) have been limited. The purpose of this study was to evaluate whether the use of timing devices, such as audible metronomic tones, would improve BCPR performance by trained bystanders. Twenty pairs of bystanders trained in two-person BCPR performed BCPR for 4 minutes on a simulated cardiopulmonary arrest patient using three interventions: 1) BCPR with no timing devices, 2) BCPR plus metronomic tones for coordinating compression rate only, 3) BCPR with a timing device and metronome for coordinating ventilation and compression rates, respectively. Bystanders were evaluated on their ability to meet international and AHA CPR guidelines. Bystanders failed to provide the recommended number of breaths and number of compressions in the absence of a timing device and in the presence of audible metronomic tones for only coordinating compression rate. Bystanders using timing devices to coordinate both components of BCPR provided the recommended number of breaths and were closer to providing the recommended number of compressions compared with the other interventions. Survey results indicated that bystanders preferred to use a metronome for delivery of compressions during BCPR. BCPR performance is improved by timing devices that coordinate both compressions and breaths.
Formulation of portland composite cement using waste glass as a supplementary cementitious material
NASA Astrophysics Data System (ADS)
Manullang, Ria Julyana; Samadhi, Tjokorde Walmiki; Purbasari, Aprilina
2017-09-01
Utilization of waste glass in cement is an attractive option because of its pozzolanic behaviour and the potentially available market for glass-composite cement. The objective of this research is to evaluate the formulation of waste glass as a supplementary cementitious material (SCM) by an extreme-vertices mixture experiment, in which the clinker, waste glass and gypsum proportions are chosen as experimental variables. The composite cements were synthesized by mixing all of the powder materials in a jar mill. The compressive strength of the composite cement mortars after being cured for 28 days ranges from 229 to 268 kg/cm2. Composite cement mortars exhibit lower compressive strength than ordinary Portland cement (OPC) mortars but are still capable of meeting the SNI 15-7064-2004 standards. The highest compressive strength is obtained by shifting the cement blend composition in the direction of increasing clinker and gypsum proportions as well as reducing the glass proportion. The lower compressive strength of the composite cement is caused by expansion due to ettringite and ASR gel. Based on the experimental results, the composite cement containing 80% clinker, 15% glass and 5% gypsum has the highest compressive strength. As such, the preliminary technical feasibility of reusing waste glass as an SCM has been confirmed.
GPU Lossless Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled
2012-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA(Trademark). The GPU implementation on a NVIDIA(Trademark) GeForce(Trademark) GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec), at least 6 times faster than a software implementation running on a 3.47 GHz single core Intel(Trademark) Xeon(Trademark) processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide in the future a fast and practical real-time solution for airborne and space applications.
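The predict-then-code principle behind compressors like FL can be illustrated in a much simplified form. The sketch below uses a trivial previous-sample predictor rather than FL's adaptive filter, and omits the entropy-coding stage; it only shows why the scheme is exactly lossless (the predictor is invertible) and why residuals are easier to code than raw samples:

```python
import numpy as np

def predict_residuals(samples):
    """Previous-sample predictor: residual[i] = samples[i] - samples[i-1].
    Small residuals entropy-code far more compactly than raw samples."""
    samples = np.asarray(samples, dtype=np.int64)
    residuals = np.empty_like(samples)
    residuals[0] = samples[0]          # first sample stored verbatim
    residuals[1:] = np.diff(samples)   # prediction errors
    return residuals

def reconstruct(residuals):
    """Invert the predictor exactly -- the scheme is lossless."""
    return np.cumsum(np.asarray(residuals, dtype=np.int64))
```

A real predictive compressor (FL included) replaces the fixed predictor with one adapted per band and per sample, but the lossless round trip works the same way.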
Walsh, P J; Walker, G M; Maggs, C A; Buchanan, F J
2011-06-01
Bone void fillers that can enhance biological function to augment skeletal repair have significant therapeutic potential in bone replacement surgery. This work focuses on the development of a unique microporous (0.5-10 microm) marine-derived calcium phosphate bioceramic granule. It was prepared from Corallina officinalis, a mineralized red alga, using a novel manufacturing process. This involved thermal processing, followed by a low pressure-temperature chemical synthesis reaction. The study found that the ability to maintain the unique algal morphology was dependent on the thermal processing conditions. This study investigates the effect of thermal heat treatment on the physicochemical properties of the alga. Thermogravimetric analysis was used to monitor its thermal decomposition. The resultant thermograms indicated the presence of a residual organic phase at temperatures below 500 degrees C and an irreversible solid-state phase transition from Mg-rich calcite to calcium oxide at temperatures over 850 degrees C. Algae and synthetic calcite were evaluated following heat treatment in an air-circulating furnace at temperatures ranging from 400 to 800 degrees C. The highest levels of mass loss occurred between 400-500 degrees C and 700-800 degrees C, which were attributed to organic and carbonate decomposition respectively. The changes in mechanical strength were quantified using a simple mechanical test, which measured the bulk compressive strength of the algae. The mechanical test used may provide a useful evaluation of the compressive properties of similar bone void fillers that are in granular form. The study concluded that soak temperatures in the range of 600 to 700 degrees C provided the optimum physicochemical properties as a precursor to conversion to hydroxyapatite (HA). At these temperatures, a partial phase transition to calcium oxide occurred and the original skeletal morphology of the alga remained intact.
2016-08-10
thermal decomposition and mechanical damage of energetics. The program for the meeting included nine oral presentation sessions. Discussion leaders...USA) 7:30 pm - 7:35 pm Introduction by Discussion Leader 7:35 pm - 7:50 pm Vincent Baijot (Laboratory for Analysis and Architecture of Systems, CNRS...were synthesis of new materials, performance, advanced diagnostics, experimental techniques, theoretical approaches, and computational models for
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating, for each voxel of a slice at a fixed position in the third dimension, the position in the corresponding slice at the previous time step from which it has moved. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, particularly whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can benefit from this approach. We show that with the discrete cosine as well as the Karhunen-Loève transform, the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
Cochlea-inspired sensing node for compressive sensing
NASA Astrophysics Data System (ADS)
Peckens, Courtney A.; Lynch, Jerome P.
2013-04-01
While sensing technologies for structural monitoring applications have made significant advances over the last several decades, there is still room for improvement in terms of computational efficiency, as well as overall energy consumption. The biological nervous system can offer a potential solution to address these current deficiencies. The nervous system is capable of sensing and aggregating information about the external environment through very crude processing units known as neurons. Neurons effectively communicate in an extremely condensed format by encoding information into binary electrical spike trains, thereby reducing the amount of raw information sent throughout a neural network. Due to its unique signal processing capabilities, the mammalian cochlea and its interaction with the biological nervous system is of particular interest for devising compressive sensing strategies for dynamic engineered systems. The cochlea uses a novel method of place theory and frequency decomposition, thereby allowing for rapid signal processing within the nervous system. In this study, a low-power sensing node is proposed that draws inspiration from the mechanisms employed by the cochlea and the biological nervous system. As such, the sensor is able to perceive and transmit a compressed representation of the external stimulus with minimal distortion. Each sensor represents a basic building block, with function similar to the neuron, and can form a network with other sensors, thus enabling a system that can convey input stimulus in an extremely condensed format. The proposed sensor is validated through a structural monitoring application of a single degree of freedom structure excited by seismic ground motion.
Code of Federal Regulations, 2010 CFR
2010-07-01
... EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES Special Compliance Provisions § 1039.626... and manufacturing equipment. Companies that import equipment into the United States without meeting... and copying of any documents related to demonstrating compliance with the exemptions in § 1039.625...
46 CFR 153.251 - Independent cargo tanks.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Independent cargo tanks. 153.251 Section 153.251... CARRYING BULK LIQUID, LIQUEFIED GAS, OR COMPRESSED GAS HAZARDOUS MATERIALS Design and Equipment Cargo Tanks § 153.251 Independent cargo tanks. All independent cargo tank must meet § 38.05-10 (a)(1), (b), (d), and...
46 CFR 153.251 - Independent cargo tanks.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Independent cargo tanks. 153.251 Section 153.251... CARRYING BULK LIQUID, LIQUEFIED GAS, OR COMPRESSED GAS HAZARDOUS MATERIALS Design and Equipment Cargo Tanks § 153.251 Independent cargo tanks. All independent cargo tank must meet § 38.05-10 (a)(1), (b), (d), and...
42 CFR 84.141 - Breathing gas; minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... respirators shall be respirable breathing air and contain no less than 19.5 volume-percent of oxygen. (b... Register in accordance with 5 U.S.C. 552(a) and 1 CFR part 51. Copies may be obtained from American..._locations.html. (c) Compressed, liquefied breathing air shall meet the applicable minimum grade requirements...
40 CFR 1039.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION... following provisions apply if you identify the name and trademark of another company instead of your own on... contractual agreement with the other company that obligates that company to take the following steps: (1) Meet...
40 CFR 1039.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION... following provisions apply if you identify the name and trademark of another company instead of your own on... contractual agreement with the other company that obligates that company to take the following steps: (1) Meet...
40 CFR 1039.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION... following provisions apply if you identify the name and trademark of another company instead of your own on... contractual agreement with the other company that obligates that company to take the following steps: (1) Meet...
40 CFR 1039.640 - What special provisions apply to branded engines?
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION... following provisions apply if you identify the name and trademark of another company instead of your own on... contractual agreement with the other company that obligates that company to take the following steps: (1) Meet...
Creating the Hybrid Electronic Course: An Instructor's Journal.
ERIC Educational Resources Information Center
Ross, Jeff
This paper details the day-to-day curriculum of an e-mail-based English class at Central Arizona College. The intent of the class--a Hybrid Electronic Course (HEC)--was to expose the students to both independent research and writing, while also giving them opportunities for traditional classroom meetings. An entire semester was compressed into…
Hyperspectral image compressing using wavelet-based method
NASA Astrophysics Data System (ADS)
Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng
2017-10-01
Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object presented in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not fit the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of object identification performance on the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the cross-correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested by using the ISODATA classification method.
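The first step (partitioning bands into subspaces from their cross-correlations) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it simply starts a new subspace whenever the correlation between neighbouring bands drops below a threshold, whose value here is assumed for demonstration:

```python
import numpy as np

def group_bands_by_correlation(cube, threshold=0.95):
    """Partition contiguous spectral bands into subspaces wherever the
    cross-correlation between neighbouring bands falls below `threshold`.
    cube: (bands, rows, cols) hyperspectral data cube."""
    bands = cube.reshape(cube.shape[0], -1).astype(float)
    groups, start = [], 0
    for b in range(1, bands.shape[0]):
        r = np.corrcoef(bands[b - 1], bands[b])[0, 1]
        if r < threshold:                 # correlation break: new subspace
            groups.append((start, b))
            start = b
    groups.append((start, bands.shape[0]))
    return groups                          # list of (first, last+1) band ranges
```

Each returned band range would then be compressed independently (wavelet transform followed by PCA on the coefficients, per the abstract).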
A Review of Radiolysis Concerns for Water Shielding in Fission Surface Power Applications
NASA Technical Reports Server (NTRS)
Schoenfeld, Michael P.
2008-01-01
This paper presents an overview of radiolysis concerns with regard to water shields for fission surface power. A review of the radiolysis process is presented and key parameters and trends are identified. From this understanding of the radiolytic decomposition of water, shield pressurization and corrosion are identified as the primary concerns. Existing experimental and modeling data addressing these concerns are summarized. It was found that radiolysis of pure water in a closed volume results in minimal, if any, net decomposition, and therefore reduces the potential for shield pressurization and corrosion. With the space program focus shifting to emphasize a permanent return to the Moon and eventually manned exploration of Mars, there has been a renewed look at fission power to meet the difficult technical and design challenges associated with this effort. This is due to the ability of fission power to provide a power rich environment that is insensitive to solar intensity and related aspects such as duration of night, dusty environments, and distance from the sun. One critical aspect in the utilization of fission power for these applications of manned exploration is shielding. Although not typically considered for space applications, water shields have been identified as one potential option due to benefits in mass savings and reduced development cost and technical risk (Poston, 2006). However, the water shield option requires demonstration of its ability to meet key technical challenges including such things as adequate natural circulation for thermal management and capability for operational periods up to 8 years. Thermal management concerns have begun to be addressed and are not expected to be a problem (Pearson, 2007). One significant concern remaining is the ability to maintain the shield integrity through its operational lifetime. Shield integrity could be compromised through shield pressurization and corrosion resulting from the radiolytic decomposition of water.
Performance of the Wavelet Decomposition on Massively Parallel Architectures
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)
2001-01-01
Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
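Mallat's multi-resolution analysis mentioned above is easiest to see with the Haar wavelet, the simplest case: one decomposition level splits a signal into a coarse approximation and the local detail lost by coarsening. A minimal, self-contained sketch (illustrative, not the parallel algorithm of the paper):

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet decomposition: the approximation holds
    scaled pairwise sums (a half-resolution signal), the detail holds scaled
    pairwise differences (the local information lost by coarsening)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction from one decomposition level."""
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out
```

Repeating `haar_step` on successive approximations yields the full pyramid; because each level halves the data and the levels are independent, the transform parallelizes naturally across image tiles, which is what the architectural study above exploits.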
Effect of dry torrefaction on kinetics of catalytic pyrolysis of sugarcane bagasse
NASA Astrophysics Data System (ADS)
Daniyanto, Sutijan, Deendarlianto, Budiman, Arief
2015-12-01
Decreasing world reserves of fossil resources (i.e. petroleum oil, coal and natural gas) encourage the search for renewable substitutes. Biomass is one of the main natural renewable resources and a promising alternative for meeting the world's energy needs and providing raw material for chemical platforms. Conversion of biomass, as a source of energy, fuel and biochemicals, is conducted using thermochemical processes such as pyrolysis-gasification. The pyrolysis step is an important step in the mechanism of pyrolysis-gasification of biomass. The objective of this study is to obtain the reaction kinetics of the catalytic pyrolysis of dry-torrefied sugarcane bagasse using Ca and Mg as catalysts. The kinetics are interpreted using an n-th order single-reaction model for biomass. The rate of the catalytic pyrolysis reaction depends on the weight of biomass converted into char and volatile matter. Based on TG/DTA analysis, the rate of the pyrolysis reaction is influenced by the composition of the biomass (i.e. hemicellulose, cellulose and lignin) and by inorganic components, especially alkali and alkaline earth metals (AAEM). This study found two rate equations for the catalytic pyrolysis of sugarcane bagasse with the catalysts Ca and Mg: the first describes the rapid decomposition zone and the second the slow decomposition zone. The reaction order for rapid decomposition is n > 1 and for slow decomposition n < 1. The rate constants and reaction orders for catalytic pyrolysis of dry-torrefied sugarcane bagasse in the presence of Ca tend to be higher than those in the presence of Mg.
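The n-th order single-reaction model referred to above is d(alpha)/dt = k (1 - alpha)^n, where alpha is the converted fraction of biomass. A minimal numerical sketch, with illustrative values of k and n rather than the fitted bagasse parameters:

```python
def nth_order_conversion(k, n, t_end, steps=10_000):
    """Integrate d(alpha)/dt = k * (1 - alpha)**n with forward Euler.
    alpha is the decomposed biomass fraction; k and n are illustrative,
    not the fitted parameters from the TG/DTA study."""
    dt = t_end / steps
    alpha = 0.0
    for _ in range(steps):
        # max() guards against a tiny negative base near complete conversion
        alpha += dt * k * max(0.0, 1.0 - alpha) ** n
    return alpha
```

Fitting k and n separately to the rapid (n > 1) and slow (n < 1) decomposition zones of the thermogravimetric curve gives the two rate equations the study reports.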
Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew
2009-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
Fabrication and Characterization of Porous MgAl2O4 Ceramics via a Novel Aqueous Gel-Casting Process
Yuan, Lei; Liu, Zongquan; Liu, Zhenli; He, Xiao; Ma, Beiyue; Zhu, Qiang; Yu, Jingkun
2017-01-01
A novel aqueous gel-casting process has been successfully developed to fabricate porous MgAl2O4 ceramics by using hydratable alumina and MgO powders as raw materials and deionized water as hydration agent. The effects of different amounts of deionized water on the hydration properties, apparent porosity, bulk density, microstructure, pore size distribution and compressive strength of the samples were investigated. The results indicated that the porosity and the microstructure of the porous MgAl2O4 ceramics were governed by the amount of deionized water added. The porous structure was formed by the liberation of physisorbed water and the decomposition of hydration products such as bayerite, brucite and boehmite. After optimizing the amount of deionized water, the fabricated porous MgAl2O4 ceramics had a high apparent porosity (52.5–65.8%), a small average pore size (around 1–3 μm) and a relatively high compressive strength (12–28 MPa). The novel aqueous gel-casting process, being easy to implement, is expected to be a promising candidate for the preparation of Al2O3-based porous ceramics. PMID:29189734
Equations of state of detonation products: ammonia and methane
NASA Astrophysics Data System (ADS)
Lang, John; Dattelbaum, Dana; Goodwin, Peter; Garcia, Daniel; Coe, Joshua; Leiding, Jeffery; Gibson, Lloyd; Bartram, Brian
2015-06-01
Ammonia (NH3) and methane (CH4) are two principal product gases resulting from explosives detonation, and from the decomposition of other organic materials under shockwave loading (such as foams). Accurate thermodynamic descriptions of these gases are important for understanding the detonation performance of high explosives. However, shock compression data often do not exist for molecular species in the dense gas phase, and are limited in the fluid phase. Here, we present equation of state measurements of ammonia and methane gases at elevated initial densities, dynamically compressed in gas-gun driven plate impact experiments. Pressure and density of the shocked gases on the principal Hugoniot were determined from direct particle velocity and shock wave velocity measurements recorded using optical velocimetry (Photonic Doppler velocimetry (PDV) and VISAR (velocity interferometer system for any reflector)). Streak spectroscopy and 5-color pyrometry were further used to measure the emission from the shocked gases, from which the temperatures of the shocked gases were estimated. Up to 0.07 GPa, ammonia was not observed to ionize, with temperature remaining below 7000 K. These results provide quantitative measurements of the Hugoniot locus for improving equation of state models of detonation products.
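Converting the measured shock velocity Us and particle velocity up into a Hugoniot state uses the Rankine-Hugoniot jump conditions, which follow from mass and momentum conservation across the shock front. A minimal sketch with illustrative (not measured) gas values:

```python
def hugoniot_state(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot jump conditions: shocked pressure and density from
    initial density rho0, shock velocity us and particle velocity up
    (SI units). p0 is the initial pressure, often negligible."""
    p = p0 + rho0 * us * up          # momentum conservation across the shock
    rho = rho0 * us / (us - up)      # mass conservation across the shock
    return p, rho
```

Repeating this for each impact experiment maps out the (pressure, density) Hugoniot locus described in the abstract.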
NASA Astrophysics Data System (ADS)
Beardsell, Guillaume; Blanquart, Guillaume
2017-11-01
In direct numerical simulations (DNS) of turbulent flows, it is often prohibitively expensive to simulate complete flow geometries. For example, to study turbulence-flame interactions, one cannot perform a DNS of a full combustor. Usually, a well-selected portion of the domain is chosen, in this particular case the region around the flame front. In this work, we perform a Reynolds decomposition of the velocity field and solve for the fluctuating part only. The resulting equations are the same as the original Navier-Stokes equations, except for turbulence-generating large scale features of the flow such as mean shear, which appear as forcing terms. This approach allows us to achieve high Reynolds numbers and sustained turbulence while keeping the computational cost reasonable. We have already applied this strategy to incompressible flows, but not to compressible ones, where special care has to be taken regarding the energy equation. Implementation of the resulting additional terms in the finite-difference code NGA is discussed and preliminary results are presented. In particular, we look at the budget of turbulent kinetic energy and internal energy. We are considering applying this technique to turbulent premixed flames.
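The Reynolds decomposition at the heart of this approach splits the velocity into a mean and a fluctuating part, u = U + u', with the DNS evolving only u'. A minimal illustration on a synthetic 1-D velocity sample (not the compressible solver itself):

```python
import numpy as np

# Reynolds decomposition of a sampled velocity signal: u = U + u'.
# In the approach described above, the solver evolves only u'; the
# turbulence-generating mean flow (e.g. mean shear) enters as forcing.
rng = np.random.default_rng(1)
u = 3.0 + rng.standard_normal(100_000)  # synthetic samples, mean flow ~ 3
U = u.mean()                            # mean component
u_prime = u - U                         # fluctuating component
tke = 0.5 * np.mean(u_prime**2)         # turbulent kinetic energy (per mode)
```

By construction the fluctuations average to zero, and budgets such as the turbulent kinetic energy mentioned in the abstract are statistics of u' alone.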
Fragmentation modeling of a resin bonded sand
NASA Astrophysics Data System (ADS)
Hilth, William; Ryckelynck, David
2017-06-01
Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, using a rather simple generalized critical state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. This test is simulated with finite element simulations considering a non-homogeneous 3D medium. Tomography of the compression sample gives access to 3D displacement fields via image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations at low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.
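The SVD-based recovery idea can be sketched as a basic low-rank completion ("hard impute") loop: alternately project the snapshot matrix onto a truncated SVD and re-impose the measured entries. This is an illustrative stand-in for the paper's method, with an assumed rank and a synthetic field:

```python
import numpy as np

def svd_fill(field, mask, rank=3, iters=300):
    """Recover missing entries of a displacement snapshot matrix by
    alternating a rank-`rank` truncated SVD with re-imposition of the
    measured entries. field: (space, time) samples; mask: True where
    measured. Illustrative sketch only."""
    X = np.where(mask, field, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, field, low_rank)   # keep data, fill only the gaps
    return X
```

The reconstruction works when the true displacement field is well approximated by a few dominant SVD modes, which is exactly the low-rank structure the recovery method exploits.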
Rabe, Eberhard; Partsch, Hugo; Hafner, Juerg; Lattimer, Christopher; Mosti, Giovanni; Neumann, Martino; Urbanek, Tomasz; Huebner, Monika; Gaillard, Sylvain; Carpentier, Patrick
2017-01-01
Objective Medical compression stockings are a standard, non-invasive treatment option for all venous and lymphatic diseases. The aim of this consensus document is to provide up-to-date recommendations and evidence grading on the indications for treatment, based on evidence accumulated during the past decade, under the auspices of the International Compression Club. Methods A systematic literature review was conducted and, using PRISMA guidelines, 51 relevant publications were selected for an evidence-based analysis of an initial 2407 unrefined results. Key search terms included: ‘acute', ‘CEAP', ‘chronic', ‘compression stockings', ‘compression therapy', ‘lymph', ‘lymphatic disease', ‘vein' and ‘venous disease'. Evidence extracted from the publications was graded initially by the panel members individually and then refined at the consensus meeting. Results Based on the current evidence, 25 recommendations for chronic and acute venous disorders were made. Of these, 24 recommendations were graded as: Grade 1A (n = 4), 1B (n = 13), 1C (n = 2), 2B (n = 4) and 2C (n = 1). The panel members found moderately robust evidence for medical compression stockings in patients with venous symptoms and for prevention and treatment of venous oedema. Robust evidence was found for prevention and treatment of venous leg ulcers. Recommendations for stocking use after great saphenous vein interventions were limited to the first post-interventional week. No randomised clinical trials are available that document a prophylactic effect of medical compression stockings on the progression of chronic venous disease (CVD). In acute deep vein thrombosis, immediate compression is recommended to reduce pain and swelling. Despite conflicting results from a recent study to prevent post-thrombotic syndrome, medical compression stockings are still recommended. In thromboprophylaxis, the role of stockings in addition to anticoagulation is limited. 
For the maintenance phase of lymphoedema management, compression stockings are the most important intervention. Conclusion The beneficial value of applying compression stockings in the treatment of venous and lymphatic disease is supported by this document, with 19/25 recommendations rated as Grade 1 evidence. For recommendations rated with Grade 2 level of evidence, further studies are needed. PMID:28549402
Compressed Air Quality, A Case Study In Paiton Coal Fired Power Plant Unit 1 And 2
NASA Astrophysics Data System (ADS)
Indah, Nur; Kusuma, Yuriadi; Mardani
2018-03-01
The compressed air system is part of a very important utility system in a plant, including a steam power plant. In PLN's coal-fired power plant, Paiton units 1 and 2, there are four centrifugal air compressors, which produce as much as 5,652 cfm of compressed air with an electric power capacity of 1200 kW. Electricity consumption to operate the centrifugal compressors is 7,104,117 kWh per year. Compressed air generation must not only be sufficient in quantity (flow rate) but must also meet the required air quality standards. Compressed air at the steam power plant is used for service air, instrument air, and fly ash handling. This study aims to measure several important parameters related to air quality, followed by analysis of potential disturbances, equipment breakdown, or reduction of energy consumption under existing compressed air conditions. The measurements include dust particle counts, moisture content, relative humidity, and compressed air pressure. From the measurements, the compressed air pressure generated by the compressors is about 8.4 barg and decreases to 7.7 barg at the furthest point, a pressure drop of 0.63 barg; this satisfies the needs of the end users. Particle measurements were conducted at several points: for particles of 0.3 micron the count reaches 170,752, while for particles of 0.5 micron it reaches 45,245. At some measurement points the dust particle count exceeds the standards set by ISO 8573.1-2010 and the NACE code, so the air treatment process needs to be improved. The moisture content of the compressed air was assessed by measuring the pressure dew point (PDP) temperature at several points, with results ranging from -28.4 to 30.9 °C. 
Implementing the recommended compressed air quality improvements in the Paiton steam power plant units 1 and 2 has the potential to extend the life of instrumentation equipment, improve equipment reliability, and reduce energy consumption by up to 502,579 kWh per year.
NASA Technical Reports Server (NTRS)
Bryson, L. L.; Mccarty, J. E.
1973-01-01
Analytical and experimental investigations, performed to establish the feasibility of reinforcing metal aircraft structures with advanced filamentary composites, are reported. Aluminum-boron-epoxy and titanium-boron-epoxy were used in the design and manufacture of three major structural components. The components were representative of subsonic aircraft fuselage and window belt panels and supersonic aircraft compression panels. Both unidirectional and multidirectional reinforcement concepts were employed. Blade penetration, axial compression, and in-plane shear tests were conducted. Composite reinforced structural components designed to realistic airframe structural criteria demonstrated the potential for significant weight savings while maintaining the strength, stability, and damage containment properties of all-metal components designed to meet the same criteria.
Post-Buckling and Ultimate Strength Analysis of Stiffened Composite Panel Based on Progressive Damage
NASA Astrophysics Data System (ADS)
Zhang, Guofan; Sun, Xiasheng; Sun, Zhonglei
Stiffened composite panels are typical thin-walled structures in the aerospace industry, and their main failure mode is buckling under compressive loading. This paper presents the development of a finite element analysis approach for the post-buckling behavior of stiffened composite structures under compression. Numerical results for a stiffened panel are obtained by FE simulation, and a thorough comparison is made between the predicted load-carrying capacity and key-position strains of the specimen and test data. The comparison indicates that the FEM results obtained with the developed methodology meet the demands of engineering application in predicting the post-buckling behavior of intact stiffened structures at the aircraft design stage.
Improved ALE mesh velocities for complex flows
Bakosi, Jozsef; Waltz, Jacob I.; Morgan, Nathaniel Ray
2017-05-31
A key choice in the development of arbitrary Lagrangian-Eulerian solution algorithms is how to move the computational mesh. The most common approaches are smoothing and relaxation techniques, or computing a mesh velocity field that produces smooth mesh displacements. We present a method in which the mesh velocity is specified by the irrotational component of the fluid velocity as computed from a Helmholtz decomposition, and excess compression of mesh cells is treated through a noniterative, local spring-force model. This approach allows distinct and separate control over rotational and translational modes. The utility of the new mesh motion algorithm is demonstrated on a number of 3D test problems, including problems that involve both shocks and significant amounts of vorticity.
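The irrotational component named above can be extracted with a standard FFT-based Helmholtz projection. This is a minimal sketch on a uniform periodic 2D grid under our own assumptions (function name, grid, and boundary treatment are ours, not the authors' solver):

```python
import numpy as np

def irrotational_part(u, v, Lx=2*np.pi, Ly=2*np.pi):
    """Return the curl-free component of a periodic 2D velocity field.

    FFT-based Helmholtz projection: in Fourier space the irrotational part
    is the projection of v_hat onto the wavevector direction k.
    """
    ny, nx = u.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # avoid 0/0 at the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    proj = (KX*uh + KY*vh) / k2          # (k . v_hat) / |k|^2
    uh_irr, vh_irr = KX*proj, KY*proj
    uh_irr[0, 0], vh_irr[0, 0] = uh[0, 0], vh[0, 0]   # keep the mean (translation) mode
    return np.fft.ifft2(uh_irr).real, np.fft.ifft2(vh_irr).real
```

A purely solenoidal (divergence-free) field projects to zero, while a potential field is returned unchanged.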
Studies of the use of heat from high temperature nuclear sources for hydrogen production processes
NASA Technical Reports Server (NTRS)
Farbman, G. H.
1976-01-01
Future uses of hydrogen and hydrogen production processes that can meet the demand for hydrogen in the coming decades were considered. To do this, a projection was made of the market for hydrogen through the year 2000. Four hydrogen production processes were selected from among water electrolysis, fossil-based, and thermochemical water decomposition systems and were evaluated, using a consistent set of ground rules, in terms of relative performance, economics, resource requirements, and technology status.
Stability of Nonlinear Wave Patterns to the Bipolar Vlasov-Poisson-Boltzmann System
NASA Astrophysics Data System (ADS)
Li, Hailiang; Wang, Yi; Yang, Tong; Zhong, Mingying
2018-04-01
The main purpose of the present paper is to investigate the nonlinear stability of viscous shock waves and rarefaction waves for the bipolar Vlasov-Poisson-Boltzmann (VPB) system. To this end, motivated by the micro-macro decomposition to the Boltzmann equation in Liu and Yu (Commun Math Phys 246:133-179, 2004) and Liu et al. (Physica D 188:178-192, 2004), we first set up a new micro-macro decomposition around the local Maxwellian related to the bipolar VPB system and give a unified framework to study the nonlinear stability of the basic wave patterns to the system. Then, as applications of this new decomposition, the time-asymptotic stability of the two typical nonlinear wave patterns, viscous shock waves and rarefaction waves are proved for the 1D bipolar VPB system. More precisely, it is first proved that the linear superposition of two Boltzmann shock profiles in the first and third characteristic fields is nonlinearly stable to the 1D bipolar VPB system up to some suitable shifts without the zero macroscopic mass conditions on the initial perturbations. Then the time-asymptotic stability of the rarefaction wave fan to compressible Euler equations is proved for the 1D bipolar VPB system. These two results are concerned with the nonlinear stability of wave patterns for Boltzmann equation coupled with additional (electric) forces, which together with spectral analysis made in Li et al. (Indiana Univ Math J 65(2):665-725, 2016) sheds light on understanding the complicated dynamic behaviors around the wave patterns in the transportation of charged particles under the binary collisions, mutual interactions, and the effect of the electrostatic potential forces.
Natural Gas Compressor Stations on the Interstate Pipeline Network: Developments Since 1996
2007-01-01
This special report looks at the use of natural gas pipeline compressor stations on the interstate natural gas pipeline network that serves the lower 48 states. It examines the compression facilities added over the past 10 years and how the expansions have supported pipeline capacity growth intended to meet the increasing demand for natural gas.
40 CFR 1065.260 - Flame-ionization detector.
Code of Federal Regulations, 2012 CFR
2012-07-01
... concentrations on a carbon number basis of one, C1. For measuring THC or THCE you must use a FID analyzer. For... § 1065.205. Note that your FID-based system for measuring THC, THCE, or CH4 must meet all the... bias. (c) Heated FID analyzers. For measuring THC or THCE from compression-ignition engines, two-stroke...
40 CFR 1065.260 - Flame-ionization detector.
Code of Federal Regulations, 2014 CFR
2014-07-01
... concentrations on a carbon number basis of one, C1. For measuring THC or THCE you must use a FID analyzer. For... § 1065.205. Note that your FID-based system for measuring THC, THCE, or CH4 must meet all the... verification in § 1065.307. (c) Heated FID analyzers. For measuring THC or THCE from compression-ignition...
40 CFR 1065.260 - Flame-ionization detector.
Code of Federal Regulations, 2013 CFR
2013-07-01
... concentrations on a carbon number basis of one, C1. For measuring THC or THCE you must use a FID analyzer. For... § 1065.205. Note that your FID-based system for measuring THC, THCE, or CH4 must meet all the... bias. (c) Heated FID analyzers. For measuring THC or THCE from compression-ignition engines, two-stroke...
46 CFR 169.703 - Cooking and heating.
Code of Federal Regulations, 2014 CFR
2014-10-01
... compressed natural gas (CNG) is authorized for cooking purposes only. (1) The design, installation and..., installation, and testing of each CNG system must meet either Chapter 6 of NFPA 302 or ABYC A-22. (3) The... additional requirements must also be met: (i) LPG or CNG must be odorized in accordance with ABYC A-1.5.d or...
46 CFR 169.703 - Cooking and heating.
Code of Federal Regulations, 2013 CFR
2013-10-01
... compressed natural gas (CNG) is authorized for cooking purposes only. (1) The design, installation and..., installation, and testing of each CNG system must meet either Chapter 6 of NFPA 302 or ABYC A-22. (3) The... additional requirements must also be met: (i) LPG or CNG must be odorized in accordance with ABYC A-1.5.d or...
46 CFR 169.703 - Cooking and heating.
Code of Federal Regulations, 2012 CFR
2012-10-01
... compressed natural gas (CNG) is authorized for cooking purposes only. (1) The design, installation and..., installation, and testing of each CNG system must meet either Chapter 6 of NFPA 302 or ABYC A-22. (3) The... additional requirements must also be met: (i) LPG or CNG must be odorized in accordance with ABYC A-1.5.d or...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
.... Create a central gas/liquids separation facility (Ryckman Plant) where all of the gas pipelines meet. It would contain a small electric-driven compressor to compress casing head gas, liquids separation... Convenience and Necessity under section 7 of the Natural Gas Act. The Bureau of Land Management (BLM) is...
Utilization of sewage sludge in the manufacture of lightweight aggregate.
Franus, Małgorzata; Barnat-Hunek, Danuta; Wdowin, Magdalena
2016-01-01
This paper presents a comprehensive study on the possibility of sewage sludge management in a sintered ceramic material such as a lightweight aggregate. Lightweight aggregates made from clay and sludge were sintered at two temperatures: 1100 °C (sample LWA1) and 1150 °C (sample LWA2). Physical and mechanical properties indicate that the resulting expanded clay aggregate containing sludge meets the basic requirements for lightweight aggregates. The presence of sludge supports the swelling of the raw material, thereby increasing the porosity of the aggregates. LWA2 has lower bulk particle density (0.414 g/cm³), apparent particle density (0.87 g/cm³), and dry particle density (2.59 g/cm³) than LWA1, for which these parameters were 0.685 g/cm³, 1.05 g/cm³, and 2.69 g/cm³, respectively. Water absorption and porosity of LWA1 (WA = 14.4%, P = 60%) are lower than those of LWA2 (WA = 16.2%, P = 66%). This is due to the higher sintering temperature of the granules, which liberates waste gases from the decomposition of the organic matter in the sewage sludge. The compressive strength is 4.64 MPa for LWA2 and 0.79 MPa for LWA1. Leaching tests of heavy metals from the examined aggregates have shown that insoluble metal compounds are bound in the silicate and aluminosilicate structure of the starting materials (clays and sludges), whereas soluble substances formed the crystalline skeleton of the aggregates. The thermal synthesis of lightweight aggregates from a clay and sludge mixture is a waste-free method of managing this waste.
High-performance software-only H.261 video compression on PC
NASA Astrophysics Data System (ADS)
Kasperovich, Leonid
1996-03-01
This paper describes an implementation of a software H.261 codec for the PC that takes advantage of the fast computational algorithms for DCT-based video compression presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time, software-only videoconferencing solution operating across a wide range of network bandwidths, frame rates, and input video resolutions. As the bandwidth of network technology increases, higher frame rates and resolutions of transmitted video become possible, which in turn requires a software codec able to compress pictures of CIF (352 x 288) resolution at up to 30 frames/sec. Running on a 133 MHz Pentium PC, the codec presented is capable of compressing video in CIF format at 21-23 frames/sec. This result is comparable to known hardware-based H.261 solutions, but requires no specific hardware. The methods used to achieve high performance and the program optimization techniques for the Pentium microprocessor are presented, along with a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computation.
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article proposes an original strategy for solving hydrodynamic flows. The motivations for this strategy are first developed: modeling viscous and turbulent flows around complex moving geometries while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity, which allows easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented in an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements while allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of the WCCH method are presented and validated in this article.
Decomposing Mortality Disparities in Urban and Rural U.S. Counties.
Spencer, Jennifer C; Wheeler, Stephanie B; Rotter, Jason S; Holmes, George M
2018-05-30
To understand the role of county characteristics in the growing divide between rural and urban mortality from 1980 to 2010. Age-adjusted mortality rates for all U.S. counties from 1980 to 2010 were obtained from the CDC Compressed Mortality File and combined with county characteristics from the U.S. Census Bureau, the Area Health Resources File, and the Inter-university Consortium for Political and Social Research. We used Oaxaca-Blinder decomposition to assess the extent to which rural-urban mortality disparities are explained by observed county characteristics at each decade. Decomposition shows that, at each decade, differences in rural/urban characteristics are sufficient to explain differences in mortality. Furthermore, starting in 1990, rural counties have significantly lower predicted mortality than urban counties when given identical county characteristics. We find that changes in the effect of characteristics on mortality, not the characteristics themselves, drive the growing mortality divide. Differences in economic and demographic characteristics between rural and urban counties largely explain the differences in age-adjusted mortality in any given year. Over time, the role these characteristics play in improving mortality has increased differentially for urban counties. As characteristics continue changing in importance as determinants of health, this divide may continue to widen. © Health Research and Educational Trust.
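The Oaxaca-Blinder decomposition used above splits a mean outcome gap into an "explained" part (differences in characteristics) and an "unexplained" part (differences in coefficients). A minimal twofold sketch, not the authors' exact specification (reference-coefficient choices vary in practice):

```python
import numpy as np

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Twofold Oaxaca-Blinder decomposition of mean(y_a) - mean(y_b).

    'explained' is the gap attributable to differences in mean
    characteristics, evaluated at group b's coefficients; 'unexplained'
    is the remainder due to differing coefficients.
    """
    Xa = np.column_stack([np.ones(len(X_a)), X_a])   # add intercept
    Xb = np.column_stack([np.ones(len(X_b)), X_b])
    beta_a = np.linalg.lstsq(Xa, y_a, rcond=None)[0]  # OLS per group
    beta_b = np.linalg.lstsq(Xb, y_b, rcond=None)[0]
    xbar_a, xbar_b = Xa.mean(axis=0), Xb.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b            # endowments effect
    unexplained = xbar_a @ (beta_a - beta_b)          # coefficients effect
    return explained, unexplained
```

With intercepts included, the two parts sum exactly to the raw mean gap.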
NASA Astrophysics Data System (ADS)
Clayton, J. D.
2017-02-01
A theory of deformation of continuous media based on concepts from Finsler differential geometry is presented. The general theory accounts for finite deformations, nonlinear elasticity, and changes in internal state of the material, the latter represented by elements of a state vector of generalized Finsler space whose entries consist of one or more order parameter(s). Two descriptive representations of the deformation gradient are considered. The first invokes an additive decomposition and is applied to problems involving localized inelastic deformation mechanisms such as fracture. The second invokes a multiplicative decomposition and is applied to problems involving distributed deformation mechanisms such as phase transformations or twinning. Appropriate free energy functions are posited for each case, and Euler-Lagrange equations of equilibrium are derived. Solutions are obtained for specific problems of tensile fracture of an elastic cylinder and for amorphization of a crystal under spherical and uniaxial compression. The Finsler-based approach is demonstrated to be more general and potentially more physically descriptive than existing hyperelasticity models couched in Riemannian geometry or Euclidean space, without incorporation of supplementary ad hoc equations or spurious fitting parameters. Predictions for single crystals of boron carbide ceramic agree qualitatively, and in many instances quantitatively, with results from physical experiments and atomic simulations involving structural collapse and failure of the crystal along its c-axis.
Salt dependence of compression normal forces of quenched polyelectrolyte brushes
NASA Astrophysics Data System (ADS)
Hernandez-Zapata, Ernesto; Tamashiro, Mario N.; Pincus, Philip A.
2001-03-01
We obtained mean-field expressions for the compression normal forces between two identical opposing quenched polyelectrolyte brushes in the presence of monovalent salt. The brush elasticity is modeled using the entropy of ideal Gaussian chains, while the entropy of the microions and the electrostatic contribution to the grand potential are obtained by solving the nonlinear Poisson-Boltzmann equation for the system in contact with a salt reservoir. For the polyelectrolyte brush we considered both a uniformly charged slab and an inhomogeneous charge profile obtained using self-consistent field theory. Using the Derjaguin approximation, we related the planar-geometry results to the realistic crossed-cylinders experimental setup. Theoretical predictions are compared to experimental measurements (Marc Balastre's abstract, APS March 2001 Meeting) of the salt dependence of the compression normal forces between two quenched polyelectrolyte brushes formed by the adsorption of diblock copolymers poly(tert-butyl styrene)-sodium poly(styrene sulfonate) [PtBS/NaPSS] onto octadecyltriethoxysilane (OTE) hydrophobically modified mica, as well as onto bare mica.
NASA Astrophysics Data System (ADS)
Zhu, Zhenyu; Wang, Jianyu
1996-11-01
In this paper, two compression schemes are presented to meet the urgent need to compress the huge volume and high data rate of imaging spectrometer images. In view of the multidimensional nature of the images and the high-fidelity requirement on reconstruction, both schemes exploit the high redundancy in the spatial and spectral dimensions using mature wavelet transform technology. The wavelet transform is applied in two ways. First, combining a spatial wavelet transform with spectral DPCM decorrelation, a compression ratio of up to 84.3 with PSNR > 48 dB (a near-lossless result) was attained; this exploits the fact that the edge structure is similar across all spectral bands, while the wavelet transform has high resolution in the high-frequency components. Second, given the wavelet's efficiency in processing 'wideband transient' signals, it was used to transform the raw nonstationary signals in the spectral dimension, which also gave good results.
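The two decorrelation steps in the first scheme can be sketched as below, with a single-level Haar transform standing in for the paper's (unspecified) spatial wavelet; function names and the Haar choice are ours:

```python
import numpy as np

def spectral_dpcm(cube):
    """DPCM along the spectral axis: each band predicted by the previous one."""
    res = np.empty_like(cube)
    res[0] = cube[0]
    res[1:] = cube[1:] - cube[:-1]
    return res

def spectral_dpcm_inverse(res):
    """Exact inverse of spectral_dpcm (the decorrelation step is lossless)."""
    return np.cumsum(res, axis=0)

def haar_level(img):
    """One level of the 2D Haar transform: (LL, LH, HL, HH) subbands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal average
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal detail
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh
```

In a full coder the subbands and spectral residuals would then be quantized and entropy coded.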
NASA Astrophysics Data System (ADS)
Mishra, Rahul Kumar; Soota, Tarun; Singh, Ranjeet
2017-08-01
Rapid exploration and lavish consumption of underground petroleum resources have led to a scarcity of fossil fuels; moreover, the toxic emissions from such fuels are pernicious and have increased health hazards around the world. The aim was therefore to find an alternative fuel that meets the requirements currently served by petroleum fuels. Biodiesel is a clean, renewable, and biodegradable fuel with several advantages, among the most important being its eco-friendliness and better knocking characteristics than diesel fuel. In this work, the performance of Karanja oil was analyzed on a four-stroke, single-cylinder, water-cooled, variable compression ratio diesel engine. The fuel used was 5%-25% Karanja oil methyl ester by volume in diesel, and the results are compared with standard diesel fuel. Several performance parameters, i.e., brake thermal efficiency, brake specific fuel consumption, and exhaust gas temperature, are determined at all operating conditions at compression ratios of 17 and 17.5.
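Two of the performance parameters named above are simple ratios. A minimal sketch; the sample numbers (brake power, fuel flow, calorific value) are illustrative, not measurements from this study:

```python
# Brake specific fuel consumption and brake thermal efficiency, as commonly
# defined for engine performance tests.  Example numbers are illustrative.
def bsfc(fuel_flow_kg_per_h, brake_power_kw):
    """Brake specific fuel consumption, kg/kWh."""
    return fuel_flow_kg_per_h / brake_power_kw

def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, cv_mj_per_kg):
    """Fraction of the fuel's chemical energy converted to brake power."""
    fuel_power_kw = fuel_flow_kg_per_h / 3600.0 * cv_mj_per_kg * 1000.0
    return brake_power_kw / fuel_power_kw

bte = brake_thermal_efficiency(3.5, 1.0, 42.5)  # ~0.30 for a small diesel
sfc = bsfc(1.0, 3.5)                            # ~0.29 kg/kWh
```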
Further Investigations of Hypersonic Engine Seals
NASA Technical Reports Server (NTRS)
Dunlap, Patrick H., Jr.; Steinetz, Bruce M.; DeMange, Jeffrey J.
2004-01-01
Durable, flexible sliding seals are required in advanced hypersonic engines to seal the perimeters of movable engine ramps for efficient, safe operation in high heat flux environments at temperatures of 2000 to 2500 F. Current seal designs do not meet the demanding requirements for future engines, so NASA's Glenn Research Center is developing advanced seals and preloading devices to overcome these shortfalls. An advanced ceramic wafer seal design and two silicon nitride compression spring designs were evaluated in a series of compression, scrub, and flow tests. Silicon nitride wafer seals survived 2000 in. (50.8 m) of scrubbing at 2000 F against a silicon carbide rub surface with no chips or signs of damage. Flow rates measured for the wafers before and after scrubbing were almost identical and were up to 32 times lower than those recorded for the best braided rope seal flow blockers. Silicon nitride compression springs showed promise conceptually as potential seal preload devices to help maintain seal resiliency.
Ke, Dongxu; Bose, Susmita
2017-09-01
β-tricalcium phosphate (β-TCP) is a widely used biocompatible ceramic in orthopedic and dental applications. However, its osteoinductivity and mechanical properties still require improvement. In this study, porous β-TCP and MgO/ZnO-TCP scaffolds were prepared by the thermal decomposition of sucrose. Crack-free cylindrical scaffolds could only be prepared with the addition of MgO and ZnO, due to their stabilization effects. Porous MgO/ZnO-TCP scaffolds with a density of 61.39 ± 0.66%, an estimated pore size of 200 μm, and a compressive strength of 24.96 ± 3.07 MPa were prepared using 25 wt% sucrose after conventional sintering at 1250 °C. Microwave sintering further increased the compressive strength to 37.94 ± 6.70 MPa but decreased the open interconnected porosity to 8.74 ± 1.38%. In addition, the incorporation of polycaprolactone (PCL) increased toughness by 22.36 ± 3.22% while maintaining the compressive strength at 25.45 ± 2.21 MPa. A human osteoblast cell line was seeded on the scaffolds to evaluate the effects of MgO/ZnO and PCL on the biological properties of β-TCP in vitro. Both MgO/ZnO and PCL improved the osteoinductivity of β-TCP, and PCL also decreased osteoblastic apoptosis due to its particular surface chemistry. This novel porous MgO/ZnO-TCP scaffold with PCL shows improved mechanical and biological properties and has great potential in bone tissue engineering applications. Copyright © 2017. Published by Elsevier B.V.
A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline
NASA Technical Reports Server (NTRS)
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie;
2010-01-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three and a half year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCD) each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD requiring downstream data products access to the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data dependent kernels. The combination of POU framework and SVD compression provide downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels traceable to pixel level measurement uncertainties without having to store, retrieve and operate on prohibitively large covariance matrices. We describe the POU Framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
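The SVD compression idea described above (store a truncated factorization instead of a prohibitively large matrix, re-expand on demand) can be sketched as follows; this is our own minimal illustration, not the Kepler SOC implementation:

```python
import numpy as np

def svd_compress(A, rank):
    """Truncated SVD: keep the top `rank` modes of A.

    Storing (U, s, Vt) in factored form replaces A itself and acts as a
    low-pass filter, letting downstream code re-expand the approximation
    (or any needed submatrix) on the fly.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]

def svd_expand(U, s, Vt):
    """Reconstruct the rank-truncated approximation of A."""
    return (U * s) @ Vt
```

For a matrix that is (nearly) low-rank, the factored form is exact (to round-off) and far smaller than the dense matrix.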
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).
Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-05-16
A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, this research proposes an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal, validated by confirming that the output of the process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. A sparse representation of the GNSS signal, the essential precondition for CS, is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors via SVD. The M-dimensional observation vectors based on these left singular vectors, which play the role of the sampling operator in standard compressive sensing theory, allow the signal to be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, GNSS signal acquisition is enhanced: the useful signal is retained and noise filtered out by projecting the signal onto the most significant proper orthogonal decomposition modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
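The sampling-operator construction can be sketched as below. This is a toy illustration under our own assumptions, not the authors' code: cyclic shifts stand in for the rectangular Toeplitz matrix, and the adjoint of the orthonormal operator stands in for the paper's ℓ1 solver (which is needed for general sparse signals).

```python
import numpy as np

def shift_matrix(code, m):
    """Columns are m cyclic shifts of the replica code -- a simple stand-in
    for the rectangular Toeplitz matrix described above."""
    return np.stack([np.roll(code, k) for k in range(m)], axis=1)

def pod_sampler(code, m, n_modes):
    """Top left singular vectors of the shift matrix, used as the sampling
    operator (n_modes x len(code), with orthonormal rows)."""
    U, _, _ = np.linalg.svd(shift_matrix(code, m), full_matrices=False)
    return U[:, :n_modes].T

rng = np.random.default_rng(3)
code = np.where(rng.random(128) < 0.5, -1.0, 1.0)  # toy +/-1 ranging code
Phi = pod_sampler(code, m=32, n_modes=16)
x = Phi.T @ rng.normal(size=16)  # a signal lying in the POD subspace
y = Phi @ x                      # 16 compressed samples instead of 128
x_hat = Phi.T @ y                # adjoint recovery (l1 solver in the paper)
```

Because the rows of `Phi` are orthonormal, a signal inside the POD subspace is recovered exactly from the 16 measurements; projecting onto the subspace is also what gives the noise-filtering effect mentioned in the abstract.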
Resistivity behavior of hydrogen and liquid silane at high shock compression
NASA Astrophysics Data System (ADS)
Wang, Yi-Gao; Liu, Fu-Sheng; Liu, Qi-Jun
2018-07-01
To study the electrical properties of hydrogen-rich compounds under extreme conditions, the electrical resistivity of dense hydrogen and fluid silane was measured. The hydrogen sample was prepared by compressing pure hydrogen gas to 10 MPa in a cooled target system at a temperature of 77 K; the silane sample was obtained by the same method. High-pressure and high-temperature experiments were performed using a two-stage light-gas gun. The electrical resistivity of the samples decreased with increasing pressure and temperature, as expected. A minimum electrical resistivity of 0.3 × 10⁻³ Ω cm at 138 GPa and 4100 K was obtained for silane; the minimum resistivity of hydrogen, at 102 GPa and 4300 K, was 0.35 Ω cm. The measured electrical resistivity of shock-compressed hydrogen was an order of magnitude higher than that of fluid silane at 50-90 GPa, but beyond 100 GPa the resistivity difference between silane and hydrogen was very minor. The carriers in both samples were hydrogen, and the concentrations of hydrogen atoms in the two substances are close to each other. These results support the theoretical prediction that silane decomposes chemically into silicon nanoparticles and fluid hydrogen, with electrical conduction dominated by the fluid hydrogen. The results also support the theory of "chemical precompression": the existence of the Si-H bond helps reduce the pressure required for hydrogen metallization. These findings could lead the way to further metallic phases of hydrogen-rich materials and further experimental studies.
The Orbital Maneuvering Vehicle Training Facility visual system concept
NASA Technical Reports Server (NTRS)
Williams, Keith
1989-01-01
The purpose of the Orbital Maneuvering Vehicle (OMV) Training Facility (OTF) is to provide effective training for OMV pilots. A critical part of the training environment is the visual system, which will simulate the video scenes produced by the OMV Closed-Circuit Television (CCTV) system. The simulation will include camera models, dynamic target models, moving appendages, and scene degradation due to the compression/decompression of the video signal. Video system malfunctions will also be simulated to ensure that the pilot is ready to meet all challenges the real world might provide. One possible visual system configuration for the training facility that meets existing requirements is described.
33 CFR 154.850 - Operational requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...), § 154.824 (d) and (e) of this subpart must be checked for calibration by use of a span gas. (c) The... system that meets the requirements of 46 CFR 39.20-9(b) must not be connected to an overfill sensor... control system, compressed air or gas may be used to clear cargo hoses and loading arms, but must not be...
2010-09-02
Dynamic Mechanical Analysis (DMA). The fracture behavior of the mechanophore-linked polymer is also examined through the Double Cleavage Drilled ...multinary complex structures. Structural, microstructural, and chemical characterizations were explored by metrological tools to support this...simple hydrocarbons in order to quantitatively define structure-property relationships for reacting materials under shock compression. Embedded gauge
MEMS-Based Satellite Micropropulsion Via Catalyzed Hydrogen Peroxide Decomposition
NASA Technical Reports Server (NTRS)
Hitt, Darren L.; Zakrzwski, Charles M.; Thomas, Michael A.; Bauer, Frank H. (Technical Monitor)
2001-01-01
Micro-electromechanical systems (MEMS) techniques offer great potential for satisfying the mission requirements of the next generation of "micro-scale" satellites being designed by NASA and Department of Defense agencies. More commonly referred to as "nanosats", these miniature satellites feature masses in the range of 10-100 kg and therefore have unique propulsion requirements. The propulsion systems must be capable of providing extremely low levels of thrust and impulse while also satisfying stringent demands on size, mass, power consumption, and cost. We begin with an overview of micropropulsion requirements and some current MEMS-based strategies being developed to meet these needs. The remainder of the article focuses on the progress being made at NASA Goddard Space Flight Center toward the development of a prototype monopropellant MEMS thruster that uses the catalyzed chemical decomposition of high-concentration hydrogen peroxide as a propulsion mechanism. The products of decomposition are delivered to a micro-scale converging/diverging supersonic nozzle that produces the thrust vector; the targeted thrust level is approximately 500 μN with a specific impulse of 140-180 seconds. Macro-scale hydrogen peroxide thrusters have been used for satellite propulsion for decades; however, the implementation of traditional thruster designs on a MEMS scale has uncovered new challenges in fabrication, materials compatibility, and combustion and hydrodynamic modeling. A summary of the achievements of the project to date is given, as is a discussion of remaining challenges and future prospects.
Effect of silane dilution on intrinsic stress in glow discharge hydrogenated amorphous silicon films
NASA Astrophysics Data System (ADS)
Harbison, J. P.; Williams, A. J.; Lang, D. V.
1984-02-01
Measurements of the intrinsic stress in hydrogenated amorphous silicon (a-Si : H) films grown by rf glow discharge decomposition of silane diluted to varying degrees in argon are presented. Films are found to grow under exceedingly high compressive stress. Low values of macroscopic film density and low stress values are found to correlate with high growth rate. An abrupt drop in stress occurs between 2 and 3% silane at precisely the point where columnar growth morphology appears. No corresponding abrupt change is noted in density, growth rate, or plasma species concentrations as determined by optical emission spectroscopy. Finally, a model of diffusive incorporation of hydrogen or some gaseous impurity during growth into the bulk of the film behind the growing interface is proposed to explain the results.
Video quality assessment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
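The block-wise singular-value distance underlying measures of this kind can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the 8×8 block size and the median-referenced aggregation of the per-block distances into one numerical score are assumptions chosen for clarity.

```python
import numpy as np

def block_svd_distance(ref, dist, bs=8):
    """Per-block singular-value distance between a reference and a distorted frame."""
    h, w = ref.shape
    dmap = []
    for i in range(0, h - h % bs, bs):
        row = []
        for j in range(0, w - w % bs, bs):
            s_ref = np.linalg.svd(ref[i:i+bs, j:j+bs], compute_uv=False)
            s_dis = np.linalg.svd(dist[i:i+bs, j:j+bs], compute_uv=False)
            # Euclidean distance between the two blocks' singular-value vectors
            row.append(np.sqrt(np.sum((s_ref - s_dis) ** 2)))
        dmap.append(row)
    dmap = np.array(dmap)
    # numerical score: mean absolute deviation of block distances from their median
    return dmap, float(np.mean(np.abs(dmap - np.median(dmap))))
```

The returned `dmap` plays the role of the "graphical measure" (a distortion map per frame), while the scalar is the numerical score; an identical pair of frames yields a zero map and a zero score.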
Parallel processing approach to transform-based image coding
NASA Astrophysics Data System (ADS)
Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.
1991-06-01
This paper describes a flexible parallel processing architecture designed for use in real time video processing. The system consists of floating point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple bus architecture in combination with a dual ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification described.
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
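The sparsify-then-sample stage described above can be sketched compactly. As simplifications for illustration only, a 1-D orthonormal DCT stands in for the 2-D transform, and a dense Gaussian matrix stands in for the structurally random matrix (SRM); the seed doubles as the measurement-matrix key, which is the sense in which sampling "realizes the initial encryption".

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def cs_sample(x, m, seed=0):
    """Sparsify a 1-D signal with the DCT, then take m < n random measurements."""
    n = len(x)
    coeffs = dct_matrix(n) @ x                              # sparse representation
    phi = np.random.default_rng(seed).standard_normal((m, n)) / np.sqrt(m)
    return phi @ coeffs                                     # compressed measurements
```

A receiver holding the same seed can regenerate `phi` and recover the coefficients with any standard sparse-recovery solver; without the seed the measurements are unintelligible.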
NASA Technical Reports Server (NTRS)
Chung, T. J. (Editor); Karr, Gerald R. (Editor)
1989-01-01
Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.
Synthesis and properties of selenium trihydride at high pressures
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Xu, Wan; Wang, Yu; Jiang, Shuqing; Gorelli, Federico A.; Greenberg, Eran; Prakapenka, Vitali B.; Goncharov, Alexander F.
2018-02-01
The chemical reaction products of molecular hydrogen (H2) with selenium (Se) are studied by synchrotron x-ray diffraction (XRD) and Raman spectroscopy at high pressures. We find that a common H2Se is synthesized at 0.3 GPa using laser heating. Upon compression at 300 K, a crystal of the theoretically predicted Cccm H3Se has been grown at 4.6 GPa. At room temperature, H3Se shows a reversible phase decomposition after laser irradiation above 8.6 GPa, but remains stable up to 21 GPa. However, at 170 K Cccm H3Se persists up to 39.5 GPa based on XRD measurements, while low-temperature Raman spectra weaken and broaden above 23.1 GPa. At these conditions, the sample is visually nontransparent and shiny suggesting that metallization occurred.
Structural applications of metal foams considering material and geometrical uncertainty
NASA Astrophysics Data System (ADS)
Moradi, Mohammadreza
Metal foam is a relatively new and potentially revolutionary material that allows for components to be replaced with elements capable of large energy dissipation, or components to be stiffened with elements which will generate significant supplementary energy dissipation when buckling occurs. Metal foams provide a means to explore reconfiguring steel structures to mitigate cross-section buckling in many cases and dramatically increase energy dissipation in all cases. The microstructure of metal foams consists of solid and void phases. These voids have random shape and size. The randomness introduced into metal foams during the manufacturing process creates more uncertainty in their behavior compared to solid steel, so studying uncertainty in the performance metrics of structures containing metal foams is more crucial than for conventional structures. This study therefore presents structural applications of metal foams considering material and geometrical uncertainty. It applies the Sobol' decomposition of a function of many random variables to different problems in structural mechanics. First, the Sobol' decomposition itself is reviewed and extended to cover the case in which the input random variables have Gaussian distributions. Then two examples are given: a polynomial function of 3 random variables and the collapse load of a two-story frame. In the structural example, the Sobol' decomposition is used to decompose the variance of the response, the collapse load, into contributions from the individual input variables. This decomposition reveals the relative importance of the individual member yield stresses in determining the collapse load of the frame.
In applying the Sobol' decomposition to this structural problem the following issues are addressed: calculation of the components of the Sobol' decomposition by Monte Carlo simulation; the effect of input distribution on the Sobol' decomposition; convergence of estimates of the Sobol' decomposition with sample size using various sampling schemes; and the possibility of model reduction guided by the results of the Sobol' decomposition. The rest of the study investigates different structural applications of metal foam. In the first application, it is shown that metal foams have the potential to serve as hysteretic dampers in the braces of braced building frames. Using metal foams in the structural braces decreases dynamic responses such as roof drift, base shear and maximum moment in the columns. Optimum metal foam strengths differ for different earthquakes. In order to use metal foam in structural braces, metal foams need to have a stable cyclic response, which might be achievable for metal foams with high relative density. The second application is to improve the strength and ductility of a steel tube by filling it with steel foam. Steel tube beams and columns are able to provide significant strength for structures. They have an efficient shape with a large second moment of inertia, which leads to light elements with high bending strength. Steel foams with a high strength-to-weight ratio are used to fill the steel tube to improve its mechanical behavior. Linear eigenvalue and plastic collapse finite element (FE) analyses are performed on the steel foam filled tube under pure compression and in a three-point bending simulation. It is shown that the foam significantly improves the maximum strength and energy absorption capacity of the steel tubes. Different configurations with different volumes of steel foam and degrees of composite behavior are investigated. It is demonstrated that there are some optimum configurations with more efficient behavior.
If composite action between the steel foam and steel increases, the strength of the element will improve due to a change of the failure mode from local buckling to yielding. Moreover, the Sobol' decomposition is used to investigate uncertainty in the strength and ductility of the composite tube, including the sensitivity of the strength to input parameters such as the foam density, tube wall thickness, steel properties, etc. Monte Carlo simulation is performed on aluminum foam filled tubes under three-point bending conditions, using nonlinear finite element analysis. Results show that the steel foam properties have a greater effect on the ductility of the steel foam filled tube than on its strength. Moreover, flexural strength is more sensitive to steel properties than to aluminum foam properties. Finally, the properties of hypothetical structural steel foam C-channels are investigated via simulations. In thin-walled structural members, stability of the walls is the primary driver of structural limit states. Moreover, light weight is one of the main advantages of thin-walled structural members. Therefore, thin-walled structural members made of steel foam exhibit improved strength while maintaining their low weight. Linear eigenvalue, finite strip method (FSM) and plastic collapse FE analyses are used to evaluate the strength and ductility of steel foam C-channels under uniform compression and bending. It is found that replacing the steel walls of the C-channel with steel foam walls increases the local buckling resistance and decreases the global buckling resistance of the C-channel. By using the Sobol' decomposition, an optimum configuration for the variable-density steel foam C-channel can be found. For high relative density, replacing the solid steel of the lips and flange elements with steel foam increases the buckling strength.
On the other hand, for low relative density, replacing the solid steel of the lips and flange elements with steel foam decreases the buckling strength. Moreover, it is shown that the buckling strength of the steel foam C-channel is sensitive to the second-order Sobol' indices. In summary, this research shows that metal foams have great potential to improve different types of structural responses, and there are many promising applications for metal foam in civil structures.
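The variance decomposition used throughout this study can be illustrated with a standard pick-freeze Monte Carlo estimator of first-order Sobol' indices. This is a generic sketch, not the author's code; the particular estimator form (Saltelli-style) and sample size are choices of this illustration.

```python
import numpy as np

def first_order_sobol(f, dists, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices.

    f     : vectorized model taking an (n, d) sample matrix
    dists : list of d samplers, each called as dist(rng, n)
    """
    rng = np.random.default_rng(seed)
    d = len(dists)
    A = np.column_stack([dist(rng, n) for dist in dists])   # two independent
    B = np.column_stack([dist(rng, n) for dist in dists])   # sample matrices
    yA, yB = f(A), f(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        AB = B.copy()
        AB[:, i] = A[:, i]                    # "freeze" variable i at A's values
        S[i] = np.mean(yA * (f(AB) - yB)) / var
    return S
```

For an additive model such as `f(X) = X[:, 0] + 2 * X[:, 1]` with uniform inputs, the estimator recovers the analytical variance shares (0.2 and 0.8), mirroring how the study apportions collapse-load variance among member yield stresses.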
NASA Astrophysics Data System (ADS)
Fujioka, Shinsuke; Arikawa, Yasunobu; Kojima, Sadaoki; Johzaki, Tomoyuki; Nagatomo, Hideo; Sawada, Hiroshi; Lee, Seung Ho; Shiroto, Takashi; Ohnishi, Naofumi; Morace, Alessio; Vaisseau, Xavier; Sakata, Shohei; Abe, Yuki; Matsuo, Kazuki; Farley Law, King Fai; Tosaki, Shota; Yogo, Akifumi; Shigemori, Keisuke; Hironaka, Yoichiro; Zhang, Zhe; Sunahara, Atsushi; Ozaki, Tetsuo; Sakagami, Hitoshi; Mima, Kunioki; Fujimoto, Yasushi; Yamanoi, Kohei; Norimatsu, Takayoshi; Tokita, Shigeki; Nakata, Yoshiki; Kawanaka, Junji; Jitsuno, Takahisa; Miyanaga, Noriaki; Nakai, Mitsuo; Nishimura, Hiroaki; Shiraga, Hiroyuki; Kondo, Kotaro; Bailly-Grandvaux, Mathieu; Bellei, Claudio; Santos, João Jorge; Azechi, Hiroshi
2016-05-01
A petawatt laser for fast ignition experiments (LFEX) laser system [N. Miyanaga et al., J. Phys. IV France 133, 81 (2006)], which is currently capable of delivering 2 kJ in a 1.5 ps pulse using 4 laser beams, has been constructed beside the GEKKO-XII laser facility for demonstrating efficient fast heating of a dense plasma up to the ignition temperature under the auspices of the Fast Ignition Realization EXperiment (FIREX) project [H. Azechi et al., Nucl. Fusion 49, 104024 (2009)]. In the FIREX experiment, a cone is attached to a spherical target containing a fuel to prevent a corona plasma from entering the path of the intense heating LFEX laser beams. The LFEX laser beams are focused at the tip of the cone to generate a relativistic electron beam (REB), which heats a dense fuel core generated by compression of a spherical deuterized plastic target induced by the GEKKO-XII laser beams. Recent studies indicate that the current heating efficiency is only 0.4%, and three requirements to achieve higher efficiency of the fast ignition (FI) scheme with the current GEKKO and LFEX systems have been identified: (i) reduction of the high energy tail of the REB; (ii) formation of a fuel core with high areal density using a limited number (twelve) of GEKKO-XII laser beams as well as a limited energy (4 kJ of 0.53-μm light in a 1.3 ns pulse); (iii) guiding and focusing of the REB to the fuel core. Laser-plasma interactions in a long-scale plasma generate electrons that are too energetic to efficiently heat the fuel core. Three actions were taken to meet the first requirement. First, the intensity contrast of the foot pulses to the main pulses of the LFEX was improved to >109. Second, a 5.5-mm-long cone was introduced to reduce pre-heating of the inner cone wall caused by illumination of the unconverted 1.053-μm light of implosion beam (GEKKO-XII). 
Third, the outside of the cone wall was coated with a 40-μm plastic layer to protect it from the pressure caused by imploding plasma. Following the above improvements, conversion of 13% of the LFEX laser energy to a low energy portion of the REB, whose slope temperature is 0.7 MeV, which is close to the ponderomotive scaling value, was achieved. To meet the second requirement, the compression of a solid spherical ball with a diameter of 200-μm to form a dense core with an areal density of ˜0.07 g/cm2 was induced by a laser-driven spherically converging shock wave. Converging shock compression is more hydrodynamically stable compared to shell implosion, while a hot spot cannot be generated with a solid ball target. Solid ball compression is preferable also for compressing an external magnetic field to collimate the REB to the fuel core, due to the relatively small magnetic Reynolds number of the shock compressed region. To meet the third requirement, we have generated a strong kilo-tesla magnetic field using a laser-driven capacitor-coil target. The strength and time history of the magnetic field were characterized with proton deflectometry and a B-dot probe. Guidance of the REB using a 0.6-kT field in a planar geometry has been demonstrated at the LULI 2000 laser facility. In a realistic FI scenario, a magnetic mirror is formed between the REB generation point and the fuel core. The effects of the strong magnetic field on not only REB transport but also plasma compression were studied using numerical simulations. According to the transport calculations, the heating efficiency can be improved from 0.4% to 4% by the GEKKO and LFEX laser system by meeting the three requirements described above. This efficiency is scalable to 10% of the heating efficiency by increasing the areal density of the fuel core.
1982 AFOSR/AFRPL Rocket Propulsion Research Meeting Held at Lancaster, California on 2-4 March 1982.
1982-02-01
UNIVERSITY OF DELAWARE, P.I.: Thomas B. Brill. THE HMX SOLID PHASE DIAGRAM: δ-HMX is the stable polymorph above 248 °C regardless of ... The HMX transformation ... is orders of magnitude faster than propellant combustion rates; δ-HMX is therefore the polymorph that initiates decomposition ... The rapidly accelerating usage of HMX/RDX for minimum-smoke solid propellants has been hampered by a lack of ballistic tailoring flexibility, which limits
Effects of varying material properties on the load deformation characteristics of heel cushions.
Sun, Pi-Chang; Wei, Hung-Wen; Chen, Chien-Hua; Wu, Chun-Hao; Kao, Hung-Chan; Cheng, Cheng-Kung
2008-07-01
Various insole materials are used to attenuate heel-strike impact. This study presented a compression test to investigate the deformation characteristics of common heel cushions. Two materials (thermoplastic elastomer, TPE, and silicone) with three hardness levels and six thicknesses were analyzed. They underwent consecutive loading-unloading cycles in a load control mode. The displacement of material thickness was recorded as cyclic compression was applied and released from 0 to 1050 N. The energy input, return and dissipation were evaluated based on the load deformation curves when new and after repeated compression. The TPE recovered more deformed energy and thickness than the silicone after the first loading cycle. The silicone preserved more strain energy for elastic recovery in the unloading process as its hardness increased. The deformed energy decreased as the original thickness did not completely recover under cyclic tests. The reduction in hysteresis area gradually converged within 20 cycles. The silicone attenuated more impact energy in the initial cycles, but its energy dissipation was reduced after repeated loading. Increasing hardness or thickness should be considered to improve resilience or accommodate persistent compression without flattening. Careful selection of cushion materials is imperative to meet individual functional demands.
Shock wave-induced phase transition in RDX single crystals.
Patterson, James E; Dreger, Zbigniew A; Gupta, Yogendra M
2007-09-20
The real-time, molecular-level response of oriented single crystals of hexahydro-1,3,5-trinitro-s-triazine (RDX) to shock compression was examined using Raman spectroscopy. Single crystals of [111], [210], or [100] orientation were shocked under stepwise loading to peak stresses from 3.0 to 5.5 GPa. Two types of measurements were performed: (i) high-resolution Raman spectroscopy to probe the material at peak stress and (ii) time-resolved Raman spectroscopy to monitor the evolution of molecular changes as the shock wave reverberated through the material. The frequency shift of the CH stretching modes under shock loading appeared to be similar for all three crystal orientations below 3.5 GPa. Significant spectral changes were observed in crystals shocked above 4.5 GPa. These changes were similar to those observed in static pressure measurements, indicating the occurrence of the alpha-gamma phase transition in shocked RDX crystals. No apparent orientation dependence in the molecular response of RDX to shock compression up to 5.5 GPa was observed. The phase transition had an incubation time of approximately 100 ns when RDX was shocked to 5.5 GPa peak stress. The observation of the alpha-gamma phase transition under shock wave loading is briefly discussed in connection with the onset of chemical decomposition in shocked RDX.
Stability of benzocaine formulated in commercial oral disintegrating tablet platforms.
Köllmer, Melanie; Popescu, Carmen; Manda, Prashanth; Zhou, Leon; Gemeinhart, Richard A
2013-12-01
Pharmaceutical excipients contain reactive groups and impurities due to manufacturing processes that can cause decomposition of active drug compounds. The aim of this investigation was to determine if commercially available oral disintegrating tablet (ODT) platforms induce active pharmaceutical ingredient (API) degradation. Benzocaine was selected as the model API due to known degradation through ester and primary amino groups. Benzocaine was either compressed at a constant pressure, 20 kN, or at pressure necessary to produce a set hardness, i.e., where a series of tablets were produced at different compression forces until an average hardness of approximately 100 N was achieved. Tablets were then stored for 6 months under International Conference on Harmonization recommended conditions, 25°C and 60% relative humidity (RH), or under accelerated conditions, 40°C and 75% RH. Benzocaine degradation was monitored by liquid chromatography-mass spectrometry. Regardless of the ODT platform, no degradation of benzocaine was observed in tablets that were kept for 6 months at 25°C and 60% RH. After storage for 30 days under accelerated conditions, benzocaine degradation was observed in a single platform. Qualitative differences in ODT platform behavior were observed in physical appearance of the tablets after storage under different temperature and humidity conditions.
Mixture design and treatment methods for recycling contaminated sediment.
Wang, Lei; Kwok, June S H; Tsang, Daniel C W; Poon, Chi-Sun
2015-01-01
Conventional marine disposal of contaminated sediment presents significant financial and environmental burden. This study aimed to recycle contaminated sediment by assessing the roles and integration of binder formulation, sediment pretreatment, curing method, and waste inclusion in stabilization/solidification. The results demonstrated that sediment blocks produced with coal fly ash and lime partially replacing cement at a binder-to-sediment ratio of 3:7 developed sufficient 28-d compressive strength to be used as fill materials for construction. The X-ray diffraction analysis revealed that hydration products (calcium hydroxide) were difficult to form at high sediment content. Thermal pretreatment of sediment removed 90% of indigenous organic matter, significantly increased the compressive strength, and enabled reuse as non-load-bearing masonry units. Besides, 2-h CO2 curing accelerated early-stage carbonation inside the porous structure, sequestered 5.6% of CO2 (by weight) in the sediment blocks, and achieved strength comparable to 7-d curing. Thermogravimetric analysis indicated substantial weight loss corresponding to decomposition of poorly and well crystalline calcium carbonate. Moreover, partial replacement of contaminated sediment by various granular waste materials notably augmented the strength of the sediment blocks. The metal leachability of the sediment blocks was minimal and acceptable for reuse. These results suggest that contaminated sediment should be viewed as a useful resource. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Bowen; He, Mengsheng; Wang, Huaguang
2017-07-01
Andalusite has been recognized as a special mineral for the production of refractory ceramics due to its unique property of automatically decomposing into mullite and silica during heating at high temperature. The phase transformation from andalusite to mullite plays a critical role in the effective application of andalusite. This study investigated the microstructural characteristics and sinterability of andalusite powder during high-temperature decomposition. The andalusite powder was bonded with kaolin and pressed into a cylindrical green body at 20 MPa; it was then fired at 1423 K to 1723 K (1150 °C to 1450 °C). The microstructures and mechanical strengths of the sintered ceramics were studied by compressive testing, X-ray diffraction, and scanning electron microscopy. The results showed that newly formed mullite appeared as rodlike microcrystals dispersed around the initial andalusite. At 1423 K (1150 °C), the mullitization of andalusite had started, but complete mullitization was not achieved until firing at 1723 K (1450 °C). The compressive strength of the ceramics increased from 93.7 to 294.6 MPa as the firing temperature increased from 1423 K to 1723 K (1150 °C to 1450 °C). Meanwhile, the bulk density of the ceramics changed only slightly, from 2.15 to 2.19 g/cm3.
Fast Detection of Material Deformation through Structural Dissimilarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth
2015-10-29
Designing materials that are resistant to extreme temperatures and brittleness relies on assessing the structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and on crack detection of gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on the structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of the problem, key developments, and decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.
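For reference, the global (single-window) form of the SSIM index that such degradation detectors build on is compact. The production windowed, multi-resolution variant applied to gigabyte-scale blocks is more involved; this sketch assumes a single window over intensities normalized to dynamic range L, with the standard stabilization constants.

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM index between two images with dynamic range L."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilization constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # luminance/contrast/structure terms in their combined two-factor form
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical blocks score 1.0; a drop in the score between successive scans of the same region is the signal correlated with crack formation in the approach described above.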
50th Annual Technical Meeting of the Society of Engineering Science (SES)
2014-08-15
McDowell (Georgia Tech), Min Zhou () Virtual Characterization of Composites with Lamination Defects for Wind Turbine Spar Cap MUKUNDAN SRINIVASAN...Zhang (IHPC Singapore) Damage Mechanisms in Irradiated Metallic Glasses Richard Baumer (MIT), Michael Demkowicz (MIT) Slip Avalanches in Amorphous...Michigan, 48090) Atomistic Simulations of c+a Pyramidal Slip in Magnesium Single Crystal under Compression Xiaozhi Tang (MIT & BJTU), Yafang Guo
Code of Federal Regulations, 2010 CFR
2010-07-01
... emission standards as required in §§ 60.4204 and 60.4205 according to the manufacturer's written... standards if I am an owner or operator of a stationary CI internal combustion engine? 60.4206 Section 60...) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Stationary Compression...
Code of Federal Regulations, 2011 CFR
2011-07-01
... non-emergency engines if I am an owner or operator of a stationary CI internal combustion engine? 60... Compression Ignition Internal Combustion Engines Emission Standards for Owners and Operators § 60.4204 What... internal combustion engine? (a) Owners and operators of pre-2007 model year non-emergency stationary CI ICE...
Code of Federal Regulations, 2013 CFR
2013-07-01
... non-emergency engines if I am an owner or operator of a stationary CI internal combustion engine? 60... Compression Ignition Internal Combustion Engines Emission Standards for Owners and Operators § 60.4204 What... internal combustion engine? (a) Owners and operators of pre-2007 model year non-emergency stationary CI ICE...
Code of Federal Regulations, 2014 CFR
2014-07-01
... non-emergency engines if I am an owner or operator of a stationary CI internal combustion engine? 60... Compression Ignition Internal Combustion Engines Emission Standards for Owners and Operators § 60.4204 What... internal combustion engine? (a) Owners and operators of pre-2007 model year non-emergency stationary CI ICE...
Code of Federal Regulations, 2012 CFR
2012-07-01
... non-emergency engines if I am an owner or operator of a stationary CI internal combustion engine? 60... Compression Ignition Internal Combustion Engines Emission Standards for Owners and Operators § 60.4204 What... internal combustion engine? (a) Owners and operators of pre-2007 model year non-emergency stationary CI ICE...
Code of Federal Regulations, 2013 CFR
2013-07-01
... meet the following emission limitation, except during periods of startup . . . During periods of startup you must . . . 1. 2SLB stationary RICE a. Reduce CO emissions by 58 percent or more; or b. Limit... time spent at idle and minimize the engine's startup time at startup to a period needed for appropriate...
Code of Federal Regulations, 2014 CFR
2014-07-01
... meet the following emission limitation, except during periods of startup . . . During periods of startup you must . . . 1. 2SLB stationary RICE a. Reduce CO emissions by 58 percent or more; or b. Limit... time spent at idle and minimize the engine's startup time at startup to a period needed for appropriate...
Code of Federal Regulations, 2012 CFR
2012-07-01
... meet the following emission limitation, except during periods of startup . . . During periods of startup you must . . . 1. 2SLB stationary RICE a. Reduce CO emissions by 58 percent or more; or b. Limit... time spent at idle and minimize the engine's startup time at startup to a period needed for appropriate...
Advanced End-to-end Simulation for On-board Processing (AESOP)
NASA Technical Reports Server (NTRS)
Mazer, Alan S.
1994-01-01
Developers of data compression algorithms typically use their own software together with commercial packages to implement, evaluate and demonstrate their work. While convenient for an individual developer, this approach makes it difficult to build on or use another's work without intimate knowledge of each component. When several people or groups work on different parts of the same problem, the larger view can be lost. What's needed is a simple piece of software to stand in the gap and link together the efforts of different people, enabling them to build on each other's work, and providing a base for engineers and scientists to evaluate the parts as a cohesive whole and make design decisions. AESOP (Advanced End-to-end Simulation for On-board Processing) attempts to meet this need by providing a graphical interface to a developer-selected set of algorithms, interfacing with compiled code and standalone programs, as well as procedures written in the IDL and PV-Wave command languages. As a proof of concept, AESOP is outfitted with several data compression algorithms integrating previous work on different processors (AT&T DSP32C, TI TMS320C30, SPARC). The user can specify at run-time the processor on which individual parts of the compression should run. Compressed data is then fed through simulated transmission and uncompression to evaluate the effects of compression parameters, noise and error correction algorithms. The following sections describe AESOP in detail. Section 2 describes fundamental goals for usability. Section 3 describes the implementation. Sections 4 through 5 describe how to add new functionality to the system and present the existing data compression algorithms. Sections 6 and 7 discuss portability and future work.
[Development of a portable ambulatory ECG monitor based on embedded microprocessor unit].
Wang, Da-xiong; Wang, Guo-jun
2005-06-01
To develop a new kind of portable ambulatory ECG monitor. The hardware and software were designed based on the RCA CDP1802. New methods of ECG data compression and feature extraction of QRS complexes were applied to the software design. A model for automatic arrhythmia analysis was established for real-time ambulatory ECG data analysis. Compactness, low power consumption and low cost were emphasized in the hardware design. This compact, lightweight monitor with low power consumption and high intelligence was capable of real-time arrhythmia monitoring for more than 48 h. More than ten types of arrhythmia could be detected; only the compressed abnormal ECG data was recorded, and could be transmitted to the host if required. The monitor meets the design requirements and can be used for ambulatory ECG monitoring.
Two Phase Technology Development Initiatives
NASA Technical Reports Server (NTRS)
Didion, Jeffrey R.
1999-01-01
Three promising thermal technology development initiatives, vapor compression thermal control systems, electronics cooling, and electrohydrodynamics applications, are outlined herein. These technologies will provide thermal engineers with additional tools to meet the thermal challenges presented by increased power densities and reduced architectural options that will be available in future spacecraft. Goddard Space Flight Center and the University of Maryland are fabricating and testing a 'proto-flight' vapor-compression-based thermal control system for the Ultra Long Duration Balloon (ULDB) Program. The vapor compression system will be capable of transporting approximately 400 W of heat while providing a temperature lift of 60 °C. The system is constructed of 'commercial off-the-shelf' hardware that is modified to meet the unique environmental requirements of the ULDB. A demonstration flight is planned for 1999 or early 2000. Goddard Space Flight Center has embarked upon a multi-discipline effort to address a number of design issues regarding spacecraft electronics. The program addressed the high-priority design issues concerning the total mass of standard spacecraft electronics enclosures and the impact of design changes on thermal performance. This presentation reviews the pertinent results of the Lightweight Electronics Enclosure Program. Electronics cooling is a growing challenge for thermal engineers due to increasing power densities and spacecraft architecture. The space-flight qualification program and preliminary results of thermal performance tests of copper-water heat pipes are presented. Electrohydrodynamics (EHD) is an emerging technology that uses the secondary forces that result from the application of an electric field to a flowing fluid to enhance heat transfer and manage fluid flow. A brief review of current EHD capabilities regarding heat transfer enhancement of commercial heat exchangers and capillary pumped loops is presented.
Goddard Space Flight Center research efforts applying this technique to fluid management and fluid pumping are discussed.
NASA Astrophysics Data System (ADS)
Hrbek, George
2001-06-01
At SCCM Shock 99, Lie Group Theory was applied to the problem of temperature independent, hydrodynamic shock in a Birch-Murnaghan continuum. (1) Ratios of the group parameters were shown to be linked to the physical parameters specified in the second, third, and fourth order BM-EOS approximations. This effort has subsequently been extended to provide a general formalism for a wide class of mathematical forms (i.e., K(r,P)) of the equation of state. Variations in material expansion and resistance (i.e., counter pressure) are shown to be functions of compression and material variation ahead of the expanding front. Specific examples included the Birch-Murnaghan, Vinet, Brennan-Stacey, Shanker, Tait, Poirier, and Jones-Wilkins-Lee (JWL) forms. (2) With these ratios defined, the next step is to predict the behavior of these K(r,P) type solids. To do this, one must introduce the group ratios into a numerical simulation for the flow and generate the density, pressure, and particle velocity profiles as the shock moves through the material. This will allow the various equations of state, and their respective fitting coefficients, to be compared with experiments, and additionally, allow the empirical coefficients for these EOS forms to be adjusted accordingly. (1) Hrbek, G. M., Invariant Functional Forms For The Second, Third, And Fourth Order Birch-Murnaghan Equation of State For Materials Subject to Hydrodynamic Shock, Proceedings of the 11th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 99), Snowbird, Utah (2) Hrbek, G. M., Invariant Functional Forms For K(r,P) Type Equations Of State For Hydrodynamically Driven Flows, Submitted to the 12th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 01), Atlanta, Georgia
Mg-doped ZnO nanoparticles for efficient sunlight-driven photocatalysis.
Etacheri, Vinodkumar; Roshan, Roshith; Kumar, Vishwanathan
2012-05-01
Magnesium-doped ZnO (ZMO) nanoparticles were synthesized through an oxalate coprecipitation method. Crystallization of ZMO upon thermal decomposition of the oxalate precursors was investigated using differential scanning calorimetry (DSC) and X-ray diffraction (XRD) techniques. XRD studies point toward a significant c-axis compression and reduced crystallite sizes for ZMO samples in contrast to undoped ZnO, which was further confirmed by HRSEM studies. X-ray photoelectron spectroscopy (XPS), UV/vis spectroscopy and photoluminescence (PL) spectroscopy were employed to establish the electronic and optical properties of these nanoparticles. XPS studies confirmed the substitution of Zn(2+) by Mg(2+), crystallization of a MgO secondary phase, and increased Zn-O bond strengths in Mg-doped ZnO samples. Textural properties of the ZMO samples obtained at various calcination temperatures were superior in comparison to undoped ZnO. In addition, ZMO samples exhibited a blue-shift in the near-band-edge PL emission, decreased PL intensities and superior sunlight-induced photocatalytic decomposition of methylene blue in contrast to undoped ZnO. The most active photocatalyst, 0.1-MgZnO obtained after calcination at 600 °C, showed a 2-fold increase in photocatalytic activity compared to undoped ZnO. Band gap widening, superior textural properties and efficient electron-hole separation were identified as the factors responsible for the enhanced sunlight-driven photocatalytic activities of Mg-doped ZnO nanoparticles.
NASA Astrophysics Data System (ADS)
Liu, Tingting; Zhang, Ling; Wang, Shutao; Cui, Yaoyao; Wang, Yutian; Liu, Lingfei; Yang, Zhe
2018-03-01
Qualitative and quantitative analysis of polycyclic aromatic hydrocarbons (PAHs) was carried out by three-dimensional fluorescence spectroscopy combined with Alternating Weighted Residue Constraint Quadrilinear Decomposition (AWRCQLD). The experimental subjects were acenaphthene (ANA) and naphthalene (NAP). Firstly, to reduce the redundant information in the three-dimensional fluorescence spectral data, the wavelet transform was used to compress the data in preprocessing. Then, four-dimensional data was constructed using the excitation-emission fluorescence spectra of PAHs at different concentrations. The sample data was obtained from three solvents: methanol, ethanol and ultra-pure water. The four-dimensional spectral data was analyzed by AWRCQLD, and the recovery rates of the PAHs in the three solvents were obtained and compared. On one hand, the results showed that PAHs can be measured more accurately from the higher-order data, with a higher recovery rate. On the other hand, the results showed that AWRCQLD better reflects the advantage of the four-dimensional algorithm over second-order calibration and other third-order calibration algorithms. The recovery rate of ANA was 96.5%-103.3% and the root mean square error of prediction was 0.04 μg L⁻¹. The recovery rate of NAP was 96.7%-115.7% and the root mean square error of prediction was 0.06 μg L⁻¹.
Spatial-temporal distortion metric for in-service quality monitoring of any digital video system
NASA Astrophysics Data System (ADS)
Wolf, Stephen; Pinson, Margaret H.
1999-11-01
Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality, and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of (1) spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression, (2) the angular direction of the spatial gradient, (3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1), and (4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective databases spanning many different scenes, systems, bit-rates, and applications.
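Property (2) above, the angular direction of the spatial gradient, is the standard edge-orientation feature of image processing. A minimal NumPy sketch of computing gradient magnitude and angle with Sobel kernels follows; this is a generic illustration, not the ITS spatial activity filters themselves.

```python
import numpy as np

def sobel_gradients(img):
    """Gradient magnitude and angle of a 2-D float image via 3x3 Sobel
    cross-correlation with edge padding. The angle is the 'angular
    direction of the spatial gradient' used as a distortion feature."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T

    def conv2(a, k):
        out = np.zeros_like(a)
        p = np.pad(a, 1, mode="edge")
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + a.shape[0], j:j + a.shape[1]]
        return out

    gx, gy = conv2(img, kx), conv2(img, ky)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

A vertical edge produces a purely horizontal gradient (angle 0 or π), and flat regions produce zero magnitude, which is what lets gradient-based features separate edge distortions from noise.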
Pharmaceutical and analytical evaluation of triphalaguggulkalpa tablets
Savarikar, Shreeram S.; Barbhind, Maneesha M.; Halde, Umakant K.; Kulkarni, Alpana P.
2011-01-01
Aim of the Study: Development of standardized, synergistic, safe and effective traditional herbal formulations with robust scientific evidence can offer faster and more economical alternatives for the treatment of disease. The main objective was to develop a method of preparation of guggulkalpa tablets so that the tablets meet the criteria of efficacy, stability, and safety. Materials and Methods: Triphalaguggulkalpa tablet, described in the Sharangdhar Samhita and containing guggul and triphala powder, was used as a model drug. Preliminary experiments on marketed triphalaguggulkalpa tablets exhibited delayed in vitro disintegration, indicating probable delayed in vivo disintegration. The study involved preparation of triphalaguggulkalpa tablets by Ayurvedic text methods and by the wet granulation, dry granulation, and direct compression methods. The tablets were evaluated for loss on drying, volatile oil content, % solubility, and steroidal content. The tablets were also evaluated for performance tests such as weight variation, disintegration, and hardness. Results: It was observed that triphalaguggulkalpa tablets prepared by the direct compression method complied with the hardness and disintegration tests, whereas tablets prepared by Ayurvedic text methods failed. Conclusion: Direct compression is the best method of preparing triphalaguggulkalpa tablets. PMID:21731383
Solidification/stabilization of spent abrasives and use as nonstructural concrete
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabrand, D.J.; Loehr, R.C.
1993-01-01
Tons of spent abrasives result each year from the removal of old paint from bridges. Because the spent abrasives contain metals from the paint, some spent abrasives may be considered hazardous by the Toxicity Characteristic (TC) criteria. Incorporation of the spent blasting abrasives in nonstructural concrete (rip-rap, dolphins) offers an opportunity to recycle the spent abrasives while immobilizing potentially leachable metals. This study focused on the Portland cement solidification/stabilization (S/S) of spent blasting abrasives taken from a bridge located in Southeast Texas. The study examined (a) the cadmium, chromium, and lead concentrations in extracts obtained by using the Toxicity Characteristic Leaching Procedure (TCLP) and (b) the compressive strengths of Portland cement mixes that contained different amounts of the spent abrasives. Performance was measured by meeting the TC criteria as well as the requirements for compressive strength. Study results indicated that considerable quantities of these spent abrasives can be solidified/stabilized while reducing the leachability of cadmium, chromium, and lead and producing compressive strengths over 6,895 kN/m² (1,000 psi).
Pieczywek, Piotr M; Zdunek, Artur
2017-10-18
A hybrid model based on a mass-spring system methodology coupled with the discrete element method (DEM) was implemented to simulate the deformation of cellular structures in 3D. Models of individual cells were constructed using the particles which cover the surfaces of cell walls and are interconnected in a triangle mesh network by viscoelastic springs. The spatial arrangement of the cells required to construct a virtual tissue was obtained using Poisson-disc sampling and Voronoi tessellation in 3D space. Three structural features were included in the model: viscoelastic material of cell walls, linearly elastic interior of the cells (simulating compressible liquid) and a gas phase in the intercellular spaces. The response of the models to an external load was demonstrated during quasi-static compression simulations. The sensitivity of the model was investigated at fixed compression parameters with variable tissue porosity, cell size and cell wall properties, such as thickness and Young's modulus, and a stiffness of the cell interior that simulated turgor pressure. The extent of the agreement between the simulation results and other models published is discussed. The model demonstrated the significant influence of tissue structure on micromechanical properties and allowed for the interpretation of the compression test results with respect to changes occurring in the structure of the virtual tissue. During compression virtual structures composed of smaller cells produced higher reaction forces and therefore they were stiffer than structures with large cells. The increase in the number of intercellular spaces (porosity) resulted in a decrease in reaction forces. The numerical model was capable of simulating the quasi-static compression experiment and reproducing the strain stiffening observed in experiment. 
Stress accumulation at the edges of the cell walls where three cells meet suggests that cell-to-cell debonding and crack propagation through the contact edge of neighboring cells is one of the most prevalent ways for tissue to rupture.
Phase Stability of Epsilon and Gamma HNIW (CL-20) at High-Pressure and Temperature
NASA Astrophysics Data System (ADS)
Gump, Jared
2007-06-01
Hexanitrohexaazaisowurtzitane (CL-20) is one of the few ingredients developed since World War II to be considered for transition to military use. Five polymorphs have been identified for CL-20 by FTIR measurements (α, β, γ, ɛ, and ζ). As CL-20 is transitioned into munitions it will become necessary to predict its response under conditions of detonation, for performance evaluation. Such predictive modeling requires a phase diagram and basic thermodynamic properties of the various phases at high pressure and temperature. Theoretical calculations have been performed for a variety of explosive ingredients including CL-20, but it was noted that no experimental measurements existed for comparison with the theoretical bulk modulus calculated for CL-20. Therefore, the phase stabilities of epsilon and gamma CL-20 at static high pressure and temperature were investigated using synchrotron angle-dispersive x-ray diffraction experiments. The samples were compressed and heated using diamond anvil cells (DAC). Pressures and temperatures achieved were around 5 GPa and 175 °C, respectively. No phase change (from the starting epsilon phase) was observed under hydrostatic compression up to 6.3 GPa at ambient temperature. Under ambient pressure the epsilon phase was determined to be stable to a temperature of 120 °C. When heating above 125 °C the gamma phase appeared, and it remained stable until thermal decomposition occurred above 150 °C. The gamma phase exhibits a phase change upon compression at both ambient temperature and 140 °C. Pressure-volume data for the epsilon and gamma phases at ambient temperature and for the epsilon phase at 75 °C were fit to the Birch-Murnaghan formalism to obtain isothermal equations of state.
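The Birch-Murnaghan fit mentioned at the end rests on the standard third-order P(V) form. A minimal numerical sketch follows; the parameter values (V0, K0, K0') are illustrative placeholders, not the paper's fitted CL-20 data.

```python
import numpy as np

def bm3_pressure(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan pressure P(V); P is in the units of K0.
    At V = V0 the pressure is zero by construction."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (eta**7 - eta**5) * (
        1.0 + 0.75 * (K0p - 4.0) * (eta**2 - 1.0))

def bulk_modulus(V, V0, K0, K0p, h=1e-6):
    """K(V) = -V dP/dV via central differences; evaluated at V = V0 this
    recovers K0, which is the consistency check used when fitting."""
    dP = (bm3_pressure(V + h, V0, K0, K0p)
          - bm3_pressure(V - h, V0, K0, K0p)) / (2 * h)
    return -V * dP
```

Fitting an isothermal equation of state then amounts to least-squares adjustment of (V0, K0, K0') so that bm3_pressure matches the measured P-V points.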
Computationally efficient simulation of unsteady aerodynamics using POD on the fly
NASA Astrophysics Data System (ADS)
Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando
2016-12-01
Modern industrial aircraft design requires a large amount of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large amount of simulation is necessary, the CFD cost is just unaffordable in an industrial production environment and must be significantly reduced. Thus, a more inexpensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines ‘on the fly’ a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited amount of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented considering the unsteady, compressible, two-dimensional flow around an oscillating airfoil using a CFD solver on a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
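The mode-extraction step of the method above (POD from snapshots) reduces to an SVD of the snapshot matrix. A minimal sketch, assuming a generic snapshot matrix rather than actual CFD output; the adaptive "on the fly" machinery (Galerkin projection, mode libraries) is not shown:

```python
import numpy as np

def pod_modes(snapshots, r):
    """Leading r POD modes and their energy fractions from a snapshot
    matrix of shape (n_dof, n_snapshots), columns = flow snapshots.
    Snapshots are mean-centered before the SVD."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return U[:, :r], energy[:r]
```

The retained energy fraction is the usual truncation criterion: keep the smallest r whose cumulative energy exceeds, say, 99.9%, then Galerkin-project the governing equations onto those r modes.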
Shirley, Robin; Black, Leon
2011-10-30
This paper examines the potential treatment by solidification/stabilisation (S/S) of air pollution control (APC) residues using only waste materials otherwise bound for disposal, namely a pulverised fuel ash (PFA) from a co-fired power station and a waste caustic solution. The use of waste materials to stabilise hazardous wastes in order to meet waste acceptance criteria (WAC) would offer an economical and efficient method for reducing the environmental impact of the hazardous waste. The potential is examined against leach limits for chlorides, sulphates and total dissolved solids, and compressive strength performance described in the WAC for stable non-reactive (SNR) hazardous waste landfill cells in England and Wales. The work demonstrates some potential for the treatment, including suitable compressive strengths to meet regulatory limits. Monolithic leach results showed good encapsulation compared to previous work using a more traditional cement binder. However, consistent with previous work, SNR WAC for chlorides was not met, suggesting the need for a washing stage. The potential problems of using a non-EN450 PFA for S/S applications were also highlighted, as well as experimental results which demonstrate the effect of ionic interactions on the mobility of phases during regulatory leach testing. Copyright © 2011 Elsevier B.V. All rights reserved.
Downstream processing of hyperforin from Hypericum perforatum root cultures.
Haas, Paul; Gaid, Mariam; Zarinwall, Ajmal; Beerhues, Ludger; Scholl, Stephan
2018-05-01
Hyperforin is a major metabolite of the medicinal plant Hypericum perforatum (St. John's Wort) and has recently been found in hormone-induced root cultures. The objective of this study is to identify a downstream process for the production of a hyperforin-rich extract with maximum extraction efficiency and minimal decomposition. The maximum extraction time was found to be 60 min. Two equipment concepts for the extraction and solvent evaporation were compared, employing two different solvents. While the rotary mixer showed better extraction efficiency than a stirred vessel, the latter set-up was able to handle larger volumes but did not meet all process requirements. For the evaporation, prompt removal of the extraction agent using nitrogen stripping led to minor decomposition. In a 5 L stirred vessel, the highest specific extraction of hyperforin was 4.3 mg hyperforin/g dry weight biomaterial. Parameters for the equipment design for extraction and solvent evaporation were determined based on the experimental data. Copyright © 2017 Elsevier B.V. All rights reserved.
Interactions of double patterning technology with wafer processing, OPC and design flows
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Cork, Chris; Miloslavsky, Alex; Luk-Pat, Gerry; Barnes, Levi; Hapli, John; Lewellen, John; Rollins, Greg; Wiaux, Vincent; Verhaegen, Staf
2008-03-01
Double patterning technology (DPT) is one of the main options for printing logic devices with half-pitch less than 45 nm, and flash and DRAM memory devices with half-pitch less than 40 nm. DPT methods decompose the original design intent into two individual masking layers which are each patterned using single exposures and existing 193 nm lithography tools. The results of the individual patterning layers combine to re-create the design intent pattern on the wafer. In this paper we study interactions of DPT with lithography, mask synthesis and physical design flows. Double exposure and etch patterning steps create complexity for both process and design flows. DPT decomposition is a critical software step which will be performed in physical design and also in mask synthesis. Decomposition includes cutting (splitting) of original design intent polygons into multiple polygons where required, and coloring of the resulting polygons. We evaluate the ability to meet key physical design goals such as: reduce circuit area; minimize rework; ensure DPT compliance; guarantee patterning robustness on individual layer targets; ensure symmetric wafer results; and create uniform wafer density for the individual patterning layers.
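The coloring step described above is commonly abstracted as two-coloring a conflict graph: features closer than the single-exposure spacing limit get an edge and must land on different masks, and an odd conflict cycle means the layout is DPT-non-compliant until a polygon is cut. A minimal BFS sketch of that abstraction (not the authors' decomposition software):

```python
from collections import deque

def two_color(n, conflicts):
    """Assign n features to two masks (colors 0/1) so that no conflict
    pair shares a mask. Returns the color list, or None when an odd
    conflict cycle makes the layout non-2-colorable (cutting required)."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle: DPT-non-compliant as drawn
    return color
```

A path of conflicts alternates cleanly between the two masks, while a triangle of mutual conflicts has no valid assignment, which is exactly the compliance check the paper's decomposition flow must perform before mask synthesis.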
Deng, Xinyang; Jiang, Wen; Zhang, Jiandong
2017-01-01
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, sometimes the payoffs received by players may be inexact or uncertain, which requires that the model of matrix games has the ability to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in the payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the possibly computation-intensive issue in the proposed decomposition method, a Monte Carlo simulation approach is presented as an alternative solution. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is illustratively applied to sensor selection and intrusion detection in sensor networks, which shows its effectiveness and application process. PMID:28430156
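The underlying crisp problem (the value of a zero-sum matrix game) can be approximated by simple iterative play. The sketch below uses fictitious play on a plain real-valued payoff matrix; it is a generic illustration of computing a game value, not the paper's belief-structure decomposition or its Monte Carlo scheme.

```python
import numpy as np

def fictitious_play(A, iters=50000):
    """Approximate the value of the zero-sum game with payoff matrix A
    (row player maximizes). Each player repeatedly best-responds to the
    opponent's empirical mixture; returns (upper, lower) security levels
    that bracket the game value."""
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iters):
        row_counts[np.argmax(A @ col_counts)] += 1   # row best response
        col_counts[np.argmin(row_counts @ A)] += 1   # column best response
    x = row_counts / row_counts.sum()
    y = col_counts / col_counts.sum()
    # for any mixtures x, y: (x @ A).min() <= value <= (A @ y).max()
    return (A @ y).max(), (x @ A).min()
```

For matching pennies the value is 0, and the two security levels squeeze toward it as iterations grow; an exact alternative is the standard linear-programming formulation of the game value.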
Sheng, Weitian; Zhou, Chenming; Liu, Yang; Bagci, Hakan; Michielssen, Eric
2018-01-01
A fast and memory efficient three-dimensional full-wave simulator for analyzing electromagnetic (EM) wave propagation in electrically large and realistic mine tunnels/galleries loaded with conductors is proposed. The simulator relies on Muller and combined field surface integral equations (SIEs) to account for scattering from mine walls and conductors, respectively. During the iterative solution of the system of SIEs, the simulator uses a fast multipole method-fast Fourier transform (FMM-FFT) scheme to reduce CPU and memory requirements. The memory requirement is further reduced by compressing large data structures via singular value and Tucker decompositions. The efficiency, accuracy, and real-world applicability of the simulator are demonstrated through characterization of EM wave propagation in electrically large mine tunnels/galleries loaded with conducting cables and mine carts. PMID:29726545
Experimental Study of Thermophysical Properties of Peat Fuel
NASA Astrophysics Data System (ADS)
Mikhailov, A. S.; Piralishvili, Sh. A.; Stepanov, E. G.; Spesivtseva, N. S.
2017-03-01
A study has been made of thermophysical properties of peat pellets of higher-than-average reactivity due to the pretreatment of the raw material. A synchronous differential analysis of the produced pellets was performed to determine the gaseous products of their decomposition by the mass-spectroscopy method. The parameters of the mass loss rate, the heat-release function, the activation energy, the rate constant of the combustion reaction, and the volatile yield were compared to the properties of pellets compressed by the traditional method on a matrix pelletizer. It has been determined that as a result of the peat pretreatment, the yield of volatile components increases and the activation energy of the combustion reaction decreases by 17 and 30% respectively compared with the raw fuel. This determines its prospects for burning in an atomized state at coal-fired thermal electric power plants.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method, and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
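Space-filling-curve data layouts order multidimensional cells so that spatial neighbors stay close in the 1-D output stream, which is what makes the stream compress and chunk well for I/O. A minimal sketch using the Z-order (Morton) curve, a common stand-in for this family (the paper does not specify which curve its compression uses):

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of nonnegative ints (x, y) into a Z-order
    (Morton) key; sorting cells by this key gives a space-filling-curve
    linearization of the 2-D grid."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x bits -> even positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # y bits -> odd positions
    return key
```

Because nearby (x, y) cells share high-order key bits, consecutive records in Morton order tend to hold similar values, improving delta and wavelet compression of the stream.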
Spray pyrolytic deposition of α-MoO3 film and its use in dye-sensitized solar cell
NASA Astrophysics Data System (ADS)
Tamboli, Parvin S.; Jagtap, Chaitali V.; Kadam, Vishal S.; Ingle, Ravi V.; Vhatkar, Rajiv S.; Mahajan, Smita S.; Pathan, Habib M.
2018-04-01
Thermal decomposition of ammonium para molybdate tetrahydrate precursor has been studied to determine degradation temperatures in air atmosphere. Current work explores the synthesis of α-MoO3 films by an economical spray pyrolysis technique using ammonium para molybdate tetrahydrate precursor in the presence of compressed air. A variety of characterization techniques such as X-ray diffraction, scanning electron microscopy, transmission electron microscopy, UV-visible spectroscopy, Fourier transform infrared, and Raman spectroscopy were carried out, and the studies have confirmed that orthorhombic phase formation of MoO3 takes place with spongy mesh-type structure. The study of electro-catalytic activity of α-MoO3 in titania-based dye-sensitized solar cell is also carried out by cyclic voltammetry, electrochemical impedance spectroscopy, and Tafel curves to evaluate its performance as a counter electrode.
Self-degradable Slag/Class F Fly Ash-Blend Cements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugama, T.; Warren, J.; Butcher, T.
2011-03-01
Self-degradable slag/Class F fly ash blend pozzolana cements were formulated, assuming that they might serve well as alternative temporary fracture sealers in Enhanced Geothermal System (EGS) wells operating at temperatures ≥ 200 °C. Two candidate formulas were screened based upon material criteria including an initial setting time ≥ 60 min at 85 °C, a compressive strength ≥ 2000 psi for a 200 °C autoclaved specimen, and the extent of self-degradation of the cement heated at ≥ 200 °C after it was contacted with water. The first screened dry mix formula consisted of 76.5 wt% slag, 19.0 wt% Class F fly ash, 3.8 wt% sodium silicate as alkali activator, and 0.7 wt% carboxymethyl cellulose (CMC) as the self-degradation-promoting additive; the second formula comprised 57.3 wt% slag, 38.2 wt% Class F fly ash, 3.8 wt% sodium silicate, and 0.7 wt% CMC. After mixing with water and autoclaving at 200 °C, the aluminum-substituted 1.1 nm tobermorite crystal phase was identified as the hydrothermal reaction product responsible for the development of a compressive strength of 5983 psi. The 200 °C-autoclaved cement made with the latter formula had the combined phases of tobermorite as its major reaction product and amorphous geopolymer as its minor one, providing a compressive strength of 5271 psi. Sodium hydroxide derived from the hydrolysis of the sodium silicate activator not only initiated the pozzolanic reaction of slag and fly ash, but also played an important role in generating in-situ exothermic heat that significantly contributed to promoting self-degradation of the cementitious sealers. The source of this exothermic heat was the interactions between sodium hydroxide and the gaseous CO2 and CH3COOH by-products generated from thermal decomposition of CMC at ≥ 200 °C in an aqueous medium. Thus, the magnitude of this self-degradation depended on the exothermic temperature evolved in the sealer; a higher temperature led to a more severe disintegration of the sealer.
The exothermic temperature was controlled by the extent of thermal decomposition of CMC: CMC decomposed at higher temperatures emitted more gaseous reactants, and such large emissions enhanced the evolution of in-situ exothermic heat. In contrast, the excessive formation of the geopolymer phase caused by incorporating more Class F fly ash into this cementitious system suppressed its ability to self-degrade; no self-degradation was observed. The geopolymer was formed by hydrothermal reactions between sodium hydroxide from the sodium silicate and mullite in the Class F fly ash. Thus, the major reason why geopolymer-based cementitious sealers did not degrade after the heated sealers came in contact with water was their lack of free sodium hydroxide.
Image Segmentation, Registration, Compression, and Matching
NASA Technical Reports Server (NTRS)
Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina
2011-01-01
A novel computational framework was developed for 2D affine-invariant matching that exploits a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and are also usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined.
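The core idea behind the AIPS, that the coefficients of an affine combination of feature points are unchanged by any affine transformation, can be illustrated with a minimal sketch. The points, the transform, and the function names below are made-up illustrations, not the authors' implementation:

```python
import numpy as np

def affine_coords(basis, p):
    """Coefficients (a, b, c) with a + b + c = 1 and p = a*B0 + b*B1 + c*B2."""
    A = np.vstack([np.column_stack(basis), np.ones(3)])  # 3x3 linear system
    return np.linalg.solve(A, np.append(p, 1.0))

basis = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])]
p = np.array([1.0, 1.0])
w = affine_coords(basis, p)          # parameter-space coordinates of p

# Apply an arbitrary affine map x -> M x + t to every point.
M = np.array([[2.0, 0.5], [-1.0, 1.5]])
t = np.array([3.0, -2.0])
basis_t = [M @ b + t for b in basis]
p_t = M @ p + t
w_t = affine_coords(basis_t, p_t)
# w equals w_t: the coefficients are invariant under the affine map,
# so they can serve as a matching signature across transformed images.
```

Because p = Σ wᵢBᵢ with Σ wᵢ = 1 implies Mp + t = Σ wᵢ(MBᵢ + t), the recovered coefficients match before and after the transformation.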
Also developed here are a novel, volumetric surface modeling and compression technique that provide both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at the rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-drive modeling, and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.
NASA Astrophysics Data System (ADS)
Hollingsworth, Kieren Grant
2015-11-01
MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution is chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
NASA Technical Reports Server (NTRS)
Srinivasan, R.; Daw, M. S.; Noebe, R. D.; Mills, M. J.
2003-01-01
Ni-44at.% Al and Ni-50at.% Al single crystals were tested in compression in the hard (001) orientation. The dislocation processes and deformation behavior were studied as a function of temperature, strain, and strain rate. A slip transition in NiAl occurs from alpha(111) slip to non-alpha(111) slip at intermediate temperatures. In Ni-50at.% Al single crystals, only alpha(010) dislocations are observed above the slip transition temperature. In contrast, alpha(101)(101) glide has been observed to control deformation beyond the slip transition temperature in Ni-44at.% Al. alpha(101) dislocations are observed primarily along both (111) directions in the glide plane. High-resolution transmission electron microscopy observations show that the core of the alpha(101) dislocations along these directions is decomposed into two alpha(010) dislocations, separated by a distance of approximately 2 nm. The temperature window of stability for these alpha(101) dislocations depends upon the strain rate. At a strain rate of 1.4 x 10(exp -4)/s, alpha(101) dislocations are observed between 800 and 1000 K. Complete decomposition of alpha(101) dislocations into alpha(010) dislocations occurs beyond 1000 K, leading to alpha(010) climb as the deformation mode at higher temperature. At lower strain rates, decomposition of alpha(101) dislocations has been observed to occur along the edge orientation at temperatures below 1000 K. Embedded-atom method calculations and experimental results indicate that alpha(101) dislocations have a large Peierls stress at low temperature. Based on the present microstructural observations and a survey of the literature with respect to vacancy content and diffusion in NiAl, a model is proposed for alpha(101)(101) glide in Ni-44at.% Al, and for the observed yield strength versus temperature behavior of Ni-Al alloys at intermediate and high temperatures.
The inner topological structure and defect control of magnetic skyrmions
NASA Astrophysics Data System (ADS)
Ren, Ji-Rong; Yu, Zhong-Xi
2017-10-01
We prove that the integrand of magnetic skyrmions can be expressed as the curvature tensor of the Wu-Yang potential. Taking the projection of the normalized magnetization vector on the 2-dim material surface, and according to Duan's decomposition theory of gauge potential, we reveal that every single skyrmion is characterized by the Hopf index and Brouwer degree at the zero point of this vector field. Our theory agrees with results that experimental physicists have achieved with many techniques. The inner topological structure expression of the skyrmion in terms of the Hopf index and Brouwer degree will be an indispensable mathematical basis for skyrmion logic gates.
NASA Astrophysics Data System (ADS)
Skiff, Fred; Davidson, Ronald C.
2013-05-01
Each year, the annual meeting of the APS Division of Plasma Physics (DPP) brings together a broad representation of the many active subfields of plasma physics and enjoys an audience that is equally diverse. The meeting was well attended and largely went as planned despite the interventions of Hurricane Sandy, which caused the city of Providence to shut down during the first day of the conference. The meeting began on Monday morning with a review of the physics of cosmic rays, 2012 being the 100th year since their discovery, which illustrated the central importance of plasma physics to astrophysical problems. Subsequent reviews covered the importance of tokamak plasma boundaries, progress towards ignition on the National Ignition Facility (NIF), and magnetized plasma turbulence. The Maxwell prize address, by Professor Liu Chen, covered the field of nonlinear Alfvén wave physics. Tutorial lectures were presented on the verification of gyrokinetics, new capabilities in laboratory astrophysics, magnetic flux compression, and tokamak plasma start-up.
Overview of the JPEG XS objective evaluation procedures
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit
2017-09-01
JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally known as the ISO/IEC SC29 WG1 group, that aims at standardizing a low-latency, lightweight, and visually lossless video compression scheme. This codec is intended to be used in applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as in live production (through SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. Firstly, this paper discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression-decompression cycles at various compression ratios. After this assessment phase, two proposals among the six responses to the CfP were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments are intended to evaluate its performance in more challenging scenarios, such as insertion of picture overlays and robustness to frame editing, to assess the impact of the different algorithmic choices, and to measure the XSM performance using the HDR VDP metric.
Leone, Thomas G; Anderson, James E; Davis, Richard S; Iqbal, Asim; Reese, Ronald A; Shelby, Michael H; Studzinski, William M
2015-09-15
Light-duty vehicles (LDVs) in the United States and elsewhere are required to meet increasingly challenging regulations on fuel economy and greenhouse gas (GHG) emissions as well as criteria pollutant emissions. New vehicle trends to improve efficiency include higher compression ratio, downsizing, turbocharging, downspeeding, and hybridization, each involving greater operation of spark-ignited (SI) engines under higher-load, knock-limited conditions. Higher octane ratings for regular-grade gasoline (with greater knock resistance) are an enabler for these technologies. This literature review discusses both fuel and engine factors affecting knock resistance and their contribution to higher engine efficiency and lower tailpipe CO2 emissions. Increasing compression ratios for future SI engines would be the primary response to a significant increase in fuel octane ratings. Existing LDVs would see more advanced spark timing and more efficient combustion phasing. Higher ethanol content is one available option for increasing the octane ratings of gasoline and would provide additional engine efficiency benefits for part and full load operation. An empirical calculation method is provided that allows estimation of expected vehicle efficiency, volumetric fuel economy, and CO2 emission benefits for future LDVs through higher compression ratios for different assumptions on fuel properties and engine types. Accurate "tank-to-wheel" estimates of this type are necessary for "well-to-wheel" analyses of increased gasoline octane ratings in the context of light duty vehicle transportation.
Improved high temperature resistant matrix resins
NASA Technical Reports Server (NTRS)
Chang, G. E.; Powell, S. H.; Jones, R. J.
1983-01-01
The objective was to develop organic matrix resins suitable for service at temperatures up to 644 K (700 F) and at air pressures up to 0.4 MPa (60 psia) for time durations of a minimum of 100 hours. Matrix resins capable of withstanding these extreme oxidative environmental conditions would lead to increased use of polymer matrix composites in aircraft engines and provide significant weight and cost savings. Six linear condensation, aromatic/heterocyclic polymers containing fluorinated and/or diphenyl linkages were synthesized. The thermo-oxidative stability of the resins was determined at 644 K and compressed air pressures up to 0.4 MPa. Two formulations, both containing perfluoroisopropylidene linkages in the polymer backbone structure, exhibited potential for 644 K service to meet the program objectives. Two other formulations could not be fabricated into compression molded zero defect specimens.
Development of stitching reinforcement for transport wing panels
NASA Technical Reports Server (NTRS)
Palmer, Raymond J.; Dow, Marvin B.; Smith, Donald L.
1991-01-01
The NASA Advanced Composites Technology (ACT) program has the objective of providing the technology required to obtain the full benefit of weight savings and performance improvements offered by composite primary aircraft structures. Achieving the objective is dependent upon developing composite materials and structures which are damage tolerant and economical to manufacture. Researchers are investigating stitching reinforcement combined with resin transfer molding to produce materials meeting the ACT program objective. Research is aimed at materials, processes, and structural concepts for application in both transport wings and fuselages, but the emphasis to date has been on wing panels. Empirical guidelines are being established for stitching reinforcement in structures designed for heavy loads. Results are presented from evaluation tests investigating stitching types, threads, and density (penetrations per square inch). Tension strength, compression strength, and compression after impact data are reported.
1989-05-01
from the computer and tells the operator whether the hole meets the established criteria. All the components have been incorporated into a mobile cart...DeHavilland: - Dash-8 applications " General Dynamics: - F16 tail light panel - ATF Supportable Hybrid Structures Program - ATA applications " USAF AFWAL...Compressive Fatigue of Flawed Graphite/Epoxy Composites," Doctoral Thesis, Massachusetts Institute of Technology, Department of Aeronautics and
NASA Astrophysics Data System (ADS)
Jablonská, Jana
2014-03-01
The presence of air in a liquid affects the dynamic behaviour of the system. When solving dynamics problems, we often encounter cavitation. Cavitation is an undesirable phenomenon, since it disrupts and destroys the surrounding material. Cavitation is accompanied by loud sound effects and reduces the efficiency of pumps and similar machinery. It is therefore desirable to model systems in which cavitation might occur. A typical example is the solution of water hammer.
Experiment evaluation of impact attenuator for a racing car under static load
NASA Astrophysics Data System (ADS)
Imanullah, Fahmi; Ubaidillah, Prasojo, Arfi Singgih; Wirawan, Adhe Aji
2018-02-01
In the automotive world, one of the factors that must be considered most carefully is safety. In a formula student car, one safety element is the impact attenuator, which absorbs energy when a collision occurs at the front of the vehicle. Under the Formula Society of Automotive Engineers (FSAE) student rules, the impact attenuator must absorb at least 7350 J while decelerating the vehicle at no more than 20 g on average, with a peak of 40 g. Formula student participants are challenged to work within these limits, so the impact attenuator must be designed and built with attention to both strength and minimal material use in order to minimize cost. In this work, an impact attenuator was fabricated and tested under static compression. The primary goal was to evaluate the actual impact-energy absorption capability of the attenuator. The prototype was made of aluminum alloy in a prismatic shape, and the inside was filled with rooftop plastic slices and hard polyurethane foam. The compression test was successfully carried out, and the load-versus-displacement data were used to calculate the energy absorption capability of the selected impact attenuator materials. The full-polyurethane impact attenuator absorbed 6380 J; the aluminum-polyurethane attenuator with slashed rooftop material sections absorbed 6600 J; and the aluminum-polyurethane attenuator with aluminum orange-peel partitions absorbed 8800 J. Against the formula student requirement of at least 7350 J, only the aluminum-polyurethane attenuator with aluminum orange-peel partitions, at 8800 J, meets the standard.
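The energy-absorption calculation described above, integrating the area under the measured load-displacement curve, can be sketched as follows. The sample points are hypothetical illustrations, not the paper's measurements:

```python
import numpy as np

# Hypothetical load-displacement samples from a static compression test
# (displacement in metres, load in newtons; not the paper's data).
displacement = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
load = np.array([0.0, 2.0e4, 3.5e4, 4.0e4, 4.2e4, 4.4e4])

# Absorbed energy = area under the load-displacement curve (trapezoid rule).
energy = float(np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(displacement)))
print(f"absorbed energy = {energy:.0f} J")        # → absorbed energy = 7950 J

# FSAE rule check: the attenuator must absorb at least 7350 J.
print("meets requirement" if energy >= 7350.0 else "fails requirement")
```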
Long-term surface EMG monitoring using K-means clustering and compressive sensing
NASA Astrophysics Data System (ADS)
Balouchestani, Mohammadreza; Krishnan, Sridhar
2015-05-01
In this work, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory in combination with the K-Singular Value Decomposition (K-SVD) method for clustering long-term recordings of surface electromyography (sEMG) signals. Long-term monitoring of sEMG signals aims at recording the electrical activity produced by muscles, which is a very useful procedure for treatment and diagnostic purposes as well as for the detection of various pathologies. The proposed algorithm is examined for three scenarios of sEMG signals: a healthy person (sEMG-Healthy), a patient with myopathy (sEMG-Myopathy), and a patient with neuropathy (sEMG-Neuropathy). The proposed algorithm can easily scan large datasets of long-term sEMG recordings. We test the proposed algorithm with Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC) dimensionality reduction methods. The output of the proposed algorithm is then fed to K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers in order to calculate the clustering performance. The proposed algorithm achieves a classification accuracy of 99.22%, reducing the Average Classification Error (ACE) by 17%, the Training Error (TE) by 9%, and the Root Mean Square Error (RMSE) by 18%. The proposed algorithm also reduces clustering energy consumption by 14% compared to the existing K-means clustering algorithm.
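A minimal sketch of the general idea, clustering signals in a compressed measurement domain rather than on the raw samples, is shown below. The synthetic two-class signals, the projection size, and the plain Lloyd's K-means are illustrative stand-ins for the paper's sEMG data and K-SVD-based pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 32                    # signal length, compressed dimension

# Two synthetic signal classes (stand-ins for healthy vs. pathological sEMG):
# slow vs. fast oscillations plus noise.
t = np.arange(n)
cls_a = [np.sin(2 * np.pi * 3 * t / n) + 0.1 * rng.standard_normal(n) for _ in range(20)]
cls_b = [np.sin(2 * np.pi * 24 * t / n) + 0.1 * rng.standard_normal(n) for _ in range(20)]
X = np.array(cls_a + cls_b)
labels_true = np.array([0] * 20 + [1] * 20)

# Compressed-sensing-style random projection; pairwise distances are
# approximately preserved, so clustering can run on much smaller vectors.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Y = X @ Phi.T

# Plain Lloyd's K-means (k = 2) on the compressed measurements.
centroids = Y[[0, len(Y) - 1]].copy()          # one seed per class (simple init)
for _ in range(50):
    d = np.linalg.norm(Y[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    centroids = np.array([Y[assign == j].mean(axis=0) for j in range(2)])
```

Working in the 32-dimensional measurement domain instead of the 256-sample signal domain is what yields the computational and energy savings the paper reports.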
Liu, Jun; Li, Qingshan; Zhuo, Yuguo; Hong, Wei; Lv, Wenfeng; Xing, Guangzhong
2014-06-01
A P(U-MMA-ANI) interpenetrating polymer network (IPN) damping and absorbing material was successfully synthesized, with PANI particles prepared by microemulsion polymerization serving as the absorbing agent and a foamed P(U-MMA) IPN structure serving as the substrate material. The P(U-MMA-ANI) IPN was characterized by compression mechanical performance testing, TG-DSC, and DSC. The results verify that the P(U-MMA) IPN foam damping material has good compressive strength and compaction cycle properties, and that the optimum PMMA content was 40% (mass), at which the SEM images show no phase separation between PMMA and PU on the macro level, while phase separation was observed on the micro level. The DTG curve indicates that, because of the formation of the P(U-MMA) IPN, the decomposition temperatures of PMMA and of the carbamate in PU increase, while that of the polyol segment in PU is almost unchanged. The P(U-MMA-ANI) IPN foam damping and absorbing material is obtained with PANI particles serving as the absorbing agent in the form of a filler, and PMMA in the form of micro-domains in the substrate material. When the PANI content reached 2.0% (mass), the dissipation factor of the composites increased, and the dissipation factor increased linearly with frequency.
Hydroelastic behaviour of a structure exposed to an underwater explosion
Colicchio, G.; Greco, M.; Brocchini, M.; Faltinsen, O. M.
2015-01-01
The hydroelastic interaction between an underwater explosion and an elastic plate is investigated numerically through a domain-decomposition strategy. The three-dimensional features of the problem require a large computational effort, which is reduced through a weak coupling between a one-dimensional radial blast solver, which resolves the blast evolution far from the boundaries, and a three-dimensional compressible flow solver used where the interactions between the compression wave and the boundaries take place and the flow becomes three-dimensional. The three-dimensional flow solver at the boundaries is directly coupled with a modal structural solver that models the response of the solid boundaries like elastic plates. This enables one to simulate the fluid–structure interaction as a strong coupling, in order to capture hydroelastic effects. The method has been applied to the experimental case of Hung et al. (2005 Int. J. Impact Eng. 31, 151–168 (doi:10.1016/j.ijimpeng.2003.10.039)) with explosion and structure sufficiently far from other boundaries and successfully validated in terms of the evolution of the acceleration induced on the plate. It was also used to investigate the interaction of an underwater explosion with the bottom of a close-by ship modelled as an orthotropic plate. In the application, the acoustic phase of the fluid–structure interaction is examined, highlighting the need of the fluid–structure coupling to capture correctly the possible inception of cavitation. PMID:25512585
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low Dynamic-Range (LDR) devices with at most two orders of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
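A toy base/detail decomposition along these lines is sketched below. The specific blur, gain law, and constants are illustrative assumptions, not the authors' multi-band algorithm:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur, a minimal stand-in for one pyramid level."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)

def compress_hdr(img, detail_gain=1.5, base_gamma=0.5):
    """Compress the base layer; boost details with an energy-dependent gain."""
    log_l = np.log1p(img)                      # log-like domain, as in HVS models
    base = gaussian_blur(log_l, sigma=4.0)
    detail = log_l - base
    # Non-linear gain: regions with high detail energy (strong edges) are
    # boosted less, limiting ringing ("halo") and noise amplification.
    energy = gaussian_blur(detail ** 2, sigma=4.0)
    gain = 1.0 + (detail_gain - 1.0) / (1.0 + 10.0 * energy)
    out = base_gamma * base + gain * detail    # compressed base + boosted detail
    return (out - out.min()) / (out.max() - out.min())  # map to [0, 1] display range

hdr = np.exp(4.0 * np.random.default_rng(2).random((64, 64)))  # ~2 decades of range
ldr = compress_hdr(hdr)
```

The key design choice mirrored here is that the base layer carries the scene's large dynamic range and is compressed, while detail is re-injected with a gain that saturates where detail energy is already high.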
Mazurek-Wadołkowska, Edyta; Winnicka, Katarzyna; Czyzewska, Urszula; Miltyk, Wojciech
2016-07-01
The high profitability and simplicity of direct compression encourage the pharmaceutical industry to create universal excipients that improve the technological process. Prosolv® SMCC (silicified microcrystalline cellulose) and Starch 1500® (pregelatinized starch) are examples of multifunctional excipients. The aim of the present study was to evaluate the stability of theophylline (API) in mixtures with excipients of various physico-chemical properties (Prosolv® SMCC 90, Prosolv® SMCC HD 90, Prosolv® SMCC 50, Starch 1500®, and magnesium stearate). The study presents results of thermal analysis of the mixtures with theophylline before and after 6 months of storage of the tablets at various temperatures and relative humidity conditions (25 ± 2°C/40 ± 5% RH, 40 ± 2°C/75 ± 5% RH). It was shown that a high concentration of Starch 1500® (49%) affects the stability of the theophylline tablets with Prosolv® SMCC. Prosolv® SMCC had no effect on API stability, as confirmed by differential scanning calorimetry (DSC). Changes in peak placements were observed just after the tabletting process, which might indicate that compression accelerated the incompatibilities between theophylline and Starch 1500®. TGA analysis showed a loss in tablet mass equal to the water content of the starch. A GC-MS study established no chemical decomposition of theophylline. We demonstrated that a high content of Starch 1500® (49%) in the tablet mass affects the stability of tablets containing theophylline and Prosolv® SMCC.
NASA Astrophysics Data System (ADS)
Woolf, D.; Lehmann, J.
2016-12-01
The exchange of carbon between soils and the atmosphere represents an important uncertainty in climate predictions. Current Earth system models apply soil organic matter (SOM) models based on independent carbon pools with 1st order decomposition dynamics. It has been widely argued over the last decade that such models do not accurately describe soil processes and mechanisms. For example, the long term persistence of soil organic carbon (SOC) is only adequately described by such models by the post hoc assumption of passive or inert carbon pools. Further, such 1st order models also fail to account for microbially-mediated dynamics such as priming interactions. These shortcomings may limit their applicability to long term predictions under conditions of global environmental change. In addition to incorporating recent conceptual advances in the mechanisms of SOM decomposition and protection, next-generation SOM models intended for use in Earth system models need to meet further quality criteria. Namely, that they should (a) accurately describe historical data from long term trials and the current global distribution of soil organic carbon, (b) be computationally efficient for large number of iterations involved in climate modeling, and (c) have sufficiently simple parameterization that they can be run on spatially-explicit data available at global scale under varying conditions of global change over long time scales. Here we show that linking fundamental ecological principles and microbial population dynamics to SOC turnover rates results in a dynamic model that meets all of these quality criteria. This approach simultaneously eliminates the need to postulate biogeochemically-implausible passive or inert pools, instead showing how SOM persistence emerges from ecological principles, while also reproducing observed priming interactions.
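For reference, the conventional independent-pool, first-order structure that the abstract critiques can be written in a few lines; the pool rates, inputs, and time step below are illustrative, not from any particular Earth system model:

```python
import numpy as np

# Conventional structure: independent pools with first-order decay,
# dC_i/dt = I_i - k_i * C_i  (no microbial feedbacks, no priming).
k = np.array([2.0, 0.1, 0.001])        # fast, slow, "passive" pools (1/yr)
inputs = np.array([1.0, 0.2, 0.01])    # carbon inputs (kg C / m^2 / yr)

def step(pools, dt=0.1):
    """One explicit-Euler step of the pool equations."""
    return pools + dt * (inputs - k * pools)

pools = np.zeros(3)
for _ in range(10000):                 # integrate 1000 years
    pools = step(pools)

# Each pool relaxes toward inputs / k; the "passive" pool (turnover time
# 1000 yr) is still far from equilibrium after 1000 years, which is how
# such models produce long-term SOC persistence by construction.
```

The abstract's point is that the slow turnover rate of the third pool is a post hoc assumption rather than an emergent property; microbially explicit models replace the fixed k values with terms coupled to decomposer population dynamics.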
NASA Astrophysics Data System (ADS)
Poese, Matthew E.; Smith, Robert W. M.; Garrett, Steven L.
2005-09-01
This talk will compare electrodynamically driven thermoacoustic refrigeration technology to some common implementations of low-lift vapor-compression technology. A rudimentary explanation of vapor-compression refrigeration will be presented along with some of the implementation problems faced by refrigeration engineers using compressor-based systems. These problems include oil management, compressor slugging, refrigerant leaks and the environmental impact of refrigerants. Recently, the method of evaluating this environmental impact has been codified to include the direct effects of the refrigerants on global warming as well as the so-called ``indirect'' warming impact of the carbon dioxide released during the generation (at the power plant) of the electrical power consumed by the refrigeration equipment. It is issues like these that generate commercial interest in an alternative refrigeration technology. However, the requirements of a candidate technology for adoption in a mature and risk-averse commercial refrigeration industry are as hard to divine as they are to meet. Also mentioned will be the state of other alternative refrigeration technologies like free-piston Stirling, thermoelectric and magnetocaloric as well as progress using vapor compression technology with alternative refrigerants like hydrocarbons and carbon dioxide.
NASA Technical Reports Server (NTRS)
Torrence, M. G.
1975-01-01
An investigation of a fixed-geometry, swept external-internal compression inlet was conducted at a Mach number of 6.0 and a test-section Reynolds number of 1.55 × 10^7 per meter. The test conditions were held constant for all runs, with stagnation pressure and temperature at 20 atmospheres and 500 K, respectively. Tests were made at angles of attack of -5 deg, 0 deg, 3 deg, and 5 deg. Measurements consisted of pitot- and static-pressure surveys in the inlet throat, wall static pressures, and surface temperatures. Boundary-layer bleed was provided on the centerbody and on the cowl internal surface. The inlet performance was consistently high over the range of angles of attack tested, with an overall average total pressure recovery of 78 percent and a corresponding adiabatic kinetic-energy efficiency of 99 percent. The inlet throat flow distribution was uniform, and the Mach number and pressure level were of the correct magnitude for efficient combustor design. The utilization of a swept compression field to meet the starting requirements of a fixed-geometry inlet produced neither flow instability nor a tendency to unstart.
Additive Manufacturing of Parts and Tooling in Robotic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, Lonnie J.; Hassen, Ahmed A.; Chesser, Phillip C.
ORNL worked with Transcend Robotics, LLC to explore additive manufacturing of the two-piece compression body for their ARTI mobile robot platform. Extrusion compression molding was identified as an effective means of manufacturing these parts. ORNL consulted on modifications to the housing design to accommodate the selected manufacturing process. Parts were printed using ORNL's FDM machines for testing and evaluation of the design as a precursor to molding the parts. The assembly and evaluation of the parts proved favorable, and minor design changes to improve assembly and performance were identified. The goal is to develop a lightweight and rugged two-part robotic enclosure for an unmanned ground vehicle (UGV) that will be used in search and rescue applications. The FDM parts fabricated by ORNL allowed Transcend Robotics to assemble a prototype robot and verify that the new parts would meet the performance requirements. ORNL fabricated enclosure parts out of ABS and Nylon 12 materials so that the design could be tested prior to fabricating tooling for compression molding of Nylon 6 with carbon fiber fill. The robot was performance tested, compared with the previous manufacturing techniques, and found to have superior performance.
NASA Astrophysics Data System (ADS)
Leahy, Lauren N.; Haslach, Henry W.
2018-02-01
During normal extracellular fluid (ECF) flow in the brain glymphatic system or during pathological flow induced by trauma resulting from impacts and blast waves, ECF-solid matter interactions result from sinusoidal shear waves in the brain and cranial arterial tissue, both heterogeneous biological tissues with high fluid content. The flow in the glymphatic system is known to be forced by pulsations of the cranial arteries at about 1 Hz. The experimental shear stress response to sinusoidal translational shear deformation at 1 Hz and 25% strain amplitude and either 0% or 33% compression is compared for rat cerebrum and bovine aortic tissue. Time-frequency analyses aim to correlate the shear stress signal frequency components over time with the behavior of brain tissue constituents to identify the physical source of the shear nonlinear viscoelastic response. Discrete fast Fourier transformation analysis and the novel application to the shear stress signal of harmonic wavelet decomposition both show significant 1 Hz and 3 Hz components. The 3 Hz component in brain tissue, whose magnitude is much larger than in aortic tissue, may result from interstitial fluid induced drag forces. The harmonic wavelet decomposition locates 3 Hz harmonics whose magnitudes decrease on subsequent cycles perhaps because of bond breaking that results in easier fluid movement. Both tissues exhibit transient shear stress softening similar to the Mullins effect in rubber. The form of a new mathematical model for the drag force produced by ECF-solid matter interactions captures the third harmonic seen experimentally.
Liu, Tingting; Zhang, Ling; Wang, Shutao; Cui, Yaoyao; Wang, Yutian; Liu, Lingfei; Yang, Zhe
2018-03-15
Qualitative and quantitative analysis of polycyclic aromatic hydrocarbons (PAHs) was carried out by three-dimensional fluorescence spectroscopy combined with Alternating Weighted Residue Constraint Quadrilinear Decomposition (AWRCQLD). The experimental subjects were acenaphthene (ANA) and naphthalene (NAP). First, to reduce the redundant information in the three-dimensional fluorescence spectral data, the wavelet transform was used to compress the data during preprocessing. Then, four-dimensional data were constructed from the excitation-emission fluorescence spectra of PAHs at different concentrations. The sample data were obtained from three solvents: methanol, ethanol, and ultra-pure water. The four-dimensional spectral data were analyzed by AWRCQLD, and the recovery rates of the PAHs obtained from the three solvents were compared. On one hand, the results showed that PAHs can be measured more accurately from the higher-order data, with a higher recovery rate. On the other hand, the results showed that AWRCQLD demonstrates the advantage of the four-dimensional algorithm better than second-order calibration and other third-order calibration algorithms. The recovery rate of ANA was 96.5%-103.3% and the root mean square error of prediction was 0.04 μg L⁻¹. The recovery rate of NAP was 96.7%-115.7% and the root mean square error of prediction was 0.06 μg L⁻¹.
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
Superconductivity in metastable phases of phosphorus-hydride compounds under high pressure
NASA Astrophysics Data System (ADS)
Flores Livas, Jose; Amsler, Maximilian; Sanna, Antonio; Heil, Christoph; Boeri, Lilia; Profeta, Gianni; Wolverton, Chris; Goedecker, Stefan; Gross, E. K. U.
Recently, compressed phosphine was reported to metallize at pressures above 45 GPa, reaching a superconducting transition temperature (Tc) of 100 K at 200 GPa. However, neither the exact composition nor the crystal structure of the superconducting phase have been conclusively determined. In this work the phase diagram of PHn (n = 1, 2, 3, 4, 5, 6) was extensively explored by means of ab initio crystal structure prediction methods. The results do not support the existence of thermodynamically stable PHn compounds, which exhibit a tendency for elemental decomposition at high pressure even when vibrational contributions to the free energies are taken into account. Although the lowest energy phases of PH1,2,3 display Tc's comparable to experiments, it remains questionable if the measured values of Tc can be fully attributed to a phase-pure compound of PHn. This work was done within the NCCR MARVEL project.
Explosively Generated Plasmas: Measurement and Models of Shock Generation and Material Interactions
NASA Astrophysics Data System (ADS)
Emery, Samuel; Elert, Mark; Giannuzzi, Paul; Le, Ryan; McCarthy, Daniel; Schweigert, Igor
2017-06-01
Explosively generated plasmas (EGPs) are created by the focusing of a shock produced from an explosive driver via a conical waveguide. In the waveguide, the gases from the explosive along with the trapped air are accelerated and compressed (via Mach stemming) to such an extent that plasma is produced. These EGPs have been measured in controlled experiments to achieve temperatures on the order of 1 eV and velocities as high as 25 km/s. We have conducted a combined modeling and measurement effort to increase understanding, for design purposes, of the shock generation of EGPs and of the interaction of EGPs with explosive materials. Such efforts have led to improved measures of pressure and temperature, spatial structure of the plasma, and the decomposition/deflagration behavior of RDX upon exposure to an EGP. Funding provided by the Environmental Security Technology Certification Program (ESTCP) Munitions Response program area.
Pressure-induced effects and phase relations in Mg2NiH4
NASA Astrophysics Data System (ADS)
Gavra, Z.; Kimmel, G.; Gefen, Y.; Mintz, Moshe H.
1985-05-01
The low-temperature (<210 °C) crystallographic structure, electrical conductivity, and thermal stability of Mg2NiH4 powders compacted under isostatic pressures of up to 10 kbar were studied. A comparison is made with the corresponding properties of the noncompressed material. It has been concluded that under stress-free hydriding conditions performed below 210 °C, a two-phase hydride mixture is formed. Each of the hydride particles consists of an inner core composed of a hydrogen-deficient monoclinic phase coated by a layer of a stoichiometric orthorhombic phase. The monoclinic phase has a metallic-like electrical conductivity while the orthorhombic phase is insulating. High compaction pressures cause the transformation of the orthorhombic structure into the monoclinic one, thereby resulting in a pressure-induced insulator-to-conductor transition. Reduced decomposition temperatures are obtained for the compressed hydrides. This reduction is attributed to kinetic factors rather than to a reduced thermodynamic stability.
Spatial and directional control of self-assembled wrinkle patterns by UV light absorption
NASA Astrophysics Data System (ADS)
Kortz, C.; Oesterschulze, E.
2017-12-01
Wrinkle formation on surfaces is a phenomenon that is observed in layered systems with a compressed elastic thin capping layer residing on a viscoelastic film. So far, the properties of the viscoelastic material could only be changed by replacing it with another material. Here, we propose to use a photosensitive material whose viscoelastic properties, Young's modulus, and glass transition temperature can easily be adjusted by the absorption of UV light. Employing UV lithography masks during the exposure, we additionally gain spatial and directional control of the self-assembled wrinkle pattern formation that relies on a spinodal decomposition process. Inspired by the results on surface wrinkling and its dependence on the intrinsic stress, we also derive a method to avoid wrinkling locally by tailoring the mechanical stress distribution in the layered system, choosing UV masks with convex patterns. This is of particular interest in technical applications where the buckling of surfaces is undesirable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, T; de Supinski, B R; Schulz, M
Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
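The abstract's wavelet transform runs in parallel across processes; the core compress-by-thresholding idea can be sketched serially with a minimal numpy-only Haar transform (the `keep` fraction and trace length are illustrative assumptions, not values from the paper):

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_forward(x, levels):
    """Multi-level forward transform; len(x) must be divisible by 2**levels."""
    coeffs, s = [], np.asarray(x, float)
    for _ in range(levels):
        s, d = haar_1d(s)
        coeffs.append(d)
    coeffs.append(s)                          # final approximation last
    return coeffs

def compress(coeffs, keep=0.1):
    """Zero out all but the largest `keep` fraction of coefficients."""
    flat = np.concatenate(coeffs)
    k = max(1, int(keep * flat.size))
    thresh = np.sort(np.abs(flat))[-k]
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

def haar_inverse(coeffs):
    """Invert haar_forward exactly when no thresholding was applied."""
    s = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        x = np.empty(2 * s.size)
        x[0::2] = (s + d) / np.sqrt(2.0)
        x[1::2] = (s - d) / np.sqrt(2.0)
        s = x
    return s
```

Because the transform is orthonormal, the reconstruction error equals the energy of the discarded coefficients, which is small when per-process load traces are smooth.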
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
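The combination of SVD filtering with Tikhonov regularization can be illustrated with a minimal single-parameter, unconstrained sketch (I2DUPEN's actual formulation uses multi-parameter, point-wise regularization with lower-bound constraints; the function name and rank/parameter choices here are assumptions for illustration):

```python
import numpy as np

def svd_filtered_tikhonov(K, y, lam, rank):
    """Solve min ||K f - y||^2 + lam ||f||^2 after projecting the data
    onto the top-`rank` singular subspace of K (SVD filtering)."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]   # truncate: discard noise subspace
    beta = U.T @ y                                # filtered (projected) data
    return Vt.T @ (s * beta / (s**2 + lam))       # Tikhonov filter factors
```

The filter factors s/(s² + λ) damp the contribution of small singular values, which is what stabilizes the ill-posed Fredholm inversion.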
Biomass gasification for liquid fuel production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Najser, Jan (jan.najser@vsb.cz); Peer, Václav (vaclav.peer@vsb.cz); Vantuch, Martin
2014-08-06
In our old fixed-bed autothermal gasifier we tested wood chips and wood pellets. We performed experiments for a Czech company producing agro pellets, i.e. pellets made from agricultural waste and fast-renewable natural resources. We tested pellets from wheat straw, rice straw, and hay. These materials are very promising because they do not compete with food production, are available in sufficient quantity, and are formed at the place of their processing. The new installation consists of an allothermal fixed-bed biomass gasifier with gas conditioning, the produced syngas being used for Fischer-Tropsch synthesis. Steam will be used as the gasifying agent. Gas purification will have two parts: separation of dust particles using a hot filter, and a dolomite reactor for decomposition of tars. In subsequent steps, the gas will be cooled, compressed, and stripped of sulphur and chlorine compounds and carbon dioxide. This syngas will be used for liquid fuel synthesis.
Comparative performance evaluation of transform coding in image pre-processing
NASA Astrophysics Data System (ADS)
Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha
2017-07-01
We are in the midst of a communication transformation which drives the development, as well as the dissemination, of pioneering communication systems with ever-increasing fidelity and resolution. Considerable research effort has been invested in image processing techniques, driven by a growing demand for faster and easier encoding, storage, and transmission of visual information. In this paper, the researchers examine techniques which can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare the performance of several contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
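Of the compared frameworks, the SVD-based pre-processing step is the easiest to sketch: a rank-r truncated SVD stores r(m + n + 1) numbers instead of m·n for an m×n image (a generic numpy sketch, not the paper's specific pipeline; the rank choice is illustrative):

```python
import numpy as np

def svd_compress(img, rank):
    """Best rank-`rank` approximation (in the least-squares sense)
    of a 2-D grayscale image via truncated SVD."""
    U, s, Vt = np.linalg.svd(np.asarray(img, float), full_matrices=False)
    # Scale the kept columns of U by the kept singular values, then project.
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

By the Eckart-Young theorem this truncation is optimal among all rank-r approximations in the Frobenius norm, which is why SVD is a natural baseline for transform-coding comparisons.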
The BCC/B2 morphologies in AlxNiCoFeCr high-entropy alloys
Ma, Yue; Jiang, Beibei; Li, Chunling; ...
2017-02-15
Here, the present work primarily investigates the morphological evolution of the body-centered-cubic (BCC)/B2 phases in AlxNiCoFeCr high-entropy alloys (HEAs) with increasing Al content. It is found that the BCC/B2 coherent morphology is closely related to the lattice misfit between these two phases, which is sensitive to Al. There are two types of microscopic BCC/B2 morphologies in this HEA series: one is the weave-like morphology induced by spinodal decomposition, and the other is a microstructure of spherical disordered BCC precipitates on the ordered B2 matrix that appears in HEAs with a much higher Al content. The mechanical properties, including the compressive yield strength and microhardness of the AlxNiCoFeCr HEAs, are also discussed in light of the concept of the valence electron concentration (VEC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alevli, Mustafa, E-mail: mustafaalevli@marmara.edu.tr; Gungor, Neşe; Haider, Ali
2016-01-15
Gallium nitride films were grown by hollow cathode plasma-assisted atomic layer deposition using triethylgallium and N2/H2 plasma. An optimized recipe for the GaN film was developed, and the effect of substrate temperature was studied in both the self-limiting growth window and the thermal decomposition-limited growth region. With increased substrate temperature, film crystallinity improved, and the optical band edge decreased from 3.60 to 3.52 eV. The refractive index and the reflectivity in the Reststrahlen band increased with the substrate temperature. Compressive strain is observed for both samples, and the surface roughness is observed to increase with the substrate temperature. Despite these temperature-dependent material properties, the chemical composition, E1(TO) phonon position, and crystalline phases present in the GaN film were relatively independent of growth temperature.
Cord, Maximilien; Sirjean, Baptiste; Fournet, René; Tomlin, Alison; Ruiz-Lopez, Manuel; Battin-Leclerc, Frédérique
2012-06-21
This paper revisits the primary reactions involved in the oxidation of n-butane from low to intermediate temperatures (550-800 K), including the negative temperature coefficient (NTC) zone. A model that was automatically generated is used as a starting point, and a large number of thermochemical and kinetic data are then re-estimated. The kinetic data of the isomerization of alkylperoxy radicals giving •QOOH radicals and the subsequent decomposition to give cyclic ethers have been calculated at the CBS-QB3 level of theory. The newly obtained model allows a satisfactory prediction of experimental data recently obtained in a jet-stirred reactor and in rapid compression machines. A considerable improvement of the prediction of the selectivity of cyclic ethers is especially obtained compared to previous models. Linear and global sensitivity analyses have been performed to better understand which reactions are of influence in the NTC zone.
NASA Technical Reports Server (NTRS)
Periaux, J.
1979-01-01
The numerical simulation of transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed-point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of substituting for the nonlinear equation a problem of minimization in an H⁻¹-type Sobolev space.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
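The RCM bandwidth-compression step can be sketched with a toy, numpy-only breadth-first ordering (production codes would typically call a library routine such as SciPy's `scipy.sparse.csgraph.reverse_cuthill_mckee`; this dense-matrix sketch is illustrative only):

```python
import numpy as np

def reverse_cuthill_mckee(adj):
    """Toy RCM ordering for a symmetric boolean adjacency matrix:
    BFS from a minimum-degree vertex, neighbors visited in order of
    increasing degree, final ordering reversed."""
    n = adj.shape[0]
    degree = adj.sum(axis=1)
    visited = np.zeros(n, dtype=bool)
    order = []
    while len(order) < n:
        start = min(np.flatnonzero(~visited), key=lambda i: degree[i])
        visited[start] = True
        queue = [start]
        while queue:
            v = queue.pop(0)
            order.append(v)
            nbrs = sorted((int(u) for u in np.flatnonzero(adj[v]) if not visited[u]),
                          key=lambda u: degree[u])
            for u in nbrs:
                visited[u] = True
            queue.extend(nbrs)
    return np.array(order[::-1])

def bandwidth(A):
    """Maximum |i - j| over nonzero entries."""
    i, j = np.nonzero(A)
    return int(np.abs(i - j).max())
```

A narrower bandwidth reduces fill-in in the one-time LU factorization, which is exactly why RCM pays off when the coefficient matrix is factored once and reused every time step.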
Villéger, Sébastien; Mason, Norman W H; Mouillot, David
2008-08-01
Functional diversity is increasingly identified as an important driver of ecosystem functioning. Various indices have been proposed to measure the functional diversity of a community, but there is still no consensus on which are most suitable. Indeed, none of the existing indices meets all the criteria required for general use. The main criteria are that they must be designed to deal with several traits, take into account abundances, and measure all the facets of functional diversity. Here we propose three indices to quantify each facet of functional diversity for a community with species distributed in a multidimensional functional space: functional richness (volume of the functional space occupied by the community), functional evenness (regularity of the distribution of abundance in this volume), and functional divergence (divergence in the distribution of abundance in this volume). Functional richness is estimated using the existing convex hull volume index. The new functional evenness index is based on the minimum spanning tree which links all the species in the multidimensional functional space. Then this new index quantifies the regularity with which species abundances are distributed along the spanning tree. Functional divergence is measured using a novel index which quantifies how species diverge in their distances (weighted by their abundance) from the center of gravity in the functional space. We show that none of the indices meets all the criteria required for a functional diversity index, but instead we show that the set of three complementary indices meets these criteria. Through simulations of artificial data sets, we demonstrate that functional divergence and functional evenness are independent of species richness and that the three functional diversity indices are independent of each other. 
Overall, our study suggests that decomposition of functional diversity into its three primary components provides a meaningful framework for its quantification and for the classification of existing functional diversity indices. This decomposition has the potential to shed light on the role of biodiversity on ecosystem functioning and on the influence of biotic and abiotic filters on the structure of species communities. Finally, we propose a general framework for applying these three functional diversity indices.
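The functional divergence component described above can be sketched as follows. This is a simplified numpy version: the published index measures divergence from the center of gravity of the convex-hull vertices, whereas for brevity this sketch uses the centroid of all species (an assumption, so values will differ from the published FDiv on non-hull data):

```python
import numpy as np

def functional_divergence(traits, abundances):
    """Simplified functional-divergence sketch: abundance-weighted
    divergence of species distances from the trait-space centroid,
    normalized to lie between 0 and 1."""
    T = np.asarray(traits, float)            # species x traits matrix
    w = np.asarray(abundances, float)
    w = w / w.sum()                          # relative abundances
    g = T.mean(axis=0)                       # center of gravity (all species)
    d = np.linalg.norm(T - g, axis=1)        # distance of each species to g
    dbar = d.mean()
    delta = np.sum(w * (d - dbar))           # abundance-weighted deviation
    delta_abs = np.sum(w * np.abs(d - dbar)) # absolute deviation
    return (delta + dbar) / (delta_abs + dbar)
```

When all species sit at the same distance from the centroid the index equals 1 regardless of abundances, matching the intuition that divergence then cannot be increased by shifting abundance outward.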
Decomposition of Composite Electric Field in a Three-Phase D-Dot Voltage Transducer Measuring System
Hu, Xueqi; Wang, Jingang; Wei, Gang; Deng, Xudong
2016-01-01
In line with the wider application of non-contact voltage transducers in the engineering field, transducers are required to have better performance for different measuring environments. In the present study, the D-dot voltage transducer is further improved based on previous research in order to meet the requirements for long-distance measurement of electric transmission lines. When measuring three-phase electric transmission lines, problems such as synchronous data collection and the composite electric field need to be resolved. A decomposition method is proposed with respect to the superimposed electric field generated between neighboring phases. The charge simulation method is utilized to deduce the decomposition equation of the composite electric field, and the validity of the proposed method is verified by simulation software. With the deduced equation as the algorithm foundation, this paper improves the hardware circuits, establishes a measuring system, and constructs an experimental platform for examination. Under experimental conditions, a 10 kV electric transmission line was tested for steady-state errors, and the measuring results of the transducer and the high-voltage detection head were compared. Ansoft Maxwell Simulation Software was adopted to obtain the electric field intensity in different positions under transmission lines; its values and the measuring values of the transducer were also compared. Experimental results show that the three-phase transducer is characterized by relatively good synchronization for data measurement, measuring results with high precision, and an error ratio within a prescribed limit. Therefore, the proposed three-phase transducer can be broadly applied and popularized in the engineering field. PMID:27754340
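The idea behind the charge simulation method can be sketched with a toy model: represent each phase conductor as an infinite line charge, so the measured composite field is a linear superposition, and recover the per-phase contributions by solving a small least-squares system. All geometry, sensor positions, and charge values below are made-up illustrative numbers, not the paper's configuration:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def geometry_matrix(sensors, lines):
    """G[i, k]: vertical E-field at sensor i per unit line charge on
    conductor k (2-D infinite line charge approximation)."""
    G = np.empty((len(sensors), len(lines)))
    for i, (xs, ys) in enumerate(sensors):
        for k, (xl, yl) in enumerate(lines):
            dx, dy = xs - xl, ys - yl
            r2 = dx * dx + dy * dy
            G[i, k] = dy / (2.0 * np.pi * EPS0 * r2)  # E_r * (dy / r)
    return G

def decompose(sensors, lines, e_measured):
    """Recover per-phase line charges from the composite vertical field."""
    G = geometry_matrix(sensors, lines)
    lam, *_ = np.linalg.lstsq(G, e_measured, rcond=None)
    return lam
```

With more sensors than phases the system is overdetermined, which makes the decomposition robust to measurement noise.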
NASA Astrophysics Data System (ADS)
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test data record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. 
The final step in this stochastic SRS decomposition process is the Monte Carlo (MC) simulation. The MC simulation identifies combinations of the PR and decays that can meet the SRS requirement at each band center frequency. Decomposed input time histories are produced by summing the converged damped sinusoids with the MC simulation of the phase lag distribution.
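The synthesis step at the end of the decomposition, summing per-band damped sinusoids with their phase lags, can be sketched as follows (the statistical selection of PR/decay combinations via Monte Carlo is omitted, and the band frequencies, amplitudes, damping ratios, and lags below are made-up illustrative values):

```python
import numpy as np

def decomposed_time_history(t, freqs, amps, decays, phases):
    """Candidate shock input as a sum of damped sinusoids, one per band
    center frequency f_k, with amplitude A_k, damping ratio zeta_k, and
    phase lag tau_k (seconds):
        x(t) = sum_k A_k * exp(-zeta_k * 2*pi*f_k * t) * sin(2*pi*f_k*(t - tau_k))
    """
    t = np.asarray(t, float)
    x = np.zeros_like(t)
    for f, A, z, tau in zip(freqs, amps, decays, phases):
        w = 2.0 * np.pi * f
        x += A * np.exp(-z * w * t) * np.sin(w * (t - tau))
    return x

# Illustrative use: three octave-spaced bands over a 50 ms window.
t = np.linspace(0.0, 0.05, 5000)
x = decomposed_time_history(t, freqs=[500.0, 1000.0, 2000.0],
                            amps=[50.0, 120.0, 80.0],      # in g
                            decays=[0.03, 0.05, 0.04],
                            phases=[0.0, 1e-4, 2e-4])
```

Each candidate history would then be passed through an SRS algorithm and compared against the requirement band by band.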
NASA Astrophysics Data System (ADS)
Chen, Gui-Qiang; Wang, Ya-Guang
2008-03-01
Compressible vortex sheets are fundamental waves, along with shocks and rarefaction waves, in entropy solutions to multidimensional hyperbolic systems of conservation laws. Understanding the behavior of compressible vortex sheets is an important step towards our full understanding of fluid motions and the behavior of entropy solutions. For the Euler equations in two-dimensional gas dynamics, the classical linearized stability analysis on compressible vortex sheets predicts stability when the Mach number M > √2 and instability when M < √2; and Artola and Majda's analysis reveals that nonlinear instability may occur if planar vortex sheets are perturbed by highly oscillatory waves even when M > √2. For the Euler equations in three dimensions, every compressible vortex sheet is violently unstable, and this instability is the analogue of the Kelvin-Helmholtz instability for incompressible fluids. The purpose of this paper is to understand whether compressible vortex sheets in three dimensions, which are unstable in the regime of pure gas dynamics, become stable under the magnetic effect in three-dimensional magnetohydrodynamics (MHD). One of the main features is that the stability problem is equivalent to a free-boundary problem whose free boundary is a characteristic surface, which is more delicate than noncharacteristic free-boundary problems. Another feature is that the linearized problem for current-vortex sheets in MHD does not meet the uniform Kreiss-Lopatinskii condition. These features cause additional analytical difficulties and especially prevent a direct use of the standard Picard iteration on the nonlinear problem. In this paper, we develop a nonlinear approach to deal with these difficulties in three-dimensional MHD. 
We first carefully formulate the linearized problem for the current-vortex sheets to show rigorously that the magnetic effect makes the problem weakly stable and establish energy estimates, especially high-order energy estimates, in terms of the nonhomogeneous terms and variable coefficients. Then we exploit these results to develop a suitable iteration scheme of the Nash-Moser-Hörmander type to deal with the loss of the order of derivative at the nonlinear level and establish its convergence, which leads to the existence and stability of compressible current-vortex sheets, locally in time, in three-dimensional MHD.
Interdisciplinary ICU Cardiac Arrest Debriefing Improves Survival Outcomes
Wolfe, Heather; Zebuhr, Carleen; Topjian, Alexis A.; Nishisaki, Akira; Niles, Dana E.; Meaney, Peter A.; Boyle, Lori; Giordano, Rita T.; Davis, Daniela; Priestley, Margaret; Apkon, Michael; Berg, Robert A.; Nadkarni, Vinay M.; Sutton, Robert M.
2014-01-01
Objective In-hospital cardiac arrest is an important public health problem. High-quality resuscitation improves survival but is difficult to achieve. Our objective is to evaluate the effectiveness of a novel, interdisciplinary, postevent quantitative debriefing program to improve survival outcomes after in-hospital pediatric chest compression events. Design, Setting, and Patients Single-center prospective interventional study of children who received chest compressions between December 2008 and June 2012 in the ICU. Interventions Structured, quantitative, audiovisual, interdisciplinary debriefing of chest compression events with front-line providers. Measurements and Main Results Primary outcome was survival to hospital discharge. Secondary outcomes included survival of event (return of spontaneous circulation for ≥ 20 min) and favorable neurologic outcome. Primary resuscitation quality outcome was a composite variable, termed “excellent cardiopulmonary resuscitation,” prospectively defined as a chest compression depth ≥ 38 mm, rate ≥ 100/min, ≤ 10% of chest compressions with leaning, and a chest compression fraction > 90% during a given 30-second epoch. Quantitative data were available only for patients who are 8 years old or older. There were 119 chest compression events (60 control and 59 interventional). The intervention was associated with a trend toward improved survival to hospital discharge on both univariate analysis (52% vs 33%, p = 0.054) and after controlling for confounders (adjusted odds ratio, 2.5; 95% CI, 0.91–6.8; p = 0.075), and it significantly increased survival with favorable neurologic outcome on both univariate (50% vs 29%, p = 0.036) and multivariable analyses (adjusted odds ratio, 2.75; 95% CI, 1.01–7.5; p = 0.047). Cardiopulmonary resuscitation epochs for patients who are 8 years old or older during the debriefing period were 5.6 times more likely to meet targets of excellent cardiopulmonary resuscitation (95% CI, 2.9–10.6; p < 0.01). 
Conclusion Implementation of an interdisciplinary, postevent quantitative debriefing program was significantly associated with improved cardiopulmonary resuscitation quality and survival with favorable neurologic outcome. (Crit Care Med 2014; XX:00–00) PMID:24717462
Workshop: The Technical Requirements for Image-Guided Therapy (Focus: Spinal Cord and Spinal Column)
2000-02-01
degenerative disease, spondylosis, ligamental ossification, fractures, tumors, and other causes. Compression is a painful condition that may require...series of 7000 patients who underwent lumbar disk surgery, Long indicates three reasons for failed surgery: 1. Failure of the patient to meet the...validated outcomes measures in the lumbar area, is used for a 70-year-old patient with osteoarthritis of the knees and low back pain as well as problems
Friendly Extensible Transfer Tool Beta Version
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William P.; Gutierrez, Kenneth M.; McRee, Susan R.
2016-04-15
Data transfer software is often designed to meet specific requirements or to apply to specific environments, and adding functionality frequently requires source code integration. An extensible data transfer framework is needed to incorporate new capabilities more easily, in modular fashion. Using the FrETT framework, functionality may be incorporated (in many cases without source code changes) to handle new platform capabilities: I/O methods (e.g., platform-specific data access), network transport methods, and data processing (e.g., data compression).
NASA Astrophysics Data System (ADS)
Li, W.; Shao, H.
2017-12-01
For geospatial cyberinfrastructure-enabled web services, the ability to rapidly transmit and share spatial data over the Internet plays a critical role in meeting the demands of real-time change detection, response and decision-making. Especially for vector datasets, which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, the big-data issues of vector datasets have hindered their wide adoption in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmission and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines the proper simplification level through the introduction of an appropriate distance tolerance (ADT) to meet various visualization requirements, and at the same time speeds up simplification; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic selection of a compression method to maximize service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed to implement the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web services providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
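The distance-tolerance-driven simplification in point 1 can be illustrated with the classic Douglas-Peucker algorithm: drop every vertex that lies within the tolerance of the chord between the endpoints, recursing on the farthest outlier. This is a generic sketch, not the authors' implementation; the tolerance value stands in for their ADT.

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(points, tol):
    """Douglas-Peucker line simplification with distance tolerance `tol`."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord between the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:                      # everything within tolerance: keep only the chord
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], tol)
    right = simplify(points[idx:], tol)
    return left[:-1] + right             # drop the duplicated split vertex
```

A coarse tolerance collapses near-collinear runs, while a fine one preserves every vertex, which is how one simplification routine can serve several zoom levels.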
Consider the DME alternative for diesel engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleisch, T.H.; Meurer, P.C.
1996-07-01
Engine tests demonstrate that dimethyl ether (DME, CH{sub 3}OCH{sub 3}) can provide an alternative approach toward efficient, ultra-clean and quiet compression ignition (CI) engines. From a combustion point of view, DME is an attractive alternative fuel for CI engines, primarily for commercial applications in urban areas, where ultra-low emissions will be required in the future. DME can resolve the classical diesel emission problem of smoke emissions, which are completely eliminated. With a properly developed DME injection and combustion system, NO{sub x} emissions can be reduced to 40% of Euro II or U.S. 1998 limits, and can meet the future ULEV standards of California. Simultaneously, the combustion noise is reduced by as much as 15 dB(A) below diesel levels. In addition, the classical diesel advantages such as high thermal efficiency, compression ignition, engine robustness, etc., are retained.
NASA Astrophysics Data System (ADS)
Moon, Dukjae; Hong, Deukjo; Kwon, Daesung; Hong, Seokhie
We assume that the domain extender is the Merkle-Damgård (MD) scheme and that the message is padded with a ‘1’ bit, the minimum number of ‘0’ bits, and a fixed-size length field, so that the length of the padded message is a multiple of the block length. Under this assumption, we analyze the security of the hash mode when the compression function follows the Davies-Meyer (DM) scheme and the underlying block cipher is one of the plain Feistel or Misty schemes, or one of the generalized Feistel or Misty schemes with a Substitution-Permutation (SP) round function. We base this work on Meet-in-the-Middle (MitM) preimage attack techniques, and develop several useful initial structures.
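The assumed padding rule and the Davies-Meyer mode can be sketched as follows. The byte-oriented granularity (0x80 standing in for the ‘1’ bit) and the toy cipher in the test are simplifying assumptions for illustration; a real instantiation would use one of the Feistel/Misty constructions analyzed.

```python
def md_pad(message: bytes, block_len: int = 64, length_field: int = 8) -> bytes:
    """MD-strengthening padding as assumed above: a '1' (here the byte 0x80),
    the minimum number of '0's, then a fixed-size length field, so that the
    padded message length is a multiple of the block length."""
    padded = message + b"\x80"
    while (len(padded) + length_field) % block_len:
        padded += b"\x00"
    return padded + (8 * len(message)).to_bytes(length_field, "big")

def davies_meyer(h_prev: int, msg_block: int, cipher) -> int:
    """Davies-Meyer compression: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1},
    feeding the message block in as the cipher key."""
    return cipher(key=msg_block, block=h_prev) ^ h_prev
```

The feed-forward XOR is what makes the cipher call hard to invert, and it is exactly the structure MitM preimage attacks try to split into two independently computable halves.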
In vitro degradation of a 3D porous Pennisetum purpureum/PLA biocomposite scaffold.
Revati, R; Majid, M S Abdul; Ridzuan, M J M; Basaruddin, K S; Rahman Y, M N; Cheng, E M; Gibson, A G
2017-10-01
The in vitro degradation and mechanical properties of a 3D porous Pennisetum purpureum (PP)/polylactic acid (PLA)-based scaffold were investigated. In this study, composite scaffolds with PP to PLA ratios of 0%, 10%, 20%, and 30% were immersed in a PBS solution at 37°C for 40 days. Compression tests were conducted to evaluate the compressive strength and modulus of the scaffolds, according to ASTM F451-95. The compressive strength of the scaffolds was found to increase from 1.94 to 9.32 MPa, while the compressive modulus increased from 1.73 to 5.25 MPa as the filler content increased from 0 wt% to 30 wt%. Moreover, field emission scanning electron microscopy (FESEM) and X-ray diffraction were employed to observe and analyse the microstructure and fibre-matrix interface. Interestingly, the degradation rate was slightly reduced for the PLA/PP 20 scaffold, which could be attributed to its improved mechanical properties and stronger fibre-matrix interface. Microstructure changes after degradation were observed using FESEM. The FESEM results indicated that a strong fibre-matrix interface was formed in the PLA/PP 20 scaffold, reflecting that the addition of P. purpureum to PLA decreases the degradation rate compared with pure PLA scaffolds. The results suggest that the P. purpureum/PLA scaffold degradation rate can be altered and controlled to meet requirements imposed by a given tissue engineering application. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hariyanto, Sucipto; Usman, Mohammad Nurdianfajar; Citrasari, Nita
2017-06-01
The aim of this research is to determine the best briquette, as an implementation of waste recycling, based on a scoring method over main component composition, compressive strength, calorific value, water content, volatile content, and ash content, as well as conformance with SNI 01-6235-2000. The main components used are rice husk, 2 mm and 6 mm PET plastic, and dried leaves. The composition variations are marked as K1, K2, K3, K4, and K5 with 2 mm PET plastic and K1, K2, K3, K4, and K5 with 6 mm PET plastic. The total weight of each briquette is 100 g, divided into 90% main components and 10% tapioca as binder. The compressive strength, calorific value, water content, volatile content, and ash content were tested according to ASTM D 5865-04, ASTM D 3173-03, ASTM D 3175-02, and ASTM D 3174-02. The test results were used to determine the best briquette by the scoring method, and the chosen briquette is K2 with 6 mm PET plastic. Its composition is 70% rice husk, 20% 6 mm PET plastic, and 10% dried leaves, with compressive strength, calorific value, water content, volatile content, and ash content of 51.55 kg/cm2, 5123 cal/g, 3.049%, 31.823%, and 12.869%, respectively. The values that meet the criteria of SNI 01-6235-2000 are the compressive strength, calorific value, water content, and ash content.
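A scoring method of the kind used to select the best briquette can be sketched as a rank-sum over the measured criteria. The criterion directions below (calorific value and compressive strength higher is better; water and ash content lower is better) follow the abstract, but the authors' exact weighting is not specified, so this is a hypothetical illustration.

```python
def rank_score(candidates, higher_better):
    """candidates: {name: {criterion: value}}; higher_better: {criterion: bool}.
    Each criterion contributes rank points (the best value earns the most);
    the candidate with the highest total wins."""
    names = list(candidates)
    totals = dict.fromkeys(names, 0)
    for crit, hb in higher_better.items():
        # sort so the best candidate for this criterion comes last
        ordered = sorted(names, key=lambda n: candidates[n][crit], reverse=not hb)
        for points, name in enumerate(ordered, start=1):
            totals[name] += points
    return max(totals, key=totals.get)
```

Rank-based scoring avoids having to normalize criteria with incompatible units (kg/cm2, cal/g, %) before combining them.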
38th JANNAF Combustion Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Fry, Ronald S. (Editor); Eggleston, Debra S. (Editor); Gannaway, Mary T. (Editor)
2002-01-01
This volume, the first of two volumes, is a collection of 55 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 38th Combustion Subcommittee (CS), 26th Airbreathing Propulsion Subcommittee (APS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 21st Modeling and Simulation Subcommittee meetings. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics cover five major technology areas: 1) Combustion - Propellant Combustion, Ingredient Kinetics, Metal Combustion, Decomposition Processes and Material Characterization, Rocket Motor Combustion, and Liquid & Hybrid Combustion; 2) Liquid Rocket Engines - Low Cost Hydrocarbon Liquid Rocket Engines, Liquid Propulsion Turbines, Liquid Propulsion Pumps, and Staged Combustion Injector Technology; 3) Modeling & Simulation - Development of Multi-Disciplinary RBCC Modeling, Gun Modeling, and Computational Modeling for Liquid Propellant Combustion; 4) Guns - Gun Propelling Charge Design, and ETC Gun Propulsion; and 5) Airbreathing - Scramjet and Ramjet S&T Program Overviews.
Flynn, G; Stokes, K; Ryan, K M
2018-05-31
Herein, we report the formation of silicon, germanium and more complex Si-SixGe1-x and Si-Ge axial 1D heterostructures, at low temperatures in solution. These nanorods/nanowires are grown using phenylated compounds of silicon and germanium as reagents, with precursor decomposition achieved at substantially reduced temperatures (200 °C for single-crystal nanostructures and 300 °C for heterostructures) through the addition of a reducing agent. This low-energy, wet-chemical route for producing these functional nanostructures in high yield is attractive for meeting the processing needs of next-generation photovoltaics, batteries and electronics.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eric Fluga
The US Department of Energy and Caterpillar entered a Cooperative Agreement to develop compression ignition engine technology suitable for the light truck/SUV market. Caterpillar, in collaboration with a suitable commercialization partner, developed a new Compression Ignition Direct Injection (CIDI) engine technology to dramatically improve the emissions and performance of light truck engines. The overall program objective was to demonstrate engine prototypes by 2004, with an order of magnitude emission reduction while meeting challenging fuel consumption goals. Program emphasis was placed on developing and incorporating cutting edge technologies that could remove the current impediments to commercialization of CIDI power sources in light truck applications. The major obstacle to commercialization is emissions regulations, with secondary concerns of driveability and NVH (noise, vibration and harshness). The target emissions levels were 0.05 g/mile NOx and 0.01 g/mile PM, to be compliant with the EPA Tier 2 fleet average requirement of 0.07 g/mile and the CARB LEV 2 requirement of 0.05 g/mile for NOx, both of which have a PM requirement of 0.01 g/mile. The program team developed a combustion process that fundamentally shifted the classic NOx vs. PM behavior of CIDI engines. The NOx vs. PM shift was accomplished with a form of Homogeneous Charge Compression Ignition (HCCI). The HCCI concept centers on appropriate mixing of air and fuel in the compression process and controlling the inception and rate of combustion through various means such as variable valve timing, inlet charge temperature and pressure control.
Caterpillar has adapted an existing Caterpillar design of a single injector that: (1) creates the appropriate fuel and air mixture for HCCI, (2) is capable of a more conventional injection to overcome the low power density problems of current HCCI implementations, and (3) provides a mixed mode where both HCCI and conventional combustion function in the same combustion cycle. Figure 1 illustrates the mixed mode injection system. Under the LTCD program Caterpillar developed a mixed mode injector for a multi-cylinder engine system. The mixed mode injection system represents a critical enabling technology for the implementation of HCCI. In addition, Caterpillar implemented variable valve system technology and air system technology on the multi-cylinder engine platform. The valve and air system technology were critical to system control. Caterpillar developed the combustion system to achieve a 93% reduction in NOx emissions. The resulting NOx emissions were 0.12 g/mile NOx. The demonstrated emissions level meets the stringent Tier 2 Bin 8 requirement without NOx aftertreatment. However, combustion development alone was not adequate to meet the program goal of 0.05 g/mile NOx. To meet the program goals, an additional 60% NOx reduction technology will be required. Caterpillar evaluated a number of NOx reduction technologies to quantify and understand the NOx reduction potential and system performance implications. The NOx adsorber was the most attractive NOx aftertreatment option based on fuel consumption and NOx reduction potential. In spite of the breakthrough technology development conducted under the LTCD program, there remain many significant challenges associated with the technology configuration. For HCCI, additional effort is needed to develop a robust control strategy, reduce the hydrocarbon emissions at light load conditions, and develop a more production-viable fuel system.
Furthermore, the NOx adsorber suffers from cost, packaging, and durability challenges that must be addressed.
EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS
NASA Technical Reports Server (NTRS)
Jayroe, R. R.
1994-01-01
Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. 
All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.
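Two of the evaluation criteria named above, the contingency matrix and average information transfer, can be sketched as follows. "Average information transfer" is read here as the mutual information between expected (ground-truth) and observed labels; this is an interpretation in Python, not a transcription of the package's FORTRAN routines.

```python
import math
from collections import Counter

def contingency_matrix(expected, observed):
    """Counts of (expected label, observed label) pairs."""
    return Counter(zip(expected, observed))

def avg_information_transfer(expected, observed):
    """Mutual information I(expected; observed) in bits, estimated from
    the label co-occurrence frequencies."""
    n = len(expected)
    joint = contingency_matrix(expected, observed)
    pe, po = Counter(expected), Counter(observed)
    return sum((c / n) * math.log2(c * n / (pe[e] * po[o]))
               for (e, o), c in joint.items())
```

A perfect classifier transfers the full entropy of the ground truth; a classifier independent of the ground truth transfers zero bits, regardless of its per-class accuracy.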
AFOSR Contractors Meeting in Propulsion Held in Atlantic City, New Jersey on 14 - 18 June 1993
1994-04-20
number is found to be much weaker, but not universal. Previous results in compressible shear layers are difficult to interpret as a consequence of...turbulence. Reference 9 provides new interpretation of measured spectra of reacting species in turbulence. REFERENCES 1. Bilger, R.W., Phys. Fluids A...A simple interpretation of Eq. (2) is that 0(n)(t) is drawn to its two neighbors at a rate proportional to its separation from them. Note that as
Improvement of a Continuous-Culture Apparatus for Long-Term Use
Taub, Frieda B.; Dollar, Alexander M.
1968-01-01
A glass and plastic apparatus was designed to meet requirements for continuous culture of cells. Some of the improvements incorporated into this apparatus include an all-glass growth vessel with a self-cleaning bottom, special compression fittings to connect tubing to glass tubing, a removable yield reservoir, and a nonwetting gas exhaust assembly. All portions of the system can be autoclaved for sterilization, and medium bottles and pump lines are replaced aseptically. Images Fig. 1 PMID:5645410
NASA Technical Reports Server (NTRS)
Belton, M. J. S.; Aksnes, K.; Davies, M. E.; Hartmann, W. K.; Millis, R. L.; Owen, T. C.; Reilly, T. H.; Sagan, C.; Suomi, V. E.; Collins, S. A., Jr.
1972-01-01
A variety of imaging systems proposed for use aboard the Outer Planet Grand Tour Explorer are discussed and evaluated in terms of optimal resolution capability and efficient time utilization. It is pointed out that the planetary and satellite alignments at the time of encounter dictate a high degree of adaptability and versatility in order to provide sufficient image enhancement over earth-based techniques. Data compression methods are also evaluated according to the same criteria.
Finite element methodology for integrated flow-thermal-structural analysis
NASA Technical Reports Server (NTRS)
Thornton, Earl A.; Ramakrishnan, R.; Vemaganti, G. R.
1988-01-01
The papers "An Adaptive Finite Element Procedure for Compressible Flows and Strong Viscous-Inviscid Interactions" and "An Adaptive Remeshing Method for Finite Element Thermal Analysis" were presented at the June 27-29, 1988, meeting of the AIAA Thermophysics, Plasma Dynamics and Lasers Conference, San Antonio, Texas. The papers describe research work supported under NASA/Langley Research Grant NsG-1321, and are submitted in fulfillment of the progress report requirement on the grant for the period ending February 29, 1988.
NASA Astrophysics Data System (ADS)
Tong, Fulin; Li, Xinliang; Duan, Yanhui; Yu, Changping
2017-12-01
Numerical investigations of a supersonic turbulent boundary layer over a longitudinally curved compression ramp are conducted using direct numerical simulation for a free stream Mach number M∞ = 2.9 and Reynolds number Reθ = 2300. The total turning angle is 24°, and the concave curvature radius is 15 times the thickness of the incoming turbulent boundary layer. Under the selected conditions, the shock foot is transformed into a fan of compression waves because of the weaker adverse pressure gradient. The time-averaged flow-field in the curved ramp is statistically attached, whereas the instantaneous flow-field is close to the intermittent transitory detachment state. Studies of coherent vortex structures show that large-scale vortex packets are enhanced significantly when the concave curvature is aligned in the spanwise direction. Consistent with findings of previous experiments, the effect of the concave curvature on the logarithmic region of the mean velocity profiles is found to be small. The intensity of the turbulent fluctuations is amplified across the curved ramp. Based on the analysis of the Reynolds stress anisotropy tensor, the evolutions of the turbulence state in the inner and outer layers of the boundary layer are considerably different. The curvature effect on the transport mechanism of the turbulent kinetic energy is studied using a balance analysis of the contributing terms in the transport equation. Furthermore, the Görtler instability in the curved ramp is quantitatively analyzed using a stability criterion. The instantaneous streamwise vorticity confirms the existence of Görtler-like structures. These structures are characterized by unsteady motion. In addition, the dynamic mode decomposition analysis of the instantaneous flow field at the spanwise/wall-normal plane reveals that four dynamically relevant modes, with a performance loss of 16%, provide an optimal low-order representation of the essential characteristics of the numerical data.
The spatial structures of the dominant low-frequency dynamic modes are found to be similar to those of the Görtler-like vortices.
S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation
2014-01-01
Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure, and because of the fast growth of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue; that is the aim of this work. Methods This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform to perform spectral decomposition and decorrelation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign shorter digital word lengths to code high-frequency wavelet-transformed coefficients. Four bit-allocation spectral shape methods were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root and rotated hyperbolic tangent. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented. In addition, comparisons with other encoders proposed in the scientific literature are shown. Conclusions The decreasing bit-allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure.
The performance comparisons of the proposed S-EMG data compression algorithm with the established techniques found in scientific literature have shown promising results. PMID:24571620
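One of the four spectral-shape models named in the abstract, the decreasing exponential, can be sketched as follows. The decay constant and bit bounds are illustrative placeholders, not the paper's fitted parameters, and the uniform quantizer is a generic stand-in.

```python
import math

def exp_bit_allocation(n_bands, b_max=16, b_min=2, decay=0.35):
    """Decreasing-exponential bit allocation: the lowest-frequency wavelet
    band gets b_max bits; higher-frequency bands get exponentially fewer,
    floored at b_min."""
    return [max(b_min, round(b_max * math.exp(-decay * k)))
            for k in range(n_bands)]

def quantize_bands(bands, bits):
    """Uniform quantization of each band's coefficients with its word length."""
    out = []
    for band, b in zip(bands, bits):
        peak = max((abs(c) for c in band), default=1.0) or 1.0
        step = 2.0 * peak / (1 << b)          # 2^b levels across [-peak, peak]
        out.append([round(c / step) for c in band])
    return out
```

Spending fewer bits on high-frequency bands matches the decreasing spectral envelope of S-EMG, which is the rationale the paper gives for the shape models.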
Greedy algorithms for diffuse optical tomography reconstruction
NASA Astrophysics Data System (ADS)
Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.
2018-03-01
Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive. Also, these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem using the compressive sensing framework; various greedy algorithms, such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods, such as the least-squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of a small number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper.
Extensive simulation results confirm that CS based DOT reconstruction outperforms the conventional DOT imaging methods in terms of computational efficiency. The main advantage of this study is that the forward diffusion equation solver need not be repeatedly solved.
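The simplest of the greedy algorithms listed, OMP, can be sketched as follows. This is a textbook version (numpy assumed), not the authors' DOT-specific implementation: at each step the column of the sensing matrix most correlated with the residual joins the support, and all chosen atoms are re-fit by least squares.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ~= A @ x.
    Greedy column selection followed by a least-squares re-fit over the
    accumulated support."""
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # column most correlated with what is still unexplained
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In the DOT setting, A would be the sensitivity (Jacobian) matrix relating boundary measurements to Δα, and the sparsity level k reflects the assumption that the absorption change is localized.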
Sub-band/transform compression of video sequences
NASA Technical Reports Server (NTRS)
Sauer, Ken; Bauer, Peter
1992-01-01
The progress on compression of video sequences is discussed. The overall goal of the research was the development of data compression algorithms for high-definition television (HDTV) sequences, but most of our research is general enough to be applicable to much more general problems. We have concentrated on coding algorithms based on both sub-band and transform approaches. Two very fundamental issues arise in designing a sub-band coder. First, the form of the signal decomposition must be chosen to yield band-pass images with characteristics favorable to efficient coding. A second basic consideration, whether coding is to be done in two or three dimensions, is the form of the coders to be applied to each sub-band. Computational simplicity is of the essence. We review the first portion of the year, during which we improved and extended some of the previous grant period's results. The pyramid nonrectangular sub-band coder, limited to intra-frame application, is discussed. Perhaps the most critical component of the sub-band structure is the design of band-splitting filters. We apply very simple recursive filters, which operate at alternating levels on rectangularly sampled and quincunx-sampled images. We also cover the techniques we have studied for the coding of the resulting band-pass signals. We discuss adaptive three-dimensional coding which takes advantage of the detection algorithm developed last year. To this point, all the work on this project has been done without the benefit of motion compensation (MC). Motion compensation is included in many proposed codecs, but adds significant computational burden and hardware expense. We have sought to find a lower-cost alternative featuring a simple adaptation to motion in the form of the codec. In sequences of high spatial detail and zooming or panning, it appears that MC will likely be necessary for the proposed quality and bit rates.
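The band-splitting idea can be illustrated with the simplest possible two-band split, the Haar average/difference pair. This is only a stand-in for the recursive filters the report actually applies to rectangularly and quincunx-sampled images, but it shows the perfect-reconstruction property a sub-band coder relies on.

```python
def haar_split(signal):
    """One-level two-band split: low band = pairwise averages (coarse
    content), high band = pairwise differences (detail). Assumes even
    length; a 1-D stand-in for the 2-D band-splitting filters described."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def haar_merge(low, high):
    """Synthesis filter bank: exactly inverts haar_split."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out
```

Because the split is exactly invertible, all compression loss comes from how coarsely each band is subsequently coded, which is why the per-band coder choice is the second fundamental design issue named above.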
Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.
2017-02-01
In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.
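The displacement-matching idea can be caricatured in one dimension: recover spring stiffnesses by gradient descent on the misfit between predicted and measured elongations. This toy sketch is vastly simpler than the adjoint-based 3D solver described (no equilibrium PDE, no regularization, hypothetical step size and iteration count) and is only meant to convey the optimization structure.

```python
def recover_stiffness(elongs, force, k0=1.0, lr=0.5, steps=200):
    """Toy 1D inverse problem: springs in series under a known force; the
    predicted elongation of spring i is force / k_i. Each stiffness k_i is
    found by gradient descent on the squared misfit between predicted and
    measured elongations (a scalar caricature of minimizing the difference
    between predicted and measured displacement fields)."""
    ks = [k0] * len(elongs)
    for _ in range(steps):
        for i, e in enumerate(elongs):
            pred = force / ks[i]
            # d/dk of (pred - e)^2, with d(pred)/dk = -force / k^2
            grad = 2.0 * (pred - e) * (-force / ks[i] ** 2)
            ks[i] -= lr * grad
    return ks
```

In the real problem the "prediction" step is a full elasticity solve and the gradient comes from an adjoint computation, but the loop structure (predict, compare, update the material parameters) is the same.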
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
In this paper, we propose a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique, in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and the image content they represent after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove the redundancy of inter-frames along the temporal direction using motion-compensated temporal filtering, so that high coding performance and flexible scalability can be provided by this scheme. In order to make the compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
Hydroelastic behaviour of a structure exposed to an underwater explosion.
Colicchio, G; Greco, M; Brocchini, M; Faltinsen, O M
2015-01-28
The hydroelastic interaction between an underwater explosion and an elastic plate is investigated numerically through a domain-decomposition strategy. The three-dimensional features of the problem require a large computational effort, which is reduced through a weak coupling between a one-dimensional radial blast solver, which resolves the blast evolution far from the boundaries, and a three-dimensional compressible flow solver used where the interactions between the compression wave and the boundaries take place and the flow becomes three-dimensional. The three-dimensional flow solver at the boundaries is directly coupled with a modal structural solver that models the response of the solid boundaries as elastic plates. This enables one to simulate the fluid-structure interaction as a strong coupling, in order to capture hydroelastic effects. The method has been applied to the experimental case of Hung et al. (2005 Int. J. Impact Eng. 31, 151-168 (doi:10.1016/j.ijimpeng.2003.10.039)), with the explosion and the structure sufficiently far from other boundaries, and successfully validated in terms of the evolution of the acceleration induced on the plate. It was also used to investigate the interaction of an underwater explosion with the bottom of a close-by ship modelled as an orthotropic plate. In the application, the acoustic phase of the fluid-structure interaction is examined, highlighting the need for fluid-structure coupling to correctly capture the possible inception of cavitation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
DSC and TG Analysis of a Blended Binder Based on Waste Ceramic Powder and Portland Cement
NASA Astrophysics Data System (ADS)
Pavlík, Zbyšek; Trník, Anton; Kulovaná, Tereza; Scheinherrová, Lenka; Rahhal, Viviana; Irassar, Edgardo; Černý, Robert
2016-03-01
The cement industry belongs to the business sectors characterized by high energy consumption and high CO2 generation. Therefore, any replacement of cement in concrete by waste materials can lead to immediate environmental benefits. In this paper, a possible use of waste ceramic powder in blended binders is studied. At first, the chemical composition of Portland cement and ceramic powder is analyzed using the X-ray fluorescence method. Then, thermal and mechanical characterization of hydrated blended binders containing up to 24% ceramic powder is carried out over the period of 2 to 28 days. The differential scanning calorimetry and thermogravimetry measurements are performed in the temperature range of 25°C to 1000°C in an argon atmosphere. The measurement of compressive strength is done according to the European standards for cement mortars. The thermal analysis identifies the temperatures, and quantifies the enthalpy and mass changes, associated with the liberation of physically bound water, the dehydration of calcium silicate hydrates, and the decomposition of portlandite, vaterite and calcite. The portlandite content is found to decrease with time for all blends, which provides evidence of the pozzolanic activity of ceramic powder even within the limited monitoring time of 28 days. Taking into account the favorable results obtained in the measurement of compressive strength, it can be concluded that the applied waste ceramic powder can be successfully used as a supplementary cementing material to Portland cement in amounts of up to 24 mass%.
Maurya, Rakesh Kumar; Saxena, Mohit Raj; Rai, Piyush; Bhardwaj, Aashish
2018-05-01
Currently, diesel engines are preferred over gasoline engines due to their higher torque output and fuel economy. However, diesel engines confront the major challenge of meeting future stringent emission norms (especially for soot particle emissions) while maintaining that fuel economy. In this study, the nanosize-range soot particle emission characteristics of a stationary (non-road) diesel engine have been experimentally investigated. Experiments are conducted at a constant speed of 1500 rpm for three compression ratios and nozzle opening pressures at different engine loads. The in-cylinder pressure history for 2000 consecutive engine cycles is recorded, and the averaged data are used for analysis of combustion characteristics. An electrical-mobility-based fast particle sizer is used to analyze the particle size and mass distributions of engine exhaust particles at different test conditions. The soot particle distribution from 5 to 1000 nm was recorded. Results show that total particle concentration decreases with an increase in engine operating load. Moreover, the addition of butanol to the diesel fuel reduces the soot particle concentration. Regression analysis was also conducted to derive a correlation between combustion parameters and particle number emissions for different compression ratios. The regression analysis shows a strong correlation between cylinder-pressure-based combustion parameters and particle number emissions.
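A regression between a cylinder-pressure-based combustion parameter and particle number emissions, as described above, can be sketched with ordinary least squares. The data values, the choice of peak pressure as the predictor, and the log-scaled particle counts below are all hypothetical illustrations, not the study's measurements:

```python
import numpy as np

# Illustrative OLS regression of particle number (PN) emissions on a
# cylinder-pressure-based combustion parameter. All values are synthetic.

peak_pressure = np.array([55.0, 60.0, 65.0, 70.0, 75.0])  # bar (hypothetical)
log_pn = np.array([8.1, 7.9, 7.6, 7.4, 7.1])              # log10 #/cc (hypothetical)

# Fit log_pn ~ a * peak_pressure + b
A = np.vstack([peak_pressure, np.ones_like(peak_pressure)]).T
(a, b), *_ = np.linalg.lstsq(A, log_pn, rcond=None)

# Coefficient of determination R^2 as a measure of correlation strength
pred = a * peak_pressure + b
ss_res = np.sum((log_pn - pred) ** 2)
ss_tot = np.sum((log_pn - log_pn.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot   # close to 1 indicates a strong correlation
```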
The physical and mechanical properties of treated and untreated Gigantochloa Scortechinii bamboo
NASA Astrophysics Data System (ADS)
Daud, Norhasliya Mohd; Nor, Norazman Mohamad; Yusof, Mohammed Alias; Bakhri, Azrul Affandhi Mustaffa Al; Shaari, Amalina Aisyah
2018-02-01
Bamboo's advantages, such as fast growth, renewability and ready availability as a raw material, meet the demand for sustainable materials in construction. Bamboo can act as reinforcement to enhance the strength of structural members. This paper investigates the properties of Gigantochloa Scortechinii bamboo (moisture content, density, compression, shear and bending) with reference to ISO 22157. For both untreated and treated bamboo, moisture content is highest at the bottom section, while density is highest at the top section. Compressive strength was between 19.96 and 23.80 MPa for untreated bamboo and between 31.74 and 36.60 MPa for treated bamboo; the highest compressive strength was at the top section, which has the greatest wall thickness. Shear strength was between 4.28 and 5.69 MPa for untreated bamboo with node and between 3.67 and 5.21 MPa for treated bamboo with node; samples with a node recorded higher shear strength than internode samples. Untreated bamboo recorded an MOR between 53.64 and 73.66 MPa, and treated bamboo between 58.23 and 62.86 MPa. The MOE of untreated bamboo was between 26.70 and 36.31 GPa, while that of treated bamboo was between 28.83 and 33.41 GPa. By replacing conventional building materials with bamboo, material costs will be reduced and sustainability will be enhanced.
Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong
2014-01-01
As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328
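The normalized correlation coefficient used above as the similarity measure between palmprint feature vectors can be sketched directly (the WRHOG feature extraction itself is not reproduced). The feature vectors below are illustrative, not real palmprint features:

```python
import numpy as np

# Normalized correlation coefficient between two feature vectors:
# both are zero-mean, unit-variance scaled, then correlated. A genuine
# match scores near 1; an impostor comparison scores well below 1.

def ncc(f1, f2):
    """Correlation of two feature vectors after standardization."""
    f1 = (f1 - f1.mean()) / f1.std()
    f2 = (f2 - f2.mean()) / f2.std()
    return float(np.mean(f1 * f2))

genuine = np.array([0.2, 0.8, 0.5, 0.9, 0.1])   # hypothetical feature vector
same_palm = genuine + 0.01                       # near-identical features
other_palm = np.array([0.9, 0.1, 0.8, 0.2, 0.5])

match_score = ncc(genuine, same_palm)      # close to 1
nonmatch_score = ncc(genuine, other_palm)  # well below 1
```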
High-frequency monopole sound source for anechoic chamber qualification
NASA Astrophysics Data System (ADS)
Saussus, Patrick; Cunefare, Kenneth A.
2003-04-01
Anechoic chamber qualification procedures require the use of an omnidirectional monopole sound source. Required characteristics for these monopole sources are explicitly listed in ISO 3745. Building a high-frequency monopole source that meets these characteristics has proved difficult due to the size limitations imposed by small wavelengths at high frequency. A prototype design developed for use in hemianechoic chambers employs telescoping tubes, which act as an inverse horn. This same design can be used in anechoic chambers, with minor adaptations. A series of gradually decreasing brass telescoping tubes is attached to the throat of a well-insulated high-frequency compression driver. Therefore, all of the sound emitted from the driver travels through the horn and exits through an opening of approximately 2.5 mm. Directivity test data show that this design meets all of the requirements set forth by ISO 3745.
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large-scale systems comprised of numerous independent systems -- each capable of independent operations in its own right -- that, when brought in conjunction, offer capabilities and performance beyond those of the individual constituent systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach seeking to simultaneously design a new aircraft and allocate this new aircraft, along with existing aircraft, to meet passenger demand at minimum fleet-level operating cost for a single airline. The result of this approach describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine the characteristics of a new aircraft that maximizes the profit of multiple airlines, recognizing the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (nonlinear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005.
The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
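The idea of segregating the continuous design subspace from the integer allocation subspace can be illustrated on a single hypothetical route: for each candidate aircraft size, the allocation subproblem is solved exactly, and the cheapest combination wins. The cost model, coefficients, and enumeration below are a much-simplified sketch, not the paper's formulation:

```python
import math

# Decomposing a toy mixed-integer problem: seat capacity (continuous design
# variable, sampled on a grid) vs. number of flights (integer allocation).
# For each candidate design, the allocation subproblem has a closed-form
# optimum: the fewest flights that still meet demand.

demand = 1000                    # passengers on one route (hypothetical)

def flight_cost(seats):
    # per-flight operating cost: fixed part plus a superlinear seat term
    return 5000.0 + 2.0 * seats ** 1.5

best = None
for seats in range(50, 401, 10):       # candidate aircraft designs
    n = math.ceil(demand / seats)      # allocation subproblem: fewest flights
    total = n * flight_cost(seats)
    if best is None or total < best[2]:
        best = (seats, n, total)

seats, n, total = best    # cheapest (design, allocation) pair found
```

The superlinear seat term gives the tradeoff its interior optimum: very small aircraft pay the fixed cost too often, very large ones pay too much per flight.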
NASA Astrophysics Data System (ADS)
Yuan, Jiao-Nan; Wei, Yong-Kai; Zhang, Xiu-Qing; Chen, Xiang-Rong; Ji, Guang-Fu; Kotni, Meena Kumari; Wei, Dong-Qing
2017-10-01
The shock response has a great influence on the design, synthesis, and application of energetic materials in both industrial and military areas. Therefore, the initial decomposition mechanism of bond scission at the atomistic level of condensed-phase α-RDX under shock loading has been studied based on quantum molecular dynamics simulations in combination with a multi-scale shock technique. First, based on the frontier molecular orbital theory, our calculated result shows that the N-NO2 bond is the weakest bond in the α-RDX molecule in the ground state, which may be the initial bond for pyrolysis. Second, the changes of bonds under shock loading are investigated by the changes of structures, kinetic bond lengths, and Laplacian bond orders during the simulation. Also, the variation of thermodynamic properties with time in shocked α-RDX at 10 km/s along the lattice vector a for a timescale of up to 3.5 ps is presented. By analyzing the detailed structural changes of RDX under shock loading, we find that the shocked RDX crystal undergoes a process of compression and rotation, which leads to the initial rupture of the C-N bond. The time variation of dynamic bond lengths in a shocked RDX crystal is calculated, and the result indicates that the C-N bond is easier to rupture than other bonds. The Laplacian bond orders are used to predict the molecular reactivity and stability. The values of the calculated bond orders show that the C-N bonds are more sensitive than other bonds under shock loading. In short, C-N bond scission has been validated as the initial decomposition step in an RDX crystal shocked at 10 km/s. Finally, the bond-length criterion has been used to identify individual molecules in the simulation. The distance thresholds up to which two particles are considered direct neighbors and assigned to the same cluster have been tested. The species and number densities of the initial decomposition products are collected from the trajectory.
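The bond-length criterion mentioned above, under which two atoms closer than a distance threshold are treated as bonded and connected components of the resulting graph are identified as molecules, can be sketched as follows. The coordinates and cutoff are illustrative, not RDX geometry:

```python
import numpy as np

# Cluster identification by the bond-length (distance-threshold) criterion:
# build an adjacency matrix from pairwise distances, then label connected
# components via depth-first search. Each component is one "molecule".

coords = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 0.9, 0.0],   # cluster 1
    [5.0, 5.0, 5.0], [5.8, 5.0, 5.0],                     # cluster 2
    [9.0, 0.0, 0.0],                                      # lone atom
])
cutoff = 1.6  # neighbor distance threshold (hypothetical value)

n = len(coords)
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
adj = (dists < cutoff) & ~np.eye(n, dtype=bool)

labels = [-1] * n
def flood(i, lab):
    """Label every atom reachable from atom i with cluster id lab."""
    stack = [i]
    while stack:
        j = stack.pop()
        if labels[j] == -1:
            labels[j] = lab
            stack.extend(np.nonzero(adj[j])[0].tolist())

n_mol = 0
for i in range(n):
    if labels[i] == -1:
        flood(i, n_mol)
        n_mol += 1   # three clusters for the coordinates above
```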
NASA Astrophysics Data System (ADS)
Brouns, Karlijn; Eikelboom, Tessa; Jansen, Peter C.; Janssen, Ron; Kwakernaak, Cees; van den Akker, Jan J. H.; Verhoeven, Jos T. A.
2015-02-01
Dutch peatlands have been subsiding due to peat decomposition, shrinkage and compression, since their reclamation in the 11th century. Currently, subsidence amounts to 1-2 cm/year. Water management in these areas is complex and costly, greenhouse gases are being emitted, and surface water quality is relatively poor. Regional and local authorities and landowners responsible for peatland management have recognized these problems. In addition, the Netherlands Royal Meteorological Institute predicts higher temperatures and drier summers, which both are expected to enhance peat decomposition. Stakeholder workshops have been organized in three case study areas in the province of Friesland to exchange knowledge on subsidence and explore future subsidence rates and the effects of land use and management changes on subsidence rates. Subsidence rates were up to 3 cm/year in deeply drained parcels and increased when we included climate change in the modeling exercises. This means that the relatively thin peat layers in this province (ca 1 m) would shrink or even disappear by the end of the century when current practices continue. Adaptation measures were explored, such as extensive dairy farming and the production of new crops in wetter conditions, but little experience has been gained on best practices. The workshops have resulted in useful exchange of ideas on possible measures and their consequences for land use and water management in the three case study areas. The province and the regional water board will use the results to develop land use and water management policies for the next decades.
Formats and Network Protocols for Browser Access to 2D Raster Data
NASA Astrophysics Data System (ADS)
Plesea, L.
2015-12-01
Tiled web maps in browsers are a major success story, forming the foundation of many current web applications. Enabling tiled data access is the next logical step, and is likely to meet with similar success. Many ad-hoc approaches have already started to appear, and something similar is being explored within the Open Geospatial Consortium. One of the main obstacles in making browser data access a reality is the lack of a well-known data format. This obstacle also represents an opportunity to analyze the requirements and possible candidates, applying lessons learned from web tiled image services and protocols. Similar to its image counterpart, a web tile raster data format needs to have good intrinsic compression and be able to handle high-byte-count data types including floating point. An overview of a possible solution to the format problem, a 2D raster data compression algorithm called Limited Error Raster Compression (LERC), will be presented. In addition to the format, best practices for high-request-rate HTTP services also need to be followed. In particular, content delivery network (CDN) caching suitability needs to be part of any design, not an afterthought. Last but not least, HTML5 browsers will certainly be part of any solution, since they provide improved access to binary data, as well as more powerful ways to view and interact with the data in the browser. In a simple but relevant application, digital elevation model (DEM) raster data is served as LERC-compressed data tiles which are used to generate terrain in an HTML5 scene viewer.
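The limited-error idea behind a format like LERC can be sketched with its quantization core: quantizing raster values with a step of twice the error bound guarantees every reconstructed value is within that bound of the original. This is only the quantization principle, not the actual LERC bitstream format; the tile values are made up:

```python
import numpy as np

# Limited-error quantization: with step = 2 * max_error, rounding to the
# nearest step keeps every reconstruction error <= max_error. The resulting
# small integers are what a real coder would then compress.

def encode(block, max_error):
    base = block.min()
    step = 2.0 * max_error
    q = np.round((block - base) / step).astype(np.int64)
    return base, step, q

def decode(base, step, q):
    return base + q * step

dem = np.array([[101.3, 101.9, 102.4],
                [103.7, 104.1, 104.9]])     # toy elevation tile (meters)
base, step, q = encode(dem, max_error=0.5)
recon = decode(base, step, q)
worst = float(np.max(np.abs(recon - dem)))  # bounded by max_error
```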
Improving multispectral satellite image compression using onboard subpixel registration
NASA Astrophysics Data System (ADS)
Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin
2013-09-01
Future CNES earth observation missions will have to deal with an ever-increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by bit-plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as low as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and its impact on the design of the instrument.
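The KLT-based spectral decorrelation step described above can be sketched on synthetic data: bands are treated as correlated variables, and projecting pixels onto the eigenvectors of the inter-band covariance concentrates the energy in a few components that then code cheaply. This is a minimal illustration of the transform, not the CNES implementation:

```python
import numpy as np

# Three synthetic, highly correlated "bands": a shared scene plus small
# per-band noise. The KLT is the eigenbasis of the inter-band covariance.

rng = np.random.default_rng(0)
scene = rng.normal(size=10000)                # shared scene structure
bands = np.stack([scene + 0.05 * rng.normal(size=10000) for _ in range(3)])

cov = np.cov(bands)                           # 3x3 inter-band covariance
eigvals, eigvecs = np.linalg.eigh(cov)
klt = eigvecs.T @ (bands - bands.mean(axis=1, keepdims=True))

# After the transform, almost all variance sits in one component, so the
# remaining components compress well.
energy = np.var(klt, axis=1)
dominant_fraction = energy.max() / energy.sum()
```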
ThermoYield actuators: nano-adjustable set-and-forget optics mounts
NASA Astrophysics Data System (ADS)
DeTienne, Michael D.; Bruccoleri, Alexander R.; Chalifoux, Brandon; Heilmann, Ralf K.; Tedesco, Ross E.; Schattenburg, Mark L.
2017-08-01
The X-ray optics community has been developing technology for high-angular-resolution, large-collecting-area X-ray telescopes such as the Lynx X-ray telescope concept. To meet the high collecting area requirements of such telescope concepts, research is being conducted on thin, segmented optics. The mounts that fixture and align segmented optics must be the correct length to sub-micron accuracy to satisfy the angular resolution goals of such a concept. Set-and-forget adjustable-length optical mounting posts have been developed to meet this need. The actuator consists of a metal cylinder. Halfway up the height of the cylinder, a reduced-diameter cylindrical neck is cut. To change the length of the actuator, an axial compressive or tensile force is applied. A high-current electrical pulse is sent through the actuator, and this current resistively heats the neck. The heating temporarily reduces the yield strength of the neck, so that the applied force plastically deforms it. Once the current stops and the neck cools, the neck regains its yield strength and the plastic deformation stops. All of the plastic deformation that occurred during heating is now permanent. Both compression and expansion of these actuators have been demonstrated in steps ranging from 6 nanometers to several microns. This paper explains the concept of ThermoYield actuation, explores X-ray telescope applications, describes an experimental setup, shows and discusses data, and proposes future ideas.
The Influence of Addition of Plastiment-VZ to Concrete Characteristics in Riau Province
NASA Astrophysics Data System (ADS)
Wahyuni Megasari, Shanti; Winayati
2017-12-01
Riau Province has an area of 8,702,000 ha, consisting of 7,121,344 ha of forest and 3,867,000 ha of peatlands. Peat structures are soft and have pores that make them hold water easily. Peat water has high color intensity, low pH, high organic content and acidic properties, so it does not qualify as mixing water for concrete. To meet the water needs of the concrete mix, water must therefore be obtained from elsewhere, which incurs greater cost and time. To resolve this issue, advances in concrete technology have produced admixtures that help maintain the quality of concrete. Plastiment-VZ is a plasticizer that can increase the workability of concrete without added water. However, for use in the field, the selection of admixture must be adjusted to the planned situation and condition of the concrete; excessive use of admixture will also result in uneconomical concrete. The job mix was designed using the Department of Environment (DOE) method with a planned compressive strength of fc′ = 25 MPa. The percentages of Plastiment-VZ addition were 0%, 0.05%, 0.10%, 0.15% and 0.20% of the weight of cement. The amount of water was reduced by 10% of the total in this study. Specimens for each variation were made using cylindrical molds 15 cm in diameter and 30 cm high. After the specimens were cast and cured, compressive strength testing was performed at 28 days. The test results show that the average compressive strength tends to increase with the percentage of Plastiment-VZ added. The equation fitted to the average compressive strength is y = -362.7x² + 133.3x + 28.10 with R² = 0.969. The highest average compressive strength, 40.76 MPa, was obtained with the addition of 0.20% Plastiment-VZ.
Statistical testing with analysis of variance (ANOVA) indicates a highly significant effect of Plastiment-VZ addition on the compressive strength of the concrete. It can therefore be concluded that reducing the amount of water while adding Plastiment-VZ increases the compressive strength characteristics of the concrete.
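The reported regression can be evaluated directly: the vertex of the fitted parabola gives the dosage at which the fitted strength peaks. The interpretation of x as dosage in % of cement weight follows the abstract; the vertex calculation itself is standard:

```python
# Evaluating the strength-dosage regression reported above:
# y = -362.7 x^2 + 133.3 x + 28.10
# (y: compressive strength in MPa, x: Plastiment-VZ dosage in % of cement).

a, b, c = -362.7, 133.3, 28.10

def strength(x):
    return a * x**2 + b * x + c

x_peak = -b / (2 * a)          # vertex of the fitted parabola
y_peak = strength(x_peak)      # fitted strength at that dosage
print(round(x_peak, 3), round(y_peak, 2))
```

The fitted peak falls near 0.18% dosage, close to the 0.20% at which the highest measured strength (40.76 MPa) was observed.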
Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.
2016-01-01
Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461