Sample records for gain compression point

  1. Gain compression and its dependence on output power in quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Zhukov, A. E.; Maximov, M. V.; Savelyev, A. V.; Shernyakov, Yu. M.; Zubov, F. I.; Korenev, V. V.; Martinez, A.; Ramdane, A.; Provost, J.-G.; Livshits, D. A.

    2013-06-01

    The gain compression coefficient was evaluated by applying the frequency modulation/amplitude modulation technique in a distributed feedback InAs/InGaAs quantum dot laser. A strong dependence of the gain compression coefficient on the output power was found. Our analysis of the gain compression within the frame of the modified well-barrier hole burning model reveals that the gain compression coefficient decreases beyond the lasing threshold, which is in good agreement with the experimental observations.

  2. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
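    A toy Python sketch of the fixed-rate idea (the helper names and the shared-exponent scheme are mine, standing in for the paper's lifted orthogonal block transform and embedded coding): each block is encoded as one common exponent plus fixed-point mantissas, so every block occupies the same number of bits and can be decoded independently.

```python
import math

def compress_block(block, bits_per_value=16):
    # Shared block exponent: the largest binary exponent in the block.
    emax = max((math.frexp(abs(v))[1] for v in block if v != 0.0), default=0)
    # Fixed-point quantization against that exponent; clamp to the budget.
    scale = 2.0 ** (bits_per_value - 1) / 2.0 ** emax
    lo, hi = -(1 << (bits_per_value - 1)), (1 << (bits_per_value - 1)) - 1
    ints = [max(lo, min(hi, round(v * scale))) for v in block]
    return emax, ints

def decompress_block(emax, ints, bits_per_value=16):
    scale = 2.0 ** (bits_per_value - 1) / 2.0 ** emax
    return [i / scale for i in ints]
```

    Because the per-block size is constant, the offset of any block within the compressed stream is directly computable, which is what enables the block-granular random read/write access described above.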

  3. The Fusion Gain Analysis of the Inductively Driven Liner Compression Based Fusion

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John

    2016-10-01

    An analytical study of the fusion gain expected in the inductively driven liner compression (IDLC) based fusion is conducted to identify the fusion gain scaling at various operating conditions. The fusion based on the IDLC is a magneto-inertial fusion concept, where a Field-Reversed Configuration (FRC) plasmoid is compressed via the inductively-driven metal liner to drive the FRC to fusion conditions. In the past, an approximate scaling law for the expected fusion gain for the IDLC based fusion was obtained under the key assumptions of (1) D-T fuel at 5-40 keV, (2) adiabatic scaling laws for the FRC dynamics, (3) FRC energy dominated by the pressure balance with the edge magnetic field at the peak compression, and (4) the liner dwell time being liner final diameter divided by the peak liner velocity. In this study, various assumptions made in the previous derivation are relaxed to study the change in the fusion gain scaling from the previous result of G ∝ ml^(1/2) El^(11/8), where ml is the liner mass and El is the peak liner kinetic energy. The implication of the modified fusion gain scaling for the performance of the IDLC fusion reactor system is also explored.
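    As a quick numerical illustration of the approximate scaling G ∝ ml^(1/2) El^(11/8) quoted above, a hypothetical helper (my naming) for comparing the predicted gain at two operating points:

```python
def relative_gain(mass_ratio, energy_ratio):
    # G scales as (liner mass)^(1/2) * (peak liner kinetic energy)^(11/8),
    # so the gain ratio between two designs follows directly from the
    # ratios of those two quantities.
    return mass_ratio ** 0.5 * energy_ratio ** (11.0 / 8.0)
```

    For example, quadrupling the liner mass at fixed energy doubles the predicted gain, while the stronger energy exponent makes raising the peak kinetic energy the more effective lever.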

  4. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel-values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
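    The quantize-with-dithering step can be sketched in a few lines of Python (names mine; the Rice coding stage is omitted): a reproducible pseudo-random offset is added before rounding and subtracted on restore, which decorrelates the quantization error from the signal without enlarging the error bound.

```python
import random

def quantize_with_dither(values, q, seed=0):
    # Reproducible dither stream in [-0.5, 0.5); the same seed
    # regenerates it on restore (subtractive dithering).
    rng = random.Random(seed)
    dithers = [rng.random() - 0.5 for _ in values]
    return [round(v / q + d) for v, d in zip(values, dithers)]

def dequantize(ints, q, seed=0):
    rng = random.Random(seed)
    dithers = [rng.random() - 0.5 for _ in ints]
    return [(i - d) * q for i, d in zip(ints, dithers)]
```

    Because the dither is subtracted again on restore, the worst-case error stays at q/2, i.e. the dithering improves statistical precision without adding noise, consistent with the claim in the abstract.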

  5. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    PubMed

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
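    The per-block mode decision can be sketched as a standard Lagrangian rate-distortion comparison (a toy 1-D version; the names, the bit costs, and the assumption that intra coding reproduces the block are mine):

```python
def choose_mode(block, motion_predicted, lam, intra_bits, inter_bits):
    # Distortion if we reuse the motion-compensated block from the
    # previous frame instead of coding this block in intra mode.
    d_inter = sum((a - b) ** 2 for a, b in zip(block, motion_predicted))
    j_intra = 0.0 + lam * intra_bits   # assume intra is (near-)exact but costly
    j_inter = d_inter + lam * inter_bits  # inter is cheap but distorted
    return "intra" if j_intra <= j_inter else "inter"
```

    Raising the Lagrange multiplier `lam` penalizes rate more heavily, pushing more blocks into the cheap motion-compensated mode, which is how such a coder trades rate for geometry and color accuracy at low bit rates.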

  6. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that must be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
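    The flavor of this kind of lossless floating-point coding can be sketched in Python (my simplification: a previous-value predictor with XOR residuals, not the paper's plug-in predictors or entropy coder). Nearby values share their high-order bits, so the XOR residuals consist mostly of leading zeros and compress well, while the round trip stays bit-exact:

```python
import struct

def _bits(x):
    # Raw 64-bit pattern of an IEEE double.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def _float(b):
    return struct.unpack("<d", struct.pack("<Q", b))[0]

def encode(values):
    prev, out = 0, []
    for v in values:
        b = _bits(v)
        out.append(prev ^ b)   # residual vs. the predicted (previous) value
        prev = b
    return out

def decode(residuals):
    prev, vals = 0, []
    for r in residuals:
        prev ^= r
        vals.append(_float(prev))
    return vals
```

    Since only bit patterns are manipulated, no quantization onto an integer grid is needed and exact values are retained, which is the property the abstract identifies as essential.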

  7. Effect of Human Auditory Efferent Feedback on Cochlear Gain and Compression

    PubMed Central

    Drga, Vit; Plack, Christopher J.

    2014-01-01

    The mammalian auditory system includes a brainstem-mediated efferent pathway from the superior olivary complex by way of the medial olivocochlear system, which reduces the cochlear response to sound (Warr and Guinan, 1979; Liberman et al., 1996). The human medial olivocochlear response has an onset delay of between 25 and 40 ms and rise and decay constants in the region of 280 and 160 ms, respectively (Backus and Guinan, 2006). Physiological studies with nonhuman mammals indicate that onset and decay characteristics of efferent activation are dependent on the temporal and level characteristics of the auditory stimulus (Bacon and Smith, 1991; Guinan and Stankovic, 1996). This study uses a novel psychoacoustical masking technique using a precursor sound to obtain a measure of the efferent effect in humans. This technique avoids confounds currently associated with other psychoacoustical measures. Both temporal and level dependency of the efferent effect was measured, providing a comprehensive measure of the effect of human auditory efferents on cochlear gain and compression. Results indicate that a precursor (>20 dB SPL) induced efferent activation, resulting in a decrease in both maximum gain and maximum compression, with linearization of the compressive function for input sound levels between 50 and 70 dB SPL. Estimated gain decreased as precursor level increased, and increased as the silent interval between the precursor and combined masker-signal stimulus increased, consistent with a decay of the efferent effect. Human auditory efferent activation linearizes the cochlear response for mid-level sounds while reducing maximum gain. PMID:25392499

  8. Photogrammetric point cloud compression for tactical networks

    NASA Astrophysics Data System (ADS)

    Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.

    2017-05-01

    We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness to Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.

  9. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured in large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads will be placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
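    The mapping stage can be sketched as follows (my simplification: zlib stands in for the PNG encoder, and a fixed scan-line length stands in for the GPS-time/scanner-parameter mapping). Because consecutive returns land in adjacent pixels, the resulting "image" is smooth and deflates well:

```python
import zlib

def to_grid(points, n_cols):
    # One scanner sweep per row; consecutive returns become
    # neighboring pixels, preserving spatial coherence.
    return [points[i:i + n_cols] for i in range(0, len(points), n_cols)]

def compress_grid(rows, scale=100):
    # Quantize to 16-bit integers (like a 16-bit grayscale PNG channel),
    # then deflate -- the same entropy backend PNG uses.
    flat = bytearray()
    for row in rows:
        for v in row:
            flat += int(round(v * scale)).to_bytes(2, "big", signed=True)
    return zlib.compress(bytes(flat))
```

    A real implementation would additionally apply PNG's per-scanline prediction filters, which exploit exactly the row-to-row coherence this mapping creates.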

  10. Effects of bandwidth, compression speed, and gain at high frequencies on preferences for amplified music.

    PubMed

    Moore, Brian C J

    2012-09-01

    This article reviews a series of studies on the factors influencing sound quality preferences, mostly for jazz and classical music stimuli. The data were obtained using ratings of individual stimuli or using the method of paired comparisons. For normal-hearing participants, the highest ratings of sound quality were obtained when the reproduction bandwidth was wide (55 to 16000 Hz) and ripples in the frequency response were small (less than ± 5 dB). For hearing-impaired participants listening via a simulated five-channel compression hearing aid with gains set using the CAM2 fitting method, preferences for upper cutoff frequency varied across participants: Some preferred a 7.5- or 10-kHz upper cutoff frequency over a 5-kHz cutoff frequency, and some showed the opposite preference. Preferences for a higher upper cutoff frequency were associated with a shallow high-frequency slope of the audiogram. A subsequent study comparing the CAM2 and NAL-NL2 fitting methods, with gains slightly reduced for participants who were not experienced hearing aid users, showed a consistent preference for CAM2. Since the two methods differ mainly in the gain applied for frequencies above 4 kHz (CAM2 recommending higher gain than NAL-NL2), these results suggest that extending the upper cutoff frequency is beneficial. A system for reducing "overshoot" effects produced by compression gave small but significant benefits for sound quality of a percussion instrument (xylophone). For a high-input level (80 dB SPL), slow compression was preferred over fast compression.

  11. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
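    The minimum-removal and threshold-truncation stages might look like this in Python (my interpretation of the pipeline; the vector quantization and modified Huffman stages are omitted). Peaks above the threshold survive exactly, matching the lossless-peak claim:

```python
def encode_block(block, threshold):
    base = min(block)                       # minimum removal
    kept = {i: v - base for i, v in enumerate(block)
            if v - base >= threshold}       # truncate sub-threshold residuals
    return base, kept, len(block)

def decode_block(base, kept, n):
    # Untransmitted residuals are reconstructed as the block minimum.
    return [base + kept.get(i, 0) for i in range(n)]
```

    Only the sparse set of above-threshold residuals needs to be entropy coded, which is what keeps the output bit rate low when point sources dominate the scene.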

  12. High-Gain High-Field Fusion Plasma

    PubMed Central

    Li, Ge

    2015-01-01

    A Faraday wheel (FW)—an electric generator of constant electrical polarity that produces huge currents—could be implemented in an existing tokamak to study high-gain high-field (HGHF) fusion plasma, such as the Experimental Advanced Superconducting Tokamak (EAST). HGHF plasma can be realized in EAST by updating its pulsed-power system to compress plasma in two steps by induction fields; high gains of the Lawson triple product and fusion power are both predicted by formulating the HGHF plasma. Both gain rates are faster than the decrease rate of the plasma volume. The formulation is checked by earlier ATC tests. Good agreement between theory and tests indicates that scaling to over 10 T at EAST may be possible by two-step compressions with a compression ratio of the minor radius of up to 3. These results point to a quick new path of fusion plasma study, i.e., simulating the Sun by EAST. PMID:26507314

  13. Compressed storage of arterial pressure waveforms by selection of significant points.

    PubMed

    de Graaf, P M; van Goudoever, J; Wesseling, K H

    1997-09-01

    Continuous records of arterial blood pressure can be obtained non-invasively with Finapres, even for periods of 24 hours. Increasingly, storage of such records is done digitally, requiring large disc capacities. It is therefore necessary to find methods to store blood pressure waveforms in compressed form. The method of selection of significant points known from ECG data compression is adapted. Points are selected as significant wherever the first derivative of the pressure wave changes sign. As a second stage recursive partitioning is used to select additional points such that the difference between the selected points, linearly interpolated, and the original curve remains below a maximum. This method is tested on finger arterial pressure waveform epochs of 60 s duration taken from 32 patients with a wide range of blood pressures and heart rates. An average compression factor of 4.6 (SD 1.0) is obtained when accepting a maximum difference of 3 mmHg. The root mean squared error is 1 mmHg averaged over the group of patient waveforms. Clinically relevant parameters such as systolic, diastolic and mean pressure are reproduced with an offset error of less than 0.5 (0.3) mmHg and scatter less than 0.6 (0.1) mmHg. It is concluded that a substantial compression factor can be achieved with a simple and computationally fast algorithm and little deterioration in waveform quality and pressure level accuracy.
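    The two-stage selection described above can be sketched directly in Python (my implementation; segment bookkeeping simplified): stage one keeps every sample where the first difference changes sign, and stage two recursively adds the worst-error sample of each segment until linear interpolation stays within the tolerance.

```python
def significant_points(y, max_err):
    # Stage 1: keep endpoints and every sign change of the first difference.
    idx = {0, len(y) - 1}
    for i in range(1, len(y) - 1):
        if (y[i] - y[i - 1]) * (y[i + 1] - y[i]) < 0:
            idx.add(i)
    pts = sorted(idx)

    # Stage 2: recursive partitioning on each segment until the linear
    # interpolation error is at most max_err everywhere.
    def refine(a, b):
        if b - a < 2:
            return []
        worst, werr = None, max_err
        for i in range(a + 1, b):
            interp = y[a] + (y[b] - y[a]) * (i - a) / (b - a)
            err = abs(y[i] - interp)
            if err > werr:
                worst, werr = i, err
        if worst is None:
            return []
        return refine(a, worst) + [worst] + refine(worst, b)

    extra = (p for a, b in zip(pts, pts[1:]) for p in refine(a, b))
    return sorted(idx.union(extra))
```

    The compression factor is then simply the ratio of original samples to selected points, with the maximum reconstruction error bounded by design (3 mmHg in the study above).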

  14. Method for increasing the rate of compressive strength gain in hardenable mixtures containing fly ash

    DOEpatents

    Liskowitz, John W.; Wecharatana, Methi; Jaturapitakkul, Chai; Cerkanowicz, deceased, Anthony E.

    1997-01-01

    The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention provides a method for increasing the rate of strength gain of a hardenable mixture containing fly ash by exposing the fly ash to an aqueous slurry of calcium oxide (lime) prior to its incorporation into the hardenable mixture. The invention further relates to such hardenable mixtures, e.g., concrete and mortar, that contain fly ash pre-reacted with calcium oxide. In particular, the fly ash is added to a slurry of calcium oxide in water, prior to incorporating the fly ash in a hardenable mixture. The hardenable mixture may be concrete or mortar. In a specific embodiment, mortar containing fly ash treated by exposure to an aqueous lime slurry is prepared and tested for compressive strength at early time points.

  15. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress at much higher ratios.
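    The effect of a user-specified absolute error tolerance can be imitated in a few lines (zlib standing in for ZFP's transform coder; names mine): quantizing with step 2*tol guarantees the error bound, and smoother regions produce longer runs of similar bytes, hence better ratios.

```python
import struct
import zlib

def compress_bounded(values, abs_tol):
    # Quantization step 2*abs_tol guarantees |reconstructed - original|
    # <= abs_tol after rounding to the nearest step.
    q = 2.0 * abs_tol
    ints = [round(v / q) for v in values]
    raw = b"".join(struct.pack("<i", i) for i in ints)
    return zlib.compress(raw), q

def decompress_bounded(blob, q, n):
    raw = zlib.decompress(blob)
    return [struct.unpack_from("<i", raw, 4 * k)[0] * q for k in range(n)]
```

    Loosening the tolerance shrinks the integer range and increases byte redundancy, which is the same rate/accuracy trade the abstract explores per model variable.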

  16. Method for increasing the rate of compressive strength gain in hardenable mixtures containing fly ash

    DOEpatents

    Liskowitz, J.W.; Wecharatana, M.; Jaturapitakkul, C.; Cerkanowicz, A.E.

    1997-10-28

    The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention provides a method for increasing the rate of strength gain of a hardenable mixture containing fly ash by exposing the fly ash to an aqueous slurry of calcium oxide (lime) prior to its incorporation into the hardenable mixture. The invention further relates to such hardenable mixtures, e.g., concrete and mortar, that contain fly ash pre-reacted with calcium oxide. In particular, the fly ash is added to a slurry of calcium oxide in water, prior to incorporating the fly ash in a hardenable mixture. The hardenable mixture may be concrete or mortar. In a specific embodiment, mortar containing fly ash treated by exposure to an aqueous lime slurry is prepared and tested for compressive strength at early time points. 2 figs.

  17. Compression After Impact Testing of Sandwich Structures Using the Four Point Bend Test

    NASA Technical Reports Server (NTRS)

    Nettles, Alan T.; Gregory, Elizabeth; Jackson, Justin; Kenworthy, Devon

    2008-01-01

    For many composite laminated structures, the design is driven by data obtained from Compression after Impact (CAI) testing. There currently is no standard for CAI testing of sandwich structures although there is one for solid laminates of a certain thickness and lay-up configuration. Most sandwich CAI testing has followed the basic technique of this standard where the loaded ends are precision machined and placed between two platens and compressed until failure. If little or no damage is present during the compression tests, the loaded ends may need to be potted to prevent end brooming. By putting a sandwich beam in a four point bend configuration, the region between the inner supports is put under a compressive load and a sandwich laminate with damage can be tested in this manner without the need for precision machining. Also, specimens with no damage can be taken to failure so direct comparisons between damaged and undamaged strength can be made. Data is presented that demonstrates the four point bend CAI test and is compared with end loaded compression tests of the same sandwich structure.
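    Under classical sandwich-beam theory (thin, stiff facesheets carrying the bending load), the compressive facesheet stress between the inner supports can be estimated as below. This is a textbook approximation added for illustration, not a formula from the paper:

```python
def facesheet_stress(load, lever_arm, width, t_face, depth):
    """Facesheet stress for a sandwich beam in four-point bending.

    load      -- total applied load P (N)
    lever_arm -- outer-to-inner support distance a (m)
    width     -- beam width b (m)
    t_face    -- facesheet thickness (m)
    depth     -- distance between facesheet centroids d (m)
    """
    moment = 0.5 * load * lever_arm   # bending moment, constant between
                                      # the inner load points
    return moment / (width * t_face * depth)
```

    Because the moment is constant between the inner load points, the damaged region there sees a uniform compressive load in one facesheet, which is what lets the configuration substitute for end-loaded CAI testing without precision-machined or potted ends.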

  18. An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.

    PubMed

    Fout, N; Ma, Kwan-Liu

    2012-12-01

    In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.

  19. Analysis of actual pressure point using the power flexible capacitive sensor during chest compression.

    PubMed

    Minami, Kouichiro; Kokubo, Yota; Maeda, Ichinosuke; Hibino, Shingo

    2017-02-01

    In chest compression for cardiopulmonary resuscitation (CPR), the lower half of the sternum is pressed according to the American Heart Association (AHA) 2010 guidelines. There have been no studies identifying the exact location of the pressure applied by individual chest compressions. We developed a rubber power-flexible capacitive sensor that could measure the actual pressure point of chest compression in real time. Here, we examined the pressure point of chest compression by ambulance crews during CPR using a mannequin. We included 179 ambulance crews. Chest compression was performed for 2 min. The pressure position was monitored, and the quality of chest compression was analyzed by using a flexible pressure sensor (Shinnosukekun™). Of the ambulance crews, 58 (32.4 %) pressed the center and 121 (67.6 %) pressed outside the proper area of chest compression. Many of them pressed outside the center; 8, 7, 41, and 90 pressed on the caudal, left, right, and cranial side, respectively. Average compression rate, average recoil, average depth, and average duty cycle were 108.6 counts per minute, 0.089, 4.5 cm, and 48.27 %, respectively. Many of the ambulance crews did not consistently press on the lower half of the sternum. This new device has the potential to improve the quality of CPR during training or in clinical practice.

  20. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random

  1. Oil point and mechanical behaviour of oil palm kernels in linear compression

    NASA Astrophysics Data System (ADS)

    Kabutey, Abraham; Herak, David; Choteborsky, Rostislav; Mizera, Čestmír; Sigalingging, Riswanti; Akangbe, Olaosebikan Layi

    2017-07-01

    The study described the oil point and mechanical properties of roasted and unroasted bulk oil palm kernels under compression loading. Little information on this is available in the literature. A universal compression testing machine and a vessel of 60 mm diameter with a plunger were used, applying a maximum force of 100 kN and speeds ranging from 5 to 25 mm min-1. The initial pressing height of the bulk kernels was measured at 40 mm. The oil point was determined by a litmus test for each deformation level of 5, 10, 15, 20, and 25 mm at a minimum speed of 5 mm min-1. The measured parameters were the deformation, deformation energy, oil yield, oil point strain and oil point pressure. Clearly, the roasted bulk kernels required less deformation energy compared to the unroasted kernels for recovering the kernel oil. However, neither type of kernel was permanently deformed. The average oil point strain was determined at 0.57. The study is an essential contribution to pursuing innovative methods for processing palm kernel oil in rural areas of developing countries.

  2. Laser pulse self-compression in an active fibre with a finite gain bandwidth under conditions of a nonstationary nonlinear response

    NASA Astrophysics Data System (ADS)

    Balakin, A. A.; Litvak, A. G.; Mironov, V. A.; Skobelev, S. A.

    2018-04-01

    We study the influence of a nonstationary nonlinear response of a medium on self-compression of soliton-like laser pulses in active fibres with a finite gain bandwidth. Based on the variational approach, we qualitatively analyse the self-action of the wave packet in the system under consideration in order to classify the main evolution regimes and to determine the minimum achievable laser pulse duration during self-compression. The existence of stable soliton-type structures is shown in the framework of the parabolic approximation of the gain profile (in the approximation of the Ginzburg-Landau equation). An analysis of the self-action of laser pulses in the framework of the nonlinear Schrödinger equation with a sign-constant gain profile demonstrates a qualitative change in the dynamics of the wave field in the case of a nonstationary nonlinear response that shifts the laser pulse spectrum from the amplification region and stops the pulse compression. Expressions for a minimum duration of a soliton-like laser pulse are obtained as a function of the problem parameters, which are in good agreement with the results of numerical simulation.

  3. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
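    As a rough sketch of the quantize-then-compress idea described above (with zlib standing in for the Rice coder, and the signal, quantization step and helper name being illustrative assumptions, not the fpack implementation):

```python
import math
import struct
import zlib

def quantize_then_compress(pixels, q):
    """Quantize floats to scaled integer levels, then compress losslessly.

    zlib stands in here for the Rice coder used by fpack; the idea is the
    same: quantization removes incompressible noise bits so the lossless
    stage has something to work with.
    """
    ints = [round(p / q) for p in pixels]              # scaled integer levels
    raw = struct.pack(f"{len(pixels)}d", *pixels)      # original float64 bytes
    quantized = struct.pack(f"{len(ints)}i", *ints)    # int32 levels
    return len(zlib.compress(raw)), len(zlib.compress(quantized))

# Smooth signal plus small deterministic "noise" in the low mantissa bits.
pixels = [math.sin(i / 50.0) + 1e-9 * ((i * 2654435761) % 997)
          for i in range(4000)]
size_float, size_quant = quantize_then_compress(pixels, q=1e-3)
# Coarser quantization (larger q) gives greater compression at the cost
# of photometric/astrometric precision, as the abstract notes.
```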

  4. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have long been used in computer graphics, and with the recent possibility of real-time capture and rendering, they have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds that is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band, and the Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
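    The weighted, Haar-like merge step of a region-adaptive transform can be sketched as follows (a hedged illustration of the idea, not the authors' exact coder; the function name and values are hypothetical):

```python
import math

def weighted_haar_step(x1, w1, x2, w2):
    """One merge step of a region-adaptive Haar-like transform.

    Two sibling nodes holding colors x1, x2 averaged over w1 and w2
    points are combined into a low-pass (DC) and a high-pass (AC)
    coefficient. The square-root weighting keeps the 2x2 transform
    orthonormal even when the regions hold different numbers of points.
    """
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    dc = a * x1 + b * x2      # carried up the hierarchy with weight w1 + w2
    ac = -b * x1 + a * x2     # entropy-coded (e.g., arithmetic coding)
    return dc, ac

dc, ac = weighted_haar_step(100.0, 3, 140.0, 1)
# Orthonormality: energy is preserved across the step.
assert abs((dc * dc + ac * ac) - (100.0**2 + 140.0**2)) < 1e-6
```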

  5. High gain antenna pointing on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Vanelli, C. Anthony; Ali, Khaled S.

    2005-01-01

    This paper describes the algorithm used to point the high gain antennae on NASA/JPL's Mars Exploration Rovers. The gimballed antennae must track the Earth as it moves across the Martian sky during communication sessions. The algorithm accounts for (1) gimbal range limitations, (2) obstructions both on the rover and in the surrounding environment, (3) kinematic singularities in the gimbal design, and (4) up to two joint-space solutions for a given pointing direction. The algorithm computes the intercept times for each of the occlusions and chooses the joint-space solution that provides the longest track time before encountering an occlusion. Upon encountering an occlusion, the pointing algorithm automatically switches to the other joint-space solution if it is not also occluded. The algorithm has successfully provided flop-free pointing for both rovers throughout the mission.

  6. Recent advances in lossy compression of scientific floating-point data

    NASA Astrophysics Data System (ADS)

    Lindstrom, P.

    2017-12-01

    With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.

  7. In-flight calibration of the high-gain antenna pointing for the Mariner Venus-Mercury 1973 spacecraft

    NASA Technical Reports Server (NTRS)

    Hardman, J. M.; Havens, W. F.; Ohtakay, H.

    1975-01-01

    The methods used for in-flight calibration of the pointing direction of the Mariner Venus-Mercury 1973 spacecraft high gain antenna, and the antenna pointing accuracy achieved, are described. The overall pointing calibration was accomplished by performing calibration sequences at a number of points along the spacecraft trajectory, each consisting of articulating the antenna about the expected spacecraft-Earth vector to determine systematic pointing errors. The high gain antenna pointing system, the error model used in the calibration, and the calibration and pointing strategy and results are discussed.

  8. Individual set-point and gain of emmetropization in chickens.

    PubMed

    Tepelus, Tudor Cosmin; Schaeffel, Frank

    2010-01-01

    , were also not correlated, suggesting that the "gains of lens compensation" are different from those in deprivation myopia. In summary, (1) there appears to be an endogenous, possibly genetic, definition of the set-point of emmetropization in each individual, which is similar in both eyes, (2) visual conditions that induce ametropia produce variable changes in refractions, with high correlations between both eyes, (3) overall, the "gain of emmetropization" appears only weakly controlled by endogenous factors.

  9. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage has become a noticeable proportion of the total cost of generation and analysis. In particular, the DNA sequencing rate is increasing significantly faster than disk storage capacity, threatening to outgrow available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
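    For context, the trivial fixed-rate baseline that specialized DNA compressors such as SeqCompress must beat is 2 bits per base; a minimal sketch (the helper names are illustrative, and this is not the paper's statistical-model/arithmetic-coding method):

```python
def pack_dna(seq):
    """Naive fixed-rate baseline: 2 bits per base over the {A,C,G,T} alphabet."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    bits = 0
    for base in seq:
        bits = (bits << 2) | code[base]
    return bits.to_bytes((2 * len(seq) + 7) // 8, "big")

def unpack_dna(packed, n):
    """Lossless inverse of pack_dna for a sequence of known length n."""
    decode = "ACGT"
    bits = int.from_bytes(packed, "big")
    return "".join(decode[(bits >> shift) & 3]
                   for shift in range(2 * (n - 1), -1, -2))

seq = "ACGTACGGTTAC"
packed = pack_dna(seq)
assert unpack_dna(packed, len(seq)) == seq   # lossless round trip
assert len(packed) == 3                      # 12 bases -> 24 bits -> 3 bytes
```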

  10. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602

  11. Beam steering performance of compressed Luneburg lens based on transformation optics

    NASA Astrophysics Data System (ADS)

    Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun

    2018-06-01

    In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.

  12. Floating-point scaling technique for sources separation automatic gain control

    NASA Astrophysics Data System (ADS)

    Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.

    2012-07-01

    Based on the floating-point representation, and taking advantage of the scaling-factor indeterminacy in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs automatic gain control in an on-line BSS environment. We demonstrate its effectiveness using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper than Euclidean normalisation and better suited to hardware implementation.
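    The power-of-two scaling idea can be illustrated with a minimal sketch (not the authors' implementation; the row values and helper name are hypothetical):

```python
import math

def rescale_row(row):
    """Rescale a separation-matrix row by a power of two so that its
    largest entry lands in [0.5, 1). A power-of-two scale only touches
    the floating-point exponent, so the rescaling is exact (no mantissa
    rounding) and cheap in hardware, and the scale indeterminacy of BSS
    makes the change harmless to the recovered sources.
    """
    peak = max(abs(v) for v in row)
    _, e = math.frexp(peak)                 # peak = m * 2**e with m in [0.5, 1)
    return [math.ldexp(v, -e) for v in row]

row = rescale_row([1200.0, -0.75, 300.0])
assert 0.5 <= max(abs(v) for v in row) < 1.0
assert row[0] / row[2] == 4.0               # ratios preserved exactly
```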

  13. Anaesthetic injection versus ischemic compression for the pain relief of abdominal wall trigger points in women with chronic pelvic pain.

    PubMed

    Montenegro, Mary L L S; Braz, Carolina A; Rosa-e-Silva, Julio C; Candido-dos-Reis, Francisco J; Nogueira, Antonio A; Poli-Neto, Omero B

    2015-12-01

    Chronic pelvic pain is a common condition among women; in 10 to 30 % of cases the pain originates from the abdominal wall and is associated with trigger points. Although little is known about their pathophysiology, various treatments have been practiced clinically. The purpose of this study was to evaluate the efficacy of local anaesthetic injections versus ischemic compression via physical therapy for pain relief of abdominal wall trigger points in women with chronic pelvic pain. We conducted a parallel-group randomized trial including 30 women with chronic pelvic pain with abdominal wall trigger points. Subjects were randomly assigned to one of two intervention groups. One group received an injection of 2 mL of 0.5 % lidocaine without a vasoconstrictor into a trigger point. In the other group, ischemic compression via physical therapy was administered at the trigger points three times, with each session lasting 60 s and a rest period of 30 s between applications. Both treatments were administered during one weekly session for four weeks. Our primary outcomes were satisfactory clinical response rates and percentages of pain relief; our secondary outcomes were pain threshold and tolerance at the trigger points. All subjects were evaluated at baseline and 1, 4, and 12 weeks after the interventions. The study was conducted at a tertiary hospital associated with a university, providing assistance predominantly to working-class women treated by the public health system. Clinical response rates and pain relief were significantly better at 1, 4, and 12 weeks for those receiving local anaesthetic injections than ischemic compression via physical therapy. The pain relief of women treated with local anaesthetic injections progressively improved at 1, 4, and 12 weeks after intervention. In contrast, women treated with ischemic compression did not show considerable changes in pain relief after intervention. In the local anaesthetic injection group, pain threshold

  14. Relative Gains, Losses, and Reference Points in Probabilistic Choice in Rats

    PubMed Central

    Marshall, Andrew T.; Kirkpatrick, Kimberly

    2015-01-01

    Theoretical reference points have been proposed to differentiate probabilistic gains from probabilistic losses in humans, but such a phenomenon in non-human animals has yet to be thoroughly elucidated. Three experiments evaluated the effect of reward magnitude on probabilistic choice in rats, seeking to determine reference point use by examining the effect of previous outcome magnitude(s) on subsequent choice behavior. Rats were trained to choose between an outcome that always delivered reward (low-uncertainty choice) and one that probabilistically delivered reward (high-uncertainty). The probability of high-uncertainty outcome receipt and the magnitudes of low-uncertainty and high-uncertainty outcomes were manipulated within and between experiments. Both the low- and high-uncertainty outcomes involved variable reward magnitudes, so that either a smaller or larger magnitude was probabilistically delivered, as well as reward omission following high-uncertainty choices. In Experiments 1 and 2, the between groups factor was the magnitude of the high-uncertainty-smaller (H-S) and high-uncertainty-larger (H-L) outcome, respectively. The H-S magnitude manipulation differentiated the groups, while the H-L magnitude manipulation did not. Experiment 3 showed that manipulating the probability of differential losses as well as the expected value of the low-uncertainty choice produced systematic effects on choice behavior. The results suggest that the reference point for probabilistic gains and losses was the expected value of the low-uncertainty choice. Current theories of probabilistic choice behavior have difficulty accounting for the present results, so an integrated theoretical framework is proposed. Overall, the present results have implications for understanding individual differences and corresponding underlying mechanisms of probabilistic choice behavior. PMID:25658448

  15. Isolated Deep Venous Thrombosis: Implications for 2-Point Compression Ultrasonography of the Lower Extremity.

    PubMed

    Adhikari, Srikar; Zeger, Wes; Thom, Christopher; Fields, J Matthew

    2015-09-01

    Two-point compression ultrasonography focuses on the evaluation of common femoral and popliteal veins for complete compressibility. The presence of isolated thrombi in proximal veins other than the common femoral and popliteal veins should prompt modification of 2-point compression technique. The objective of this study is to determine the prevalence and distribution of deep venous thrombi isolated to lower-extremity veins other than the common femoral and popliteal veins in emergency department (ED) patients with clinically suspected deep venous thrombosis. This was a retrospective study of all adult ED patients who received a lower-extremity venous duplex ultrasonographic examination for evaluation of deep venous thrombosis during a 6-year period. The ultrasonographic protocol included B-mode, color-flow, and spectral Doppler scanning of the common femoral, femoral, deep femoral, popliteal, and calf veins. Deep venous thrombosis was detected in 362 of 2,451 patients (14.7%; 95% confidence interval [CI] 13.3% to 16.1%). Thrombus confined to the common femoral vein alone was found in 5 of 362 cases (1.4%; 95% CI 0.2% to 2.6%). Isolated femoral vein thrombus was identified in 20 of 362 patients (5.5%; 95% CI 3.2% to 7.9%). Isolated deep femoral vein thrombus was found in 3 of 362 cases (0.8%; 95% CI -0.1% to 1.8%). Thrombus in the popliteal vein alone was identified in 53 of 362 cases (14.6%; 95% CI 11% to 18.2%). In our study, 6.3% of ED patients with suspected deep venous thrombosis had isolated thrombi in proximal veins other than common femoral and popliteal veins. Our study results support the addition of femoral and deep femoral vein evaluation to standard compression ultrasonography of the common femoral and popliteal vein, assuming that this does not have a deleterious effect on specificity. Copyright © 2014 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
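    The confidence intervals quoted above are consistent with the standard normal-approximation (Wald) interval for a proportion, sketched below (that the authors used exactly this method is an assumption):

```python
import math

def wald_ci(k, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a
    proportion k/n: p +/- z * sqrt(p * (1 - p) / n).
    """
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# 362 of 2,451 patients had DVT: reported as 14.7% (95% CI 13.3% to 16.1%).
lo, hi = wald_ci(362, 2451)
```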

  16. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  17. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    NASA Astrophysics Data System (ADS)

    Herzke, Tobias; Hohmann, Volker

    2005-12-01

    The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase in intelligibility

  18. AFRESh: an adaptive framework for compression of reads and assembled sequences with random access functionality.

    PubMed

    Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter

    2017-05-15

    The past decade has seen the introduction of new technologies that have progressively lowered the cost of genomic sequencing; indeed, the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in genomic data compression, not only in effectiveness (compression rate), but also in functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, current approaches mostly do not support random access, requiring full files to be transmitted, and are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16

  19. Scaling functions for the Inverse Compressibility near the QCD critical point

    NASA Astrophysics Data System (ADS)

    Lacey, Roy

    2017-09-01

    The QCD phase diagram can be mapped out by studying fluctuations and their response to changes in the temperature and baryon chemical potential. Theoretical studies indicate that the cumulant ratios Cn/Cm used to characterize the fluctuations of conserved charges provide a valuable probe of deconfinement and chiral dynamics, as well as of the position of the critical endpoint (CEP) in the QCD phase diagram. The ratio C1/C2, which is linked to the inverse compressibility, vanishes at the CEP owing to the divergence of the net quark number fluctuations at the critical point, which belongs to the Z(2) universality class. Therefore, its associated scaling function can give insight into the location of the CEP, as well as the critical exponents required to assign its static universality class. Scaling functions for the ratio C1/C2, obtained from net-proton multiplicity distributions for a broad range of collision centralities in Au+Au (√{sNN} = 7.7 - 200 GeV) collisions, will be presented and discussed.
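    Since C1 is the mean and C2 the variance of the event-by-event multiplicity distribution, the ratio can be sketched directly (the toy event counts below are illustrative, not data from the talk):

```python
import statistics

def c1_over_c2(samples):
    """Ratio of the first two cumulants of a multiplicity distribution:
    C1 = mean, C2 = variance. Large critical fluctuations inflate C2,
    driving C1/C2 toward zero near the critical endpoint.
    """
    return statistics.fmean(samples) / statistics.pvariance(samples)

# Toy event-by-event net-proton counts; wider fluctuations -> smaller ratio.
narrow = [9, 10, 10, 11] * 250   # mean 10, variance 0.5 -> ratio 20
wide = [2, 6, 10, 14, 18] * 200  # mean 10, variance 32  -> ratio 0.3125
assert c1_over_c2(narrow) > c1_over_c2(wide)
```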

  20. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  1. The α-γ-ɛ triple point and phase boundaries of iron under shock compression

    NASA Astrophysics Data System (ADS)

    Li, Jun; Wu, Qiang; Xue, Tao; Geng, Huayun; Yu, Jidong; Jin, Ke; Li, Jiabo; Tan, Ye; Xi, Feng

    2017-07-01

    The phase transition of iron under shock compression has attracted much attention in recent decades because of its importance in fields such as condensed matter physics, geophysics, and metallurgy. At room temperature, the transition of iron from the α-phase (bcc) to the ɛ-phase (hcp) occurs at a stress of 13 GPa. At high temperature, a triple point followed by transformation to the γ-phase (fcc) is expected. However, the details of the high-temperature phase transitions of iron are still under debate. Here, we investigate the phase-transition behavior of polycrystalline iron under compression from room temperature to 820 K. The results show that the shock-induced phase transition is determined unequivocally from the measured three-wave-structure profiles, which clearly consist of an elastic wave, a plastic wave, and a phase-transition wave. The phase transition is temperature-dependent, with an average rate Δσtr/ΔT of -6.91 MPa/K below 700 K and -34.7 MPa/K at higher temperatures. The shock α-ɛ and α-γ phase boundaries intersect at 10.6 ± 0.53 GPa and 763 K, which agrees with the α-ɛ-γ triple point from early shock wave experiments and recent laser-heated diamond-anvil cell resistivity and in situ X-ray diffraction data but disagrees with the shock pressure-temperature phase diagram reported in 2009 by Zaretsky [J. Appl. Phys. 106, 023510 (2009)].

  2. Mathematical modelling of the beam under axial compression force applied at any point – the buckling problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnucka-Blandzi, Ewa

    The study is devoted to the stability of a simply supported beam under axial compression. The beam is subjected to an axial load located at any point along its axis. The buckling problem has been described and solved mathematically, and critical loads have been calculated. In a particular case, Euler's buckling load is obtained. Explicit solutions are given. The values of the critical loads are collected in tables and shown in a figure. The relation between the point of load application and the critical load is presented.
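    For the classical case recovered by the study (the compressive force applied at the beam end), Euler's buckling load for a pinned-pinned beam is P_cr = π²EI/L²; a minimal sketch with illustrative (hypothetical) beam values:

```python
import math

def euler_critical_load(E, I, L):
    """Euler buckling load for a simply supported (pinned-pinned) beam
    with the axial compressive force applied at the beam end:
        P_cr = pi**2 * E * I / L**2
    E is Young's modulus (Pa), I the second moment of area (m^4),
    L the beam length (m).
    """
    return math.pi**2 * E * I / L**2

# Illustrative steel beam: E = 210 GPa, I = 8.0e-6 m^4, L = 3 m.
P_cr = euler_critical_load(210e9, 8.0e-6, 3.0)  # ~1.84 MN
```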

  3. High-quality lossy compression: current and future trends

    NASA Astrophysics Data System (ADS)

    McLaughlin, Steven W.

    1995-01-01

    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell shape, region shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.

  4. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    PubMed

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) removed much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.

  5. A Genuine Jahn-Teller System with Compressed Geometry and Quantum Effects Originating from Zero-Point Motion.

    PubMed

    Aramburu, José Antonio; García-Fernández, Pablo; García-Lastra, Juan María; Moreno, Miguel

    2016-07-18

    First-principles calculations together with analysis of the experimental data found for 3d(9) and 3d(7) ions in cubic oxides proved that the center found in irradiated CaO:Ni(2+) corresponds to Ni(+) under a static Jahn-Teller effect displaying a compressed equilibrium geometry. It was also shown that the anomalous positive g∥ shift (g∥ - g0 = 0.065) measured at T=20 K obeys the superposition of the |3z(2)-r(2)⟩ and |x(2)-y(2)⟩ states driven by quantum effects associated with the zero-point motion, a mechanism first put forward by O'Brien for static Jahn-Teller systems and later extended by Ham to the dynamic Jahn-Teller case. To our knowledge, this is the first genuine Jahn-Teller system (i.e. in which exact degeneracy exists at the high-symmetry configuration) exhibiting a compressed equilibrium geometry for which large quantum effects allow experimental observation of the effect predicted by O'Brien. Analysis of the calculated energy barriers for different Jahn-Teller systems allowed us to explain the origin of the compressed geometry observed for CaO:Ni(+). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
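    The referential idea, storing only the differences from a known reference, can be sketched minimally (equal-length sequences and substitution-only differences are simplifying assumptions of this sketch; real tools also handle insertions and deletions):

```python
import zlib

def referential_compress(target, reference):
    """Store only (position, base) differences between an equal-length
    target and reference, then entropy-code the difference list."""
    diffs = [(i, t) for i, (t, r) in enumerate(zip(target, reference)) if t != r]
    payload = ";".join(f"{i}:{t}" for i, t in diffs).encode()
    return zlib.compress(payload)

def referential_decompress(blob, reference):
    """Reconstruct the target by patching the reference with the diffs."""
    seq = list(reference)
    text = zlib.decompress(blob).decode()
    if text:
        for item in text.split(";"):
            i, t = item.split(":")
            seq[int(i)] = t
    return "".join(seq)

reference = "ACGT" * 1000
target = list(reference)
target[5] = "T"; target[777] = "A"; target[2048] = "C"
target = "".join(target)
blob = referential_compress(target, reference)
assert referential_decompress(blob, reference) == target
assert len(blob) < len(target) // 50   # tiny next to the raw sequence
```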

  7. Inflight calibration technique for onboard high-gain antenna pointing. [of Mariner 10 spacecraft in Venus and Mercury flyby mission

    NASA Technical Reports Server (NTRS)

    Ohtakay, H.; Hardman, J. M.

    1975-01-01

    The X-band radio frequency communication system was used for the first time in deep space planetary exploration by the Mariner 10 Venus and Mercury flyby mission. This paper presents the technique utilized for and the results of inflight calibration of high-gain antenna (HGA) pointing. Also discussed is pointing accuracy to maintain a high data transmission rate throughout the mission, including the performance of HGA pointing during the critical period of Mercury encounter.

  8. Raman Spectroscopy of Rdx Single Crystals Under Static Compression

    NASA Astrophysics Data System (ADS)

    Dreger, Zbigniew A.; Gupta, Yogendra M.

    2007-12-01

    To gain insight into the high-pressure response of the energetic crystal RDX, Raman measurements were performed under hydrostatic compression up to 15 GPa. Several distinct changes in the spectra were found at 4.0±0.3 GPa, confirming the α-γ phase transition previously observed in polycrystalline samples. Symmetry correlation analyses indicate that the γ-polymorph may assume a space group isomorphous with the point group D2h, with eight molecules occupying C1 symmetry sites, similar to the α-phase. It is proposed that factor group coupling can account for the observed increase in the number of modes in the γ-phase.

  9. Dynamics of cochlear nonlinearity: Automatic gain control or instantaneous damping?

    PubMed

    Altoè, Alessandro; Charaziak, Karolina K; Shera, Christopher A

    2017-12-01

    Measurements of basilar-membrane (BM) motion show that the compressive nonlinearity of cochlear mechanical responses is not an instantaneous phenomenon. For this reason, the cochlear amplifier has been thought to incorporate an automatic gain control (AGC) mechanism characterized by a finite reaction time. This paper studies the effect of instantaneous nonlinear damping on the responses of oscillatory systems. The principal results are that (i) instantaneous nonlinear damping produces a noninstantaneous gain control that differs markedly from typical AGC strategies; (ii) the kinetics of compressive nonlinearity implied by the finite reaction time of an AGC system appear inconsistent with the nonlinear dynamics measured on the gerbil basilar membrane; and (iii) conversely, those nonlinear dynamics can be reproduced using a harmonic oscillator with instantaneous nonlinear damping. Furthermore, existing cochlear models that include instantaneous gain-control mechanisms capture the principal kinetics of BM nonlinearity. Thus, an AGC system with finite reaction time appears neither necessary nor sufficient to explain nonlinear gain control in the cochlea.
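
    A minimal numerical sketch of result (iii) — a driven harmonic oscillator whose damping grows instantaneously with response amplitude — shows the compressive behavior directly. The parameter values below are illustrative, not taken from the paper:

```python
# Toy harmonic oscillator with instantaneous nonlinear damping,
#   x'' + gamma*(1 + beta*x^2)*x' + w0^2 * x = F*cos(w0*t),
# illustrating compressive gain control: damping rises with the response
# amplitude itself, so the output grows sublinearly with drive level.
# All parameter values are illustrative.

import math

def steady_amplitude(F, gamma=0.1, beta=50.0, w0=1.0, dt=0.01, cycles=200):
    """Drive the oscillator at resonance and return its steady-state peak."""
    x, v, t = 0.0, 0.0, 0.0
    steps = int(cycles * 2 * math.pi / (w0 * dt))
    peak = 0.0
    for k in range(steps):
        a = F * math.cos(w0 * t) - gamma * (1.0 + beta * x * x) * v - w0 * w0 * x
        v += a * dt            # semi-implicit Euler step
        x += v * dt
        t += dt
        if k > steps // 2:     # measure only after transients have decayed
            peak = max(peak, abs(x))
    return peak
```

    Doubling the drive level less than doubles the steady-state response, even though the damping law contains no explicit reaction time.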

  10. Texture Studies and Compression Behaviour of Apple Flesh

    NASA Astrophysics Data System (ADS)

    James, Bryony; Fonseca, Celia

    Compressive behaviour of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types: hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behaviour of the parenchyma cells during fracture and compression using a purpose-designed sample holder and compression tester. Fracture behaviour reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging, the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.

  11. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    PubMed

    Park, Sang-Sub

    2014-01-01

    The purpose of this study was to assess the difference in chest compression quality between a modified chest compression method guided by a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people), and those using the standardized method formed the traditional group (31 people). Both groups used the same practice and evaluation manikins; the smartphone group used an application running on two smartphone products (G and i) under the Android and iOS operating systems (OS). Measurements were conducted on September 25-26, 2012, and data were analyzed with the SPSS WIN 12.0 program. The traditional group achieved a more appropriate compression depth (53.77 mm) than the smartphone group (48.35 mm; p < 0.01), and a higher proportion of proper chest compressions (73.96% vs. 60.51%; p < 0.05). Perceived chest compression accuracy was also higher in the traditional group (3.83 points) than in the smartphone group (2.32 points; p < 0.001). In an additional question administered only to the smartphone group, the main reasons given for rating the modified method negatively were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  12. Comparison of vertical hydraulic conductivity in a streambed-point bar system of a gaining stream

    NASA Astrophysics Data System (ADS)

    Dong, Weihong; Chen, Xunhong; Wang, Zhaowei; Ou, Gengxin; Liu, Can

    2012-07-01

    Vertical hydraulic conductivities (Kv) of both streambeds and point bars can influence water and solute exchange between streams and surrounding groundwater systems. The sediments in point bars are relatively young compared to the older sediments in the adjacent aquifers, but slightly older than submerged streambeds. Thus, the permeability of point bar sediments can differ not only from that of the regional aquifer but also from that of the modern streambed. However, there is a lack of detailed studies documenting the spatial variability of vertical hydraulic conductivity in point bars of meandering streams. In this study, the authors proposed an in situ permeameter test method to measure the vertical hydraulic conductivity of two point bars in Clear Creek, Nebraska, USA. We compared the Kv values in the streambed and adjacent point bars through 45 test locations in the two point bars and 51 test locations in the streambed. The Kv values in the point bars were lower than those in the streambed. A Kruskal-Wallis test confirmed that the Kv values from the point bars and from the channel came from two statistically different populations. Within a point bar, the Kv values were higher along the point bar edges than in the inner point bar. Grain size analysis indicated that slightly more silt and clay particles existed in sediments from the inner point bars than in those from the streambed and from locations near the point bar edges. While point bars are deposits of the adjacent channel, the comparison of the two groups of Kv values suggests that post-depositional processes affected the evolution of Kv from channel to point bars in fluvial deposits. We believe that the transport of fine particles and gas ebullition in this gaining stream had significant effects on the distribution of Kv values in the streambed-point bar system. With the ageing of deposition in a floodplain, the permeability of point bar sediments can likely decrease due to reduced effects of the upward

  13. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
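
    The XOR-leading-zero quantity that the shifting-offset optimization maximizes can be computed directly from IEEE-754 bit patterns; a minimal sketch (not the paper's optimized implementation):

```python
# Sketch of the XOR-leading-zero idea used by error-bounded float
# compressors: XORing the bit patterns of two close values yields a run of
# leading zero bits that need not be stored. Illustrative only.

import struct

def float_bits(x):
    """IEEE-754 double-precision bit pattern of x, as an int."""
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def xor_leading_zeros(a, b):
    """Leading zero bits in bits(a) XOR bits(b), for 64-bit doubles."""
    x = float_bits(a) ^ float_bits(b)
    if x == 0:
        return 64
    return 64 - x.bit_length()
```

    Close values share sign, exponent, and the high mantissa bits, so their XOR has a long run of leading zeros; distant values share almost nothing, which is why shifting data toward each other before XORing pays off.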

  14. A design approach for systems based on magnetic pulse compression.

    PubMed

    Kumar, D Durga Praveen; Mitra, S; Senthil, K; Sharma, D K; Rajan, Rehim N; Sharma, Archana; Nagesh, K V; Chakravarthy, D P

    2008-04-01

    A design approach giving the optimum number of stages in a magnetic pulse compression circuit and gain per stage is given. The limitation on the maximum gain per stage is discussed. The total system volume minimization is done by considering the energy storage capacitor volume and magnetic core volume at each stage. At the end of this paper, the design of a magnetic pulse compression based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed with its experimental results.
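
    The basic stage-count trade-off — n identical stages of gain g give a total compression of g**n, so a cap on the practical per-stage gain fixes the minimum number of stages — can be sketched as follows. The cap value in the example is illustrative, not a figure from the paper:

```python
# Hedged sketch of the stage-count trade-off in magnetic pulse compression:
# n identical stages of per-stage gain g give total gain g**n, so for a
# required total gain, fewer stages mean a larger (harder to realize)
# per-stage gain. The per-stage limit used here is purely illustrative.

def stages_needed(total_gain, max_gain_per_stage):
    """Smallest stage count n whose equal per-stage gain
    total_gain**(1/n) does not exceed the practical per-stage limit."""
    n = 1
    while total_gain ** (1.0 / n) > max_gain_per_stage:
        n += 1
    return n, total_gain ** (1.0 / n)
```

    The actual optimum in the paper additionally weighs capacitor and magnetic-core volume at each stage; this sketch only captures the gain-splitting constraint.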

  15. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    PubMed

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method combining high-speed single-point diamond milling and precision compression molding processes for the fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions based on the combination of rotational and translational motions of both the high-speed spindle and linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was shown to be accurate in detecting an infrared wavefront in both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economical and precise optical fabrication technology for high-volume production of infrared optics.

  16. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

    Summary The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603
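
    The rescaling described above can be illustrated with a generic divisive-normalization toy model (a textbook form, not the model fitted to the ferret data): gain falls as recent stimulus contrast rises, so responses partially compensate for contrast changes.

```python
# Toy divisive-normalization model of contrast gain control. This is a
# generic textbook form, NOT the model fitted in the paper; g0 and c50
# are illustrative parameters.

def gain(contrast, g0=1.0, c50=0.2):
    """Gain shrinks as recent contrast grows; at low contrast gain is high."""
    return g0 / (1.0 + contrast / c50)

def response(stimulus, contrast):
    """Neural response under contrast-dependent gain rescaling."""
    return gain(contrast) * stimulus
```

    With this form, the same stimulus increment evokes a larger response in a low-contrast environment than in a high-contrast one, mirroring the increased sensitivity the paper reports at low contrast.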

  17. Determination of preferred parameters for multichannel compression using individually fitted simulated hearing AIDS and paired comparisons.

    PubMed

    Moore, Brian C J; Füllgrabe, Christian; Stone, Michael A

    2011-01-01

    To determine preferred parameters of multichannel compression using individually fitted simulated hearing aids and a method of paired comparisons. Fourteen participants with mild to moderate hearing loss listened via a simulated five-channel compression hearing aid fitted using the CAMEQ2-HF method to pairs of speech sounds (a male talker and a female talker) and musical sounds (a percussion instrument, orchestral classical music, and a jazz trio) presented sequentially and indicated which sound of the pair was preferred and by how much. The sounds in each pair were derived from the same token and differed along a single dimension in the type of processing applied. For the speech sounds, participants judged either pleasantness or clarity; in the latter case, the speech was presented in noise at a 2-dB signal-to-noise ratio. For musical sounds, they judged pleasantness. The parameters explored were time delay of the audio signal relative to the gain control signal (the alignment delay), compression speed (attack and release times), bandwidth (5, 7.5, or 10 kHz), and gain at high frequencies relative to that prescribed by CAMEQ2-HF. Pleasantness increased with increasing alignment delay only for the percussive musical sound. Clarity was not affected by alignment delay. There was a trend for pleasantness to decrease slightly with increasing bandwidth, but this was significant only for female speech with fast compression. Judged clarity was significantly higher for the 7.5- and 10-kHz bandwidths than for the 5-kHz bandwidth for both slow and fast compression and for both talker genders. Compression speed had little effect on pleasantness for 50- or 65-dB SPL input levels, but slow compression was generally judged as slightly more pleasant than fast compression for an 80-dB SPL input level. Clarity was higher for slow than for fast compression for input levels of 80 and 65 dB SPL but not for a level of 50 dB SPL. Preferences for pleasantness were approximately equal

  18. RF pulse compression for future linear colliders

    NASA Astrophysics Data System (ADS)

    Wilson, Perry B.

    1995-07-01

    Future (nonsuperconducting) linear colliders will require very high values of peak rf power per meter of accelerating structure. The role of rf pulse compression in producing this power is examined within the context of overall rf system design for three future colliders at energies of 1.0-1.5 TeV, 5 TeV, and 25 TeV. In order to keep the average AC input power and the length of the accelerator within reasonable limits, a collider in the 1.0-1.5 TeV energy range will probably be built at an x-band rf frequency, and will require a peak power on the order of 150-200 MW per meter of accelerating structure. A 5 TeV collider at 34 GHz with a reasonable length (35 km) and AC input power (225 MW) would require about 550 MW per meter of structure. Two-beam accelerators can achieve peak powers of this order by applying dc pulse compression techniques (induction linac modules) to produce the drive beam. Klystron-driven colliders achieve high peak power by a combination of dc pulse compression (modulators) and rf pulse compression, with about the same overall rf system efficiency (30-40%) as a two-beam collider. A high gain (6.8) three-stage binary pulse compression system with high efficiency (80%) is described, which (compared to a SLED-II system) can be used to reduce the klystron peak power by about a factor of two, or alternatively, to cut the number of klystrons in half for a 1.0-1.5 TeV x-band collider. For a 5 TeV klystron-driven collider, a high gain, high efficiency rf pulse compression system is essential.

  19. ZFP compression plugin (filter) for HDF5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Mark C.

    H5Z-ZFP is a compression plugin (filter) for the HDF5 library based upon the ZFP-0.5.0 compression library. It supports 4- or 8-byte integer or floating point HDF5 datasets of any dimension, provided they are partitioned in 1-, 2-, or 3-dimensional chunks. It supports ZFP's four fundamental modes of operation: rate, precision, accuracy, and expert. It is a lossy compression plugin.
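
    The fixed-rate idea underlying ZFP — each small block of values fits a fixed bit budget, so compressed blocks can be located by simple arithmetic — can be illustrated with a much simpler stand-in: a shared block exponent plus fixed-width quantization. This is not ZFP's actual transform or embedded coding, only a sketch of the fixed-rate principle:

```python
# Minimal stand-in for fixed-rate block compression: store one shared
# exponent per block and quantize each value to a fixed number of bits.
# Not the ZFP algorithm (no orthogonal block transform, no embedded
# coding) -- just the fixed-budget idea that enables random access.

import math

def compress_block(block, bits=16):
    """Quantize a block of floats to `bits` bits each plus a shared exponent."""
    emax = max((math.frexp(abs(v))[1] for v in block if v != 0.0), default=0)
    scale = 2.0 ** (bits - 1 - emax)
    q = [max(-(2 ** (bits - 1)),
             min(2 ** (bits - 1) - 1, int(round(v * scale))))
         for v in block]
    return emax, q

def decompress_block(emax, q, bits=16):
    scale = 2.0 ** (bits - 1 - emax)
    return [v / scale for v in q]
```

    Because every block consumes the same budget, reading block k of a compressed array needs no index: its bits start at k times the per-block budget.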

  20. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB at practical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.

  1. Gain curves and hydrodynamic modeling for shock ignition

    NASA Astrophysics Data System (ADS)

    Lafon, M.; Ribeyre, X.; Schurtz, G.

    2010-05-01

    Ignition of a precompressed thermonuclear fuel by means of a converging shock is now considered as a credible scheme to obtain high gains for inertial fusion energy. This work aims at modeling the successive stages of the fuel time history, from compression to final thermonuclear combustion, in order to provide the gain curves of shock ignition (SI). The leading physical mechanism at work in SI is pressure amplification, at first by spherical convergence, and by collision with the shock reflected at center during the stagnation process. These two effects are analyzed, and ignition conditions are provided as functions of the shock pressure and implosion velocity. Ignition conditions are obtained from a non-isobaric fuel assembly, for which we present a gain model. The corresponding gain curves exhibit a significantly lower ignition threshold and higher target gains than conventional central ignition.

  2. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.

  3. Video Compression Study: h.265 vs h.264

    NASA Technical Reports Server (NTRS)

    Pryor, Jonathan

    2016-01-01

    H.265 video compression (also known as High Efficiency Video Encoding (HEVC)) promises to provide double the video quality at the same bandwidth, or the same quality at half the bandwidth, of h.264 video compression [1]. This study uses a Tektronix PQA500 to determine the video quality gains obtained by using h.265 encoding. This study also compares two video encoders to see how different implementations of h.264 and h.265 impact video quality at various bandwidths.

  4. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    NASA Astrophysics Data System (ADS)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young’s modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former compared to the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression for a fundamental understanding of HDA silica.

  5. DSN 70-meter antenna X-band gain, phase, and pointing performance, with particular application for Voyager 2 Neptune encounter

    NASA Technical Reports Server (NTRS)

    Slobin, S. D.; Bathker, D. A.

    1988-01-01

    The gain, phase, and pointing performance of the Deep Space Network (DSN) 70 m antennas are investigated using theoretical antenna analysis computer programs that consider the gravity induced deformation of the antenna surface and quadripod structure. The microwave effects are calculated for normal subreflector focusing motion and for special fixed-subreflector conditions that may be used during the Voyager 2 Neptune encounter. The frequency stability effects of stepwise lateral and axial subreflector motions are also described. Comparisons with recently measured antenna efficiency and subreflector motion tests are presented. A modification to the existing 70 m antenna pointing squint correction constant is proposed.

  6. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  7. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, at a 50% compression rate, the distortion was, mean ± SD, 5.46 ± 1.01% for the optimal wavelet versus 12.76 ± 2.73% for the worst wavelet). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
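
    The transform-threshold-reconstruct loop that the distortion-rate criterion evaluates can be sketched with a fixed Haar wavelet (the paper optimizes the mother wavelet per signal; plain Haar here is purely illustrative):

```python
# Wavelet-domain compression sketch with a fixed Haar wavelet: transform,
# keep only the largest-magnitude coefficients, invert, and compare with
# the original to obtain the distortion. The paper's method additionally
# optimizes the mother wavelet itself; Haar is used here for illustration.

def haar_forward(x):
    """In-place pyramid Haar transform; len(x) must be a power of two."""
    out = list(x)
    n = len(out)
    while n > 1:
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            out[i] = (tmp[2 * i] + tmp[2 * i + 1]) / 2.0        # averages
            out[half + i] = (tmp[2 * i] - tmp[2 * i + 1]) / 2.0  # details
        n = half
    return out

def haar_inverse(c):
    out = list(c)
    n = 1
    while n < len(out):
        tmp = out[:2 * n]
        for i in range(n):
            out[2 * i] = tmp[i] + tmp[n + i]
            out[2 * i + 1] = tmp[i] - tmp[n + i]
        n *= 2
    return out

def wavelet_compress(x, keep):
    """Zero all but (roughly) the `keep` largest-magnitude coefficients."""
    c = haar_forward(x)
    thresh = sorted((abs(v) for v in c), reverse=True)[keep - 1]
    kept = [v if abs(v) >= thresh else 0.0 for v in c]
    return haar_inverse(kept)
```

    Keeping all coefficients reconstructs the signal exactly; dropping coefficients trades distortion for rate, which is exactly the quantity the wavelet optimization minimizes.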

  8. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.

  9. Changes in blood flow and cellular metabolism at a myofascial trigger point with trigger point release (ischemic compression): a proof-of-principle pilot study

    PubMed Central

    Moraska, Albert F.; Hickner, Robert C.; Kohrt, Wendy M.; Brewer, Alan

    2012-01-01

    Objective To demonstrate proof-of-principle measurement for physiological change within an active myofascial trigger point (MTrP) undergoing trigger point release (ischemic compression). Design Interstitial fluid was sampled continuously at a trigger point before and after intervention. Setting A biomedical research clinic at a university hospital. Participants Two subjects from a pain clinic presenting with chronic headache pain. Interventions A single microdialysis catheter was inserted into an active MTrP of the upper trapezius to allow for continuous sampling of interstitial fluid before and after application of trigger point therapy by a massage therapist. Main Outcome Measures Procedural success, pain tolerance, feasibility of intervention during sample collection, determination of physiologically relevant values for local blood flow, as well as glucose and lactate concentrations. Results Both patients tolerated the microdialysis probe insertion into the MTrP and treatment intervention without complication. Glucose and lactate concentrations were measured in the physiological range. Following intervention, a sustained increase in lactate was noted for both subjects. Conclusions Identifying physiological constituents of MTrPs following intervention is an important step toward understanding the pathophysiology and resolution of myofascial pain. The present study forwards that aim by showing proof-of-concept that collection of interstitial fluid from an MTrP before and after intervention can be accomplished using microdialysis, thus providing methodological insight toward treatment mechanism and pain resolution. Of the biomarkers measured in this study, lactate may be the most relevant for detection and treatment of abnormalities in the MTrP. PMID:22975226

  10. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removing redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. The proposed algorithm achieves the best compression ratio for DNA sequences among the compared compression algorithms, particularly for larger genomes. While achieving the best compression ratios for DNA sequences (genomes), DNABIT Compress also significantly improves on the running time of previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced by this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods do not achieve a ratio below 1.72 bits/base.
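
    The bit-code tables of DNABIT Compress are not given in the abstract, so the sketch below (our own illustration, with hypothetical names) shows only the baseline that any repeat-aware DNA coder must beat: plain 2-bit-per-base packing, four bases per byte, losslessly unpacked.

```python
# Toy illustration of bit-level DNA packing (not the paper's actual
# BIT CODE tables): each base maps to 2 bits, so n bases fit in
# ceil(n/4) bytes -- a fixed rate of 2.0 bits/base that repeat-aware
# schemes such as DNABIT Compress then improve on.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        # left-align a final partial group so unpacking stays simple
        byte <<= 2 * (4 - len(seq[i:i + 4]))
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    bases = "ACGT"
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(bases[(byte >> shift) & 0b11])
    return "".join(seq[:n])

seq = "ACGTACGTAC"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(len(packed), "bytes for", len(seq), "bases")  # 3 bytes for 10 bases
```

    At a fixed 2.0 bits/base this baseline typically already outperforms general-purpose text coders on DNA; the repeat-specific bit codes described above are what push the ratio down toward 1.58 bits/base.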

  11. Multivariable control of vapor compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, X.D.; Liu, S.; Asada, H.H.

    1999-07-01

    This paper presents the results of a study of multi-input multi-output (MIMO) control of vapor compression cycles that have multiple actuators and sensors for regulating multiple outputs, e.g., superheat and evaporating temperature. The conventional single-input single-output (SISO) control was shown to have very limited performance. A low-order lumped-parameter model was developed to describe the significant dynamics of vapor compression cycles. Dynamic modes were analyzed based on the low-order model to provide physical insight into system dynamic behavior. To synthesize a MIMO control system, the Linear-Quadratic Gaussian (LQG) technique was applied to coordinate compressor speed and expansion valve opening with guaranteed stability robustness in the design. Furthermore, to control a vapor compression cycle over a wide range of operating conditions where system nonlinearities become evident, a gain scheduling scheme was used so that the MIMO controller could adapt to changing operating conditions. Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor compression cycles compared to the conventional SISO control scheme. The MIMO control proposed in this paper could be extended to the control of vapor compression cycles in a variety of HVAC and refrigeration applications to improve system performance and energy efficiency.
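
    The gain-scheduling step lends itself to a minimal numeric sketch (illustrative matrices only; the paper's identified cycle model and LQG weights are not reproduced): gains designed at two operating points are interpolated by the current operating condition.

```python
# Gain scheduling in miniature (illustrative numbers only; the paper's
# identified cycle model and LQG weights are not reproduced): feedback
# gain matrices designed at two operating points are interpolated by
# the current load condition, so the MIMO controller adapts as the
# vapor compression cycle moves across its operating envelope.
def scheduled_gain(load, K_low, K_high, lo=0.2, hi=1.0):
    """Linearly interpolate between two gain matrices, clamped to [lo, hi]."""
    t = min(1.0, max(0.0, (load - lo) / (hi - lo)))
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(K_low, K_high)]

# Hypothetical 2x2 gains mapping (superheat, evaporating-temperature)
# errors to (compressor speed, expansion valve opening) commands.
K_low  = [[0.8, 0.1], [0.2, 0.5]]
K_high = [[1.6, 0.3], [0.4, 1.1]]

K = scheduled_gain(0.6, K_low, K_high)       # mid-range operating point
u = [-sum(k * e for k, e in zip(row, [1.5, -0.5])) for row in K]
print("scheduled gains:", K)
print("actuator commands:", u)
```

    In practice the stored gains come from LQG designs at each operating point; the interpolation is only the scheduling glue between them.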

  12. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  13. IQ Gains and the Binet Decrements.

    ERIC Educational Resources Information Center

    Flynn, James R.

    1984-01-01

    Thorndike's Stanford-Binet data suggest that from 1932 to 1971-72 preschool children enjoyed greater IQ gains than older children, possibly due to the rise of television. Additional analysis indicated that gains were either due to sampling error or totally antedated 1947. Gains of 12 IQ points were found for Americans. (Author/EGS)

  14. Compression of laminated composite beams with initial damage

    NASA Technical Reports Server (NTRS)

    Breivik, Nicole L.; Gurdal, Zafer; Griffin, O. H., Jr.

    1993-01-01

    The effects of isolated damage modes on the compressive strength and failure characteristics of laminated composite test specimens were evaluated experimentally and numerically. In addition to specimens without initial damage, specimens with three types of initial damage were considered: (1) specimens with short delaminations distributed evenly through the specimen thickness, (2) specimens with a few long delaminations, and (3) specimens with local fiber damage in the surface plies under the three-point-bend contact point. It was found that specimens with multiple short delaminations experienced the greatest reduction in compression strength compared to the undamaged specimens. Single delaminations far from the specimen surface had little effect on the final compression strength, and moderate strength reduction was observed for specimens with localized surface-ply damage.

  15. Compressibility, Laws of Nature, Initial Conditions and Complexity

    NASA Astrophysics Data System (ADS)

    Chibbaro, Sergio; Vulpiani, Angelo

    2017-10-01

    We critically analyse the point of view that laws of nature are just a means to compress data. Discussing some basic notions of dynamical systems and information theory, we show that the idea that analysing large amounts of data with a compression algorithm is equivalent to the knowledge obtained from scientific laws is rather naive. In particular, we discuss the subtle conceptual issue of the initial conditions of phenomena, which are generally incompressible. Starting from this point, we argue that laws of nature represent more than a pure compression of data, and that the availability of large amounts of data is, in general, not particularly useful for understanding the behaviour of complex phenomena.

  16. Using Compression Isotherms of Phospholipid Monolayers to Explore Critical Phenomena: A Biophysical Chemistry Experiment

    ERIC Educational Resources Information Center

    Gragson, Derek E.; Beaman, Dan; Porter, Rhiannon

    2008-01-01

    Two experiments are described in which students explore phase transitions and critical phenomena by obtaining compression isotherms of phospholipid monolayers using a Langmuir trough. Through relatively simple analysis of their data, students gain a better understanding of compression isotherms, the application of the Clapeyron equation, the…

  17. Fast generation of complex modulation video holograms using temporal redundancy compression and hybrid point-source/wave-field approaches

    NASA Astrophysics Data System (ADS)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2015-09-01

    The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.

  18. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
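
    The core conditional-replenishment decision can be sketched in a few lines (toy frame sizes and threshold, not the report's simulation): only blocks whose change against the frame store exceeds a threshold are transmitted.

```python
# Minimal conditional-replenishment sketch (a toy version of the scheme
# described above): only 8x8 blocks whose mean absolute change against
# the previous frame exceeds a threshold are "transmitted"; everything
# else is repeated from the frame store.
import random

W = H = 64
BLOCK = 8

def blocks():
    for by in range(0, H, BLOCK):
        for bx in range(0, W, BLOCK):
            yield by, bx

def changed(prev, cur, by, bx, thresh):
    diff = sum(abs(cur[y][x] - prev[y][x])
               for y in range(by, by + BLOCK)
               for x in range(bx, bx + BLOCK))
    return diff / BLOCK**2 > thresh

random.seed(0)
prev = [[random.randint(0, 255) for _ in range(W)] for _ in range(H)]
# Next frame: identical except a moving object in the top-left quadrant.
cur = [row[:] for row in prev]
for y in range(32):
    for x in range(32):
        cur[y][x] = random.randint(0, 255)

sent = sum(changed(prev, cur, by, bx, thresh=4) for by, bx in blocks())
total = (W // BLOCK) * (H // BLOCK)
print(f"transmitted {sent}/{total} blocks")  # only the changed quadrant
```

    The change fraction computed here is exactly the quantity that overloads the channel when more than the budgeted portion of the image moves, which is what the variable threshold and frame-repeat modes above are designed to handle.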

  19. A test data compression scheme based on irrational numbers stored coding.

    PubMed

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    Testing has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for several ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  20. Adaptive gain and filtering circuit for a sound reproduction system

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor)

    1998-01-01

    Adaptive compressive gain and level-dependent spectral shaping circuitry for a hearing aid includes a microphone to produce an input signal and a plurality of channels connected to a common circuit output. Each channel has a preset frequency response. Each channel includes a filter with a preset frequency response to receive the input signal and produce a filtered signal, a channel amplifier to amplify the filtered signal and produce a channel output signal, a threshold register to establish a channel threshold level, and a gain circuit. The gain circuit increases the gain of the channel amplifier when the channel output signal falls below the channel threshold level and decreases the gain of the channel amplifier when the channel output signal rises above the channel threshold level. A transducer produces sound in response to the signal passed by the common circuit output.
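
    A minimal software sketch of the per-channel gain rule (illustrative constants, not the patent's circuit values):

```python
# The per-channel gain rule in a few lines (illustrative constants,
# not the patent's actual circuit values): the channel gain is nudged
# up while the channel output sits below its threshold register and
# nudged down while it sits above, compressing the output level toward
# the threshold.
def agc(samples, threshold, gain=1.0, step=0.05):
    out = []
    for s in samples:
        y = gain * s
        out.append(y)
        if abs(y) < threshold:
            gain *= 1.0 + step      # quiet channel: boost
        else:
            gain *= 1.0 - step      # loud channel: compress
    return out, gain

# A steady low-level tone is gradually amplified up to the threshold.
out, g = agc([0.1] * 200, threshold=0.5)
print(f"first output {out[0]:.3f}, last output {out[-1]:.3f}, final gain {g:.2f}")
```

    In the actual device this loop runs independently in each frequency channel, which is what makes the spectral shaping level-dependent.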

  1. A review of lossless audio compression standards and algorithms

    NASA Astrophysics Data System (ADS)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and of higher storage demands. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) because of its popularity and robustness in audio compression; other prediction methods are nevertheless compared for verification. Advanced representations of LPC, such as LSP decomposition techniques, are also discussed.
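
    The reason LPC dominates lossless audio coding can be shown with a fixed second-order predictor of the kind used by Shorten and FLAC (a generic illustration, not any specific standard's coder): prediction turns a smooth waveform into a small residual, and the step is exactly invertible.

```python
# Minimal sketch of why linear prediction helps lossless audio coding:
# a fixed second-order predictor turns a smooth waveform into a
# small-valued residual, which an entropy coder then stores in far
# fewer bits.  Because the process is exactly invertible on integer
# samples, the scheme is lossless.
import math

def predict_residual(x):
    # prediction: x[n] ~ 2*x[n-1] - x[n-2]
    return [x[0], x[1] - x[0]] + [x[n] - 2 * x[n - 1] + x[n - 2]
                                  for n in range(2, len(x))]

def reconstruct(r):
    x = [r[0], r[0] + r[1]]
    for n in range(2, len(r)):
        x.append(r[n] + 2 * x[n - 1] - x[n - 2])
    return x

# A 440 Hz tone sampled at 44.1 kHz, 16-bit-style integer samples.
signal = [round(1000 * math.sin(2 * math.pi * 440 * n / 44100))
          for n in range(256)]
residual = predict_residual(signal)

assert reconstruct(residual) == signal          # lossless round trip
print("max |sample|  :", max(map(abs, signal)))
print("max |residual|:", max(map(abs, residual[2:])))
```

    Full LPC coders fit the predictor coefficients to each frame instead of fixing them, but the residual-plus-entropy-coding pipeline is the same.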

  2. Stability of compressible Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Chow, Chuen-Yen

    1991-01-01

    Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.

  3. Complete chirp analysis of a gain-switched pulse using an interferometric two-photon absorption autocorrelation.

    PubMed

    Chin, Sang Hoon; Kim, Young Jae; Song, Ho Seong; Kim, Dug Young

    2006-10-10

    We propose a simple but powerful scheme for the complete analysis of the frequency chirp of a gain-switched optical pulse using a fringe-resolved interferometric two-photon absorption autocorrelator. The frequency chirp imposed on the gain-switched pulse from a laser diode was retrieved from both the intensity autocorrelation trace and the envelope of the second-harmonic interference fringe pattern. To verify the accuracy of the proposed phase retrieval method, we performed an optical pulse compression experiment using dispersion-compensating fibers of different lengths. The compressed pulse widths agreed with the numerically calculated pulse widths to within 1%.

  4. Operational procedure for computer program for design point characteristics of a compressed-air generator with through-flow combustor for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1971-01-01

    The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.

  5. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  6. Fracture in compression of brittle solids

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fracture of brittle solids in monotonic compression is reviewed from both the mechanistic and phenomenological points of view. The fundamental theoretical developments based on the extension of pre-existing cracks in general multiaxial stress fields are recognized as explaining extrinsic behavior where a single crack is responsible for the final failure. In contrast, shear faulting in compression is recognized to be the result of an evolutionary localization process involving en echelon action of cracks and is termed intrinsic.

  7. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimizing the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years, and we note that their coding performance may be improved by employing adaptivity. Over the last few years, neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields a good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation (BEP) algorithm to quickly obtain initial weights, which are then used to speed up the training required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.

  8. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removing redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. The proposed algorithm achieves the best compression ratio for DNA sequences among the compared compression algorithms, particularly for larger genomes. While achieving the best compression ratios for DNA sequences (genomes), DNABIT Compress also significantly improves on the running time of previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced by this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods do not achieve a ratio below 1.72 bits/base. PMID:21383923

  9. Concept for an off-line gain stabilisation method.

    PubMed

    Pommé, S; Sibbens, G

    2004-01-01

    Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
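
    The Stieltjes-integral indicator itself is not reproduced here, but the exponentially-moving-average ingredient can be sketched on simulated list-mode data (our own toy model of a linear gain drift):

```python
# The indicator used in the paper is a Stieltjes integral of measured
# spectra against a reference spectrum; this sketch keeps only the
# second ingredient -- an exponentially moving average over list-mode
# events -- and applies it to the raw event energy to show how a slow
# gain drift becomes visible as a function of time.
import random

random.seed(1)
TRUE_ENERGY = 5000.0                      # arbitrary energy units

def ema_track(events, alpha=0.01):
    avg, track = None, []
    for e in events:
        avg = e if avg is None else (1 - alpha) * avg + alpha * e
        track.append(avg)
    return track

# Simulated list mode: the gain drifts up by 2% over 10,000 events.
events = [TRUE_ENERGY * (1 + 0.02 * i / 10_000) + random.gauss(0, 30)
          for i in range(10_000)]
track = ema_track(events)

# Compare the settled average early in the run with the final average.
shift = (track[-1] / track[200] - 1) * 100
print(f"apparent gain shift: {shift:+.2f}%")
```

    With the drift estimated per time bin, each stored event can then be relocated point by point as the paper describes.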

  10. Proposed data compression schemes for the Galileo S-band contingency mission

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Tong, Kevin

    1993-01-01

    The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. If the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGA) on an S-band (2.3 GHz) carrier. Considerable effort has been, and will continue to be, devoted to attempts to open the HGA. In parallel, various options for improving Galileo's telemetry downlink performance are being evaluated in the event that the HGA does not open by Jupiter arrival. Among the viable options, the most promising and powerful one is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight reprogramming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
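
    The flight LZW variant and its hashing function are not described in detail in the abstract; a minimal textbook LZW (shown here as a sketch, without the hashing optimization) illustrates the dynamic dictionary idea:

```python
# Sketch of the LZW idea behind the proposed non-image compressor
# (the flight implementation's dictionary details and hashing are not
# reproduced): strings already seen are replaced by dictionary codes,
# and the decoder rebuilds the identical dictionary on the fly.
def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            codes.append(table[w])
            table[wc] = len(table)
            w = bytes([byte])
    if w:
        codes.append(table[w])
    return codes

def lzw_decode(codes):
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        # The KwKwK case: the code may refer to the entry being built.
        entry = table[code] if code in table else w + w[:1]
        out += entry
        table[len(table)] = w + entry[:1]
        w = entry
    return bytes(out)

msg = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(msg)
assert lzw_decode(codes) == msg
print(len(msg), "bytes ->", len(codes), "codes")
```

    Because the dictionary is reconstructed by the decoder, no table ever needs to be transmitted, which suits a severely bandwidth-limited downlink.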

  11. Quality evaluation of motion-compensated edge artifacts in compressed video.

    PubMed

    Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R

    2007-04-01

    Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.

  12. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gain.

  13. Electronic topological transitions in Zn under compression

    NASA Astrophysics Data System (ADS)

    Kechin, Vladimir V.

    2001-01-01

    The electronic structure of hcp Zn under pressure up to 10 GPa has been calculated self-consistently by means of the scalar-relativistic tight-binding linear muffin-tin orbital method. The calculations show that three electronic topological transitions (ETTs) occur in Zn as the c/a axial ratio diminishes under compression. One transition occurs at c/a~=1.82, when the ``needles'' appear around the symmetry point K of the Brillouin zone. The other two transitions occur at c/a~=√3, when the ``butterfly'' and ``cigar'' appear simultaneously around the L point. It has been shown that these ETTs are responsible for a number of anomalies observed in Zn under compression.

  14. Nova Upgrade: A proposed ICF facility to demonstrate ignition and gain, revision 1

    NASA Astrophysics Data System (ADS)

    1992-07-01

    The present objective of the national Inertial Confinement Fusion (ICF) Program is to determine the scientific feasibility of compressing and heating a small mass of mixed deuterium and tritium (DT) to conditions at which fusion occurs and significant energy is released. The potential applications of ICF will be determined by the resulting fusion energy yield (amount of energy produced) and gain (ratio of energy released to energy required to heat and compress the DT fuel). Important defense and civilian applications, including weapons physics, weapons effects simulation, and ultimately the generation of electric power, will become possible if yields of 100 to 1,000 MJ and gains exceeding approximately 50 can be achieved. Once ignition and propagating burn producing modest gain (2 to 10) at moderate drive energy (1 to 2 MJ) have been achieved, the extension to high gain (greater than 50) is straightforward. Therefore, the demonstration of ignition and modest gain is the final step in establishing the scientific feasibility of ICF. Lawrence Livermore National Laboratory (LLNL) proposes the Nova Upgrade Facility to achieve this demonstration by the end of the decade. This facility would be constructed within the existing Nova building at LLNL for a total cost of approximately $400 M over the proposed FY 1995-1999 construction period. This report discusses this facility.

  15. Is the GAIN Act a turning point in new antibiotic discovery?

    PubMed

    Brown, Eric D

    2013-03-01

    The United States GAIN (Generating Antibiotic Incentives Now) Act is a call to action for new antibiotic discovery and development that arises from a ground swell of concern over declining activity in this therapeutic area in the pharmaceutical sector. The GAIN Act aims to provide economic incentives for antibiotic drug discovery in the form of market exclusivity and accelerated drug approval processes. The legislation comes on the heels of nearly two decades of failure using the tools of modern drug discovery to find new antibiotic drugs. The lessons of failure are examined herein as are the prospects for a renewed effort in antibiotic drug discovery and development stimulated by new investments in both the public and private sector.

  16. Development and Qualification of an Antenna Pointing Mechanism for the ExoMars High-Gain Antenna

    NASA Astrophysics Data System (ADS)

    St-Andre, Stephane; Dumais, Marie-Christine; Lebel, Louis-Philippe; Langevin, Jean-Paul; Horth, Richard; Winton, Alistair; Lebleu, Denis

    2015-09-01

    The European Space Agency ExoMars 2016 mission required a gimbaled High Gain Antenna (HGA) for orbiter-to-Earth communications. The ExoMars Program is a cooperative program between ESA and ROSCOSMOS with participation of NASA. The ExoMars Program industrial consortium is led by THALES ALENIA SPACE. This paper presents the design and qualification test results of the Antenna Pointing Mechanism (APM) used to point the HGA towards Earth. This electrically redundant APM includes motors, drive trains, optical encoders, a cable cassette and RF rotary joints. Furthermore, the paper describes the design, development and qualification approach applied to this APM. The design challenges include a wide pointing domain necessary to maximise the communication duty cycle during the early operation phase, the interplanetary cruise phase and the mission's orbital science phase. Other design drivers are an extended rotation cycle life with very low backlash yielding little wear, and accurate position feedback on both axes. Major challenges and related areas of development include:
    • Large moments induced on the APM when aerobraking forces in the Mars atmosphere slow the orbiter into its science-mission orbit,
    • Thermal control of the critical components of the APM across the different environments of the various mission phases; the large travel range of the actuators also complicated the radiator design, which had to maintain clearances and avoid overheating,
    • A demanding dynamic environment for the APM (mass less than 17.5 kg) due to its mounting on the spacecraft thrust tube, aggravated by its elevated location on the payload,
    • Power and data transmission between the elevation and azimuth axes through a compact, large-rotation-range spiral-type cable cassette,
    • Integration of a 16-bit redundant encoder on both axes for position feedback: each encoder is installed on the back of a rotary actuator and is coupled using the

  17. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
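
    Why inter-color decorrelation pays off can be shown with a one-line predictor (toy data; IEP and CALIC use trained linear or non-linear predictors rather than a plain difference):

```python
# Inter-color decorrelation in one line of arithmetic (toy data; IEP
# and interframe CALIC use trained linear or non-linear predictors
# rather than the plain difference used here): because color channels
# of natural images are strongly correlated, coding G - R instead of G
# concentrates values near zero and lowers the entropy seen by the
# back-end lossless coder.
import math
import random

def entropy(values):
    """Zeroth-order entropy in bits per symbol."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(7)
r = [random.randint(0, 255) for _ in range(50_000)]
# G tracks R up to a small perturbation, as in natural images.
g = [min(255, max(0, v + random.randint(-3, 3))) for v in r]
diff = [gv - rv for gv, rv in zip(g, r)]

print(f"H(G)     = {entropy(g):.2f} bits/pixel")
print(f"H(G - R) = {entropy(diff):.2f} bits/pixel")
```

    The entropy drop of the residual channel is the source of the roughly 2 bpp coding gain reported above.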

  18. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
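    The staircase threshold-estimation procedure mentioned above can be sketched as follows (a minimal 1-up/1-down variant with a deterministic simulated observer and assumed step sizes, not the authors' experimental protocol): raise the compression ratio after each "no difference seen" response, lower it after each "difference seen" response, and average the reversal points.

```python
# Minimal 1-up/1-down staircase sketch. TRUE_THRESHOLD and the step size are
# assumed values for illustration; a real experiment replaces the simulated
# observer with human forced-choice responses.
TRUE_THRESHOLD = 6.0          # simulated observer notices artifacts above 6x

def observer_sees_difference(ratio):
    return ratio > TRUE_THRESHOLD

def staircase(start=1.0, step=0.5, max_reversals=8):
    ratio, direction, reversals = start, +1, []
    while len(reversals) < max_reversals:
        new_direction = -1 if observer_sees_difference(ratio) else +1
        if new_direction != direction:      # response flipped: record a reversal
            reversals.append(ratio)
            direction = new_direction
        ratio = max(ratio + direction * step, step)
    return sum(reversals) / len(reversals)  # average of reversal points

estimate = staircase()
print(f"estimated visually lossless limit ≈ {estimate:.2f}x")
```

With the deterministic observer the staircase oscillates around the threshold, and the reversal average lands within one step of it.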

  19. The Significance of Education for Mortality Compression in the United States*

    PubMed Central

    Brown, Dustin C.; Hayward, Mark D.; Montez, Jennifer Karas; Hummer, Robert A.; Chiu, Chi-Tsun; Hidajat, Mira M.

    2012-01-01

    Recent studies of old-age mortality trends assess whether longevity improvements over time are linked to increasing compression of mortality at advanced ages. The historical backdrop of these studies is the long-term improvements in a population's socioeconomic resources that fueled longevity gains. We extend this line of inquiry by examining whether socioeconomic differences in longevity within a population are accompanied by old-age mortality compression. Specifically, we document educational differences in longevity and mortality compression for older men and women in the United States. Drawing on the fundamental cause of disease framework, we hypothesize that both longevity and compression increase with higher levels of education and that women with the highest levels of education will exhibit the greatest degree of longevity and compression. Results based on the Health and Retirement Study and the National Health Interview Survey Linked Mortality File confirm a strong educational gradient in both longevity and mortality compression. We also find that mortality is more compressed within educational groups among women than men. The results suggest that educational attainment in the United States maximizes life chances by delaying the biological aging process. PMID:22556045

  20. A variable-gain output feedback control design approach

    NASA Technical Reports Server (NTRS)

    Haylo, Nesim

    1989-01-01

    A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighting the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.

  1. Neurofilaments Function as Shock Absorbers: Compression Response Arising from Disordered Proteins.

    PubMed

    Kornreich, Micha; Malka-Gibor, Eti; Zuker, Ben; Laser-Azogui, Adi; Beck, Roy

    2016-09-30

    What can cells gain by using disordered, rather than folded, proteins in the architecture of their skeleton? Disordered proteins take multiple coexisting conformations, and often contain segments which act as random-walk-shaped polymers. Using x-ray scattering we measure the compression response of disordered protein hydrogels, which are the main stress-responsive component of neuron cells. We find that at high compression their mechanics are dominated by gaslike steric and ionic repulsions. At low compression, specific attractive interactions dominate. This is demonstrated by the considerable hydrogel expansion induced by the truncation of critical short protein segments. Accordingly, the floppy disordered proteins form a weakly cross-bridged hydrogel, and act as shock absorbers that sustain large deformations without failure.

  2. Neurofilaments Function as Shock Absorbers: Compression Response Arising from Disordered Proteins

    NASA Astrophysics Data System (ADS)

    Kornreich, Micha; Malka-Gibor, Eti; Zuker, Ben; Laser-Azogui, Adi; Beck, Roy

    2016-09-01

    What can cells gain by using disordered, rather than folded, proteins in the architecture of their skeleton? Disordered proteins take multiple coexisting conformations, and often contain segments which act as random-walk-shaped polymers. Using x-ray scattering we measure the compression response of disordered protein hydrogels, which are the main stress-responsive component of neuron cells. We find that at high compression their mechanics are dominated by gaslike steric and ionic repulsions. At low compression, specific attractive interactions dominate. This is demonstrated by the considerable hydrogel expansion induced by the truncation of critical short protein segments. Accordingly, the floppy disordered proteins form a weakly cross-bridged hydrogel, and act as shock absorbers that sustain large deformations without failure.

  3. Alternative Fuels Data Center: Little Rock Gains Momentum with Natural Gas

    Science.gov Websites

    Little Rock is gaining momentum with buses that run on compressed natural gas. For information about this project, contact Arkansas Clean Cities.

  4. Two-stage optical parametric chirped-pulse amplifier using sub-nanosecond pump pulse generated by stimulated Brillouin scattering compression

    NASA Astrophysics Data System (ADS)

    Ogino, Jumpei; Miyamoto, Sho; Matsuyama, Takahiro; Sueda, Keiichi; Yoshida, Hidetsugu; Tsubakimoto, Koji; Miyanaga, Noriaki

    2014-12-01

    We demonstrate optical parametric chirped-pulse amplification (OPCPA) based on two-beam pumping, using sub-nanosecond pulses generated by stimulated Brillouin scattering compression. Seed pulse energy, duration, and center wavelength were 5 nJ, 220 ps, and ~1065 nm, respectively. The 532 nm pulse from a Q-switched Nd:YAG laser was compressed to ~400 ps in heavy fluorocarbon FC-40 liquid. Stacking of two time-delayed pump pulses reduced the amplifier gain fluctuation. Using a walk-off-compensated two-stage OPCPA at a pump energy of 34 mJ, a total gain of 1.6 × 10⁵ was obtained, yielding an output energy of 0.8 mJ. The amplified chirped pulse was compressed to 97 fs.

  5. Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

    DOE PAGES

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...

    2017-08-09

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
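    A minimal sketch of matching-pursuit reconstruction (plain orthogonal matching pursuit rather than the improved stagewise variant in the paper, and a random Gaussian sampling matrix standing in for the low-coherence tree-wavelet sampling; all sizes assumed):

```python
import numpy as np

# Plain orthogonal matching pursuit: greedily pick the dictionary column most
# correlated with the residual, then re-fit by least squares on the support.
def omp(A, y, k):
    """Recover a k-sparse x from measurements y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(42)
m, n, k = 40, 80, 3                                  # 2x "compression" of an 80-sample signal
A = rng.standard_normal((m, n)) / np.sqrt(m)         # random Gaussian sampling (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 1.0]
x_hat = omp(A, A @ x_true, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With 40 random measurements of a 3-sparse, 80-sample signal, OMP typically recovers the signal essentially exactly, which is the effect the paper scales up to mesh data.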

  6. 2D-RBUC for efficient parallel compression of residuals

    NASA Astrophysics Data System (ADS)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

    In this paper, we present a method for lossless compression of residuals with an efficient SIMD parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).

  7. Cost-Effectiveness Analysis of Percutaneous Vertebroplasty for Osteoporotic Compression Fractures.

    PubMed

    Takura, Tomoyuki; Yoshimatsu, Misako; Sugimori, Hiroki; Takizawa, Kenji; Furumatsu, Yoshiyuki; Ikeda, Hirotaka; Kato, Hiroshi; Ogawa, Yukihisa; Hamaguchi, Shingo; Fujikawa, Atsuko; Satoh, Toshihiko; Nakajima, Yasuo

    2017-04-01

    Single-center, single-arm, prospective time-series study. To assess the cost-effectiveness and improvement in quality of life (QOL) of percutaneous vertebroplasty (PVP). PVP is known to relieve back pain and increase QOL for osteoporotic compression fractures. However, the economic value of PVP has never been evaluated in Japan, where a universal health care system is adopted. We prospectively followed up 163 patients with acute vertebral osteoporotic compression fractures, 44 males aged 76.4±6.0 years and 119 females aged 76.8±7.1 years, who underwent PVP. To measure health-related QOL and pain during 52 weeks of observation, we used the European Quality of Life-5 Dimensions (EQ-5D), the Rolland-Morris Disability Questionnaire (RMD), the 8-item Short-Form health survey (SF-8), and a visual analogue scale (VAS). Quality-adjusted life years (QALY) were calculated using the change in health utility of EQ-5D. The direct medical cost was calculated from the accounting system of the hospital and the Japanese health insurance system. Cost-effectiveness was analyzed using the incremental cost-effectiveness ratio (ICER): Δ medical cost/Δ QALY. After PVP, improvements in EQ-5D, RMD, SF-8, and VAS scores were observed. The gain in QALY until 52 weeks was 0.162. The estimated lifetime gain in QALY reached 1.421. The direct medical cost of PVP was ¥286,740 (about 3061 US dollars). Cost-effectiveness analysis using the ICER showed that the lifetime medical cost for a gain of 1 QALY was ¥201,748 (about 2154 US dollars). Correlations between changes in EQ-5D scores and other parameters such as RMD, SF-8, and VAS were observed during most of the study period, which might support the reliability and applicability of measuring health utilities by EQ-5D for osteoporotic compression fractures in Japan as well. PVP may improve QOL and ameliorate pain in acute osteoporotic compression fractures, and appears cost-effective in Japan.

  8. Lagrangian statistics in compressible isotropic homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted a direct numerical simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely the statistics are computed following the passive tracer trajectories. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) have then been calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.

  9. POLYCOMP: Efficient and configurable compression of astronomical timelines

    NASA Astrophysics Data System (ADS)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x , y , z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
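    The core idea, fitting a low-degree polynomial to each chunk of a smooth stream and storing only the coefficients, can be sketched as follows (chunk size, degree, and the test signal are assumed for illustration; this is not polycomp's actual file format or API):

```python
import numpy as np

# Sketch of polynomial chunk compression: fit a low-degree Chebyshev
# polynomial to each chunk of a smooth, noiseless series and keep only the
# coefficients. All parameters below are assumed, illustrative choices.
t = np.linspace(0.0, 1.0, 4096)
series = np.sin(2 * np.pi * t) + 0.1 * t**2   # smooth "pointing-like" stream

chunk, degree = 256, 7
x = np.linspace(-1.0, 1.0, chunk)             # Chebyshev-friendly local domain
coeffs = [np.polynomial.chebyshev.chebfit(x, series[i:i + chunk], degree)
          for i in range(0, series.size, chunk)]

# "Decompression": evaluate the stored polynomials chunk by chunk.
recon = np.concatenate([np.polynomial.chebyshev.chebval(x, c) for c in coeffs])

cr = series.size / (len(coeffs) * (degree + 1))   # samples kept vs. coefficients stored
max_err = np.abs(series - recon).max()
print(f"compression ratio ≈ {cr:.0f}, max error = {max_err:.2e}")
```

On a smooth chunk the fit error is tiny, so the user-configured error bound is easy to satisfy while storing 32× fewer numbers in this toy case.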

  10. Improved integral images compression based on multi-view extraction

    NASA Astrophysics Data System (ADS)

    Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric

    2016-09-01

    Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.

  11. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032

  12. Radio Frequency Transistors Using Aligned Semiconducting Carbon Nanotubes with Current-Gain Cutoff Frequency and Maximum Oscillation Frequency Simultaneously Greater than 70 GHz.

    PubMed

    Cao, Yu; Brady, Gerald J; Gui, Hui; Rutherglen, Chris; Arnold, Michael S; Zhou, Chongwu

    2016-07-26

    In this paper, we report record radio frequency (RF) performance of carbon nanotube transistors based on the combined use of a self-aligned T-shape gate structure and well-aligned, high-semiconducting-purity, high-density polyfluorene-sorted semiconducting carbon nanotubes, which were deposited using a dose-controlled, floating evaporative self-assembly method. These transistors show outstanding direct current (DC) performance with an on-current density of 350 μA/μm, transconductance as high as 310 μS/μm, and superior current saturation with a normalized output resistance greater than 100 kΩ·μm. These transistors set a record for carbon nanotube RF transistors by demonstrating both a current-gain cutoff frequency (ft) and a maximum oscillation frequency (fmax) greater than 70 GHz. Furthermore, these transistors exhibit good linearity performance, with a 1 dB gain compression point (P1dB) of 14 dBm and an input third-order intercept point (IIP3) of 22 dBm. Our study advances the state of the art of carbon nanotube RF electronics, which have the potential to be made flexible and may find broad applications in signal amplification, wireless communication, and wearable/flexible electronics.
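    For reference, the 1 dB gain compression point quoted above is the power level at which an amplifier's gain has fallen 1 dB below its small-signal value. A numeric sketch of reading P1dB off a gain curve, using an assumed soft-saturation model rather than the device in the paper:

```python
import numpy as np

# Hypothetical soft-saturation amplifier: pout = g0*pin / (1 + g0*pin/psat),
# powers in mW. g0 and psat are assumed, illustrative values.
g0 = 100.0      # small-signal power gain (20 dB)
psat = 100.0    # saturation parameter in mW

pin_dbm = np.linspace(-30.0, 10.0, 4001)          # input power sweep, 0.01 dB steps
pin_mw = 10.0 ** (pin_dbm / 10.0)
pout_mw = g0 * pin_mw / (1.0 + g0 * pin_mw / psat)
gain_db = 10.0 * np.log10(pout_mw / pin_mw)

g0_db = 10.0 * np.log10(g0)                       # small-signal gain in dB
# P1dB: first point where the gain has dropped 1 dB below the small-signal gain
idx = int(np.argmax(gain_db <= g0_db - 1.0))
p1db_in = pin_dbm[idx]
p1db_out = 10.0 * np.log10(pout_mw[idx])
print(f"input P1dB ≈ {p1db_in:.2f} dBm, output P1dB ≈ {p1db_out:.2f} dBm")
```

The same read-off works on measured gain-versus-power data; only the model generating the curve is assumed here.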

  13. Compression of thick laminated composite beams with initial impact-like damage

    NASA Technical Reports Server (NTRS)

    Breivik, N. L.; Guerdal, Z.; Griffin, O. H., Jr.

    1992-01-01

    While the study of compression after impact of laminated composites has been under consideration for many years, the complexity of the damage initiated by low velocity impact has not lent itself to simple predictive models for compression strength. The damage modes due to non-penetrating, low velocity impact by large diameter objects can be simulated using quasi-static three-point bending. The resulting damage modes are less coupled and more easily characterized than actual impact damage modes. This study includes the compression testing of specimens with well documented initial damage states obtained from three-point bend testing. Compression strengths and failure modes were obtained for quasi-isotropic stacking sequences from 0.24 to 1.1 inches thick with both grouped and interspersed ply stacking. Initial damage prior to compression testing was divided into four classifications based on the type, extent, and location of the damage. These classifications are multiple through-thickness delaminations, isolated delamination, damage near the surface, and matrix cracks. Specimens from each classification were compared to specimens tested without initial damage in order to determine the effects of the initial damage on the final compression strength and failure modes. A finite element analysis was used to aid in the understanding and explanation of the experimental results.

  14. Optical antenna gain. I - Transmitting antennas

    NASA Technical Reports Server (NTRS)

    Klein, B. J.; Degnan, J. J.

    1974-01-01

    The gain of centrally obscured optical transmitting antennas is analyzed in detail. The calculations, resulting in near- and far-field antenna gain patterns, assume a circular antenna illuminated by a laser operating in the TEM-00 mode. A simple polynomial equation is derived for matching the incident source distribution to a general antenna configuration for maximum on-axis gain. An interpretation of the resultant gain curves allows a number of auxiliary design curves to be drawn that display the losses in antenna gain due to pointing errors and the cone angle of the beam in the far field as a function of antenna aperture size and its central obscuration. The results are presented in a series of graphs that allow the rapid and accurate evaluation of the antenna gain which may then be substituted into the conventional range equation.

  15. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
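    The H-transform is closely related to a two-dimensional Haar transform. A one-level integer sketch (illustrative only, not the STScI code) shows why the unscaled integer form is exactly reversible, as the lossless mode requires: each 2×2 block maps to a sum plus three differences, all exact multiples of the original pixels.

```python
import numpy as np

# One level of an integer H-transform (2D Haar-like). Unscaled integer sums
# keep the transform exactly reversible: s+h+v+d == 4a etc., so the integer
# divisions in the inverse are exact.
def h_forward(img):
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a + b + c + d,          # smoothed image
            a + b - c - d,          # horizontal detail
            a - b + c - d,          # vertical detail
            a - b - c + d)          # diagonal detail

def h_inverse(s, h, v, d):
    out = np.empty((2 * s.shape[0], 2 * s.shape[1]), dtype=s.dtype)
    out[0::2, 0::2] = (s + h + v + d) // 4
    out[0::2, 1::2] = (s + h - v - d) // 4
    out[1::2, 0::2] = (s - h + v - d) // 4
    out[1::2, 1::2] = (s - h - v + d) // 4
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 65536, size=(8, 8))      # 16-bit pixels, even dimensions
assert np.array_equal(h_inverse(*h_forward(img)), img)
print("one-level integer H-transform is exactly reversible")
```

In the lossy mode described above, the detail bands (mostly noise on astronomical images) are quantized before coding while the transform itself stays exact.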

  16. Investigations of gain redshift in high peak power Ti:sapphire laser systems

    NASA Astrophysics Data System (ADS)

    Wu, Fenxiang; Yu, Linpeng; Zhang, Zongxin; Li, Wenkai; Yang, Xiaojun; Wu, Yuanfeng; Li, Shuai; Wang, Cheng; Liu, Yanqi; Lu, Xiaoming; Xu, Yi; Leng, Yuxin

    2018-07-01

    Gain redshift in high peak power Ti:sapphire laser systems can result in narrowband spectral output and hence lengthen the compressed pulse duration. In order to realize broadband spectral output in 10 PW-class Ti:sapphire lasers, the influence on gain redshift induced by spectral pre-shaping, gain distribution of cascaded amplifiers and Extraction During Pumping (EDP) technique have been investigated. The theoretical and experimental results show that the redshift of output spectrum is sensitive to the spectral pre-shaping and the gain distribution of cascaded amplifiers, while insensitive to the pumping scheme with or without EDP. Moreover, the output spectrum from our future 10 PW Ti:sapphire laser is theoretically analyzed based on the investigations above, which indicates that a Fourier-transform limited (FTL) pulse duration of 21 fs can be achieved just by optimizing the spectral pre-shaping and gain distribution in 10 PW-class Ti:sapphire lasers.

  17. MFCompress: a compression tool for FASTA and multi-FASTA data.

    PubMed

    Pinho, Armando J; Pratas, Diogo

    2014-01-01

    The data deluge phenomenon is becoming a serious problem in most genomic centers. To alleviate it, general-purpose tools, such as gzip, are used to compress the data. However, although pervasive and easy to use, these tools fall short when the intention is to reduce the data as much as possible, for example, for medium- and long-term storage. A number of algorithms have been proposed for the compression of genomics data, but unfortunately only a few of them have been made available as usable and reliable compression tools. In this article, we describe one such tool, MFCompress, specially designed for the compression of FASTA and multi-FASTA files. In comparison to gzip and applied to multi-FASTA files, MFCompress can provide additional average compression gains of almost 50%, i.e. it potentially doubles the available storage, although at the cost of some more computation time. On highly redundant datasets, and in comparison with gzip, 8-fold size reductions have been obtained. Both source code and binaries for several operating systems are freely available for non-commercial use at http://bioinformatics.ua.pt/software/mfcompress/.
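    The gzip baseline behavior described above is easy to reproduce in miniature (synthetic records, not MFCompress itself): redundancy across records is exactly what a general-purpose compressor exploits, so near-identical multi-FASTA records shrink far more than random ones.

```python
import gzip
import random

# Compare gzip on random vs. highly redundant FASTA-like data. Both inputs
# are assumed synthetic records, built only to illustrate the effect.
random.seed(0)
random_fasta = "".join(
    f">seq{i}\n" + "".join(random.choice("ACGT") for _ in range(1000)) + "\n"
    for i in range(100)
).encode()
redundant_fasta = "".join(
    f">seq{i}\n" + "ACGTACGTGG" * 100 + "\n" for i in range(100)
).encode()

ratio_random = len(random_fasta) / len(gzip.compress(random_fasta, compresslevel=9))
ratio_redundant = len(redundant_fasta) / len(gzip.compress(redundant_fasta, compresslevel=9))
print(f"random records:    {ratio_random:.1f}x")
print(f"redundant records: {ratio_redundant:.1f}x")
```

Specialized tools like MFCompress go further by modeling the statistics of the sequences themselves, which is where the additional gains over gzip come from.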

  18. Gain in three-dimensional metamaterials utilizing semiconductor quantum structures

    NASA Astrophysics Data System (ADS)

    Schwaiger, Stephan; Klingbeil, Matthias; Kerbst, Jochen; Rottler, Andreas; Costa, Ricardo; Koitmäe, Aune; Bröll, Markus; Heyn, Christian; Stark, Yuliya; Heitmann, Detlef; Mendach, Stefan

    2011-10-01

    We demonstrate gain in a three-dimensional metal/semiconductor metamaterial by the integration of optically active semiconductor quantum structures. The rolling-up of a metallic structure on top of strained semiconductor layers containing a quantum well allows us to achieve a tightly bent superlattice consisting of alternating layers of lossy metallic and amplifying gain material. We show that the transmission through the superlattice can be enhanced by exciting the quantum well optically under both pulsed or continuous wave excitation. This points out that our structures can be used as a starting point for arbitrary three-dimensional metamaterials including gain.

  19. A PDF closure model for compressible turbulent chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1992-01-01

    The objective of the proposed research project was the analysis of single-point closures based on probability density functions (pdf) and characteristic functions, and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary-layer type and stagnation-point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.

  20. Approximate reversibility in the context of entropy gain, information gain, and complete positivity

    NASA Astrophysics Data System (ADS)

    Buscemi, Francesco; Das, Siddhartha; Wilde, Mark M.

    2016-06-01

    There are several inequalities in physics which limit how well we can process physical systems to achieve some intended goal, including the second law of thermodynamics, entropy bounds in quantum information theory, and the uncertainty principle of quantum mechanics. Recent results provide physically meaningful enhancements of these limiting statements, determining how well one can attempt to reverse an irreversible process. In this paper, we apply and extend these results to give strong enhancements to several entropy inequalities, having to do with entropy gain, information gain, entropic disturbance, and complete positivity of open quantum systems dynamics. Our first result is a remainder term for the entropy gain of a quantum channel. This result implies that a small increase in entropy under the action of a subunital channel is a witness to the fact that the channel's adjoint can be used as a recovery map to undo the action of the original channel. We apply this result to pure-loss, quantum-limited amplifier, and phase-insensitive quantum Gaussian channels, showing how a quantum-limited amplifier can serve as a recovery from a pure-loss channel and vice versa. Our second result regards the information gain of a quantum measurement, both without and with quantum side information. We find here that a small information gain implies that it is possible to undo the action of the original measurement if it is efficient. The result also has operational ramifications for the information-theoretic tasks known as measurement compression without and with quantum side information. Our third result shows that the loss of Holevo information caused by the action of a noisy channel on an input ensemble of quantum states is small if and only if the noise can be approximately corrected on average. We finally establish that the reduced dynamics of a system-environment interaction are approximately completely positive and trace preserving if and only if the data processing

  1. Machine compliance in compression tests

    NASA Astrophysics Data System (ADS)

    Sousa, Pedro; Ivens, Jan; Lomov, Stepan V.

    2018-05-01

    The compression behavior of a material cannot be accurately determined if the machine compliance is not accounted for prior to the measurements. This work discusses the machine compliance during a compressibility test with fiberglass fabrics. The thickness variation was measured during loading and unloading cycles with a relaxation stage of 30 minutes between them. The measurements were performed using an indirect technique based on the comparison between the displacement in a free compression cycle and the displacement with a sample. For the free test, no machine relaxation was observed during the relaxation stage. Whether relaxation is considered or not, the characteristic curves for a free compression cycle can be overlapped precisely at the majority of the points. For the compression test with a sample, a non-physical thickness decrease of about 30 µm was observed during the relaxation stage, which can be explained by the fabric relaxing more than the machine. Besides the technique normally used, a second technique was applied which keeps the thickness constant during relaxation. In this second method, the machine displacement with the sample is simply subtracted from the machine displacement without the sample, which is imposed as constant. If imposed as a constant, the thickness remains constant during the relaxation stage and decreases suddenly after relaxation; if constantly recalculated, it decreases gradually during the relaxation stage. Independently of the technique used, the final result remains unchanged. The uncertainty introduced by this imprecision is about ±15 µm.
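    The indirect compliance correction can be sketched with a few lines of arithmetic (all numbers below are assumed, illustrative values, not the paper's measurements): at each load level, subtract the machine's own deflection, measured in a free compression cycle with no sample, from the crosshead displacement measured with the sample in place.

```python
import numpy as np

# Compliance correction sketch: specimen compression = displacement with
# sample minus machine deflection at the same load (free run). All values
# below are assumed for illustration.
load_kN = np.array([0.5, 1.0, 2.0, 4.0])
disp_free_mm = np.array([0.010, 0.018, 0.033, 0.060])    # machine only, no sample
disp_sample_mm = np.array([0.210, 0.318, 0.473, 0.660])  # crosshead travel with sample

sample_compression_mm = disp_sample_mm - disp_free_mm    # compliance-corrected
thickness_mm = 1.000 - sample_compression_mm             # assumed 1 mm initial thickness
for F, t in zip(load_kN, thickness_mm):
    print(f"{F:4.1f} kN -> corrected thickness {t:.3f} mm")
```

Without the subtraction, the machine's own deflection (tens of microns here) would be attributed to the fabric, biasing the measured compression curve.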

  2. The Instructional Effects of Diagrams and Time-Compressed Instruction on Student Achievement and Learners' Perceptions of Cognitive Load

    ERIC Educational Resources Information Center

    Pastore, Raymond S.

    2009-01-01

    The purpose of this study was to examine the effects of visual representations and time-compressed instruction on learning and learners' perceptions of cognitive load. Time-compressed instruction refers to instruction that has been increased in speed without sacrificing quality. It was anticipated that learners would be able to gain a conceptual…

  3. Compressed Air Quality, A Case Study In Paiton Coal Fired Power Plant Unit 1 And 2

    NASA Astrophysics Data System (ADS)

    Indah, Nur; Kusuma, Yuriadi; Mardani

    2018-03-01

    The compressed air system is a very important utility system in a plant, including a steam power plant. In PLN's coal-fired power plant, Paiton units 1 and 2, four centrifugal air compressors produce 5,652 cfm of compressed air with an electric power capacity of 1,200 kW; electricity consumption to operate the centrifugal compressors is 7,104,117 kWh per year. Compressed air generation must be sufficient not only in quantity (flow rate) but must also meet the required air quality standards. Compressed air at the steam power plant is used for service air, instrument air, and fly-ash handling. This study measures several important parameters related to air quality, followed by an analysis of potential disturbances, equipment breakdown, or reductions in energy consumption of the existing compressed air system. The measurements include counting dust particles, moisture content, relative humidity, and compressed air pressure. The compressed air pressure produced by the compressor is about 8.4 barg, decreasing to 7.7 barg at the furthest point; the resulting pressure drop of 0.63 barg still satisfies the needs of the end users. The particle counts in the compressed air reach 170,752 particles at 0.3 micron and 45,245 particles at 0.5 micron. Particle measurements were made at several points; at some of them the dust particle count exceeds the standards set by ISO 8573.1-2010 and the NACE code, so the air treatment process needs to be improved. The moisture content of the compressed air was assessed by measuring the pressure dew point (PDP) temperature at several points, with results ranging from -28.4 to 30.9 °C. The recommended improvements to compressed air quality in the Paiton units 1 and 2 steam power plant have the potential to extend the life of

  4. Trajectory NG: portable, compressed, general molecular dynamics trajectories.

    PubMed

    Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David

    2011-10-01

    We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied.
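
    A minimal sketch of the core idea (quantize coordinates to a fixed precision, difference them in time, entropy-code the small deltas), using zlib in place of the paper's block-sorting/Huffman back end and a hypothetical smooth trajectory:

```python
import numpy as np
import zlib

def compress_frames(frames, precision=1e-3):
    """Quantize coordinates to a fixed precision, take time differences,
    and compress the (small) integer deltas."""
    q = np.round(np.asarray(frames) / precision).astype(np.int16)
    deltas = np.diff(q, axis=0, prepend=0).astype(np.int16)  # frame 0 stored absolute
    return zlib.compress(deltas.tobytes()), q.shape

def decompress_frames(blob, shape, precision=1e-3):
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
    return np.cumsum(deltas, axis=0) * precision

# Hypothetical smooth trajectory: 100 frames of 50 atoms in 3-D.
rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(0.0, 0.002, (100, 50, 3)), axis=0) + 5.0
blob, shape = compress_frames(frames)
restored = decompress_frames(blob, shape)
```

    The reconstruction error is bounded by half the quantization step, and for frequently stored frames the time deltas are far more compressible than the raw single-precision floats.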

  5. Compressive Properties and Anti-Erosion Characteristics of Foam Concrete in Road Engineering

    NASA Astrophysics Data System (ADS)

    Li, Jinzhu; Huang, Hongxiang; Wang, Wenjun; Ding, Yifan

    2018-01-01

    To analyse the compression properties and anti-erosion characteristics of foam concrete, one-dimensional compression tests were carried out on ring specimens of foam concrete, and unconfined compression tests were carried out on foam concrete specimens cured under different conditions. The one-dimensional compression tests show that the compression curve of foam concrete has two critical points and three stages, which differs significantly from ordinary geotechnical materials such as soil. The compression modulus of each stage was determined from the compression curve. The erosion tests show that sea water has only a slight influence on the long-term strength of foam concrete, while sulphate solution has a significant influence, which warrants particular attention.

  6. Trial of Continuous or Interrupted Chest Compressions during CPR.

    PubMed

    Nichol, Graham; Leroux, Brian; Wang, Henry; Callaway, Clifton W; Sopko, George; Weisfeldt, Myron; Stiell, Ian; Morrison, Laurie J; Aufderheide, Tom P; Cheskes, Sheldon; Christenson, Jim; Kudenchuk, Peter; Vaillancourt, Christian; Rea, Thomas D; Idris, Ahamed H; Colella, Riccardo; Isaacs, Marshal; Straight, Ron; Stephens, Shannon; Richardson, Joe; Condle, Joe; Schmicker, Robert H; Egan, Debra; May, Susanne; Ornato, Joseph P

    2015-12-03

    During cardiopulmonary resuscitation (CPR) in patients with out-of-hospital cardiac arrest, the interruption of manual chest compressions for rescue breathing reduces blood flow and possibly survival. We assessed whether outcomes after continuous compressions with positive-pressure ventilation differed from those after compressions that were interrupted for ventilations at a ratio of 30 compressions to two ventilations. This cluster-randomized trial with crossover included 114 emergency medical service (EMS) agencies. Adults with non-trauma-related cardiac arrest who were treated by EMS providers received continuous chest compressions (intervention group) or interrupted chest compressions (control group). The primary outcome was the rate of survival to hospital discharge. Secondary outcomes included the modified Rankin scale score (on a scale from 0 to 6, with a score of ≤3 indicating favorable neurologic function). CPR process was measured to assess compliance. Of 23,711 patients included in the primary analysis, 12,653 were assigned to the intervention group and 11,058 to the control group. A total of 1129 of 12,613 patients with available data (9.0%) in the intervention group and 1072 of 11,035 with available data (9.7%) in the control group survived until discharge (difference, -0.7 percentage points; 95% confidence interval [CI], -1.5 to 0.1; P=0.07); 7.0% of the patients in the intervention group and 7.7% of those in the control group survived with favorable neurologic function at discharge (difference, -0.6 percentage points; 95% CI, -1.4 to 0.1; P=0.09). Hospital-free survival was significantly shorter in the intervention group than in the control group (mean difference, -0.2 days; 95% CI, -0.3 to -0.1; P=0.004). In patients with out-of-hospital cardiac arrest, continuous chest compressions during CPR performed by EMS providers did not result in significantly higher rates of survival or favorable neurologic function than did interrupted chest compressions.

  7. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
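
    As an illustration of the general idea (not the specific ICT of this article), the 4x4 integer transform popularized by H.264/AVC approximates the DCT using only small-integer arithmetic in the forward direction, with normalization deferred to a scaling step:

```python
import numpy as np

# Integer transform matrix (H.264-style 4-point DCT approximation).
# Its rows are mutually orthogonal, so inversion needs only a
# per-row rescale followed by transposition.
H = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def forward(x):
    """Integer-only forward transform of a length-4 block."""
    return H @ np.asarray(x, dtype=np.int64)

def inverse(y):
    """Invert via transposition plus per-row rescaling (float here)."""
    norms = (H * H).sum(axis=1)          # squared row norms: 4, 10, 4, 10
    return H.T @ (np.asarray(y, dtype=np.float64) / norms)

block = [12, 34, 56, 78]
coeffs = forward(block)                  # integers only, as in the ICT idea
```

    In a codec the rescaling is folded into the quantization step, so the transform path itself stays in integer arithmetic, which is the property that makes such transforms attractive for deep-space hardware.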

  8. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region, applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
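
    A toy sketch of the ROI idea on hypothetical arrays: the masked tissue pixels are kept bit-exact while background detail is aggressively discarded before a lossless entropy stage (zlib stands in here for the actual codec):

```python
import numpy as np
import zlib

def roi_compress(img, mask):
    """Lossless on the region of interest, maximally lossy elsewhere:
    background pixels are flattened to zero before entropy coding."""
    kept = np.where(mask, img, 0).astype(img.dtype)
    return zlib.compress(kept.tobytes())

def roi_decompress(blob, shape, dtype):
    return np.frombuffer(zlib.decompress(blob), dtype=dtype).reshape(shape)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # hypothetical slide tile
mask = np.zeros_like(img, dtype=bool)
mask[64:192, 64:192] = True                              # "tissue" region

blob = roi_compress(img, mask)
restored = roi_decompress(blob, img.shape, img.dtype)
```

    The flattened background compresses to almost nothing, while every pixel inside the mask survives the round trip exactly, which is the diagnostic requirement the paper starts from.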

  9. Three-dimensional numerical simulation for plastic injection-compression molding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Yu, Wenjie; Liang, Junjie; Lang, Jianlin; Li, Dequn

    2018-03-01

    Compared with conventional injection molding, injection-compression molding can mold optical parts with higher precision and lower residual flow stress. However, the melt flow process in a closed cavity becomes more complex because of the moving cavity boundary during compression and the nonlinear behavior of the non-Newtonian polymer melt. In this study, a 3D simulation method was developed for injection-compression molding. In this method, an arbitrary Lagrangian-Eulerian formulation was introduced to model the moving-boundary flow problem in the compression stage. The non-Newtonian characteristics and compressibility of the polymer melt were considered. The melt flow and pressure distribution in the cavity were investigated using the proposed simulation method and compared with those of injection molding. Results reveal that the fountain flow effect becomes significant when the cavity thickness increases during compression. The back flow also plays an important role in the flow pattern and the redistribution of cavity pressure. The pressure discrepancy between points along the flow path evolves in a complicated manner, rather than decreasing monotonically as in injection molding.

  10. Electron core ionization in compressed alkali metal cesium

    NASA Astrophysics Data System (ADS)

    Degtyareva, V. F.

    2018-01-01

    Elements of groups I and II in the periodic table have valence electrons of s-type and are usually considered simple metals. The crystal structures of these elements at ambient pressure are close-packed, high-symmetry bcc and fcc types, defined by the electrostatic (Madelung) energy. Diverse structures were found under high pressure, with decreases in coordination number, packing fraction and symmetry. The formation of complex structures can be understood within the model of Fermi sphere-Brillouin zone interactions and is supported by Hume-Rothery arguments. With decreasing volume there is a gain in band-structure energy, accompanied by the formation of many-faced Brillouin zone polyhedra. Under compression to less than half of the initial volume, the interatomic distances become close to or smaller than the ionic radius, which should lead to ionization of the electron core. At strong compression it is necessary to assume that, for alkali metals, the valence electron band overlaps the upper core electrons, which increases the valence electron count under compression.

  11. Optimal PID gain schedule for hydrogenerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orelind, G.; Wozniak, L.; Medanic, J.

    1989-09-01

    This paper describes the development and testing of a digital gain-switching governor for hydrogenerators. Optimal gains were found at different load points by minimizing a quadratic performance criterion prior to controller operation. During operation, the gain sets are switched in depending on the gate position and speed error magnitude. With gain switching operating, the digital governor showed a substantial reduction of noise on the command signal and up to 42% faster responses to power requests. Non-linear control strategies enabled the digital governor to reduce speed overshoot on startups from 2.5% to 2%, and undershoot on load rejections from 8% to 1%, as compared to the analog governor.

  12. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
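
    The effect of colormap sorting can be sketched on hypothetical data: a smooth intensity signal stored through a scrambled palette delta-encodes (DPCM-style) poorly, while sorting the palette by stored value (a stand-in for luminance) and remapping the indices restores the pixel-to-pixel correlation:

```python
import numpy as np
import zlib

rng = np.random.default_rng(0)

# A smooth "intensity" signal, viewed through a palette in random order
# (palette[index] = intensity; perm[inv[g]] == g by construction).
gray = (np.cumsum(rng.integers(-2, 3, size=4096)) % 256).astype(np.uint8)
perm = rng.permutation(256).astype(np.uint8)     # scrambled palette
inv = np.argsort(perm)
indices = inv[gray].astype(np.uint8)             # stored pixel indices

def dpcm_size(pix):
    """zlib size of the delta-encoded (DPCM-style) pixel stream."""
    deltas = np.diff(pix.astype(np.int16)).astype(np.uint8)  # wrap mod 256
    return len(zlib.compress(deltas.tobytes()))

# Sort the palette by value and remap the indices to match.
order = np.argsort(perm)                         # new palette = perm[order]
rank = np.argsort(order).astype(np.uint8)        # old index -> new index
sorted_indices = rank[indices]

unsorted_size = dpcm_size(indices)
sorted_size = dpcm_size(sorted_indices)
```

    The rendered colors are identical in both cases; only the index stream changes, and after sorting its deltas come from a small alphabet that the entropy coder exploits.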

  13. Knee joint passive stiffness and moment in sagittal and frontal planes markedly increase with compression.

    PubMed

    Marouane, H; Shirazi-Adl, A; Adouni, M

    2015-01-01

    Knee joints are subject to large compression forces in daily activities. Due to artefact moments and instability under large compression loads, biomechanical studies impose additional constraints to circumvent the compression position-dependency in response. To quantify the effect of compression on passive knee moment resistance and stiffness, two validated finite element models of the tibiofemoral (TF) joint, one refined with depth-dependent fibril-reinforced cartilage and the other less refined with homogeneous isotropic cartilage, are used. The unconstrained TF joint response in sagittal and frontal planes is investigated at different flexion angles (0°, 15°, 30° and 45°) up to 1800 N compression preloads. The compression is applied at a novel joint mechanical balance point (MBP) identified as a point at which the compression does not cause any coupled rotations in sagittal and frontal planes. The MBP of the unconstrained joint is located at the lateral plateau in small compressions and shifts medially towards the inter-compartmental area at larger compression forces. The compression force substantially increases the joint moment-bearing capacities and instantaneous angular rigidities in both frontal and sagittal planes. The varus-valgus laxities diminish with compression preloads despite concomitant substantial reductions in collateral ligament forces. While the angular rigidity would enhance the joint stability, the augmented passive moment resistance under compression preloads plays a role in supporting external moments and should as such be considered in the knee joint musculoskeletal models.

  14. Modeling Two-Stage Bunch Compression With Wakefields: Macroscopic Properties And Microbunching Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosch, R.A.; Kleman, K.J.; /Wisconsin U., SRC

    2011-09-08

    In a two-stage compression and acceleration system, where each stage compresses a chirped bunch in a magnetic chicane, wakefields affect high-current bunches. The longitudinal wakes affect the macroscopic energy and current profiles of the compressed bunch and cause microbunching at short wavelengths. For macroscopic wavelengths, impedance formulas and tracking simulations show that the wakefields can be dominated by the resistive impedance of coherent edge radiation. For this case, we calculate the minimum initial bunch length that can be compressed without producing an upright tail in phase space and an associated current spike. Formulas are also obtained for the jitter in the bunch arrival time downstream of the compressors that results from the bunch-to-bunch variation of current, energy, and chirp. Microbunching may occur at short wavelengths where the longitudinal space-charge wakes dominate or at longer wavelengths dominated by edge radiation. We model this range of wavelengths with frequency-dependent impedance before and after each stage of compression. The growth of current and energy modulations is described by analytic gain formulas that agree with simulations.

  15. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    NASA Astrophysics Data System (ADS)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high resolution instruments for remote sensing missions will lead to higher data-rates because of the increase in resolution and dynamic range. For example, improvements in ground resolution multiplied the data-rate by 8 from SPOT4 to SPOT5 [1] and by 28 for PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows important compression gains by detecting and then differently compressing the regions of interest (ROI) and of non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most CNES high resolution images are cloudy [1], significant mass-memory and transmission gains could be achieved by detecting and suppressing (or compressing significantly) the areas covered by clouds. Since 2007, CNES has worked on a cloud detection module [3], a simplification for on-board implementation of an existing module used on-ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed for simplification and optimization during this on-board implementation study: reflectance computation, characteristics vector computation (based on multispectral criteria) and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS tools [5] is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high-level description language such as C or Matlab/Simulink [6].

  16. iDoComp: a compression scheme for assembled genomes

    PubMed Central

    Ochoa, Idoia; Hernaez, Mikel; Weissman, Tsachy

    2015-01-01

    Motivation: With the release of the latest next-generation sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing a human genome has dropped to a mere $4000. We are thus approaching a milestone in sequencing history, the $1000-genome era, where the sequencing of individuals is affordable, opening the doors to effective personalized medicine. Massive generation of genomic data, including assembled genomes, is expected in the coming years. There is a crucial need for genome compressors guaranteed to perform well simultaneously on different species, from simple bacteria to humans, to ease the transmission, dissemination and analysis of such data. Further, most of the new genomes to be compressed will correspond to individuals of a species for which a reference already exists in a database. It is therefore natural to propose compression schemes that assume and exploit the availability of such references. Results: We propose iDoComp, a compressor of assembled genomes presented in FASTA format that compresses an individual genome using a reference genome for both compression and decompression. In terms of compression efficiency, iDoComp outperforms previously proposed algorithms in most of the studied cases, with comparable or better running time. For example, we observe compression gains of up to 60% in several cases, including H.sapiens data, when comparing with the best compression performance among the previously proposed algorithms. Availability: iDoComp is written in C and can be downloaded from: http://www.stanford.edu/~iochoa/iDoComp.html (We also provide a full explanation on how to run the program and an example with all the necessary files to run it.). Contact: iochoa@stanford.edu Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:25344501
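
    The reference-based idea can be illustrated with a tiny greedy copy/literal parser: the target sequence is described as copies from the reference plus the few literals where it differs. This is an illustration of the principle only; iDoComp's actual mapping generation is more sophisticated.

```python
def encode(target, ref, min_match=4):
    """Greedily describe `target` as copies from `ref` plus literal characters."""
    ops, i = [], 0
    while i < len(target):
        best_len, best_pos = 0, -1
        seed = target[i:i + min_match]
        pos = ref.find(seed) if len(seed) == min_match else -1
        while pos != -1:                      # extend every seed occurrence
            length = min_match
            while (i + length < len(target) and pos + length < len(ref)
                   and ref[pos + length] == target[i + length]):
                length += 1
            if length > best_len:
                best_len, best_pos = length, pos
            pos = ref.find(seed, pos + 1)
        if best_len >= min_match:
            ops.append(("copy", best_pos, best_len))
            i += best_len
        else:
            ops.append(("lit", target[i]))
            i += 1
    return ops

def decode(ops, ref):
    out = []
    for op in ops:
        out.append(ref[op[1]:op[1] + op[2]] if op[0] == "copy" else op[1])
    return "".join(out)

ref = "ACGTACGTTTGACCA" * 4
target = ref[:20] + "G" + ref[21:]           # one substitution vs. the reference
ops = encode(target, ref)
```

    A genome that differs from its reference by isolated variants collapses to a handful of (position, length) copy instructions plus the variant literals, which is why reference-based schemes achieve such large gains.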

  17. Dramatic Raman Gain Suppression in the Vicinity of the Zero Dispersion Point in a Gas-Filled Hollow-Core Photonic Crystal Fiber.

    PubMed

    Bauerschmidt, S T; Novoa, D; Russell, P St J

    2015-12-11

    In 1964 Bloembergen and Shen predicted that Raman gain could be suppressed if the rates of phonon creation and annihilation (by inelastic scattering) exactly balance. This is only possible if the momentum required for each process is identical, i.e., phonon coherence waves created by pump-to-Stokes scattering are identical to those annihilated in pump-to-anti-Stokes scattering. In bulk gas cells, this can only be achieved over limited interaction lengths at an oblique angle to the pump axis. Here we report a simple system that provides dramatic Raman gain suppression over long collinear path lengths in hydrogen. It consists of a gas-filled hollow-core photonic crystal fiber whose zero dispersion point is pressure adjusted to lie close to the pump laser wavelength. At a certain precise pressure, stimulated generation of Stokes light in the fundamental mode is completely suppressed, allowing other much weaker phenomena such as spontaneous Raman scattering to be explored at high pump powers.

  18. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
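
    The scaled-integer quantization idea can be sketched as follows, on a hypothetical synthetic image. zlib stands in for the Rice/GZIP codecs, and estimating the noise sigma from median absolute pixel differences is one simple choice of noise measure:

```python
import numpy as np
import zlib

rng = np.random.default_rng(1)
x = np.arange(128, dtype=np.float64)
signal = 100.0 * np.exp(-((x - 64.0) ** 2) / 500.0)      # smooth "source"
img = (signal[:, None] + rng.normal(0.0, 1.0, (128, 128))).astype(np.float32)

raw_size = len(zlib.compress(img.tobytes()))

# Estimate the noise sigma from differences of adjacent pixels
# (robust median absolute deviation, scaled for a Gaussian).
sigma = 1.4826 * np.median(np.abs(np.diff(img, axis=1))) / np.sqrt(2.0)

q = sigma / 4.0                            # keep a couple of noise bits
ints = np.round(img / q).astype(np.int16)  # pixels as scaled integers
quantized_size = len(zlib.compress(ints.tobytes()))
```

    The float mantissas are dominated by incompressible noise bits, while the scaled integers discard most of them yet stay within half a quantization step of the original pixels.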

  19. Compressed digital holography: from micro towards macro

    NASA Astrophysics Data System (ADS)

    Schretter, Colas; Bettens, Stijn; Blinder, David; Pesquet-Popescu, Béatrice; Cagnazzo, Marco; Dufaux, Frédéric; Schelkens, Peter

    2016-09-01

    signal processing methods from software-driven computer engineering and applied mathematics. The compressed sensing theory in particular established a practical framework for reconstructing the scene content using few linear combinations of complex measurements and a sparse prior for regularizing the solution. Compressed sensing found direct applications in digital holography for microscopy. Indeed, the wave propagation phenomenon in free space mixes in a natural way the spatial distribution of point sources from the 3-dimensional scene. As the 3-dimensional scene is mapped to a 2-dimensional hologram, the hologram samples form a compressed representation of the scene as well. This overview paper discusses contributions in the field of compressed digital holography at the micro scale. Then, an outreach on future extensions towards the real-size macro scale is discussed. Thanks to advances in sensor technologies, increasing computing power and the recent improvements in sparse digital signal processing, holographic modalities are on the verge of practical high-quality visualization at a macroscopic scale where much higher resolution holograms must be acquired and processed on the computer.

  20. Quantum autoencoders for efficient compression of quantum data

    NASA Astrophysics Data System (ADS)

    Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan

    2017-12-01

    Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.

  1. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  2. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
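
    The unit-uncertainty idea common to both patent records, nudging quantized indices by at most one unit so that adjacent values carry hidden bits, can be illustrated with a simple parity scheme. This is an illustration of the principle only, not the patented method:

```python
import numpy as np

def embed(indices, bits):
    """Force the parity of each leading index to equal the message bit,
    changing any index by at most one unit."""
    out = np.array(indices, dtype=np.int64)
    for k, b in enumerate(bits):
        out[k] += int(b) - int(out[k] % 2)   # adjustment is -1, 0, or +1
    return out

def extract(indices, n):
    return [int(v % 2) for v in indices[:n]]

rng = np.random.default_rng(0)
indices = rng.integers(-50, 50, size=32)     # hypothetical quantized coefficients
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(indices, bits)
```

    Because every index moves by at most one quantization unit, the perturbation stays within the uncertainty the lossy codec already introduced, which is what makes the auxiliary data imperceptible.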

  3. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is essential in business data processing because of the cost savings it offers and the large volumes of data manipulated in many business applications. Compression is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings; in communication, we always want to transmit data efficiently and noise-free. This paper presents several techniques for lossless compression of text data and compares the results of multiple and single compression, which helps to identify the better compression output and to develop compression algorithms.

  4. Loop gain stabilizing with an all-digital automatic-gain-control method for high-precision fiber-optic gyroscope.

    PubMed

    Zheng, Yue; Zhang, Chunxi; Li, Lijing; Song, Lailiang; Chen, Wen

    2016-06-10

    For a fiber-optic gyroscope (FOG) that uses electronic dithers to suppress the dead zone, without a fixed loop gain the deterministic compensation for the dither signals in the control loop cannot remain accurate, resulting in dither residuals in the FOG rotation rate output and navigation errors in the inertial navigation system. An all-digital automatic-gain-control method for stabilizing the loop gain of the FOG is proposed. By using a perturbation square wave to measure the loop gain of the FOG and adding an automatic gain control loop to the conventional control loop, we obtain the actual loop gain and make it converge to the reference value. The experimental results show that in the case of a 20% variation in the loop gain, the dither residuals are successfully eliminated and the standard deviation of the FOG sampling outputs is decreased from 2.00 deg/h to 0.62 deg/h (sampling period 2.5 ms, 10-point smoothing). With this method, the loop gain of the FOG can be stabilized over the operating temperature range and in long-term operation, which provides a solid foundation for engineering applications of the high-precision FOG.
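
    The measure-and-correct loop can be sketched as a discrete integrator. The constants are hypothetical, and the perturbation measurement is idealized as a direct read of the effective loop gain:

```python
def stabilize_loop_gain(plant_gain, target=1.0, mu=0.2, steps=200):
    """Integral AGC: measure the effective loop gain each cycle and
    adjust a digital compensation gain until it matches the target."""
    comp = 1.0                                # digital compensation gain
    for _ in range(steps):
        measured = plant_gain * comp          # idealized perturbation measurement
        comp += mu * (target - measured)      # integral update
    return plant_gain * comp                  # stabilized effective loop gain

# A +/-20% drift in the analog gain is pulled back to the reference value.
drifted_gains = (0.8, 1.0, 1.2)
stabilized = [stabilize_loop_gain(g) for g in drifted_gains]
```

    The update converges geometrically (with factor 1 - mu*gain), so any slow drift of the analog gain is tracked out and the deterministic dither compensation stays accurate.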

  5. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
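    The PRD and compression-factor figures quoted above can be computed with small helpers. The sketch below uses one common definition of the percent residual difference (PRD); the signal and distortion are toy values, not the paper's EMG data.

    ```python
    import math

    def prd(original, reconstructed):
        """Percent residual difference (one common definition) between a
        signal and its reconstruction."""
        num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
        den = sum(x ** 2 for x in original)
        return 100.0 * math.sqrt(num / den)

    def compression_factor(original_size, compressed_size):
        """Compression factor expressed as a percentage size reduction."""
        return 100.0 * (1.0 - compressed_size / original_size)

    # Toy example: a synthetic signal and a slightly distorted reconstruction.
    signal = [math.sin(0.1 * n) for n in range(1000)]
    recon = [s + 0.01 for s in signal]
    ```

    With these definitions, a "compression factor of 90%" means the compressed stream is one tenth the original size, and a small PRD means the reconstruction stays close to the original in an energy sense.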

  6. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs hand over White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles.
The ACS calculates the desired gimbal motion for tracking the ground station or for slewing

  7. Shock ignition targets: gain and robustness vs ignition threshold factor

    NASA Astrophysics Data System (ADS)

    Atzeni, Stefano; Antonelli, Luca; Schiavi, Angelo; Picone, Silvia; Volponi, Gian Marco; Marocchino, Alberto

    2017-10-01

    Shock ignition is a laser direct-drive inertial confinement fusion scheme in which the stages of compression and hot-spot formation are partly separated. The hot spot is created at the end of the implosion by a converging shock driven by a final "spike" of the laser pulse. Several shock-ignition target concepts have been proposed and relevant gain curves computed (see, e.g.). Here, we consider both pure-DT targets and more facility-relevant targets with a plastic ablator. The investigation is conducted with 1D and 2D hydrodynamic simulations. We determine ignition threshold factors (ITFs), and their dependence on laser pulse parameters, by means of 1D simulations. 2D simulations indicate that robustness to long-scale perturbations increases with ITF. Gain curves (gain vs laser energy) for different ITFs are generated using 1D simulations. Work partially supported by Sapienza Project C26A15YTMA, Sapienza 2016 (n. 257584), Eurofusion Project AWP17-ENR-IFE-CEA-01.

  8. The second modern condition? Compressed modernity as internalized reflexive cosmopolitization.

    PubMed

    Kyung-Sup, Chang

    2010-09-01

    Compressed modernity is a civilizational condition in which economic, political, social and/or cultural changes occur in an extremely condensed manner with respect to both time and space, and in which the dynamic coexistence of mutually disparate historical and social elements leads to the construction and reconstruction of a highly complex and fluid social system. During what Beck considers the second modern stage of humanity, every society reflexively internalizes cosmopolitanized risks. Societies (or their civilizational conditions) are thereby being internalized into each other, making compressed modernity a universal feature of contemporary societies. This paper theoretically discusses compressed modernity as nationally ramified from reflexive cosmopolitization, and then comparatively illustrates varying instances of compressed modernity in advanced capitalist societies, un(der)developed capitalist societies, and system transition societies. In lieu of a conclusion, I point out the declining status of national societies as the dominant unit of (compressed) modernity and the interactive acceleration of compressed modernity among different levels of human life ranging from individuals to the global community. © London School of Economics and Political Science 2010.

  9. Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Rao, Xiongbin; Lau, Vincent K. N.

    2014-06-01

    To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.

  10. Measurement of compressed breast thickness by optical stereoscopic photogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Albert H.; Mawdsley, Gordon E.; Yaffe, Martin J.

    2009-02-15

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  11. Measurement of compressed breast thickness by optical stereoscopic photogrammetry.

    PubMed

    Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J

    2009-02-01

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  12. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  13. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  14. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  15. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  16. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  17. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
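    The AMBTC step mentioned above replaces each block by a bitmap plus two reconstruction levels, preserving the block mean and first absolute moment. A minimal sketch for one 16-pixel block follows; the pixel values are toy numbers, and the quadtree and adaptive bit-plane logic of the paper are not modeled.

    ```python
    # Illustrative AMBTC encode/decode for a single block (toy values).

    def ambtc_encode(block):
        n = len(block)
        mean = sum(block) / n
        bitmap = [1 if p >= mean else 0 for p in block]   # one bit per pixel
        q = sum(bitmap)                                   # number of "high" pixels
        high = sum(p for p, b in zip(block, bitmap) if b) / q if q else mean
        low = sum(p for p, b in zip(block, bitmap) if not b) / (n - q) if q < n else mean
        return bitmap, low, high

    def ambtc_decode(bitmap, low, high):
        # Each pixel is reconstructed as one of the two levels.
        return [high if b else low for b in bitmap]

    block = [10, 12, 11, 13, 50, 52, 51, 53, 10, 11, 12, 13, 50, 51, 52, 53]
    bitmap, low, high = ambtc_encode(block)
    recon = ambtc_decode(bitmap, low, high)
    ```

    The 16 pixels compress to 16 bitmap bits plus two levels, and the reconstructed block keeps the same mean as the original, which is the defining property of AMBTC.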

  18. The Galileo high gain antenna deployment anomaly

    NASA Technical Reports Server (NTRS)

    Johnson, Michael R.

    1994-01-01

    On April 11, 1991, the Galileo spacecraft executed a sequence intended to open the spacecraft's High Gain Antenna. Although the antenna's launch restraint had been released shortly after launch, when the deployment sequence was executed the antenna, which opens like an umbrella, never reached the fully deployed position. The analyses and tests that followed allowed a conclusive determination of the likely failure mechanisms and pointed to some strategies for recovery of the high gain antenna.

  19. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L(infinity) coding sense.

  20. Sudden gains in group cognitive-behavioral therapy for panic disorder.

    PubMed

    Clerkin, Elise M; Teachman, Bethany A; Smith-Janik, Shannan B

    2008-11-01

    The current study investigates sudden gains (rapid symptom reduction) in group cognitive-behavioral therapy for panic disorder. Sudden gains occurring after session 2 of treatment predicted overall symptom reduction at treatment termination and some changes in cognitive biases. Meanwhile, sudden gains occurring immediately following session 1 were not associated with symptom reduction or cognitive change. Together, this research points to the importance of examining sudden gains across the entire span of treatment, as well as the potential role of sudden gains in recovery from panic disorder.

  1. Shear wave pulse compression for dynamic elastography using phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Nguyen, Thu-Mai; Song, Shaozhen; Arnal, Bastien; Wong, Emily Y.; Huang, Zhihong; Wang, Ruikang K.; O'Donnell, Matthew

    2014-01-01

    Assessing the biomechanical properties of soft tissue provides clinically valuable information to supplement conventional structural imaging. In previous studies, we introduced a dynamic elastography technique based on phase-sensitive optical coherence tomography (PhS-OCT) to characterize submillimetric structures such as skin layers or ocular tissues. Here, we propose to implement a pulse compression technique for shear wave elastography. We performed shear wave pulse compression in tissue-mimicking phantoms. Using a mechanical actuator to generate broadband frequency-modulated vibrations (1 to 5 kHz), induced displacements were detected at an equivalent frame rate of 47 kHz using a PhS-OCT. The recorded signal was digitally compressed to a broadband pulse. Stiffness maps were then reconstructed from spatially localized estimates of the local shear wave speed. We demonstrate that a simple pulse compression scheme can increase shear wave detection signal-to-noise ratio (>12 dB gain) and reduce artifacts in reconstructing stiffness maps of heterogeneous media.
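    The core of pulse compression, here as in radar, is matched filtering: correlating the recorded trace with the known frequency-modulated excitation collapses the long chirp into a sharp peak at the arrival time. The sketch below is a generic illustration; only the 1-5 kHz band is taken from the abstract, and the sampling rate, delay, and amplitudes are made-up values.

    ```python
    import math

    fs = 50_000.0               # sampling rate, Hz (illustrative)
    T = 0.02                    # chirp duration, s
    f0, f1 = 1_000.0, 5_000.0   # swept band, Hz (1-5 kHz, as in the abstract)
    n = int(T * fs)

    # Linear frequency-modulated excitation (the known "long pulse").
    chirp = [math.sin(2 * math.pi * (f0 * t + (f1 - f0) / (2 * T) * t * t))
             for t in (i / fs for i in range(n))]

    # Received trace: the chirp arriving after some propagation delay.
    delay = 300
    trace = [0.0] * delay + chirp + [0.0] * 200

    # Matched filter: cross-correlate the trace with the known chirp. The energy
    # of the long sweep is compressed into a sharp peak at the arrival time.
    corr = [sum(trace[lag + i] * chirp[i] for i in range(n))
            for lag in range(len(trace) - n + 1)]
    peak = max(range(len(corr)), key=lambda k: corr[k])
    ```

    The peak of `corr` lands at the true delay, and its height equals the full chirp energy, which is the SNR advantage of transmitting a long coded pulse instead of a short impulse.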

  2. Realizing Ultrafast Electron Pulse Self-Compression by Femtosecond Pulse Shaping Technique.

    PubMed

    Qi, Yingpeng; Pei, Minjie; Qi, Dalong; Yang, Yan; Jia, Tianqing; Zhang, Shian; Sun, Zhenrong

    2015-10-01

    The uncorrelated position and velocity distribution of the electron bunch at the photocathode, arising from the residual energy, greatly limits the transverse coherence length and the recompression ability. Here we first propose a femtosecond pulse-shaping method to realize electron pulse self-compression in an ultrafast electron diffraction system, based on a point-to-point space-charge model. A positively chirped femtosecond laser pulse can correspondingly create a positively chirped electron bunch at the photocathode (such as a metal-insulator heterojunction), and such a shaped electron pulse can self-compress during subsequent propagation. The greatest advantage of our proposed scheme is that no additional components are introduced into the ultrafast electron diffraction system, so the electron bunch shape is not affected. More importantly, this scheme can break the limitation that an electron pulse produced via post-photocathode static compression schemes is no shorter than the excitation laser pulse, a limitation due to the uncorrelated position and velocity distribution of the initial electron bunch.

  3. A 1-channel 3-band wide dynamic range compression chip for vibration transducer of implantable hearing aids.

    PubMed

    Kim, Dongwook; Seong, Kiwoong; Kim, Myoungnam; Cho, Jinho; Lee, Jyunghyun

    2014-01-01

    In this paper, a digital audio processing chip using a wide dynamic range compression (WDRC) algorithm is designed and implemented for an implantable hearing aid system. The designed chip operates at a single voltage of 3.3 V and drives 16-bit parallel input and output at a 32 kHz sample rate. The chip has a 1-channel, 3-band WDRC composed of an FIR filter bank, a level detector, and a compression part. To verify the performance of the designed chip, we measured the frequency separation of the bands and the compression gain control used to reflect the hearing threshold level.
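    The compression gain control in a WDRC band maps input level to gain using a knee point and a compression ratio. A toy static gain rule for a single band is sketched below; the knee, ratio, and gain values are hypothetical, not the chip's actual fitting parameters.

    ```python
    # Toy static WDRC gain rule for one band (parameter values are hypothetical):
    # full linear gain below the compression knee, reduced gain growth above it.

    def wdrc_gain_db(level_db, knee_db=45.0, ratio=3.0, linear_gain_db=20.0):
        if level_db <= knee_db:
            return linear_gain_db
        # Above the knee, output level rises only 1/ratio dB per input dB,
        # so the applied gain falls off as the input gets louder.
        return linear_gain_db - (level_db - knee_db) * (1.0 - 1.0 / ratio)

    quiet_gain = wdrc_gain_db(40.0)   # soft input: full linear gain
    loud_gain = wdrc_gain_db(90.0)    # loud input: compressed gain
    ```

    This is how WDRC squeezes a wide acoustic dynamic range into the narrower residual dynamic range of an impaired ear: soft sounds get full amplification while loud sounds get progressively less.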

  4. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.

  5. Dynamics of Two Point Vortices in an External Compressible Shear Flow

    NASA Astrophysics Data System (ADS)

    Vetchanin, Evgeny V.; Mamaev, Ivan S.

    2017-12-01

    This paper is concerned with a system of equations that describes the motion of two point vortices in a flow possessing constant uniform vorticity and perturbed by an acoustic wave. The system is shown to have both regular and chaotic regimes of motion. In addition, simple and chaotic attractors are found in the system. Attention is given to bifurcations of fixed points of a Poincaré map which lead to the appearance of these regimes. It is shown that, in the case where the total vortex strength changes, the "reversible pitch-fork" bifurcation is a typical scenario of emergence of asymptotically stable fixed and periodic points. As a result of this bifurcation, a saddle point, a stable and an unstable point of the same period emerge from an elliptic point of some period. By constructing and analyzing charts of dynamical regimes and bifurcation diagrams we show that a cascade of period-doubling bifurcations is a typical scenario of transition to chaos in the system under consideration.

  6. Practicality of magnetic compression for plasma density control

    DOE PAGES

    Gueroult, Renaud; Fisch, Nathaniel J.

    2016-03-16

    Here, plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators [P. F. Schmit and N. J. Fisch, Phys. Rev. Lett. 109, 255003 (2012)]. Using particle-in-cell simulations with the real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic-like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different from and more complex than initially envisioned, these features still might be advantageous in particle accelerators.

  7. Sudden Gains in Group Cognitive-Behavioral Therapy for Panic Disorder

    PubMed Central

    Clerkin, Elise M.; Teachman, Bethany A.; Smith-Janik, Shannan B.

    2008-01-01

    The current study investigates sudden gains (rapid symptom reduction) in group cognitive-behavioral therapy for panic disorder. Sudden gains occurring after session 2 of treatment predicted overall symptom reduction at treatment termination and some changes in cognitive biases. Meanwhile, sudden gains occurring immediately following session 1 were not associated with symptom reduction or cognitive change. Together, this research points to the importance of examining sudden gains across the entire span of treatment, as well as the potential role of sudden gains in recovery from panic disorder. PMID:18804199

  8. GMZ: A GML Compression Model for WebGIS

    NASA Astrophysics Data System (ADS)

    Khandelwal, A.; Rajan, K. S.

    2017-09-01

    Geography Markup Language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience to define custom feature profiles in GML for specific needs, as seen in the widely popular CityGML, the simple features profile, coverage, etc. The simple features profile (SFP) is a simpler subset of GML with support for point, line and polygon geometries, constructed to cover the most commonly used GML geometries. The Web Feature Service (WFS) serves query results in SFP by default, but SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work for SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.

  9. Numerical study on the maximum small-signal gain coefficient in passively mode-locked fiber lasers

    NASA Astrophysics Data System (ADS)

    Tang, Xin; Wang, Jian; Chen, Zhaoyang; Lin, Chengyou; Ding, Yingchun

    2017-06-01

    Ultrashort pulses have been found to have important applications in many fields, such as ultrafast diagnosis, biomedical engineering, and optical imaging. Passively mode-locked fiber lasers have become a tool for generating picosecond and femtosecond pulses. In this paper, the evolution of a picosecond laser pulse in different stable passively mode-locked fiber lasers is analyzed using the nonlinear Schrödinger equation. First, different mode-locked regimes are calculated for different net cavity dispersions (from -0.3 ps² to +0.3 ps²). Then we calculate the maximum small-signal gain under the different net cavity dispersion conditions, and estimate the pulse width, 3 dB bandwidth and time-bandwidth product (TBP) when the small-signal gain coefficient is set to its maximum value. The results show that the small-signal gain coefficient is approximately proportional to the net cavity dispersion. Moreover, when the small-signal gain coefficient reaches its maximum value, the pulse width of the output pulse and the corresponding TBP increase gradually, while the 3 dB bandwidth first increases and then decreases. In addition, when the net dispersion is positive, the pulse carries a large frequency chirp; dechirping the pulse is therefore investigated, and the output pulse is compressed with a compression ratio of more than 10. The results provide a reference for the optimization of passively mode-locked fiber lasers.

  10. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to checkpointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean prove the high potential of this approach with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate into both finite-difference and finite-element wave propagation codes.
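    One ingredient of the described approach, re-quantizing values with an adaptive floating-point accuracy, can be sketched by rounding each sample to a reduced number of mantissa bits. The helper below is an illustrative stand-in, not the authors' code; the sample values are arbitrary.

    ```python
    import math

    def truncate_mantissa(x, bits):
        """Round x to `bits` mantissa bits (a crude lossy re-quantization)."""
        if x == 0.0:
            return 0.0
        m, e = math.frexp(x)                   # x = m * 2**e with 0.5 <= |m| < 1
        m = round(m * (1 << bits)) / (1 << bits)
        return math.ldexp(m, e)

    samples = [0.123456789, -3.14159265, 1e-6, 42.0]
    lossy = [truncate_mantissa(x, 8) for x in samples]
    ```

    Because the mantissa satisfies |m| >= 0.5, the relative error of each sample is bounded by about 2**-bits, so the accuracy (and thus the effective compression) can be adapted per value or per region, in the spirit of the error-controlled quantization described above.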

  11. Issues with Strong Compression of Plasma Target by Stabilized Imploding Liner

    NASA Astrophysics Data System (ADS)

    Turchi, Peter; Frese, Sherry; Frese, Michael

    2017-10-01

    Strong compression (10:1 in radius) of an FRC by imploding liquid-metal liners stabilized against Rayleigh-Taylor modes, using different loss scalings based on Bohm vs. 100X classical diffusion rates, predicts useful compressions with implosion times half the initial energy lifetime. The elongation (length-to-diameter ratio) near peak compression needed to satisfy the empirical stability criterion and also retain alpha particles is about ten. The present paper extends these considerations to issues of the initial FRC, including stability conditions (S*/E) and allowable angular speeds. Furthermore, efficient recovery of the implosion energy and alpha-particle work, in order to reduce the nuclear gain necessary for an economical power reactor, is seen as an important element of the stabilized liner implosion concept for fusion. We describe recent progress in the design and construction of the high-energy-density prototype of a Stabilized Liner Compressor (SLC), leading to repetitive laboratory experiments to develop the plasma target. Supported by the ARPA-E ALPHA Program.

  12. 3D single point imaging with compressed sensing provides high temporal resolution R2* mapping for in vivo preclinical applications.

    PubMed

    Rioux, James A; Beyea, Steven D; Bowen, Chris V

    2017-02-01

    Purely phase-encoded techniques such as single point imaging (SPI) are generally unsuitable for in vivo imaging due to lengthy acquisition times. Reconstruction of highly undersampled data using compressed sensing allows SPI data to be quickly obtained from animal models, enabling applications in preclinical cellular and molecular imaging. TurboSPI is a multi-echo single point technique that acquires hundreds of images with microsecond spacing, enabling high temporal resolution relaxometry of large-R2* systems such as iron-loaded cells. TurboSPI acquisitions can be pseudo-randomly undersampled in all three dimensions to increase artifact incoherence, and can provide prior information to improve reconstruction. We evaluated the performance of CS-TurboSPI in phantoms, a rat ex vivo, and a mouse in vivo. An algorithm for iterative reconstruction of TurboSPI relaxometry time courses does not affect image quality or R2* mapping in vitro at acceleration factors up to 10. Imaging ex vivo is possible at similar acceleration factors, and in vivo imaging is demonstrated at an acceleration factor of 8, such that acquisition time is under 1 h. Accelerated TurboSPI enables preclinical R2* mapping without loss of data quality, and may show increased specificity to iron oxide compared to other sequences.

  13. [Atypical antipsychotic-induced weight gain].

    PubMed

    Godlewska, Beata R; Olajossy-Hilkesberger, Luiza; Marmurowska-Michałowska, Halina; Olajossy, Marcin; Landowski, Jerzy

    2006-01-01

    The introduction of a new group of antipsychotic drugs, called atypical because of the properties distinguishing them from classical neuroleptics, gave hope for the beginning of a new era in the treatment of psychoses, including schizophrenia. Different mechanisms of action resulted not only in a broader spectrum of action and high efficacy but also in a relative lack of extrapyramidal symptoms. However, atypical neuroleptics are not totally free from adverse effects. Symptoms such as sedation, metabolic changes and weight gain, often very quick and severe (present also with classical drugs, but pushed into the background by extrapyramidal symptoms) have become prominent. Weight gain is important both from the clinical and the subjective point of view, as it is associated with serious somatic consequences and is a source of enormous mental distress. These problems are addressed in this review, with the focus on weight gain associated with the use of specific atypical neuroleptics.

  14. The Gains from Vertical Scaling

    ERIC Educational Resources Information Center

    Briggs, Derek C.; Domingue, Ben

    2013-01-01

    It is often assumed that a vertical scale is necessary when value-added models depend upon the gain scores of students across two or more points in time. This article examines the conditions under which the scale transformations associated with the vertical scaling process would be expected to have a significant impact on normative interpretations…

  15. Effect of compressive force on PEM fuel cell performance

    NASA Astrophysics Data System (ADS)

    MacDonald, Colin Stephen

    question and the performance gains from the aforementioned compression factors were quantified. The study provided a considerable amount of practical and analytical knowledge in the area of cell compression and shed light on the importance of precision compressive control within the PEM fuel cell.

  16. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
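    The index-manipulation idea lends itself to a toy illustration. The sketch below is not the patented algorithm; it simply exploits the one-unit uncertainty the abstract describes by forcing each index's parity to match an auxiliary bit, so the bits can later be read back from the parities.

    ```python
    def embed(indices, bits):
        """Embed one auxiliary bit per index by forcing the index parity
        to match the bit, changing each index by at most one unit."""
        out = []
        for idx, bit in zip(indices, bits):
            if idx % 2 != bit:
                idx += 1 if idx % 2 == 0 else -1   # adjust by one unit
            out.append(idx)
        return out

    def extract(indices, n):
        """Recover the first n embedded bits from the index parities."""
        return [idx % 2 for idx in indices[:n]]

    quantized = [12, 7, 3, 40, 41, 18]   # toy quantization indices
    auxiliary = [1, 0, 1, 1, 0, 0]
    stego = embed(quantized, auxiliary)
    print(stego)                          # each index moved by at most 1
    print(extract(stego, len(auxiliary)))
    ```

    Since each index changes by at most one unit, the perturbation stays within the uncertainty the lossy quantization already introduced.
    
    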

  17. Optimum SNR data compression in hardware using an Eigencoil array.

    PubMed

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces the signal-to-noise ratio (SNR) of an optimal reconstruction while using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated: optimal SNR was obtained using only four channels, and with a three-channel Eigencoil, sum-of-squares SNR superior to that of the standard eight-channel array was achieved. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
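    A software analogue of this hardware channel combination can be sketched with an eigen-decomposition: project the physical channels onto the dominant eigenvectors of their covariance and keep only the top few virtual channels. The simulated 8-channel data below is an assumption for illustration, not the Eigencoil hardware design.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated calibration data: 8 physical channels x 500 samples,
    # with most energy concentrated in a few correlated modes.
    mixing = rng.normal(size=(8, 3))            # 3 dominant signal modes
    data = mixing @ rng.normal(size=(3, 500)) + 0.05 * rng.normal(size=(8, 500))

    # Eigen-decomposition of the channel covariance gives the
    # "eigencoil" combination weights.
    cov = data @ data.conj().T
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]           # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Keep the top 4 virtual channels (8 -> 4 channel reduction).
    weights = eigvecs[:, :4]
    virtual = weights.conj().T @ data           # 4 x 500 compressed data

    retained = eigvals[:4].sum() / eigvals.sum()
    print(f"energy retained by 4 virtual channels: {retained:.4f}")
    ```

    In hardware, the same fixed linear combination is applied by the RF combiner before the receivers, so only the virtual channels need to be digitized.
    
    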

  18. [Manual trigger point therapy of shoulder pain : Randomized controlled study of effectiveness].

    PubMed

    Sohns, S; Schnieder, K; Licht, G; von Piekartz, H

    2016-12-01

    Although chronic shoulder pain is highly prevalent and myofascial trigger points (mTrP) are thought to be found in the majority of patients with shoulder complaints, their influence on the pain mechanism remains unclear. There are only a few controlled clinical studies on the effects of manual trigger point compression therapy. This randomized controlled trial (RCT) compared the short-term effects of manual trigger point compression therapy (n = 6) with manual sham therapy (n = 6) in patients with unilateral shoulder pain due to myofascial syndrome (MFS). The measurement data were collected before and after two sessions of therapy. Pressure pain thresholds (PPT) of mTrP and of symmetrically located points on the asymptomatic side were measured together with neutral points in order to detect a potential unilateral or generalized hyperalgesia. Additionally, pain was assessed on a visual analog scale (VAS) at rest and during movement, and the neck disability index (NDI) and disabilities of the arm, shoulder and hand (DASH) questionnaires were also completed and evaluated. Both treatment modalities led to a significant improvement; however, manual trigger point compression therapy was significantly more effective than sham therapy, as measured by several parameters. The significant improvement of PPT values in the intervention group, even at sites that were not directly treated, indicates central mechanisms in pain threshold modulation induced by manual compression therapy. The weaker but still measurable effects of sham therapy might be explained by the sham modality being a hands-on technique, or by sufficient stimulation of the trigger point region during the diagnostics and PPT measurements.

  19. Fracture in Compression of Brittle Solids

    DTIC Science & Technology

    1983-08-01

    Keywords: acoustic emission; high-strength steel; compression. ... mechanistic models are related to the phenomenological developments in dilatational plasticity that have been applied widely in concrete technology. The ... is reviewed in some detail, both from the point of view of fundamentals as well as of technological applications. Experimental verification of models is ...

  20. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    NASA Astrophysics Data System (ADS)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

    In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can jointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-to-the-user compression at ratios of 2:1 to 4:1, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods such as object detection.

  1. New numerical solutions of three-dimensional compressible hydrodynamic convection [in stars]

    NASA Technical Reports Server (NTRS)

    Hossain, Murshed; Mullan, D. J.

    1990-01-01

    Numerical solutions of three-dimensional compressible hydrodynamics (including sound waves) in a stratified medium with open boundaries are presented. Convergent/divergent points play a controlling role in the flows, which are dominated by a single frequency related to the mean sound crossing time. Superposed on these rapid compressive flows, slower eddy-like flows eventually create convective transport. The solutions contain small structures stacked on top of larger ones, with vertical scales equal to the local pressure scale heights, H sub p. Although convective transport starts later in the evolution, vertical scales of H sub p are apparently selected at much earlier times by nonlinear compressive effects.

  2. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  3. Cardiopulmonary resuscitation by chest compression alone or with mouth-to-mouth ventilation.

    PubMed

    Hallstrom, A; Cobb, L; Johnson, E; Copass, M

    2000-05-25

    Despite extensive training of citizens of Seattle in cardiopulmonary resuscitation (CPR), bystanders do not perform CPR in almost half of witnessed cardiac arrests. Instructions in chest compression plus mouth-to-mouth ventilation given by dispatchers over the telephone can require 2.4 minutes. In experimental studies, chest compression alone is associated with survival rates similar to those with chest compression plus mouth-to-mouth ventilation. We conducted a randomized study to compare CPR by chest compression alone with CPR by chest compression plus mouth-to-mouth ventilation. The setting of the trial was an urban, fire-department-based, emergency-medical-care system with central dispatching. In a randomized manner, telephone dispatchers gave bystanders at the scene of apparent cardiac arrest instructions in either chest compression alone or chest compression plus mouth-to-mouth ventilation. The primary end point was survival to hospital discharge. Data were analyzed for 241 patients randomly assigned to receive chest compression alone and 279 assigned to chest compression plus mouth-to-mouth ventilation. Complete instructions were delivered in 62 percent of episodes for the group receiving chest compression plus mouth-to-mouth ventilation and 81 percent of episodes for the group receiving chest compression alone (P=0.005). Instructions for compression required 1.4 minutes less to complete than instructions for compression plus mouth-to-mouth ventilation. Survival to hospital discharge was better among patients assigned to chest compression alone than among those assigned to chest compression plus mouth-to-mouth ventilation (14.6 percent vs. 10.4 percent), but the difference was not statistically significant (P=0.18). The outcome after CPR with chest compression alone is similar to that after chest compression with mouth-to-mouth ventilation, and chest compression alone may be the preferred approach for bystanders inexperienced in CPR.
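    The reported survival comparison (14.6 percent vs. 10.4 percent, P=0.18) can be checked approximately with a pooled two-proportion z-test. The survivor counts below (35 of 241 and 29 of 279) are inferred from the percentages, and the z-test is an illustrative stand-in for whatever test the authors actually used, so the resulting p-value only lands in the same neighborhood.

    ```python
    import math

    def two_proportion_p(x1, n1, x2, n2):
        """Two-sided p-value for a pooled two-proportion z-test."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        # two-sided p-value from the standard normal CDF (via erf)
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    # 14.6% of 241 ~ 35 survivors; 10.4% of 279 ~ 29 survivors (inferred counts)
    p = two_proportion_p(35, 241, 29, 279)
    print(f"approximate two-sided p-value: {p:.3f}")
    ```

    The value is comfortably above 0.05, consistent with the abstract's conclusion that the difference was not statistically significant.
    
    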

  4. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two that allows image retrieval to be performed directly in the compressed domain of images, without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. they discard visually less significant information, lossless techniques are still required in fields like medical imaging, or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the insight that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
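    The first method's premise, that prediction residuals already form a textural description, can be sketched in a few lines. The left-neighbor predictor and L1 histogram distance below are illustrative choices, not the paper's exact scheme.

    ```python
    import numpy as np

    def residual_histogram(img, bins=16):
        """Texture descriptor from prediction residuals: predict each pixel
        from its left neighbor and histogram the prediction errors."""
        img = img.astype(np.int32)
        residuals = img[:, 1:] - img[:, :-1]          # left-neighbor predictor
        hist, _ = np.histogram(residuals, bins=bins, range=(-255, 255))
        return hist / hist.sum()                      # normalized descriptor

    def l1_distance(h1, h2):
        return np.abs(h1 - h2).sum()

    rng = np.random.default_rng(1)
    # Smooth "texture": a slow random walk; noisy "texture": uniform pixels.
    smooth = np.cumsum(rng.integers(-2, 3, size=(32, 32)), axis=1) + 128
    noisy = rng.integers(0, 256, size=(32, 32))

    d_same = l1_distance(residual_histogram(smooth), residual_histogram(smooth))
    d_diff = l1_distance(residual_histogram(smooth), residual_histogram(noisy))
    print(d_same, d_diff)   # identical textures match; different ones do not
    ```

    Because the descriptor is built from the residuals a lossless predictive coder stores anyway, retrieval can proceed without full decompression.
    
    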

  5. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  6. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure unusually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function comprising a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step into each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
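    As a minimal sketch of the underlying gradient-plus-sparsity framework (plain ISTA, without the paper's filtering step), the following recovers a sparse signal from random Gaussian compressive measurements; the filtered variant would apply a filter to the iterate after each shrinkage.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n, m, k = 64, 32, 4                     # signal size, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
    y = Phi @ x_true                             # compressive measurements

    def ista(Phi, y, lam=0.01, iters=1000):
        """Iterative shrinkage-thresholding for
        min_x 0.5 * ||Phi x - y||^2 + lam * ||x||_1."""
        x = np.zeros(Phi.shape[1])
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # 1 / Lipschitz constant
        for _ in range(iters):
            g = x - step * Phi.T @ (Phi @ x - y)     # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # shrinkage
        return x

    x_hat = ista(Phi, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```

    The quadratic term is handled by the gradient step and the sparsity term by soft thresholding, which is exactly the structure the filtered algorithm modifies.
    
    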

  7. Compression ultrasonography of the lower extremity with portable vascular ultrasonography can accurately detect deep venous thrombosis in the emergency department.

    PubMed

    Crisp, Jonathan G; Lovato, Luis M; Jang, Timothy B

    2010-12-01

    Compression ultrasonography of the lower extremity is an established method of detecting proximal lower extremity deep venous thrombosis when performed by a certified operator in a vascular laboratory. Our objective is to determine the sensitivity and specificity of bedside 2-point compression ultrasonography performed in the emergency department (ED) with portable vascular ultrasonography for the detection of proximal lower extremity deep venous thrombosis. We did this by directly comparing emergency physician-performed ultrasonography to lower extremity duplex ultrasonography performed by the Department of Radiology. This was a prospective, cross-sectional study and diagnostic test assessment of a convenience sample of ED patients with a suspected lower extremity deep venous thrombosis, conducted at a single-center, urban, academic ED. All physicians had a 10-minute training session before enrolling patients. ED compression ultrasonography occurred before Department of Radiology ultrasonography and involved identification of 2 specific points: the common femoral and popliteal vessels, with subsequent compression of the common femoral and popliteal veins. The study result was considered positive for proximal lower extremity deep venous thrombosis if either vein was incompressible or a thrombus was visualized. Sensitivity and specificity were calculated with the final radiologist interpretation of the Department of Radiology ultrasonography as the criterion standard. A total of 47 physicians performed 199 2-point compression ultrasonographic examinations in the ED. Median number of examinations per physician was 2 (range 1 to 29 examinations; interquartile range 1 to 5 examinations). There were 45 proximal lower extremity deep venous thromboses observed on Department of Radiology evaluation, all correctly identified by ED 2-point compression ultrasonography. 
The 153 patients without proximal lower extremity deep venous thrombosis all had a negative ED compression

  8. Inflection point caustic problems and solutions for high-gain dual-shaped reflectors

    NASA Technical Reports Server (NTRS)

    Galindo-Israel, Victor; Veruttipong, Thavath; Imbriale, William; Rengarajan, Sembiam

    1990-01-01

    The singular nature of the uniform geometrical theory of diffraction (UTD) subreflector scattered field at the vicinity of the main reflector edge (for a high-gain antenna design) is investigated. It is shown that the singularity in the UTD edge-diffracted and slope-diffracted fields is due to the reflection distance parameter approaching infinity in the transition functions. While the geometrical optics (GO) and UTD edge-diffracted fields exhibit singularities of the same order, the edge slope-diffracted field singularity is more significant and is substantial for greater subreflector edge tapers. The diffraction analysis of such a subreflector in the vicinity of the main reflector edge has been carried out efficiently and accurately by a stationary phase evaluation of the phi-integral, whereas the theta-integral is carried out numerically. Computational results from UTD and physical optics (PO) analysis of a 34-m ground station dual-shaped reflector confirm the analytical formulations for both circularly symmetric and offset asymmetric subreflectors. It is concluded that the proposed PO(theta)GO(phi) technique can be used to study the spillover or noise temperature characteristics of a high-gain reflector antenna efficiently and accurately.

  9. Assessment of the impact of modeling axial compression on PET image reconstruction.

    PubMed

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression, and of its degree of modeling during reconstruction, on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and achieved contrast values similar to those of the span-1 reconstructions. Such approaches have the same processing times as span-1 reconstructions, but they permit a significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher

  10. Determination of stresses in RC eccentrically compressed members using optimization methods

    NASA Astrophysics Data System (ADS)

    Lechman, Marek; Stachurski, Andrzej

    2018-01-01

    The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in the rectangular cross-sections are derived by integrating the equilibrium equations of the cross-sections, taking into account the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For the reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists of solving the set of derived equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments have shown the existence of many points satisfying the equations with very good accuracy. Therefore, some techniques from global optimization were included: starting fmincon from many points, and clustering. The model is verified on a set of data encountered in engineering practice.
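    The multi-start strategy can be mimicked without Matlab: the sketch below runs a box-constrained local minimizer (projected gradient descent, a simple stand-in for fmincon) from many random starting points and clusters the results by merging nearby solutions. Himmelblau's function is a stand-in objective, not the paper's cross-section equilibrium equations.

    ```python
    import numpy as np

    def objective(v):
        # Stand-in multimodal objective; the residuals of the cross-section
        # equilibrium equations would go here instead.
        x, y = v
        return (x**2 + y - 11)**2 + (x + y**2 - 7)**2   # Himmelblau's function

    def grad(v):
        x, y = v
        return np.array([
            4 * x * (x**2 + y - 11) + 2 * (x + y**2 - 7),
            2 * (x**2 + y - 11) + 4 * y * (x + y**2 - 7),
        ])

    lo, hi = -5.0, 5.0                      # box constraints

    def projected_gradient(x0, step=1e-3, iters=5000):
        """Box-constrained local minimization by projected gradient descent."""
        x = x0.copy()
        for _ in range(iters):
            x = np.clip(x - step * grad(x), lo, hi)
        return x

    rng = np.random.default_rng(3)
    minima = []
    for _ in range(30):                     # multi-start from random points
        x = projected_gradient(rng.uniform(lo, hi, size=2))
        # keep converged solutions, merging near-duplicates (crude clustering)
        if objective(x) < 1e-6 and not any(np.allclose(x, m, atol=1e-2) for m in minima):
            minima.append(x)

    print("distinct minima found:", len(minima))   # Himmelblau has four
    ```

    The clustering step is what turns a pile of local-solver runs into a list of distinct solutions, mirroring the procedure described in the abstract.
    
    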

  11. Compression Limit of Two-Dimensional Water Constrained in Graphene Nanocapillaries.

    PubMed

    Zhu, YinBo; Wang, FengChao; Bai, Jaeil; Zeng, Xiao Cheng; Wu, HengAn

    2015-12-22

    Evaluation of the tensile/compression limit of a solid under conditions of tension or compression is often performed to provide mechanical properties that are critical for structure design and assessment. Algara-Siller et al. recently demonstrated that when water is constrained between two sheets of graphene, it becomes a two-dimensional (2D) liquid and then is turned into an intriguing monolayer solid with a square pattern under high lateral pressure [Nature, 2015, 519, 443-445]. From a mechanics point of view, this liquid-to-solid transformation characterizes the compression limit (or metastability limit) of the 2D monolayer water. Here, we perform a simulation study of the compression limit of 2D monolayer, bilayer, and trilayer water constrained in graphene nanocapillaries. At 300 K, a myriad of 2D ice polymorphs (both crystalline-like and amorphous) are formed from the liquid water at different widths of the nanocapillaries, ranging from 6.0 to 11.6 Å. For monolayer water, the compression limit is typically a few hundred MPa, while for the bilayer and trilayer water, the compression limit is 1.5 GPa or higher, reflecting the ultrahigh van der Waals pressure within the graphene nanocapillaries. The compression-limit (phase) diagram is obtained in the nanocapillary width versus pressure (h-P) plane, based on comprehensive molecular dynamics simulations at numerous thermodynamic states as well as on the Clapeyron equation. Interestingly, the compression-limit curves exhibit multiple local minima.

  12. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  13. Merging of the Dirac points in electronic artificial graphene

    NASA Astrophysics Data System (ADS)

    Feilhauer, J.; Apel, W.; Schweitzer, L.

    2015-12-01

    Theory predicts that graphene under uniaxial compressive strain in an armchair direction should undergo a topological phase transition from a semimetal into an insulator. Due to the change of the hopping integrals under compression, both Dirac points shift away from the corners of the Brillouin zone towards each other. For sufficiently large strain, the Dirac points merge and an energy gap appears. However, such a topological phase transition has not yet been observed in normal graphene (due to its large stiffness) neither in any other electronic system. We show numerically and analytically that such a merging of the Dirac points can be observed in electronic artificial graphene created from a two-dimensional electron gas by application of a triangular lattice of repulsive antidots. Here, the effect of strain is modeled by tuning the distance between the repulsive potentials along the armchair direction. Our results show that the merging of the Dirac points should be observable in a recent experiment with molecular graphene.

  14. Pressure-induced structural change in liquid GaIn eutectic alloy.

    PubMed

    Yu, Q; Ahmad, A S; Ståhl, K; Wang, X D; Su, Y; Glazyrin, K; Liermann, H P; Franz, H; Cao, Q P; Zhang, D X; Jiang, J Z

    2017-04-25

    Synchrotron x-ray diffraction reveals a pressure-induced crystallization at about 3.4 GPa and a polymorphic transition near 10.3 GPa when a liquid GaIn eutectic alloy is compressed up to ~13 GPa at room temperature in a diamond anvil cell. Upon decompression, the high-pressure crystalline phase remains almost unchanged until it transforms to the liquid state at around 2.3 GPa. The ab initio molecular dynamics calculations reproduce the low-pressure crystallization and give some hints for understanding the transition between the liquid and the crystalline phase on the atomic level. The calculated pair correlation function g(r) shows a non-uniform contraction, reflected by the different compressibility of the short-range (1st shell) and intermediate-range (2nd to 4th shells) order. It is concluded that the pressure-induced liquid-crystalline phase transformation likely arises from changes in the local atomic packing of the nearest neighbors as well as in the electronic structures at the transition pressure.

  15. Incompressible material point method for free surface flow

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Zhang, Xiong; Sze, Kam Yim; Lian, Yanping; Liu, Yan

    2017-02-01

    To overcome the shortcomings of the weakly compressible material point method (WCMPM) for modeling free surface flow problems, an incompressible material point method (iMPM) is proposed based on an operator splitting technique which splits the solution of the momentum equation into two steps. An intermediate velocity field is first obtained by solving the momentum equations ignoring the pressure gradient term, and then the intermediate velocity field is corrected by the pressure term to obtain a divergence-free velocity field. A level set function which represents the signed distance to the free surface is used to track the free surface and apply the pressure boundary conditions. Moreover, hourglass damping is introduced to suppress the spurious velocity modes which are caused by discretizing the cell-center velocity divergence from the grid vertex velocities when solving the pressure Poisson equations. Numerical examples, including dam break, oscillation of a cubic liquid drop and a droplet impact into a deep pool, show that the proposed incompressible material point method is much more accurate and efficient than the weakly compressible material point method in solving free surface flow problems.
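    The correction step of the operator splitting can be sketched independently of MPM: subtract the gradient of a pressure obtained from a Poisson solve to make the intermediate velocity field divergence-free. The periodic-grid FFT Poisson solver below is an illustrative stand-in for the iMPM grid solve.

    ```python
    import numpy as np

    n = 64
    k = 2 * np.pi * np.fft.fftfreq(n)            # spectral wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid dividing the mean mode by zero

    def project(u, v):
        """Chorin-style projection: make (u, v) divergence-free by removing
        the gradient part, using an FFT Poisson solve on a periodic grid."""
        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        div_hat = 1j * kx * u_hat + 1j * ky * v_hat
        p_hat = -div_hat / k2                    # spectral solve of  lap p = div
        u_hat -= 1j * kx * p_hat                 # subtract grad p
        v_hat -= 1j * ky * p_hat
        return np.fft.ifft2(u_hat).real, np.fft.ifft2(v_hat).real

    def divergence(u, v):
        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        return np.fft.ifft2(1j * kx * u_hat + 1j * ky * v_hat).real

    rng = np.random.default_rng(4)
    u, v = rng.normal(size=(n, n)), rng.normal(size=(n, n))   # intermediate velocity
    u2, v2 = project(u, v)
    print("max |div| after projection:", np.abs(divergence(u2, v2)).max())
    ```

    In the iMPM the Poisson equation is instead discretized on the background grid with free-surface pressure boundary conditions, but the split-then-correct structure is the same.
    
    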

  16. Qualitative analysis of gain spectra of InGaAlAs/InP lasing nano-heterostructure

    NASA Astrophysics Data System (ADS)

    Lal, Pyare; Yadav, Rashmi; Sharma, Meha; Rahman, F.; Dalela, S.; Alvi, P. A.

    2014-08-01

    This paper studies the lasing characteristics and gain spectra of a compressively strained, step-SCH In0.71Ga0.21Al0.08As/InP lasing nano-heterostructure in the TE polarization mode, taking into account variation of the width of its single quantum well. In addition, the conduction and valence band dispersion profiles of the compressively strained In0.71Ga0.21Al0.08As quantum well, at a temperature of 300 K and a strain of 1.12%, have been studied using a 4 × 4 Luttinger Hamiltonian. For the proposed nano-heterostructure, the quantum-well-width dependence of the differential gain, refractive index change, and relaxation oscillation frequency with current density has been studied. Moreover, the G-J characteristics of the nano-heterostructure at different well widths have also been investigated, providing significant information about the threshold current density, threshold gain, and transparency current density. The results suggest that both the gain and the relaxation oscillation frequency decrease with increasing quantum well width, while the lasing wavelength shifts towards higher values. Based on this qualitative analysis, a well width of 6 nm is found most suitable for lasing at a wavelength of 1.55 μm, owing to minimum optical attenuation and minimum dispersion within the waveguide. These results are therefore very relevant to the emerging area of nano-optoelectronics.

  17. Entropy Stable Staggered Grid Spectral Collocation for the Burgers' and Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2015-01-01

    Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for Burgers' and the compressible Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [1, 2] extends the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) points to a combination of tensor-product Legendre-Gauss (LG) and LGL points. The new semi-discrete operators discretely conserve mass, momentum, and energy, and satisfy a mathematical entropy inequality for both Burgers' and the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly to implement. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinear stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).

  18. Efficient genotype compression and analysis of large genetic variation datasets

    PubMed Central

    Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.

    2015-01-01

    Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes, and relationships. GQT's compressed genotype index minimizes decompression during analysis, and its performance relative to existing methods improves with cohort size. We show substantial (up to 443-fold) performance gains over existing methods and demonstrate GQT's utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772
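    The core idea of a genotype index like the one described above can be illustrated with per-variant genotype-class bitmaps: queries then become bitwise operations plus popcounts rather than per-sample decompression. This is only a hedged sketch inspired by the abstract, not GQT's actual on-disk format; all names are illustrative.

```python
def build_index(genotypes):
    """Build one integer bitmask per genotype class for a single variant.

    genotypes: per-sample codes (0 = hom-ref, 1 = het, 2 = hom-alt);
    sample i occupies bit i of each mask.
    """
    masks = {0: 0, 1: 0, 2: 0}
    for i, g in enumerate(genotypes):
        masks[g] |= 1 << i
    return masks

def count_hets(masks, subset_mask):
    """Count heterozygous samples within a subset via AND + popcount."""
    return bin(masks[1] & subset_mask).count("1")
```

    Queries over many variants reduce to the same bitwise pattern, which is what lets this style of index scale with cohort size.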

  19. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using third-order filters of the Daubechies family (DAUB6) and made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and the delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
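    The transform-threshold-reconstruct pipeline above can be sketched in miniature. The paper uses DAUB6 filters on 2-D rasters; for brevity this hedged sketch uses the simpler 1-D Haar wavelet and keeps only the largest-magnitude coefficients (the study zeroed the smallest 95 percent):

```python
def haar_fwd(x):
    """Full multilevel 1-D Haar transform (length must be a power of two)."""
    out = list(x)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        det = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avg + det
        n = half
    return out

def haar_inv(c):
    """Invert haar_fwd."""
    c = list(c)
    n = 1
    while n < len(c):
        merged = []
        for a, d in zip(c[:n], c[n:2 * n]):
            merged += [a + d, a - d]
        c[:2 * n] = merged
        n *= 2
    return c

def sparsify(coeffs, keep_frac):
    """Zero all but the largest keep_frac of coefficients by magnitude."""
    k = max(1, int(len(coeffs) * keep_frac))
    thresh = sorted((abs(c) for c in coeffs), reverse=True)[k - 1]
    return [c if abs(c) >= thresh else 0.0 for c in coeffs]
```

    Reconstructing from the sparsified coefficients and differencing against the original is exactly how elevation (and then slope/aspect) residuals would be measured.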

  20. Compression-based integral curve data reuse framework for flow visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Fan; Bi, Chongke; Guo, Hanqi

    Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation and source-destination queries, leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves that greatly reduces their retrieval cost, especially in resource-limited environments. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve these objectives. Results show that our data reuse framework achieves tens-of-times acceleration in resource-limited environments compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method provides fast integral curve retrieval for more complex data, such as unstructured mesh data.
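    One ingredient named above, a digitized sparse curve representation, can be illustrated by quantizing curve samples to a grid and storing small integer deltas, which compress well. A hedged toy sketch (1-D coordinates only; the actual framework combines this with floating-point compression and octree partitioning, and these names are illustrative):

```python
def encode_curve(samples, step=0.01):
    """Quantize samples to a grid of size `step` and delta-code them."""
    q = [round(s / step) for s in samples]
    return q[0], [b - a for a, b in zip(q, q[1:])]

def decode_curve(first, deltas, step=0.01):
    """Invert encode_curve; error is bounded by step / 2 per sample."""
    vals = [first]
    for d in deltas:
        vals.append(vals[-1] + d)
    return [v * step for v in vals]
```

    The grid step gives the "controllable error" knob: a finer step lowers reconstruction error at the cost of larger deltas.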

  1. Compressed NMR: Combining compressive sampling and pure shift NMR techniques.

    PubMed

    Aguilar, Juan A; Kenwright, Alan M

    2017-12-26

    Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling and the appearance of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates; pure shift techniques eliminate the second issue by "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  2. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, offering comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications for cloud computing has its own limitations: even though virtual storage is accessible from anywhere, connections are made through the internet. In these scenarios, optimal use of the information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  3. Cost-effectiveness of compression technologies for evidence-informed leg ulcer care: results from the Canadian Bandaging Trial

    PubMed Central

    2012-01-01

    Background Venous leg ulcers, affecting approximately 1% of the population, are costly to manage due to poor healing and high recurrence rates. We evaluated an evidence-informed leg ulcer care protocol with two frequently used high compression systems: ‘four-layer bandage’ (4LB) and ‘short-stretch bandage’ (SSB). Methods We conducted a cost-effectiveness analysis using individual patient data from the Canadian Bandaging Trial, a publicly funded, pragmatic, randomized trial evaluating high compression therapy with 4LB (n = 215) and SSB (n = 209) for community care of venous leg ulcers. We estimated costs (in 2009–2010 Canadian dollars) from the societal perspective and used a time horizon corresponding to each trial participant’s first year. Results Relative to SSB, 4LB was associated with an average of 15 ulcer-free days gained, although the 95% confidence interval [−32, 21 days] crossed zero, indicating no treatment difference; an average health benefit of 0.009 QALYs gained [−0.019, 0.037]; and, overall, an average cost increase of $420 [$235, $739] (due to twice as many 4LB bandages used), or equivalently, a cost of $46,667 per QALY gained. If decision makers are willing to pay from $50,000 to $100,000 per QALY, the probability of 4LB being more cost-effective increases from 51% to 63%. Conclusions Our findings differ from the emerging clinical and economic evidence supporting high compression therapy with 4LB, and therefore suggest another perspective on high compression practice: when delivered by trained registered nurses using an evidence-informed protocol, both 4LB and SSB systems offer comparable effectiveness and value for money. Trial registration ClinicalTrials.gov Identifier: NCT00202267 PMID:23031428
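    The cost-per-QALY figure quoted above is an incremental cost-effectiveness ratio (ICER): the incremental cost divided by the incremental health benefit. A one-line check of the trial's averages:

```python
# ICER = (cost_4LB - cost_SSB) / (QALY_4LB - QALY_SSB), using the
# average differences reported in the abstract.
delta_cost = 420.0   # average extra cost of 4LB vs SSB, CAD
delta_qaly = 0.009   # average extra QALYs with 4LB
icer = delta_cost / delta_qaly
print(round(icer))   # ~46,667 CAD per QALY, matching the abstract
```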

  4. Coating process optimization through in-line monitoring for coating weight gain using Raman spectroscopy and design of experiments.

    PubMed

    Kim, Byungsuk; Woo, Young-Ah

    2018-05-30

    In this study the authors developed a real-time process analytical technology (PAT) for a coating process by applying in-line Raman spectroscopy to evaluate the coating weight gain, i.e., a quantitative analysis of the film coating layer. A wide area illumination (WAI) Raman probe was connected to the pan coater for real-time monitoring of changes in the weight gain of the coating layers. Under the proposed in-line Raman scheme, a non-contact, non-destructive analysis was performed using WAI Raman probes with a spot size of 6 mm. The in-line Raman probe maintained a focal length of 250 mm, and a compressed air line was designed to protect the lens surface from spray droplets. Design of Experiments (DOE) was applied to identify factors affecting the background of the Raman spectra under laser irradiation. The factors selected for the DOE were the strength of the compressed air connected to the probe and the shielding of light by the transparent door connecting the probe to the pan coater. To develop a quantitative model, partial least squares (PLS) models were developed as multivariate calibrations based on the three regions showing the specificity of TiO2, individually or in combination. For the three single peaks (636 cm⁻¹, 512 cm⁻¹, 398 cm⁻¹), the least squares method (LSM) was applied to develop three univariate quantitative models. The best multivariate model, with one factor, gave the lowest RMSEP values of 0.128, 0.129, and 0.125 for the prediction batches. When LSM was applied to the single peak at 636 cm⁻¹, the univariate model, with an R² of 0.9863, slope of 0.5851, and y-intercept of 0.8066, had RMSEP values of 0.138, 0.144, and 0.153 for the prediction batches. The in-line Raman spectroscopic method for the analysis of coating weight gain was verified by considering system suitability and parameters such as specificity, range, linearity, accuracy, and precision in accordance with ICH Q2 regarding

  5. The perceptual learning of time-compressed speech: A comparison of training protocols with different levels of difficulty

    PubMed Central

    Gabay, Yafit; Karni, Avi; Banai, Karen

    2017-01-01

    Speech perception can improve substantially with practice (perceptual learning), even in adults. Here we compared the effects of four training protocols that differed in whether and how task difficulty was changed during a training session, in terms of the gains attained and the ability to apply (transfer) these gains to previously un-encountered items (tokens) and to different talkers. Participants trained in judging the semantic plausibility of sentences presented as time-compressed speech and were tested on their ability to reproduce, in writing, the target sentences; trial-by-trial feedback was afforded in all training conditions. In two conditions task difficulty (low or high compression) was kept constant throughout the training session, whereas in the other two conditions task difficulty was changed in an adaptive manner (incrementally from easy to difficult, or using a staircase procedure). Compared to a control group (no training), all four protocols resulted in significant post-training improvement in the ability to reproduce the trained sentences accurately. However, training in the constant-high-compression protocol elicited the smallest gains in deciphering and reproducing trained items and in reproducing novel, untrained items after training. Overall, these results suggest that training procedures that start off with relatively little signal distortion (“easy” items, not far removed from standard speech) may be advantageous compared to conditions wherein severe distortions are presented to participants from the very beginning of the training session. PMID:28545039

  6. Fast Ignition Thermonuclear Fusion: Enhancement of the Pellet Gain by the Colossal-Magnetic-Field Shells

    NASA Astrophysics Data System (ADS)

    Stefan, V. Alexander

    2013-10-01

    The fast ignition fusion pellet gain can be enhanced by a laser-generated B-field shell. The B-field shell (similar to Earth's B-field, but with alternating B-poles) follows the pellet compression in a frozen-in B-field regime. A properly designed laser-pellet coupling can lead to the generation of a B-field shell (up to 100 MG) that inhibits electron thermal transport and confines the alpha particles. In principle, a pellet gain of a few hundred can be achieved in this manner. Supported in part by Nikola Tesla Labs, Stefan University, 1010 Pearl, La Jolla, CA 92038-1007.

  7. Mechanical Metamaterials with Negative Compressibility Transitions

    NASA Astrophysics Data System (ADS)

    Motter, Adilson

    2015-03-01

    When tensioned, ordinary materials expand along the direction of the applied force. In this presentation, I will explore network concepts to design metamaterials exhibiting negative compressibility transitions, during which the material undergoes contraction when tensioned (or expansion when pressured). Such transitions, which are forbidden in thermodynamic equilibrium, are possible during the decay of metastable, super-strained states. I will introduce a statistical physics theory for negative compressibility transitions, derive a first-principles model to predict these transitions, and present a validation of the model using molecular dynamics simulations. Aside from its immediate mechanical implications, our theory points to a wealth of analogous inverted responses, such as inverted susceptibility or heat-capacity transitions, allowed when considering realistic scales. This research was done in collaboration with Zachary Nicolaou, and was supported by the National Science Foundation and the Alfred P. Sloan Foundation.

  8. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study was to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids, and Jupiter's rings. The approach involved fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression stepsizes (q-factors) were used to achieve different compression ratios. The study then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different quantization stepsize. Observers were asked to select which image had the higher overall quality for supporting their visual evaluations of image content, and then rated both images on a one-to-five scale of judged usefulness. Up to four pre-selected types of images were presented, with and without noise, to each subject, based upon the results of a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and acceptable compression ratios; and (3) atmospheric images of Jupiter seem to allow compression ratios 4 to 5 times those of some clear surface satellite images.

  9. Theory of compressive modeling and simulation

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) the data-rich approach, which suffers the curse of dimensionality, and (ii) the equation-rich approach, which suffers from computing power and turnaround time. We suggest a third approach, which we call (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) principle facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. MFE-based CM&S can generalize LCNN to second order as a nonlinear augmented LCNN. For example, during sunset we can avoid the reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon: with CM&S we can use a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with a filter into two vector components (8-10 μm and 10-12 μm) and used LCNN to find, pixel by pixel, the map of emissive-equivalent Planck radiation sources (EPRS). Then we up-shifted consistently, according to the de-mixed source map, to a sub-micron RGB color image. Moreover, night-vision imaging can also be down-shifted to passive millimeter wave (PMMW) imaging, which suffers less blur from dust and smoke scattering and enjoys the apparent smoothness of the surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in spatial Rayleigh resolution, but gains two orders of magnitude in reflectivity and another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computation, in the sense of super-saving CS: measure one and get its neighborhood free.

  10. Investigation of Spheromak Plasma Cooling through Metallic Liner Spallation during Compression

    NASA Astrophysics Data System (ADS)

    Ross, Keeton; Mossman, Alex; Young, William; Ivanov, Russ; O'Shea, Peter; Howard, Stephen

    2016-10-01

    Various magnetized target fusion (MTF) reactor concepts involve a preliminary magnetic confinement stage, followed by a metallic liner implosion that compresses the plasma to fusion conditions. The process is repeated to produce a pulsed, net-gain energy system. General Fusion, Inc. is pursuing one scheme that involves the compression of spheromak plasmas inside a liner formed by a collapsing vortex of liquid Pb-Li. The compression is driven by focused acoustic waves launched by gas-driven piston impacts. Here we describe a project exploring the effects of possible liner spallation during compression on the spheromak's temperature, lifetime, and stability. We employ a 1 J, 10 ns pulsed YAG laser at 532 nm, focused onto a thin film of Li or Al, to inject a known quantity of metallic impurities into a spheromak plasma and then measure the response. Diagnostics including visible and ultraviolet spectrometers, ion Doppler spectroscopy, B-probes, and Thomson scattering are used for plasma characterization. We then plan to apply the trends measured under these controlled conditions to evaluate the role of wall impurities during 'field shots', where spheromaks are compressed through a chemically driven implosion of an aluminum flux conserver. The hope is that with further study we can more accurately include the effect of wall impurities on the fusion yield of a reactor-scale MTF system. Experimental procedures and results are presented, along with their relation to other liner-driven MTF schemes.

  11. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  12. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation, the inertial information generated by the motion platform usually differs from the visual information in the simulator displays, due to the physical limits of the motion platform. However, for small motions within those limits, one-to-one motion, i.e. visual information equal to inertial information, is possible. Previous studies have shown that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a given visual amplitude, we found a zone of optimal gains instead of a single value. This result seems related to the coherence zones that have been measured in flight simulation studies; however, optimal gain results have never been directly related to coherence zones. In this study we investigated whether optimal gain measurements are the same as coherence zone measurements. We also tried to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center using both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained from the optimal gain measurements differ from those obtained from the coherence zone measurements. The optimal gain lies within the coherence zone. The point of mean optimal gain was lower and further from the one-to-one line than the point of mean coherence. The zone width obtained from the coherence zone measurements depended on the visual amplitude and frequency, whereas for the optimal gain the zone width remained constant when visual amplitude and frequency were varied. We found no effect of simulator configuration on either the coherence zone or the optimal gain measurements.

  13. Universal data compression

    NASA Astrophysics Data System (ADS)

    Lindsay, R. A.; Cox, B. V.

    Universal and adaptive data compression techniques can compress all types of data without loss of information, but at the cost of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different-size data files are graphically presented and discussed in the paper. Adjustments needed for optimum performance of the algorithms, relative to theoretically achievable limits, are outlined.
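    As a concrete illustration of the Lempel-Ziv family evaluated above, here is a minimal LZW codec (a later LZ78 variant, not necessarily the exact algorithm benchmarked in the paper): the encoder grows a dictionary of previously seen strings and emits one integer code per longest match, adapting to the data as it goes.

```python
def lzw_compress(data):
    """Encode bytes as a list of integer codes (dictionary grows online)."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                      # extend the current match
        else:
            out.append(dictionary[w])   # emit longest match seen so far
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Rebuild the byte stream, reconstructing the dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        # The one special case: code not yet in the dictionary.
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

    Repeated substrings collapse to single codes, which is where the compression comes from; a production coder would also pack the codes into variable-width bit fields.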

  14. A novel "gain chip" concept for high-power lasers (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Li, Min; Li, Mingzhong; Wang, Zhenguo; Yan, Xiongwei; Jiang, Xinying; Zheng, Jiangang; Cui, Xudong; Zhang, Xiaomin

    2017-05-01

    High-power lasers, including high-peak-power lasers (HPPL) and high-average-power lasers (HAPL), attract much interest for an enormous variety of applications in inertial fusion energy (IFE), materials processing, defense, spectroscopy, and high-field physics research. To meet the requirements of high efficiency and quality, a "gain chip" concept is proposed to properly design the pumping, cooling, and lasing fields. The gain chip mainly consists of laser diode arrays, a lens duct, a rectangular waveguide, and slab-shaped gain media. For the pumping field, the pump light is compressed and homogenized by the lens duct to high irradiance via total internal reflection, and further coupled into the gain media through its two edge faces. For the cooling field, the coolant travels along the flow channels created by adjacent slabs in the other two edge-face directions, cooling the lateral faces of the gain media. For the lasing field, the laser beam travels through the lateral faces and experiences minimal thermal wavefront distortion. These three fields are thus orthogonal, offering more spatial freedom to handle them during construction of the lasers. Transverse gradient doping profiles for HPPL and HAPL have been employed to achieve uniform gain distributions (UGD) within the gain media. This UGD improves the management of both amplified spontaneous emission (ASE) and thermal behavior. Since each gain chip has its own pump source, power scaling can easily be achieved by placing identical gain chips along the laser beam axis without disturbing the gain and thermal distributions. To detail the concept, a 1-kJ pulsed amplifier is designed, and an optical-to-optical efficiency of up to 40% has been obtained. We believe that with a proper coolant (gas or liquid) and gain media (Yb:YAG, Nd:glass or Nd:YAG), the gain chip concept can provide a general configuration for high-power lasers with high efficiency and quality.

  15. Physics design point for a 1 MW fusion neutron source

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Melnik, Paul; Sieck, Paul; Stuber, James; Romero-Talamas, Carlos; O'Bryan, John; Miller, Ronald

    2016-10-01

    We are developing a design point for a spheromak experiment heated by adiabatic compression, for use as a compact neutron source. We use the CORSICA and NIMROD MHD codes, as well as analytic modeling, to assess a concept with target parameters R0 = 0.5 m, Rf = 0.17 m, T0 = 1 keV, Tf = 8 keV, n0 = 2e20 m^-3 and nf = 5e21 m^-3, with radial convergence C = R0/Rf = 3. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and specify target parameters for the compression in terms of plasma beta, formation efficiency, and energy confinement. We also present simulations of magnetic compression using the NIMROD code, examining the role of rotation in the stability and confinement of the spheromak as it is compressed. Supported by DARPA Grant N66001-14-1-4044 and the IAEA CRP on Compact Fusion Neutron Sources.
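    As a hedged back-of-envelope check (not the CORSICA/NIMROD modeling), the quoted targets are roughly consistent with ideal adiabatic 3-D compression of a γ = 5/3 plasma, where density scales as the cube of the radial convergence and temperature as n^(γ-1):

```python
# Ideal adiabatic scaling: n_f = n0 * C**3 and T_f = T0 * C**2
# (since T ~ n**(gamma - 1) with gamma = 5/3). Plausibility check only;
# losses and profile effects would modify these numbers.
C = 0.5 / 0.17          # radial convergence R0 / Rf, about 3
n0, T0 = 2e20, 1.0      # initial density (m^-3) and temperature (keV)
n_f = n0 * C ** 3       # ~5e21 m^-3, matching the stated density target
T_f = T0 * C ** 2       # ~8.7 keV, close to the 8 keV target
```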

  16. Software For Tie-Point Registration Of SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice

    1995-01-01

    SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. Other data, for example a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image, can also be registered, as long as the user can generate a binary image for the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.

  17. Collaborative Wideband Compressed Signal Detection in Interplanetary Internet

    NASA Astrophysics Data System (ADS)

    Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei

    2014-07-01

    With the development of autonomous radio in the deep space network, it has become possible to realize communication between explorers, aircraft, rovers, and satellites, e.g. from different countries, adopting different signal modes. The first task of such an autonomous radio is to detect the explorer's signals autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of IPN Internet communication signals, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a fusion rule, gaining spatial diversity. A pair of novel discrete cosine transform (DCT)- and Walsh-Hadamard transform (WHT)-based compressed spectrum detection methods are proposed, which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of the proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT)-based method, the DCT- and WHT-based methods reduce computational complexity, decrease processing time, save energy, and enhance the probability of detection.
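    The WHT underlying the detector above can be computed in O(N log N) with the butterfly recursion below. A hedged sketch (unnormalized convention, sequency unordered; how the transform feeds the paper's detector is not shown here):

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (length must be a power of two).

    Unnormalized: applying it twice multiplies the input by len(x).
    """
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly: sum and difference
        h *= 2
    return x
```

    Because the butterflies use only additions and subtractions (no multiplications, unlike the DFT), this is one reason WHT-based detection reduces computational complexity.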

  18. Tension and compression fatigue response of unnotched 3D braided composites

    NASA Technical Reports Server (NTRS)

    Portanova, M. A.

    1992-01-01

    The unnotched compression and tension fatigue response of a 3-D braided composite was measured. Both gross compressive stress and tensile stress were plotted against cycles to failure to evaluate the fatigue life of these materials. Damage initiation and growth were monitored visually and by tracking compliance change during cyclic loading. The intent was to establish by what means the strength of a 3-D architecture starts to degrade, at what point it degrades beyond an acceptable level, and how this material typically fails.

  19. Influence of Tension-Compression Asymmetry on the Mechanical Behavior of AZ31B Magnesium Alloy Sheets in Bending

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Beeh, Elmar; Friedrich, Horst E.

    2016-03-01

    Magnesium alloys are promising materials for lightweight design in the automotive industry due to their high strength-to-mass ratio. This study investigates the influence of tension-compression asymmetry on the radius of curvature and energy absorption capacity of AZ31B-O magnesium alloy sheets in bending. The mechanical properties were characterized using tension, compression, and three-point bending tests. The material exhibits significant tension-compression asymmetry in terms of strength and strain hardening rate due to extension twinning in compression. The compressive yield strength is much lower than the tensile yield strength, while the strain hardening rate is much higher in compression. Furthermore, tension-compression asymmetry in terms of the r value (Lankford value) was also observed. The r value in tension is much higher than that in compression. The bending results indicate that the AZ31B-O sheet can outperform steel and aluminum sheets in terms of specific energy absorption in bending, mainly due to its low density. In addition, the AZ31B-O sheet deformed with a larger radius of curvature than the steel and aluminum sheets, which benefits energy absorption capacity. Finally, finite element simulation of three-point bending was performed using LS-DYNA, and the results confirmed that the larger radius of curvature of a magnesium specimen is mainly attributed to the high strain hardening rate in compression.

  20. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to the region of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely, the complexity and the efficiency of algorithms for FoA detection. One way around this is to use as much information as possible from the scene. Since most video sequences have an associated audio track, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining low in complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.

  1. Optimal gains for a single polar orbiting satellite

    NASA Technical Reports Server (NTRS)

    Banfield, Don; Ingersoll, A. P.; Keppenne, C. L.

    1993-01-01

    Gains are the spatial weighting of an observation in its neighborhood versus the local values of a model prediction. They are the key to data assimilation, as they are the direct measure of how the data are used to guide the model. As derived in the broad context of data assimilation by Kalman and in the context of meteorology, for example, by Rutherford, the optimal gains are functions of the prediction error covariances between the observation and analysis points. Kalman introduced a very powerful technique that allows one to calculate these optimal gains at the time of each observation. Unfortunately, this technique is both computationally expensive and often numerically unstable for dynamical systems of the magnitude of meteorological models, and thus is unsuited for use in PMIRR data assimilation. However, the optimal gains as calculated by a Kalman filter do reach a steady state for regular observing patterns like that of a satellite. In this steady state, the gains are constants in time, and thus could conceivably be computed off-line. These steady-state Kalman gains (i.e., Wiener gains) would yield optimal performance without the computational burden of true Kalman filtering. We proposed to use this type of constant-in-time Wiener gain for the assimilation of data from PMIRR and Mars Observer.
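    The off-line computation of steady-state (Wiener) gains described above can be sketched by iterating the discrete Riccati recursion until the gain stops changing. The scalar system and its constants below are illustrative assumptions, not PMIRR values.

```python
# Scalar system: x_{t+1} = a x_t + w_t (Var w = q), y_t = h x_t + v_t (Var v = r).
a, h, q, r = 0.95, 1.0, 0.1, 0.5

P = 1.0                                    # initial prediction-error covariance
K_prev = None
for _ in range(1000):
    P_pred = a * P * a + q                 # covariance prediction
    K = P_pred * h / (h * P_pred * h + r)  # optimal (Kalman) gain for this step
    P = (1.0 - K * h) * P_pred             # covariance update
    if K_prev is not None and abs(K - K_prev) < 1e-12:
        break                              # gain has reached its steady-state value
    K_prev = K

print(f"steady-state gain K = {K:.6f}")
```

    Once converged, K is a constant that can be stored and reused at every observation time, avoiding the expensive on-line Riccati solve of a full Kalman filter.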

  2. Multi-millijoule few-cycle mid-infrared pulses through nonlinear self-compression in bulk

    PubMed Central

    Shumakova, V.; Malevich, P.; Ališauskas, S.; Voronin, A.; Zheltikov, A. M.; Faccio, D.; Kartashov, D.; Baltuška, A.; Pugžlys, A.

    2016-01-01

    The physics of strong-field applications requires driver laser pulses that are both energetic and extremely short. Whereas optical amplifiers, laser and parametric, boost the energy, their gain bandwidth restricts the attainable pulse duration, requiring additional nonlinear spectral broadening to enable few or even single cycle compression and a corresponding peak power increase. Here we demonstrate, in the mid-infrared wavelength range that is important for scaling the ponderomotive energy in strong-field interactions, a simple energy-efficient and scalable soliton-like pulse compression in a mm-long yttrium aluminium garnet crystal with no additional dispersion management. Sub-three-cycle pulses with >0.44 TW peak power are compressed and extracted before the onset of modulation instability and multiple filamentation as a result of a favourable interplay between strong anomalous dispersion and optical nonlinearity around the wavelength of 3.9 μm. As a manifestation of the increased peak power, we show evidence of mid-infrared pulse filamentation in atmospheric air. PMID:27620117

  3. Correlation between compressive strength and ultrasonic pulse velocity of high strength concrete incorporating chopped basalt fibre

    NASA Astrophysics Data System (ADS)

    Shafiq, Nasir; Fadhilnuruddin, Muhd; Elshekh, Ali Elheber Ahmed; Fathi, Ahmed

    2015-07-01

    Ultrasonic pulse velocity (UPV) is considered one of the most important non-destructive techniques used to evaluate the mechanical characteristics of high strength concrete (HSC). The relationship between the compressive strength of HSC containing chopped basalt fibre strands (CBSF) and UPV was investigated. The concrete specimens were prepared using different ratios of CBSF as internal strengthening material. The compressive strength measurements were conducted at sample ages of 3, 7, 28, 56 and 90 days, whilst the ultrasonic pulse velocity was measured at 28 days. The compressive strength of HSC did not improve with the chopped basalt fibre; instead, it decreased. The UPV of the chopped basalt fibre reinforced concrete was found to be less than that of the control mix for each addition ratio of basalt fibre. A relationship was obtained between the cube compressive strength of HSC and UPV for various amounts of chopped basalt fibres.
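    A common empirical form for such strength-UPV relationships is exponential, S = a·exp(bV), fitted by log-linear least squares. The sketch below is purely illustrative: the calibration data and constants are hypothetical, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration points generated from a known exponential law
# S = a * exp(b * V); the fit should recover a and b exactly (no noise).
a_true, b_true = 1.2, 1.1                  # illustrative constants
V = np.array([3.8, 4.0, 4.2, 4.4, 4.6])   # pulse velocity, km/s
S = a_true * np.exp(b_true * V)           # cube compressive strength, MPa

# Linearize: ln S = ln a + b V, then ordinary least squares.
b_fit, ln_a_fit = np.polyfit(V, np.log(S), 1)
a_fit = np.exp(ln_a_fit)
print(a_fit, b_fit)
```

    With real test data the residual scatter about the fitted curve indicates how reliably UPV alone predicts strength for a given mix.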

  4. Real-Time Mobile Device-Assisted Chest Compression During Cardiopulmonary Resuscitation.

    PubMed

    Sarma, Satyam; Bucuti, Hakiza; Chitnis, Anurag; Klacman, Alex; Dantu, Ram

    2017-07-15

    Prompt administration of high-quality cardiopulmonary resuscitation (CPR) is a key determinant of survival from cardiac arrest. Strategies to improve CPR quality at point of care could improve resuscitation outcomes. We tested whether a low cost and scalable mobile phone- or smart watch-based solution could provide accurate measures of compression depth and rate during simulated CPR. Fifty health care providers (58% intensive care unit nurses) performed simulated CPR on a calibrated training manikin (Resusci Anne, Laerdal) while wearing both devices. Subjects received real-time audiovisual feedback from each device sequentially. Primary outcome was accuracy of compression depth and rate compared with the calibrated training manikin. Secondary outcome was improvement in CPR quality as defined by meeting both guideline-recommended compression depth (5 to 6 cm) and rate (100 to 120/minute). Compared with the training manikin, typical error for compression depth was <5 mm (smart phone 4.6 mm; 95% CI 4.1 to 5.3 mm; smart watch 4.3 mm; 95% CI 3.8 to 5.0 mm). Compression rates were similarly accurate (smart phone Pearson's R = 0.93; smart watch R = 0.97). There was no difference in improved CPR quality, defined as the number of sessions meeting both guideline-recommended compression depth (50 to 60 mm) and rate (100 to 120 compressions/minute), with mobile device feedback (60% vs 50%; p = 0.3). Sessions that did not meet guideline recommendations failed primarily because of inadequate compression depth (46 ± 2 mm). In conclusion, mobile device application-guided CPR can accurately track compression depth and rate during simulation in a practice environment in accordance with resuscitation guidelines. Copyright © 2017 Elsevier Inc. All rights reserved.
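    A device-based depth and rate estimate of the kind described can be sketched by double-integrating accelerometer samples and counting displacement peaks. The sinusoidal motion model, sample rate and peak-counting rule below are illustrative assumptions, not the study's implementation.

```python
import numpy as np

# Synthetic chest-compression motion: 50 mm peak depth, 110 compressions/min.
fs = 250.0                           # accelerometer sample rate, Hz
rate_hz = 110.0 / 60.0               # compression frequency
A = 0.025                            # motion amplitude, m (peak depth = 2A = 50 mm)
t = np.arange(0.0, 6.0, 1.0 / fs)
w = 2.0 * np.pi * rate_hz
accel = A * w**2 * np.cos(w * t)     # acceleration for d(t) = A(1 - cos wt)

def cumtrapz0(y, dt):
    """Cumulative trapezoidal integral starting at zero."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * 0.5 * dt)))

vel = cumtrapz0(accel, 1.0 / fs)     # velocity (motion starts at rest)
disp = cumtrapz0(vel, 1.0 / fs)      # displacement = compression depth

depth_mm = disp.max() * 1000.0

# Rate: count interior local maxima of the displacement trace.
peaks = np.sum((disp[1:-1] > disp[:-2]) & (disp[1:-1] >= disp[2:]))
rate_per_min = peaks / t[-1] * 60.0
print(depth_mm, peaks, rate_per_min)
```

    Real accelerometer data would additionally need drift removal (e.g. high-pass filtering) before integration; the clean synthetic signal sidesteps that here.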

  5. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature or data dependence between each of the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  6. Galileo Spacecraft Scan Platform Celestial Pointing Cone Control Gain Redesign

    NASA Technical Reports Server (NTRS)

    In, C-H. C.; Hilbert, K. B.

    1994-01-01

    During September and October 1991, pictures of the Gaspra asteroid and neighboring stars were taken by the Galileo Optical Navigation (OPNAV) Team for the purpose of navigating the spacecraft for a successful Gaspra encounter. The star tracks in these pictures showed that the scan platform celestial pointing cone controller performed poorly in compensating for wobble-induced cone offsets.

  7. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
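    The core operation of such a pattern-matching compressor, locating the database block closest to the current block under a distortion measure, can be sketched with an exhaustive sum-of-absolute-differences (SAD) search. This is a simplification for illustration; 2D-PMC itself accelerates the search with k-d trees and uses adaptive maximum distortion levels, as noted above.

```python
import numpy as np

def best_match(ref, block):
    """Exhaustively find the position in `ref` minimizing the SAD distortion
    between `block` and the co-located window of `ref`."""
    bh, bw = block.shape
    best, best_pos = np.inf, None
    for i in range(ref.shape[0] - bh + 1):
        for j in range(ref.shape[1] - bw + 1):
            sad = np.abs(ref[i:i+bh, j:j+bw] - block).sum()
            if sad < best:
                best, best_pos = sad, (i, j)
    return best_pos, best

rng = np.random.default_rng(1)
ref = rng.random((32, 32))          # "database" image
block = rng.random((4, 4))          # query block
ref[10:14, 7:11] = block            # plant an exact copy in the database
pos, dist = best_match(ref, block)
print(pos, dist)
```

    In the lossy setting, any match whose distortion falls below the current maximum distortion level can be accepted, so the encoder emits a (position, size) pointer instead of the raw pixels.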

  8. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two, as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.
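    The idea of approximating terrain tiles by tilted planes and measuring the resulting elevation error can be illustrated with a least-squares plane fit. This is only an analogy for the binary-tree tip-tilt technique; the actual encoding is not reproduced here.

```python
import numpy as np

def plane_fit_error(tile):
    """Least-squares fit z = c0 + c1*x + c2*y to a square elevation tile,
    returning the maximum absolute elevation error of the fit."""
    n = tile.shape[0]
    y, x = np.mgrid[0:n, 0:n]
    A = np.column_stack([np.ones(n * n), x.ravel(), y.ravel()])
    coef, *_ = np.linalg.lstsq(A, tile.ravel(), rcond=None)
    return np.abs(tile.ravel() - A @ coef).max()

n = 8
yy, xx = np.mgrid[0:n, 0:n]
planar = 100.0 + 2.0 * xx - 3.0 * yy   # exactly planar terrain: zero error
curved = 100.0 + 0.5 * xx**2           # curved terrain: residual error remains
print(plane_fit_error(planar), plane_fit_error(curved))
```

    Collecting this per-tile error over a whole DTED cell gives exactly the kind of error-magnitude distribution the study examines: flat terrain compresses losslessly, while curvature leaves residuals.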

  9. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the de Bruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal to noise ratio E sub b/N sub 0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,233) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the new codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  10. Stability of Bifurcating Stationary Solutions of the Artificial Compressible System

    NASA Astrophysics Data System (ADS)

    Teramoto, Yuka

    2018-02-01

    The artificial compressible system gives a compressible approximation of the incompressible Navier-Stokes system. The latter system is obtained from the former one in the zero limit of the artificial Mach number ɛ, which is a singular limit. The sets of stationary solutions of both systems coincide with each other. It is known that if a stationary solution of the incompressible system is asymptotically stable and the velocity field of the stationary solution satisfies an energy-type stability criterion, then it is also stable as a solution of the artificial compressible one for sufficiently small ɛ. In general, the range of ɛ shrinks when the spectrum of the linearized operator for the incompressible system approaches the imaginary axis. This can happen when a stationary bifurcation occurs. It is proved that when a stationary bifurcation from a simple eigenvalue occurs, the range of ɛ can be taken uniformly near the bifurcation point to conclude the stability of the bifurcating solution as a solution of the artificial compressible system.
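    For reference, one standard form of the artificial compressible system (not necessarily the exact formulation used in the paper) replaces the divergence-free constraint with a pressure evolution equation:

```latex
\varepsilon^{2}\,\partial_t p + \nabla\cdot u = 0, \qquad
\partial_t u + (u\cdot\nabla)u + \nabla p = \nu\,\Delta u ,
```

    so that the incompressible Navier-Stokes system, with the constraint ∇·u = 0, is formally recovered in the singular limit ε → 0.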

  11. Context-dependent arm pointing adaptation

    NASA Technical Reports Server (NTRS)

    Seidler, R. D.; Bloomberg, J. J.; Stelmach, G. E.

    2001-01-01

    We sought to determine the effectiveness of head posture as a contextual cue to facilitate adaptive transitions in manual control during visuomotor distortions. Subjects performed arm pointing movements by drawing on a digitizing tablet, with targets and movement trajectories displayed in real time on a computer monitor. Adaptation was induced by presenting the trajectories in an altered gain format on the monitor. The subjects were shown visual displays of their movements that corresponded to either 0.5 or 1.5 scaling of the movements made. Subjects were assigned to three groups: the head orientation group tilted the head towards the right shoulder when drawing under a 0.5 gain of display and towards the left shoulder when drawing under a 1.5 gain of display; the target orientation group had the home and target positions rotated counterclockwise when drawing under the 0.5 gain and clockwise for the 1.5 gain; the arm posture group changed the elbow angle of the arm they were not drawing with from full flexion to full extension with 0.5 and 1.5 gain display changes. To determine if contextual cues were associated with display alternations, the gain changes were returned to the standard (1.0) display. Aftereffects were assessed to determine the efficacy of the head orientation contextual cue compared to the two control cues. The head orientation cue was effectively associated with the multiple gains. The target orientation cue also demonstrated some effectiveness while the arm posture cue did not. The results demonstrate that contextual cues can be used to switch between multiple adaptive states. These data provide support for the idea that static head orientation information is a crucial component to the arm adaptation process. These data further define the functional linkage between head posture and arm pointing movements.

  12. Context-Dependent Arm Pointing Adaptation

    NASA Technical Reports Server (NTRS)

    Seidler, R. D.; Bloomberg, J. J.; Stelmach, G. E.

    2000-01-01

    We sought to determine the effectiveness of head posture as a contextual cue to facilitate adaptive transitions in manual control during visuomotor distortions. Subjects performed arm pointing movements by drawing on a digitizing tablet, with targets and movement trajectories displayed in real time on a computer monitor. Adaptation was induced by presenting the trajectories in an altered gain format on the monitor. The subjects were shown visual displays of their movements that corresponded to either 0.5 or 1.5 scaling of the movements made. Subjects were assigned to three groups: the head orientation group tilted the head towards the right shoulder when drawing under a 0.5 gain of display and towards the left shoulder when drawing under a 1.5 gain of display; the target orientation group had the home and target positions rotated counterclockwise when drawing under the 0.5 gain and clockwise for the 1.5 gain; the arm posture group changed the elbow angle of the arm they were not drawing with from full flexion to full extension with 0.5 and 1.5 gain display changes. To determine if contextual cues were associated with display alternations, the gain changes were returned to the standard (1.0) display. Aftereffects were assessed to determine the efficacy of the head orientation contextual cue compared to the two control cues. The head orientation cue was effectively associated with the multiple gains. The target orientation cue also demonstrated some effectiveness while the arm posture cue did not. The results demonstrate that contextual cues can be used to switch between multiple adaptive states. These data provide support for the idea that static head orientation information is a crucial component to the arm adaptation process. These data further define the functional linkage between head posture and arm pointing movements.

  13. Distinguishing one from many using super-resolution compressive sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.

    Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.

  14. Distinguishing one from many using super-resolution compressive sensing

    DOE PAGES

    Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.; ...

    2018-05-14

    Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.

  15. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. In fact, the compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme. This insertion into the coarse mesh yields high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking permits detection of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. The experiments and evaluations show that the proposed approach presents efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.

  16. Hybrid thermal link-wise artificial compressibility method

    NASA Astrophysics Data System (ADS)

    Obrecht, Christian; Kuznik, Frédéric

    2015-10-01

    Thermal flow prediction is a subject of interest from both scientific and engineering points of view. Our motivation is to develop an accurate, easy to implement and highly scalable method for convective flow simulation. To this end, we present an extension to the link-wise artificial compressibility method (LW-ACM) for thermal simulation of weakly compressible flows. The novel hybrid formulation uses second-order finite difference operators of the energy equation based on the same stencils as the LW-ACM. For validation purposes, the differentially heated cubic cavity was simulated. The simulations remained stable for Rayleigh numbers up to Ra = 10^8. The Nusselt numbers at isothermal walls and dynamic quantities are in good agreement with reference values from the literature. Our results show that the hybrid thermal LW-ACM is an effective and easy-to-use solution for convective flows.
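    The hybrid idea, advancing the energy equation with second-order finite differences alongside the flow solver, can be illustrated in one dimension. The explicit advection-diffusion update below is a generic sketch, not the actual LW-ACM stencil.

```python
import numpy as np

def step(T, u, kappa, dx, dt):
    """One explicit update of dT/dt + u dT/dx = kappa d2T/dx2 using
    second-order central differences (Dirichlet walls held fixed)."""
    Tn = T.copy()
    Tn[1:-1] = (T[1:-1]
                - u * dt / (2 * dx) * (T[2:] - T[:-2])           # advection
                + kappa * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2]))  # diffusion
    return Tn

n, kappa, u = 21, 1.0, 0.0
dx = 1.0 / (n - 1)
dt = 0.25 * dx**2 / kappa            # stable explicit diffusion step
T = np.zeros(n); T[-1] = 1.0         # cold and hot isothermal walls
for _ in range(5000):
    T = step(T, u, kappa, dx, dt)
print(np.abs(T - np.linspace(0.0, 1.0, n)).max())
```

    With the walls held at fixed temperatures and no advection (u = 0), the profile relaxes to the linear conduction solution, a quick sanity check for the stencil before coupling it to a velocity field.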

  17. Recce imagery compression options

    NASA Astrophysics Data System (ADS)

    Healy, Donald J.

    1995-09-01

    The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
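    Why lossless coding of DPCM deltas pays off can be seen from a small entropy calculation: differences of a smooth signal concentrate on far fewer symbols than the raw samples, leaving room for Huffman- or Rice-style coders. The signal below is synthetic and purely illustrative.

```python
import numpy as np

def entropy_bits(symbols):
    """Empirical Shannon entropy (bits/symbol) of a sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth synthetic scanline: raw samples span ~200 levels, while the
# successive differences (DPCM deltas) stay in {-1, 0, 1}.
x = np.round(100.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))).astype(int)
deltas = np.diff(x)

h_raw, h_delta = entropy_bits(x), entropy_bits(deltas)
print(h_raw, h_delta)
```

    Sensor noise broadens the delta distribution and raises its entropy, which is one way to see why noise in the original imagery affects the lossless-coding comparisons.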

  18. A variable-gain output feedback control design methodology

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.

    1989-01-01

    A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.

  19. Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidovits, Seth; Fisch, Nathaniel J.

    Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^(5/2). The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.
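    The competing dependences can be made explicit with the standard Braginskii scaling of the unmagnetized ion viscosity (quoted here as background, with ln Λ the Coulomb logarithm):

```latex
\mu \;\propto\; \frac{T^{5/2}}{Z^{4}\,\ln\Lambda},
```

    so the T^(5/2) growth that drives sudden viscous dissipation can be offset, or reversed, when the charge state Z rises during compression.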

  20. Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-11-14

    Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^{5/2}. The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.
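The T^{5/2} viscosity scaling above is what drives the sudden-dissipation effect, and the ionization dependence is what can defeat it. A minimal numerical sketch, assuming an ideal monatomic gas (so T ~ L^-2 under 3-D adiabatic compression) and a Braginskii-like ion viscosity μ ~ T^{5/2}/Z^4 with density and Coulomb-logarithm factors held fixed; the Z^4 dependence and the helper name are illustrative assumptions, not taken from the paper:

```python
def viscosity_ratio(compression, z_ratio):
    """Relative ion-viscosity change for a 3-D compression by linear
    factor `compression` (each side L -> L / compression).

    Assumes T ~ L^-2 (adiabatic ideal monatomic gas) and a
    Braginskii-like scaling mu ~ T^(5/2) / Z^4, where `z_ratio` is the
    relative increase in ionization state Z during the compression.
    """
    temperature_ratio = compression ** 2          # T ~ L^-2
    return temperature_ratio ** 2.5 / z_ratio ** 4

# Without ionization, a 10x linear compression boosts viscosity by ~1e5;
# if Z merely doubles along the way, that growth is cut by a factor of 16.
no_ionization = viscosity_ratio(10.0, 1.0)    # ~1.0e5
with_ionization = viscosity_ratio(10.0, 2.0)  # ~6.25e3
```

The factor-of-16 reduction illustrates how a rising ionization state can postpone or prevent the viscous-dissipation threshold.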

  1. On the compressibility and temperature boundary of warm frozen soils

    NASA Astrophysics Data System (ADS)

    Qi, Jilin; Dang, Boxiang; Guo, Xueluan; Sun, Xiaoyu; Yan, Xu

    2017-04-01

    A silty-clay obtained along the Qinghai-Tibetan railway and a standard Chinese sand were taken as study objects. Saturated frozen soil samples were prepared for testing. Step loading was used and confined compression was carried out on the soils under different temperatures. The compression index and pseudo-preconsolidation pressure (PPC) were obtained. Unlike in unfrozen soils, the PPC is not associated with stress history; however, it is still the boundary between elastic and plastic deformations. Different compression indexes can be obtained from an individual compression curve under pressures before and after the PPC. The parameters at different thermal and stress conditions were analyzed. It was found that temperature plays a critical role in the mechanical behaviour of frozen soils. Further tests were then carried out on the silty-clay in order to suggest a convincing temperature boundary for defining warm frozen soil. Three groups of ice-rich samples with different ice contents were prepared and tested under confined compression. The samples were compressed under a constant load and at 5 stepped temperatures. Strain rates at the different temperatures were examined, and it was found that the strain rate increased abruptly at around -0.6°C. Analysis of the compression index was performed on data both from our own testing program and from the literature, which showed that about -1°C was a turning point in the curves of compression index against temperature. Based on our work and taking into account the unfrozen water content vs. temperature relationship, the range of -1°C to -0.5°C seems to be where the mechanical properties change greatly. For convenience, -1.0°C can be defined as the boundary for warm frozen soils.

  2. An experimental study on compressive behavior of rubble stone walls retrofitted with BFRP grids

    NASA Astrophysics Data System (ADS)

    Huang, Hui; Jia, Bin; Li, Wenjing; Liu, Xiao; Yang, Dan; Deng, Chuanli

    2018-03-01

    An experimental study was conducted to investigate the compressive behavior of rubble stone walls retrofitted with BFRP grids. The experimental program consisted of four rubble stone walls: one unretrofitted rubble stone wall (reference wall) and three walls retrofitted with BFRP grids. The main purpose of the tests was to gain a better understanding of the compressive behavior of rubble stone walls retrofitted with different amounts of BFRP grids. The experimental results showed that the reference wall failed by out-of-plane collapse due to poor connection between rubble stone blocks, while the three BFRP-grid-retrofitted walls failed by BFRP grid rupture followed by out-of-plane collapse. The measured compressive strength of the BFRP-grid-retrofitted walls is about 1.4 to 2.5 times that of the reference wall. In addition, the rubble stone wall retrofitted with the maximum amount of BFRP grids showed the minimum vertical and out-of-plane displacements under the same load.

  3. National proficiency-gain curves for minimally invasive gastrointestinal cancer surgery.

    PubMed

    Mackenzie, H; Markar, S R; Askari, A; Ni, M; Faiz, O; Hanna, G B

    2016-01-01

    Minimal access surgery for gastrointestinal cancer has short-term benefits but is associated with a proficiency-gain curve. The aim of this study was to define national proficiency-gain curves for minimal access colorectal and oesophagogastric surgery, and to determine the impact on clinical outcomes. All adult patients undergoing minimal access oesophageal, colonic and rectal surgery between 2002 and 2012 were identified from the Hospital Episode Statistics database. Proficiency-gain curves were created using risk-adjusted cumulative sum analysis. Change points were identified, and bootstrapping was performed with 1000 iterations to identify a confidence level. The primary outcome was 30-day mortality; secondary outcomes were 90-day mortality, reintervention, conversion and length of hospital stay. Some 1696, 15 008 and 16 701 minimal access oesophageal, rectal and colonic cancer resections were performed during the study period. The change point in the proficiency-gain curve for 30-day mortality for oesophageal, rectal and colonic surgery was 19 (confidence level 98·4 per cent), 20 (99·2 per cent) and three (99·5 per cent) procedures; the mortality rate fell from 4·0 to 2·0 per cent (relative risk reduction (RRR) 0·50, P = 0·033), from 2·1 to 1·2 per cent (RRR 0·43, P < 0·001) and from 2·4 to 1·8 per cent (RRR 0·25, P = 0·058) respectively. The change point in the proficiency-gain curve for reintervention in oesophageal, rectal and colonic resection was 19 (98·1 per cent), 32 (99·5 per cent) and 26 (99·2 per cent) procedures respectively. There were also significant proficiency-gain curves for 90-day mortality, conversion and length of stay. The introduction of minimal access gastrointestinal cancer surgery has been associated with a proficiency-gain curve for mortality and major morbidity at a national level. Unnecessary patient harm should be avoided by appropriate training and monitoring of new surgical techniques. © 2015 BJS
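The risk-adjusted CUSUM used to build these proficiency-gain curves accumulates observed-minus-expected outcomes case by case. A minimal, illustrative sketch of that core idea in Python (the study's actual risk-adjustment model and bootstrapped change-point detection are not reproduced here; the 2 per cent baseline risk below is an assumed example):

```python
def oe_cusum(outcomes, expected_risks):
    """Observed-minus-expected CUSUM for binary outcomes (1 = death).

    Each adverse event raises the curve by (1 - expected risk); each
    good outcome lowers it by the expected risk. An early rise followed
    by a sustained fall is the signature of a proficiency-gain curve.
    """
    s, curve = 0.0, []
    for outcome, risk in zip(outcomes, expected_risks):
        s += outcome - risk
        curve.append(s)
    return curve

# Early cases carry more events than the assumed 2% baseline predicts,
# later cases fewer, producing the characteristic rise-then-fall shape.
chart = oe_cusum([1, 1, 0, 0, 0, 0], [0.02] * 6)
```

The change point reported per procedure is where such a curve turns over; the bootstrap in the study attaches a confidence level to that turning point.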

  4. Compression of facsimile graphics for transmission over digital mobile satellite circuits

    NASA Astrophysics Data System (ADS)

    Dimolitsas, Spiros; Corcoran, Frank L.

    A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed are capable of achieving a compression of approximately 32 to 1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.

  5. High Average Power Laser Gain Medium With Low Optical Distortion Using A Transverse Flowing Liquid Host

    DOEpatents

    Comaskey, Brian J.; Ault, Earl R.; Kuklo, Thomas C.

    2005-07-05

    A high average power, low optical distortion laser gain media is based on a flowing liquid media. A diode laser pumping device with tailored irradiance excites the laser active atom, ion or molecule within the liquid media. A laser active component of the liquid media exhibits energy storage times longer than or comparable to the thermal optical response time of the liquid. A circulation system that provides a closed loop for mixing and circulating the lasing liquid into and out of the optical cavity includes a pump, a diffuser, and a heat exchanger. A liquid flow gain cell includes flow straighteners and flow channel compression.

  6. Computation of saddle point of attachment

    NASA Technical Reports Server (NTRS)

    Hung, Ching-Mao; Sung, Chao-Ho; Chen, Chung-Lung

    1991-01-01

    Low-speed flows over a cylinder mounted on a flat plate are studied numerically in order to confirm the existence of a saddle point of attachment in the flow before an obstacle, to analyze the flow characteristics near the saddle point theoretically, and to address the significance of the saddle point of attachment to the construction of external flow structures, the interpretation of experimental surface oil-flow patterns, and the theoretical definition of three-dimensional flow separation. Two numerical codes, one for an incompressible flow and another for a compressible flow, are used for various Mach numbers, Reynolds numbers, grid sizes, and numbers of grid points. It is pointed out that the potential presence of a saddle point of attachment means that a line of 'oil accumulation' from both sides of a skin-friction line emanating outward from a saddle point can be either a line of separation or a line of attachment.

  7. Compressive residual strength of graphite/epoxy laminates after impact

    NASA Technical Reports Server (NTRS)

    Guy, Teresa A.; Lagace, Paul A.

    1992-01-01

    The issue of damage tolerance after impact, in terms of the compressive residual strength, was experimentally examined in graphite/epoxy laminates using Hercules AS4/3501-6 in a (+ or - 45/0)(sub 2S) configuration. Three different impactor masses were used at various velocities and the resultant damage measured via a number of nondestructive and destructive techniques. Specimens were then tested to failure under uniaxial compression. The results clearly show that a minimum compressive residual strength exists which is below the open hole strength for a hole of the same diameter as the impactor. Increases in velocity beyond the point of minimum strength cause a difference in the damage produced and cause a resultant increase in the compressive residual strength which asymptotes to the open hole strength value. Furthermore, the results show that this minimum compressive residual strength value is independent of the impactor mass used and is only dependent upon the damage present in the impacted specimen which is the same for the three impactor mass cases. A full 3-D representation of the damage is obtained through the various techniques. Only this 3-D representation can properly characterize the damage state that causes the resultant residual strength. Assessment of the state-of-the-art in predictive analysis capabilities shows a need to further develop techniques based on the 3-D damage state that exists. In addition, the need for damage 'metrics' is clearly indicated.

  8. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
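As a generic illustration of DCT-based compression (a plain orthonormal 1-D 8-point DCT pair, not the modified CCITT JPEG variant described above), a block can be transformed, its small coefficients discarded, and the block reconstructed:

```python
import math

def dct2(x):
    """Orthonormal 1-D DCT-II of a list x."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * math.sqrt((1 if k == 0 else 2) / N))
    return out

def idct2(X):
    """Inverse transform (orthonormal DCT-III), exact round trip."""
    N = len(X)
    return [sum(X[k] * math.sqrt((1 if k == 0 else 2) / N)
                * math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

def lossy_roundtrip(block, keep=3):
    """Keep only the `keep` largest-magnitude coefficients, zero the rest."""
    coeffs = dct2(block)
    thresh = sorted((abs(c) for c in coeffs), reverse=True)[keep - 1]
    return idct2([c if abs(c) >= thresh else 0.0 for c in coeffs])

row = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]  # one block of an image row
exact = idct2(dct2(row))          # lossless round trip
approx = lossy_roundtrip(row)     # coarse 3-coefficient approximation
```

Discarding the small high-frequency coefficients is what shrinks the bit stream; the inverse transform then reproduces the local brightness ramp with modest error.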

  9. Does team lifting increase the variability in peak lumbar compression in ironworkers?

    PubMed

    Faber, Gert; Visser, Steven; van der Molen, Henk F; Kuijer, P Paul F M; Hoozemans, Marco J M; Van Dieën, Jaap H; Frings-Dresen, Monique H W

    2012-01-01

    Ironworkers frequently perform heavy lifting tasks in teams of two or four workers. Team lifting could potentially lead to a higher variation in peak lumbar compression forces than lifts performed by one worker, resulting in higher maximal peak lumbar compression forces. This study compared single-worker lifts (25-kg, iron bar) to two-worker lifts (50-kg, two iron bars) and to four-worker lifts (100-kg, iron lattice). Inverse dynamics was used to calculate peak lumbar compression forces. To assess the variability in peak lumbar loading, all three lifting tasks were performed six times. Results showed that the variability in peak lumbar loading was somewhat higher in the team lifts compared to the single-worker lifts. However, despite this increased variability, team lifts did not result in larger maximum peak lumbar compression forces. Therefore, it was concluded that, from a biomechanical point of view, team lifting does not result in an additional risk for low back complaints in ironworkers.

  10. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  11. Optical gain coefficients of silicon: a theoretical study

    NASA Astrophysics Data System (ADS)

    Tsai, Chin-Yi

    2018-05-01

    A theoretical model is presented and an explicit formula is derived for calculating the optical gain coefficients of indirect band-gap semiconductors. The model is based on the second-order time-dependent perturbation theory of quantum mechanics, incorporating all eight processes of photon/phonon emission and absorption between the band edges of the conduction and valence bands. Numerical calculation results are given for Si. The calculated absorption coefficients agree well with the existing fitting formula for experimental data with two modes of phonons: optical phonons with energy of 57.73 meV and acoustic phonons with energy of 18.27 meV near (but not exactly at) the zone edge of the X-point in the phonon dispersion relation. These closely match existing data of 57.5 meV transverse optical (TO) phonons at the X4-point and 18.6 meV transverse acoustic (TA) phonons at the X3-point of the zone edge. The calculated results show that the material optical gain of Si will overcome free-carrier absorption if the energy separation of the quasi-Fermi levels between electrons and holes exceeds 1.15 eV.

  12. Avoiding Obstructions in Aiming a High-Gain Antenna

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2006-01-01

    The High Gain Antenna Pointing and Obstruction Avoidance software performs computations for pointing a Mars Rover high-gain antenna for communication with Earth while (1) avoiding line-of-sight obstructions (the Martian terrain and other parts of the Rover) that would block communication and (2) taking account of limits in the ranges of motion of antenna gimbals and of kinematic singularities in gimbal mechanisms. The software uses simplified geometric models of obstructions and of the trajectory of the Earth in the Martian sky (see figure). It treats all obstructions according to a generalized approach, computing and continually updating the time remaining before interception of each obstruction. In cases in which the gimbal-mechanism design allows two aiming solutions, the algorithm chooses the solution that provides the longest obstruction-free Earth-tracking time. If the communication session continues until an obstruction is encountered in the current pointing solution and the other solution is now unobstructed, then the algorithm automatically switches to the other position. The software also notifies communication-managing software to cease transmission during the switch to the unobstructed position, resuming it when the switch is complete.

  13. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  14. The Existence of Steady Compressible Subsonic Impinging Jet Flows

    NASA Astrophysics Data System (ADS)

    Cheng, Jianfeng; Du, Lili; Wang, Yongfu

    2018-03-01

    In this paper, we investigate compressible subsonic impinging jet flows issuing from a semi-infinitely long nozzle and impacting on a solid wall. Firstly, it is shown that, given a two-dimensional semi-infinitely long nozzle, a wall behind the nozzle, and an appropriate atmospheric pressure, there exists a smooth global subsonic compressible impinging jet flow with two asymptotic directions. The subsonic impinging jet develops two free streamlines, which initiate smoothly at the end points of the semi-infinitely long nozzle. In particular, there exists a smooth curve which separates the fluids which go to different places downstream. Moreover, under some suitable asymptotic assumptions on the nozzle, the asymptotic behaviors of the compressible subsonic impinging jet flows in the inlet and the downstream are obtained by means of a blow-up argument. On the other hand, the non-existence of compressible subsonic impinging jet flows with only one asymptotic direction is also established. The main result of this paper solves open problem (4) in Chapter 16.3 proposed by Friedman in his famous survey (Friedman, Mathematics in Industrial Problems, Part II, IMA Volumes in Mathematics and its Applications, vol 24, Springer, New York, 1989).

  15. Stent longitudinal strength assessed using point compression: insights from a second-generation, clinically related bench test.

    PubMed

    Ormiston, John A; Webber, Bruce; Ubod, Ben; White, Jonathon; Webster, Mark W I

    2014-02-01

    Stent longitudinal distortion, while infrequent, can lead to adverse clinical events. Our first bench comparison of the susceptibility of different stent designs to distortion applied force to the entire circumference of the proximal stent hoop. That test increased understanding of stent design and led to recommendations for design change in some. Our second-generation test more closely mimics clinical scenarios by applying force to a point on the proximal hoop of a malapposed stent. Each 3-mm-diameter stent was secured in a test apparatus so that its proximal 5 mm was malapposed in a 3.5-mm tube. An Instron applied force to the proximal hoop of each of 5 examples of each of 6 stent designs using a narrow rod, so that the force applied and the distance compressed could be measured. Hoops on the side of the force were pushed together, became malapposed, and obstructed the lumen. In addition, the proximal stent hoop tilted, causing malapposition, with the side of the stent contralateral to the applied force obstructing the lumen. This second-generation, more clinically relevant test showed that the Biomatrix Flex was the most resistant to deformation and the Element the most easily deformed. The addition of more connectors between the proximal hoops in the Promus Premier design has reduced the potential for distortion when compared with the Element, so that distortion was similar to the Vision, Multi-Link 8, and Integrity designs. The test also provided insight into the way in which stents are likely to distort in clinical practice.

  16. Simplified Modeling of Steady-State and Transient Brillouin Gain in Magnetoactive Non-Centrosymmetric Semiconductors

    NASA Astrophysics Data System (ADS)

    Singh, M.; Aghamkar, P.; Sen, P. K.

    With the aid of a hydrodynamic model of semiconductor plasmas, a detailed analytical investigation is made of both the steady-state and the transient Brillouin gain in magnetized non-centrosymmetric III-V semiconductors arising from the nonlinear interaction of an intense pump beam with the internally generated acoustic wave, due to the piezoelectric and electrostrictive properties of the crystal. Using the fact that the origin of coherent Brillouin scattering (CBS) lies in the third-order (Brillouin) susceptibility of the medium, we obtain an expression for the gain coefficient of the backward Stokes mode in the steady-state and transient regimes and study the dependence of its growth rate on piezoelectricity, magnetic field and pump pulse duration. The threshold pump intensity and optimum pulse duration for the onset of transient CBS are estimated. Piezoelectricity and an externally applied magnetic field substantially enhance the transient CBS gain coefficient in III-V semiconductors, which can be of great use in the compression of scattered pulses.

  17. GTRAC: fast retrieval from compressed collections of genomic variants

    PubMed Central

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-01-01

    Motivation: The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10 K to 1 M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of database lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. Results: We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. Availability and Implementation: The GTRAC algorithm is available for download at: https://github.com/kedartatwawadi/GTRAC Contact: kedart@stanford.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587665

  18. Seeding magnetic fields for laser-driven flux compression in high-energy-density plasmas.

    PubMed

    Gotchev, O V; Knauer, J P; Chang, P Y; Jang, N W; Shoup, M J; Meyerhofer, D D; Betti, R

    2009-04-01

    A compact, self-contained magnetic-seed-field generator (5 to 16 T) is the enabling technology for a novel laser-driven flux-compression scheme in laser-driven targets. A magnetized target is directly irradiated by a kilojoule or megajoule laser to compress the preseeded magnetic field to thousands of teslas. A fast (300 ns), 80 kA current pulse delivered by a portable pulsed-power system is discharged into a low-mass coil that surrounds the laser target. A >15 T target field has been demonstrated using a <100 J capacitor bank, a laser-triggered switch, and a low-impedance (<1 Omega) strip line. The device has been integrated into a series of magnetic-flux-compression experiments on the 60 beam, 30 kJ OMEGA laser [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. The initial application is a novel magneto-inertial fusion approach [O. V. Gotchev et al., J. Fusion Energy 27, 25 (2008)] to inertial confinement fusion (ICF), where the amplified magnetic field can inhibit thermal conduction losses from the hot spot of a compressed target. This can lead to the ignition of massive shells imploded with low velocity, a way of reaching higher gains than is possible with conventional ICF.

  19. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  20. Fast and predictable video compression in software design and implementation of an H.261 codec

    NASA Astrophysics Data System (ADS)

    Geske, Dagmar; Hess, Robert

    1998-09-01

    The use of software codecs for video compression is becoming commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound for the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamic reduction of the number of processed macroblocks in peak situations. Besides this, absolute speed is a point of interest. The question of whether and how software compression of high-quality video is feasible on today's desktop computers is examined.

  1. Methods for compressible fluid simulation on GPUs using high-order finite differences

    NASA Astrophysics Data System (ADS)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
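The solver's exact scheme is not reproduced here, but the standard sixth-order central first-derivative stencil, the kind of building block such 19/55-point solvers are assembled from, can be sketched in a few lines of Python:

```python
import math

def d1_sixth_order(f, h):
    """First derivative of samples f (grid spacing h) using the standard
    sixth-order central stencil; only interior points are returned.

    f'(x_i) ~ (-f[i-3] + 9 f[i-2] - 45 f[i-1]
               + 45 f[i+1] - 9 f[i+2] + f[i+3]) / (60 h)
    """
    c = (-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0)
    return [sum(c[j] * f[i - 3 + j] for j in range(7)) / (60.0 * h)
            for i in range(3, len(f) - 3)]

# Differentiate sin(x); the result approximates cos(x) to ~h^6 accuracy.
h = 2 * math.pi / 64
xs = [i * h for i in range(64 + 7)]
deriv = d1_sixth_order([math.sin(x) for x in xs], h)
# deriv[i] approximates cos(xs[i + 3])
```

The three-point halo on each side of this stencil is exactly what makes the computation cache-hungry on a GPU and motivates the cache-blocking strategy described above.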

  2. Solution of weakly compressible isothermal flow in landfill gas collection networks

    NASA Astrophysics Data System (ADS)

    Nec, Y.; Huculak, G.

    2017-12-01

    Pipe networks collecting gas in sanitary landfills operate under the regime of a weakly compressible isothermal flow of ideal gas. The effect of compressibility has been traditionally neglected in this application in favour of simplicity, thereby creating a conceptual incongruity between the flow equations and thermodynamic equation of state. Here the flow is solved by generalisation of the classic Darcy-Weisbach equation for an incompressible steady flow in a pipe to an ordinary differential equation, permitting continuous variation of density, viscosity and related fluid parameters, as well as head loss or gain due to gravity, in isothermal flow. The differential equation is solved analytically in the case of ideal gas for a single edge in the network. Thereafter the solution is used in an algorithm developed to construct the flow equations automatically for a network characterised by an incidence matrix, and determine pressure distribution, flow rates and all associated parameters therein.
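For reference, the classic incompressible Darcy-Weisbach relation that the paper generalizes to an ordinary differential equation can be sketched as follows (the friction factor, pipe dimensions and gas properties below are illustrative assumptions, not values from the paper):

```python
def darcy_weisbach_dp(f, length, diameter, density, velocity):
    """Pressure drop (Pa) for steady incompressible pipe flow:

        dp = f * (L / D) * (rho * v^2 / 2)

    The cited work replaces this algebraic relation with an ODE along
    the pipe axis, so that density, viscosity and gravity head can vary
    continuously for weakly compressible isothermal ideal-gas flow.
    """
    return f * (length / diameter) * 0.5 * density * velocity ** 2

# e.g. 100 m of 0.3 m pipe, landfill gas at ~1.2 kg/m^3 moving at 5 m/s
dp = darcy_weisbach_dp(0.02, 100.0, 0.3, 1.2, 5.0)   # ~100 Pa
```

Treating density as constant over each edge is the incongruity the paper removes: for gas networks the ideal-gas law couples density to the falling pressure along the pipe.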

  3. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  4. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies may improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
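The multiplexed measurement at the heart of compressive sensing is simply y = Φx with far fewer measurement rows than signal entries. A toy Python sketch for a 1-sparse signal, where plain correlation with the measurement columns suffices to locate the spike (real systems use sparse-recovery solvers; all sizes and names here are illustrative assumptions):

```python
import random

random.seed(0)
N, M = 64, 12            # signal length, number of compressive measurements
k_true, amp = 17, 5.0
x = [0.0] * N
x[k_true] = amp          # a 1-sparse signal: one spike of height 5

# Random +/-1 sensing matrix Phi (M x N), compressive measurement y = Phi x
Phi = [[random.choice((-1.0, 1.0)) for _ in range(N)] for _ in range(M)]
y = [sum(Phi[m][n] * x[n] for n in range(N)) for m in range(M)]

# Correlate each column of Phi with y; with enough measurements the
# true spike location typically scores highest (score amp*M at k_true).
scores = [abs(sum(Phi[m][n] * y[m] for m in range(M))) for n in range(N)]
k_hat = max(range(N), key=scores.__getitem__)
```

Only 12 numbers are transmitted instead of 64; sparsity of the signal is what makes the undersampled system invertible in practice.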

  5. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images, as well as to the output of the compression algorithms. ...The algorithms ...resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ...using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image
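The contrast in this record (7:1 for a classified map versus 1.4:1 for the raw image) can be reproduced in spirit with any Lempel-Ziv-family compressor. The sketch below uses Python's zlib (DEFLATE, which is LZ77-based) as a stand-in; the exact ratios are data-dependent, and the inputs are synthetic: a classified map has long runs of identical class labels, while a noisy unclassified image has little redundancy.

```python
import random
import zlib

# Regular "classified map" stand-in: a handful of class labels in long,
# repeating patterns, which LZ77-style coding exploits heavily.
classified = bytes([i % 4 for i in range(64)]) * 1024

# Noisy "unclassified image" stand-in: near-incompressible random bytes.
random.seed(0)
unclassified = bytes(random.randrange(256) for _ in range(64 * 1024))

for name, data in (("classified", classified), ("unclassified", unclassified)):
    ratio = len(data) / len(zlib.compress(data, 9))
    print(f"{name}: {ratio:.1f}:1")
```

The classified stand-in compresses by well over an order of magnitude, while the random data stays near 1:1, mirroring the qualitative gap the record reports.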

  6. Mammographic compression in Asian women.

    PubMed

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area were widely variable between [relative standard deviation (RSD)≥21.0%] and within (p<0.0001) Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  7. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
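The accuracy measures this record reports reduce to simple ratios over reading outcomes. A hedged sketch follows; the study's confidence intervals come from generalized estimating equations, which this toy calculation does not reproduce, and the counts below are hypothetical, chosen only to show the arithmetic.

```python
# Basic diagnostic accuracy from a 2x2 table of reader calls vs. ground truth.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true emergent findings the reviewers detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of negative cases correctly read as negative."""
    return tn / (tn + fp)

# Hypothetical pooled counts at one compression level:
print(f"sensitivity {sensitivity(45, 5):.0%}, specificity {specificity(24, 1):.0%}")
```

With these made-up counts the script prints 90% sensitivity and 96% specificity, the same order of figures the study reports at its highest compression ratio.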

  8. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts, as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle-era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement for the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after spaceflight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual spaceflight and 6 degree head-down tilt bed rest as a model of spaceflight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  9. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  10. Biological sequence compression algorithms.

    PubMed

    Matsumoto, T; Sadakane, K; Imai, H

    2000-01-01

    Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will continue to grow, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. On the other hand, CTW (Context Tree Weighting Method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known. One is called palindromes, or reverse complements, and the other is approximate repeats. Several specific algorithms for DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences become available to it. Before encoding the next symbol, the algorithm searches for an approximate repeat or palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, our algorithm represents it by its length and distance. By using this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.
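The two structures this record names are easy to make concrete. The following is a minimal sketch, not the paper's CTW implementation: a DNA "palindrome" is a sequence equal to its own reverse complement, and an approximate repeat is an earlier, nearly identical copy that a compressor can encode as a (length, distance) pair instead of raw symbols. The helper names and thresholds here are illustrative.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(s: str) -> str:
    """Complement each base, then reverse the strand."""
    return s.translate(COMPLEMENT)[::-1]

def is_dna_palindrome(s: str) -> bool:
    """A sequence equal to its own reverse complement, e.g. GAATTC."""
    return s == reverse_complement(s)

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def find_approximate_repeat(history: str, window: str, max_mismatch: int = 1):
    """Distance back to an earlier copy of `window` within `max_mismatch`
    substitutions, or None; a compressor would emit (len(window), distance)."""
    w = len(window)
    for start in range(len(history) - w + 1):
        if hamming(history[start:start + w], window) <= max_mismatch:
            return len(history) - start
    return None

print(is_dna_palindrome("GAATTC"))                      # EcoRI site: True
print(find_approximate_repeat("ACGTTACG", "ACGG"))      # near-repeat found
```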

  11. Compressive sensing for single-shot two-dimensional coherent spectroscopy

    NASA Astrophysics Data System (ADS)

    Harel, E.; Spencer, A.; Spokoyny, B.

    2017-02-01

    In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encode the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection with single-shot, phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE), eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response with signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications in nonlinear spectroscopy using SPARSE with broadband femtosecond light sources in as yet unexplored regions of the electromagnetic spectrum.

  12. smallWig: parallel compression of RNA-seq WIG files.

    PubMed

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    this claim, we performed a statistical analysis of expression data in different transform domains and developed accompanying entropy coding methods that bridge the gap between theoretical and practical WIG file compression rates. We tested different variants of the smallWig compression algorithm on a number of integer- and real-valued (floating point) RNA-seq WIG files generated by the ENCODE project. The results reveal that, on average, smallWig offers 18-fold compression rate improvements, up to 2.5-fold compression time improvements, and 1.5-fold decompression time improvements when compared with bigWig. On the tested files, the memory usage of the algorithm never exceeded 90 KB. When more elaborate context mixing compressors were used within smallWig, the obtained compression rates were as much as 23 times better than those of bigWig. For smallWig used in the random query mode, which also supports retrieval of summary statistics, an overhead in the compression rate of roughly 3-17% was introduced, depending on the chosen system parameters. Increases in encoding and decoding time of 30% and 55%, respectively, represent an additional performance cost of enabling random data access. We also implemented smallWig using multi-processor programming. This parallelization feature decreases the encoding delay 2-3.4 times compared with that of a single-processor implementation, with the number of processors used ranging from 2 to 8; in the same parameter regime, the decoding delay decreased 2-5.2 times. The smallWig software can be downloaded from: http://stanford.edu/~zhiyingw/smallWig/smallwig.html, http://publish.illinois.edu/milenkovic/, http://web.stanford.edu/~tsachy/. zhiyingw@stanford.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Numerical simulation of the compressible Orszag-Tang vortex 2. Supersonic flow

    NASA Technical Reports Server (NTRS)

    Picone, J. M.; Dahlburg, Russell B.

    1990-01-01

    This numerical investigation of the Orszag-Tang vortex system in compressible magnetofluids considers initial conditions with embedded supersonic regions. The simulations have initial average Mach numbers 1.0 and 1.5 and beta 10/3 with Lundquist numbers 50, 100, or 200. The behavior of the system differs significantly from that found previously for the incompressible and subsonic analogs. Shocks form at the downstream boundaries of the embedded supersonic regions outside the central magnetic X-point and produce strong local current sheets which dissipate appreciable magnetic energy. Reconnection at the central X-point, which dominates the incompressible and subsonic systems, peaks later and has a smaller impact as M increases from 0.6 to 1.5. Similarly, correlation between the momentum and magnetic field begins significant growth later than in subsonic and incompressible flows. The shocks bound large compression regions, which dominate the wavenumber spectra of autocorrelations in mass density, velocity, and magnetic field.

  14. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.

  15. Efficacy of compression of different capacitance beds in the amelioration of orthostatic hypotension

    NASA Technical Reports Server (NTRS)

    Denq, J. C.; Opfer-Gehrking, T. L.; Giuliani, M.; Felten, J.; Convertino, V. A.; Low, P. A.

    1997-01-01

    Orthostatic hypotension (OH) is the most disabling and serious manifestation of adrenergic failure, occurring in the autonomic neuropathies, pure autonomic failure (PAF) and multiple system atrophy (MSA). No specific treatment is currently available for most etiologies of OH. A reduction in venous capacity, secondary to some physical counter maneuvers (e.g., squatting or leg crossing), or the use of compressive garments, can ameliorate OH. However, there is little information on the differential efficacy, or the mechanisms of improvement, engendered by compression of specific capacitance beds. We therefore evaluated the efficacy of compression of specific compartments (calves, thighs, low abdomen, calves and thighs, and all compartments combined), using a modified antigravity suit, on the end-points of orthostatic blood pressure, and symptoms of orthostatic intolerance. Fourteen patients (PAF, n = 9; MSA, n = 3; diabetic autonomic neuropathy, n = 2; five males and nine females) with clinical OH were studied. The mean age was 62 years (range 31-78). The mean +/- SEM orthostatic systolic blood pressure when all compartments were compressed was 115.9 +/- 7.4 mmHg, significantly improved (p < 0.001) over the head-up tilt value without compression of 89.6 +/- 7.0 mmHg. The abdomen was the only single compartment whose compression significantly reduced OH (p < 0.005). There was a significant increase of peripheral resistance index (PRI) with compression of abdomen (p < 0.001) or all compartments (p < 0.001); end-diastolic index and cardiac index did not change. We conclude that denervation increases vascular capacity, and that venous compression improves OH by reducing this capacity and increasing PRI. Compression of all compartments is the most efficacious, followed by abdominal compression, whereas leg compression alone was less effective, presumably reflecting the large capacity of the abdomen relative to the legs.

  16. The compression of a heavy floating elastic film.

    PubMed

    Jambon-Puillet, Etienne; Vella, Dominic; Protière, Suzie

    2016-11-23

    We study the effect of film density on the uniaxial compression of thin elastic films at a liquid-fluid interface. Using a combination of experiments and theory, we show that dense films first wrinkle and then fold as the compression is increased, similarly to what has been reported when the film density is neglected. However, we highlight the changes in the shape of the fold induced by the film's own weight and extend the model of Diamant and Witten [Phys. Rev. Lett., 2011, 107, 164302] to understand these changes. In particular, we suggest that it is the weight of the film that breaks the up-down symmetry apparent from previous models, but elusive experimentally. We then compress the film beyond the point of self-contact and observe a new behaviour dependent on the film density: the single fold that forms after wrinkling transitions into a closed loop after self-contact, encapsulating a cylindrical droplet of the upper fluid. The encapsulated drop either causes the loop to bend upward or to sink deeper as the compression is increased, depending on the relative buoyancy of the drop-film combination. We propose a model to qualitatively explain this behaviour. Finally, we discuss the relevance of the different buckling modes predicted in previous theoretical studies and highlight the important role of surface tension in the shape of the fold that is observed from the side, an aspect that is usually neglected in theoretical analyses.

  17. Optical speedup at transparency of the gain recovery in semiconductor optical amplifiers

    NASA Astrophysics Data System (ADS)

    Hessler, T. P.; Dupertuis, M.-A.; Deveaud, B.; Emery, J.-Y.; Dagens, B.

    2002-10-01

    Experimental demonstration of optical speedup at transparency (OSAT) has been performed on a 1 mm long semiconductor optical amplifier (SOA). OSAT is a recently proposed scheme that decreases the recovery time of an SOA while maintaining the available gain. It is achieved by externally injecting into the SOA the beam of a separate high-power laser at energies around the transparency point. Even though the experimental conditions were not optimal, a beam of 100 mW decreases the recovery time by a third when it is injected in the vicinity of the material transparency point of the device. This acceleration of the device response without detrimental reduction of the gain is found to be effective over a broad wavelength window of about 20 nm around transparency. The injection of the accelerating beam into the gain region is a less efficient solution, not only because the gain is then strongly diminished but also because the speedup is reduced. This originates from the reduction of the amplified spontaneous emission power in the device, which counterbalances the speeding capabilities of the external laser beam. Another advantage of the OSAT scheme is realized in relatively long SOAs, which suffer from gain overshoot under strong current injection. Simulations show that OSAT decreases the gain overshoot, which should enable us to use OSAT to further speed up the response of long SOAs.

  18. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  19. GTRAC: fast retrieval from compressed collections of genomic variants.

    PubMed

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-09-01

    The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10 K to 1 M. Due to the large redundancies among genomic sequences of individuals from the same species, most medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of database lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples into 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. The GTRAC algorithm is available for download at: https://github.com/kedartatwawadi/GTRAC CONTACT: kedart@stanford.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
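The key idea this record describes, random access over a compressed database, is commonly achieved by compressing each record as an independent block and keeping a byte-offset index. The sketch below is illustrative only (it is not GTRAC, and it uses zlib rather than a specialized Lempel-Ziv compressor); all names and sample payloads are hypothetical.

```python
import zlib

# Per-sample blocks compressed independently + an offset index: fetching one
# sample decompresses only its own block, never the whole archive.

def build_archive(samples):
    blob, index = bytearray(), {}
    for name, data in samples.items():
        comp = zlib.compress(data)
        index[name] = (len(blob), len(comp))   # (offset, compressed length)
        blob.extend(comp)
    return bytes(blob), index

def fetch(blob, index, name):
    off, length = index[name]
    return zlib.decompress(blob[off:off + length])

# Toy variant strings standing in for per-sample variant data.
samples = {f"sample{i}": b"0|1|0|0|1|" * 40 + bytes([48 + i]) for i in range(3)}
blob, index = build_archive(samples)
print(fetch(blob, index, "sample1") == samples["sample1"])
```

The trade-off is that each block is compressed without reference to the others, which sacrifices some cross-sample redundancy; GTRAC's specialized techniques are precisely about recovering that redundancy while keeping access fast.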

  20. CSAM: Compressed SAM format.

    PubMed

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2016-12-15

    Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that it can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data, while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format; and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM CONTACT: canovas-ba@lirmm.fr Supplementary Information: Supplementary data is available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Temperature Measurements in Compressed and Uncompressed SPECTOR Plasmas at General Fusion

    NASA Astrophysics Data System (ADS)

    Young, William; Carter, Neil; Howard, Stephen; Carle, Patrick; O'Shea, Peter; Fusion Team, General

    2017-10-01

    Accurate temperature measurements are critical to establishing the behavior of General Fusion's SPECTOR plasma injector, both before and during compression. As compression tests impose additional constraints on diagnostic access to the plasma, a two-color, filter-based soft x-ray electron temperature diagnostic has been implemented. Ion Doppler spectroscopy measurements also provide impurity ion temperatures on compression tests. The soft x-ray and ion Doppler spectroscopy measurements are being validated against a Thomson scattering system on an uncompressed version of SPECTOR with more diagnostic access. The multipoint Thomson scattering diagnostic also provides up to a six-point temperature and density profile, with the density measurements validated against a far infrared interferometer. Temperatures above 300 eV have been demonstrated to be sustained for over 500 microseconds in uncompressed plasmas. Optimization of soft x-ray filters is ongoing, in order to balance blocking of impurity line radiation with signal strength.

  2. Miniature Fixed Points as Temperature Standards for In Situ Calibration of Temperature Sensors

    NASA Astrophysics Data System (ADS)

    Hao, X. P.; Sun, J. P.; Xu, C. Y.; Wen, P.; Song, J.; Xu, M.; Gong, L. Y.; Ding, L.; Liu, Z. L.

    2017-06-01

    Miniature Ga and Ga-In alloy fixed points as temperature standards are developed at the National Institute of Metrology, China, for the in situ calibration of temperature sensors. A quasi-adiabatic vacuum measurement system is constructed to study the phase-change plateaus of the fixed points. The system comprises a high-stability bath, a quasi-adiabatic vacuum chamber and a temperature control and measurement system. The melting plateau of the Ga fixed point is longer than 2 h at 0.008 W. The standard deviation of the melting temperature of the Ga and Ga-In alloy fixed points is better than 2 mK. The results suggest that the melting temperature of the Ga or Ga-In alloy fixed points is linearly related to the heating power.

  3. Parallel Tensor Compression for Large-Scale Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
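The compression ratios quoted here follow from simple storage arithmetic. A truncated Tucker decomposition of an I1 x ... x Id tensor with multilinear ranks R1..Rd stores a core of prod(R) elements plus d factor matrices of size In x Rn. The sketch below is a back-of-the-envelope model, not the paper's distributed implementation, and the truncation ranks are hypothetical.

```python
from math import prod

def tucker_compression_ratio(dims, ranks):
    """Elements in the dense tensor vs. core + factor matrices."""
    original = prod(dims)
    compressed = prod(ranks) + sum(i * r for i, r in zip(dims, ranks))
    return original / compressed

# The abstract's five-way tensor: 512^3 spatial grid x 64 variables x 128 steps.
dims = (512, 512, 512, 64, 128)
ranks = (50, 50, 50, 16, 20)   # hypothetical ranks; real ranks are data-driven
print(f"compression ratio ~{tucker_compression_ratio(dims, ranks):,.0f}x")
```

Even modest rank truncation yields ratios in the thousands for a tensor this large; the achievable ratio in practice depends on how fast the data's multilinear singular values decay.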

  4. A model for compression-weakening materials and the elastic fields due to contractile cells

    NASA Astrophysics Data System (ADS)

    Rosakis, Phoebus; Notbohm, Jacob; Ravichandran, Guruswami

    2015-12-01

    We construct a homogeneous, nonlinear elastic constitutive law that models aspects of the mechanical behavior of inhomogeneous fibrin networks. Fibers in such networks buckle when in compression. We model this as a loss of stiffness in compression in the stress-strain relations of the homogeneous constitutive model. Problems that model a contracting biological cell in a finite matrix are solved. It is found that matrix displacements and stresses induced by cell contraction decay more slowly (with distance from the cell) in a compression-weakening material than linear elasticity would predict. This points toward a mechanism for long-range cell mechanosensing. In contrast, an expanding cell would induce displacements that decay faster than in a linear elastic matrix.

  5. The estimation of uniaxial compressive strength conversion factor of trona and interbeds from point load tests and numerical modeling

    NASA Astrophysics Data System (ADS)

    Ozturk, H.; Altinpinar, M.

    2017-07-01

    The point load (PL) test is generally used for estimation of the uniaxial compressive strength (UCS) of rocks because of its economic advantages and simplicity in testing. If the PL index of a specimen is known, the UCS can be estimated using conversion factors. Several conversion factors have been proposed by various researchers, and they are dependent upon the rock type. In the literature, conversion factors for different sedimentary, igneous and metamorphic rocks can be found, but no study exists on trona. In this study, laboratory UCS and field PL tests were carried out on trona and interbeds of volcano-sedimentary rocks. Based on these tests, PL to UCS conversion factors for trona and interbeds are proposed. The tests were modeled numerically using a distinct element method (DEM) software, particle flow code (PFC), to guide researchers tackling various types of modeling problems (excavation, cavern design, hydraulic fracturing, etc.) in the abovementioned rock types. Average PFC parallel bond contact model micro properties for the trona and interbeds were determined within this study so that future researchers can use them and avoid the rigorous PFC calibration procedure. It was observed that PFC overestimates the tensile strength of the rocks by a factor that ranges from 22 to 106.
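The conversion-factor workflow this record describes is a one-line estimate, UCS ≈ k · Is(50), applied after the raw point load index is size-corrected to the standard 50 mm reference diameter. The sketch below uses the widely used ISRM-style size correction F = (De/50)^0.45; the k value and test inputs are hypothetical, and the paper's specific factors for trona and its interbeds are not reproduced here.

```python
def size_corrected_index(P_kN, De_mm):
    """Is(50): raw index Is = P / De^2 (P in kN, De in mm, giving MPa),
    scaled by the size-correction factor F = (De / 50)^0.45."""
    Is = P_kN * 1000.0 / De_mm ** 2
    return Is * (De_mm / 50.0) ** 0.45

def ucs_estimate(is50, k):
    """UCS ~= k * Is(50); k is the rock-type-dependent conversion factor."""
    return k * is50

is50 = size_corrected_index(P_kN=4.0, De_mm=54.0)   # hypothetical failure load
print(f"Is(50) = {is50:.2f} MPa, UCS ~= {ucs_estimate(is50, 15.0):.1f} MPa")
```

Because k varies strongly with rock type, a factor calibrated on, say, sandstone cannot be carried over to trona, which is exactly why this record derives material-specific factors.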

  6. Fatigue life of additively manufactured Ti6Al4V scaffolds under tension-tension, tension-compression and compression-compression fatigue load.

    PubMed

    Lietaert, Karel; Cutolo, Antonio; Boey, Dries; Van Hooreweder, Brecht

    2018-03-21

    Mechanical performance of additively manufactured (AM) Ti6Al4V scaffolds has mostly been studied in uniaxial compression. However, in real-life applications, more complex load conditions occur. To address this, a novel sample geometry was designed, tested and analyzed in this work. The new scaffold geometry, with porosity gradient between the solid ends and scaffold middle, was successfully used for quasi-static tension, tension-tension (R = 0.1), tension-compression (R = -1) and compression-compression (R = 10) fatigue tests. Results show that global loading in tension-tension leads to a decreased fatigue performance compared to global loading in compression-compression. This difference in fatigue life can be understood fairly well by approximating the local tensile stress amplitudes in the struts near the nodes. Local stress based Haigh diagrams were constructed to provide more insight in the fatigue behavior. When fatigue life is interpreted in terms of local stresses, the behavior of single struts is shown to be qualitatively the same as bulk Ti6Al4V. Compression-compression and tension-tension fatigue regimes lead to a shorter fatigue life than fully reversed loading due to the presence of a mean local tensile stress. Fractographic analysis showed that most fracture sites were located close to the nodes, where the highest tensile stresses are located.

  7. Measuring learning gain: Comparing anatomy drawing screencasts and paper-based resources.

    PubMed

    Pickering, James D

    2017-07-01

    The use of technology-enhanced learning (TEL) resources is now a common tool across a variety of healthcare programs. Despite this popular approach to curriculum delivery, there remains a paucity of empirical evidence that quantifies the change in learning gain. The aim of the study was to measure the changes in learning gain observed with anatomy drawing screencasts in comparison to a traditional paper-based resource. Learning gain is a widely used term to describe the tangible changes in learning outcomes that have been achieved after a specific intervention. In regard to this study, a cohort of Year 2 medical students voluntarily participated and were randomly assigned to either a screencast or textbook group to compare changes in learning gain across resource type. Using a pre-test/post-test protocol, and a range of statistical analyses, the learning gain was calculated at three test points: immediate post-test, 1-week post-test and 4-week post-test. Results at all test points revealed a significant increase in learning gain and large effect sizes for the screencast group compared to the textbook group. Possible reasons behind the difference in learning gain are explored by comparing the instructional design of both resources. Strengths and weaknesses of the study design are also considered. This work adds to the growing area of research that supports the effective design of TEL resources which are complementary to the cognitive theory of multimedia learning to achieve both an effective and efficient learning resource for anatomical education. Anat Sci Educ 10: 307-316. © 2016 American Association of Anatomists.

  8. Numerical simulation of the compressible Orszag-Tang vortex. Interim report, June 1988-February 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, R.B.; Picone, J.M.

    Results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. Initial conditions consist of a nonrandom, periodic field in which the magnetic and velocity fields contain X-points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial average Mach number of the flow. In the numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. Compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as mass density and nonsolenoidal flow field. These effects include (1) retardation of growth of correlation between the magnetic field and the velocity field, (2) emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible-flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  9. Analysis of the Optimum Usage of Slag for the Compressive Strength of Concrete.

    PubMed

    Lee, Han-Seung; Wang, Xiao-Yong; Zhang, Li-Na; Koh, Kyung-Taek

    2015-03-18

    Ground granulated blast furnace slag is widely used as a mineral admixture to partially replace Portland cement in the concrete industry. As the amount of slag increases, the late-age compressive strength of concrete mixtures increases. However, after an optimum point, any further increase in slag does not improve the late-age compressive strength. This optimum replacement ratio of slag is a crucial factor for its efficient use in the concrete industry. This paper proposes a numerical procedure to analyze the optimum usage of slag for the compressive strength of concrete. This numerical procedure starts with a blended hydration model that simulates cement hydration, slag reaction, and interactions between cement hydration and slag reaction. The amount of calcium silicate hydrate (CSH) is calculated considering the contributions from cement hydration and slag reaction. Then, by using the CSH contents, the compressive strength of the slag-blended concrete is evaluated. Finally, based on the parameter analysis of the compressive strength development of concrete with different slag inclusions, the optimum usage of slag in concrete mixtures is determined to be approximately 40% of the total binder content. The proposed model is verified through experimental results of the compressive strength of slag-blended concrete with different water-to-binder ratios and different slag inclusions.

  10. Analysis of the Optimum Usage of Slag for the Compressive Strength of Concrete

    PubMed Central

    Lee, Han-Seung; Wang, Xiao-Yong; Zhang, Li-Na; Koh, Kyung-Taek

    2015-01-01

    Ground granulated blast furnace slag is widely used as a mineral admixture to partially replace Portland cement in the concrete industry. As the amount of slag increases, the late-age compressive strength of concrete mixtures increases. However, after an optimum point, any further increase in slag does not improve the late-age compressive strength. This optimum replacement ratio of slag is a crucial factor for its efficient use in the concrete industry. This paper proposes a numerical procedure to analyze the optimum usage of slag for the compressive strength of concrete. This numerical procedure starts with a blended hydration model that simulates cement hydration, slag reaction, and interactions between cement hydration and slag reaction. The amount of calcium silicate hydrate (CSH) is calculated considering the contributions from cement hydration and slag reaction. Then, by using the CSH contents, the compressive strength of the slag-blended concrete is evaluated. Finally, based on the parameter analysis of the compressive strength development of concrete with different slag inclusions, the optimum usage of slag in concrete mixtures is determined to be approximately 40% of the total binder content. The proposed model is verified through experimental results of the compressive strength of slag-blended concrete with different water-to-binder ratios and different slag inclusions. PMID:28787998

  11. Weight gain as a barrier to smoking cessation among military personnel.

    PubMed

    Russ, C R; Fonseca, V P; Peterson, A L; Blackman, L R; Robbins, A S

    2001-01-01

    To assess the relationships between active-duty military status, military weight standards, concern about weight gain, and anticipated relapse after smoking cessation. Cross-sectional study. Hospital-based tobacco cessation program. Two hundred fifty-two of 253 eligible enrollees in a tobacco cessation program in 1999 (135 men, 117 women; 43% on active duty in the military). Independent variables included gender, body mass index (weight/height²), and military status. Dependent variables included concern about weight gain with smoking cessation and anticipated relapse. In multivariate regression analyses that controlled for gender and body mass index, active-duty military status was associated with an elevated level of concern about weight gain (1.9-point increase on a 10-point scale; 95% confidence interval [CI], 1.0- to 2.8-point increase), as well as higher anticipated relapse (odds ratio [OR] = 3.6; 95% CI, 1.3 to 9.8). Among subjects who were close to or over the U.S. Air Force maximum allowable weight for height, the analogous OR for active-duty military status was 6.9 (p = .02). Occupational weight standards or expectations, such as those applying to active-duty military personnel, may pose additional barriers for individuals contemplating or attempting smoking cessation. These barriers are likely to hinder efforts to decrease smoking prevalence in certain groups.
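Body mass index, listed above as an independent variable, is weight divided by height squared (kg/m²); a minimal sketch with hypothetical values:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# Hypothetical subject: 80 kg, 1.75 m
print(round(bmi(80, 1.75), 1))  # 26.1
```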

  12. Comparison of three portable instruments to measure compression pressure.

    PubMed

    Partsch, H; Mosti, G

    2010-10-01

    Measurement of the interface pressure between the skin and a compression device has gained practical importance, not only for characterizing the efficacy of different compression products in physiological and clinical studies but also for the training of medical staff. A newly developed portable pneumatic pressure transducer (Picopress®) was compared with two established systems (Kikuhime® and SIGaT tester®) regarding linearity, variability and accuracy on a cylindrical model, using a stepwise inflated sphygmomanometer as the reference. In addition, the variation coefficients were measured by applying the transducers repeatedly under a blood pressure cuff on the distal lower leg of a healthy human subject with stepwise inflation. In the pressure range between 10 and 80 mmHg all three devices showed a linear association with the sphygmomanometer values (Pearson r > 0.99). The best reproducibility (variation coefficients between 1.05% and 7.4%) and the highest degree of accuracy, as demonstrated by Bland-Altman plots, were achieved with the Picopress® transducer. Repeated measurements of pressure on a human leg revealed average variation coefficients for the three devices of 4.17% (Kikuhime®), 8.52% (SIGaT®) and 2.79% (Picopress®). The results suggest that the Picopress® transducer, which also allows dynamic pressure tracing in connection with a software program and which may be left under a bandage for several days, is a reliable instrument for measuring the pressure under a compression device.
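The variation coefficients quoted above are, in the usual definition (assumed here), the standard deviation of repeated readings expressed as a percentage of their mean:

```python
import statistics

def variation_coefficient(readings):
    """Coefficient of variation: sample standard deviation as % of the mean."""
    return 100.0 * statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical repeated interface-pressure readings (mmHg) at one cuff setting
readings = [40.1, 39.2, 41.0, 40.5, 39.8]
print(f"CV = {variation_coefficient(readings):.2f} %")
```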

  13. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2018-07-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  14. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  15. Bunch length compression method for free electron lasers to avoid parasitic compressions

    DOEpatents

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56 > 0), and 3) compensating for aberrations by using nonlinear magnets in the compressor beam line.
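For context, in standard accelerator transfer-map notation (an assumption here; the patent abstract does not spell it out), the momentum compaction term R56 couples a particle's longitudinal position to its relative momentum deviation, so its sign together with the RF slope determines whether the bunch compresses or parasitically over-compresses:

```latex
% First-order longitudinal map through the compressor (assumed convention):
z_f \;=\; z_i \;+\; R_{56}\,\delta_i ,
\qquad \delta_i = \frac{\Delta p}{p_0} ,
\qquad R_{56} > 0 .
```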

  16. The Compressibility Burble

    NASA Technical Reports Server (NTRS)

    Stack, John

    1935-01-01

    Simultaneous air-flow photographs and pressure-distribution measurements have been made of the NACA 4412 airfoil at high speeds in order to determine the physical nature of the compressibility burble. The flow photographs were obtained by the Schlieren method and the pressures were simultaneously measured for 54 stations on the 5-inch-chord wing by means of a multiple-tube photographic manometer. Pressure-measurement results and typical Schlieren photographs are presented. The general nature of the phenomenon called the "compressibility burble" is shown by these experiments. The source of the increased drag is the compression shock that occurs, the excess drag being due to the conversion of a considerable amount of the air-stream kinetic energy into heat at the compression shock.

  17. Gain-scheduled H∞ buckling control of a circular beam-column subject to time-varying axial loads

    NASA Astrophysics Data System (ADS)

    Schaeffner, Maximilian; Platz, Roland

    2018-06-01

    For slender beam-columns loaded by axial compressive forces, active buckling control provides a possibility to increase the maximum bearable axial load above that of a purely passive structure. In this paper, an approach for gain-scheduled H∞ buckling control of a slender beam-column with circular cross-section subject to time-varying axial loads is investigated experimentally. Piezo-elastic supports with integrated piezoelectric stack actuators at the beam-column ends allow an active stabilization in arbitrary lateral directions. The axial loads on the beam-column influence its lateral dynamic behavior and, eventually, cause the beam-column to buckle. A reduced modal model of the beam-column subject to axial loads including the dynamics of the electrical components is set up and calibrated with experimental data. Particularly, the linear parameter-varying open-loop plant is used to design a model-based gain-scheduled H∞ buckling control that is implemented in an experimental test setup. The beam-column is loaded by ramp- and step-shaped time-varying axial compressive loads that result in a lateral deformation of the beam-column due to imperfections, such as predeformation, eccentric loading or clamping moments. The lateral deformations and the maximum bearable loads of the beam-column are analyzed and compared for the beam-column with and without gain-scheduled H∞ buckling control or, respectively, active and passive configuration. With the proposed gain-scheduled H∞ buckling control it is possible to increase the maximum bearable load of the active beam-column by 19% for ramp-shaped axial loads and to significantly reduce the beam-column deformations for step-shaped axial loads compared to the passive structure.

  18. Viscoelastic behavior of basaltic ash from Stromboli volcano inferred from intermittent compression experiments

    NASA Astrophysics Data System (ADS)

    Kurokawa, A. K.; Miwa, T.; Okumura, S.; Uesugi, K.

    2017-12-01

    After an ash-dominated Strombolian eruption, a considerable amount of ash falls back into the volcanic conduit, forming a dense near-surface region compacted by its own weight and that of other fallback clasts (Patrick et al., 2007). Gas accumulation below this dense cap causes a substantial increase in pressure within the conduit, shifting the volcanic activity into the preliminary stages of a forthcoming eruption (Del Bello et al., 2015). Under such conditions, the rheology of the fallback ash plays an important role because it controls whether the fallback ash can form the cap. However, little attention has been given to this point. We examined the rheology of ash collected at Stromboli volcano via intermittent compression experiments, varying temperature and compression time/rate. The ash was deformed at a constant rate during the compression process and then held without further deformation during the rest process. The compression and rest processes were repeated during each experiment to track rheological variations as compaction progressed. Viscoelastic changes during the experiment were estimated with a Maxwell model. The results show that both elasticity and viscosity increase with decreasing porosity. The elasticity shows strong rate dependence in both the compression and rest processes, while the viscosity depends dominantly on temperature, although the compression rate also affects the viscosity during the compression process. Thus, the ash behaves either elastically or viscously depending on the experimental process, temperature, and compression rate/time. These viscoelastic characteristics can be explained by the magnitude relationships between the characteristic relaxation times and the durations of the compression and rest processes. This indicates that the balance of these time scales is key to determining the rheological behavior, and whether the ash behaves elastically or viscously may control cyclic Strombolian eruptions.

  19. Early prediction of olanzapine-induced weight gain for schizophrenia patients.

    PubMed

    Lin, Ching-Hua; Lin, Shih-Chi; Huang, Yu-Hui; Wang, Fu-Chiang; Huang, Chun-Jen

    2018-05-01

    The aim of this study was to determine whether weight changes at week 2 or other factors predicted weight gain at week 6 for schizophrenia patients receiving olanzapine. This study was the secondary analysis of a six-week trial for 94 patients receiving olanzapine (5 mg/d) plus trifluoperazine (5 mg/d), or olanzapine (10 mg/d) alone. Patients were included in analysis only if they had completed the 6-week trial (per protocol analysis). Weight gain was defined as a 7% or greater increase of the patient's baseline weight. The receiver operating characteristic curve was employed to determine the optimal cutoff points of statistically significant predictors. Eleven of the 67 patients completing the 6-week trial were classified as weight gainers. Weight change at week 2 was the statistically significant predictor for ultimate weight gain at week 6. A weight change of 1.0 kg at week 2 appeared to be the optimal cutoff point, with a sensitivity of 0.92, a specificity of 0.75, and an AUC of 0.85. Using weight change at week 2 to predict weight gain at week 6 is favorable in terms of both specificity and sensitivity. Weight change of 1.0 kg or more at 2 weeks is a reliable predictor. Copyright © 2018 Elsevier B.V. All rights reserved.
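The cutoff evaluation described above can be sketched as follows; the week-2 weight changes and week-6 gainer labels below are hypothetical illustration data, not the study's:

```python
def sens_spec(changes, gainers, cutoff):
    """Sensitivity and specificity of the rule 'week-2 change >= cutoff
    predicts week-6 weight gain' against the observed gainer labels."""
    pairs = list(zip(changes, gainers))
    tp = sum(1 for c, g in pairs if c >= cutoff and g)       # correctly flagged
    fn = sum(1 for c, g in pairs if c < cutoff and g)        # missed gainers
    tn = sum(1 for c, g in pairs if c < cutoff and not g)    # correctly cleared
    fp = sum(1 for c, g in pairs if c >= cutoff and not g)   # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical week-2 weight changes (kg) and week-6 gainer labels
changes = [1.5, 0.2, 1.1, -0.3, 0.9, 2.0, 0.4, 1.3]
gainers = [True, False, True, False, True, True, False, True]
sens, spec = sens_spec(changes, gainers, cutoff=1.0)
print(sens, spec)  # 0.8 1.0
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) is exactly what the receiver operating characteristic curve does.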

  20. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image and creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of an image-array pixel selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value that is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels. The filled edge array is then subtracted from the image array to create a difference array. The edge file and the difference array are separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference-file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
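The filling step treats the non-edge pixels as unknowns of Laplace's equation, with the edge pixels held fixed as boundary conditions. The patent uses a multi-grid solver; the sketch below substitutes plain Jacobi relaxation (far slower, but converging to the same harmonic fill) to keep the idea visible:

```python
import numpy as np

def laplace_fill(values, is_edge, iters=2000):
    """Fill non-edge pixels by relaxing Laplace's equation; edge pixels are
    held fixed as boundary conditions (Jacobi iteration, not multi-grid)."""
    a = np.where(is_edge, values, values[is_edge].mean()).astype(float)
    for _ in range(iters):
        # Replace each pixel by the average of its 4 neighbours
        # (edge replication at the image border), then re-pin edge pixels.
        p = np.pad(a, 1, mode="edge")
        smoothed = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        a = np.where(is_edge, values, smoothed)
    return a

# Tiny demo: a 5x5 array with two 'edge' pixels pinned at 0 and 100;
# the fill interpolates smoothly between them.
img = np.zeros((5, 5))
img[0, 0], img[4, 4] = 0.0, 100.0
edges = np.zeros((5, 5), dtype=bool)
edges[0, 0] = edges[4, 4] = True
filled = laplace_fill(img, edges)
```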

  1. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  2. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which form the basis of the UNIX utilities "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
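The principle behind pairing a predictor with statistical coding is that a better next-character predictor lowers the achievable bits per character. A minimal sketch (simple frequency-count models, not the authors' neural predictor) compares the cross-entropy of an order-0 model with an order-1 (previous-character) model on a repetitive sample:

```python
import math
from collections import Counter

def order0_bits(text):
    """Bits/char when each character is coded by its overall frequency."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def order1_bits(text):
    """Bits/char when each character is coded by its frequency
    conditioned on the preceding character (bigram model)."""
    pair = Counter(zip(text, text[1:]))
    prev = Counter(text[:-1])
    total = len(text) - 1
    return -sum(c / total * math.log2(c / prev[a]) for (a, b), c in pair.items())

sample = "the quick brown fox jumps over the lazy dog " * 50
print(order0_bits(sample), order1_bits(sample))
```

The order-1 model needs markedly fewer bits per character on structured text, which is exactly the headroom an entropy coder (arithmetic or Huffman) can exploit.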

  3. Laser observations of the moon: Normal points for 1973

    NASA Technical Reports Server (NTRS)

    Mulholland, J. D.; Shelus, P. J.; Silverberg, E. C.

    1975-01-01

    McDonald Observatory lunar laser ranging observations for 1973 are presented in the form of compressed normal points, and amendments for the 1969-1972 data set are given. Observations of the reflector mounted on the Soviet roving vehicle Lunokhod 2 have also been included.

  4. Laser observations of the moon - Normal points for 1973

    NASA Technical Reports Server (NTRS)

    Mulholland, J. D.; Shelus, P. J.; Silverberg, E. C.

    1975-01-01

    McDonald Observatory lunar laser-ranging observations for 1973 are presented in the form of compressed normal points, and amendments for the 1969-1972 data set are given. Observations of the reflector mounted on the Soviet roving vehicle Lunokhod 2 have also been included.

  5. Material gain engineering in GeSn/Ge quantum wells integrated with an Si platform

    NASA Astrophysics Data System (ADS)

    Mączko, H. S.; Kudrawiec, R.; Gladysiewicz, M.

    2016-09-01

    It is shown that compressively strained Ge1-xSnx/Ge quantum wells (QWs) grown on a Ge substrate with 0.1 ≤ x ≤ 0.2 and a width of 8 nm ≤ d ≤ 14 nm are a very promising gain medium for lasers integrated with an Si platform. Such QWs are type-I QWs with a direct bandgap and a positive transverse-electric mode of material gain, i.e. modal gain. The electronic band structure near the center of the Brillouin zone has been calculated for various Ge1-xSnx/Ge QWs using the 8-band k·p Hamiltonian. To calculate the material gain for these QWs, occupation of the L valley in the Ge barriers has been taken into account. It is clearly shown that this occupation strongly influences the material gain in QWs with low Sn concentrations (Sn < 15%) and is less important for QWs with larger Sn concentrations (Sn > 15%). However, for QWs with Sn > 20% the critical thickness of a GeSn layer deposited on a Ge substrate starts to play an important role. Reducing the QW width shifts the ground electron subband in the QW upwards and increases occupation of the L valley in the barriers at the expense of the Γ valley in the QW region.

  6. A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless

    DTIC Science & Technology

    1993-12-01

    Naval Postgraduate School, Monterey, California. Thesis: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless. Author: Abbott, Walter D., III. Approved for public release; distribution is unlimited.

  7. Effect of Impact Damage and Open Hole on Compressive Strength of Hybrid Composite Laminates

    NASA Technical Reports Server (NTRS)

    Hiel, Clement; Brinson, H. F.

    1993-01-01

    Impact damage tolerance is a frequently listed design requirement for composites hardware. The effect of impact damage and open-hole size on laminate compressive strength was studied on sandwich beam specimens combining CFRP-GFRP hybrid skins and a syntactic foam core. Three test specimen configurations were investigated for this study. The first two were sandwich beams loaded in pure bending (by four-point flexure); one series had a skin damaged by impact, and the second series had a circular hole machined through one of the skins. The reduction of compressive strength with increasing damage (hole) size was compared. Additionally, a third series of uniaxially loaded open-hole compression coupons was tested to generate baseline data for comparison with both series of sandwich beams.

  8. Testing compression strength of wood logs by drilling resistance

    NASA Astrophysics Data System (ADS)

    Kalny, Gerda; Rados, Kristijan; Rauch, Hans Peter

    2017-04-01

    Soil bioengineering is a construction technique that uses biological components for hydraulic and civil engineering solutions, based on the application of living plants and other auxiliary materials, including log wood. Considering the reliability of such constructions, it is important to know about the durability and the degradation process of the wooden logs in order to estimate and retain the integral performance of a soil bioengineering system. An important performance indicator is the compression strength, but this parameter is not easy to examine by non-destructive methods. The Rinntech Resistograph is an instrument that measures the drilling resistance of a 3 mm wide needle in a wooden log. It is a quasi-non-destructive method, as the remaining hole has no weakening effect on the wood. The procedure is simple but yields values that are hard to interpret. To assign drilling resistance values to specific compression strengths, wooden specimens were tested in an experiment and analysed with the Resistograph; afterwards, compression tests were performed on the same specimens. This should allow an easier interpretation of drilling resistance curves in the future. For detailed analyses, the specimens were examined with respect to branch inclusions, cracks and distances between annual rings. Wood specimens were tested perpendicular to the grain. First results show a correlation between drilling resistance and compression strength when using the mean drilling resistance, the average width of the annual rings and the mean range of the minima and maxima values as factors for the drilling resistance. The extended limit of proportionality, the offset yield strength and the maximum strength were taken as parameters for compression strength. Further investigations at a second point in time strengthened these results.
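The reported correlation between drilling-resistance factors and compression strength can be sketched with an ordinary least-squares fit; the paired values below are hypothetical, not the study's measurements:

```python
import numpy as np

# Hypothetical pairs: mean drilling resistance (relative units) vs
# measured compression strength (N/mm^2) -- illustrative values only
resistance = np.array([12.0, 15.5, 18.2, 21.0, 24.3, 27.1])
strength = np.array([28.0, 33.5, 37.0, 43.2, 47.8, 53.1])

# Least-squares line: strength ≈ slope * resistance + intercept
slope, intercept = np.polyfit(resistance, strength, 1)
r = np.corrcoef(resistance, strength)[0, 1]  # Pearson correlation
print(f"strength ≈ {slope:.2f}·R + {intercept:.2f}, r = {r:.3f}")
```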

  9. Compressed air injection technique to standardize block injection pressures.

    PubMed

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below a 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression was estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
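The pressure implied by the 50%-compression rule follows directly from Boyle's law; a minimal sketch assuming 760 mmHg atmospheric pressure and ideal-gas behavior in the closed syringe system:

```python
ATM_MMHG = 760.0  # assumed atmospheric pressure

def net_pressure(compression_fraction):
    """Gauge (net) pressure in mmHg when a trapped air volume is reduced by
    the given fraction: P1*V1 = P2*V2 (Boyle's law), minus atmospheric."""
    remaining = 1.0 - compression_fraction
    return ATM_MMHG / remaining - ATM_MMHG

# At 50% compression the ideal net pressure is 760 mmHg, close to the
# measured estimate of 744.8 mmHg and well below the 1293 mmHg threshold.
print(net_pressure(0.5))  # 760.0
```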

  10. Loss-induced super scattering and gain-induced absorption.

    PubMed

    Feng, Simin

    2016-01-25

    Giant transmission and reflection of a finite bandwidth are demonstrated at the same wavelength when an electromagnetic wave is incident on a subwavelength array of parity-time (PT) symmetric dimers embedded in a metallic film. Remarkably, this phenomenon vanishes if the metallic substrate is lossless while all other parameters are kept unchanged. Moreover, super scattering can also occur when increasing the loss of the dimers while keeping the gain unchanged. When the metafilm is tuned to the vicinity of an exceptional point, tuning either the substrate dissipation or the loss of the dimers can lead to super scattering, in stark contrast to what would be expected in conventional systems. In addition, increasing the gain of the dimers can increase the absorption near the exceptional point. These phenomena indicate that the PT-synthetic plasmonic metafilm can function as a thin-film PT-plasmonic laser or absorber depending on the tuning parameter. One implication is that super radiation is possible from a cavity by tuning the cavity dissipation or a lossy element inside the cavity.

  11. Bulk hydrodynamic stability and turbulent saturation in compressing hot spots

    NASA Astrophysics Data System (ADS)

    Davidovits, Seth; Fisch, Nathaniel J.

    2018-04-01

    For hot spots compressed at constant velocity, we give a hydrodynamic stability criterion that describes the expected energy behavior of non-radial hydrodynamic motion for different classes of trajectories (in ρR-T space). For a given compression velocity, this criterion depends on ρR, T, and dT/d(ρR) (the trajectory slope) and applies point-wise, so that the expected behavior can be determined instantaneously along the trajectory. Among the classes of trajectories are those where the hydromotion is guaranteed to decrease and those where the hydromotion is bounded by a saturated value. We calculate this saturated value and find the compression velocities for which hydromotion may be a substantial fraction of hot-spot energy at burn time. The Lindl [Phys. Plasmas 2, 3933 (1995)] "attractor" trajectory is shown to experience non-radial hydrodynamic energy that grows towards this saturated state. Comparing the saturation value with available detailed 3D simulation results, we find that the fluctuating velocities in these simulations reach substantial fractions of the saturated value.

  12. High Gain Antenna Gimbal for the 2003-2004 Mars Exploration Rover Program

    NASA Technical Reports Server (NTRS)

    Sokol, Jeff; Krishnan, Satish; Ayari, Laoucet

    2004-01-01

    The High Gain Antenna Assemblies built for the 2003-2004 Mars Exploration Rover (MER) missions provide the primary communication link for the Rovers once they arrive on Mars. The High Gain Antenna Gimbal (HGAG) portion of the assembly is a two-axis gimbal that provides the structural support, pointing, and tracking for the High Gain Antenna (HGA). The MER mission requirements provided some unique design challenges for the HGAG. This paper describes all the major subsystems of the HGAG that were developed to meet these challenges, and the requirements that drove their design.

  13. Compressing DNA sequence databases with coil.

    PubMed

    White, W Timothy J; Hendy, Michael D

    2008-05-20

    Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
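
    The gzip baseline that coil's 5% improvement is judged against is easy to reproduce: compress a flat file of sequence records with standard Lempel-Ziv and measure the ratio. A minimal Python sketch, using a synthetic toy file rather than real GenBank data (real EST files are far larger and less regular, so ratios will differ):

```python
import gzip

# Toy "flat file" of EST-like records (synthetic stand-in for GenBank data).
records = "".join(">EST%d\n" % i + "ACGT" * 50 + "\n" for i in range(100))
raw = records.encode("ascii")
compressed = gzip.compress(raw, compresslevel=9)

ratio = len(raw) / len(compressed)
print("raw: %d bytes, gzip: %d bytes, ratio: %.1fx"
      % (len(raw), len(compressed), ratio))
```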

  14. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  15. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
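
    The fill-and-difference idea can be sketched in a few lines. This hypothetical Python version uses plain Jacobi relaxation with wrap-around boundaries in place of the patent's multi-grid Laplace solver, and a synthetic image and edge mask; the round trip is exact by construction because the difference array is kept:

```python
import numpy as np

def fill_from_edges(image, edge_mask, iters=500):
    """Fill non-edge pixels by relaxing Laplace's equation with the edge
    pixels held fixed (Jacobi iteration, wrap-around edges, standing in
    for the multi-grid solver described in the patent)."""
    filled = np.where(edge_mask, image, image.mean()).astype(float)
    for _ in range(iters):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0)
                      + np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edge_mask, filled, avg)  # keep edge pixels fixed
    return filled

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 255.0, size=(32, 32))
edge_mask = np.zeros((32, 32), dtype=bool)
edge_mask[::4, :] = True              # pretend every 4th row holds edge pixels

filled = fill_from_edges(image, edge_mask)
difference = image - filled           # compressed separately in the patent
reconstructed = filled + difference   # lossless round trip by construction
```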

  16. Modeling a distribution of point defects as misfitting inclusions in stressed solids

    NASA Astrophysics Data System (ADS)

    Cai, W.; Sills, R. B.; Barnett, D. M.; Nix, W. D.

    2014-05-01

    The chemical equilibrium distribution of point defects modeled as non-overlapping, spherical inclusions with purely positive dilatational eigenstrain in an isotropically elastic solid is derived. The compressive self-stress inside existing inclusions must be excluded from the stress dependence of the equilibrium concentration of the point defects, because it does no work when a new inclusion is introduced. On the other hand, a tensile image stress field must be included to satisfy the boundary conditions in a finite solid. Through the image stress, existing inclusions promote the introduction of additional inclusions. This is contrary to the prevailing approach in the literature in which the equilibrium point defect concentration depends on a homogenized stress field that includes the compressive self-stress. The shear stress field generated by the equilibrium distribution of such inclusions is proved to be proportional to the pre-existing stress field in the solid, provided that the magnitude of the latter is small, so that a solid containing an equilibrium concentration of point defects can be described by a set of effective elastic constants in the small-stress limit.

  17. Dynamic Recrystallization Kinetics of 690 Alloy During Hot Compression of Double-Cone Samples

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Zhai, Shun-Chao

    2017-03-01

    Hot compression tests of double-cone samples were conducted for 690 alloy to study the kinetic behavior of the complete dynamic recrystallization (DRX) process under low deformation temperatures from 960 to 1080 °C. The microstructure at 82 points in the vertical section of every deformed sample was quantitatively analyzed to determine the DRX fraction. The corresponding strain at these points was calculated by finite element simulations. Kinetic curves of the specimens with different preheating temperatures were then constructed. The features of various boundaries with different misorientation angles were investigated by electron backscatter diffraction and transmission electron microscopy. The results showed that the strain is continuously and symmetrically distributed along the centerline of the vertical section. A large strain of 1.84 was obtained when the compression amount was 12 mm for the double-cone samples. All the fitted kinetic curves display an "S" shape, with a low growth rate of DRX at the beginning and the end of compression. The critical strain of recrystallization decreases with increasing preheating temperature, while the completion strain remains around 1.5 for all the samples. The initial and maximum growth rates of the DRX fraction show opposite trends with changing temperature, which is attributed to the behaviors of boundaries with different misorientation angles.
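
    The abstract does not give the fitted kinetic model, but S-shaped DRX curves of this kind are commonly described by an Avrami-type equation, X = 1 - exp(-k((ε - εc)/ε*)^n), which is flat near the critical strain and near completion. A generic, illustrative sketch (all parameter values are assumptions, not the paper's fits):

```python
import math

def drx_fraction(strain, eps_c=0.2, eps_star=1.0, k=3.0, n=2.5):
    """Avrami-type DRX fraction; parameters are illustrative, not fitted."""
    if strain <= eps_c:
        return 0.0
    return 1.0 - math.exp(-k * ((strain - eps_c) / eps_star) ** n)

for eps in (0.2, 0.5, 1.0, 1.5):
    print("strain %.1f: X = %.3f" % (eps, drx_fraction(eps)))
```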

  18. Method of making a non-lead hollow point bullet

    DOEpatents

    Vaughn, Norman L.; Lowden, Richard A.

    2003-10-07

    The method of making a non-lead hollow point bullet has the steps of a) compressing an unsintered powdered metal composite core into a jacket, b) punching a hollow cavity tip portion into the core, c) seating an insert, the insert having a hollow point tip and a tail protrusion, on top of the core such that the tail protrusion couples with the hollow cavity tip portion, and d) swaging the open tip of the jacket.

  19. Compact compressive arc and beam switchyard for energy recovery linac-driven ultraviolet free electron lasers

    NASA Astrophysics Data System (ADS)

    Akkermans, J. A. G.; Di Mitri, S.; Douglas, D.; Setija, I. D.

    2017-08-01

    High gain free electron lasers (FELs) driven by high repetition rate recirculating accelerators have received considerable attention in the scientific and industrial communities in recent years. Cost-performance optimization of such facilities encourages limiting machine size and complexity, and a compact machine can be realized by combining bending and bunch length compression during the last stage of recirculation, just before lasing. The impact of coherent synchrotron radiation (CSR) on electron beam quality during compression can, however, limit FEL output power. When methods to counteract CSR are implemented, appropriate beam diagnostics become critical to ensure that the target beam parameters are met before lasing, as well as to guarantee reliable, predictable performance and rapid machine setup and recovery. This article describes a beam line for bunch compression and recirculation, and beam switchyard accessing a diagnostic line for EUV lasing at 1 GeV beam energy. The footprint is modest, with 12 m compressive arc diameter and ˜20 m diagnostic line length. The design limits beam quality degradation due to CSR both in the compressor and in the switchyard. Advantages and drawbacks of two switchyard lines providing, respectively, off-line and on-line measurements are discussed. The entire design is scalable to different beam energies and charges.

  20. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794

  1. Influence of several factors on ignition lag in a compression-ignition engine

    NASA Technical Reports Server (NTRS)

    Gerrish, Harold C; Voss, Fred

    1932-01-01

    This investigation was made to determine the influence of fuel quantity, injection advance angle, injection valve-opening pressure, inlet-air pressure, compression ratio, and engine speed on the time lag of auto-ignition of a Diesel fuel oil in a single-cylinder compression-ignition engine, as obtained from an analysis of indicator diagrams. Three cam-operated fuel-injection pumps, two pump cams, and an automatic injection valve with two different nozzles were used. Ignition lag was considered to be the interval between the start of injection of the fuel, as determined with a Stroborama, and the start of effective combustion, as determined from the indicator diagram, the latter being the point where 4.0 x 10(exp-6) pound of fuel had been effectively burned. For this particular engine and fuel it was found that: (1) for a constant start and the same rate of fuel injection up to the point of cut-off, a variation in fuel quantity from 1.2 x 10(exp-4) to 4.1 x 10(exp-4) pound per cycle has no appreciable effect on the ignition lag; (2) injection advance angle increases or decreases the lag according to whether density, temperature, or turbulence has the controlling influence; (3) an increase in valve-opening pressure slightly increases the lag; and (4) an increase of inlet-air pressure, compression ratio, and engine speed reduces the lag.

  2. Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.

    PubMed

    Singh, Anurag; Dandapat, Samarendra

    2017-04-01

    In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit either of the correlations, which results in a suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.
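
    The underlying CS recovery problem can be illustrated with a generic sketch: measure a sparse signal through a random Gaussian matrix with far fewer measurements than samples, then recover it greedily. This uses plain orthogonal matching pursuit on synthetic data, not the block-sparse Bayesian method the paper proposes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 64, 4                 # signal length, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = Phi @ x                                  # compressed measurements (m << n)

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then least-squares refit on the chosen support.
residual, chosen = y.copy(), []
for _ in range(k):
    chosen.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, chosen], y, rcond=None)
    residual = y - Phi[:, chosen] @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print("relative reconstruction error:", rel_err)
```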

  3. Fracture mechanisms of glass particles under dynamic compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parab, Niranjan D.; Guo, Zherui; Hudspeth, Matthew C.

    2017-08-01

    In this study, dynamic fracture mechanisms of single and contacting spherical glass particles were observed using high speed synchrotron X-ray phase contrast imaging. A modified Kolsky bar setup was used to apply controlled dynamic compressive loading on the soda-lime glass particles. Four different configurations of particle arrangements with one, two, three, and five particles were studied. In single particle experiments, cracking initiated near the contact area between the particle and the platen, subsequently fragmenting the particle into many small sub-particles. In multi-particle experiments, a crack was observed to initiate from a point just outside the contact area between two particles. The initiated crack propagated at an angle to the horizontal loading direction, resulting in separation of a fragment. However, this fragment separation did not affect the ability of the particle to withstand further contact loading. On further compression, a large number of cracks initiated in the particle with the highest number of particle-particle contacts, near one of those contacts. The initiated cracks roughly followed the lines joining the contact points. Subsequently, the initiated cracks, along with newly developed sub-cracks, bifurcated rapidly as they propagated through the particle and fractured it explosively into many small fragments, leaving the other particles nearly intact.

  4. Galileo mission planning for Low Gain Antenna based operations

    NASA Technical Reports Server (NTRS)

    Gershman, R.; Buxbaum, K. L.; Ludwinski, J. M.; Paczkowski, B. G.

    1994-01-01

    The Galileo mission operations concept is undergoing substantial redesign, necessitated by the deployment failure of the High Gain Antenna, while the spacecraft is on its way to Jupiter. The new design applies state-of-the-art technology and processes to increase the telemetry rate available through the Low Gain Antenna and to increase the information density of the telemetry. This paper describes the mission planning process being developed as part of this redesign. Principal topics include a brief description of the new mission concept and anticipated science return (these have been covered more extensively in earlier papers), identification of key drivers on the mission planning process, a description of the process and its implementation schedule, a discussion of the application of automated mission planning tools to the process, and a status report on mission planning work to date. Galileo enhancements include extensive reprogramming of on-board computers and substantial hardware and software upgrades for the Deep Space Network (DSN). The principal mode of operation will be onboard recording of science data followed by extended playback periods. A variety of techniques will be used to compress and edit the data both before recording and during playback. A highly-compressed real-time science data stream will also be important. The telemetry rate will be increased using advanced coding techniques and advanced receivers. Galileo mission planning for orbital operations now involves partitioning of several scarce resources. Particularly difficult are division of the telemetry among the many users (eleven instruments, radio science, engineering monitoring, and navigation) and allocation of space on the tape recorder at each of the ten satellite encounters. The planning process is complicated by uncertainty in forecast performance of the DSN modifications and the non-deterministic nature of the new data compression schemes. Key mission planning steps include

  5. Galileo mission planning for Low Gain Antenna based operations

    NASA Astrophysics Data System (ADS)

    Gershman, R.; Buxbaum, K. L.; Ludwinski, J. M.; Paczkowski, B. G.

    1994-11-01

    The Galileo mission operations concept is undergoing substantial redesign, necessitated by the deployment failure of the High Gain Antenna, while the spacecraft is on its way to Jupiter. The new design applies state-of-the-art technology and processes to increase the telemetry rate available through the Low Gain Antenna and to increase the information density of the telemetry. This paper describes the mission planning process being developed as part of this redesign. Principal topics include a brief description of the new mission concept and anticipated science return (these have been covered more extensively in earlier papers), identification of key drivers on the mission planning process, a description of the process and its implementation schedule, a discussion of the application of automated mission planning tools to the process, and a status report on mission planning work to date. Galileo enhancements include extensive reprogramming of on-board computers and substantial hardware and software upgrades for the Deep Space Network (DSN). The principal mode of operation will be onboard recording of science data followed by extended playback periods. A variety of techniques will be used to compress and edit the data both before recording and during playback. A highly-compressed real-time science data stream will also be important. The telemetry rate will be increased using advanced coding techniques and advanced receivers. Galileo mission planning for orbital operations now involves partitioning of several scarce resources. Particularly difficult are division of the telemetry among the many users (eleven instruments, radio science, engineering monitoring, and navigation) and allocation of space on the tape recorder at each of the ten satellite encounters. The planning process is complicated by uncertainty in forecast performance of the DSN modifications and the non-deterministic nature of the new data compression schemes. Key mission planning steps include

  6. Density and Adiabatic Compressibility of the Immiscible Molten AgBr+LiCl Mixture

    NASA Astrophysics Data System (ADS)

    Stepanov, Victor P.; Kulik, Nina P.

    2017-04-01

    The adiabatic compressibility, β, of the immiscible liquid mixture 0.52 LiCl + 0.48 AgBr (the top of the miscibility gap) was experimentally investigated in the temperature range from the melting point to the critical mixing temperature using the sound velocity values, u, measured by the pulse method, and the density values, ρ, determined using the hydrostatic weighing procedure, based on the relationship β = u^(-2)ρ^(-1). It is shown that the coefficients of the temperature dependencies for the compressibility and density of the upper and lower equilibrium phases have opposite signs because of the superposition of the intensity of the thermal motion of the ions and the change in the composition of the phases. The differences, ∆β and ∆ρ, in the magnitudes of the compressibility and density for the equilibrium phases decrease with temperature elevation. The temperature dependencies of the compressibility and density differences are described by the empirical equations ∆β ≈ (Tc - T)^0.438 and ∆ρ ≈ (Tc - T)^0.439.
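
    The compressibility relationship β = 1/(ρu²) is straightforward to apply once u and ρ are measured. A quick sketch with illustrative, order-of-magnitude numbers for a molten salt (not the paper's measured values):

```python
# Adiabatic compressibility from sound velocity u and density rho:
# beta = 1 / (rho * u^2). Illustrative numbers, not the paper's data.
u = 1200.0    # sound velocity, m/s
rho = 4200.0  # density, kg/m^3
beta = 1.0 / (rho * u ** 2)  # Pa^-1
print("beta = %.3e Pa^-1" % beta)
```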

  7. Nearly fully compressed 1053 nm pulses directly obtained from 800 nm laser-seeded photonic crystal fiber below zero dispersion point

    NASA Astrophysics Data System (ADS)

    Refaeli, Zaharit; Shamir, Yariv; Ofir, Atara; Marcus, Gilad

    2018-02-01

    We report a simple, robust, and broadly spectrally adjustable source generating nearly fully compressed 1053 nm, 62 fs pulses directly out of a highly nonlinear photonic crystal fiber. A dispersion-nonlinearity balance of 800 nm Ti:Sa 20 fs pulses was obtained by first negatively pre-chirping the pulses and then launching them into the fiber's normal dispersion regime. Following self-phase-modulation spectral broadening, some energy that leaked below the zero dispersion point formed a soliton whose central wavelength could be tuned by the soliton self-frequency (Raman) shift. Contrary to the common approach of controlling the shift through power or fiber length, here we continuously varied the state of polarization, exploiting the sensitivity of the Raman and Kerr nonlinearities to the polarization state. We obtained soliton pulses with the central wavelength tuned over 150 nm, spanning from well below 1000 nm to over 1150 nm, from which we could select stable pulses in the 1 μm vicinity. With a linewidth of >20 nm FWHM, a Gaussian-like temporal shape, 62 fs duration, and a nearly flat phase, the source delivers high-quality pulses. We believe such a scheme can be used for high-energy or high-power glass laser systems, such as Nd- or Yb-ion-doped amplifiers and systems.

  8. A study of compressibility and compactibility of directly compressible tableting materials containing tramadol hydrochloride.

    PubMed

    Mužíková, Jitka; Kubíčková, Alena

    2016-09-01

    The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50 % concentrations, the lubricant being magnesium stearate in a 1 % concentration. Compressibility is evaluated by means of the energy profile of the compression process, and compactibility by the tensile strength of tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders.

  9. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as low as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing a multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impacts on the design of the instrument.
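
    Subpixel resampling of the kind described can be sketched with bilinear interpolation. A toy Python version (wrap-around boundaries and a hypothetical shift convention; a real onboard implementation would be fixed-point and driven by the registration grid):

```python
import numpy as np

def bilinear_shift(band, dx, dy):
    """Resample a band by a subpixel shift (dx, dy) using bilinear weights.
    Toy stand-in for simplified onboard resampling; wrap-around edges."""
    ix, fx = int(np.floor(dx)), float(dx - np.floor(dx))
    iy, fy = int(np.floor(dy)), float(dy - np.floor(dy))
    b = np.roll(np.roll(band, ix, axis=1), iy, axis=0)  # integer part of shift
    bx = np.roll(b, 1, axis=1)                          # neighbour 1 px right
    by = np.roll(b, 1, axis=0)                          # neighbour 1 px down
    bxy = np.roll(bx, 1, axis=0)
    return ((1 - fx) * (1 - fy) * b + fx * (1 - fy) * bx
            + (1 - fx) * fy * by + fx * fy * bxy)

band = np.arange(16.0).reshape(4, 4)
half = bilinear_shift(band, 0.5, 0.0)  # halfway between band and 1-px shift
```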

  10. Compression fractures of the back

    MedlinePlus

    Treatment can include surgery such as balloon kyphoplasty, vertebroplasty, or spinal fusion; other surgery may be done to remove bone. (Alternative names: vertebral compression fractures; osteoporosis - compression fracture.)

  11. Optical properties of highly compressed polystyrene: An ab initio study

    NASA Astrophysics Data System (ADS)

    Hu, S. X.; Collins, L. A.; Colgan, J. P.; Goncharov, V. N.; Kilcrease, D. P.

    2017-10-01

    Using all-electron density functional theory, we have performed an ab initio study on x-ray absorption spectra of highly compressed polystyrene (CH). We found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insights into the K-edge shift in warm, dense CH, we have developed a model designated as "single mixture in a box" (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH in good agreement with results from quantum molecular dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1-100 g/cm³ and T = 2000-1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity-patched astrophysics opacity table for warm, dense CH and favorably compares to the newly improved Los Alamos atomic model for moderately compressed CH (ρCH ≤ 10 g/cm³), but remains a factor of 2 to 3 higher at extremely high densities (ρCH ≥ 50 g/cm³). We anticipate the established FPOT of CH will find important applications to reliable designs of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guides for improving the traditional opacity models to properly handle the strongly coupled and degenerate conditions.

  12. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
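
    The PSNR figure of merit used throughout the paper is defined from the mean squared error between reference and distorted images, PSNR = 10·log10(peak²/MSE). A minimal sketch (synthetic image, with Gaussian noise standing in for coding distortion):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(reference, float)
                   - np.asarray(distorted, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = ref + rng.normal(0.0, 5.0, size=ref.shape)  # stand-in for distortion
print("PSNR = %.1f dB" % psnr(ref, noisy))
```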

  13. Optical properties of highly compressed polystyrene: An ab initio study

    DOE PAGES

    Hu, S. X.; Collins, L. A.; Colgan, J. P.; ...

    2017-10-16

    Using all-electron density functional theory, we have performed an ab initio study of the x-ray absorption spectra of highly compressed polystyrene (CH). We found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insight into the K-edge shift in warm, dense CH, we have developed a model designated “single-mixture-in-a-box” (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH in good agreement with results from quantum-molecular-dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1 to 100 g/cm³ and T = 2,000 to 1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity-patched astrophysics opacity table for warm, dense CH, and compares favorably to the newly improved Los Alamos ATOMIC model for moderately compressed CH (ρCH ≤ 10 g/cm³) but remains a factor of 2 to 3 higher at extremely high densities (ρCH ≥ 50 g/cm³). We anticipate the established FPOT of CH will find important applications in reliable designs of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guidance for improving traditional opacity models to properly handle strongly coupled and degenerate conditions.

  15. Three-dimensional density and compressible magnetic structure in solar wind turbulence

    NASA Astrophysics Data System (ADS)

    Roberts, Owen W.; Narita, Yasuhito; Escoubet, C.-Philippe

    2018-03-01

    The three-dimensional structure of both compressible and incompressible components of turbulence is investigated at proton characteristic scales in the solar wind. Measurements of the three-dimensional structure are typically difficult, since the majority of measurements are performed by a single spacecraft. However, the Cluster mission, consisting of four spacecraft in a tetrahedral formation, allows for a fully three-dimensional investigation of turbulence. Incompressible turbulence is investigated by using the three vector components of the magnetic field. Meanwhile, compressible turbulence is investigated by considering the magnitude of the magnetic field as a proxy for the compressible fluctuations and electron density data deduced from spacecraft potential. Application of the multi-point signal resonator technique to intervals of fast and slow wind shows that both compressible and incompressible turbulence are anisotropic with respect to the mean magnetic field direction (P⟂ ≫ P∥) and are sensitive to the value of the plasma beta (β; ratio of thermal to magnetic pressure) and the wind type. Moreover, the incompressible fluctuations of the fast and slow solar wind are revealed to be different, with enhancements along the background magnetic field direction present in the fast wind intervals. The differences between the fast and slow wind and the implications for the presence of different wave modes in the plasma are discussed.

  16. Comparison of the compressive strength of 3 different implant design systems.

    PubMed

    Pedroza, Jose E; Torrealba, Ysidora; Elias, Augusto; Psoter, Walter

    2007-01-01

    The aims of this study were twofold: to compare the static compressive strength at the implant-abutment interface of 3 design systems and to describe the implant-abutment connection failure mode. A stainless steel holding device was designed to align the implants at 30 degrees with respect to the y-axis. Sixty-nine specimens were used, 23 for each system. A computer-controlled universal testing machine (MTS 810) applied static compression loading by a unidirectional vertical piston until failure. Specimens were evaluated macroscopically for longitudinal displacement, abutment looseness, and screw and implant fracture. Data were analyzed by analysis of variance (ANOVA). The mean compressive strength for the Unipost system was 392.5 psi (SD ±40.9), for the Spline system 342.8 psi (SD ±25.8), and for the Screw-Vent system 269.1 psi (SD ±30.7). The Unipost implant-abutment connection demonstrated statistically significantly superior mechanical stability (P ≤ .009) compared with the Spline implant system. The Spline implant system showed a statistically significantly higher compressive strength than the Screw-Vent implant system (P ≤ .009). Regarding failure mode, the Unipost system consistently broke at the same site, while the other systems failed at different points of the connection. The Unipost system demonstrated excellent fracture resistance to compressive forces; this resistance may be attributed primarily to the diameter of the abutment screw and the 2.5 mm counterbore, which form a single, unique piece with the implant. The Unipost implant system demonstrated a statistically significantly superior compressive strength value compared with the Spline and Screw-Vent systems at a 30-degree angulation.

  17. Commutated automatic gain control system

    NASA Technical Reports Server (NTRS)

    Yost, S. R.

    1982-01-01

    A commutated automatic gain control (AGC) system was designed and built for a prototype Loran C receiver. The receiver uses a microcomputer to control a memory aided phase-locked loop (MAPLL). The microcomputer also controls the input/output, latitude/longitude conversion, and the recently added AGC system. The circuit designed for the AGC is described, and bench and flight test results are presented. The AGC circuit samples beginning 40 microseconds after a zero crossing, at a point determined by the software lock pulse, which is ultimately generated by a 30-microsecond delay-and-add network in the receiver's front-end envelope detector.

  18. Analysis and design of gain scheduled control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Shamma, Jeff S.

    1988-01-01

    The idea of gain scheduling is to construct a global feedback control system for a time-varying and/or nonlinear plant from a collection of local time-invariant designs. However, in the absence of a sound analysis, these designs come with no guarantees on the robustness, performance, or even nominal stability of the overall gain-scheduled design. Such an analysis is presented for three types of gain scheduling situations: (1) a linear parameter-varying plant scheduling on its exogenous parameters, (2) a nonlinear plant scheduling on a prescribed reference trajectory, and (3) a nonlinear plant scheduling on the current plant output. Conditions are given which guarantee that the stability, robustness, and performance properties of the fixed operating point designs carry over to the global gain-scheduled designs, such as the requirements that the scheduling variable vary slowly and capture the plant's nonlinearities. Finally, an alternate design framework is proposed which removes the slowly varying restriction on gain-scheduled systems. This framework addresses some fundamental feedback issues previously ignored in standard gain scheduling.

  19. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  20. Sugar Determination in Foods with a Radially Compressed High Performance Liquid Chromatography Column.

    ERIC Educational Resources Information Center

    Ondrus, Martin G.; And Others

    1983-01-01

    Advocates use of Waters Associates Radial Compression Separation System for high performance liquid chromatography. Discusses instrumentation and reagents, outlining procedure for analyzing various foods and discussing typical student data. Points out potential problems due to impurities and pump seal life. Suggests use of ribose as internal…

  1. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and at what compression ratio the quality of a compressed image is rated poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists read the 99 images (some were duplicates) at four levels of compression: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm gives significantly better quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  2. Numerical simulation of the compressible Orszag-Tang vortex. II. Supersonic flow. Interim report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picone, J.M.; Dahlburg, R.B.

    The numerical investigation of the Orszag-Tang vortex system in compressible magnetofluids will consider initial conditions with embedded supersonic regions. The simulations have initial average Mach numbers M = 1.0 and 1.5 and beta = 10/3 with Lundquist numbers S = 50, 100, or 200. The behavior of the system differs significantly from that found previously for the incompressible and subsonic analogs. Shocks form at the downstream boundaries of the embedded supersonic regions outside the central magnetic X-point and produce strong local current sheets which dissipate appreciable magnetic energy. Reconnection at the central X-point, which dominates the incompressible and subsonic systems, peaks later and has a smaller impact as M increases from 0.6 to 1.5. Similarly, correlation between the momentum and magnetic field begins significant growth later than in subsonic and incompressible flows. The shocks bound large compression regions, which dominate the wavenumber spectra of autocorrelations in mass density, velocity, and magnetic field.

  3. Progress with lossy compression of data from the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.

    2017-12-01

    Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.

  4. Sensitivity Analysis in RIPless Compressed Sensing

    DTIC Science & Technology

    2014-10-01

    The compressive sensing framework finds a wide range of applications in signal processing and analysis. Within this ... compressed sensing. More specifically, we show that in a noiseless and RIP-less setting [11], the recovery process of a compressed sensing framework is ...

  5. The development and use of SPIO Lycra compression bracing in children with neuromotor deficits.

    PubMed

    Hylton, N; Allen, C

    1997-01-01

    The use of flexible compression bracing in persons with neuromotor deficits offers improved possibilities for stability and movement control without severely limiting joint movement options. At the Children's Therapy Center in Kent, Washington, this treatment modality has been explored with increasing application in children with moderate to severe cerebral palsy and other neuromotor deficits over the past 6 years, with good success. Significant functional improvements using Neoprene shoulder/trunk/hip bracing led us to experiment with much lighter compression materials. The stabilizing pressure input orthosis, or SPIO, bracing system (developed by Cheryl Allen, parent and chief designer, and Nancy Hylton, PT) is custom-fitted to the stability, movement control and sensory deficit needs of a specific individual. SPIO bracing developed for a specific child has often become part of a rapidly increasing group of flexible bracing options which appear to provide an improved base of support for functional gains in balance, dynamic stability, and general and specific movement control with improved postural and muscle readiness. Both deep sensory and subtle biomechanical factors may account for the functional changes observed. This article discusses the development and current use of flexible compression SPIO bracing in this area.

  6. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time. Therefore, performance is becoming constrained by material resource usage and the overall computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.

  7. Massive data compression for parameter-dependent covariance matrices

    NASA Astrophysics Data System (ADS)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³, making an otherwise intractable analysis feasible.
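
    The core of MOPED is a set of linear weight vectors, one per model parameter, that reduce an n-dimensional data vector to p compressed numbers. A rough sketch of that idea under an assumed known covariance C and mean derivatives dμ/dθ (illustrative only; the published algorithm also Gram-Schmidt-orthogonalises successive weight vectors, which is omitted here):

```python
import numpy as np

def moped_weights(cov, dmu_dtheta):
    """MOPED-style weights: one column b_a ∝ C^{-1} dmu/dtheta_a per parameter.

    cov: (n, n) data covariance; dmu_dtheta: (n, p) mean derivatives.
    Note: the real algorithm orthogonalises the columns; this sketch does not.
    """
    b = np.linalg.inv(cov) @ dmu_dtheta
    return b / np.linalg.norm(b, axis=0)   # normalise each weight vector

def compress(data, weights):
    """Reduce an n-dim data vector to one summary number per parameter."""
    return weights.T @ data
```

    With p parameters, each simulated data set is replaced by p numbers, which is what makes the covariance of the compressed summaries so much cheaper to estimate.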

  8. Colour image compression by grey to colour conversion

    NASA Astrophysics Data System (ADS)

    Drew, Mark S.; Finlayson, Graham D.; Jindal, Abhilash

    2011-03-01

    Instead of de-correlating image luminance from chrominance, some use has been made of the correlation between the luminance component of an image and its chromatic components, or the correlation between colour components, for colour image compression. In one approach, the green colour channel was taken as a base, and the other colour channels or their DCT subbands were approximated as polynomial functions of the base inside image windows. This paper points out that we can do better if we introduce an addressing scheme into the image description such that similar colours are grouped together spatially. With a luminance component base, we test several colour spaces and rearrangement schemes, including segmentation, and settle on a log-geometric-mean colour space. Along with PSNR versus bits per pixel, we found that spatially keyed s-CIELAB colour error better identifies problem regions. Instead of segmentation, we found that rearranging on sorted chromatic components has almost equal performance and better compression. Here, we sort on each of the chromatic components and separately encode windows of each. The result consists of the original greyscale plane plus the polynomial coefficients of windows of rearranged chromatic values, which are then quantized. The simplicity of the method produces a fast and simple scheme for colour image and video compression, with excellent results.
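
    The windowed polynomial idea, approximating each chromatic component as a polynomial function of the base channel inside a window and storing only the coefficients, can be sketched as follows (a generic illustration, not the authors' exact pipeline; the window shape and polynomial degree here are assumptions):

```python
import numpy as np

def fit_chroma_window(luma, chroma, degree=2):
    """Fit chroma ≈ polynomial(luma) over one window; only the coefficients are stored."""
    return np.polyfit(luma.ravel(), chroma.ravel(), degree)

def reconstruct_chroma_window(luma, coeffs):
    """Rebuild the chromatic window from the base channel and the stored coefficients."""
    return np.polyval(coeffs, luma)
```

    A degree-2 fit replaces an entire window of chromatic samples with three numbers, which is where the compression comes from; the residual error depends on how well chroma tracks the base channel inside that window.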

  9. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    PubMed Central

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction The aim of the research was to compare the dynamics of venous ulcer healing when treated with compression stockings as well as original two- and four-layer bandage systems. Material and methods A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups: treatment with the ProGuide two-layer system, with Profore four-layer compression, and with class II compression stockings. In the case of multi-layer compression, compression ensuring 40 mmHg of pressure at ankle level was used. Results In all patients, independently of the type of compression therapy, statistically significant changes of ulceration area over time were observed (Student’s t test for matched pairs, p < 0.05). The largest loss of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions Systematic compression therapy, applied with an initial pressure of 40 mmHg at the ankle, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared multi-layer compression systems were characterized by similar clinical effectiveness. PMID:22419941

  10. Effect of Time on Perceived Gains from an Undergraduate Research Program

    PubMed Central

    Adedokun, Omolola A.; Parker, Loran C.; Childress, Amy; Burgess, Wilella; Adams, Robin; Agnew, Christopher R.; Leary, James; Knapp, Deborah; Shields, Cleveland; Lelievre, Sophie; Teegarden, Dorothy

    2014-01-01

    The current study examines the trajectories of student perceived gains as a result of time spent in an undergraduate research experience (URE). Data for the study come from a survey administered at three points over a 1-yr period: before participation in the program, at the end of a Summer segment of research, and at the end of the year. Repeated-measures analysis of variance was used to examine the effect of time on perceived gains in student research skills, research confidence, and understanding of research processes. The results suggest that the students experienced different gains/benefits at developmentally different stages of their UREs. Participants reported gains in fewer areas at the end of the Summer segment compared with the end of the yearlong experience, thus supporting the notion that longer UREs offer students more benefit. PMID:24591512

  11. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt; the pixels of the two adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
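
    The construction described, a measurement matrix built from circulant shifts of a row generated by a logistic map, can be sketched as follows. This is a generic illustration, not the authors' exact parameterization: the map parameter r = 3.99 and the rescaling of the sequence to [-1, 1] are assumptions, and the logistic-map seed plays the role of the compact key.

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k); the seed x0 is the secret key."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def keyed_measurement_matrix(x0, m, n):
    """m x n partial circulant measurement matrix generated from the key x0."""
    row = 2.0 * logistic_sequence(x0, n) - 1.0    # rescale map output to [-1, 1]
    full = np.empty((n, n))
    for i in range(n):
        full[i] = np.roll(row, i)                 # circulant shifts of the keyed row
    return full[:m] / np.sqrt(m)                  # keep m rows, normalise
```

    Sender and receiver regenerate the same matrix from the short seed, so only the seed needs to be distributed, which is exactly the storage advantage the abstract claims over shipping the whole matrix.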

  12. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error-rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves
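
    The delta-encoding trade-off discussed here, where Van Jacobson-style schemes transmit only differences from the previous header so that a single loss corrupts all subsequent headers until resynchronization, can be illustrated with a toy sketch (headers as plain dictionaries; this is not any RFC's actual wire format):

```python
def delta_encode(headers):
    """Send the first header whole, then only field-by-field differences."""
    encoded = [headers[0]]
    for prev, cur in zip(headers, headers[1:]):
        encoded.append({k: cur[k] - prev[k] for k in cur})
    return encoded

def delta_decode(encoded):
    """Rebuild each header from the previous one plus its delta."""
    decoded = [encoded[0]]
    for delta in encoded[1:]:
        decoded.append({k: decoded[-1][k] + delta[k] for k in delta})
    return decoded
```

    Because each decoded header depends on the previous one, losing a single delta corrupts every later header until a full header is resent; avoiding this dependency is exactly why SCPS skips delta encoding on lossy links, at the cost of larger headers.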

  13. Experimental Study on Properties of Methane Diffusion of Coal Block under Triaxial Compressive Stress

    PubMed Central

    Zhao, Hong-Bao

    2014-01-01

    Taking standard-size coal block samples as defined by the ISRM as research objects, the methane diffusion properties of coal blocks under triaxial compressive stress, and the influence of methane pressure on those properties, were systematically studied using thermo-fluid-solid coupled, triaxial servo-controlled seepage equipment for methane-containing coal. The results show that methane diffusion of a coal block under triaxial compressive stress proceeds in four stages: a sharp-decrease stage, a hyperbolic-decrease stage, a stage approaching a fixed value, and a final stage at zero. There is a special point at which the rate of decrease of the methane diffusion speed curve becomes sharply small. Methane pressure has little influence on the shape of the diffusion-speed characteristic curve, affecting only the numerical magnitude of the diffusion speed; however, the test time had to be extended for this special point to appear. A fitted four-stage relation for methane diffusion of coal blocks under triaxial compressive stress was obtained, and it is proposed that methane pressure influences only the values of the fitting parameters. PMID:25531000

  14. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper discusses the properties of watermarking medical images and the possibility of such images being compressed by JPEG, and gives an overview of JPEG compression. We then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the hash (SHA-256) of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
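
    The embedding step described, placing the SHA-256 digest of the image into the least significant bits of a region of non-interest, can be sketched roughly as follows. This is illustrative only: the paper's block selection and its reversibility and JPEG-robustness mechanisms are not reproduced, and the RONI is assumed to be passed in as a NumPy slice covering at least 256 pixels.

```python
import hashlib
import numpy as np

def embed_hash_lsb(image, roni_slice):
    """Embed SHA-256 of the image (with RONI LSBs zeroed) into the RONI's LSBs."""
    img = image.copy()
    img[roni_slice] &= 0xFE                        # clear LSBs so the hash covers known bits
    digest = hashlib.sha256(img.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))  # 256 watermark bits
    region = img[roni_slice].ravel()
    region[:256] |= bits                           # write the hash into the LSBs
    img[roni_slice] = region.reshape(img[roni_slice].shape)
    return img

def verify_hash_lsb(image, roni_slice):
    """Extract the LSB watermark and compare it with a recomputed hash."""
    img = image.copy()
    extracted = img[roni_slice].ravel()[:256] & 1
    img[roni_slice] &= 0xFE                        # restore the hashed state
    digest = hashlib.sha256(img.tobytes()).digest()
    expected = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bool(np.array_equal(extracted, expected))
```

    Any change to a pixel outside the watermark bits alters the recomputed hash, so verification fails on a tampered image; surviving lossy JPEG, as the paper reports, requires additional machinery not shown here.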

  15. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
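
    The "non-linear reconstruction" mentioned above is typically an l1-regularised solve that promotes sparsity. A minimal iterative soft-thresholding (ISTA) sketch of that idea on a generic underdetermined system, not a clinical MRI reconstruction pipeline:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=3000):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x
```

    With an incoherent measurement matrix and a sufficiently sparse signal, this recovers the signal from far fewer measurements than unknowns, the same principle that undersampled k-space reconstruction exploits.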

  16. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
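    The TLLC approach described above can be sketched in a few lines, using zlib as a stand-in for the paper's unspecified "appropriate lossless compression technique" (function names are illustrative):

```python
import zlib
import numpy as np

def tllc_compress(data, dropped_bits):
    """Truncation followed by lossless compression (TLLC): drop the
    `dropped_bits` least significant bits of each pixel, then apply a
    generic lossless coder."""
    truncated = (data >> dropped_bits).astype(data.dtype)
    return zlib.compress(truncated.tobytes(), level=9)

def tllc_decompress(blob, shape, dtype, dropped_bits):
    """Invert the lossless stage; the dropped LSBs are irrecoverably zero."""
    flat = np.frombuffer(zlib.decompress(blob), dtype=dtype)
    return (flat.reshape(shape) << dropped_bits).astype(dtype)
```

    Dropping k bits bounds the per-pixel error by 2^k - 1 while typically shrinking the losslessly coded stream, which is exactly the trade-off the paper compares against direct lossy coding.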

  17. Cellular characterization of compression induced-damage in live biological samples

    NASA Astrophysics Data System (ADS)

    Bo, Chiara; Balzer, Jens; Hahnel, Mark; Rankin, Sara M.; Brown, Katherine A.; Proud, William G.

    2011-06-01

    Understanding the dysfunctions that high-intensity compression waves induce in human tissues is critical to improving acute-phase treatments and requires the development of experimental models of traumatic damage in biological samples. In this study we have developed an experimental system to directly assess the impact of dynamic loading conditions on cellular function at the molecular level. Here we present a confinement chamber designed to subject live cell cultures in a liquid environment to compression waves in the range of tens of MPa using a split Hopkinson pressure bar system. Recording the loading history and collecting the samples post-impact without external contamination allow the definition of parameters such as pressure and duration of the stimulus that can be related to the cellular damage. The compression experiments are conducted on mesenchymal stem cells from BALB/c mice, and the damage analyses are compared to those of two control groups. Changes in stem cell viability, phenotype, and function are assessed by flow cytometry and in vitro bioassays at two different time points. Identifying the cellular and molecular mechanisms underlying the damage caused by dynamic loading in live biological samples could enable the development of new treatments for traumatic injuries.

  18. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and Teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While the former involves no loss of information, the latter presents concerns because information is lost. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive about the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the

  19. Evolution of the Orszag--Tang vortex system in a compressible medium. II. Supersonic flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picone, J.M.; Dahlburg, R.B.

    The numerical investigation of the Orszag--Tang vortex system in compressible magnetofluids continues, this time using initial conditions with embedded supersonic regions. The simulations have initial average Mach numbers M = 1.0 and 1.5 and β = 10/3, with Lundquist numbers S = 50, 100, or 200. Depending on the particular set of parameters, the numerical grid contains 256² or 512² collocation points. The behavior of the system differs significantly from that found previously for the incompressible and subsonic analogs. Shocks form at the downstream boundaries of the embedded supersonic regions outside the central magnetic X point and produce strong local current sheets that dissipate appreciable magnetic energy. Reconnection at the central X point, which dominates the incompressible and subsonic systems, peaks later and has a smaller impact as M increases from 0.6 to 1.5. Reconnection becomes significant only after shocks reach the central region, compressing the weak current sheet there. Similarly, the correlation between the momentum and magnetic field begins significant growth later than in subsonic and incompressible flows. The shocks bound large compression regions, which dominate the wave-number spectra of autocorrelations in mass density, velocity, and magnetic field. The normalized spectral amplitude of the cross helicity is almost zero over the middle and upper portions of the wave-number domain, unlike the incompressible and subsonic flows. The thermal and magnetic pressures are anticorrelated over a wide wave-number range during the earlier portion of the calculations, consistent with the presence of quasistationary structures bounded by shocks.

  20. Report from the 2013 meeting of the International Compression Club on advances and challenges of compression therapy.

    PubMed

    Delos Reyes, Arthur P; Partsch, Hugo; Mosti, Giovanni; Obi, Andrea; Lurie, Fedor

    2014-10-01

    The International Compression Club, a collaboration of medical experts and industry representatives, was founded in 2005 to develop consensus reports and recommendations regarding the use of compression therapy in the treatment of acute and chronic vascular disease. During the recent meeting of the International Compression Club, member presentations were focused on the clinical application of intermittent pneumatic compression in different disease scenarios as well as on the use of inelastic and short stretch compression therapy. In addition, several new compression devices and systems were introduced by industry representatives. This article summarizes the presentations and subsequent discussions and provides a description of the new compression therapies presented. Copyright © 2014 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  1. A Bunch Compression Method for Free Electron Lasers that Avoids Parasitic Compressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, Stephen V.; Douglas, David R.; Tennant, Christopher D.

    2015-09-01

    Virtually all existing high energy (>few MeV) linac-driven FELs compress the electron bunch length through the use of off-crest acceleration on the rising side of the RF waveform followed by transport through a magnetic chicane. This approach has at least three flaws: 1) it is difficult to correct aberrations--particularly RF curvature, 2) rising-side acceleration exacerbates space charge-induced distortion of the longitudinal phase space, and 3) all achromatic "negative compaction" compressors create parasitic compression during the final compression process, increasing the CSR-induced emittance growth. One can avoid these deficiencies by using acceleration on the falling side of the RF waveform and a compressor with M56 > 0. This approach offers multiple advantages: 1) it is readily achieved in beam lines supporting simple schemes for aberration compensation, 2) longitudinal space charge (LSC)-induced phase space distortion tends, on the falling side of the RF waveform, to enhance the chirp, and 3) compressors with M56 > 0 can be configured to avoid spurious over-compression. We will discuss this bunch compression scheme in detail and give results of a successful beam test in April 2012 using the JLab UV Demo FEL.

  2. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.

    PubMed

    Ni, Yang

    2018-02-14

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.
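    The dynamic-range arithmetic behind the 3000-electron claim can be illustrated with a toy linear-then-logarithmic response (an illustrative model only, not NIT's actual pixel transfer curve):

```python
import math

def linlog_response(n_electrons, knee=100.0):
    """Toy pixel response: linear below the knee illumination,
    logarithmic above it (continuous at the knee)."""
    if n_electrons <= knee:
        return n_electrons
    return knee * (1.0 + math.log(n_electrons / knee))
```

    A 120 dB illumination range corresponds to a 10^6:1 ratio (20·log10(10^6) = 120), yet the logarithmic branch maps that entire range into a signal swing of roughly a thousand electrons in this model, which is why a few thousand electrons can cover the full dynamic range.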

  3. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
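    In the special case of Gaussian data with a parameter-independent covariance, score compression reduces to the linear statistic t = (dμ/dθ)ᵀ C⁻¹ (d − μ), recovering lossless linear compression. A sketch under that assumption (variable names are illustrative):

```python
import numpy as np

def score_compress(data, mu, dmu_dtheta, cov):
    """Compress N data points to n summaries via the score function,
    t = (dmu/dtheta)^T C^{-1} (d - mu), for Gaussian data with a
    parameter-independent covariance."""
    cinv = np.linalg.inv(cov)
    return dmu_dtheta.T @ cinv @ (data - mu)
```

    Because t preserves the Fisher information, the maximum-likelihood estimate for a linear model d = Aθ + noise is recovered directly from the n compressed numbers as (AᵀC⁻¹A)⁻¹ t.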

  4. Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.

    2017-06-01

    Multiple reverberation compression can achieve higher pressure and higher temperature at lower entropy. It can provide an important validation for elaborate, wide-ranging planetary models and can simulate the implosion process of an inertial confinement fusion capsule. In this work, we have investigated the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initially dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium was measured in the pressure-density-temperature (P-ρ-T) range of 1-150 GPa, 0.1-1.1 g/cm³, and 4600-24 000 K. The optical radiation emanating from the WDM helium was recorded, and the particle velocity profiles detected at the sample/window interface were obtained successfully for up to 10 compressions. The optical radiation results imply that dense He becomes rather opaque after the 2nd compression, at a density of about 0.3 g/cm³ and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed via the particle velocity measurements. The multiple compression technique efficiently enhances the density and the compressibility, and our multiple compression ratios (ηi = ρi/ρ0, i = 1-10) of helium are greatly improved from 3.5 to 43 relative to the initial precompressed density (ρ0). The relative compression ratio (ηi' = ρi/ρi-1) increases with pressure in the lower-density regime and decreases in the higher-density regime, with a turning point at the 3rd and 4th compression states under the different loading conditions. This nonmonotonic evolution of the compression is controlled by two factors, where the excitation of internal degrees of freedom results in the increasing compressibility and the repulsive interactions between the

  5. Fractal-Based Image Compression

    DTIC Science & Technology

    1990-01-01

    [Abstract text garbled in the source record. Recoverable fragments mention use of the Ziv-Lempel-Welch (ZLW) compression algorithm [5][4] in comparison experiments and software development, the reference J. Ziv and A. Lempel, "Compression of Individual Sequences via Variable-Rate Coding", the Collage Theorem, and a deterministic algorithm for computing IFS attractors for fast fractal image compression.]

  6. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  7. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  8. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  9. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image to the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on grayscale thermal (long-wave infrared) images showed very promising results.
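    The IQ-guided selection step can be sketched as follows, with plain arrays standing in for codec output (codec names and helper functions are illustrative; the study's regression models are not reproduced):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    decompressed candidate."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def select_by_iq(ref, reconstructions):
    """Given decompressed candidates from several codecs at the target
    compression ratio, pick the method with the highest IQ (PSNR here)."""
    return max(reconstructions, key=lambda name: psnr(ref, reconstructions[name]))
```

    The dual rule in the text (pick highest ratio at a fixed IQ target) would simply invert the roles of the metric and the ratio in the `max` key.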

  10. Predicting failure: acoustic emission of berlinite under compression.

    PubMed

    Nataf, Guillaume F; Castillo-Villa, Pedro O; Sellappan, Pathikumar; Kriven, Waltraud M; Vives, Eduard; Planes, Antoni; Salje, Ekhard K H

    2014-07-09

    Acoustic emission has been measured and statistical characteristics analyzed during the stress-induced collapse of porous berlinite, AlPO4, containing up to 50 vol% porosity. Stress collapse occurs in a series of individual events (avalanches), and each avalanche leads to a jerk in sample compression with corresponding acoustic emission (AE) signals. The distribution of AE avalanche energies can be approximately described by a power law p(E)dE ∝ E^(-ε)dE (ε ~ 1.8) over a large stress interval. We observed several collapse mechanisms whereby less porous minerals show the superposition of independent jerks, which were not related to the major collapse at the failure stress. In highly porous berlinite (40% and 50%) an increase of energy emission occurred near the failure point. In contrast, the less porous samples did not show such an increase in energy emission. Instead, in the near vicinity of the main failure point they showed a reduction in the energy exponent to ~ 1.4, which is consistent with the value reported for compressed porous systems displaying critical behavior. This suggests that a critical avalanche regime with a lack of precursor events occurs. In this case, all preceding large events were 'false alarms' and unrelated to the main failure event. Our results identify a method to use pico-seismicity detection of foreshocks to warn of mine collapse before the main failure (the collapse) occurs, which can be applied to highly porous materials only.
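    The energy exponent ε can be estimated from a set of avalanche energies with the standard maximum-likelihood (Hill-type) estimator for p(E) ∝ E^(-ε), E ≥ E_min; this is a generic sketch, not the authors' own fitting procedure:

```python
import numpy as np

def power_law_exponent(energies, e_min):
    """Maximum-likelihood estimate of the exponent eps of p(E) ~ E^-eps
    for events with E >= e_min (Hill-type estimator)."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    return 1.0 + e.size / np.sum(np.log(e / e_min))
```

    Tracking this estimate in a sliding window of events is one simple way to detect the drop toward ε ~ 1.4 that the study reports near the failure point.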

  11. Compression Ratio Adjuster

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1982-01-01

    New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  12. Artificial acoustic stiffness reduction in fully compressible, direct numerical simulation of combustion

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Trouvé, Arnaud

    2004-09-01

    A pseudo-compressibility method is proposed to modify the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma², where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term in the balance equation for total energy. This term is proportional to flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al (1985 Pressure gradient scaling method for fluid flow with nearly uniform pressure J. Comput. Phys. 58 361-76). It achieves gains in computational efficiency similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented into a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.
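    The order-of-magnitude time-step gain can be illustrated with a generic acoustic CFL estimate, dt ≤ dx/(|u| + c/α), where α is a pseudo-compressibility factor that rescales the effective acoustic speed (an illustrative formula, not the paper's exact expression):

```python
def acoustic_time_step(dx, u_max, c, alpha=1.0):
    """Explicit-solver CFL limit with the acoustic speed rescaled by a
    pseudo-compressibility factor alpha; alpha = 1 recovers the fully
    compressible restriction."""
    return dx / (u_max + c / alpha)
```

    For a low-Mach reacting flow (|u| << c), slowing the effective acoustic waves by a factor of ten lengthens the admissible time step by nearly the same factor, consistent with the gain quoted above.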

  13. Shock compression and release of a-axis magnesium single crystals: Anisotropy and time dependent inelastic response

    DOE PAGES

    Renganathan, P.; Winey, J. M.; Gupta, Y. M.

    2017-01-19

    Here, to gain insight into inelastic deformation mechanisms for shocked hexagonal close-packed (hcp) metals, particularly the role of crystal anisotropy, magnesium (Mg) single crystals were subjected to shock compression and release along the a-axis to 3.0 and 4.8 GPa elastic impact stresses. Wave profiles measured at several thicknesses, using laser interferometry, show a sharply peaked elastic wave followed by the plastic wave. Additionally, a smooth and featureless release wave is observed following peak compression. When compared to wave profiles measured previously for c-axis Mg, the elastic wave amplitudes for a-axis Mg are lower for the same propagation distance, and less attenuation of elastic wave amplitude is observed for a given peak stress. The featureless release wave for a-axis Mg is in marked contrast to the structured features observed for c-axis unloading. Numerical simulations, using a time-dependent anisotropic modeling framework, showed that the wave profiles calculated using prismatic slip or (10$\bar{1}$2) twinning, individually, do not match the measured compression profiles for a-axis Mg. However, a combination of slip and twinning provides a good overall match to the measured compression profiles. In contrast to compression, prismatic slip alone provides a reasonable match to the measured release wave profiles; (10$\bar{1}$2) twinning, due to its unidirectionality, is not activated during release. The experimental results and wave profile simulations for a-axis Mg presented here are quite different from the previously published c-axis results, demonstrating the important role of crystal anisotropy in the time-dependent inelastic deformation of Mg single crystals under shock compression and release.

  14. Shock compression and release of a-axis magnesium single crystals: Anisotropy and time dependent inelastic response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renganathan, P.; Winey, J. M.; Gupta, Y. M.

    Here, to gain insight into inelastic deformation mechanisms for shocked hexagonal close-packed (hcp) metals, particularly the role of crystal anisotropy, magnesium (Mg) single crystals were subjected to shock compression and release along the a-axis to 3.0 and 4.8 GPa elastic impact stresses. Wave profiles measured at several thicknesses, using laser interferometry, show a sharply peaked elastic wave followed by the plastic wave. Additionally, a smooth and featureless release wave is observed following peak compression. When compared to wave profiles measured previously for c-axis Mg, the elastic wave amplitudes for a-axis Mg are lower for the same propagation distance, and less attenuation of elastic wave amplitude is observed for a given peak stress. The featureless release wave for a-axis Mg is in marked contrast to the structured features observed for c-axis unloading. Numerical simulations, using a time-dependent anisotropic modeling framework, showed that the wave profiles calculated using prismatic slip or (10$\bar{1}$2) twinning, individually, do not match the measured compression profiles for a-axis Mg. However, a combination of slip and twinning provides a good overall match to the measured compression profiles. In contrast to compression, prismatic slip alone provides a reasonable match to the measured release wave profiles; (10$\bar{1}$2) twinning, due to its unidirectionality, is not activated during release. The experimental results and wave profile simulations for a-axis Mg presented here are quite different from the previously published c-axis results, demonstrating the important role of crystal anisotropy in the time-dependent inelastic deformation of Mg single crystals under shock compression and release.

  15. Compressibility of the protein-water interface

    NASA Astrophysics Data System (ADS)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
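    The self/cross decomposition above can be sketched from the standard fluctuation formula κ_T = ⟨δV²⟩/(⟨V⟩ k_B T) applied to V = V_protein + V_shell. This is a generic decomposition in the spirit of the analysis, not the paper's Voronoi-based implementation; inputs would be per-frame volumes from a simulation:

```python
import numpy as np

def compressibility_decomposition(v_protein, v_shell, kBT):
    """Split kappa_T = <dV^2> / (<V> kBT) into self (protein), cross,
    and self (shell) terms for V = V_protein + V_shell."""
    vp = np.asarray(v_protein, float) - np.mean(v_protein)
    vw = np.asarray(v_shell, float) - np.mean(v_shell)
    v_mean = np.mean(v_protein) + np.mean(v_shell)
    self_p = np.mean(vp * vp) / (v_mean * kBT)
    cross = 2.0 * np.mean(vp * vw) / (v_mean * kBT)
    self_w = np.mean(vw * vw) / (v_mean * kBT)
    return self_p, cross, self_w
```

    By construction the three terms sum to the total compressibility, and a positive cross term corresponds to the positively cross-correlated protein-water volume fluctuations the study finds dominant.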

  16. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available; these can be either lossy or lossless. For both cases, this study aims to evaluate and compare state-of-the-art compression techniques for unstructured particle data. It focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rate, run-time/throughput, and reconstruction error are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
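    The quantitative indicators the study measures -- compression ratio and run-time -- can be sketched with stdlib codecs standing in for the Blosc/XZ/FPZIP/ZFP family (the harness and data are illustrative):

```python
import lzma
import time
import zlib
import numpy as np

def benchmark(codecs, payload):
    """Measure compression ratio and wall-clock time for each codec on
    the same byte payload."""
    results = {}
    for name, compress in codecs.items():
        t0 = time.perf_counter()
        blob = compress(payload)
        results[name] = {
            'ratio': len(payload) / len(blob),
            'seconds': time.perf_counter() - t0,
        }
    return results
```

    Real particle snapshots behave like the correlated random walk used in the test below: lossless coders gain something from the correlated high-order bytes, while the lossy FPZIP/ZFP class trades bounded reconstruction error for much higher ratios.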

  17. Compressibility of the protein-water interface.

    PubMed

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than

  18. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  19. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  20. Effect of compressibility on the hypervelocity penetration

    NASA Astrophysics Data System (ADS)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. We also define penetration efficiency in several modified models and compare these definitions to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases: penetration by a more compressible rod into a less compressible target, by a rod into an analogously compressible target, and by a less compressible rod into a more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. The results indicate that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both larger volumetric strain and higher strength enhance the penetration or anti-penetration ability, whereas higher internal energy weakens it. The two trends conflict, but the volumetric strain dominates the variation of the penetration efficiency, which does not approach the hydrodynamic limit unless the rod and target are analogously compressible. If their compressibility is analogous, however, it has little effect on the penetration efficiency.

  1. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

Since the appearance of digital audio recordings, audio authentication has become increasingly difficult. Currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists of fluctuation analysis of the mains frequency induced in the electronic circuits of recording devices. Its effectiveness is therefore strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions were proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches evaluate statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. The calculated feature vectors are used for training selected machine learning algorithms. Detection of multiple compression covers tampering activities as well as identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm based on analysis of inherent compression parameters was developed and applied. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of the compression algorithms' parameters on the classification performance is discussed, based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
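A minimal sketch of the kind of feature extraction the abstract describes: a plain MDCT per frame, then a histogram of the small quantized coefficients, whose distribution is distorted by repeated lossy encoding. This is an illustration only, not the author's pipeline; the frame size, bin count, and rounding-based quantizer are assumptions.

```python
import numpy as np

def mdct(frame):
    """Plain MDCT of one 2N-sample frame, yielding N coefficients."""
    n2 = len(frame)
    n = n2 // 2
    k = np.arange(n)
    t = np.arange(n2)
    basis = np.cos(np.pi / n * (t[None, :] + 0.5 + n / 2) * (k[:, None] + 0.5))
    return basis @ frame

def coefficient_histogram_features(signal, frame=1024, bins=32):
    """Normalized histogram of small quantized MDCT coefficients; double
    compression tends to leave gaps/peaks in this distribution."""
    coeffs = []
    for start in range(0, len(signal) - frame + 1, frame // 2):
        coeffs.append(mdct(signal[start:start + frame]))
    q = np.rint(np.concatenate(coeffs)).astype(int)
    small = q[np.abs(q) <= bins // 2]
    hist, _ = np.histogram(small, bins=bins + 1,
                           range=(-bins / 2 - 0.5, bins / 2 + 0.5))
    return hist / max(hist.sum(), 1)

# Illustrative input: white noise stands in for decoded audio samples.
rng = np.random.default_rng(0)
feat = coefficient_histogram_features(rng.standard_normal(8192))
print(feat.shape)  # (33,)
```

Feature vectors of this form would then be fed to the machine learning classifiers mentioned above.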

  2. Influence of crystal habit on the compression and densification mechanism of ibuprofen

    NASA Astrophysics Data System (ADS)

    Di Martino, Piera; Beccerica, Moira; Joiris, Etienne; Palmieri, Giovanni F.; Gayot, Anne; Martelli, Sante

    2002-08-01

Ibuprofen was recrystallized from several solvents by two different methods: addition of a non-solvent to a drug solution, and cooling of a drug solution. Four samples with different crystal habits were selected: samples A, E, and T, recrystallized from acetone, ethanol, and THF, respectively, by addition of water as a non-solvent, and sample M, recrystallized from methanol by temperature decrease. By SEM analysis, the samples were characterized with respect to their crystal habit, mean particle diameter, and elongation ratio. Sample A appears stick-shaped, sample E acicular with lamellar characteristics, and samples T and M polyhedral. DSC and X-ray diffraction studies ruled out a polymorphic modification of ibuprofen during crystallization. For all samples, the micromeritic properties, densification behaviour, and compression ability were analysed. Sample M shows a higher densification tendency, evidenced by its higher apparent and tapped particle density. The ability to densify is also indicated by the D0' value of the Heckel plot, which reflects the rearrangement of the original particles at the initial stage of compression. This behaviour is related to the crystal habit of sample M, which is characterized by strongly smoothed corners. The increase in powder bed porosity permits particle-particle interaction of greater extent during the subsequent stage of compression, which allows higher tabletability and compressibility.

  3. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    PubMed

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two approaches to constructing sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and we propose efficient VLSI architecture designs for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
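The encoding side of such a scheme is easy to sketch: with a sparse binary measurement matrix, the compressed measurement y = Φx needs only a few additions per sample. This is a generic illustration, not the paper's QCAC construction; the column weight of 3 and the dimensions are arbitrary assumptions.

```python
import numpy as np

def sparse_binary_matrix(m, n, ones_per_column=3, seed=0):
    """Random sparse binary measurement matrix: each column carries a fixed
    small number of ones, so encoding needs no multiplications."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        rows = rng.choice(m, size=ones_per_column, replace=False)
        phi[rows, j] = 1
    return phi

def cs_encode(phi, x):
    """Compressed measurement y = Phi @ x; with binary Phi this amounts to
    summing the input samples selected by each row."""
    return phi @ x

m, n = 64, 256                       # 4x compression of a 256-sample frame
phi = sparse_binary_matrix(m, n)
x = np.random.default_rng(1).standard_normal(n)
y = cs_encode(phi, x)
print(y.shape)  # (64,)
```

Recovery of x from y would be done offline (outside the implant) with a standard sparse-reconstruction solver.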

  4. Fibrocartilage in tendons and ligaments — an adaptation to compressive load

    PubMed Central

    BENJAMIN, M.; RALPHS, J. R.

    1998-01-01

    Where tendons and ligaments are subject to compression, they are frequently fibrocartilaginous. This occurs at 2 principal sites: where tendons (and sometimes ligaments) wrap around bony or fibrous pulleys, and in the region where they attach to bone, i.e. at their entheses. Wrap-around tendons are most characteristic of the limbs and are commonly wider at their point of bony contact so that the pressure is reduced. The most fibrocartilaginous tendons are heavily loaded and permanently bent around their pulleys. There is often pronounced interweaving of collagen fibres that prevents the tendons from splaying apart under compression. The fibrocartilage can be located within fascicles, or in endo- or epitenon (where it may protect blood vessels from compression or allow fascicles to slide). Fibrocartilage cells are commonly packed with intermediate filaments which could be involved in transducing mechanical load. The ECM often contains aggrecan which allows the tendon to imbibe water and withstand compression. Type II collagen may also be present, particularly in tendons that are heavily loaded. Fibrocartilage is a dynamic tissue that disappears when the tendons are rerouted surgically and can be maintained in vitro when discs of tendon are compressed. Finite element analyses provide a good correlation between its distribution and levels of compressive stress, but at some locations fibrocartilage is a sign of pathology. Enthesis fibrocartilage is most typical of tendons or ligaments that attach to the epiphyses of long bones where it may also be accompanied by sesamoid and periosteal fibrocartilages. It is characteristic of sites where the angle of attachment changes throughout the range of joint movement and it reduces wear and tear by dissipating stress concentration at the bony interface. There is a good correlation between the distribution of fibrocartilage within an enthesis and the levels of compressive stress. The complex interlocking between calcified

  5. Micromechanics of composite laminate compression failure

    NASA Technical Reports Server (NTRS)

    Guynn, E. Gail; Bradley, Walter L.

    1986-01-01

    The Dugdale analysis for metals loaded in tension was adapted to model the failure of notched composite laminates loaded in compression. Compression testing details, MTS alignment verification, and equipment needs were resolved. Thus far, only 2 ductile material systems, HST7 and F155, were selected for study. A Wild M8 Zoom Stereomicroscope and necessary attachments for video taping and 35 mm pictures were purchased. Currently, this compression test system is fully operational. A specimen is loaded in compression, and load vs shear-crippling zone size is monitored and recorded. Data from initial compression tests indicate that the Dugdale model does not accurately predict the load vs damage zone size relationship of notched composite specimens loaded in compression.

  6. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

The lack of available wideband digital links as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder) has kept the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA similarly recognizes the benefits of and trend toward digital video compression techniques for transmission of high-quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast-quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.
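The core DPCM loop can be sketched in a few lines. The quantizer table below is a hypothetical non-uniform one (fine steps near zero for smooth areas, coarse steps for edges), not NASA's, and the previous-pixel predictor stands in for the actual non-adaptive predictor; the Huffman stage is omitted.

```python
import numpy as np

# Hypothetical non-uniform quantizer levels for the prediction error.
LEVELS = np.array([-48, -24, -10, -4, 0, 4, 10, 24, 48])

def quantize(err):
    """Map a prediction error to the nearest quantizer level."""
    return LEVELS[np.argmin(np.abs(LEVELS - err))]

def dpcm_line(pixels):
    """Encode one scan line: predict each pixel from the reconstructed
    previous one, quantize the error, and track the decoder's state so
    encoder and decoder stay in lockstep."""
    recon = np.empty_like(pixels, dtype=float)
    codes = []
    pred = 128.0                       # assumed mid-gray start state
    for i, p in enumerate(pixels):
        q = quantize(float(p) - pred)
        codes.append(q)
        recon[i] = np.clip(pred + q, 0, 255)
        pred = recon[i]                # non-adaptive previous-pixel predictor
    return codes, recon

line = np.array([128, 130, 131, 200, 205, 90], dtype=float)
codes, recon = dpcm_line(line)
```

In the real system the `codes` stream would then be entropy-coded by the multilevel Huffman coder.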

  7. A cascadable circular concentrator with parallel compressed structure for increasing the energy density

    NASA Astrophysics Data System (ADS)

    Ku, Nai-Lun; Chen, Yi-Yung; Hsieh, Wei-Che; Whang, Allen Jong-Woei

    2012-02-01

Due to the energy crisis, green energy is gaining popularity, which leads to increasing interest in renewable sources such as solar energy. How to collect sunlight for indoor illumination therefore becomes our target: with growing environmental awareness, we use natural light as the light source and have devoted ourselves to developing a solar collecting system. The Natural Light Guiding System includes three parts: collecting, transmitting, and lighting. Our solar collecting system is designed to combine buildings with a combination of collecting modules, so it can be used anywhere sunlight directly impinges on buildings equipped with collecting elements. While collecting the sunlight with high efficiency, we can transmit it indoors over a short distance by light pipe to wherever light is needed. We propose a novel design that includes a disk-type collective lens module. With this design, the incident and exit light are parallel and compressed; every output ray becomes compressed in the proposed optical structure. In this way, we increase the light-compression ratio, obtain better efficiency, and make the energy distribution more uniform for indoor illumination. Defining "KPI" as a performance index of light density, lm/(mm)2, the simulation results show that the proposed concentrator achieves 40,000,000 KPI, much better than the 800,000 KPI measured for traditional concentrators.

  8. Evaluation of balloon trajectory forecast routines for GAINS

    NASA Astrophysics Data System (ADS)

    Collander, R.; Girz, C.

The Global Air-ocean IN-situ System (GAINS) is a global observing system designed to augment current environmental observing and monitoring networks. GAINS is a network of long-duration, stratospheric platforms that carry onboard sensors and hundreds of dropsondes to acquire meteorological, air chemistry, and climate data over oceans and in remote land regions of the globe. Although GAINS platforms will include balloons and Remotely Operated Aircraft (ROA), the scope of this paper is limited to balloon-based platforms. A primary goal of GAINS balloon test flights is post-flight recovery of the balloon shell and payload, which requires information on the expected flight path and landing site prior to launch. Software has been developed for the prediction of the balloon trajectory and landing site, with separate versions written to generate predictions based upon rawinsonde data and model output. Balloon positions are calculated in 1-min increments based on wind data from the closest rawinsonde site or model grid point, given a known launch point, ascent and descent rates, and flight duration. For short flights (< 6 h), rawinsonde winds interpolated to 10-mb levels are used for trajectory calculations. Predictions for flight durations of 6 to 48 h are based upon the initialization and 3-h forecast wind fields from NOAA's global aviation (AVN) and Rapid Update Cycle (RUC) models. Given a limited number of actual balloon launches, trajectories computed from a chronological series of hourly RUC initializations are used as the baseline for comparison purposes. These baseline trajectories are compared to trajectory predictions from the rawinsonde and model-based versions on a monthly and seasonal basis over a 1-year period (January 1 - December 31, 2001) for flight durations of 3 h, 6 h and 48 h. Predicted trajectories diverge from the baseline path, with the divergence increasing with increasing time.
We examine the zonal, meridional and net magnitudes of these deviations, and
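The 1-min position stepping described above reduces to a simple integration loop. The sketch below is illustrative, not the GAINS software: the flat-earth degree conversion and the `winds(alt)` lookup (standing in for the nearest-sounding or model grid-point wind) are assumptions.

```python
import math

def predict_trajectory(launch, winds, ascent_rate, duration_min):
    """March the balloon in 1-min steps: displace by the (u east, v north,
    in m/s) wind at the current altitude, then climb at the ascent rate."""
    lat, lon, alt = launch
    m_per_deg = 111_000.0            # rough metres per degree of latitude
    track = [(lat, lon, alt)]
    for _ in range(duration_min):
        u, v = winds(alt)
        lat += v * 60.0 / m_per_deg
        lon += u * 60.0 / (m_per_deg * math.cos(math.radians(lat)))
        alt += ascent_rate * 60.0
        track.append((lat, lon, alt))
    return track

# Hypothetical constant 10 m/s westerly wind, 5 m/s ascent, 1-h flight.
track = predict_trajectory((40.0, -100.0, 0.0), lambda alt: (10.0, 0.0), 5.0, 60)
```

The descent leg and landing-site estimate would be handled the same way with a negative rate.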

  9. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. Photographs of 15 subjects, which included eyes with normal, subtle, and distinct pathologies, were digitized to produce 1.54 MB images and compressed by JPEG and wavelet methods to five different image sizes. The images were assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression, which produced better images than JPEG compression for a given image size. Overall, images had to be compressed to below 2.5 percent (JPEG) or 1.7 percent (wavelet) of their original size before fine detail was lost or image quality became too poor for a reliable diagnosis.
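The objective metric in (i) is simply the root-mean-square pixel error between the original and the image recovered after lossy compression; a minimal sketch:

```python
import numpy as np

def rms_error(original, decompressed):
    """RMS pixel error between an uncompressed image and the image
    recovered after lossy compression (cast to float to avoid wrap-around
    on unsigned pixel types)."""
    diff = original.astype(float) - decompressed.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy 4x4 images: a uniform offset of 3 gray levels gives RMS error 3.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 3, dtype=np.uint8)
print(rms_error(a, b))  # 3.0
```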

  10. Evolution of the Orszag-Tang vortex system in a compressible medium. I - Initial average subsonic flow

    NASA Technical Reports Server (NTRS)

    Dahlburg, R. B.; Picone, J. M.

    1989-01-01

    The results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity field contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2-0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  11. Evolution of the Orszag--Tang vortex system in a compressible medium. I. Initial average subsonic flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, R.B.; Picone, J.M.

In this paper the results of fully compressible, Fourier collocation, numerical simulations of the Orszag--Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity field contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2--0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  12. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

A variable-gain, optimal, discrete, output feedback design approach that is applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete-control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  13. Streamflow gain/loss in the Republican River basin, Nebraska, March 1989

    USGS Publications Warehouse

    Johnson, Michaela R.; Stanton, Jennifer S.; Cornwall, James F.; Landon, Matthew K.

    2002-01-01

    This arc and point data set contains streamflow measurement sites and reaches indicating streamflow gain or loss under base-flow conditions along the Republican River and tributaries in Nebraska during March 21 to 22, 1989 (Boohar and others, 1990). These measurements were made to obtain data on ground-water/surface-water interaction. Flow was visually observed to be zero, was measured, or was estimated at 136 sites. The measurements were made on the main stem of the Republican River and all flowing tributaries that enter the Republican River above Swanson Reservoir and parts of the Frenchman, Red Willow, and Medicine Creek drainages in the Nebraska part of the Republican River Basin. Tributaries were followed upstream until the first road crossing where zero flow was encountered. For selected streams, points of zero flow upstream of the first zero flow site were also checked. Streamflow gain or loss for each stream reach was calculated by subtracting the streamflow values measured at the upstream end of the reach and values for contributing tributaries from the downstream value. The data obtained reflected base-flow conditions suitable for estimating streamflow gains and losses for stream reaches between sites. This digital data set was created by manually plotting locations of streamflow measurements. These points were used to designate stream-reach segments to calculate gain/loss per river mile. Reach segments were created by manually splitting the lines from a 1:250,000 hydrography data set (Soenksen and others, 1999) at every location where the streams were measured. Each stream-reach segment between streamflow-measurement sites was assigned a unique reach number. All other lines in the hydrography data set without reach numbers were omitted. This data set was created to archive the calculated streamflow gains and losses of selected streams in part of the Republican River Basin, Nebraska in March 1989, and make the data available for use with geographic
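The per-reach bookkeeping described above amounts to one subtraction per reach, then a division by reach length; the discharge values below are made up for illustration.

```python
def reach_gain(downstream_cfs, upstream_cfs, tributaries_cfs, reach_miles):
    """Base-flow gain (+) or loss (-) for one stream reach: downstream
    discharge minus the upstream discharge and all contributing tributaries,
    returned both as a total and per river mile."""
    gain = downstream_cfs - upstream_cfs - sum(tributaries_cfs)
    return gain, gain / reach_miles

# Hypothetical reach: 40 cfs entering, two tributaries adding 6 and 2.5 cfs,
# 54 cfs measured downstream over an 11-mile reach.
total, per_mile = reach_gain(54.0, 40.0, [6.0, 2.5], reach_miles=11.0)
print(total, per_mile)  # 5.5 0.5
```

A positive result indicates ground-water discharge to the stream; a negative one, seepage loss.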

  14. The point explosion with radiation transport

    NASA Astrophysics Data System (ADS)

    Lin, Zhiwei; Zhang, Lu; Kuang, Longyu; Jiang, Shaoen

    2017-10-01

A quantity of energy released instantaneously at the origin generates simultaneously a spherical radiative heat wave and a spherical shock wave; the competition between these two waves makes the point explosion with radiation transport a complicated problem. The point explosion problem possesses self-similar solutions when only hydrodynamic motion or only heat conduction is considered, namely the Sedov and Barenblatt solutions, respectively. The point explosion problem in which both hydrodynamic motion and heat conduction are included has been studied by P. Reinicke and A. I. Shestakov. In this talk we numerically investigate the point explosion problem in which both hydrodynamic motion and radiation transport are taken into account. The radiation transport equation in one-dimensional spherical geometry has to be solved for this problem, since the ambient medium is optically thin with respect to the initially extremely high temperature at the origin. The numerical results reveal a high compression of the medium and a bi-peak structure of the density, which are further analyzed theoretically at the end.

  15. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    NASA Astrophysics Data System (ADS)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were investigated as a case study). As the main supporting assumption, it has been accepted that the content can be compressed as long as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to rank compressed bronchoscopic video content according to its quality under the bubble-sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  16. Quantitative study of fluctuation effects by fast lattice Monte Carlo simulations: Compression of grafted homopolymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu

    2014-01-28

Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau–Optimized Ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.

  17. Quantum-mechanical analysis of low-gain free-electron laser oscillators

    NASA Astrophysics Data System (ADS)

    Fares, H.; Yamada, M.; Chiadroni, E.; Ferrario, M.

    2018-05-01

In the previous classical theory of low-gain free-electron laser (FEL) oscillators, the electron is described as a point-like particle, a delta function in position space. On the other hand, in the previous quantum treatments, the electron is described as a plane wave with a single momentum state, a delta function in momentum space. In reality, an electron must have statistical uncertainties in both the position and momentum domains, so it is neither a point-like charge nor a plane wave of a single momentum. In this paper, we rephrase the theory of the low-gain FEL with the interacting electron represented quantum mechanically by a plane wave with a finite spreading length (i.e., a wave packet). Using the concepts of the transformation of reference frames and statistical quantum mechanics, an expression for the single-pass radiation gain is derived. The spectral broadening of the radiation is expressed in terms of the spreading length of an electron, the relaxation time characterizing the energy spread of electrons, and the interaction time. We compare our results with those of the known classical analyses and find good agreement. Beyond this correspondence, novel insights into the electron dynamics and the interaction mechanism are presented.

  18. Competitive Parallel Processing For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Antony R. H.

    1990-01-01

Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
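The referee scheme maps naturally onto running several codecs on the same block and keeping the smallest output. In this sketch, general-purpose stdlib codecs stand in for the proposed parallel video processors; the output is tagged so the receiver knows which decoder to apply.

```python
import bz2
import lzma
import zlib

def competitive_compress(block):
    """Compress one data block with several algorithms in parallel
    (sequentially here) and let a 'referee' keep the momentarily-best,
    i.e. smallest, result."""
    candidates = {
        "zlib": zlib.compress(block, 9),
        "bz2": bz2.compress(block, 9),
        "lzma": lzma.compress(block),
    }
    name, best = min(candidates.items(), key=lambda kv: len(kv[1]))
    return name, best

name, payload = competitive_compress(b"abc" * 1000)
```

Per-block winners can differ across a stream, which is the point of the competition.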

  19. Video compression via log polar mapping

    NASA Astrophysics Data System (ADS)

    Weiman, Carl F. R.

    1990-09-01

    A three stage process for compressing real time color imagery by factors in the range of 1600-to-1 is proposed for remote driving. The key is to match the resolution gradient of human vision and preserve only those cues important for driving. Some hardware components have been built and a research prototype is planned. Stage 1 is log polar mapping, which reduces peripheral image sampling resolution to match the peripheral gradient in human visual acuity. This can yield 25-to-1 compression. Stage 2 partitions color and contrast into separate channels. This can yield 8-to-1 compression. Stage 3 is conventional block data compression such as hybrid DCT/DPCM, which can yield 8-to-1 compression. The product of all three stages is 1600-to-1 data compression. The compressed signal can be transmitted over FM bands which do not require line-of-sight, greatly increasing the range of operation and reducing the topographic exposure of teleoperated vehicles. Since the compressed channel data contains the essential constituents of human visual perception, imagery reconstructed by inverting each of the three compression stages is perceived as complete, provided the operator's direction of gaze is at the center of the mapping. This can be achieved by eye-tracker feedback which steers the center of log polar mapping in the remote vehicle to match the teleoperator's direction of gaze.
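
    As a rough illustration of Stage 1, the log polar resampling can be sketched as follows; the grid sizes, the image center, and the nearest-neighbor sampling are illustrative choices here, not the paper's design:

```python
import numpy as np

def log_polar_map(image, out_rings=64, out_spokes=128, r_min=1.0):
    """Resample a square image onto a log-polar grid centered at the image
    midpoint: ring spacing grows exponentially with radius, so peripheral
    resolution falls off the way human visual acuity does."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii, uniformly spaced angles
    radii = r_min * (r_max / r_min) ** (np.arange(out_rings) / (out_rings - 1))
    thetas = 2 * np.pi * np.arange(out_spokes) / out_spokes
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]  # nearest-neighbor sampling for simplicity

img = np.arange(256 * 256, dtype=float).reshape(256, 256)
mapped = log_polar_map(img)
print(mapped.shape)            # (64, 128)
print(img.size / mapped.size)  # 8.0 sample-count reduction in this toy setup
```

    Inverting the map (with interpolation) yields a foveated reconstruction that appears complete when the viewer's gaze sits at the mapping center, which is the property the eye-tracker feedback exploits.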

  20. Role of fluctuations in random compressible systems at marginal dimensionality

    NASA Astrophysics Data System (ADS)

    Meissner, G.; Sasvári, L.; Tadić, B.

    1986-07-01

    In a unified treatment we have studied the role of fluctuations in uniaxial random systems at marginal dimensionality d* = 4, with an n = 1 component order parameter coupled to elastic degrees of freedom. Depending on the ratio of the nonuniversal parameters of quenched disorder Δ0 and of elastic fluctuations ṽ0, a first- or second-order phase transition is found to occur, separated by a tricritical point. A complete account of critical properties and of macroscopic as well as microscopic elastic stability is given for temperatures T > Tc. Universal singularities of thermodynamic functions are determined for t = (T - Tc)/Tc → 0, including the tricritical point: for ṽ0/Δ0 > -2 they are the same as in a rigid random system; for ṽ0/Δ0 = -2 they are different due to the lattice compressibility, being related, however, to the former by Fisher renormalization. Fluctuation corrections in one-loop approximation have been evaluated in a nonuniversal critical temperature range, tx < t. The critical behavior of random compressible systems, unlike that of pure compressible systems, is finally shown to remain stable against weak lattice anisotropy.

  1. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    PubMed

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate differences in outcome after treatment with hot herbal compress, hot compress, and topical diclofenac. Participants were divided equally into three groups receiving hot herbal compress, hot compress, or topical diclofenac, the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to assess pain intensity and quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were examined to identify motional effects. All treatments significantly decreased pain intensity and increased cervical range of motion, while the two compress groups outperformed the topical diclofenac group on pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as a treatment at least as efficacious as hot compress and topical diclofenac.

  2. Individual Differences in Social, Cognitive, and Morphological Aspects of Infant Pointing

    ERIC Educational Resources Information Center

    Liszkowski, Ulf; Tomasello, Michael

    2011-01-01

    Little is known about the origins of the pointing gesture. We sought to gain insight into its emergence by investigating individual differences in the pointing of 12-month-old infants in two ways. First, we looked at differences in the communicative and interactional uses of pointing and asked how different hand shapes relate to point frequency,…

  3. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component of modern imaging systems, as the volume of raw data collected continues to grow. To reduce the data volume while retaining imagery useful for analysis, the appropriate compression method must be chosen. Lossless compression preserves all the information but has limited reduction power; lossy compression can achieve very high compression ratios but suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale (NIIRS), a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework, which predicts the NIIRS degradation (i.e., the decrease in NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  4. LNA with wide range of gain control and wideband interference rejection

    NASA Astrophysics Data System (ADS)

    Wang, Jhen-Ji; Chen, Duan-Yu

    2016-10-01

    This work presents a low-noise amplifier (LNA) design with a wide-range gain control characteristic that integrates adjustable current distribution and output impedance techniques. For a given gain characteristic, the proposed LNA provides better wideband interference rejection than a conventional LNA, and it also offers a wider gain control range, making it suitable for satellite communications systems. The simulation results demonstrate that the voltage gain control range is between 14.5 and 34.2 dB for such applications (2600 MHz); the input reflection coefficient is less than -18.9 dB; the noise figure (NF) is 1.25 dB; and the third-order intercept point (IIP3) is 4.52 dBm. The proposed LNA consumes 23.85-28.17 mW at a supply voltage of 1.8 V and is implemented in TSMC 0.18-μm RF CMOS process technology.

  5. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †

    PubMed Central

    Ni, Yang

    2018-01-01

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903

  6. Method for data compression by associating complex numbers with files of data values

    DOEpatents

    Feo, J.T.; Hanks, D.C.; Kraay, T.A.

    1998-02-10

    A method for compressing data for storage or transmission is disclosed. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file. 4 figs.
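
    A toy version of the forward (root-generated-data-file) step can be sketched as below; the cubic polynomial z³ − 1, the 8×8 grid of seed points, and the values attached to the roots are all invented for illustration, not taken from the patent:

```python
import numpy as np

# Roots of the (illustrative) polynomial p(z) = z^3 - 1 and the value
# assigned to each root; a real RGDF would use the polynomial and value
# map produced by the EDC search.
roots = np.exp(2j * np.pi * np.arange(3) / 3)
values = {0: 10, 1: 20, 2: 30}

def newton_root_index(z, n_iter=50):
    # Newton iteration for p(z) = z^3 - 1, p'(z) = 3 z^2
    for _ in range(n_iter):
        z = z - (z**3 - 1) / (3 * z**2)
    return int(np.argmin(np.abs(roots - z)))  # which root we landed on

# Each file entry corresponds to a seed point in the complex plane; the
# entry's value is the value attached to the root whose basin it lies in.
grid = [complex(x, y) for y in np.linspace(-1, 1, 8)
                      for x in np.linspace(-1, 1, 8)]
data_file = [values[newton_root_index(z)] for z in grid]
print(len(data_file), set(data_file) <= {10, 20, 30})   # 64 True
```

    The inverse (compression) direction, searching for roots and a value map that regenerate a given target file, is the hard part the patent's EDC search addresses.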

  7. Method for data compression by associating complex numbers with files of data values

    DOEpatents

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.

  8. Calculation of three-dimensional compressible laminar and turbulent boundary layers. Calculation of three-dimensional compressible boundary layers on arbitrary wings

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Ramsey, J.; Moser, A.

    1975-01-01

    A very general method for calculating compressible three-dimensional laminar and turbulent boundary layers on arbitrary wings is described. The method utilizes a nonorthogonal coordinate system for the boundary-layer calculations and includes a geometry package that represents the wing analytically. In the calculations all the geometric parameters of the coordinate system are accounted for. The Reynolds shear-stress terms are modeled by an eddy-viscosity formulation developed by Cebeci. The governing equations are solved by a very efficient two-point finite-difference method used earlier by Keller and Cebeci for two-dimensional flows and later by Cebeci for three-dimensional flows.

  9. System design of an optical interferometer based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Wen, De-Sheng; Song, Zong-Xi

    2018-07-01

    In this paper, we develop a new optical interferometric telescope architecture based on compressive sensing (CS) theory. Traditional optical telescopes with large apertures must be large, heavy, and power-hungry, which limits the development of space-based telescopes. A turning point has come with the advent of imaging technology that utilizes Fourier-domain interferometry, which can reduce system size, weight, and power consumption by an order of magnitude compared to traditional optical telescopes at the same resolution. CS theory demonstrates that incomplete and noisy Fourier measurements may suffice for the exact reconstruction of sparse or compressible signals. Our proposed architecture combines the advantages of the two frameworks, and its performance is evaluated through simulations. The results indicate the ability to sample spatial frequencies efficiently while remaining lightweight and compact. Another attractive property of our architecture is its strong denoising ability for Gaussian noise.
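
    The core CS claim, exact recovery of a sparse signal from incomplete Fourier measurements, can be demonstrated with a one-dimensional toy problem; the sizes, the plain ISTA solver, and the least-squares debiasing step below are illustrative choices, not the architecture described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 128, 60, 5                      # signal length, measurements, sparsity

# k-sparse real signal with well-separated nonzero magnitudes
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)

# incomplete Fourier sensing: m of the n DFT rows, rows orthonormal
F = np.fft.fft(np.eye(n))[rng.choice(n, m, replace=False)] / np.sqrt(n)
y = F @ x_true

def soft(z, t):
    # complex soft-thresholding (proximal step for the l1 penalty)
    mag = np.abs(z)
    return z * np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)

# ISTA: unit gradient step on ||Fx - y||^2 / 2 (valid since the rows of F
# are orthonormal, so the Lipschitz constant is 1), then soft-threshold
lam, x = 0.01, np.zeros(n, dtype=complex)
for _ in range(4000):
    x = soft(x - F.conj().T @ (F @ x - y), lam)

# debias: refit by least squares on the detected support
support = np.flatnonzero(np.abs(x) > 0.1)
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(F[:, support], y, rcond=None)[0].real
print(np.allclose(x_hat, x_true, atol=1e-6))
```

    With m = 60 random Fourier samples of a length-128, 5-sparse signal, the reconstruction matches the original to numerical tolerance, despite less than half of the spectrum being measured.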

  10. Shear waves in inhomogeneous, compressible fluids in a gravity field.

    PubMed

    Godin, Oleg A

    2014-03-01

    While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves also can be supported by moving fluids as well as quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere.

  11. Pseudo-critical point in anomalous phase diagrams of simple plasma models

    NASA Astrophysics Data System (ADS)

    Chigvintsev, A. Yu; Iosilevskiy, I. L.; Noginova, L. Yu

    2016-11-01

    Anomalous phase diagrams in a subclass of simplified ("non-associative") Coulomb models are discussed. The common feature of this subclass is the absence, by construction, of individual correlations between charges of opposite sign. Examples include the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons. In contrast to the ordinary OCP model on a non-compressible ("rigid") background, OCP(#), two new phase transitions with upper critical points, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the OCP(∼) phase diagram becomes anomalous at sufficiently high ionic charge number Z: in the interval Z1 < Z < Z2, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation. Most remarkable is the appearance of pseudo-critical points at both boundary values, Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both of these pseudo-critical points. In this study we have improved our previous calculations and utilized the more sophisticated equation of state of the model components provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941).

  12. The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2001-01-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and is much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two-point correlation technique was used to quantify the turbulent structures. Two-point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 1/2 D_j. Two-point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to
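
    The two-point space correlation and the integral length scale derived from it can be estimated as in the sketch below; the synthetic velocity record and its correlation length are made up for illustration, not LES data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx = 4096, 0.01
# synthetic velocity fluctuation with a known correlation length:
# white noise smoothed by a 25-sample moving average
u = np.convolve(rng.standard_normal(n), np.ones(25) / 25, mode="same")
u -= u.mean()

def two_point_correlation(u, max_lag):
    """Normalized correlation R(r) = <u'(x) u'(x+r)> / <u'^2>."""
    var = np.mean(u * u)
    return np.array([np.mean(u[: len(u) - lag] * u[lag:]) / var
                     for lag in range(max_lag)])

R = two_point_correlation(u, 200)
# integral length scale: integrate R up to its first zero crossing
first_zero = int(np.argmax(R <= 0)) if np.any(R <= 0) else len(R)
L_int = float(np.sum(R[:first_zero]) * dx)
print(R[0], L_int > 0)   # 1.0 True
```

    The space-time variant shifts one record in time as well as space; the lag that maximizes the correlation at a given separation gives the convection velocity quoted above.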

  13. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  14. The compressed breast during mammography and breast tomosynthesis: in vivo shape characterization and modeling

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ruiz, Alejandro; Agasthya, Greeshma A.; Sechopoulos, Ioannis

    2017-09-01

    To characterize and develop a patient-based 3D model of the compressed breast undergoing mammography and breast tomosynthesis. During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3D breast surface imaging with structured light (SL) during breast compression, along with simultaneous acquisition of a tomosynthesis image. A pair of SL systems were used to acquire 3D surface images by projecting 24 different patterns onto the compressed breast and capturing their reflection off the breast surface in approximately 12-16 s. The 3D surface was characterized and modeled via principal component analysis. The resulting surface model was combined with a previously developed 2D model of projected compressed breast shapes to generate a full 3D model. Data from ten patients were discarded due to technical problems during image acquisition. The maximum breast thickness (found at the chest-wall) had an average value of 56 mm, and decreased 13% towards the nipple (breast tilt angle of 5.2°). The portion of the breast not in contact with the compression paddle or the support table extended on average 17 mm, 18% of the chest-wall to nipple distance. The outermost point along the breast surface lies below the midline of the total thickness. A complete 3D model of compressed breast shapes was created and implemented as a software application available for download, capable of generating new random realistic 3D shapes of breasts undergoing compression. Accurate characterization and modeling of the breast curvature and shape was achieved and will be used for various image processing and clinical tasks.

  15. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160

  16. Effect of interfragmentary gap on compression force in a headless compression screw used for scaphoid fixation.

    PubMed

    Tan, E S; Mat Jais, I S; Abdul Rahim, S; Tay, S C

    2018-01-01

    We investigated the effect of an interfragmentary gap on the final compression force using the Acutrak 2 Mini headless compression screw (length 26 mm) (Acumed, Hillsboro, OR, USA). Two blocks of solid rigid polyurethane foam in a custom jig were separated by spacers of varying thickness (1.0, 1.5, 2.0 and 2.5 mm) to simulate an interfragmentary gap. The spacers were removed before full insertion of the screw and the compression force was measured when the screw was buried 2 mm below the surface of the upper block. Gaps of 1.5 mm and 2.0 mm resulted in significantly decreased compression forces, whereas there was no significant decrease in compression force with a gap of 1 mm. An interfragmentary gap of 2.5 mm did not result in any contact between blocks. We conclude that an increased interfragmentary gap leads to decreased compression force with this screw, which may have implications on fracture healing.

  17. An integrated circuit floating point accumulator

    NASA Technical Reports Server (NTRS)

    Goldsmith, T. C.

    1977-01-01

    Goddard Space Flight Center has developed a large scale integrated circuit (type 623) which can perform pulse counting, storage, floating point compression, and serial transmission using a single monolithic device. Counts of 27 or 19 bits can be converted to transmitted values of 12 or 8 bits, respectively. Use of the 623 has resulted in substantial savings in weight, volume, and dollar resources on at least 11 scientific instruments to be flown on 4 NASA spacecraft. The design, construction, and application of the 623 are described.

  18. Effects of wearing lower leg compression sleeves on locomotion economy.

    PubMed

    Kurz, Eduard; Anders, Christoph

    2018-09-01

    The purpose of this investigation was to assess the effect of compression sleeves on muscle activation cost during locomotion. Twenty-two recreationally active men (age: 25 ± 3 years) ran on a treadmill at four different speeds (ordered sequence of 2.8, 3.3, 2.2, and 3.9 m/s). The tests were performed without (control situation, CON) and while wearing specially designed lower leg compression sleeves (SL). Myoelectric activity of five lower leg muscles (tibialis anterior, fibularis longus, lateral and medial heads of gastrocnemius, and soleus) was captured using surface EMG. To assess muscle activation cost, the cumulative muscle activity per distance travelled (CMAPD) was determined for the CON and SL situations. Repeated measures analyses of variance were performed separately for each muscle. The analyses revealed reduced lower leg muscle activation cost for SL for all muscles (p < 0.05, ηp² > 0.18). The respective significant reductions in CMAPD values during SL ranged between 4% and 16% and were largest at 2.8 m/s. These findings point towards a reduced muscle activation cost when wearing lower leg compression sleeves during locomotion, which has the potential to postpone muscle fatigue.
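
    The cost measure used here, CMAPD, is the time-integrated rectified EMG divided by the distance covered. A minimal sketch with synthetic data follows; the EMG trace, sampling rate, and units are invented, not the study's recordings:

```python
import numpy as np

def cmapd(emg, fs, speed):
    """Cumulative muscle activity per distance travelled.
    emg: raw EMG samples (arbitrary units); fs: sampling rate (Hz);
    speed: locomotion speed (m/s). Returns activity per metre."""
    duration = len(emg) / fs                   # s
    distance = speed * duration                # m
    integrated = np.sum(np.abs(emg)) / fs      # time-integrated rectified EMG
    return integrated / distance

rng = np.random.default_rng(1)
fs = 1000
emg = rng.standard_normal(10 * fs)             # 10 s of synthetic EMG at 1 kHz
for v in (2.2, 2.8, 3.3, 3.9):                 # the treadmill speeds used above
    print(f"{v} m/s -> CMAPD {cmapd(emg, fs, v):.4f}")
```

    Because the same muscle activity spread over a longer distance yields a lower CMAPD, the sleeve and control conditions are compared at matched speeds.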

  19. Compression strength of composite primary structural components

    NASA Technical Reports Server (NTRS)

    Johnson, Eric R.

    1992-01-01

    A status report of work performed during the period May 1, 1992 to October 31, 1992 is presented. Research was conducted in three areas: delamination initiation in postbuckled dropped-ply laminates; stiffener crippling initiated by delamination; and pressure pillowing of an orthogonally stiffened cylindrical shell. The geometrically nonlinear response and delamination initiation of compression-loaded dropped-ply laminates is analyzed. A computational model of the stiffener specimens that includes the capability to predict the interlaminar response at the flange free edge in postbuckling is developed. The distribution of the interacting loads between the stiffeners and the shell wall, particularly at the load transfer at the stiffener crossing point, is determined.

  20. Cost-effectiveness analysis of treatments for vertebral compression fractures.

    PubMed

    Edidin, Avram A; Ong, Kevin L; Lau, Edmund; Schmier, Jordana K; Kemner, Jason E; Kurtz, Steven M

    2012-07-01

    Vertebral compression fractures (VCFs) can be treated by nonsurgical management or by minimally invasive surgical treatment including vertebroplasty and balloon kyphoplasty. The purpose of the present study was to characterize the cost to Medicare for treating VCF-diagnosed patients by nonsurgical management, vertebroplasty, or kyphoplasty. We hypothesized that surgical treatments for VCFs using vertebroplasty or kyphoplasty would be a cost-effective alternative to nonsurgical management for the Medicare patient population. Cost per life-year gained for VCF patients in the US Medicare population was compared between operated (kyphoplasty and vertebroplasty) and non-operated patients and between kyphoplasty and vertebroplasty patients, all as a function of patient age and gender. Life expectancy was estimated using a parametric Weibull survival model (adjusted for comorbidities) for 858 978 VCF patients in the 100% Medicare dataset (2005-2008). Median payer costs were identified for each treatment group for up to 3 years following VCF diagnosis, based on 67 018 VCF patients in the 5% Medicare dataset (2005-2008). A discount rate of 3% was used for the base case in the cost-effectiveness analysis, with 0% and 5% discount rates used in sensitivity analyses. After accounting for the differences in median costs and using a discount rate of 3%, the cost per life-year gained for kyphoplasty and vertebroplasty patients ranged from $US1863 to $US6687 and from $US2452 to $US13 543, respectively, compared with non-operated patients. The cost per life-year gained for kyphoplasty compared with vertebroplasty ranged from -$US4878 (cost saving) to $US2763. Among patients for whom surgical treatment was indicated, kyphoplasty was found to be cost effective, and perhaps even cost saving, compared with vertebroplasty. Even for the oldest patients (85 years of age and older), both interventions would be considered cost effective in terms of cost per life-year gained.
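
    The cost-per-life-year-gained figure of merit discounts future life-years back to present value. A minimal sketch follows; the 3% base-case discount rate is from the study, but the costs and survival times are hypothetical:

```python
def discounted_life_years(years, rate):
    # present value of one life-year received at the end of each year
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, int(years) + 1))

def cost_per_life_year_gained(cost_a, ly_a, cost_b, ly_b, rate=0.03):
    """Incremental cost of treatment A over comparator B divided by the
    incremental discounted life-years."""
    dly = discounted_life_years(ly_a, rate) - discounted_life_years(ly_b, rate)
    return (cost_a - cost_b) / dly

# hypothetical: treatment costs $12,000 more and adds one life-year (5 vs 4)
icer = cost_per_life_year_gained(20000, 5, 8000, 4)
print(round(icer))   # 13911
```

    Raising the discount rate shrinks the present value of the extra life-year and so raises the cost per life-year gained, which is why the study reports 0% and 5% sensitivity analyses alongside the 3% base case.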

  1. Summary of Pressure Gain Combustion Research at NASA

    NASA Technical Reports Server (NTRS)

    Perkins, H. Douglas; Paxson, Daniel E.

    2018-01-01

    NASA has undertaken a systematic exploration of many different facets of pressure gain combustion over the last 25 years in an effort to exploit the inherent thermodynamic advantage of pressure gain combustion over the constant pressure combustion process used in most aerospace propulsion systems. Applications as varied as small-scale UAVs, rotorcraft, subsonic transports, hypersonics and launch vehicles have been considered. In addition to studying pressure gain combustor concepts such as wave rotors, pulse detonation engines, pulsejets, and rotating detonation engines, NASA has studied inlets, nozzles, ejectors and turbines which must also process unsteady flow in an integrated propulsion system. Other design considerations such as acoustic signature, combustor material life and heat transfer that are unique to pressure gain combustors have also been addressed in NASA research projects. In addition to a wide range of experimental studies, a number of computer codes, from 0-D up through 3-D, have been developed or modified to specifically address the analysis of unsteady flow fields. Loss models have also been developed and incorporated into these codes that improve the accuracy of performance predictions and decrease computational time. These codes have been validated numerous times across a broad range of operating conditions, and it has been found that once validated for one particular pressure gain combustion configuration, these codes are readily adaptable to the others. All in all, the documentation of this work has encompassed approximately 170 NASA technical reports, conference papers and journal articles to date. These publications are very briefly summarized herein, providing a single point of reference for all of NASA's pressure gain combustion research efforts. This documentation does not include the significant contributions made by NASA research staff to the programs of other agencies, universities, industrial partners and professional society

  2. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.

  3. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, deploying adjustable lossy-to-lossless compression techniques is essential to achieve acceptable performance over various kinds of communication networks. In particular, compression substantially reduces transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in our software are the lossy JPEG and H.263 methods and the lossless Lempel-Ziv (LZ77) method. Both objective and subjective assessments of the effect of lossy compression on the volume data were conducted. Favorable results were obtained, showing that a substantial compression ratio is achievable within distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is roughly the lower bound for acceptable quality when applying lossy compression to anatomical volume data (e.g. CT). For computer-simulated data, much higher PSNR (up to 100 dB) is attainable. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
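
    For reference, PSNR against the quoted 30 dB threshold is computed as below; the 8-bit test image and the noise level are synthetic stand-ins for a CT slice and the codec error:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit imagery."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
# additive Gaussian error with sigma = 5 grey levels, a mild distortion
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255).astype(np.uint8)
print(psnr(ref, noisy) > 30.0)   # sigma = 5 stays above the 30 dB bound
```

    The lossless LZ77 path corresponds to the infinite-PSNR case, which is why the system exposes an adjustable lossy-to-lossless range rather than a single codec.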

  4. Impact of Various Compression Ratio on the Compression Ignition Engine with Diesel and Jatropha Biodiesel

    NASA Astrophysics Data System (ADS)

    Sivaganesan, S.; Chandrasekaran, M.; Ruban, M.

    2017-03-01

    The present experimental investigation evaluates the effects of using blends of diesel fuel with a 20% concentration of methyl ester of Jatropha biodiesel at various compression ratios. Both diesel and the biodiesel blend were injected at 23º BTDC into the combustion chamber, and the experiment was carried out at three different compression ratios. Biodiesel was extracted from Jatropha oil; a 20% (B20) concentration was found to be the best blend ratio in an earlier experimental study. The engine was operated at compression ratios of 17.5, 16.5 and 15.5. The main objective was to obtain minimum specific fuel consumption, better efficiency and lower emissions across the different compression ratios. The results show that at full load the blend gives an increase in efficiency compared with diesel; the highest efficiency is obtained with B20MEOJBA at a compression ratio of 17.5, and thermal efficiency increases as the compression ratio increases. The biodiesel blend performs close to diesel, but emissions are reduced in all B20MEOJBA blends compared to diesel. This work thus identifies the best compression ratio and establishes the suitability of biodiesel blends as an alternative fuel in diesel engines.
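
    The reported trend of thermal efficiency rising with compression ratio is consistent with ideal air-standard cycle analysis (an illustrative sketch only; a real diesel engine follows the Diesel cycle, whose efficiency also depends on the cut-off ratio, which this simplified Otto-cycle formula ignores):

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal air-standard Otto-cycle thermal efficiency for compression ratio r:
    eta = 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

# The three compression ratios tested in the study
for r in (15.5, 16.5, 17.5):
    print(f"r = {r}: ideal eta = {otto_efficiency(r):.3f}")
```

The monotone increase with r mirrors the experimental observation that 17.5 gives the highest efficiency.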

  5. Point-ahead limitation on reciprocity tracking [in earth-space optical link]

    NASA Technical Reports Server (NTRS)

    Shapiro, J. H.

    1975-01-01

    The average power received at a spacecraft from a reciprocity-tracking transmitter is shown to be the free-space diffraction-limited result times a gain-reduction factor that is due to the point-ahead requirement. For a constant-power transmitter, the gain-reduction factor is approximately equal to the appropriate spherical-wave mutual-coherence function. For a constant-average-power transmitter, an exact expression is obtained for the gain-reduction factor.

  6. Development of Gradient Compression Garments for Protection Against Post Flight Orthostatic Intolerance

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Westby, C. M.; Platts, S. H.

    2010-01-01

    Orthostatic intolerance after space flight is still an issue for astronaut health. No in-flight countermeasure has been 100% effective to date. NASA currently uses an inflatable anti-gravity suit (AGS) during reentry, but this device is uncomfortable and loses effectiveness upon egress from the Shuttle. The Russian Space Agency currently uses a mechanical counter-pressure garment (Kentavr) that is difficult to adjust alone, and prolonged use may result in painful swelling at points where the garment is not continuous (feet, knees, and groin). To improve comfort, reduce upmass and stowage requirements, and control fabrication and maintenance costs, we have been evaluating a variety of gradient compression, mechanical counter-pressure garments, constructed from spandex and nylon, as a possible replacement for the current AGS. We have examined comfort and cardiovascular responses to knee-high garments in normovolemic subjects; thigh-high garments in hypovolemic subjects and in astronauts after space flight; and 1-piece, breast-high garments in hypovolemic subjects. These gradient compression garments provide 55 mmHg of compression over the ankle, decreasing linearly to 35 mmHg at the knee. In thigh-high versions the compression continues to decrease to 20 mmHg at the top of the leg, and for breast-high versions, to 15 mmHg over the abdomen. Measures of efficacy include increased tilt survival time, elevated blood pressure and stroke volume, and lower heart-rate response to orthostatic stress. Results from these studies indicate that the greater the magnitude of compression and the greater the area of coverage, the more effective the compression garment becomes. Therefore, we are currently testing a 3-piece breast-high compression garment on astronauts after short-duration flight. We chose a 3-piece garment consisting of thigh-high stockings and shorts, because it is easy to don and comfortable to wear, and should provide the same level of protection as the 1-piece

  7. 3-Tesla MRI-assisted detection of compression points in ulnar neuropathy at the elbow in correlation with intraoperative findings.

    PubMed

    Hold, Alina; Mayr-Riedler, Michael S; Rath, Thomas; Pona, Igor; Nierlich, Patrick; Breitenseher, Julia; Kasprian, Gregor

    2018-03-06

    Releasing the ulnar nerve from all entrapments is the primary objective of every surgical method for ulnar neuropathy at the elbow (UNE). The aim of this retrospective diagnostic study was to validate preoperative 3-Tesla MRI results by comparing the MRI findings with the intraoperative findings during endoscopic-assisted or open surgery. Preoperative MRI studies were assessed for the exact site of nerve compression by a radiologist not informed of the intraoperative findings. The localizations of compression were then correlated with the intraoperative findings obtained from the operative records, and percent agreement and Cohen's kappa (κ) values were calculated. Of a total of 41 elbows, there was complete agreement in 27 (65.8%) cases and partial agreement in another 12 (29.3%) cases; Cohen's kappa showed fair-to-moderate agreement. High-resolution MRI cannot replace thorough intraoperative visualization of the ulnar nerve and its surrounding structures, but it may provide valuable information in ambiguous cases or relapses.

  8. Melting point of high-purity germanium stable isotopes

    NASA Astrophysics Data System (ADS)

    Gavva, V. A.; Bulanov, A. D.; Kut'in, A. M.; Plekhovich, A. D.; Churbanov, M. F.

    2018-05-01

    The melting points (Tm) of the stable germanium isotopes 72Ge, 73Ge, 74Ge and 76Ge were determined by differential scanning calorimetry. Tm decreases with increasing isotopic atomic mass, by about 0.15 °C per unit of atomic mass, which qualitatively agrees with the value calculated by the Lindemann formula when the effect of "isotopic compression" of the unit cell is taken into account.

  9. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents; it can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document, and since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods as well. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
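
    The generic DAG-compression idea can be sketched as hash-consing of identical subtrees, so each distinct subtree is stored once (a minimal illustration of the general technique, not the algorithm of this paper):

```python
# Hypothetical sketch: an XML tree is modeled as nested (tag, children) tuples,
# and identical subtrees are shared via a canonicalization table.

def dag_compress(node, table=None):
    """node = (tag, (child, ...)). Returns a canonical node in which
    structurally identical subtrees are the same Python object."""
    if table is None:
        table = {}
    tag, children = node
    shared = tuple(dag_compress(c, table) for c in children)
    key = (tag, shared)
    return table.setdefault(key, key)  # reuse an existing identical subtree

leaf = ("name", ())
person = ("person", (leaf, leaf))
doc = ("people", (person, person, person))
compressed = dag_compress(doc)
# The three identical <person> subtrees now point to one shared object:
print(compressed[1][0] is compressed[1][1])  # True
```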

  10. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  11. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

    The need for data compression is not new. With humble beginnings, such as the use of acronyms and abbreviations in the spoken and written word, the methods for data compression became more advanced as the need for information grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique.

  12. Compressible Flow Toolbox

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2006-01-01

    The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: The isentropic-flow equations, The Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), The Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), The normal-shock equations, The oblique-shock equations, and The expansion equations.
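
    For instance, the isentropic-flow relations the toolbox solves reduce, for the total-to-static ratios, to closed-form expressions (a language-independent sketch in Python; the function name is ours, not the toolbox's MATLAB API):

```python
def isentropic_ratios(mach, gamma=1.4):
    """Total-to-static temperature and pressure ratios for isentropic flow:
    T0/T = 1 + (gamma - 1)/2 * M**2,  p0/p = (T0/T)**(gamma/(gamma - 1))."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    return t_ratio, p_ratio

t, p = isentropic_ratios(1.0)     # sonic conditions for air
print(round(t, 3), round(p, 3))   # 1.2 1.893
```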

  13. Analytics-Driven Lossless Data Compression for Rapid In-situ Indexing, Storing, and Querying

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, John; Arkatkar, Isha; Lakshminarasimhan, Sriram

    2013-01-01

    The analysis of scientific simulations is highly data-intensive and is becoming an increasingly important challenge. Peta-scale data sets require the use of light-weight query-driven analysis methods, as opposed to heavy-weight schemes that optimize for speed at the expense of size. This paper is an attempt in the direction of query processing over losslessly compressed scientific data. We propose a co-designed double-precision compression and indexing methodology for range queries by performing unique-value-based binning on the most significant bytes of double precision data (sign, exponent, and most significant mantissa bits), and inverting the resulting metadata to produce an inverted index over a reduced data representation. Without the inverted index, our method matches or improves compression ratios over both general-purpose and floating-point compression utilities. The inverted index is light-weight, and the overall storage requirement for both reduced column and index is less than 135%, whereas existing DBMS technologies can require 200-400%. As a proof-of-concept, we evaluate univariate range queries that additionally return column values, a critical component of data analytics, against state-of-the-art bitmap indexing technology, showing multi-fold query performance improvements.
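
    The byte-level binning step can be illustrated in a few lines (a simplified sketch of the idea, not the authors' implementation; all names are ours):

```python
import struct
from collections import defaultdict

def msb_key(x, nbytes=2):
    """Most significant bytes of a big-endian IEEE-754 double:
    the sign bit, the exponent, and the leading mantissa bits."""
    return struct.pack(">d", x)[:nbytes]

def build_inverted_index(column, nbytes=2):
    """Bin values by unique MSB key and invert to key -> row ids."""
    index = defaultdict(list)
    for row, value in enumerate(column):
        index[msb_key(value, nbytes)].append(row)
    return index

data = [1.0, 1.001, 2.5, 1.0005, 1e6]
idx = build_inverted_index(data)
# Rows whose doubles share 1.0's sign/exponent/leading-mantissa bytes:
print(idx[msb_key(1.0)])  # [0, 1, 3]
```

A range query can then scan only the bins whose key ranges overlap the query, touching the reduced representation rather than the full column.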

  14. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis show that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions for antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  15. Compressed Sensing for Chemistry

    NASA Astrophysics Data System (ADS)

    Sanders, Jacob Nathan

    Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ 1-optimization problem. This thesis represents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of the second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules. The
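
    The sparse-recovery step at the heart of compressed sensing can be illustrated with a compact greedy solver (the thesis uses ℓ1-optimization; we substitute orthogonal matching pursuit here as a stand-in, on synthetic data with names of our own choosing):

```python
import numpy as np

def omp(A, b, sparsity):
    """Orthogonal matching pursuit: greedily recover a sparse x with A @ x = b
    by repeatedly picking the column most correlated with the residual."""
    residual, support = b.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

# A 2-sparse "spectrum" recovered from 40 random measurements of a
# length-100 vector, i.e. well below the Nyquist count.
rng = np.random.default_rng(1)
n, m = 100, 40
x_true = np.zeros(n)
x_true[[5, 30]] = [4.0, -5.0]
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true
x_rec = omp(A, b, sparsity=2)
print(np.allclose(x_rec, x_true, atol=1e-8))
```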

  16. Simulations of in situ x-ray diffraction from uniaxially compressed highly textured polycrystalline targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGonegle, David, E-mail: d.mcgonegle1@physics.ox.ac.uk; Wark, Justin S.; Higginbotham, Andrew

    2015-08-14

    A growing number of shock compression experiments, especially those involving laser compression, are taking advantage of in situ x-ray diffraction as a tool to interrogate structure and microstructure evolution. Although these experiments are becoming increasingly sophisticated, there has been little work on exploiting the textured nature of polycrystalline targets to gain information on sample response. Here, we describe how to generate simulated x-ray diffraction patterns from materials with an arbitrary texture function subject to a general deformation gradient. We will present simulations of Debye-Scherrer x-ray diffraction from highly textured polycrystalline targets that have been subjected to uniaxial compression, as maymore » occur under planar shock conditions. In particular, we study samples with a fibre texture, and find that the azimuthal dependence of the diffraction patterns contains information that, in principle, affords discrimination between a number of similar shock-deformation mechanisms. For certain cases, we compare our method with results obtained by taking the Fourier transform of the atomic positions calculated by classical molecular dynamics simulations. Illustrative results are presented for the shock-induced α–ϵ phase transition in iron, the α–ω transition in titanium and deformation due to twinning in tantalum that is initially preferentially textured along [001] and [011]. The simulations are relevant to experiments that can now be performed using 4th generation light sources, where single-shot x-ray diffraction patterns from crystals compressed via laser-ablation can be obtained on timescales shorter than a phonon period.« less

  17. Simulations of in situ x-ray diffraction from uniaxially compressed highly textured polycrystalline targets

    DOE PAGES

    McGonegle, David; Milathianaki, Despina; Remington, Bruce A.; ...

    2015-08-11

    A growing number of shock compression experiments, especially those involving laser compression, are taking advantage of in situ x-ray diffraction as a tool to interrogate structure and microstructure evolution. Although these experiments are becoming increasingly sophisticated, there has been little work on exploiting the textured nature of polycrystalline targets to gain information on sample response. Here, we describe how to generate simulated x-ray diffraction patterns from materials with an arbitrary texture function subject to a general deformation gradient. We will present simulations of Debye-Scherrer x-ray diffraction from highly textured polycrystalline targets that have been subjected to uniaxial compression, as maymore » occur under planar shock conditions. In particular, we study samples with a fibre texture, and find that the azimuthal dependence of the diffraction patterns contains information that, in principle, affords discrimination between a number of similar shock-deformation mechanisms. For certain cases, we compare our method with results obtained by taking the Fourier transform of the atomic positions calculated by classical molecular dynamics simulations. Illustrative results are presented for the shock-induced α–ϵ phase transition in iron, the α–ω transition in titanium and deformation due to twinning in tantalum that is initially preferentially textured along [001] and [011]. In conclusion, the simulations are relevant to experiments that can now be performed using 4th generation light sources, where single-shot x-ray diffraction patterns from crystals compressed via laser-ablation can be obtained on timescales shorter than a phonon period.« less

  18. Compressible Heating in the Condensed Phase due to Pore Collapse in HMX

    NASA Astrophysics Data System (ADS)

    Zhang, Ju; Jackson, Thomas

    Axisymmetric pore collapse in HMX is studied numerically by solving the multi-phase reactive Euler equations, and the generation of hot spots in the condensed phase due to compressible heating is examined. The motivation is to improve the understanding of the role of embedded cavities in the initiation of reaction in explosives, and to investigate the effect of hot spots in the condensed phase due to compressible heating alone, complementing previous studies of hot spots due to reaction in the gas phase and at the interface. It is found that the shock-cavity interaction results in pressures, and thus temperatures, that are substantially higher than the post-shock values in the condensed phase. However, these hot spots due to compressible heating alone do not appear to be sufficiently hot to lead to ignition at shock pressures of 1-3 GPa; compressible heating in the condensed phase may therefore be excluded as a mechanism for initiation of explosives. It should be pointed out that the ignition threshold temperature of hot spots, the so-called ``switch-on'' temperature, depends on the chemical kinetics parameters: the switch-on temperature is lower for faster reaction rates. The present chemical kinetics parameters are based on previous experimental work. This work was supported in part by the Defense Threat Reduction Agency and by the U.S. Department of Energy.

  19. Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Fu, Juan; Chen, Xiaoqian; Huang, Yiyong

    2013-12-01

    Gauging the liquid fuel mass in a tank on spacecraft under microgravity conditions is a difficult job. Without strong buoyancy, the configuration of the liquid and gas in the tank is uncertain and more than one bubble may exist in the liquid, all of which affects the measurement accuracy of a liquid mass gauge, especially for the method called Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance and bubble resonance. A ground experimental apparatus was designed and built to validate the gauging method and to study the influence of different compression frequencies, at different fill levels, on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing the test data. Results demonstrate that the ground experiment system performs well with high accuracy, and that the measurement accuracy increases as the compression frequency climbs at low fill levels, whereas low compression frequencies are the better choice at high fill levels. Liquid sloshing degrades the measurement accuracy when the surface is excited into waves by an external disturbance at the liquid natural frequency, but the accuracy remains acceptable under small-amplitude vibration.
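
    The underlying CMG principle can be sketched from the gas law (an illustrative sketch, not the authors' implementation; the adiabatic assumption, gamma, and all numbers below are our own assumed values): a piston modulates the tank volume by a small dV at the chosen compression frequency, and for adiabatic compression of the gas, dP/P = -gamma * dV / V_gas, so the gas volume, and hence the liquid volume, follows from the measured pressure response.

```python
def liquid_volume(v_tank, p_mean, dp_amplitude, dv_amplitude, gamma=1.4):
    """Infer liquid volume from the pressure response to a volume modulation,
    assuming adiabatic compression of the gas (ullage) volume."""
    v_gas = gamma * p_mean * dv_amplitude / dp_amplitude
    return v_tank - v_gas

# Tank of 1.0 m^3, 60% filled: V_gas = 0.4 m^3 at 100 kPa. A 1e-4 m^3
# modulation then yields dP = 1.4 * 1e5 * 1e-4 / 0.4 = 35 Pa.
print(liquid_volume(v_tank=1.0, p_mean=1e5, dp_amplitude=35.0,
                    dv_amplitude=1e-4))  # ≈ 0.6 m^3 of liquid
```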

  20. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust, real-time tracking, and compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers are based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is then obtained by compressing the normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
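
    The compression step, projecting a high-dimensional feature through a sparse random Gaussian measurement matrix, can be sketched as follows (a generic compressive-sensing illustration; the dimensions and names are assumptions, not the paper's parameters):

```python
import numpy as np

def sparse_gaussian_matrix(k, n, density=0.1, seed=0):
    """Sparse random Gaussian measurement matrix: mostly zeros, with the
    nonzero entries drawn from a standard normal distribution."""
    rng = np.random.default_rng(seed)
    mask = rng.random((k, n)) < density
    return mask * rng.normal(size=(k, n))

def compress_feature(feature, measurement):
    """Project a high-dimensional feature to k compressed coefficients."""
    return measurement @ feature

n, k = 10000, 50                  # high-dimensional feature -> 50 numbers
R = sparse_gaussian_matrix(k, n)
feature = np.random.default_rng(1).random(n)
print(compress_feature(feature, R).shape)  # (50,)
```

Because the matrix is sparse, each compressed coefficient touches only a small fraction of the feature entries, which is what makes the projection cheap enough for real-time tracking.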

  1. Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation: a manikin study.

    PubMed

    Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun

    2013-07-01

    Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression; duty cycle is the fraction of each compression cycle occupied by the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p<0.001 for both). A linear increase of average chest compression depth with increasing compression rate was observed only with the normal down-stroke pattern (p=0.004). Induction of a shorter compression phase is correlated with deeper chest compression during metronome-guided cardiopulmonary resuscitation.

  2. Transmitter and receiver antenna gain analysis for laser radar and communication systems

    NASA Technical Reports Server (NTRS)

    Klein, B. J.; Degnan, J. J.

    1973-01-01

    A comprehensive and fairly self-contained study of centrally obscured optical transmitting and receiving antennas is presented, intended for use by the laser radar and communication systems designer. The material is presented in a format which allows rapid and accurate evaluation of antenna gain. The Fresnel approximation to scalar wave theory is reviewed and the antenna analysis proceeds in terms of the power gain; conventional range equations may then be used to calculate the power budget. The transmitter calculations, resulting in near- and far-field antenna gain patterns, assume the antenna is illuminated by a laser operating in the fundamental cavity mode. A simple equation is derived for matching the incident source distribution to a general antenna configuration for maximum on-axis gain. An interpretation of the resulting gain curves allows a number of auxiliary design curves to be drawn which display the losses in antenna gain due to pointing errors and the cone angle of the outgoing beam as a function of antenna size and central obscuration. The use of telescope defocusing as an approach to spreading the beam for target acquisition is compared with some alternative methods.

  3. A GPU-accelerated implicit meshless method for compressible flows

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes race conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in time. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, is carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart, and the computational efficiency of the implicit method can be further improved by ten to fifteen times on the GPU.
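
    The coloring idea can be sketched with a standard greedy graph-coloring pass (our own minimal illustration of the general technique, not the paper's CUDA Fortran implementation):

```python
def rainbow_color(points, neighbors):
    """Greedy coloring: each point gets the smallest color absent from its
    already-colored neighbors. Same-colored points are never adjacent, so
    they carry no mutual data dependency and can be updated in parallel
    within one color group of an LU-SGS sweep."""
    color = {}
    for p in points:
        used = {color[q] for q in neighbors[p] if q in color}
        c = 0
        while c in used:
            c += 1
        color[p] = c
    return color

# A small 1D chain of computational points: 0-1-2-3
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors = rainbow_color([0, 1, 2, 3], neighbors)
print(colors)  # {0: 0, 1: 1, 2: 0, 3: 1}
```

Points of color 0 ({0, 2}) can then be processed simultaneously, followed by points of color 1 ({1, 3}), reproducing the color-by-color sweep described above.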

  4. What causes the buoyancy reversal in compressible convection?

    NASA Technical Reports Server (NTRS)

    Chan, K. L.

    1983-01-01

    The problem posed by the existence of a negative buoyancy work region at the top of cellular type convection in a deeply stratified superadiabatic layer (Massaguer and Zahn, 1980) is addressed. It is approached by studying two-dimensional cellular compressible convection with different physical parameters. The results suggest that a large viscosity, together with density stratification, is responsible for the buoyancy reversal. The numerical results obtained are analyzed. It is pointed out, however, that in an astrophysical situation a fluid involved in convection will generally have very small viscosity. It is therefore thought unlikely that buoyancy reversal occurs in this way.

  5. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  6. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  7. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  8. Damage development under compression-compression fatigue loading in a stitched uniwoven graphite/epoxy composite material

    NASA Technical Reports Server (NTRS)

    Vandermey, Nancy E.; Morris, Don H.; Masters, John E.

    1991-01-01

    Damage initiation and growth under compression-compression fatigue loading were investigated for a stitched uniweave material system with an underlying AS4/3501-6 quasi-isotropic layup. The performance of unnotched specimens having stitch rows at either 0 or 90 degrees to the loading direction was compared, with special attention given to the effects of stitching-related manufacturing defects. Damage evaluation techniques included edge replication, stiffness monitoring, x-ray radiography, residual compressive strength, and laminate sectioning. It was found that the manufacturing defect of inclined stitches had the greatest adverse effect on material performance, while 0-degree and 90-degree specimen performance was generally the same. The stitches were the source of damage initiation, but they also slowed damage propagation both along the length and across the width and affected through-the-thickness damage growth; a pinched-layer zone formed by the stitches particularly affected damage initiation and growth. The compressive failure mode was transverse shear for all specimens, in both static compression and fatigue cycling.

  9. Development of structural and material clavicle response corridors under axial compression and three point bending loading for clavicle finite element model validation.

    PubMed

    Zhang, Qi; Kindig, Matthew; Li, Zuoping; Crandall, Jeff R; Kerrigan, Jason R

    2014-08-22

Clavicle injuries are frequently observed in automotive side and frontal crashes. Finite element (FE) models have been developed to understand the injury mechanism, although no clavicle loading response corridors yet exist in the literature to ensure the response biofidelity of such models. Moreover, the typically developed structural-level (e.g., force-deflection) response corridors were shown to be insufficient for verifying the injury prediction capacity of an FE model, which is usually based on strain-related injury criteria. Therefore, the purpose of this study was to develop both structural (force vs deflection) and material-level (strain vs force) clavicle response corridors for validating FE models for injury risk modeling. Twenty clavicles were loaded to failure under loading conditions representative of side and frontal crashes, respectively: half in axial compression and the other half in three-point bending. Both structural and material response corridors were developed for each loading condition. An FE model that can accurately predict both structural response and strain level provides a more useful tool in injury risk modeling and prediction. The corridor development method in this study could also be extended to develop corridors for other components of the human body. Copyright © 2014 Elsevier Ltd. All rights reserved.
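A common way to construct such corridors is a pointwise mean ± one standard deviation across aligned response curves; the sketch below illustrates that generic construction (it assumes the curves have already been resampled onto a common deflection axis, and is not necessarily the averaging procedure used in the paper).

```python
import numpy as np

def response_corridor(curves):
    """Pointwise mean +/- 1 SD corridor over aligned response curves.

    `curves`: list of equal-length force (or strain) arrays, each sampled
    on the same deflection (or force) axis.
    """
    stack = np.vstack(curves)
    mean = stack.mean(axis=0)
    sd = stack.std(axis=0, ddof=1)   # sample standard deviation per point
    return mean - sd, mean, mean + sd
```

An FE model response would then be judged acceptable if its curve stays within the lower/upper bounds.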

  10. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10(-2) Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps-typically used in fine grained water diffusion experiments-we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
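The linear interframe predictor idea can be illustrated with a short sketch (a simplified stand-in for the authors' compressor, not their implementation): each frame is predicted by linear extrapolation from the two previously reconstructed frames, and only the quantized residual would be entropy-coded.

```python
import numpy as np

def compress(frames, bits=12):
    """Linear interframe predictor sketch: store the first two frames raw,
    then for each later frame quantize (frame - prediction) uniformly."""
    scale = 2 ** (bits - 1) - 1
    recon = [frames[0].copy(), frames[1].copy()]
    stream = []
    for f in frames[2:]:
        pred = 2.0 * recon[-1] - recon[-2]        # linear extrapolation
        resid = f - pred
        rng = max(np.abs(resid).max(), 1e-12)     # per-frame quantizer range
        q = np.round(resid / rng * scale).astype(np.int16)
        stream.append((rng, q))                   # what would be entropy-coded
        recon.append(pred + q * (rng / scale))    # decoder-side reconstruction
    return recon, stream
```

Because each residual is taken against the reconstructed (not the true) prediction, quantization error does not accumulate across frames.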

  11. Vertebral Augmentation Involving Vertebroplasty or Kyphoplasty for Cancer-Related Vertebral Compression Fractures: An Economic Analysis.

    PubMed

    2016-01-01

    Untreated vertebral compression fractures can have serious clinical consequences and impose a considerable impact on patients' quality of life and on caregivers. Since non-surgical management of these fractures has limited effectiveness, vertebral augmentation procedures are gaining acceptance in clinical practice for pain control and fracture stabilization. The objective of this analysis was to determine the cost-effectiveness and budgetary impact of kyphoplasty or vertebroplasty compared with non-surgical management for the treatment of vertebral compression fractures in patients with cancer. We performed a systematic review of health economic studies to identify relevant studies that compare the cost-effectiveness of kyphoplasty or vertebroplasty with non-surgical management for the treatment of vertebral compression fractures in adults with cancer. We also performed a primary cost-effectiveness analysis to assess the clinical benefits and costs of kyphoplasty or vertebroplasty compared with non-surgical management in the same population. We developed a Markov model to forecast benefits and harms of treatments, and corresponding quality-adjusted life years and costs. Clinical data and utility data were derived from published sources, while costing data were derived using Ontario administrative sources. We performed sensitivity analyses to examine the robustness of the results. In addition, a 1-year budget impact analysis was performed using data from Ontario administrative sources. Two scenarios were explored: (a) an increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario, maintaining the current proportion of kyphoplasty versus vertebroplasty; and (b) no increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario but an increase in the proportion of kyphoplasties versus vertebroplasties. 
The base case considered each of kyphoplasty and vertebroplasty
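The Markov cohort approach mentioned above can be sketched generically (states, transition probabilities, and utilities below are placeholders, not values from the study):

```python
def markov_qalys(trans, utilities, cycles, start=0, discount=0.015):
    """Markov cohort sketch: propagate a state-occupancy distribution through
    a transition matrix and accumulate discounted quality-adjusted life years."""
    n = len(utilities)
    dist = [0.0] * n
    dist[start] = 1.0
    total = 0.0
    for t in range(cycles):
        # expected utility of the cohort this cycle, discounted to present value
        total += sum(p * u for p, u in zip(dist, utilities)) / (1 + discount) ** t
        # advance the cohort one cycle through the transition matrix
        dist = [sum(dist[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return total
```

Comparing strategies then amounts to running the same model with treatment-specific transition probabilities (and an analogous accumulator for costs).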

  12. Vertebral Augmentation Involving Vertebroplasty or Kyphoplasty for Cancer-Related Vertebral Compression Fractures: An Economic Analysis

    PubMed Central

    2016-01-01

    Background Untreated vertebral compression fractures can have serious clinical consequences and impose a considerable impact on patients' quality of life and on caregivers. Since non-surgical management of these fractures has limited effectiveness, vertebral augmentation procedures are gaining acceptance in clinical practice for pain control and fracture stabilization. The objective of this analysis was to determine the cost-effectiveness and budgetary impact of kyphoplasty or vertebroplasty compared with non-surgical management for the treatment of vertebral compression fractures in patients with cancer. Methods We performed a systematic review of health economic studies to identify relevant studies that compare the cost-effectiveness of kyphoplasty or vertebroplasty with non-surgical management for the treatment of vertebral compression fractures in adults with cancer. We also performed a primary cost-effectiveness analysis to assess the clinical benefits and costs of kyphoplasty or vertebroplasty compared with non-surgical management in the same population. We developed a Markov model to forecast benefits and harms of treatments, and corresponding quality-adjusted life years and costs. Clinical data and utility data were derived from published sources, while costing data were derived using Ontario administrative sources. We performed sensitivity analyses to examine the robustness of the results. In addition, a 1-year budget impact analysis was performed using data from Ontario administrative sources. Two scenarios were explored: (a) an increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario, maintaining the current proportion of kyphoplasty versus vertebroplasty; and (b) no increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario but an increase in the proportion of kyphoplasties versus vertebroplasties. Results The base case considered each of

  13. An optimal output feedback gain variation scheme for the control of plants exhibiting gross parameter changes

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.

    1987-01-01

A concept for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes was developed. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories which are arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.

  14. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    PubMed

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  15. MHD simulation of plasma compression experiments

    NASA Astrophysics Data System (ADS)

    Reynolds, Meritt; Barsky, Sandra; de Vietien, Peter

    2017-10-01

    General Fusion (GF) is working to build a magnetized target fusion (MTF) power plant based on compression of magnetically-confined plasma by liquid metal. GF is testing this compression concept by collapsing solid aluminum liners onto plasmas formed by coaxial helicity injection in a series of experiments called PCS (Plasma Compression, Small). We simulate the PCS experiments using the finite-volume MHD code VAC. The single-fluid plasma model includes temperature-dependent resistivity and anisotropic heat transport. The time-dependent curvilinear mesh for MHD simulation is derived from LS-DYNA simulations of actual field tests of liner implosion. We will discuss how 3D simulations reproduced instability observed in the PCS13 experiment and correctly predicted stabilization of PCS14 by ramping the shaft current during compression. We will also present a comparison of simulated Mirnov and x-ray diagnostics with experimental measurements indicating that PCS14 compressed well to a linear compression ratio of 2.5:1.

  16. The size effects upon shock plastic compression of nanocrystals

    NASA Astrophysics Data System (ADS)

    Malygin, G. A.; Klyavin, O. V.

    2017-10-01

For the first time, a theoretical analysis of size effects upon the shock plastic compression of nanocrystals is implemented in the context of a dislocation kinetic approach based on the equations and relationships of dislocation kinetics. The yield point τy of the crystals is established as a quantitative function of their cross-section size D and the shock deformation rate ε̇, scaling as τy ∝ ε̇^(2/3) D. This dependence is valid in the case of elastic stress relaxation on account of emission of dislocations from single-pole Frank-Read sources near the crystal surface.

  17. The influence of different sub-bandage pressure values on venous leg ulcers healing when treated with compression therapy.

    PubMed

    Milic, Dragan J; Zivic, Sasa S; Bogdanovic, Dragan C; Jovanovic, Milan M; Jankovic, Radmilo J; Milosevic, Zoran D; Stamenkovic, Dragan M; Trenkic, Marija S

    2010-03-01

Venous leg ulcers (VLU) have a huge social and economic impact. An estimated 1.5% of European adults will suffer a venous ulcer at some point in their lives. Despite the widespread use of bandaging with high pressure in the treatment of this condition, recurrence rates range from 25% to 70%. Numerous studies have suggested that the compression system should provide sub-bandage pressure values in the range from 35 mm Hg to 45 mm Hg in order to achieve the best possible healing results. An open, randomized, prospective, single-center study was performed in order to determine the healing rates of VLU when treated with different compression systems and different sub-bandage pressure values. One hundred thirty-one patients (72 women, 59 men; mean age, 59 years) with VLU (ulcer surface >3 cm²; duration >3 months) were randomized into three groups: group A - 42 patients who were treated using an open-toed, elastic, class III compression device knitted in tubular form (Tubulcus, Laboratoires Innothera, Arcueil, France); group B - 46 patients treated with the multi-component bandaging system comprised of Tubulcus and one elastic bandage (15 cm wide and 5 cm long with 200% stretch, Niva, Novi Sad, Serbia); and group C - 43 patients treated with the multi-component bandaging system comprised of Tubulcus and two elastic bandages. Pressure measurements were taken with the Kikuhime device (TT MediTrade, Soro, Denmark) at the B1 measuring point in the supine, sitting, and standing positions under the three different compression systems. The median resting values in the supine and standing positions in the examined study groups were as follows: group A - 36.2 mm Hg and 43.9 mm Hg; group B - 53.9 mm Hg and 68.2 mm Hg; group C - 74.0 mm Hg and 87.4 mm Hg. The healing rate during the 26-week treatment period was 25% (13/42) in group A, 67.4% (31/46) in group B, and 74.4% (32/43) in group C.
The success of compression treatment in group A was strongly associated with the

  18. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST... Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized for...

  19. Comparative study of oxihydrogen injection in turbocharged compression ignition engines

    NASA Astrophysics Data System (ADS)

    Barna, L.; Lelea, D.

    2018-01-01

This paper presents a comparative study, from the point of view of flue gas opacity, of a turbocharged compression-ignition engine equipped with an EGR valve when a maximum flow rate of 1 l/min of oxyhydrogen (produced by water electrolysis) is injected into its intake manifold at two different injection pressures, namely 100 Pa and 3000 Pa. We found a substantial reduction of flue gas opacity in both cases compared to conventional diesel operation, but in different proportions.

  20. Excessive Gestational Weight Gain and Subsequent Maternal Obesity at Age 40: A Hypothetical Intervention.

    PubMed

    Abrams, Barbara; Coyle, Jeremy; Cohen, Alison K; Headen, Irene; Hubbard, Alan; Ritchie, Lorrene; Rehkopf, David H

    2017-09-01

    To model the hypothetical impact of preventing excessive gestational weight gain on midlife obesity and compare the estimated reduction with the US Healthy People 2020 goal of a 10% reduction of obesity prevalence in adults. We analyzed 3917 women with 1 to 3 pregnancies in the prospective US National Longitudinal Survey of Youth, from 1979 to 2012. We compared the estimated obesity prevalence between 2 scenarios: gestational weight gain as reported and under the scenario of a hypothetical intervention that all women with excessive gestational weight gain instead gained as recommended by the Institute of Medicine (2009). A hypothetical intervention was associated with a significantly reduced estimated prevalence of obesity for first (3.3 percentage points; 95% confidence interval [CI] = 1.0, 5.6) and second (3.0 percentage points; 95% CI = 0.7, 5.2) births, and twice as high in Black as in White mothers, but not significant in Hispanics. The population attributable fraction was 10.7% (95% CI = 3.3%, 18.1%) in first and 9.3% (95% CI = 2.2%, 16.5%) in second births. Development of effective weight-management interventions for childbearing women could lead to meaningful reductions in long-term obesity.
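The population attributable fraction is the relative reduction in prevalence under the counterfactual scenario. As a formula sketch (the 30.8% baseline used below is a back-calculated illustration consistent with the reported figures, not a number taken from the study):

```python
def population_attributable_fraction(p_observed, p_counterfactual):
    """PAF: share of observed prevalence that would be removed if the
    exposure (here, excessive gestational weight gain) were eliminated."""
    return (p_observed - p_counterfactual) / p_observed
```

A 3.3 percentage-point drop from an assumed 30.8% baseline prevalence reproduces a PAF of roughly 10.7%, matching the reported first-birth estimate.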

  1. Compressing with dominant hand improves quality of manual chest compressions for rescuers who performed suboptimal CPR in manikins.

    PubMed

    Wang, Juan; Tang, Ce; Zhang, Lei; Gong, Yushun; Yin, Changlin; Li, Yongqin

    2015-07-01

The question of whether the placement of the dominant hand against the sternum could improve the quality of manual chest compressions remains controversial. In the present study, we evaluated the influence of dominant vs nondominant hand positioning on the quality of conventional cardiopulmonary resuscitation (CPR) during prolonged basic life support (BLS) by rescuers who performed optimal and suboptimal compressions. Six months after completing a standard BLS training course, 101 medical students were instructed to perform adult single-rescuer BLS for 8 minutes on a manikin with a randomized hand position. Twenty-four hours later, the students placed the opposite hand in contact with the sternum while performing CPR. Those with an average compression depth of less than 50 mm were considered suboptimal. Participants who had performed suboptimal compressions were significantly shorter (170.2 ± 6.8 vs 174.0 ± 5.6 cm, P = .008) and lighter (58.9 ± 7.6 vs 66.9 ± 9.6 kg, P < .001) than those who performed optimal compressions. No significant differences in CPR quality were observed between dominant and nondominant hand placements for those who had an average compression depth of greater than 50 mm. However, both the compression depth (49.7 ± 4.2 vs 46.5 ± 4.1 mm, P = .003) and proportion of chest compressions with an appropriate depth (47.6% ± 27.8% vs 28.0% ± 23.4%, P = .006) were remarkably higher when compressing the chest with the dominant hand against the sternum for those who performed suboptimal CPR. Chest compression quality significantly improved when the dominant hand was placed against the sternum for those who performed suboptimal compressions during conventional CPR. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data, is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simple look-up-table decoding and direct use of the extracted features to reduce user computation for either image reconstruction or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data that describe the spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multispectral images from LANDSAT and other sources.
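In outline, the CCA couples a clustering step (feature extraction) with a per-pixel index map that is then source-encoded. A toy sketch of that clustering/feature-map stage, using plain k-means as a stand-in for the algorithm's spatially local clustering:

```python
import numpy as np

def cluster_compress(pixels, k=4, iters=10):
    """Toy feature extraction: k cluster centers (spectral 'features') plus a
    per-pixel index array (the 'feature map') that a source coder would encode."""
    # naive deterministic initialization: evenly spaced sample pixels
    centers = pixels[:: max(1, len(pixels) // k)][:k].astype(float)
    for _ in range(iters):
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)            # nearest center per pixel
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def cluster_decompress(centers, labels):
    return centers[labels]   # look-up-table decoding
```

Decoding is a single table look-up per pixel, which is the property the abstract highlights for reducing user computation.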

  3. Simultaneous Application of Fibrous Piezoresistive Sensors for Compression and Traction Detection in Glass Laminate Composites

    PubMed Central

    Nauman, Saad; Cristian, Irina; Koncar, Vladan

    2011-01-01

    This article describes further development of a novel Non Destructive Evaluation (NDE) approach described in one of our previous papers. Here these sensors have been used for the first time as a Piecewise Continuous System (PCS), which means that they are not only capable of following the deformation pattern but can also detect distinctive fracture events. In order to characterize the simultaneous compression and traction response of these sensors, multilayer glass laminate composite samples were prepared for 3-point bending tests. The laminate sample consisted of five layers of plain woven glass fabrics placed one over another. The sensors were placed at two strategic locations during the lay-up process so as to follow traction and compression separately. The reinforcements were then impregnated in epoxy resin and later subjected to 3-point bending tests. An appropriate data treatment and recording device has also been developed and used for simultaneous data acquisition from the two sensors. The results obtained, under standard testing conditions have shown that our textile fibrous sensors can not only be used for simultaneous detection of compression and traction in composite parts for on-line structural health monitoring but their sensitivity and carefully chosen location inside the composite ensures that each fracture event is indicated in real time by the output signal of the sensor. PMID:22163707

  4. Simultaneous application of fibrous piezoresistive sensors for compression and traction detection in glass laminate composites.

    PubMed

    Nauman, Saad; Cristian, Irina; Koncar, Vladan

    2011-01-01

    This article describes further development of a novel Non Destructive Evaluation (NDE) approach described in one of our previous papers. Here these sensors have been used for the first time as a Piecewise Continuous System (PCS), which means that they are not only capable of following the deformation pattern but can also detect distinctive fracture events. In order to characterize the simultaneous compression and traction response of these sensors, multilayer glass laminate composite samples were prepared for 3-point bending tests. The laminate sample consisted of five layers of plain woven glass fabrics placed one over another. The sensors were placed at two strategic locations during the lay-up process so as to follow traction and compression separately. The reinforcements were then impregnated in epoxy resin and later subjected to 3-point bending tests. An appropriate data treatment and recording device has also been developed and used for simultaneous data acquisition from the two sensors. The results obtained, under standard testing conditions have shown that our textile fibrous sensors can not only be used for simultaneous detection of compression and traction in composite parts for on-line structural health monitoring but their sensitivity and carefully chosen location inside the composite ensures that each fracture event is indicated in real time by the output signal of the sensor.

  5. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. As a nonlinear encryption system, the proposed cryptosystem simultaneously decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
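The cycle shift re-encryption step can be sketched independently of the measurement stage; here a keyed pseudo-random sequence stands in for the hyper-chaotic generator:

```python
import numpy as np

def keyed_shifts(key, n_rows, width):
    """One cyclic offset per row; a seeded PRNG stands in for the paper's
    hyper-chaotic sequence."""
    return np.random.default_rng(key).integers(0, width, size=n_rows)

def cycle_shift(img, shifts, decrypt=False):
    """Row-wise cyclic shift of a 2D array; decryption rolls the other way."""
    sign = -1 if decrypt else 1
    out = img.copy()
    for r, s in enumerate(shifts):
        out[r] = np.roll(out[r], sign * int(s))
    return out
```

Because `np.roll` is a permutation, decryption with the same key sequence recovers the input exactly.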

  6. MP3 compression of Doppler ultrasound signals.

    PubMed

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology.
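The bracketed compression ratios follow directly from the bit rates:

```python
def compression_ratio(source_kbps, compressed_kbps):
    """Ratio of uncompressed to compressed bit rate."""
    return source_kbps / compressed_kbps
```

With a 1400 kbps source, 128 kbps gives about 11:1, 64 kbps about 22:1, and 32 kbps about 44:1, matching the abstract.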

  7. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
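Much of the gain over whole-file gzip comes from compressing homogeneous data together rather than as interleaved records. A stdlib-only illustration of that principle (not the FITS convention itself):

```python
import struct
import zlib

# A toy "table": an int32 column and a float64 column, stored two ways.
rows = [(i, 1000.0 + 0.5 * i) for i in range(1000)]

# Row-major: records interleave the two column types.
row_major = b"".join(struct.pack("<id", i, x) for i, x in rows)

# Column-major: each column's homogeneous bytes grouped together.
col_major = (b"".join(struct.pack("<i", i) for i, _ in rows)
             + b"".join(struct.pack("<d", x) for _, x in rows))

row_size = len(zlib.compress(row_major))
col_size = len(zlib.compress(col_major))
# Grouping homogeneous bytes typically lets the compressor find more
# redundancy, so col_size is usually the smaller of the two.
```

On this regular toy data the column-major layout compresses noticeably better; real FITS tables show the same qualitative effect.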

  8. Predicting risk of substantial weight gain in German adults-a multi-center cohort approach.

    PubMed

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf; Steffen, Annika

    2017-08-01

    A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score's discriminatory accuracy. The cross-validated c index (95% CI) was 0.71 (0.67-0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association.
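Operationally, such a score is a weighted sum of predictor values compared against a threshold; a minimal sketch (coefficients and point scaling below are illustrative, not the published ones):

```python
def risk_score(betas, values):
    """Linear predictor: Cox-model beta coefficients times predictor values,
    rescaled to integer points (the x100 scaling here is illustrative)."""
    return round(sum(b * x for b, x in zip(betas, values)) * 100)

def high_risk(score, cutoff=475):
    """Apply the published cutoff: >= 475 points flags elevated risk."""
    return score >= cutoff
```

At that cutoff the study reports 71% sensitivity and 63% specificity, so the score is a screening aid rather than a diagnostic rule.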

  9. Predicting risk of substantial weight gain in German adults—a multi-center cohort approach

    PubMed Central

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf

    2017-01-01

    Abstract Background A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. Methods We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score’s discriminatory accuracy. Results The cross-validated c index (95% CI) was 0.71 (0.67–0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. Conclusions The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. PMID:28013243

  10. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... DEPARTMENT OF ENERGY Research and Development Strategies for Compressed & Cryo- Compressed Hydrogen Storage Workshops AGENCY: Fuel Cell Technologies Program, Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of meeting. SUMMARY: The Systems Integration group of...

  11. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  12. Contact Analog/Compressed Symbology Heading Tape Assessment

    NASA Technical Reports Server (NTRS)

    Shively, R. Jay; Atencio, Adolph; Turpin, Terry; Dowell, Susan

    2002-01-01

A simulation assessed the performance, handling qualities, and workload associated with a contact-analog, world-referenced heading tape as implemented on the Comanche Helmet Integrated Display Sight System (HIDSS), compared with a screen-fixed, compressed heading tape. Six pilots (four active-duty Army aviators and two civilians) flew three ADS-33 maneuvers and a traffic pattern in the Ames Vertical Motion Simulation facility. Small but statistically significant advantages were found for the compressed symbology in handling qualities, workload, and some of the performance measures. It should be noted, however, that the level of performance and handling qualities for both symbology sets fell within acceptable tolerance levels. Both symbology sets yielded satisfactory handling qualities and performance in velocity-stabilization mode and adequate handling qualities in the automatic flight control mode. Pilot comments about the contact-analog symbology highlighted the lack of useful rate-of-change information in the heading tape and "blurring" due to its rapid movement. These issues warrant further study. Care must be taken in interpreting the operational significance of these results: the symbology sets yielded categorically similar data, i.e., acceptable handling qualities and adequate performance, so while the results point to the need for further study, their operational significance has yet to be determined.

  13. An Ultra-Low Power Turning Angle Based Biomedical Signal Compression Engine with Adaptive Threshold Tuning.

    PubMed

    Zhou, Jun; Wang, Chao

    2017-08-06

Intelligent sensing is drastically changing our everyday life, including healthcare, through biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates a tremendous data volume and demands significant wireless transmission power, which poses a major challenge for wearable healthcare sensors usually powered by batteries. Efficient compression engine design to reduce the wireless transmission data rate with ultra-low power consumption is essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption, while achieving a large compression ratio (CR) and good quality of the reconstructed signal. A near-threshold design technique is adopted to further reduce power consumption at the circuit level. In addition, the angle threshold for compression can be adaptively tuned according to the error between the original and reconstructed signals, to address the variation of signal characteristics from person to person or from channel to channel and meet the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average compression ratio (CR) of 71.08% and a percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems to sense and collect healthcare data for remote data analytics.
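The two figures of merit quoted above, compression ratio (CR) and percentage root-mean-square difference (PRD), can be computed as in this minimal sketch. The CR convention shown (percent data reduction, consistent with a CR quoted as 71.08%) and the toy signal are assumptions, not taken from the paper.

```python
import math

# Sketch of the two quality metrics quoted in the abstract: CR expressed
# as percent data reduction, and PRD between original and reconstructed
# signals. The convention and the toy values are illustrative.

def compression_ratio_pct(n_original, n_compressed):
    """CR as percent reduction: 71.08% means ~3.5x less data transmitted."""
    return 100.0 * (1 - n_compressed / n_original)

def prd_pct(original, reconstructed):
    """PRD = 100 * sqrt(sum((x - x_rec)^2) / sum(x^2))."""
    num = sum((x, y) and (x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

x = [1.0, 2.0, 3.0, 4.0]        # toy original samples
x_rec = [1.1, 1.9, 3.0, 4.1]    # toy reconstruction after lossy compression
cr = compression_ratio_pct(1000, 289)
prd = prd_pct(x, x_rec)
```

A lower PRD means a more faithful reconstruction; adaptive threshold tuning as described above trades CR against PRD per subject or channel.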

  14. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.

  15. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    PubMed

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

The study evaluates the use of Kollidon VA®64 and a combination of Kollidon VA®64 with Kollidon VA®64 Fine as excipients in a direct-compression tablet process. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. Inter-particulate void space is higher for such an excipient due to the hollow structure of the Kollidon VA®64 particles. During tablet compression, air remains trapped in the blend, resulting in poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine is evaluated by design of experiment (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 reveals morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA®64 and two mixes containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials yielded the optimum composition for direct tablet compression: Kollidon VA®64 and Kollidon VA®64 Fine in a ratio of 77:23. This combination, compressed with the parameters predicted by the statistical model (main compression force between 5 and 15 kN, pre-compression force between 2 and 3 kN, feeder speed fixed at 25 rpm, and compression speed of 45-49 rpm), produced tablets with hardness ranging between 19 and 21 kp and no friability, capping, or lamination issues.

  16. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.

  17. Lossless compression of otoneurological eye movement signals.

    PubMed

    Tossavainen, Timo; Juhola, Martti

    2002-12-01

We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals, their compression had not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and an entropy coder to encode the residual. We found that these eye movement signals, recorded at 400 Hz with 13-bit amplitude resolution, could be losslessly compressed at a ratio of about 2.7.
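The "predictor to decorrelate, entropy coder to encode the residual" model can be illustrated with a minimal sketch: a previous-sample predictor plus a zeroth-order entropy estimate of the residual, which lower-bounds the coded bits per sample. The predictor choice and toy trace are illustrative assumptions; only the 13-bit amplitude figure comes from the abstract.

```python
import math
from collections import Counter

# Sketch of the common lossless-compression model from the abstract:
# decorrelate with a predictor, then entropy-code the residual. Here the
# predictor is simply the previous sample, and the zeroth-order entropy
# of the residual estimates the achievable bits per sample.

def residual_entropy_bits(samples):
    """Entropy (bits/sample) of first-difference residuals."""
    residuals = [b - a for a, b in zip(samples, samples[1:])]
    counts = Counter(residuals)
    n = len(residuals)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

signal = [100, 101, 103, 104, 104, 105, 107, 108]  # toy slow-varying trace
bits = residual_entropy_bits(signal)
ratio = 13 / max(bits, 1e-9)  # 13-bit raw amplitude vs. entropy-coded residual
```

Slowly varying physiological signals concentrate the residual distribution near zero, which is why even this trivial predictor yields a usable compression ratio.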

  18. Effect of uniaxial stress on electroluminescence, valence band modification, optical gain, and polarization modes in tensile strained p-AlGaAs/GaAsP/n-AlGaAs laser diode structures: Numerical calculations and experimental results

    NASA Astrophysics Data System (ADS)

    Bogdanov, E. V.; Minina, N. Ya.; Tomm, J. W.; Kissel, H.

    2012-11-01

The effects of uniaxial compression in the [110] direction on energy-band structures, heavy and light hole mixing, optical matrix elements, and gain in laser diodes with a "light hole up" configuration of valence band levels in GaAsP quantum wells with different widths and phosphorus contents are numerically calculated. The development of light and heavy hole mixing caused by symmetry lowering, and the converging behavior of light and heavy hole levels in such quantum wells under uniaxial compression, are displayed. The light or heavy hole nature of each level is established for all considered values of uniaxial stress. The results of optical gain calculations for TM and TE polarization modes show that uniaxial compression leads to a significant increase of the TE mode and a minor decrease of the TM mode. Electroluminescence experiments were performed under uniaxial compression up to 5 kbar at 77 K on a model laser diode structure (p-AlxGa1-xAs/GaAs1-yPy/n-AlxGa1-xAs) with y = 0.16 and a quantum well width of 14 nm. They reveal a maximum blue shift of 27 meV in the electroluminescence spectra, well described by the calculated change of the optical gap, and an increase in intensity attributed to TE mode enhancement. Numerical calculations and electroluminescence data indicate that uniaxial compression may be used for moderate tuning of the wavelength and the TM/TE intensity ratio.

  19. Resumption of gainful employment in aphasics: preliminary findings.

    PubMed

    Carriero, M R; Faglia, L; Vignolo, L A

    1987-12-01

We report preliminary data on aphasic patients who, in spite of their language problems, have succeeded in finding a reasonably satisfactory occupational resettlement. Patients who (a) still had a moderate to severe aphasia and (b) had resumed gainful employment requiring interpersonal communication were recalled for a check-up and assessed with: (1) a comprehensive aphasia test; (2) a semistructured interview including detailed questioning about the type of and reaction to aphasia, the type of work before the onset of aphasia, and the type of current work, with particular emphasis on the patients' compensatory mechanisms and emotional reactions. Results comprise 10 cases to date. One case is described in detail. Findings indicate that the ability to resume a gainful occupation is often greater than could be expected on the sole basis of formal language examination. Findings are discussed from neuropsychological, social and rehabilitation points of view.

  20. Spectroscopic evidence for negative electronic compressibility in a quasi-three-dimensional spin–orbit correlated metal

    DOE PAGES

    He, Junfeng; Hogan, T.; Mion, Thomas R.; ...

    2015-04-27

Negative compressibility is a sign of thermodynamic instability of open [1-3] or non-equilibrium [4,5] systems. In quantum materials consisting of multiple mutually coupled subsystems, the compressibility of one subsystem can be negative if it is countered by positive compressibility of the others. Manifestations of this effect have so far been limited to low-dimensional dilute electron systems [6-11]. Here, we present evidence from angle-resolved photoemission spectroscopy (ARPES) for negative electronic compressibility (NEC) in the quasi-three-dimensional (3D) spin–orbit correlated metal (Sr1-xLax)3Ir2O7. Increased electron filling accompanies an anomalous decrease of the chemical potential, as indicated by the overall movement of the deep valence bands. Such an anomaly, suggestive of NEC, is shown to be primarily driven by the lowering in energy of the conduction band as the correlated bandgap reduces. Our finding points to a distinct pathway towards an uncharted territory of NEC featuring bulk correlated metals with unique potential for applications in low-power nanoelectronics and novel metamaterials.

  1. Widefield compressive multiphoton microscopy.

    PubMed

    Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A

    2018-06-15

    A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.

  2. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  3. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace and can also be used in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of model sizes by efficiently coding the geometry and connectivity.

  4. Magnetized Plasma Compression for Fusion Energy

    NASA Astrophysics Data System (ADS)

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David

    2013-10-01

Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to un-magnetized inertial fusion energy (IFE): microseconds and centimeters versus nanoseconds and sub-millimeter scales. MPC also greatly reduces the required confinement time relative to MFE: microseconds versus minutes. Proof of principle can be demonstrated or refuted using high-current, pulsed-power-driven compression of magnetized plasmas via magnetic-pressure-driven implosions of metal shells, known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for un-magnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed-power driver, if feasible, would require engineering development of transient, rapidly replaceable transmission lines such as those envisioned by Sandia National Laboratories. Supported by DOE-OFES.

  5. Energy Gain. Teacher's Guide and Student Guide. Net Energy Unit. Draft.

    ERIC Educational Resources Information Center

    McLeod, Richard J.

    This module focuses on gains and losses of energy as society processes resources for energy production. It particularly focuses on the end point of energy conversion and addresses decisions which society must make concerning benefits and costs of various energy production methods and generating facilities. One class period is needed to implement…

  6. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  7. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu

This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.

  8. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
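The context-modeling half of a C4-style coder can be illustrated with a toy sketch: predict each binary pixel from a small neighborhood context and record only prediction errors, so that structured regions leave a sparse residual. The two-neighbor majority rule below is a made-up stand-in for C4's real context model, and the LZ-style copy path and combinatorial coder are omitted entirely.

```python
# Toy illustration of context-based prediction for binary layout data.
# The context rule here (predict 1 if west or north neighbor is 1) is
# purely illustrative, NOT the C4 context model from the paper.

def context_predict_errors(image):
    """Return pixel positions where the neighbor-context predictor fails."""
    errors = []
    for r, row in enumerate(image):
        for c, px in enumerate(row):
            west = row[c - 1] if c > 0 else 0
            north = image[r - 1][c] if r > 0 else 0
            prediction = 1 if (west + north) >= 1 else 0  # simple context rule
            if prediction != px:
                errors.append((r, c))
    return errors

layout = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
errs = context_predict_errors(layout)  # sparse residual -> cheap to entropy-code
```

The residual positions are what the entropy coder must encode; dense repetitive regions, by contrast, are the cases the abstract assigns to LZ-style copying.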

  9. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near-lossless, high-ratio compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8-bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  10. Energy transfer in compressible magnetohydrodynamic turbulence for isothermal self-gravitating fluids

    NASA Astrophysics Data System (ADS)

    Banerjee, Supratik; Kritsuk, Alexei G.

    2018-02-01

    Three-dimensional, compressible, magnetohydrodynamic turbulence of an isothermal, self-gravitating fluid is analyzed using two-point statistics in the asymptotic limit of large Reynolds numbers (both kinetic and magnetic). Following an alternative formulation proposed by Banerjee and Galtier [Phys. Rev. E 93, 033120 (2016), 10.1103/PhysRevE.93.033120; J. Phys. A: Math. Theor. 50, 015501 (2017), 10.1088/1751-8113/50/1/015501], an exact relation has been derived for the total energy transfer. This approach results in a simpler relation expressed entirely in terms of mixed second-order structure functions. The kinetic, thermodynamic, magnetic, and gravitational contributions to the energy transfer rate can be easily separated in the present form. By construction, the new formalism includes such additional effects as global rotation, the Hall term in the induction equation, etc. The analysis shows that solid-body rotation cannot alter the energy flux rate of compressible turbulence. However, the contribution of a uniform background magnetic field to the flux is shown to be nontrivial unlike in the incompressible case. Finally, the compressible, turbulent energy flux rate does not vanish completely due to simple alignments, which leads to a zero turbulent energy flux rate in the incompressible case.

  11. Energy transfer in compressible magnetohydrodynamic turbulence for isothermal self-gravitating fluids.

    PubMed

    Banerjee, Supratik; Kritsuk, Alexei G

    2018-02-01

Three-dimensional, compressible, magnetohydrodynamic turbulence of an isothermal, self-gravitating fluid is analyzed using two-point statistics in the asymptotic limit of large Reynolds numbers (both kinetic and magnetic). Following an alternative formulation proposed by Banerjee and Galtier [Phys. Rev. E 93, 033120 (2016), 10.1103/PhysRevE.93.033120; J. Phys. A: Math. Theor. 50, 015501 (2017), 10.1088/1751-8113/50/1/015501], an exact relation has been derived for the total energy transfer. This approach results in a simpler relation expressed entirely in terms of mixed second-order structure functions. The kinetic, thermodynamic, magnetic, and gravitational contributions to the energy transfer rate can be easily separated in the present form. By construction, the new formalism includes such additional effects as global rotation, the Hall term in the induction equation, etc. The analysis shows that solid-body rotation cannot alter the energy flux rate of compressible turbulence. However, the contribution of a uniform background magnetic field to the flux is shown to be nontrivial unlike in the incompressible case. Finally, the compressible, turbulent energy flux rate does not vanish completely due to simple alignments, which leads to a zero turbulent energy flux rate in the incompressible case.

  12. An Immediate Death by Seat Belt Compression; a Forensic Medicine Report

    PubMed Central

    Najari, Fares; Alimohammadi, Ali Mohammad

    2015-01-01

Although death is a gradual process, sudden death sometimes occurs within a fraction of a minute or seconds. Here we report a 49-year-old man without any underlying disease who died instantly at an accident scene due to compression of critical neck structures by a three-point seat belt. The examination of the body and the results of the autopsy, toxicology and pathology tests are described from the viewpoint of forensic medicine. PMID:26495409

  13. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  14. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
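The row organization described above (variable-size compressed data blocks, each paired with a tag block identifying it) might be sketched as follows. The use of zlib and the tag lookup scheme are illustrative assumptions for the sketch, not the patented hardware design.

```python
import zlib

# Sketch of the cache-row layout from the abstract: each row holds
# compressed data blocks of non-uniform size plus one tag per block.
# zlib stands in for whatever block compressor the hardware would use.

class CompressedRow:
    def __init__(self):
        self.tags = []     # one tag block per compressed data block
        self.blocks = []   # variable-size compressed payloads

    def store(self, tag, data: bytes):
        """Compression logic: compress a block and record its tag."""
        self.tags.append(tag)
        self.blocks.append(zlib.compress(data))

    def load(self, tag) -> bytes:
        """Decompression logic: tag lookup selects the block to expand."""
        i = self.tags.index(tag)
        return zlib.decompress(self.blocks[i])

row = CompressedRow()
row.store(0x40, b"\x00" * 64)  # a highly compressible 64-byte block
restored = row.load(0x40)      # round-trips back to the original bytes
```

Because blocks compress to non-uniform sizes, the tags are what make a given block locatable within the row, mirroring the tag-block role in the claim.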

  15. Corneal Staining and Hot Black Tea Compresses.

    PubMed

    Achiron, Asaf; Birger, Yael; Karmona, Lily; Avizemer, Haggay; Bartov, Elisha; Rahamim, Yocheved; Burgansky-Eliash, Zvia

    2017-03-01

Warm compresses are widely touted as an effective treatment for ocular surface disorders. Black tea compresses are a common household remedy, although there is no evidence in the medical literature proving their effect, and their use may lead to harmful side effects. To describe a case in which the application of black tea to an eye with a corneal epithelial defect led to anterior stromal discoloration; evaluate the prevalence of hot tea compress use; and analyze, in vitro, the discoloring effect of tea compresses on a model of a porcine eye. We assessed the prevalence of hot tea compresses in our community and explored the effect of warm tea compresses on the cornea when the corneal epithelium's integrity is disrupted. An in vitro experiment in which warm compresses were applied to 18 fresh porcine eyes was performed. In half the eyes a corneal epithelial defect was created, and in the other half the epithelium was intact. Both groups were divided into subgroups of three eyes each and treated experimentally with warm black tea compresses, pure water, or chamomile tea compresses. We also performed a study in patients with a history of tea compress use. Brown discoloration of the anterior stroma appeared only in the porcine corneas that had an epithelial defect and were treated with black tea compresses. No other eyes from any group showed discoloration. Of the patients included in our survey, approximately 50% had applied some sort of tea ingredient, either as a solid compress or as the hot liquid. An intact corneal epithelium serves as an effective barrier against tea-stain discoloration. Only when this layer is disrupted does the damage occur. Therefore, direct application of black tea (Camellia sinensis) to a cornea with an epithelial defect should be avoided.

  16. Compression stockings in the management of fractures of the ankle: a randomised controlled trial.

    PubMed

    Sultan, M J; Zhing, T; Morris, J; Kurdy, N; McCollum, C N

    2014-08-01

    In this randomised controlled trial, we evaluated the role of elastic compression using ankle injury stockings (AIS) in the management of fractures of the ankle. A total of 90 patients (mean age 47 years; range 16 to 79) presenting within 72 hours of a fracture of the ankle, 31 of whom were treated operatively and 59 conservatively, were randomised to compression with AIS plus an Aircast boot or to Tubigrip plus an Aircast boot. The male to female ratio was 36:54. The primary outcome measure was the functional Olerud-Molander ankle score (OMAS). The secondary outcome measures were the American Orthopaedic Foot and Ankle Society score (AOFAS), the Short Form (SF)-12v2 Quality of Life score, and the frequency of deep vein thrombosis (DVT). Compression using AIS reduced swelling of the ankle at all time points and improved the mean OMAS score at six months to 98 (95% confidence interval (CI) 96 to 99) compared with a mean of 67 (95% CI 62 to 73) for the Tubigrip group (p < 0.001). The mean AOFAS and SF-12v2 scores at six months were also significantly improved by compression. Of 86 patients with duplex imaging at four weeks, five (12%) of 43 in the AIS group and ten (23%) of 43 in the Tubigrip group developed a DVT (p = 0.26). Compression improved functional outcome and quality of life following fracture of the ankle. DVTs were frequent, but a larger study would be needed to confirm that compression with AISs reduces the incidence of DVT. ©2014 The British Editorial Society of Bone & Joint Surgery.

  17. The Effect of Capital Gains Taxation on Home Sales: Evidence from the Taxpayer Relief Act of 1997

    PubMed Central

    Shan, Hui

    2010-01-01

    The Taxpayer Relief Act of 1997 (TRA97) significantly changed the tax treatment of housing capital gains in the United States. Before 1997, homeowners were subject to capital gains taxation when they sold their houses unless they purchased replacement homes of equal or greater value. Since 1997, homeowners can exclude capital gains of up to $500,000 (or up to $250,000 for single filers) when they sell their houses. Such dramatic changes provide a good opportunity to study the lock-in effect of capital gains taxation on home sales. Using 1982–2008 transaction data on single-family houses in 16 affluent towns within the Boston metropolitan area, I find that TRA97 reversed the lock-in effect of capital gains taxes on houses with low and moderate capital gains. Specifically, the semiannual sales rate of houses with positive gains up to $500,000 increased by 0.40–0.62 percentage points after TRA97, representing a 19–24 percent increase from the pre-TRA97 baseline sales rate. In contrast, I do not find TRA97 to have a significant effect on houses with gains above $500,000. Moreover, the short-term effect of TRA97 is much larger than the long-term effect, suggesting that many previously locked-in homeowners took advantage of the exclusions immediately after TRA97. In addition, I exploit the 2001 and 2003 legislative changes in the capital gains tax rate to estimate the tax elasticity of home sales during the post-TRA97 period. The estimation results suggest that a $10,000 increase in capital gains taxes reduces the semiannual home sales rate by about 0.1–0.2 percentage points, or 6–13 percent from the post-TRA97 average sales rate. PMID:21170145

  18. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications.
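The role a DCT quantization matrix plays in the scheme above can be illustrated with the standard JPEG luminance table and libjpeg's familiar quality scaling. This sketch is an illustration only: it does not reproduce the perceptual, viewing-condition-dependent formula the abstract describes, but shows the same kind of object (an 8x8 matrix of per-frequency quantization steps) being adjusted by a single scalar.

```python
import numpy as np

# Standard JPEG luminance quantization table (ITU-T T.81, Annex K).
# Larger entries quantize that DCT frequency more coarsely.
JPEG_LUMA_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def scaled_quant_matrix(quality: int) -> np.ndarray:
    """Scale the base table for a quality setting in 1..100, libjpeg-style.
    quality=50 returns the base table; higher quality shrinks the steps."""
    if not 1 <= quality <= 100:
        raise ValueError("quality must be in 1..100")
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    q = (JPEG_LUMA_Q * s + 50) // 100
    return np.clip(q, 1, 255)
```

A perceptually derived matrix, such as the one the abstract's formula produces, would instead vary each entry independently with viewing distance, display resolution, and brightness rather than applying one global scale factor.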

  19. Domestication and Breeding of Tomatoes: What have We Gained and What Can We Gain in the Future?

    PubMed Central

    Bai, Yuling; Lindhout, Pim

    2007-01-01

    Background It has been shown that a large variation is present and exploitable from wild Solanum species but most of it is still untapped. Considering the thousands of Solanum accessions in different gene banks and probably even more that are still untouched in the Andes, it is a challenge to exploit the diversity of tomato. What have we gained from tomato domestication and breeding and what can we gain in the future? Scope This review summarizes progress on tomato domestication and breeding and current efforts in tomato genome research. Also, it points out potential challenges in exploiting tomato biodiversity and depicts future perspectives in tomato breeding with the emerging knowledge from tomato-omics. Conclusions From first domestication to modern breeding, the tomato has been continually subjected to human selection for a wide array of applications in both science and commerce. Current efforts in tomato breeding are focused on discovering and exploiting genes for the most important traits in tomato germplasm. In the future, breeders will design cultivars by a process named ‘breeding by design’ based on the combination of science and technologies from the genomic era as well as their practical skills. PMID:17717024

  20. Friction of Compression-ignition Engines

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H , Jr

    1936-01-01

    The cost in mean effective pressure of generating air flow in the combustion chambers of single-cylinder compression-ignition engines was determined for the prechamber and the displaced-piston types of combustion chamber. For each type a wide range of air-flow quantities, speeds, and boost pressures was investigated. Supplementary tests were made to determine the effect of lubricating-oil temperature, cooling-water temperature, and compression ratio on the friction mean effective pressure of the single-cylinder test engine. Friction curves are included for two 9-cylinder, radial, compression-ignition aircraft engines. The results indicate that generating the optimum forced air flow increased the motoring losses approximately 5 pounds per square inch mean effective pressure regardless of chamber type or engine speed. With a given type of chamber, the rate of increase in friction mean effective pressure with engine speed is independent of the air-flow speed. The effect of boost pressure on the friction cannot be predicted because the friction was decreased, unchanged, or increased depending on the combustion-chamber type and design details. High compression ratio accounts for approximately 5 pounds per square inch mean effective pressure of the friction of these single-cylinder compression-ignition engines. The single-cylinder test engines used in this investigation had a much higher friction mean effective pressure than conventional aircraft engines or than the 9-cylinder, radial, compression-ignition engines tested so that performance should be compared on an indicated basis.