Hydrogen Fueling Station Using Thermal Compression: a techno-economic analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriha, Kenneth; Petitpas, Guillaume; Melchionda, Michael
The goal of this project was to demonstrate the technical and economic feasibility of using thermal compression to create the hydrogen pressure necessary to operate vehicle hydrogen fueling stations. The concept of using the exergy within liquid hydrogen to build pressure, rather than mechanical components such as compressors or cryogenic liquid pumps, has several advantages. In theory, the compressor-less hydrogen station will have lower operating and maintenance costs, because the compressors found in conventional stations require large amounts of electricity to run and are prone to mechanical breakdowns. The thermal compression station also recovers some of the energy used to liquefy the hydrogen as work to build pressure; in conventional stations this energy is lost as heat to the environment.
Investigating the Methane Footprint of Compressed Natural Gas Stations in the Los Angeles Basin
NASA Astrophysics Data System (ADS)
Carranza, V.; Hopkins, F. M.; Randerson, J. T.; Bush, S.; Ehleringer, J. R.; Miu, J.
2013-12-01
In recent years, natural gas has taken on a larger role in the United States' discourse on energy policy because it is seen as a fuel that can alleviate the country's dependence on foreign energy while simultaneously reducing greenhouse gas emissions. To this end, the State of California promotes the use of vehicles fueled by compressed natural gas (CNG). However, the implications of increased CNG vehicle use for greenhouse gas emission reduction are not fully understood. Specifically, methane (CH4) leakage from natural gas infrastructure could make the switch from conventional to CNG vehicles a source of CH4 to the atmosphere and negate the greenhouse-gas reduction benefit of this policy. The goal of our research is to provide an analysis of potential CH4 leakage from thirteen CNG filling stations in Orange County, California. To improve our understanding of CH4 leakage, we used a mobile laboratory, a Ford Transit van equipped with cavity ring-down Picarro spectrometers, to measure CH4 mixing ratios at these CNG stations. MATLAB and ArcGIS were used to conduct statistical analysis and to construct spatial and temporal maps for each transect. We observed mean levels of excess CH4 (relative to background CH4 mixing ratios) ranging from 60 to 1700 ppb at the CNG stations we sampled. Repeated sampling of CNG stations revealed higher levels of excess CH4 during the daytime compared to the nighttime. From our observations, CNG storage tanks and pumps have approximately the same CH4 leakage levels. By improving our understanding of the spatial and temporal patterns of CH4 emissions from CNG stations, our research can provide valuable information to reduce the climate footprint of the natural gas industry.
NASA Technical Reports Server (NTRS)
Reveley, W. F.; Nuccio, P. P.
1975-01-01
Potable water for the Space Station Prototype life support system is generated by the vapor compression technique of vacuum distillation. A description of a complete three-man modular vapor compression water renovation loop that was built and tested is presented; included are all of the pumps, tankage, chemical post-treatment, instrumentation, and controls necessary to make the loop representative of an automatic, self-monitoring, null gravity system. The design rationale is given and the evolved configuration is described. Presented next are the results of an extensive parametric test during which distilled water was generated from urine and urinal flush water with concentration of solids in the evaporating liquid increasing progressively to 60 percent. Water quality, quantity and production rate are shown together with measured energy consumption rate in terms of watt-hours per kilogram of distilled water produced.
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
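The prediction-plus-coding structure described above can be illustrated with a toy lossless coder: a fixed previous-pixel predictor stands in for the adaptive filters, and zlib stands in for the entropy coder. This is a minimal sketch of the general technique, not the flight algorithm itself.

```python
import zlib
import numpy as np

def compress_band(band: np.ndarray) -> bytes:
    """Delta-predict along rows, then entropy-code the residuals with zlib."""
    band = band.astype(np.int32)
    residual = band.copy()
    residual[:, 1:] = band[:, 1:] - band[:, :-1]   # horizontal prediction
    return zlib.compress(residual.tobytes(), level=9)

def decompress_band(blob: bytes, shape) -> np.ndarray:
    residual = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return np.cumsum(residual, axis=1)  # invert the delta prediction

# Smooth synthetic band: prediction leaves small residuals that code well.
rng = np.random.default_rng(0)
smooth = np.cumsum(rng.integers(-3, 4, size=(64, 64)), axis=1) + 1000
blob = compress_band(smooth)
assert np.array_equal(decompress_band(blob, smooth.shape), smooth)  # lossless
print(f"{smooth.nbytes} -> {len(blob)} bytes")
```

Because the residuals are small and clustered around zero, the back-end coder does far better on them than on the raw samples; that decorrelate-then-code split is the essence of predictive lossless compression.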
NASA Technical Reports Server (NTRS)
Stack, John
1935-01-01
Simultaneous air-flow photographs and pressure-distribution measurements have been made of the NACA 4412 airfoil at high speeds in order to determine the physical nature of the compressibility burble. The flow photographs were obtained by the Schlieren method and the pressures were simultaneously measured for 54 stations on the 5-inch-chord wing by means of a multiple-tube photographic manometer. Pressure-measurement results and typical Schlieren photographs are presented. The general nature of the phenomenon called the "compressibility burble" is shown by these experiments. The source of the increased drag is the compression shock that occurs, the excess drag being due to the conversion of a considerable amount of the air-stream kinetic energy into heat at the compression shock.
Vapor Compression Distillation Flight Experiment
NASA Technical Reports Server (NTRS)
Hutchens, Cindy F.
2002-01-01
One of the major requirements associated with operating the International Space Station is the transportation -- space shuttle and Russian Progress spacecraft launches - necessary to re-supply station crews with food and water. The Vapor Compression Distillation (VCD) Flight Experiment, managed by NASA's Marshall Space Flight Center in Huntsville, Ala., is a full-scale demonstration of technology being developed to recycle crewmember urine and wastewater aboard the International Space Station and thereby reduce the amount of water that must be re-supplied. Based on results of the VCD Flight Experiment, an operational urine processor will be installed in Node 3 of the space station in 2005.
Alternative Fuels Data Center: Compressed Natural Gas Fueling Stations
There are two types of CNG fueling infrastructure: time-fill and fast-fill. The main structural differences between the two systems are the amount of fuel dispensed and the time it takes for CNG to be delivered. Most CNG stations include one of these system types. Learn more about filling CNG tanks.
NASA Astrophysics Data System (ADS)
Bonne, F.; Alamir, M.; Bonnay, P.
2017-02-01
This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures and actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all of the PID loops controlling the WCS with an optimally designed, model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation, or to avoid reaching stopping criteria (such as excessive pressures) under large disturbances, such as the pulsed heat loads expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBT's WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.
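The receding-horizon idea behind constrained model predictive control can be sketched on a toy first-order pressure model: at each step an input sequence is optimized subject to actuator bounds, and only the first move is applied. The model, horizon, weights, and limits below are invented for illustration and are not taken from the SBT WCS.

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order pressure model: x[k+1] = a*x[k] + b*u[k]
a, b = 0.9, 0.5
N = 15          # prediction horizon
u_max = 1.0     # actuator limit (illustrative)
x_ref = 1.2     # pressure set-point (illustrative)

def simulate(x0, u):
    """Roll the model forward over the horizon for input sequence u."""
    x = [x0]
    for uk in u:
        x.append(a * x[-1] + b * uk)
    return np.array(x[1:])

def mpc_step(x0):
    """One receding-horizon step: optimize inputs under bounds, return the
    first move. A sketch of constrained MPC, not the SBT WCS controller."""
    cost = lambda u: (np.sum((simulate(x0, u) - x_ref) ** 2)
                      + 0.01 * np.sum(np.asarray(u) ** 2))
    res = minimize(cost, np.zeros(N), bounds=[(-u_max, u_max)] * N)
    return res.x[0]

x = 0.0
for _ in range(30):
    x = a * x + b * mpc_step(x)
print(f"pressure settles near {x:.2f} (set-point {x_ref})")
```

Constraints enter directly as bounds on the decision variables, which is exactly what a collection of independent PID loops cannot do.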
Orbiting dynamic compression laboratory
NASA Technical Reports Server (NTRS)
Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.
1984-01-01
In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kbar, respectively. In the case of molybdenum, when the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, tensile tests of the recovered samples demonstrate beneficial ultimate tensile strengths.
Hyperspectral data compression using a Wiener filter predictor
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.
2013-09-01
The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large number of test data sets collected over a wide variety of scene content from six different airborne and spaceborne sensors.
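The Wiener-filter prediction step can be illustrated in a few lines: each new band is modeled as an affine least-squares combination of already-available bands, and the (much smaller) residual is what would be handed to the lossless coder. The synthetic correlated bands below are an assumption for illustration; the abstract does not disclose Z-Chrome's actual filter design.

```python
import numpy as np

def wiener_predict(prev_bands: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Predict `target` (n,) from `prev_bands` (n, p) by least squares.

    Solving the normal equations yields the Wiener (minimum-MSE linear)
    filter; an affine term absorbs any band-to-band offset.
    """
    X = np.column_stack([prev_bands, np.ones(len(prev_bands))])
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return X @ w

# Four strongly correlated synthetic bands sharing one spatial pattern.
rng = np.random.default_rng(1)
base = rng.normal(size=4096)
bands = np.stack([2.0 * base + 0.1 * rng.normal(size=4096)
                  for _ in range(4)], axis=1)

pred = wiener_predict(bands[:, :3], bands[:, 3])
residual = bands[:, 3] - pred
print(f"residual/band energy ratio: {residual.std() / bands[:, 3].std():.3f}")
```

Because hyperspectral bands are highly correlated, the residual carries a small fraction of the original variance, which is what makes the subsequent lossless coding effective.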
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Terry A.; Bowman, Robert; Smith, Barton
Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue of their moving parts, including cracking of diaphragms and failure of seals, leads to breakdowns in conventional compressors and is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a feed pressure
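The heat-driven compression described above can be quantified with the van 't Hoff relation for the hydride plateau pressure, ln(P/P0) = -ΔH/(RT) + ΔS/R. The ΔH and ΔS values below are illustrative numbers typical of AB5-type hydrides, not the alloys used in this project.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def plateau_pressure_bar(T_kelvin: float, dH: float = 30e3, dS: float = 110.0) -> float:
    """Van 't Hoff plateau pressure (bar) for hydride desorption.

    dH [J/mol H2] and dS [J/(mol*K)] are assumed, AB5-typical values.
    """
    return math.exp(-dH / (R * T_kelvin) + dS / R)

p_cold = plateau_pressure_bar(300.0)  # bed absorbs H2 at the low pressure
p_hot = plateau_pressure_bar(400.0)   # heated bed desorbs at the high pressure
print(f"{p_cold:.1f} bar -> {p_hot:.1f} bar "
      f"(ratio {p_hot / p_cold:.0f}x per stage)")
```

A modest 100 K temperature swing yields a compression ratio of roughly an order of magnitude per stage under these assumed thermodynamics, which is why cascading stages reaches high delivery pressures with small swings.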
Computer networking at SLR stations
NASA Technical Reports Server (NTRS)
Novotny, Antonin
1993-01-01
There are several existing communication methods to deliver data from a satellite laser ranging (SLR) station to the SLR data center and back: telephone modem, telex, and computer networks. The SLR scientific community has been exploiting mainly INTERNET, BITNET/EARN, and SPAN. A total of 56 countries are connected to INTERNET and the number of nodes is growing exponentially. The computer networks mentioned above and others are connected through the e-mail protocol. The scientific progress of SLR requires increases in communication speed and in the amount of transmitted data. The TOPEX/POSEIDON test campaign required delivery of Quick Look data (1.7 kB/pass) from an SLR site to the SLR data center within 8 hours, and of full-rate data (up to 500 kB/pass) within 24 hours. We developed networking for the remote SLR station in Helwan, Egypt. The reliable scheme for data delivery consists of compression of the MERIT2 format (up to 89 percent), encoding to ASCII files, and e-mail sending from the SLR station, followed by e-mail receiving, decoding, and decompression at the center. We propose to use the ZIP method for compression/decompression and the UUCODE method for ASCII encoding/decoding. This method will be useful for stations connected via telephone modems or commercial networks. Electronic delivery could solve the problem of full-rate (FR) data arriving too late at the SLR data center.
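The compress-then-uuencode delivery scheme can be approximated with Python's standard library: zlib stands in for ZIP and binascii's uuencoding for UUCODE. The record content below is a stand-in, not real MERIT2 data.

```python
import binascii
import zlib

def encode_for_email(data: bytes) -> str:
    """Compress, then uuencode in 45-byte chunks (the uuencode line limit)."""
    packed = zlib.compress(data, level=9)
    lines = [binascii.b2a_uu(packed[i:i + 45]).decode("ascii")
             for i in range(0, len(packed), 45)]
    return "".join(lines)

def decode_from_email(text: str) -> bytes:
    """Reverse the pipeline: uudecode each line, then decompress."""
    packed = b"".join(binascii.a2b_uu(line) for line in text.splitlines(True))
    return zlib.decompress(packed)

record = b"MERIT2 pass record " * 200  # stand-in for a Quick Look file
wire = encode_for_email(record)
assert decode_from_email(wire) == record
print(f"{len(record)} bytes -> {len(wire)} printable characters on the wire")
```

The ASCII step matters because 1990s e-mail transports were not 8-bit clean; uuencoding trades a modest size increase for safe transit, which compression more than recovers.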
Alternative Fueling Station Locator - Android
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Alternative Fueling Station Locator app helps users locate fueling stations that offer electricity, natural gas, biodiesel, E85, propane, or hydrogen. The users' current location or a custom location can be used to find the 20 closest stations within a 30-mile radius. View the stations on a map or see a list of stations ordered by distance from your location. Select your alternative fuel of choice and adjust the custom filters to fit your needs. Select a station from the map or list to view contact info and other details: address, phone number, and hours of operation; payment types accepted; public or private access; special services; compression (natural gas); vehicle size access (natural gas); number and types of chargers (electric); blends available (biodiesel); and blender pumps (ethanol). The app draws information from the U.S. Department of Energy's Alternative Fuels Data Center, which houses the most comprehensive, up-to-date database of alternative fueling stations in the United States. The database contains location information for more than 20,000 alternative fueling stations throughout the country.
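The "20 closest stations within a 30-mile radius" query can be sketched as a haversine distance filter plus a sort. The station coordinates below are made up for illustration and do not come from the AFDC database.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    r = 3958.8  # mean Earth radius, miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def closest_stations(here, stations, radius=30.0, limit=20):
    """Return up to `limit` stations within `radius` miles, nearest first."""
    scored = [(haversine_miles(*here, s["lat"], s["lon"]), s) for s in stations]
    in_range = [(d, s) for d, s in scored if d <= radius]
    return [s for _, s in sorted(in_range, key=lambda t: t[0])[:limit]]

stations = [  # hypothetical entries
    {"name": "CNG Fast-Fill", "lat": 34.05, "lon": -118.25},
    {"name": "H2 Station", "lat": 34.10, "lon": -118.30},
    {"name": "E85 Pump", "lat": 36.00, "lon": -119.00},  # out of range
]
names = [s["name"] for s in closest_stations((34.05, -118.24), stations)]
print(names)
```

A production locator would push this filter into the database (a spatial index) rather than scanning 20,000 rows client-side, but the ranking logic is the same.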
Costs Associated With Compressed Natural Gas Vehicle Fueling Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M.; Gonzales, J.
2014-09-01
This document is designed to help fleets understand the cost factors associated with fueling infrastructure for compressed natural gas (CNG) vehicles. It provides estimated cost ranges for various sizes and types of CNG fueling stations and an overview of factors that contribute to the total cost of an installed station. The information presented is based on input from professionals in the natural gas industry who design, sell equipment for, and/or own and operate CNG stations.
The International Space Station Assembly on Schedule
NASA Technical Reports Server (NTRS)
1997-01-01
As engineers continue to prepare the International Space Station (ISS) for in-orbit assembly in the year 2002, ANSYS software has proven instrumental in resolving a structural problem in the project's two primary station modules -- Nodes 1 and 2. Proof pressure tests performed in May revealed "low temperature, post-yield creep" in some of the Nodes' gussets, which were designed to reinforce ports for loads from station keeping and reboost motion of the entire space station. An extensive effort was undertaken to characterize the creep behavior of the 2219-T851 aluminum forging material from which the gussets were made. Engineers at Sverdrup Technology, Inc. (Huntsville, AL) were responsible for conducting a combined elastic-plastic-creep analysis of the gussets to determine the amount of residual compressive stress which existed in the gussets following the proof pressure tests, and to determine the stress-strain history in the gussets while on-orbit. Boeing, NASA's Space Station prime contractor, supplied the Finite Element Analysis (FEA) model geometry and developed the creep equations from the experimental data taken by NASA's Marshall Space Flight Center and Langley Research Center. The goal of this effort was to implement the uniaxial creep equations into a three dimensional finite element program, and to determine analytically whether or not the creep was something that the space station program could live with. The objective was to show analytically that either the creep rate was at an acceptable level, or that the node module had to be modified to lower the stress levels to where creep did not occur. The elastic-plastic-creep analysis was performed using the ANSYS finite element program of ANSYS, Inc. (Houston, PA). The analysis revealed that the gussets encountered a compressive stress of approximately 30,000 pounds per square inch (psi) when unloaded. This compressive residual stress significantly lowered the maximum tension stress in the gussets which
Improved waste water vapor compression distillation technology. [for Spacelab
NASA Technical Reports Server (NTRS)
Johnson, K. L.; Nuccio, P. P.; Reveley, W. F.
1977-01-01
The vapor compression distillation process is a method of recovering potable water from crewman urine in a manned spacecraft or space station. A description is presented of the research and development approach to the solution of the various problems encountered with previous vapor compression distillation units. The design solutions considered are incorporated in the preliminary design of a vapor compression distillation subsystem. The new design concepts are available for integration in the next generation of support systems and, particularly, the regenerative life support evaluation intended for project Spacelab.
Phase change water processing for Space Station
NASA Technical Reports Server (NTRS)
Zdankiewicz, E. M.; Price, D. F.
1985-01-01
The use of a vapor compression distillation subsystem (VCDS) for water recovery on the Space Station is analyzed. The self-contained automated system can process waste water at a rate of 32.6 kg/day and requires only 115 W of electric power. The improvements in the mechanical components of VCDS are studied. The operation of VCDS in the normal mode is examined. The VCDS preprototype is evaluated based on water quality, water production rate, and specific energy. The relation between water production rate and fluids pump speed is investigated; it is concluded that a variable speed fluids pump will optimize water production. Components development and testing currently being conducted are described. The properties and operation of the proposed phase change water processing system for the Space Station, based on vapor compression distillation, are examined.
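The quoted figures imply a specific energy that is easy to check, assuming the subsystem draws its 115 W continuously while processing at the stated rate:

```python
power_w = 115.0         # electric power drawn by the VCDS
rate_kg_per_day = 32.6  # waste-water processing rate

# Wh consumed per kg of recovered water, assuming continuous operation.
specific_energy_wh_per_kg = power_w * 24.0 / rate_kg_per_day
print(f"{specific_energy_wh_per_kg:.1f} Wh per kg of recovered water")
```

At under 100 Wh/kg this is far below the latent heat of vaporization of water (about 627 Wh/kg), which is the point of vapor compression distillation: the compressor recycles the heat of condensation to drive further evaporation.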
Compressed television transmission: A market survey
NASA Technical Reports Server (NTRS)
Lizak, R. M.; Cagan, L. Q.
1981-01-01
NASA's compressed television transmission technology is described, and its potential market is considered; a market that encompasses teleconferencing, remote medical diagnosis, patient monitoring, transit station surveillance, as well as traffic management and control. In addition, current and potential television transmission systems and their costs and potential manufacturers are considered.
Video requirements for materials processing experiments in the space station US laboratory
NASA Technical Reports Server (NTRS)
Baugher, Charles R.
1989-01-01
Full utilization of the potential of materials research on the Space Station can be achieved only if adequate means are available for interactive experimentation between the on-board science facilities and ground-based investigators. Extensive video interfaces linking these elements are the only alternative for establishing a viable relation. Because of the limited downlink capability, a comprehensive complement of on-board video processing and video compression is needed. The application of video compression will be an absolute necessity, since its effectiveness will directly impact the quantity of data available to ground investigator teams and their ability to review the effects of process changes and the progress of experiments. Video data compression utilization on the Space Station is discussed.
Comparison of reversible methods for data compression
NASA Astrophysics Data System (ADS)
Heer, Volker K.; Reinfelder, Hans-Erich
1990-07-01
Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we review various methods briefly and discuss their relevant advantages and disadvantages. In detail we evaluate 1st-order DPCM, pyramid transformation, and S-transformation. As coding algorithms we compare fixed and adaptive Huffman coding and Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA, and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors, we take into account the CPU time and main memory required, both for compression and for decompression. For a realistic comparison we have implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.
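The decorrelate-then-code pipeline this study compares can be reproduced in miniature: 1st-order DPCM followed by a Lempel-Ziv coder (zlib here), against the same coder applied to the raw pixels. The synthetic "medical image" below is an assumption standing in for CT/MR data.

```python
import zlib
import numpy as np

rng = np.random.default_rng(2)
# Synthetic image: smooth gradients plus mild noise, 12-bit-like range.
image = (np.add.outer(np.arange(256), np.arange(256)) * 4
         + rng.integers(0, 8, (256, 256))).astype(np.int16)

# Lempel-Ziv coding of the raw pixels.
raw = zlib.compress(image.tobytes(), 9)

# 1st-order DPCM along rows, then the same Lempel-Ziv coder.
dpcm = image.astype(np.int32)
dpcm[:, 1:] = np.diff(image.astype(np.int32), axis=1)
pred = zlib.compress(dpcm.astype(np.int16).tobytes(), 9)

print(f"raw LZ: {image.nbytes / len(raw):.2f}x, "
      f"DPCM + LZ: {image.nbytes / len(pred):.2f}x")
```

On smooth imagery the DPCM residuals are small and repetitive, so the coder does markedly better after decorrelation; that gap is exactly what the paper quantifies across modalities and coders.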
18. VIEW OF EAST SIDE INTERIOR OF MST AT STATIONS ...
18. VIEW OF EAST SIDE INTERIOR OF MST AT STATIONS 3 AND 12, FACING WEST. COMPRESSED AIR TANK AND GENERATOR AT STATION 3. CURTAIN FOR NORTH ENVIRONMENTAL DOOR VISIBLE ON LEFT SIDE OF PHOTOGRAPH; RAIL VISIBLE AT BOTTOM OF PHOTOGRAPH. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Results of the Vapor Compression Distillation Flight Experiment (VCD-FE)
NASA Technical Reports Server (NTRS)
Hutchens, Cindy; Graves, Rex
2004-01-01
Vapor Compression Distillation (VCD) is the chosen technology for urine processing aboard the International Space Station (ISS). Key aspects of the VCD design have been verified and significant improvements made throughout its ground-based development history. However, an important element lacking from previous subsystem development efforts was flight testing. Consequently, the demonstration and validation of the VCD technology and the investigation of subsystem performance in micro-gravity were the primary goals of the VCD-FE. The Vapor Compression Distillation Flight Experiment (VCD-FE) was a flight experiment aboard the Space Shuttle Columbia during the STS-107 mission. The VCD-FE was a full-scale developmental version of the Space Station Urine Processor Assembly (UPA) and was designed to test some of the potential micro-gravity issues with the design. This paper summarizes the experiment results.
Lossless Compression of Classification-Map Data
NASA Technical Reports Server (NTRS)
Hua, Xie; Klimesh, Matthew
2009-01-01
A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing images of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data; for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
Vapor compression distiller and membrane technology for water revitalization
NASA Technical Reports Server (NTRS)
Ashida, A.; Mitani, K.; Ebara, K.; Kurokawa, H.; Sawada, I.; Kashiwagi, H.; Tsuji, T.; Hayashi, S.; Otsubo, K.; Nitta, K.
1987-01-01
Water revitalization for a space station can consist of membrane filtration processes and a distillation process. Water recycling equipment using membrane filtration processes was manufactured for ground testing. It was assembled using commercially available components. Two systems for the distillation were studied: one is an absorption-type thermopervaporation cell and the other is a vapor compression distiller. Absorption-type thermopervaporation, able to easily produce condensed water under zero gravity, was investigated experimentally and through simulation. The vapor compression distiller was studied experimentally; it offers significant energy savings for the evaporation of water.
Compressed air production with waste heat utilization in industry
NASA Astrophysics Data System (ADS)
Nolting, E.
1984-06-01
The centralized power-heat coupling (PHC) technique using block heating power stations is presented. Compressed air production in the PHC technique with internal combustion engine drive achieves a high degree of primary energy utilization. Cost savings of 50% are reached compared to conventional production. The simultaneous utilization of compressed air and heat is especially attractive. A speed-regulated drive via an internal combustion motor gives a further saving of 10% to 20% compared to intermittent operation. The high fuel utilization efficiency (about 80%) leads to payback after two years for operation times of 3000 hours.
Analysis and testing of axial compression in imperfect slender truss struts
NASA Technical Reports Server (NTRS)
Lake, Mark S.; Georgiadis, Nicholas
1990-01-01
The axial compression of imperfect slender struts for large space structures is addressed. The load-shortening behavior of struts with initially imperfect shapes and eccentric compressive end loading is analyzed using linear beam-column theory, and results are compared with geometrically nonlinear solutions to determine the applicability of linear analysis. A set of developmental aluminum-clad graphite/epoxy struts sized for application to the Space Station Freedom truss is measured to determine initial imperfection magnitude, load eccentricity, cross-sectional area, and moment of inertia. Load-shortening curves are determined from axial compression tests of these specimens and are correlated with theoretical curves generated using linear analysis.
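Linear beam-column theory gives the classic amplification of an initial bow for a pin-ended strut, delta = delta0 / (1 - P/Pe) with the Euler load Pe = pi^2 E I / L^2. The section properties below are illustrative assumptions, not the measured Space Station Freedom strut values.

```python
import math

def midspan_deflection(P, delta0, E, I, L):
    """Amplified midspan bow of an initially imperfect pin-ended strut.

    delta = delta0 / (1 - P/Pe), valid only below the Euler load Pe.
    Linear beam-column theory; properties are illustrative.
    """
    Pe = math.pi ** 2 * E * I / L ** 2
    if P >= Pe:
        raise ValueError("load at or above the Euler buckling load")
    return delta0 / (1.0 - P / Pe)

E = 70e9        # Pa, assumed effective modulus
I = 2.0e-8      # m^4, assumed section moment of inertia
L = 5.0         # m, strut length
delta0 = 0.002  # m, 2 mm initial bow

Pe = math.pi ** 2 * E * I / L ** 2
for frac in (0.25, 0.5, 0.75, 0.9):
    d = midspan_deflection(frac * Pe, delta0, E, I, L)
    print(f"P = {frac:.2f} Pe -> midspan bow {d * 1000:.1f} mm")
```

The hyperbolic growth of the bow as P approaches Pe is what makes the load-shortening curve of an imperfect strut soften well before classical buckling, and it is why comparing the linear prediction against nonlinear solutions matters.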
NASA Astrophysics Data System (ADS)
Li, Guiying; Sun, Hongwei; Zhang, Zhengyong; An, Taicheng; Hu, Jianfang
2013-09-01
Semi-volatile organic compound (SVOC) air pollution caused by the municipal garbage compressing process was investigated at a garbage compressing station (GCS). The most abundant contaminants were phthalate esters (PAEs), followed by polycyclic aromatic hydrocarbons (PAHs) and organochlorine pesticides (OCPs). ∑16PAHs concentrations ranged from 58.773 to 68.840 ng m^-3 in the gas phase and from 6.489 to 17.291 ng m^-3 in the particulate phase; ∑20OCPs ranged from 4.181 to 5.550 ng m^-3 and from 0.823 to 2.443 ng m^-3 in the gas and particulate phases, respectively; ∑15PAEs ranged from 46.498 to 87.928 ng m^-3 and from 414.765 to 763.009 ng m^-3 in the gas and particulate phases. Lung-cancer risk due to PAH exposure was 1.13 × 10^-4. Both non-cancer and cancer risk levels due to OCP exposure were acceptable. The non-cancer hazard index of PAEs was 4.57 × 10^-3, suggesting that workers are safe with respect to PAE exposure alone at the GCS. At pilot scale, 60.18% of PAHs, 70.89% of OCPs, and 63.2% of PAEs were removed by an integrated biotrickling filter-photocatalytic reactor at steady state, and health risk levels were reduced by about 50%, demonstrating the high removal capacity of the integrated reactor.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Comparative results show that "DNABIT Compress" performs best among the compression algorithms examined. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods do not achieve a ratio below 1.72 bits/base.
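The baseline that DNA compressors improve on is the fixed 2-bit-per-base code. A minimal packer shows where that 2.00 bits/base reference point comes from; the repeat-fragment coding that lets DNABIT Compress get below 2 bits/base is not reproduced here.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base (four bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(group))  # left-pad a short final group
        out.append(byte)
    return bytes(out)

seq = "ACGTACGTACGGTTAA" * 64
packed = pack_dna(seq)
print(f"{8 * len(packed) / len(seq):.2f} bits/base")
```

Any scheme that spends fewer bits on exact and reverse repeats than on novel fragments can undercut this 2-bit floor on repetitive genomes, which is the margin the reported 1.58 bits/base represents.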
33. BENCH CORE STATION, GREY IRON FOUNDRY CORE ROOM WHERE CORE MOLDS WERE HAND FILLED AND OFTEN PNEUMATICALLY COMPRESSED WITH A HAND-HELD RAMMER BEFORE THEY WERE BAKED. - Stockham Pipe & Fittings Company, Grey Iron Foundry, 4000 Tenth Avenue North, Birmingham, Jefferson County, AL
Design and Analysis of a Hydrogen Compression and Storage Station
2017-12-01
Holmes
...than fossil fuels [2]. Renewably generated hydrogen gas, such as the hydrogen station demonstrated at NPS, falls into this category of alternative...
Compression of facsimile graphics for transmission over digital mobile satellite circuits
NASA Astrophysics Data System (ADS)
Dimolitsas, Spiros; Corcoran, Frank L.
A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed are capable of achieving a compression of approximately 32 to 1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.
Halaçoğlu, Mekin Doğa; Uğurlu, Timuçin
2015-01-01
The objective of this study was to investigate the effects of conventional lubricants, including a new candidate lubricant, hexagonal boron nitride (HBN), on direct compression powders. Lubricants such as magnesium stearate (MGST), glyceryl behenate, stearic acid, talc and polyethylene glycol 6000 were studied, and tablets were manufactured on a single-station instrumented tablet press. This study is a continuation of our previous one, so a mixture of microcrystalline cellulose and modified starch was used as the master formula to evaluate the effects of lubricants on pharmaceutical excipients that undergo complete plastic deformation without any fragmentation under compression pressure. Bulk and tapped densities and Carr's index were calculated for the powders. Tensile strength, cohesion index, lower punch ejection force and lubricant effectiveness were investigated for the tablets. The deformation mechanisms of the tablets during compression were studied from the Heckel plots with and without lubricant. MGST was found to be the most effective lubricant, with HBN very close to it. HBN did not show a significant negative effect on the crushing strength or disintegration time of the tablets compared with MGST. Based on the Heckel plots at the 1% level, the formulation prepared with HBN showed the most pronounced plastic character.
Parallel hyperspectral compressive sensing method on GPU
NASA Astrophysics Data System (ADS)
Bernabé, Sergio; Martín, Gabriel; Nascimento, José M. P.
2015-10-01
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). The method exploits two main properties of hyperspectral datasets, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different NVIDIA GPU architectures, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
Application of wavefield compressive sensing in surface wave tomography
NASA Astrophysics Data System (ADS)
Zhan, Zhongwen; Li, Qingyang; Huang, Jianping
2018-06-01
Dense arrays allow sampling of the seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about the wavefield, irregular station spacing and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield in a plane-wave basis. We then reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic method. Synthetic tests demonstrate that wavefield CS improves the robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in the wavefield.
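The recipe above — express the wavefield sparsely in a plane-wave basis, then reconstruct — can be illustrated with a toy 1-D example. The dictionary of sampled cosines and the greedy matching-pursuit solver below are illustrative stand-ins, not the authors' method:

```python
import math

# Toy sketch: represent a 1-D "wavefield" sampled at N stations in a basis
# of discrete plane waves (cosines) and recover its sparse coefficients by
# greedy matching pursuit. These cosine atoms are mutually orthogonal, so
# the greedy picks are exact.
N = 16
FREQS = range(1, 7)

def atom(k):
    a = [math.cos(2 * math.pi * k * n / N) for n in range(N)]
    norm = math.sqrt(sum(x * x for x in a))
    return [x / norm for x in a]

DICT = {k: atom(k) for k in FREQS}

def matching_pursuit(signal, n_iter):
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        # pick the plane-wave atom most correlated with the residual
        k = max(FREQS, key=lambda k: abs(sum(r * a for r, a in zip(residual, DICT[k]))))
        c = sum(r * a for r, a in zip(residual, DICT[k]))
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, DICT[k])]
    return coeffs, residual

# signal built from two plane waves (k=2 and k=5)
signal = [3.0 * DICT[2][n] + 1.5 * DICT[5][n] for n in range(N)]
coeffs, residual = matching_pursuit(signal, 2)
print(sorted(coeffs))                         # [2, 5]
print(max(abs(r) for r in residual) < 1e-9)   # True
```

Once the sparse coefficients are known, the wavefield can be evaluated on any dense regular grid before tomography is applied.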
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
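A minimal sketch of the block-wise scheme described above, with `math.exp` standing in for one fitting interval of time-series data: fit a Chebyshev series at Chebyshev nodes, keep only the leading coefficients as the "compressed" block, and reconstruct by the three-term recurrence:

```python
import math

# Minimal sketch of the idea (not the flight code): fit a Chebyshev series
# to one block ("fitting interval") of a data stream, keep only the leading
# coefficients, and reconstruct. The equal-error property keeps the residual
# nearly uniform over the interval.
def cheb_coeffs(f, n):
    """Chebyshev series coefficients from samples at n Chebyshev nodes."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    fx = [f(x) for x in nodes]
    return [2.0 / n * sum(fx[j] * math.cos(k * math.pi * (j + 0.5) / n)
                          for j in range(n)) for k in range(n)]

def cheb_eval(coeffs, x):
    """Evaluate c0/2 + sum_k c_k T_k(x) via the three-term recurrence."""
    total = coeffs[0] / 2.0
    t_prev, t = 1.0, x
    for c in coeffs[1:]:
        total += c * t
        t_prev, t = t, 2.0 * x * t - t_prev
    return total

f = math.exp                      # stand-in for one block of time-series data
kept = cheb_coeffs(f, 32)[:8]     # "compress": keep 8 of 32 coefficients
err = max(abs(cheb_eval(kept, x) - f(x))
          for x in [i / 100.0 for i in range(-100, 101)])
print(err < 1e-5)  # True: 8 coefficients reproduce the block to better than 1e-5
```

Because Chebyshev coefficients of smooth data decay rapidly, discarding the trailing coefficients is what yields compression factors well above two with near-uniform error.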
Role of Compressibility on Tsunami Propagation
NASA Astrophysics Data System (ADS)
Abdolali, Ali; Kirby, James T.
2017-12-01
In the present paper, we aim to reduce the discrepancies between tsunami arrival times evaluated from tsunami models and real measurements by considering the role of ocean compressibility. We perform qualitative studies to reveal the phase speed reduction rate via a modified version of the Mild Slope Equation for Weakly Compressible fluid (MSEWC) proposed by Sammarco et al. (2013). The model is validated against a 3-D computational model. Physical properties of the surface gravity waves are studied and compared with those for waves evaluated from an incompressible flow solver over realistic geometry for the 2011 Tohoku-oki event, revealing a reduction in phase speed.
Cognitive Radios Exploiting Gray Spaces via Compressed Sensing
NASA Astrophysics Data System (ADS)
Wieruch, Dennis; Jung, Peter; Wirth, Thomas; Dekorsy, Armin; Haustein, Thomas
2016-07-01
We suggest an interweave cognitive radio system with a gray space detector, which properly identifies a small fraction of unused resources within an active band of a primary user system such as 3GPP LTE. The gray space detector can therefore cope with frequency fading holes and distinguish them from inactive resources. Different approaches to the gray space detector are investigated: the conventional reduced-rank least squares method as well as the compressed sensing-based orthogonal matching pursuit and basis pursuit denoising algorithms. In addition, the gray space detector is compared with the classical energy detector. Simulation results present the receiver operating characteristic at several SNRs and the detection performance over further aspects, such as base station system load, for practical false alarm rates. The results show that, especially at practical false alarm rates, the compressed sensing algorithms are more suitable than the classical energy detector and the reduced-rank least squares approach.
Natural Gas Compressor Stations on the Interstate Pipeline Network: Developments Since 1996
2007-01-01
This special report looks at the use of natural gas pipeline compressor stations on the interstate natural gas pipeline network that serves the lower 48 states. It examines the compression facilities added over the past 10 years and how the expansions have supported pipeline capacity growth intended to meet the increasing demand for natural gas.
NASA Technical Reports Server (NTRS)
Thesken, John C.; Bowman, Cheryl L.; Arnold, Steven M.
2003-01-01
Successful spaceflight operations require onboard power management systems that reliably achieve mission objectives at minimal launch weight. Because of their high specific energies and potential for reduced maintenance and logistics, composite flywheels are an attractive alternative to electrochemical batteries. The Rotor Durability Team, which comprises members from the Ohio Aerospace Institute (OAI) and the NASA Glenn Research Center, completed a program of elevated-temperature testing at the Fatigue Laboratory of Glenn's Life Prediction Branch. The experiments provided unique design data essential to the safety and durability of flywheel energy storage systems for the International Space Station and other manned spaceflight applications. Analysis of the experimental data (ref. 1) demonstrated that the compressive stress relaxation of composite flywheel rotor material is significantly greater than the commonly available tensile stress relaxation data suggest. Durability analysis of compression-preloaded flywheel rotors is required for accurate safe-life predictions for use on the International Space Station.
Compressed Air Working in Chennai During Metro Tunnel Construction: Occupational Health Problems.
Kulkarni, Ajit C
2017-01-01
The Chennai metropolis has been growing rapidly, and the need was felt for a metro rail system. Two corridors were planned: Corridor 1, 23 km from Washermanpet to the Airport, of which 14.3 km would be underground; and Corridor 2, 22 km from Chennai Central Railway Station to St. Thomas Mount, of which 9.7 km would be underground. The occupational health centre's role involved selection of miners and assessment of their fitness to work under compressed air; planning and execution of compression and decompression; and health monitoring and treatment of compression-related illnesses. More than thirty-five thousand man-hours of work were carried out under compressed air at pressures ranging from 1.2 to 1.9 bar absolute. There were only three cases of pain-only (Type I) decompression sickness, all treated with recompression. Vigilant medical supervision, experienced lock operators, and reduced working hours under pressure because of inclement environmental conditions, viz. high temperature and humidity, helped achieve this low incidence. Tunnelling activity will increase in India as more cities opt for underground metro railways. Indian standard IS 4138-1977, "Safety code for working in compressed air", urgently needs to be updated to keep pace with modern working methods.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
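The core trick — indices that are uncertain by one unit can be nudged to adjacent values to carry auxiliary bits — can be sketched with a toy parity scheme (an illustrative simplification, not the patented method):

```python
# Toy sketch: quantized indices that are uncertain by one unit can absorb
# one auxiliary bit each by nudging the index so its parity matches the
# bit. Decoding simply reads the parities back.
def embed(indices, bits):
    out = []
    for i, b in zip(indices, bits):
        out.append(i if i % 2 == b else i + 1)  # change value by at most 1
    return out

def extract(indices, n):
    return [i % 2 for i in indices[:n]]

host = [12, 7, 3, 44, 9, 20]       # e.g. quantized transform coefficients
msg = [1, 0, 1, 1, 0, 0]
stego = embed(host, msg)
print(extract(stego, len(msg)) == msg)                     # True
print(all(abs(a - b) <= 1 for a, b in zip(host, stego)))   # True
```

Because each index moves by at most one unit, the perturbation stays within the quantization uncertainty the lossy codec already introduces.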
Comparative data compression techniques and multi-compression results
NASA Astrophysics Data System (ADS)
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.
2013-12-01
Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved. In communication, we always want to transmit data efficiently and noise-free. This paper provides several techniques for lossless compression of text-type data and comparative results for multiple and single compression, which will help identify better compression output and develop compression algorithms.
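For a concrete feel for lossless text compression and for what a second compression pass does, here is a small sketch using Python's standard-library codecs (the paper does not specify these tools; they are stand-ins):

```python
import bz2
import lzma
import zlib

# Quick sketch: compress the same repetitive text with three stdlib
# lossless codecs and report each compression ratio, then apply a second
# compression pass to already-compressed output.
text = ("the quick brown fox jumps over the lazy dog " * 500).encode()

for name, codec in (("zlib", zlib), ("bz2", bz2), ("lzma", lzma)):
    packed = codec.compress(text)
    assert codec.decompress(packed) == text          # lossless round trip
    print(f"{name}: {len(text) / len(packed):.0f}:1")

once = zlib.compress(text)
twice = zlib.compress(once)
# re-compressing already-compressed data typically gains little or nothing
print(len(once), len(twice))
```

The round-trip assertion is the defining property of lossless compression; the double-compression sizes illustrate why chaining general-purpose codecs rarely pays off.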
Compression of electromyographic signals using image compression techniques.
Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira
2008-01-01
Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared with those of other algorithms based on the wavelet transform.
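The distortion figure quoted above, PRD (percent root-mean-square difference), is the usual EMG/ECG figure of merit and is straightforward to compute; a minimal version with toy numbers:

```python
import math

# PRD = 100 * sqrt( sum((x - x_hat)^2) / sum(x^2) ): the standard
# distortion measure for lossy biomedical-signal compression.
def prd(original, reconstructed):
    num = sum((x, y) and (x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x * x for x in original)
    return 100.0 * math.sqrt(num / den)

emg = [0.0, 1.0, -2.0, 4.0, -1.0, 0.5]        # toy signal
recon = [0.1, 1.0, -1.9, 3.9, -1.1, 0.5]      # after hypothetical lossy coding
print(round(prd(emg, recon), 2))  # 4.24
```

A PRD of a few percent, as reported above, means the reconstruction error is small relative to the signal's total energy.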
Error mitigation for CCSDS compressed imager data
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth; Shahriar, Fazlul; Bonev, George
2009-08-01
To efficiently use the limited bandwidth available on the downlink from satellite to ground station, imager data is usually compressed before transmission. Transmission introduces unavoidable errors, which are only partially removed by forward error correction and packetization. In the case of the commonly used CCSDS Rice-based compression, the result is a contiguous sequence of dummy values along scan lines in a band of the imager data. We have developed a method that uses the image statistics to provide a principled estimate of the missing data. Our method outperforms interpolation yet can be performed fast enough to provide uninterrupted data flow. The estimation of the lost data provides significant value to end users who may use only part of the data, may not have statistical tools, or lack the expertise to mitigate the impact of the lost data. Since the locations of the lost data will be clearly marked as metadata in the HDF or NetCDF header, experts who prefer to handle error mitigation themselves will be free to use or ignore our estimates as they see fit.
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...
NASA Technical Reports Server (NTRS)
Mulloth, Lila M.; Rosen, Micha; Affleck, David; LeVan, M. Douglas; Moate, Joe R.
2005-01-01
The current CO2 removal technology of NASA is very energy intensive and contains many non-optimized subsystems. This paper discusses the design and prototype development of a two-stage CO2 removal and compression system that will use much less power than NASA's current CO2 removal technology. This integrated system contains a Nafion membrane followed by a residual water adsorber that performs the function of the desiccant beds in the four-bed molecular sieve (4BMS) system of the International Space Station (ISS). The membrane and the water adsorber are followed by a two-stage CO2 removal and compression subsystem that performs the functions of the CO2 adsorbent beds of the 4BMS and of the interface compressor for the Sabatier reactor connection. The two-stage compressor will use the principles of temperature-swing adsorption (TSA) compression for CO2 removal and compression. The similarities in operation and cycle times of the CO2 removal (first stage) and compression (second stage) operations allow thermal coupling of the processes to maximize the efficiency of the system. In addition to the low-power advantage, this processor will maintain a lower CO2 concentration in the cabin than can be achieved by the existing CO2 removal systems. The compact, consolidated configuration of membrane gas dryer and CO2 separator and compressor will allow continuous recycling of humid cabin air and supply of compressed CO2 to the reduction unit for oxygen recovery. The device has potential application to the International Space Station and to future long-duration, transit, and planetary missions.
Preliminary Design Program: Vapor Compression Distillation Flight Experiment Program
NASA Technical Reports Server (NTRS)
Schubert, F. H.; Boyda, R. B.
1995-01-01
This document describes the results of a program to prepare a preliminary design of a flight experiment to demonstrate the function of a Vapor Compression Distillation (VCD) Wastewater Processor (WWP) in microgravity. This report describes the test sequence to be performed and the hardware, control/monitor instrumentation, and software designs prepared to perform the defined tests. The purpose of the flight experiment is to significantly reduce the technical and programmatic risks associated with implementing a VCD-based WWP on board the International Space Station Alpha.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. Storage of biological sequence data has become a noticeable proportion of the total cost of sequence generation and analysis. In particular, the rate of increase in DNA sequencing is significantly outstripping the rate of increase in disk storage capacity and may eventually exceed it. It is therefore essential to develop algorithms that handle large data sets via better memory management. This article presents SeqCompress, a DNA sequence compression algorithm that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model together with arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms.
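The statistical-model half of such a scheme can be sketched with an order-0 frequency model: its Shannon entropy is the bits-per-base figure an ideal arithmetic coder would approach (illustrative only; SeqCompress's actual model is richer):

```python
import math
from collections import Counter

# Sketch of the modelling side: an order-0 model's entropy is the output
# size, in bits per base, that an ideal arithmetic coder would approach.
def order0_entropy(seq):
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(order0_entropy("ACGT" * 100))      # 2.0: uniform bases need 2 bits/base
print(order0_entropy("A" * 100) == 0.0)  # True: a constant sequence costs nothing
print(order0_entropy("AACG") < 2.0)      # True: skewed counts drop below 2
```

Higher-order context models exploit correlations between neighbouring bases and push the achievable bits/base lower still, which is where specialized DNA compressors gain over general-purpose tools.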
Vapor compression distillation module
NASA Technical Reports Server (NTRS)
Nuccio, P. P.
1975-01-01
A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls, and fault-detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken, leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonne, François; Bonnay, Patrick; Alamir, Mazen
2014-01-29
In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
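The Linear Quadratic synthesis step can be illustrated on a toy scalar model (a deliberate simplification; the paper's controller is multivariable): iterate the discrete Riccati recursion to convergence and form the optimal gain:

```python
# Toy stand-in for the paper's multivariable synthesis: for a scalar
# discrete-time model x+ = a*x + b*u (e.g. one pressure driven by one
# actuator), iterate the Riccati recursion to the LQ-optimal gain k,
# with state weight q and input weight r.
def lqr_scalar(a, b, q, r, iters=500):
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * p * b) ** 2 / (r + b * b * p)
    return b * p * a / (r + b * b * p)   # feedback law: u = -k * x

a, b = 1.1, 0.5          # open loop unstable (|a| > 1)
k = lqr_scalar(a, b, q=1.0, r=0.1)
closed = a - b * k
print(abs(closed) < 1.0)  # True: the LQ feedback stabilizes the loop
```

The multivariable version replaces the scalars with the WCS state-space matrices and the division with a matrix inverse, but the structure of the recursion is the same.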
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
Trusses Of Tensegrity Type In A Concept Of Train Station Renovation In Żary
NASA Astrophysics Data System (ADS)
Lechocka, Paulina
2015-09-01
The first railway station in Żary was built in 1843, when the town belonged to Germany. After the Second World War and the years of socialism in Poland, the importance of the railway decreased and its technical condition deteriorated. The building now needs renovation and a change of function. Tensegrity structures may be useful in renovating the platform shelters. They are strut-and-tie constructions in which compressed and tensioned elements self-stabilize. The concept for the new platform shelter is based on an exemplary tensegrity module consisting of three struts and nine cables (called a "Simplex"). Tensegrity would make the railway station more modern without covering its original elevation.
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Astrophysics Data System (ADS)
Haines, Richard F.; Chuang, Sherry L.
1992-07-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the compression ratio and associated parameters must be managed carefully.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and an uncertainty in value of one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation of indices to the final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Little research has been done on synthesizing the two by allowing image retrieval to be performed directly in the compressed domain, without the need to uncompress the images first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e., they discard visually less significant information, lossless techniques are still required in fields like medical imaging, or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded from the pixel values of its (already encoded) neighborhood. The first method is based on the insight that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
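The first idea can be sketched with a minimal example, assuming a simple left-neighbor predictor and an L1 histogram distance (the paper's actual predictor and similarity measure may differ): the histogram of prediction residuals acts as a crude textural signature that can be compared without decompressing.

```python
# Sketch of indexing in the predictively coded domain: the histogram of
# prediction residuals serves as a textural signature of an image row.
# The predictor here is simply the previous pixel (left neighbor).
def residual_histogram(pixels, n_bins=8):
    residuals = [b - a for a, b in zip(pixels, pixels[1:])]
    hist = [0] * n_bins
    for r in residuals:
        # clamp the residual into [-n_bins//2, n_bins//2 - 1], shift to a bin
        bin_idx = max(-n_bins // 2, min(n_bins // 2 - 1, r)) + n_bins // 2
        hist[bin_idx] += 1
    total = sum(hist)
    return [h / total for h in hist]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

smooth = [10, 11, 12, 13, 14, 15, 16, 17]   # smooth gradient
noisy = [10, 60, 5, 80, 7, 90, 3, 70]       # high-contrast texture
ref = [20, 21, 22, 23, 24, 25, 26, 27]      # another smooth row
# A smooth query matches another smooth row better than a noisy one.
assert l1_distance(residual_histogram(ref), residual_histogram(smooth)) < \
       l1_distance(residual_histogram(ref), residual_histogram(noisy))
```

Note that the signature depends only on residuals, which are exactly what a predictive lossless coder stores, so no decompression back to pixels is required in principle.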
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
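The NMSE figure of merit can be computed directly; the sketch below uses one common normalization (sum of squared differences divided by the energy of the original), which may differ in detail from the dissertation's exact definition.

```python
# Normalized mean-square error between an original and a reconstructed
# image (flattened to 1-D here), using one common normalization:
# NMSE = sum((f - g)^2) / sum(f^2).
def nmse(original, reconstructed):
    num = sum((f - g) ** 2 for f, g in zip(original, reconstructed))
    den = sum(f ** 2 for f in original)
    return num / den

f = [100, 120, 130, 125]
g = [101, 119, 131, 124]  # reconstruction after lossy compression
print(nmse(f, f))  # identical images give 0.0
print(nmse(f, g))  # small positive value for a close reconstruction
```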
NASA Astrophysics Data System (ADS)
Araya, Guillermo; Jansen, Kenneth
2017-11-01
DNS of compressible spatially-developing turbulent boundary layers is performed at a Mach number of 2.5 over an isothermal flat plate. Turbulent inflow information is generated by following the concept of the rescaling-recycling approach introduced by Lund et al. (J. Comp. Phys. 140, 233-258, 1998), although the methodology is extended here to compressible flows. Furthermore, a dynamic approach is employed to connect the friction velocities at the inlet and recycle stations (i.e., there is no need for an empirical correlation as in Lund et al.). Additionally, Morkovin's Strong Reynolds Analogy (SRA) is used in the rescaling process of the thermal fluctuations from the recycle plane. Low- and high-order flow statistics are compared with direct simulations of an incompressible isothermal ZPG boundary layer at similar Reynolds numbers, with temperature regarded as a passive scalar. Focus is given to assessing the effect of flow compressibility on the dynamics of thermal coherent structures. AFOSR #FA9550-17-1-0051.
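For orientation, the SRA invoked in the rescaling relates thermal and velocity fluctuations; one common form (given here for context, the paper may use a generalized variant) reads:

```latex
% Morkovin's Strong Reynolds Analogy, relating temperature and velocity
% fluctuations in a compressible boundary layer (one common form):
\[
  \frac{T'_{\mathrm{rms}}}{\bar{T}} \approx (\gamma - 1)\, M^2\,
  \frac{u'_{\mathrm{rms}}}{\bar{u}},
\]
% with local Mach number M = \bar{u} / \sqrt{\gamma R \bar{T}}.
```

This is what allows the temperature fluctuations at the recycle plane to be rescaled consistently with the velocity field.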
NASA Astrophysics Data System (ADS)
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
High resolution in satellite imagery brings with it a fundamental problem: the large volume of telemetry data that must be stored after downlink. Moreover, post-processing and image-enhancement steps further increase file sizes, making the data harder to store and slower to transmit from one place to another; hence, compressing both raw data and the various levels of processed data is a necessity for archiving stations. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open-source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites (1.5 m GSD), acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and Burrows-Wheeler Transform (BWT), in order to observe the compression performance of these algorithms over sample datasets, in terms of how much the image data can be compressed while ensuring lossless compression.
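Several of the algorithm families listed (Deflate, LZMA, and BWT-based coding) have counterparts in the Python standard library (`zlib`, `lzma`, `bz2`), so the shape of such a benchmark is easy to sketch. The byte pattern below is synthetic, standing in for real GeoTIFF bands; losslessness is verified by round-tripping.

```python
import bz2
import lzma
import zlib

# Compare lossless compressors on a synthetic, redundant byte pattern
# standing in for image data (real GeoTIFF bands would be used in
# practice).  Each codec must round-trip exactly to count as lossless.
data = bytes(range(256)) * 256  # 64 KiB with strong byte-level structure

codecs = {
    "deflate (zlib)": (zlib.compress, zlib.decompress),
    "lzma": (lzma.compress, lzma.decompress),
    "bwt-based (bz2)": (bz2.compress, bz2.decompress),
}
for name, (comp, decomp) in codecs.items():
    packed = comp(data)
    assert decomp(packed) == data  # lossless round trip
    print(f"{name}: ratio {len(data) / len(packed):.1f}:1")
```

On real imagery the ranking and ratios depend strongly on sensor noise and bit depth, which is exactly what the study measures.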
On the possibility of generation of cold and additional electric energy at thermal power stations
NASA Astrophysics Data System (ADS)
Klimenko, A. V.; Agababov, V. S.; Borisova, P. N.
2017-06-01
A layout of a cogeneration plant for centralized supply of electricity and cold to users (ECCG plant) is presented. The basic components of the plant are an expander-generator unit (EGU) and a vapor-compression thermotransformer (VCTT). At the natural-gas-pressure-reducing stations, viz., gas-distribution stations and gas-control units, the plant is connected in parallel to a throttler and replaces the latter completely or partially. The plant operates using only the energy of the natural gas flow without burning the gas; therefore, it can be classified as a fuelless installation. The authors compare the thermodynamic efficiencies of a centralized cold supply system based on the proposed plant integrated into the thermal power station scheme and a decentralized cold supply system in which the cold is generated by electrically driven vapor-compression thermotransformers installed on the user's premises. For the comparative analysis, the exergy efficiency was taken as the criterion, since the systems under investigation generate both electricity and cold, which are different kinds of energy. It is shown that the thermodynamic efficiency of the power supply using the proposed plant proves to be higher within the entire range of the parameters under consideration. The article presents the results of investigating the impact of the gas heating temperature upstream of the expander on the electric power of the plant, its total cooling capacity, and the cooling capacities of the heat exchangers installed downstream of the EGU and the evaporator of the VCTT. The calculations show that the cold generated at the gas-control unit of a powerful thermal power station can be used for the centralized supply of cold to the ventilation and conditioning systems of both the power station's buildings and the neighboring dwelling houses, schools, and public facilities during the summer season.
Compressed NMR: Combining compressive sampling and pure shift NMR techniques.
Aguilar, Juan A; Kenwright, Alan M
2017-12-26
Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling and the existence of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates, and pure shift techniques eliminate the second issue by "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods are shown.
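The sub-Nyquist idea behind CS can be sketched with a toy sparse-recovery example (a generic compressive-sensing illustration, not the paper's NMR pipeline): a 1-sparse signal of length 128 is recovered from only 32 random measurements.

```python
import random

# Toy compressive sensing: recover a 1-sparse length-n signal from m << n
# random linear measurements.  A single matching-pursuit step suffices
# (with overwhelming probability for a random Gaussian matrix).
random.seed(0)
n, m = 128, 32
true_idx, true_amp = 17, 3.0

A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
y = [A[i][true_idx] * true_amp for i in range(m)]  # y = A @ x, x 1-sparse

def recover_1sparse(A, y):
    m, n = len(A), len(A[0])
    # pick the column most correlated with y, then least-squares its amplitude
    corr = [abs(sum(A[i][j] * y[i] for i in range(m))) for j in range(n)]
    j = max(range(n), key=lambda k: corr[k])
    col = [A[i][j] for i in range(m)]
    amp = sum(c * v for c, v in zip(col, y)) / sum(c * c for c in col)
    return j, amp

idx, amp = recover_1sparse(A, y)
assert idx == true_idx and abs(amp - true_amp) < 1e-9
```

In the NMR setting the sparsity lives in the spectral domain, and pure shift acquisition increases that sparsity (singlets instead of multiplets), which is precisely why the two techniques combine well.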
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but have the disadvantages of complexity and computation cost. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different sizes of data files are graphically presented and discussed in the paper. Adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
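The Lempel-Ziv family evaluated here can be illustrated with a compact LZW coder (a sketch for intuition, not the implementation benchmarked in the paper): the dictionary adapts to the data as it streams, which is what makes the method universal.

```python
# Minimal LZW compressor/decompressor over bytes, illustrating the
# adaptive dictionary idea behind the Lempel-Ziv family.
def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)   # grow the dictionary adaptively
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]  # KwKwK case
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT" * 10
codes = lzw_compress(data)
assert lzw_decompress(codes) == data
print(len(data), "bytes ->", len(codes), "codes")  # repetitive input shrinks
```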
DCS - A high flux beamline for time resolved dynamic compression science – Design highlights
Capatina, D.; D’Amico, K.; Nudell, J.; ...
2016-07-27
The Dynamic Compression Sector (DCS) beamline, a national user facility for time resolved dynamic compression science supported by the National Nuclear Security Administration (NNSA) of the Department of Energy (DOE), has recently completed construction and is being commissioned at Sector 35 of the Advanced Photon Source (APS) at Argonne National Laboratory (ANL). The beamline consists of a First Optics Enclosure (FOE) and four experimental enclosures. A Kirkpatrick–Baez focusing mirror system with 2.2 mrad incident angles in the FOE delivers pink beam to the experimental stations. A refocusing Kirkpatrick–Baez mirror system is situated in each of the two most downstream enclosures. Experiments can be conducted in either white, monochromatic, pink or monochromatic-reflected beam mode in any of the experimental stations by changing the position of two interlocked components in the FOE. The beamline Radiation Safety System (RSS) components have been designed to handle the continuous beam provided by two in-line revolver undulators with periods of 27 and 30 mm, at closed gap, 150 mA beam current, and passing through a power limiting aperture of 1.5 × 1.0 mm². A novel pink beam end station stop [1] is used to stop the continuous and focused pink beam, which can achieve a peak heat flux of 105 kW/mm². Finally, a new millisecond shutter design [2] is used to deliver a quick pulse of beam to the sample, synchronized with the dynamic event, the microsecond shutter, and the storage ring clock.
Space Station Freedom solar array containment box mechanisms
NASA Technical Reports Server (NTRS)
Johnson, Mark E.; Haugen, Bert; Anderson, Grant
1994-01-01
Space Station Freedom will feature six large solar arrays, called solar array wings, built by Lockheed Missiles & Space Company under contract to Rockwell International, Rocketdyne Division. Solar cells are mounted on flexible substrate panels which are hinged together to form a 'blanket.' Each wing comprises two blankets supported by a central mast, producing approximately 32 kW of power at beginning-of-life. During launch, the blankets are fan-folded and compressed to 1.5 percent of their deployed length into containment boxes. This paper describes the main containment box mechanisms designed to protect, deploy, and retract the solar array blankets: the latch, blanket restraint, tension, and guidewire mechanisms.
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
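The XOR-leading-zero metric the abstract refers to can be sketched as follows (a generic illustration of the measure, not the paper's optimized implementation): two close floating-point values share their high-order bits, so their XOR has a long run of leading zeros that need not be stored.

```python
import struct

# Number of leading zero bits in the XOR of two IEEE-754 doubles.  Close
# values share exponent and high mantissa bits, so the XOR leading-zero
# run is long and only the trailing bits need encoding.
def xor_leading_zeros(a: float, b: float) -> int:
    ua = struct.unpack("<Q", struct.pack("<d", a))[0]
    ub = struct.unpack("<Q", struct.pack("<d", b))[0]
    x = ua ^ ub
    return 64 if x == 0 else 64 - x.bit_length()

print(xor_leading_zeros(1.2345, 1.2345))   # identical values -> 64
print(xor_leading_zeros(1.2345, 1.2346))   # close values -> long zero run
print(xor_leading_zeros(1.2345, -987.0))   # distant values -> short run
```

Shifting the data by an offset, as the paper proposes, aims to make consecutive unpredictable values land closer together in this bit-level sense, lengthening the shared prefix.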
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
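The idea of losslessly packing DPCM deltas can be sketched with standard-library tools (an illustration only: ATARS uses its own delta format, and the paper compares Huffman, LZ, LZW and Rice coders rather than Deflate). On smooth sensor data the deltas are far more redundant than the raw samples, so a lossless coder shrinks them further without adding any error.

```python
import zlib

# Raw "imagery" here is a slow byte ramp; its DPCM deltas are nearly all
# zeros, so a lossless coder (Deflate) compresses them much better.
raw = bytes((i // 4) % 256 for i in range(4096))
deltas = bytes((raw[i] - raw[i - 1]) % 256 for i in range(1, len(raw)))

packed_raw = zlib.compress(raw, 9)
packed_deltas = zlib.compress(deltas, 9)

# Decoding is exact: the first sample plus cumulative deltas rebuilds raw.
rebuilt = [raw[0]]
for d in zlib.decompress(packed_deltas):
    rebuilt.append((rebuilt[-1] + d) % 256)
assert bytes(rebuilt) == raw
print(len(packed_raw), len(packed_deltas))  # deltas compress to fewer bytes
```

With noisy imagery the deltas are less predictable, which is exactly the noise sensitivity the paper reports.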
Magnetic Flux Compression Concept for Aerospace Propulsion and Power
NASA Technical Reports Server (NTRS)
Litchford, Ron J.; Robertson, Tony; Hawk, Clark W.; Turner, Matt; Koelfgen, Syri
2000-01-01
The objective of this research is to investigate system level performance and design issues associated with magnetic flux compression devices for aerospace power generation and propulsion. The proposed concept incorporates the principles of magnetic flux compression for direct conversion of nuclear/chemical detonation energy into electrical power. Specifically, a magnetic field is compressed between an expanding detonation driven diamagnetic plasma and a stator structure formed from a high temperature superconductor (HTSC). The expanding plasma cloud is entirely confined by the compressed magnetic field at the expense of internal kinetic energy. Electrical power is inductively extracted, and the detonation products are collimated and expelled through a magnetic nozzle. The long-term development of this highly integrated generator/propulsion system opens up revolutionary NASA mission scenarios for future interplanetary and interstellar spacecraft. The unique features of this concept with respect to future space travel opportunities are as follows: ability to implement high energy density chemical detonations or ICF microfusion bursts as the impulsive diamagnetic plasma source; high power density system characteristics constrain the size, weight, and cost of the vehicle architecture; provides inductive storage pulse power with a very short pulse rise time; multimegajoule energy bursts/terawatt power bursts; compact pulse power driver for low-impedance dense plasma devices; utilization of low cost HTSC material and casting technology to increase magnetic flux conservation and inductive energy storage; improvement in chemical/nuclear-to-electric energy conversion efficiency and the ability to generate significant levels of thrust with very high specific impulse; potential for developing a small, lightweight, low cost, self-excited integrated propulsion and power system suitable for space stations, planetary bases, and interplanetary and interstellar space travel.
Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidovits, Seth; Fisch, Nathaniel J.
Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^{5/2}. The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.
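For orientation (standard ideal-gas scalings, not results from the paper): under adiabatic three-dimensional compression at fixed ionization, T ∝ V^{-(γ-1)} with γ = 5/3 for a monatomic plasma, so the μ ~ T^{5/2} law implies a steep viscosity rise as the volume shrinks:

```latex
% Viscosity growth under adiabatic 3-D compression at fixed ionization:
\[
  \frac{\mu}{\mu_0}
  = \left(\frac{T}{T_0}\right)^{5/2}
  = \left(\frac{V_0}{V}\right)^{5(\gamma-1)/2}
  = \left(\frac{V_0}{V}\right)^{5/3}
  \qquad \left(\gamma = \tfrac{5}{3}\right).
\]
```

This rapid growth is what enables the sudden dissipation; a rising ionization state during compression reduces the viscosity and can offset it, as the abstract describes.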
Compressed Natural Gas (CNG) Transit Bus Experience Survey: April 2009--April 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, R.; Horne, D. B.
2010-09-01
This survey was commissioned by the U.S. Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) to collect and analyze experiential data and information from a cross-section of U.S. transit agencies with varying degrees of compressed natural gas (CNG) bus and station experience. This information will be used to assist DOE and NREL in determining areas of success and areas where further technical or other assistance might be required, and to assist them in focusing on areas judged by the CNG transit community as priority items.
Compression for radiological images
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
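The DCT stage can be sketched in a few lines (a naive 1-D transform pair for intuition; real JPEG-style coders work on 8x8 blocks with quantization tables and entropy coding): discarding small high-frequency coefficients compresses local brightness changes with little reconstruction error.

```python
import math

# Naive 1-D DCT-II and its inverse; dropping small high-frequency
# coefficients approximates the quantization step of DCT-based coders.
def dct(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct(X):
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(
                X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(1, N))
            for n in range(N)]

row = [100, 102, 104, 108, 112, 114, 115, 115]       # smooth brightness ramp
coeffs = dct(row)
kept = [c if abs(c) > 1.0 else 0.0 for c in coeffs]  # drop tiny coefficients
approx = idct(kept)
max_err = max(abs(a - b) for a, b in zip(row, approx))
assert max_err < 2.0  # smooth data survives coefficient truncation
```

For diagnosis-grade images the abstract's point stands: only bit-preserving (lossless) coding is acceptable, and a truncating scheme like this would be reserved for archives.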
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A videotape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
Milliwatt radioisotope power supply for the PASCAL Mars surface stations
NASA Astrophysics Data System (ADS)
Allen, Daniel T.; Murbach, Marcus S.
2001-02-01
A milliwatt power supply is being developed based on the 1 watt Light-Weight Radioisotope Heater Unit (RHU), which has already been used to provide heating alone on numerous spacecraft. In the past year the power supply has been integrated into the design of the proposed PASCAL Mars Network Mission, which is intended to place 24 surface climate monitoring stations on Mars. The PASCAL Mars mission calls for the individual surface stations to be transported together in one spacecraft on a trajectory direct from launch to orbit around Mars. From orbit around Mars each surface station will be deployed on a SCRAMP (slotted compression ramp) probe and, after aerodynamic and parachute deceleration, land at a preselected location on the planet. During descent, sounding data and still images will be accumulated, and, once on the surface, the station will take measurements of pressure, temperature and overhead atmospheric optical depth for a period of 10 Mars years (18.8 Earth years). Power for periodic data acquisition and transmission (to an orbital relay and then to Earth) will come from a bank of ultracapacitors which will be continuously recharged by the radioisotope power supply. This electronic system has been designed and a breadboard built. In the ultimate design the electronics will be arrayed on the exterior surface of the radioisotope power supply in order to take advantage of the reject heat. This assembly in turn is packaged within the SCRAMP, and that assembly comprises the surface station. An electrically heated but otherwise prototypical power supply was operated in combination with the surface station breadboard system, which included the ultracapacitors. Other issues addressed in this work have been the capability of the generator to withstand the mechanical shock of the landing on Mars and the effectiveness of the generator's multi-foil vacuum thermal insulation.
Turbulence in Compressible Flows
NASA Technical Reports Server (NTRS)
1997-01-01
Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.
JPEG 2000 in advanced ground station architectures
NASA Astrophysics Data System (ADS)
Chien, Alan T.; Brower, Bernard V.; Rajan, Sreekanth D.
2000-11-01
The integration and management of information from distributed and heterogeneous information producers and providers must be a key foundation of any developing imagery intelligence system. Historically, imagery providers acted as production agencies for imagery, imagery intelligence, and geospatial information. In the future, these imagery producers will evolve to act more like e-business information brokers. The management of imagery and geospatial information (visible, spectral, infrared (IR), radar, elevation, or other feature and foundation data) is crucial from a quality and content perspective. By 2005, there will be significantly advanced collection systems and a myriad of storage devices. There will also be a number of automated and man-in-the-loop correlation, fusion, and exploitation capabilities. All of these new imagery collection and storage systems will result in a higher volume and greater variety of imagery being disseminated and archived in the future. This paper illustrates the importance, from a collection, storage, exploitation, and dissemination perspective, of the proper selection and implementation of standards-based compression technology for ground station and dissemination/archive networks. It specifically discusses the new compression capabilities featured in JPEG 2000 and how that commercially based technology can provide significant improvements to the overall imagery and geospatial enterprise, both from an architectural perspective and from a user's perspective.
Hildebrand, Richard J.; Wozniak, John J.
2001-01-01
A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.
Data Compression Techniques for Maps
1989-01-01
Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. ... The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image ...
Mammographic compression in Asian women.
Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong
2017-01-01
To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area varied widely both between Asian women [relative standard deviation (RSD)≥21.0%] and within them (p<0.0001). The median compression force should be about 8.1 daN, compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and had no significant effect on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.
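The reported maximum force reduction is consistent with the median values quoted above: going from the current 12.0 daN median to the proposed 8.1 daN is exactly the 32.5% figure. A quick arithmetic check, using only numbers from the abstract:

```python
median_current = 12.0   # daN, current median compression force (abstract)
median_proposed = 8.1   # daN, proposed median compression force (abstract)

# fractional reduction in compression force
reduction = (median_current - median_proposed) / median_current
print(f"Force reduction: {reduction:.1%}")  # 32.5%
```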
Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les
2012-12-01
This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine format to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
Alternative Compression Garments
NASA Technical Reports Server (NTRS)
Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.
2011-01-01
Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.
Biological sequence compression algorithms.
Matsumoto, T; Sadakane, K; Imai, H
2000-01-01
Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will continue to grow, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences effectively, and may only expand them in size. On the other hand, CTW (the Context Tree Weighting method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known: one is palindromes, or reverse complements, and the other is approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences can be exploited. Before encoding the next symbol, the algorithm searches for an approximate repeat or palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, then the algorithm represents it with a length and a distance. With this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.
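The two-bits-per-symbol benchmark that recurs in this abstract follows from the four-letter alphabet: a fixed 2-bit code is the naive ceiling that general-purpose tools struggle to reach and that DNA-specific compressors beat. A minimal sketch of fixed 2-bit packing (illustrative baseline only, not the CTW method of the paper):

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack(seq: str) -> bytes:
    """Pack a DNA string at exactly 2 bits per base (4 bases per byte).

    The sequence is padded with 'A' to a multiple of 4 bases."""
    seq = seq + "A" * (-len(seq) % 4)
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from packed bytes."""
    bases = []
    for byte in data:
        chunk = []
        for _ in range(4):
            chunk.append(BASES[byte & 3])
            byte >>= 2
        bases.extend(reversed(chunk))
    return "".join(bases)[:n]
```

Any compressor that fails to get below one packed byte per four bases is doing worse than this trivial code, which is the sense in which gzip-style tools "cannot compress" DNA.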
Video bandwidth compression system
NASA Astrophysics Data System (ADS)
Ludington, D.
1980-08-01
The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
Compressed baryonic matter at FAIR: JINR participation
NASA Astrophysics Data System (ADS)
Kurilkin, P.; Ladygin, V.; Malakhov, A.; Senger, P.
2015-11-01
The scientific mission of the Compressed Baryonic Matter (CBM) experiment is the study of nuclear matter properties at high baryon densities in heavy-ion collisions at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt. We present results on JINR participation in the CBM experiment. JINR teams are responsible for the design and the coordination of the superconducting (SC) magnet manufacture, as well as its testing and installation in the CBM cave. Together with the Silicon Tracking System, it will provide a momentum resolution better than 1% for different configurations of the CBM setup. The characteristics and technical aspects of the magnet are discussed. JINR also plays a significant role in the manufacture of two straw tracker stations for the muon detection system. The JINR team takes part in the development of new methods for the simulation, processing, and analysis of experimental data from the basic CBM detectors.
Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.
ERIC Educational Resources Information Center
Sticht, Thomas G.
The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…
Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew
2016-12-15
Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that it can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format, and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM. Contact: canovas-ba@lirmm.fr. Supplementary data are available at Bioinformatics online.
Lietaert, Karel; Cutolo, Antonio; Boey, Dries; Van Hooreweder, Brecht
2018-03-21
Mechanical performance of additively manufactured (AM) Ti6Al4V scaffolds has mostly been studied in uniaxial compression. However, in real-life applications, more complex load conditions occur. To address this, a novel sample geometry was designed, tested and analyzed in this work. The new scaffold geometry, with porosity gradient between the solid ends and scaffold middle, was successfully used for quasi-static tension, tension-tension (R = 0.1), tension-compression (R = -1) and compression-compression (R = 10) fatigue tests. Results show that global loading in tension-tension leads to a decreased fatigue performance compared to global loading in compression-compression. This difference in fatigue life can be understood fairly well by approximating the local tensile stress amplitudes in the struts near the nodes. Local stress based Haigh diagrams were constructed to provide more insight in the fatigue behavior. When fatigue life is interpreted in terms of local stresses, the behavior of single struts is shown to be qualitatively the same as bulk Ti6Al4V. Compression-compression and tension-tension fatigue regimes lead to a shorter fatigue life than fully reversed loading due to the presence of a mean local tensile stress. Fractographic analysis showed that most fracture sites were located close to the nodes, where the highest tensile stresses are located.
Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Rao, Xiongbin; Lau, Vincent K. N.
2014-06-01
To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme in which the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through closed-form expressions we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.
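The joint orthogonal matching pursuit recovery described above builds on standard OMP: greedily pick the dictionary column most correlated with the current residual, then re-fit the selected coefficients by least squares. A single-user sketch of plain OMP (not the joint multi-user variant of the paper; the toy dimensions are invented for illustration):

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # greedy step: column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# toy example: 3-sparse "channel" of length 100 from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, k=3)
print(np.linalg.norm(x_hat - x_true))
```

The compressive gain is the same one the abstract exploits: 40 measurements suffice for a length-100 vector because only 3 entries are nonzero.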
NASA Technical Reports Server (NTRS)
Perry, J. C. (Inventor)
1980-01-01
A system for displaying at a remote station data generated at a central station and for powering the remote station from the central station is presented. A power signal is generated at the central station and time multiplexed with the data and then transmitted to the remote station. An energy storage device at the remote station is responsive to the transmitted power signal to provide energizing power for the circuits at the remote station during the time interval data is being transmitted to the remote station. Energizing power for the circuits at the remote station is provided by the power signal itself during the time this signal is transmitted. Preferably the energy storage device is a capacitor which is charged by the power signal during the time the power is transmitted and is slightly discharged during the time the data is transmitted to energize the circuits at the remote station.
Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine
NASA Astrophysics Data System (ADS)
Moura, A. F.; Wheatley, V.; Jahn, I.
2018-07-01
The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further
Bunch length compression method for free electron lasers to avoid parasitic compressions
Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy
2015-05-26
A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56 > 0), and 3) compensating for aberration by using nonlinear magnets in the compressor beam line.
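The interplay of the three steps can be seen from standard linear longitudinal beam optics (a general accelerator-physics relation, sketched here as background; it is not quoted from the patent itself):

```latex
% Longitudinal transport through the compressor: a particle at initial
% position z_i with fractional momentum deviation \delta exits at
z_f = z_i + R_{56}\,\delta + T_{566}\,\delta^2 + \cdots
% With a linear energy chirp h = d\delta/dz_i imprinted by the RF,
% the linear term gives z_f \approx (1 + h R_{56})\, z_i, so bunch
% compression requires h R_{56} < 0. For positive momentum compaction
% (R_{56} > 0) this forces h < 0, i.e., acceleration on the falling
% portion of the RF waveform; the T_{566} term is the aberration
% compensated by the nonlinear magnets in the compressor beam line.
```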
NASA Technical Reports Server (NTRS)
Finckenor, M. M.; Albyn, K. C.; Watts, E. W.
2006-01-01
On-orbit photos of the International Space Station (ISS) solar array blanket box foam pad assembly indicate degradation of the Kapton film covering the foam, leading to atomic oxygen (AO) exposure of the foam. The purpose of this test was to determine the magnitude of particulate generation caused by low-Earth orbital environment exposure of the foam and also by compression of the foam during solar array wing retraction. The polyimide foam used in the ISS solar array wing blanket box assembly is susceptible to significant AO erosion. The foam sample in this test lost one-third of its mass after exposure to the equivalent of 22 months on orbit. Some particulate was generated by exposure to simulated orbital conditions and the simulated solar array retraction (compression test). However, on orbit, these particles would also be eroded by AO. The captured particles were generally <1 mm, and the particles shaken free of the sample had a maximum size of 4 mm. The foam sample maintained its integrity after a compression load of 2.5 psi.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image is compressed by identifying edge pixels of the image and creating a filled edge array of pixels: each pixel in the filled edge array that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each pixel that does not correspond to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels. The filled edge array is then subtracted from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
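The filling step in this patent solves Laplace's equation with a multi-grid method. The same boundary-value problem can be illustrated with a plain Jacobi iteration (far slower than multi-grid, but converging to the same fixed point): non-edge pixels relax toward the average of their neighbors while edge pixels stay pinned. A minimal sketch, with the function name and iteration count invented for illustration:

```python
import numpy as np

def fill_from_edges(values, edge_mask, iters=2000):
    """Fill non-edge pixels by Jacobi iteration for Laplace's equation.

    Pixels where edge_mask is True keep their value; all other pixels
    relax toward the average of their four neighbors."""
    # initialize non-edge pixels to the mean of the edge values
    filled = np.where(edge_mask, values, values[edge_mask].mean())
    for _ in range(iters):
        # four-neighbor average, with replicated borders
        padded = np.pad(filled, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        filled = np.where(edge_mask, values, avg)
    return filled
```

Pinning the left column of a small grid to 0 and the right column to 1, for example, relaxes the interior to a linear ramp, the 1-D harmonic solution.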
NASA Astrophysics Data System (ADS)
Yong, Sang-Soon; Ra, Sung-Woong
2007-10-01
The Multi-Spectral Camera (MSC) is the main payload on the KOMPSAT-2 satellite for earth remote sensing. The MSC instrument has one channel for panchromatic imaging and four channels for multi-spectral imaging, covering the spectral range from 450 nm to 900 nm using a TDI CCD focal plane array (FPA). The instrument images the earth in a push-broom motion with a swath width of 15 km and a ground sample distance (GSD) of 1 m over the entire field of view (FOV) at an altitude of 685 km. The instrument is designed for an on-orbit operation duty cycle of 20% over the mission lifetime of 3 years, with programmable gain/offset and on-board image data compression/storage. The compression method on the KOMPSAT-2 MSC was selected to match the EOS input rate and the PDTS output data rate on the MSC image data chain. MSC performance was carefully monitored to minimize any degradation, and it was analyzed and restored at the KGS (KOMPSAT Ground Station) during the LEOP and Cal/Val (calibration and validation) phases. In this paper, the on-orbit image data chain in the MSC and the image data processing at the KGS, including a general MSC description, are briefly described. The influence on image performance of the on-board compression algorithms and of the performance restoration methods in the ground station is analyzed, and the relation between the two is discussed.
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, as is how they are combined to form a new strategy for performing dynamic on-line lossy compression. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
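The vector-quantization half of that strategy can be sketched independently of the MPP implementation: split the signal into fixed-size blocks, map each block to the index of its nearest entry in a small codebook, and transmit only the indices. A toy 1-D version (illustrative only; the codebook and signal are invented, and the paper's scheme is adaptive and on-line rather than fixed):

```python
import numpy as np

def vq_encode(signal, codebook, block=4):
    """Quantize each length-`block` chunk to the index of the nearest codeword."""
    blocks = signal.reshape(-1, block)
    # squared Euclidean distance from every block to every codeword
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct the (lossy) signal from codeword indices."""
    return codebook[indices].ravel()

codebook = np.array([[0, 0, 0, 0],
                     [1, 1, 1, 1],
                     [0, 0, 1, 1]], dtype=float)
signal = np.array([0.1, -0.1, 0.0, 0.2, 0.9, 1.1, 1.0, 0.8])
idx = vq_encode(signal, codebook)
print(idx)
```

Each four-sample block is replaced by one small integer, which is where the lossy compression comes from; the learning part of the paper's method corresponds to updating the codebook as data streams through.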
Sequential neural text compression.
Schmidhuber, J; Heil, S
1996-01-01
The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
1993-12-01
NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless. Author: Abbott, Walter D., III. Approved for public release; distribution is unlimited.
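The thesis title describes converting a lossy process to a lossless one. A standard way to do this (the general idea, not necessarily the thesis's exact algorithm) is residual coding: transmit the lossy reconstruction plus a losslessly compressed residual, so the decoder recovers the original exactly. A minimal sketch with an invented stand-in for the lossy stage:

```python
import zlib
import numpy as np

def lossy(x):
    """Stand-in lossy process: coarse 16-level quantization."""
    return (x // 16) * 16

original = np.arange(256, dtype=np.int16) % 97
approx = lossy(original)

# encoder side: the residual is small-magnitude and highly compressible
residual = (original - approx).astype(np.int8)
payload = zlib.compress(residual.tobytes(), 9)

# decoder side: exact reconstruction from lossy output + residual
restored = approx + np.frombuffer(zlib.decompress(payload), dtype=np.int8)
print(bool(np.array_equal(restored, original)))
```

The overhead is exactly the entropy of the residual, which is low whenever the lossy stage is a good approximation, which is the sense in which the conversion is "low overhead."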
Using phase contrast imaging to measure the properties of shock compressed aerogel
NASA Astrophysics Data System (ADS)
Hawreliak, James; Erskine, Dave; Schropp, Andres; Galtier, Eric C.; Heimann, Phil
2017-01-01
The Hugoniot states of low density materials, such as silica aerogel, are used in high energy density physics research because they can achieve a range of high temperature and pressure states through shock compression. The shock properties of 100 mg/cc silica aerogel were studied at the Materials in Extreme Conditions end station using x-ray phase contrast imaging (PCI) of spherically expanding shock waves. The shock waves were generated by focusing a high power 532 nm laser to a 50 μm focal spot on a thin aluminum ablator. The shock speed was measured in separate experiments using line-VISAR measurements from the reflecting shock front. The relative timing between the x-ray probe and the optical laser pump was varied so that x-ray PCI images were taken at pressures between 10 GPa and 30 GPa. Modeling the compression of the foam in the strong shock limit requires a Grüneisen parameter of 0.49 to fit the data, rather than the value of 0.66 that would correspond to a plasma state.
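The fitted Grüneisen parameter maps directly onto a limiting density jump through the strong-shock limit of the Rankine-Hugoniot relations for a Mie-Grüneisen material (a standard result, sketched here as a consistency check on the values quoted above):

```latex
% Strong-shock limiting compression for a Mie-Gruneisen material
% with Gruneisen parameter \Gamma:
\frac{\rho}{\rho_0} \;\longrightarrow\; 1 + \frac{2}{\Gamma}
% \Gamma = 0.66 (approximately the ideal-gas value \gamma - 1 = 2/3
% for \gamma = 5/3, i.e., a plasma state) caps the compression near 4,
% while the fitted \Gamma = 0.49 corresponds to a limiting
% compression of about 5.
```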
Compressing DNA sequence databases with coil.
White, W Timothy J; Hendy, Michael D
2008-05-20
Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
Fu, C.Y.; Petrich, L.I.
1997-03-25
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels in which each pixel corresponding to an edge pixel has a value equal to the value of an image-array pixel selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value that is a weighted average of the values of surrounding filled-edge-array pixels that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each image according to the approach selected, and transmitting each compressed image together with an indication of the approach selected for it. 16 figs.
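The filling step, solving Laplace's equation with edge pixels as boundary values, can be sketched with plain Jacobi iteration. The patent accelerates this with a multi-grid solver; the toy 5x5 grid and the choice of boundary pixels below are illustrative assumptions only.

```python
def laplace_fill(grid, known, iters=500):
    """Fill unknown pixels by solving Laplace's equation with Jacobi
    iteration: each unknown pixel is repeatedly replaced by the average
    of its available neighbours, with `known` (edge) pixels held fixed."""
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        nxt = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                if (y, x) in known:
                    continue
                nbrs = [grid[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                nxt[y][x] = sum(nbrs) / len(nbrs)
        grid = nxt
    return grid

# "Edge" pixels on the left (value 0) and right (value 100) borders;
# the interior is interpolated smoothly between them.
h, w = 5, 5
grid = [[0.0] * w for _ in range(h)]
known = {(y, 0) for y in range(h)} | {(y, w - 1) for y in range(h)}
for y in range(h):
    grid[y][w - 1] = 100.0
filled = laplace_fill(grid, known)
print([round(v, 1) for v in filled[2]])
```

The interior converges to a linear ramp between the two boundaries, which is the harmonic (Laplace) solution for this configuration.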
Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression
NASA Astrophysics Data System (ADS)
Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping
2015-10-01
Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold compression at room temperature and in hot compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former than the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression for a fundamental understanding of HDA silica.
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
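The DPCM core that such a CODEC builds on can be sketched in a few lines: predict each sample from the previously reconstructed one and transmit only the quantized prediction error. The enhanced algorithm, its buffer control, and the multilevel Huffman stage are not reproduced here; the 1-D signal and step size are toy assumptions.

```python
def dpcm_encode(samples, step=4):
    """DPCM: transmit quantized differences between each sample and a
    prediction (here simply the previously reconstructed sample).
    Using the *reconstructed* value as the predictor keeps encoder and
    decoder in lockstep, so quantization error does not accumulate."""
    pred = 0
    indices = []
    for s in samples:
        idx = round((s - pred) / step)   # quantized prediction error
        indices.append(idx)
        pred = pred + idx * step         # decoder-matched reconstruction
    return indices

def dpcm_decode(indices, step=4):
    pred, out = 0, []
    for idx in indices:
        pred = pred + idx * step
        out.append(pred)
    return out

signal = [0, 3, 8, 14, 18, 20, 19, 15]
idx = dpcm_encode(signal)
recon = dpcm_decode(idx)
print("indices:", idx)
print("reconstruction:", recon)
```

The reconstruction error stays bounded by half the quantizer step; an entropy coder over the small difference indices is where the compression gain comes from.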
Compressive sensing in medical imaging
Graff, Christian G.; Sidky, Emil Y.
2015-01-01
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
Adaptive efficient compression of genomes
2012-01-01
Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained considerable interest in this field. However, memory requirements of current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
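The referential idea, storing only differences against a reference sequence, can be sketched with a greedy k-mer matcher. This is a toy stand-in, not the paper's parallel algorithm; real tools index the reference with hash tables or suffix structures and entropy-code the resulting operations.

```python
def referential_compress(target, reference, k=4):
    """Greedy referential compression: emit ('copy', pos, length) matches
    against the reference, falling back to single-character literals."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], i)  # first occurrence of each k-mer
    ops, i = [], 0
    while i < len(target):
        pos = index.get(target[i:i + k])
        if pos is None:
            ops.append(("lit", target[i]))
            i += 1
        else:
            length = k  # extend the seed match as far as it goes
            while (i + length < len(target) and pos + length < len(reference)
                   and target[i + length] == reference[pos + length]):
                length += 1
            ops.append(("copy", pos, length))
            i += length
    return ops

def referential_decompress(ops, reference):
    out = []
    for op in ops:
        if op[0] == "lit":
            out.append(op[1])
        else:
            _, pos, length = op
            out.append(reference[pos:pos + length])
    return "".join(out)

ref = "ACGTACGTTTGACCAGT"
tgt = "ACGTACGATTGACCAGT"   # one substitution relative to ref
ops = referential_compress(tgt, ref)
print(ops)
```

A near-identical genome collapses to a handful of copy operations plus literals at the variant sites, which is where ratios like 400:1 come from.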
Wang, Xiaoyuan; Xie, Bing; Wu, Dong; Hassan, Muhammad; Huang, Changying
2015-09-01
The generation and seasonal variation of secondary pollutants were investigated at three municipal solid waste (MSW) compression and transfer stations in Shanghai, China. The raw wastewater generated at the three MSW transfer stations had pH 4.2-6.0, COD 40,000-70,000 mg/L, BOD5 15,000-25,000 mg/L, ammonia nitrogen (NH3-N) 400-700 mg/L, total nitrogen (TN) 600-1500 mg/L, total phosphorus (TP) 50-200 mg/L and suspended solids (SS) 1000-80,000 mg/L. pH, COD, BOD5 and NH3-N showed no regular change throughout the year, while the concentrations of TN, TP and SS were higher in summer and autumn. The animal and vegetable oil content was extremely high. The raw wastewater produced at the three transfer stations averaged 2.3% to 8.4% of total refuse. The major air pollutants in the transfer stations were H2S at 0.01-0.17 mg/m³ and NH3 at 0.75-1.8 mg/m³; no regular seasonal change was observed. During the transfer process, the leachate generated in the containers had pH 5.7-6.4 and SS of 9120-32,475 mg/L. COD and BOD5 were 41,633-89,060 mg/L and 18,116-34,130 mg/L respectively, higher than in the compression process. The concentrations of NH3-N and TP were 587-1422 mg/L and 80-216 mg/L, respectively, and both increased during the transfer process. H2S, VOC, CH4 and NH3 were 0.4-4 mg/m³, 7-19 mg/m³, 0-3.4% and 1-4 mg/m³, respectively. The PCA analysis showed that the production of secondary pollutants is closely related to temperature, especially for CH4; avoiding high temperature is therefore a key means of reducing the production of gaseous pollutants. Above all, refuse classification at the source, deodorization and anti-acid corrosion are the important processes to control secondary pollutants during compression and transfer of MSW. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mužíková, Jitka; Kubíčková, Alena
2016-09-01
The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50 % concentrations, the lubricant being magnesium stearate in a 1 % concentration. Compressibility is evaluated by means of the energy profile of compression process and compactibility by the tensile strength of tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders.
Compression fractures of the back
Treatment can include surgery: balloon kyphoplasty, vertebroplasty, or spinal fusion; other surgery may be done to remove bone. Alternative names: vertebral compression fractures; osteoporosis - compression fracture.
Vapor Compression Distillation Urine Processor Lessons Learned from Development and Life Testing
NASA Technical Reports Server (NTRS)
Hutchens, Cindy F.; Long, David A.
1999-01-01
Vapor Compression Distillation (VCD) is the chosen technology for urine processing aboard the International Space Station (ISS). Development and life testing over the past several years have brought to the forefront problems and solutions for the VCD technology. Testing between 1992 and 1998 has been instrumental in developing estimates of hardware life and reliability. It has also helped improve the hardware design in ways that either correct existing problems or enhance the existing design of the hardware. The testing has increased confidence in the VCD technology and reduced technical and programmatic risks. This paper summarizes the test results and the changes that have been made to the VCD design.
Digital compression algorithms for HDTV transmission
NASA Technical Reports Server (NTRS)
Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.
1990-01-01
Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.
High rate science data handling on Space Station Freedom
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Masline, Richard C.
1990-01-01
A study by NASA's User Information System Working Group for Space Station Freedom (SSF) has determined that the proposed onboard Data Management System, as initially configured, will be incapable of handling the data-generation rates typical of numerous scientific sensor payloads; many of these generate data at rates in excess of 10 Mbps, and there are at least four cases of rates in excess of 300 Mbps. The SSF Working Group has accordingly suggested an alternative conceptual architecture based on technology expected to achieve space-qualified status by 1995. The architecture encompasses recorders with rapid data-ingest capabilities and massive storage capabilities, optical delay lines allowing the recording of only the phenomena of interest, and data flow-compressing image processors.
1991-01-01
In 1982, the Space Station Task Force was formed, signaling the initiation of the Space Station Freedom Program, and eventually resulting in the Marshall Space Flight Center's responsibilities for Space Station Work Package 1.
Subjective evaluation of compressed image quality
NASA Astrophysics Data System (ADS)
Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is rated poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists read the 99 images (some were duplicates) compressed at four levels: original (uncompressed), 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm was significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm did not show any significant differences from the original at level 0.05.
Sensitivity Analysis in RIPless Compressed Sensing
2014-10-01
The compressive sensing framework finds a wide range of applications in signal processing and analysis. More specifically, we show that in a noiseless and RIP-less setting [11], the recovery process of a compressed sensing framework is
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Ramsey, J. A.
1977-01-01
The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary-layer method is general, however, and can also be used with a theoretically computed external velocity distribution. Several test cases were computed by this method and the results were checked against other numerical calculations and against experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, roughly consisting of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.
International Space Station (ISS)
2001-02-01
The Marshall Space Flight Center (MSFC) is responsible for designing and building the life support systems that will provide the crew of the International Space Station (ISS) a comfortable environment in which to live and work. Scientists and engineers at the MSFC are working together to provide the ISS with systems that are safe, efficient, and cost-effective. These compact and powerful systems are collectively called the Environmental Control and Life Support Systems, or simply, ECLSS. This photograph shows the fifth generation Urine Processor Development Hardware. The Urine Processor Assembly (UPA) is a part of the Water Recovery System (WRS) on the ISS. It uses a phase change process called vapor compression distillation technology to remove contaminants from urine. The UPA accepts and processes pretreated crewmember urine to allow it to be processed along with other wastewaters in the Water Processor Assembly (WPA). The WPA removes free gas, organic, and nonorganic constituents before the water goes through a series of multifiltration beds for further purification. Product water quality is monitored primarily through conductivity measurements. Unacceptable water is sent back through the WPA for reprocessing. Clean water is sent to a storage tank.
Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina
2010-01-01
Introduction The aim of the research was to compare the dynamics of venous ulcer healing when treated with compression stockings and with original two- and four-layer bandage systems. Material and methods A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and class II compression stockings. In the case of multi-layer compression, compression ensuring a pressure of 40 mmHg at ankle level was used. Results In all patients, independently of the type of compression therapy, statistically significant changes of the ulceration area over time were observed (Student's t test for matched pairs, p < 0.05). The largest loss of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system, on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings, on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions A systematic compression therapy, applied with a preliminary pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared multi-layer compression systems were characterized by similar clinical effectiveness. PMID:22419941
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute and memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness, security of the proposed algorithm and the acceptable compression performance.
Park, Sang-Sub
2014-01-01
The purpose of this study was to measure the difference in chest compression accuracy between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people); those using the standardized method formed the traditional group (31 people). Both groups used the same practice and evaluation manikins, and the smartphone group used an application on two smartphone models (G, i) running the Android and iOS operating systems. Measurements were conducted on September 25-26, 2012, and the data were analyzed with SPSS WIN 12.0. Compression depth was closer to the proper value (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of proper chest compressions was higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). The traditional group also reported higher awareness of chest compression accuracy (3.83 vs. 2.32 points, p < 0.001). In an additional questionnaire item given only to the smartphone group, the main reasons rescuers viewed the modified method negatively were hand-back pain (48.5%) and unstable posture (21.2%).
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets, so loss propagation is avoided. However, SCPS is still affected by an increased BER since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves
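The shared core of these schemes, sending deltas of header fields rather than full headers, can be sketched as follows. The field names are illustrative only, not the actual TCP/IP header layout, and no particular RFC's encoding is reproduced.

```python
def compress_header(header, prev):
    """Van-Jacobson-style sketch: for each field, send a delta only if
    the value changed from the previous packet's header.  Unchanged
    fields are omitted entirely, which is where the savings come from."""
    deltas = {}
    for field, value in header.items():
        if prev is None or value != prev.get(field):
            base = 0 if prev is None else prev.get(field, 0)
            deltas[field] = value - base
    return deltas

def decompress_header(deltas, prev):
    """Rebuild the full header from the previous header plus deltas.
    If a delta packet is lost, this context desynchronizes: that is the
    loss-propagation problem the report discusses."""
    header = dict(prev or {})
    for field, delta in deltas.items():
        header[field] = header.get(field, 0) + delta
    return header

pkt1 = {"seq": 1000, "ack": 500, "window": 8192}
pkt2 = {"seq": 1460, "ack": 500, "window": 8192}
d = compress_header(pkt2, pkt1)
print(d)  # only the changed field travels
```

SCPS avoids delta encoding precisely to sidestep the desynchronization shown in the decompressor's docstring, at the cost of larger headers.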
Reversible Watermarking Surviving JPEG Compression.
Zain, J; Clarke, M
2005-01-01
This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least-significant bits (LSBs) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the hash (SHA-256) of the original image. The results showed that the embedded watermark is robust to JPEG compression down to image quality 60 (~91% compressed).
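The integrity mechanism, hashing the image and embedding the hash in RONI LSBs, can be sketched as below. This fragile-watermark sketch omits the paper's JPEG-robust embedding entirely; the stand-in image, the flat pixel list, and the choice of region are toy assumptions.

```python
import hashlib

def embed_hash_lsb(pixels, region):
    """Embed the SHA-256 of the image in the least-significant bits of
    the pixels listed in `region` (a stand-in for the RONI).  The hash
    is computed with the region's LSBs zeroed, so the verifier can
    recompute exactly the same digest."""
    work = list(pixels)
    for i in region:
        work[i] &= 0xFE                    # clear carrier LSBs before hashing
    digest = hashlib.sha256(bytes(work)).digest()
    bits = [(byte >> (7 - b)) & 1 for byte in digest for b in range(8)]
    for i, bit in zip(region, bits):
        work[i] = (work[i] & 0xFE) | bit   # write one hash bit per pixel
    return work

def verify_hash_lsb(pixels, region):
    extracted = [pixels[i] & 1 for i in region]
    work = list(pixels)
    for i in region:
        work[i] &= 0xFE
    digest = hashlib.sha256(bytes(work)).digest()
    bits = [(byte >> (7 - b)) & 1 for byte in digest for b in range(8)]
    return extracted[: len(bits)] == bits[: len(extracted)]

image = [n % 251 for n in range(1024)]     # stand-in 32x32 8-bit image
roni = list(range(256))                    # 256 pixels carry the 256 hash bits
marked = embed_hash_lsb(image, roni)
tampered = list(marked)
tampered[500] ^= 0xFF                      # alter a pixel outside the RONI
```

Any change outside the carrier bits flips the recomputed digest, so verification fails; surviving JPEG requires the paper's more robust embedding, not plain LSBs.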
Compressed Sensing for Body MRI
Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh
2016-01-01
The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
NASA Technical Reports Server (NTRS)
Tilton, James C.; Manohar, Mareboyana
1994-01-01
Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
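The TLLC pipeline itself is two steps: shift off the least significant bits, then apply a lossless coder. A sketch using zlib as the lossless stage follows; the synthetic "image" data and the choice of zlib (rather than the paper's JPEG/DCT, VQ, or Model-Based VQ comparison codecs) are assumptions for illustration.

```python
import random
import zlib

def tllc(data, dropped_bits):
    """Truncation followed by lossless compression (TLLC): drop the
    `dropped_bits` least-significant bits of each 8-bit pixel, then
    compress the remainder losslessly."""
    truncated = bytes(b >> dropped_bits for b in data)
    return zlib.compress(truncated, 9)

# Smooth synthetic scan line: values change slowly, with noise confined
# to the two low bits, so truncation markedly improves compressibility.
random.seed(0)
data = bytes(min(255, (i // 8) % 200 + random.randint(0, 3))
             for i in range(4096))
for k in (0, 2):
    print(f"drop {k} LSBs -> {len(tllc(data, k))} bytes")
```

The paper's point is the converse comparison: at a fixed channel rate, a well-chosen lossy codec preserves more effective radiometric resolution than this truncation approach.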
Delos Reyes, Arthur P; Partsch, Hugo; Mosti, Giovanni; Obi, Andrea; Lurie, Fedor
2014-10-01
The International Compression Club, a collaboration of medical experts and industry representatives, was founded in 2005 to develop consensus reports and recommendations regarding the use of compression therapy in the treatment of acute and chronic vascular disease. During the recent meeting of the International Compression Club, member presentations were focused on the clinical application of intermittent pneumatic compression in different disease scenarios as well as on the use of inelastic and short stretch compression therapy. In addition, several new compression devices and systems were introduced by industry representatives. This article summarizes the presentations and subsequent discussions and provides a description of the new compression therapies presented. Copyright © 2014 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
1972-01-01
This is an artist's concept of a modular space station. In 1970 the Marshall Space Flight Center announced the completion of a study concerning a modular space station that could be launched by the planned-for reusable Space Shuttle. The study envisioned a space station composed of cylindrical sections 14 feet in diameter and of varying lengths joined to form any one of a number of possible shapes. The sections were restricted to 14 feet in diameter and 58 feet in length to be consistent with a shuttle cargo bay size of 15 by 60 feet. Center officials said that the first elements of the space station could be in orbit by about 1978 and could be manned by three or six men. This would be an interim space station with sections that could be added later to form a full 12-man station by the early 1980s.
A Bunch Compression Method for Free Electron Lasers that Avoids Parasitic Compressions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benson, Stephen V.; Douglas, David R.; Tennant, Christopher D.
2015-09-01
Virtually all existing high-energy (> few MeV) linac-driven FELs compress the electron bunch length through the use of off-crest acceleration on the rising side of the RF waveform followed by transport through a magnetic chicane. This approach has at least three flaws: 1) it is difficult to correct aberrations, particularly RF curvature; 2) rising-side acceleration exacerbates space-charge-induced distortion of the longitudinal phase space; and 3) all achromatic "negative compaction" compressors create parasitic compression during the final compression process, increasing the CSR-induced emittance growth. One can avoid these deficiencies by using acceleration on the falling side of the RF waveform and a compressor with M56 > 0. This approach offers multiple advantages: 1) it is readily achieved in beam lines supporting simple schemes for aberration compensation; 2) longitudinal space charge (LSC)-induced phase space distortion tends, on the falling side of the RF waveform, to enhance the chirp; and 3) compressors with M56 > 0 can be configured to avoid spurious over-compression. We discuss this bunch compression scheme in detail and give results of a successful beam test in April 2012 using the JLab UV Demo FEL.
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
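Score compression can be illustrated with the simplest possible model (my own toy example, not the paper's worked case): for N draws from N(θ, σ²) with known σ, the score ∂ln L/∂θ = Σᵢ(dᵢ − θ*)/σ² evaluated at a fiducial θ* compresses the N data points to a single number without losing Fisher information about θ.

```python
import random

random.seed(1)
sigma = 2.0
theta_true = 3.0

# N = 10,000 data points, to be compressed to n = 1 summary statistic
# (one parameter of interest).
data = [random.gauss(theta_true, sigma) for _ in range(10_000)]

def score(data, theta_fid, sigma):
    """Gradient of the Gaussian log-likelihood with respect to theta,
    evaluated at a fiducial parameter value."""
    return sum(d - theta_fid for d in data) / sigma**2

theta_fid = 0.0
t = score(data, theta_fid, sigma)

# For this model, a single Newton step from the fiducial value using the
# Fisher information F = N/sigma^2 recovers the sample mean exactly.
theta_hat = theta_fid + t * sigma**2 / len(data)
print(theta_hat)
```

The compressed statistic t carries everything the likelihood knows about θ here; the estimator built from it coincides with the maximum-likelihood estimate (the sample mean).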
Fractal-Based Image Compression
1990-01-01
[Abstract garbled in extraction (interleaved column fragments). Recoverable content: the Ziv-Lempel-Welch (LZW) compression algorithm [4][5] was used for experiments and software development; the report discusses the Collage Theorem and a deterministic algorithm for computing IFS attractors for fast fractal image compression. Reference [5]: J. Ziv and A. Lempel, "Compression of Individual Sequences via Variable-Rate Coding."]
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper we focus on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT: discrete cosine transform), JPEG 2000 (DWT: discrete wavelet transform), BPG (better portable graphics), and TIFF (LZW: Lempel-Ziv-Welch). The image quality (IQ) of a decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method by analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if it is specified by an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on grayscale thermal (long-wave infrared) images showed very promising results.
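The three-step scenario can be sketched with a toy stand-in (a uniform quantizer plays the role of the JPEG-family codecs; the signal, metric, and regression form are all illustrative assumptions): sweep the codec parameter, fit a regression of PSNR against the parameter, then invert the model to hit a target IQ.

```python
import math
import random

random.seed(2)
signal = [random.uniform(0, 255) for _ in range(5000)]

def quantize(x, step):
    # Toy "lossy codec": uniform quantization stands in for JPEG et al.
    return [round(v / step) * step for v in x]

def psnr(a, b):
    mse = sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
    return 10 * math.log10(255**2 / mse)

# Step 1: compress with varying parameters and record the IQ measurements.
steps = [2, 4, 8, 16, 32]
points = [(math.log2(s), psnr(signal, quantize(signal, s))) for s in steps]

# Step 2: fit a regression model, PSNR ~ a * log2(step) + b.
n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
a = sum((x - mx) * (y - my) for x, y in points) / \
    sum((x - mx) ** 2 for x, _ in points)
b = my - a * mx

# Step 3: invert the model to choose a parameter for the specified IQ.
target_psnr = 40.0
chosen_step = 2 ** ((target_psnr - b) / a)
achieved = psnr(signal, quantize(signal, chosen_step))
print(chosen_step, achieved)
```

For a uniform quantizer the MSE is approximately step²/12, so PSNR really is close to linear in log2(step), which is why a simple regression suffices in this toy case; real codecs need the per-method models the paper describes.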
NASA Technical Reports Server (NTRS)
Akkerman, J. W.
1982-01-01
New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.
Next Generation Hydrogen Station Composite Data Products: All Stations
Next Generation Hydrogen Station Composite Data Products: Retail Stations
Compressibility of the protein-water interface
NASA Astrophysics Data System (ADS)
Persson, Filip; Halle, Bertil
2018-06-01
The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (~0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ~45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data were identified.
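The kind of measurement the study describes can be sketched with Python's standard-library lzma module (the LZMA algorithm behind XZ Utils, one of the evaluated techniques). The synthetic "particle" data below is an assumption for illustration; compression ratio and throughput are recorded, and losslessness is verified by round-trip.

```python
import lzma
import math
import struct
import time

# Synthetic particle coordinates: smooth trajectories stored as doubles,
# standing in for one time step of unstructured simulation output.
n = 20_000
coords = [math.sin(i * 1e-3) * 100.0 for i in range(n)]
raw = struct.pack(f"<{n}d", *coords)

start = time.perf_counter()
compressed = lzma.compress(raw, preset=6)
elapsed = time.perf_counter() - start

ratio = len(raw) / len(compressed)
throughput = len(raw) / elapsed / 1e6  # MB/s

# Lossless: the round trip must reproduce every byte exactly.
restored = struct.unpack(f"<{n}d", lzma.decompress(compressed))
print(f"ratio {ratio:.2f}, throughput {throughput:.1f} MB/s")
```

For in-situ use, the throughput figure matters as much as the ratio: the compressor must keep up with the simulation's data generation rate, which is why the study weighs run-time and scalability alongside compression rates.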
NASA Astrophysics Data System (ADS)
Andrianov, M. N.; Kostenko, V. I.; Likhachev, S. F.
2018-01-01
Algorithms for achieving a practical increase in the rate of data transmission on the spacecraft-ground tracking station link are considered. This increase is achieved by applying spectrally efficient modulation techniques and the technology of orthogonal frequency compression of signals using millimeter-range radio waves. The advantages and disadvantages of each of the three algorithms are revealed, and a significant advantage of data transmission in the millimeter range is indicated.
Magnetic compression laser driving circuit
Ball, D.G.; Birx, D.; Cook, E.G.
1993-01-05
A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.
Effect of compressibility on the hypervelocity penetration
NASA Astrophysics Data System (ADS)
Song, W. J.; Chen, X. W.; Chen, P.
2018-02-01
We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. We also define different measures of penetration efficiency in various modified models and compare them to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility for different metallic rod-target combinations, we construct three cases: a more compressible rod penetrating a less compressible target, a rod penetrating an analogously compressible target, and a less compressible rod penetrating a more compressible target. The effects of volumetric strain, internal energy, and strength on penetration efficiency are analyzed simultaneously. The results indicate that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod or target has larger volumetric strain and higher internal energy. Both larger volumetric strain and higher strength enhance the penetration or anti-penetration ability, whereas higher internal energy weakens it. The two trends conflict, but the volumetric strain dominates the variation of penetration efficiency, which does not approach the hydrodynamic limit unless the rod and target are analogously compressible; when their compressibility is analogous, it has little effect on penetration efficiency.
Korycki, Rafal
2014-05-01
Since the appearance of digital audio recordings, audio authentication has become increasingly difficult. Currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists of fluctuation analysis of the mains frequency induced in the electronic circuits of recording devices; its effectiveness therefore depends strictly on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions have been proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches evaluate statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. The calculated feature vectors are used to train selected machine learning algorithms. Detection of multiple compression reveals tampering activity, as does identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm based on analysis of inherent compression parameters was developed and applied. The effectiveness of the tampering detection algorithms is tested on a predefined large music database of nearly one million compressed audio files. The influence of the compression algorithms' parameters on classification performance is discussed based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power- and area-efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches to the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders achieve comparable recovery performance, and we propose efficient VLSI architecture designs for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
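The encoder-side benefit of sparsity can be sketched as follows. This is a generic column-sparse binary construction for illustration, not the paper's exact QCAC or SRBM designs: with s ones per column, each incoming sample updates only s accumulators instead of all m, which is the source of the power and area savings.

```python
import random

random.seed(3)
n, m, s = 256, 64, 4  # signal length, measurements, ones per column

# Sparse random binary measurement matrix, stored column-wise:
# column j lists the s measurement rows that sample x[j].
columns = [random.sample(range(m), s) for _ in range(n)]

def cs_encode(x):
    """Streaming CS encoder computing y = Phi @ x for a column-sparse
    binary Phi: each sample touches only s accumulators, not m."""
    y = [0.0] * m
    for j, xj in enumerate(x):
        for row in columns[j]:
            y[row] += xj
    return y

# A sparse neural-like signal: mostly zeros with a few spikes.
x = [0.0] * n
for idx in random.sample(range(n), 6):
    x[idx] = random.uniform(-1, 1)

y = cs_encode(x)
print(len(y), "measurements from", n, "samples")
```

Recovery is omitted here because it requires a sparse-approximation solver on the receiver side; the point of the sketch is only the encoder's per-sample cost.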
FRESCO: Referential compression of highly similar sequences.
Wandelt, Sebastian; Leser, Ulf
2013-01-01
In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence, and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
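The core idea of referential compression can be sketched in a few lines. This is a naive greedy matcher for illustration only, nothing like FRESCO's optimized implementation: the input is encoded as (offset, length) matches against a reference plus literal characters where the sequences differ.

```python
def ref_compress(target, reference, min_match=4):
    """Greedy referential compression: emit ('M', offset, length) for
    substrings found in the reference, ('L', char) for literals."""
    ops, i = [], 0
    while i < len(target):
        best_off, best_len = -1, 0
        # Naive longest-match search; real tools use suffix structures.
        for off in range(len(reference)):
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = off, l
        if best_len >= min_match:
            ops.append(("M", best_off, best_len))
            i += best_len
        else:
            ops.append(("L", target[i]))
            i += 1
    return ops

def ref_decompress(ops, reference):
    out = []
    for op in ops:
        if op[0] == "M":
            _, off, length = op
            out.append(reference[off:off + length])
        else:
            out.append(op[1])
    return "".join(out)

reference = "ACGTACGTGGCATTACGATCGATTACA"
target    = "ACGTACGTGGCTTTACGATCGATTACA"  # one substitution vs. reference
ops = ref_compress(target, reference)
print(len(ops), "ops for", len(target), "characters")
```

Because highly similar sequences differ in only a few places, the op list stays tiny regardless of sequence length, which is what makes the 4,000:1-class ratios reported above plausible for near-identical genomes.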
Micromechanics of composite laminate compression failure
NASA Technical Reports Server (NTRS)
Guynn, E. Gail; Bradley, Walter L.
1986-01-01
The Dugdale analysis for metals loaded in tension was adapted to model the failure of notched composite laminates loaded in compression. Compression testing details, MTS alignment verification, and equipment needs were resolved. Thus far, only 2 ductile material systems, HST7 and F155, were selected for study. A Wild M8 Zoom Stereomicroscope and necessary attachments for video taping and 35 mm pictures were purchased. Currently, this compression test system is fully operational. A specimen is loaded in compression, and load vs shear-crippling zone size is monitored and recorded. Data from initial compression tests indicate that the Dugdale model does not accurately predict the load vs damage zone size relationship of notched composite specimens loaded in compression.
NASA Technical Reports Server (NTRS)
Hurst, Victor, IV; West, Sarah; Austin, Paul; Branson, Richard; Beck, George
2005-01-01
Astronaut crew medical officers (CMO) aboard the International Space Station (ISS) receive 40 hours of medical training over 18 months before each mission, including two-person cardiopulmonary resuscitation (2CPR) as recommended by the American Heart Association (AHA). Recent studies have concluded that the use of metronomic tones improves the coordination of 2CPR by trained clinicians. 2CPR performance data for minimally-trained caregivers has been limited. The goal of this study was to determine whether use of a metronome by minimally-trained caregivers (CMO analogues) would improve 2CPR performance. 20 pairs of minimally-trained caregivers certified in 2CPR via AHA guidelines performed 2CPR for 4 minutes on an instrumented manikin using 3 interventions: 1) Standard 2CPR without a metronome [NONE], 2) Standard 2CPR plus a metronome for coordinating compression rate only [MET], 3) Standard 2CPR plus a metronome for coordinating both the compression rate and ventilation rate [BOTH]. Caregivers were evaluated for their ability to meet the AHA guideline of 32 breaths-240 compressions in 4 minutes. All (100%) caregivers using the BOTH intervention provided the required number of ventilation breaths as compared with the NONE caregivers (10%) and MET caregivers (0%). For compressions, 97.5% of the BOTH caregivers were not successful in meeting the AHA compression guideline; however, an average of 238 compressions of the desired 240 were completed. None of the caregivers were successful in meeting the compression guideline using the NONE and MET interventions. This study demonstrates that use of metronomic tones by minimally-trained caregivers for coordinating both compressions and breaths improves 2CPR performance. Meeting the breath guideline is important to minimize air entering the stomach, thus decreasing the likelihood of gastric aspiration. These results suggest that manifesting a metronome for the ISS may augment the performance of 2CPR on orbit and thus may
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
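The fixed-rate property can be illustrated with a toy blockwise quantizer (this is not the paper's lifted orthogonal transform; the block size, byte budget, and scale-plus-index layout are assumptions for demonstration): every block of 4 values maps to the same number of bytes, so block i always starts at a known offset, enabling random access into the compressed stream.

```python
import struct

BLOCK = 4
BYTES_PER_BLOCK = 8  # fixed budget: 4-byte scale + 4 one-byte indices

def compress_block(vals):
    """Map a block of floats to a fixed-size byte string (lossy)."""
    scale = max(abs(v) for v in vals) or 1.0
    q = bytes(round((v / scale) * 127) & 0xFF for v in vals)
    return struct.pack("<f", scale) + q

def decompress_block(blob):
    scale = struct.unpack("<f", blob[:4])[0]
    return [((b - 256 if b > 127 else b) / 127) * scale for b in blob[4:]]

data = [0.5, -1.25, 3.0, 0.0, 10.0, -2.0, 7.5, 4.0]
stream = b"".join(compress_block(data[i:i + BLOCK])
                  for i in range(0, len(data), BLOCK))

# Random access: block 1 lives at a fixed, computable offset.
block1 = decompress_block(stream[BYTES_PER_BLOCK:2 * BYTES_PER_BLOCK])
print(block1)
```

Contrast this with a variable-length stream, where locating block i requires decoding or indexing everything before it; the fixed per-block budget trades some compression efficiency for O(1) addressing.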
4. EASTBOUND VIEW. NORTH TRACK WAITING STATION ON LEFT. STATION ...
4. EASTBOUND VIEW. NORTH TRACK WAITING STATION ON LEFT. STATION ON RIGHT. NOTE TUNNEL IN BACKGROUND. - Baltimore & Ohio Railroad, Harpers Ferry Station, Potomac Street, Harpers Ferry, Jefferson County, WV
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. Photographs of 15 subjects, which included eyes with normal, subtle, and distinct pathologies, were digitized to produce 1.54 MB images and compressed to a range of sizes using JPEG and wavelet methods. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images, and blood vessel branching could be observed to a greater extent in wavelet-compressed images than in JPEG-compressed images of the same size. Overall, images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor for a reliable diagnosis.
NASA Astrophysics Data System (ADS)
Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.
2008-12-01
Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been shown that it is possible to meet these contradictory requirements halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the investigated case study). As the main supporting assumption, it was accepted that content can be compressed as far as clinicians are unable to sense a loss of video diagnostic fidelity (visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. The subjective tests involved a panel of clinicians who classified compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For the objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.
Competitive Parallel Processing For Compression Of Data
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Antony R. H.
1990-01-01
Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in a channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from an information-rich source like a color-television camera are compressed by several processors, each operating with a different algorithm. Referee processor selects momentarily-best compressed output.
Video compression via log polar mapping
NASA Astrophysics Data System (ADS)
Weiman, Carl F. R.
1990-09-01
A three-stage process for compressing real-time color imagery by factors in the range of 1600-to-1 is proposed for remote driving. The key is to match the resolution gradient of human vision and preserve only those cues important for driving. Some hardware components have been built and a research prototype is planned. Stage 1 is log polar mapping, which reduces peripheral image sampling resolution to match the peripheral gradient in human visual acuity. This can yield 25-to-1 compression. Stage 2 partitions color and contrast into separate channels. This can yield 8-to-1 compression. Stage 3 is conventional block data compression such as hybrid DCT/DPCM, which can yield 8-to-1 compression. The product of all three stages is 1600-to-1 data compression. The compressed signal can be transmitted over FM bands which do not require line-of-sight, greatly increasing the range of operation and reducing the topographic exposure of teleoperated vehicles. Since the compressed channel data contains the essential constituents of human visual perception, imagery reconstructed by inverting each of the three compression stages is perceived as complete, provided the operator's direction of gaze is at the center of the mapping. This can be achieved by eye-tracker feedback which steers the center of log polar mapping in the remote vehicle to match the teleoperator's direction of gaze.
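The first stage can be sketched numerically (the frame size, ring and wedge counts, and radii below are illustrative assumptions, not the paper's parameters): a log-polar grid samples rings whose radii grow geometrically from the gaze point, giving fine foveal and coarse peripheral sampling.

```python
import math

def log_polar_grid(cx, cy, r_min, r_max, n_rings, n_wedges):
    """Sample points of a log-polar mapping centered on the gaze point.
    Ring radii grow geometrically from r_min to r_max."""
    growth = (r_max / r_min) ** (1 / (n_rings - 1))
    grid = []
    for u in range(n_rings):
        r = r_min * growth ** u
        for v in range(n_wedges):
            theta = 2 * math.pi * v / n_wedges
            grid.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return grid

# A 640x480 frame sampled with 32 rings x 64 wedges = 2048 samples,
# versus 307,200 pixels: a 150-to-1 reduction from this stage alone.
samples = log_polar_grid(320, 240, 2.0, 240.0, 32, 64)
print(len(samples), "samples;", 640 * 480 // len(samples), "-to-1 reduction")
```

Because each ring holds the same number of samples while its circumference grows with radius, angular resolution falls off toward the periphery exactly as the mapping intends, mirroring the acuity gradient of human vision.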
Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara
2018-01-01
This randomized controlled trial aimed to investigate differences in outcomes after treatment with hot herbal compress, hot compress, and topical diclofenac. Participants were equally divided into groups receiving one of the three treatments: hot herbal compress, hot compress, or topical diclofenac, which served as the control. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to establish the level of pain intensity and quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were examined to identify motional effects. All treatments showed a significantly decreased level of pain intensity and increased cervical range of motion, while the intervention groups outperformed the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment on par with hot compress and topical diclofenac.
Prediction of compression-induced image interpretability degradation
NASA Astrophysics Data System (ADS)
Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen
2018-04-01
Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.
ROI-Based On-Board Compression for Hyperspectral Remote Sensing Images on GPU.
Giordano, Rossella; Guccione, Pietro
2017-05-19
In recent years, hyperspectral sensors for Earth remote sensing have become very popular. Such systems provide the user with images containing both spectral and spatial information. Current hyperspectral spaceborne sensors can capture large areas with increased spatial and spectral resolution, so the volume of acquired data must be reduced on board in order to avoid a low orbital duty cycle due to limited storage space. Recent literature has focused on efficient ways to perform on-board data compression. This is a challenging task due to the difficult environment (outer space) and the limited time, power and computing resources. Often, the hardware properties of Graphics Processing Units (GPU) have been adopted to reduce the processing time using parallel computing. The current work proposes a framework for on-board operation on a GPU, using NVIDIA's CUDA (Compute Unified Device Architecture) architecture. The algorithm performs on-board compression using a target-related strategy. In detail, the main operations are: the automatic recognition of land cover types or detection of events in near real time in regions of interest (a user-related choice) with an unsupervised classifier; the compression of specific regions with space-variant bit rates, using Principal Component Analysis (PCA), wavelets and arithmetic coding; and data volume management to the ground station. Experiments are provided using a real dataset taken from an AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) airborne sensor over a harbor area.
NASA Technical Reports Server (NTRS)
1996-01-01
Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.
Data compression for sequencing data
2013-01-01
Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160
Tan, E S; Mat Jais, I S; Abdul Rahim, S; Tay, S C
2018-01-01
We investigated the effect of an interfragmentary gap on the final compression force using the Acutrak 2 Mini headless compression screw (length 26 mm) (Acumed, Hillsboro, OR, USA). Two blocks of solid rigid polyurethane foam in a custom jig were separated by spacers of varying thickness (1.0, 1.5, 2.0 and 2.5 mm) to simulate an interfragmentary gap. The spacers were removed before full insertion of the screw and the compression force was measured when the screw was buried 2 mm below the surface of the upper block. Gaps of 1.5 mm and 2.0 mm resulted in significantly decreased compression forces, whereas there was no significant decrease in compression force with a gap of 1 mm. An interfragmentary gap of 2.5 mm did not result in any contact between blocks. We conclude that an increased interfragmentary gap leads to decreased compression force with this screw, which may have implications on fracture healing.
1952-01-01
This is a von Braun 1952 space station concept. In a 1952 series of articles written in Collier's, Dr. Wernher von Braun, then Technical Director of the Army Ordnance Guided Missiles Development Group at Redstone Arsenal, wrote of a large wheel-like space station in a 1,075-mile orbit. This station, made of flexible nylon, would be carried into space by a fully reusable three-stage launch vehicle. Once in space, the station's collapsible nylon body would be inflated much like an automobile tire. The 250-foot-wide wheel would rotate to provide artificial gravity, an important consideration at the time because little was known about the effects of prolonged zero-gravity on humans. Von Braun's wheel was slated for a number of important missions: a way station for space exploration, a meteorological observatory and a navigation aid. This concept was illustrated by artist Chesley Bonestell.
NASA Technical Reports Server (NTRS)
Anderton, D. A.
1985-01-01
The official start of a bold new space program, essential to maintaining the United States' leadership in space, was signaled by a Presidential directive to move aggressively into space again by proceeding with the development of a space station. Development concepts for a permanently manned space station are discussed. Reasons for establishing an inhabited space station are given. Cost estimates and timetables are also cited.
Compression techniques in tele-radiology
NASA Astrophysics Data System (ADS)
Lu, Tianyu; Xiong, Zixiang; Yun, David Y.
1999-10-01
This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression substantially reduces transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression on the volume data were conducted. Favorable results were obtained, showing that a substantial compression ratio is achievable within distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g. CT); for computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression on the diagnostic and aesthetic appearance of medical imaging.
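The 30 dB acceptability threshold above can be checked with the standard PSNR formula; a minimal sketch (function and parameter names are illustrative):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images or volumes.

    PSNR = 10 * log10(peak^2 / MSE); higher is better, inf means lossless.
    """
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical data: lossless
    return 10.0 * np.log10(peak ** 2 / mse)
```

In the paper's terms, a decompressed CT volume would be deemed acceptable when `psnr(ct, decompressed) >= 30`.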
NASA Astrophysics Data System (ADS)
Sivaganesan, S.; Chandrasekaran, M.; Ruban, M.
2017-03-01
The present experimental investigation evaluates the effect of fueling a diesel engine with a blend of diesel fuel and a 20% concentration of Methyl Ester of Jatropha biodiesel at various compression ratios. Both the diesel and the biodiesel fuel blend were injected at 23° BTDC into the combustion chamber. The experiment was carried out at three compression ratios: 17.5, 16.5 and 15.5. Biodiesel was extracted from Jatropha oil; the 20% (B20) concentration was found to be the best blend ratio in an earlier experimental study. The main objective is to obtain minimum specific fuel consumption, better efficiency and lower emissions across the different compression ratios. The results show an increase in efficiency at full load compared with diesel; the highest efficiency is obtained with B20MEOJBA at a compression ratio of 17.5. It is noted that there is an increase in thermal efficiency as the blend ratio increases. The biodiesel blend performs close to diesel, but emissions are reduced in all B20MEOJBA blends compared to diesel. This work thus focuses on the best compression ratio and the suitability of biodiesel blends as an alternate fuel in diesel engines.
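The reported link between compression ratio and efficiency can be illustrated with the air-standard Diesel-cycle formula. This is a textbook idealization, not the paper's measured brake efficiency, and the cutoff ratio used below is an assumed value.

```python
# Air-standard Diesel-cycle thermal efficiency:
#   eta = 1 - (1 / r^(gamma-1)) * (rho^gamma - 1) / (gamma * (rho - 1))
# where r is the compression ratio and rho the cutoff ratio (assumed here).
def diesel_efficiency(r, rho=2.0, gamma=1.4):
    return 1.0 - (1.0 / r ** (gamma - 1.0)) * (rho ** gamma - 1.0) / (gamma * (rho - 1.0))

for r in (15.5, 16.5, 17.5):
    print(f"r = {r}: ideal efficiency = {diesel_efficiency(r):.3f}")
```

The ideal efficiency rises monotonically with `r`, consistent with the study's finding that 17.5 gave the best results among the three ratios tested.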
1969-01-01
This picture illustrates a concept of a 33-Foot-Diameter Space Station Leading to a Space Base. In-house work of the Marshall Space Flight Center, as well as a Phase B contract with the McDonnell Douglas Astronautics Company, resulted in a preliminary design for a space station in 1969 and 1970. The Marshall-McDonnell Douglas approach envisioned the use of two common modules as the core configuration of a 12-man space station. Each common module was 33 feet in diameter and 40 feet in length and provided the building blocks, not only for the space station, but also for a 50-man space base. Coupled together, the two modules would form a four-deck facility: two decks for laboratories and two decks for operations and living quarters. Zero-gravity would be the normal mode of operation, although the station would have an artificial gravity capability. This general-purpose orbital facility was to provide wide-ranging research capabilities. The design of the facility was driven by the need to accommodate a broad spectrum of activities in support of astronomy, astrophysics, aerospace medicine, biology, materials processing, space physics, and space manufacturing. To serve the needs of Earth observations, the station was to be placed in a 242-nautical-mile orbit at a 55-degree inclination. An Intermediate-21 vehicle (comprised of Saturn S-IC and S-II stages) would have launched the station in 1977.
Compression of Probabilistic XML Documents
NASA Astrophysics Data System (ADS)
Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice
Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents; it can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
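A generic DAG-compression step of the kind the paper combines with PXML-specific techniques can be sketched as sharing structurally identical subtrees. The tuple encoding and helper names below are illustrative, not the paper's representation.

```python
# Sketch of generic DAG-compression: identical subtrees are stored once and
# shared, so the compressed size is the number of *distinct* subtrees.
def dag_compress(tree, table=None):
    """tree: (tag, (child, child, ...)) tuples. Returns the canonical shared node."""
    if table is None:
        table = {}
    tag, children = tree
    shared = tuple(dag_compress(c, table) for c in children)
    key = (tag, shared)
    if key not in table:
        table[key] = key  # first occurrence becomes the canonical copy
    return table[key]

def distinct_subtrees(tree):
    """Number of nodes in the DAG-compressed form."""
    table = {}
    dag_compress(tree, table)
    return len(table)
```

A 7-node tree whose two branches are identical compresses to a 3-node DAG, which is why repetitive XML (and PXML possibility branches) benefits so much from this technique.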
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
Fractal-Based Image Compression, II
1990-06-01
1. INTRODUCTION. The need for data compression is not new. With humble beginnings such as the use of acronyms and abbreviations in spoken and written word, the methods for data compression became more advanced as the need for information grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
2006-01-01
The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in the open literature. The classical equations solved by the Compressible Flow Toolbox are: the isentropic-flow equations; the Fanno flow equations, pertaining to flow of an ideal gas in a pipe with friction; the Rayleigh flow equations, pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section; the normal-shock equations; the oblique-shock equations; and the expansion equations.
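For example, the isentropic-flow relations, one of the equation sets listed above, are standard gas-dynamics formulas. The sketch below is a hedged Python rendering, not the toolbox's own MATLAB code.

```python
# Isentropic-flow relations for a calorically perfect gas (gamma = 1.4 for air):
#   T/T0   = 1 / (1 + (gamma-1)/2 * M^2)
#   p/p0   = (T/T0)^(gamma/(gamma-1))
#   rho/rho0 = (T/T0)^(1/(gamma-1))
def isentropic_ratios(mach, gamma=1.4):
    """Return (T/T0, p/p0, rho/rho0) at Mach number `mach`."""
    t_ratio = 1.0 / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))
    return t_ratio, p_ratio, rho_ratio
```

At Mach 1 in air these give the familiar sonic-throat values T/T0 ≈ 0.833 and p/p0 ≈ 0.528.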
Compressed Sensing for Chemistry
NASA Astrophysics Data System (ADS)
Sanders, Jacob Nathan
Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ1-optimization problem. This thesis represents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of the second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules.
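The sparse-recovery core of compressed sensing can be sketched with a greedy solver standing in for the ℓ1-optimization described in the thesis. The use of orthogonal matching pursuit, the matrix sizes, and the sparsity level below are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by orthogonal matching pursuit.

    A greedy stand-in for l1-minimization: each step picks the column most
    correlated with the residual, then re-fits by least squares on the
    selected support.
    """
    n = A.shape[1]
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Demo: recover a 2-sparse signal of length 50 from only 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[7, 31]] = [2.0, -1.5]
x_hat = omp(A, A @ x_true, k=2)
```

The point mirrors the thesis: 30 measurements suffice for a signal whose information content is only 2 nonzero entries, far below the 50 samples a Nyquist-style approach would need.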
Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy
NASA Astrophysics Data System (ADS)
Fu, Juan; Chen, Xiaoqian; Huang, Yiyong
2013-12-01
Gauging the liquid fuel mass in a tank on a spacecraft under microgravity conditions is a difficult job. Without strong buoyancy, the configuration of the liquid and gas in the tank is uncertain and more than one bubble may exist in the liquid, all of which affects the accuracy of liquid mass gauging, especially for the Compression Mass Gauge (CMG) method. Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance and bubble resonance. Ground experimental apparatus were designed and built to validate the gauging method and the influence of different compression frequencies at different fill levels on measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy, and that measurement accuracy increases as the compression frequency climbs at low fill levels, while low compression frequencies are the better choice at high fill levels. Liquid sloshing degrades measurement accuracy when the surface is excited into waves by external disturbance at the liquid's natural frequency, but measurement accuracy is still acceptable under small-amplitude vibration.
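The basic CMG principle, inferring the ullage gas volume from the pressure response to a small imposed volume change, can be sketched under an isothermal ideal-gas assumption. The variable names and the numbers in the usage note are illustrative, not the paper's.

```python
def gas_volume_from_compression(p0, delta_v, delta_p):
    """Estimate ullage (gas) volume from a small isothermal compression.

    Ideal gas, isothermal: p*V = const, so for a small piston stroke delta_v
    that raises the pressure by delta_p, V_gas ≈ p0 * delta_v / delta_p.
    """
    return p0 * delta_v / delta_p

def liquid_mass(tank_volume, gas_volume, liquid_density):
    """Liquid mass follows once the gas volume is known."""
    return (tank_volume - gas_volume) * liquid_density
```

For example, if a 0.1 L (1e-4 m³) stroke into a tank at 100 kPa raises the pressure by 0.5 kPa, the gas volume is about 0.02 m³; in a 0.1 m³ tank of fluid with density 800 kg/m³ that implies 64 kg of liquid. Resonances of the kind the paper studies corrupt the measured `delta_p`, which is why the compression frequency must be chosen carefully.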
Compressed normalized block difference for object tracking
NASA Astrophysics Data System (ADS)
Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge
2018-04-01
Feature extraction is very important for robust and real-time tracking, and compressive sensing provides technical support for real-time feature extraction. However, existing compressive trackers have all been based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
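The compression step, projecting a high-dimensional feature through a sparse random Gaussian measurement matrix, can be sketched as follows. The dimensions, sparsity and seed are illustrative, not the paper's settings.

```python
import numpy as np

def sparse_measurement_matrix(m, n, density=0.1, seed=0):
    """m x n measurement matrix: mostly zeros, Gaussian values elsewhere.

    Sparsity keeps the projection cheap enough for real-time tracking while
    compressive sensing theory still guarantees information preservation
    with high probability.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random((m, n)) < density
    values = rng.standard_normal((m, n))
    return np.where(mask, values, 0.0)

def compress_feature(feature, matrix):
    """Project a high-dimensional feature vector to the compressed domain."""
    return matrix @ feature
```

A 10,000-dimensional block-difference feature collapses to, say, 50 compressed coefficients per candidate window, which is what makes per-frame evaluation of many windows tractable.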
Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun
2013-07-01
Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression; duty cycle represents the fraction of each compression cycle occupied by the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p<0.001 for both). A linear increase in average chest compression depth with increasing compression rate was observed only with the normal down-stroke pattern (p=0.004). Induction of a shorter compression phase is correlated with deeper chest compression during metronome-guided cardiopulmonary resuscitation.
Tomographic Image Compression Using Multidimensional Transforms.
ERIC Educational Resources Information Center
Villasenor, John D.
1994-01-01
Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)
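Multidimensional transform compression of a volume can be sketched as transforming the whole array at once and keeping only the largest coefficients. The FFT is used below for self-containedness; it stands in for the transforms applied to PET/MR data, and the retention fraction is an assumed value.

```python
import numpy as np

def compress_volume(volume, keep_fraction=0.05):
    """N-D transform coding sketch: keep only the largest-magnitude coefficients.

    Transforming all dimensions jointly exploits inter-slice redundancy that
    slice-by-slice 2-D coding would miss.
    """
    coeffs = np.fft.fftn(volume)
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0)

def reconstruct(sparse_coeffs):
    """Invert the transform; imaginary residue is numerical noise."""
    return np.real(np.fft.ifftn(sparse_coeffs))
```

For data that is smooth along every axis, a few percent of the coefficients reconstruct the volume almost exactly; a real codec would additionally quantize and entropy-code the surviving coefficients.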
30 CFR 77.412 - Compressed air systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...
47 CFR 90.476 - Interconnection of fixed stations and certain mobile stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 90.476 Interconnection of fixed stations and certain mobile stations (Telecommunication, Federal Communications Commission, Safety and Special Radio Services, Private Land Mobile Radio Services, Transmitter Control, Interconnected Systems). (a) Fixed stations and...
NASA Technical Reports Server (NTRS)
Vandermey, Nancy E.; Morris, Don H.; Masters, John E.
1991-01-01
Damage initiation and growth under compression-compression fatigue loading were investigated for a stitched uniweave material system with an underlying AS4/3501-6 quasi-isotropic layup. Performance of unnotched specimens having stitch rows at either 0 degree or 90 degrees to the loading direction was compared. Special attention was given to the effects of stitching related manufacturing defects. Damage evaluation techniques included edge replication, stiffness monitoring, x-ray radiography, residual compressive strength, and laminate sectioning. It was found that the manufacturing defect of inclined stitches had the greatest adverse effect on material performance. Zero degree and 90 degree specimen performances were generally the same. While the stitches were the source of damage initiation, they also slowed damage propagation both along the length and across the width and affected through-the-thickness damage growth. A pinched layer zone formed by the stitches particularly affected damage initiation and growth. The compressive failure mode was transverse shear for all specimens, both in static compression and fatigue cycling effects.
Application of content-based image compression to telepathology
NASA Astrophysics Data System (ADS)
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
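A minimal interframe predictor of the kind described, quantize the coordinates and then store only deltas between successive frames, can be sketched as follows. The quantization step size and the omission of the final entropy-coding stage are illustrative simplifications.

```python
import numpy as np

def quantize(frame, step=0.001):
    """Snap coordinates to a fixed grid (step in Angstroms, assumed value)."""
    return np.round(frame / step).astype(np.int64)

def encode(frames, step=0.001):
    """First frame stored whole; later frames as deltas from their predecessor.

    Temporal coherence makes the deltas small integers, which a real scheme
    would then entropy-code (omitted here).
    """
    q = [quantize(f, step) for f in frames]
    return [q[0]] + [q[i] - q[i - 1] for i in range(1, len(q))]

def decode(deltas, step=0.001):
    """Invert encode(): cumulative sum of deltas, then de-quantize."""
    q = np.cumsum(np.stack(deltas), axis=0)
    return [frame * step for frame in q]
```

The round trip is lossy only through quantization: the reconstruction error is bounded by half the grid step, matching the paper's framing of positional accuracy as the tunable cost of compression.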
Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob
2011-10-01
Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
MHD simulation of plasma compression experiments
NASA Astrophysics Data System (ADS)
Reynolds, Meritt; Barsky, Sandra; de Vietien, Peter
2017-10-01
General Fusion (GF) is working to build a magnetized target fusion (MTF) power plant based on compression of magnetically-confined plasma by liquid metal. GF is testing this compression concept by collapsing solid aluminum liners onto plasmas formed by coaxial helicity injection in a series of experiments called PCS (Plasma Compression, Small). We simulate the PCS experiments using the finite-volume MHD code VAC. The single-fluid plasma model includes temperature-dependent resistivity and anisotropic heat transport. The time-dependent curvilinear mesh for MHD simulation is derived from LS-DYNA simulations of actual field tests of liner implosion. We will discuss how 3D simulations reproduced instability observed in the PCS13 experiment and correctly predicted stabilization of PCS14 by ramping the shaft current during compression. We will also present a comparison of simulated Mirnov and x-ray diagnostics with experimental measurements indicating that PCS14 compressed well to a linear compression ratio of 2.5:1.
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
46 CFR 147.60 - Compressed gases.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST... Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized for...
Wang, Juan; Tang, Ce; Zhang, Lei; Gong, Yushun; Yin, Changlin; Li, Yongqin
2015-07-01
The question of whether the placement of the dominant hand against the sternum could improve the quality of manual chest compressions remains controversial. In the present study, we evaluated the influence of dominant vs nondominant hand positioning on the quality of conventional cardiopulmonary resuscitation (CPR) during prolonged basic life support (BLS) by rescuers who performed optimal and suboptimal compressions. Six months after completing a standard BLS training course, 101 medical students were instructed to perform adult single-rescuer BLS for 8 minutes on a manikin with a randomized hand position. Twenty-four hours later, the students placed the opposite hand in contact with the sternum while performing CPR. Those with an average compression depth of less than 50 mm were considered suboptimal. Participants who performed suboptimal compressions were significantly shorter (170.2 ± 6.8 vs 174.0 ± 5.6 cm, P = .008) and lighter (58.9 ± 7.6 vs 66.9 ± 9.6 kg, P < .001) than those who performed optimal compressions. No significant differences in CPR quality were observed between dominant and nondominant hand placements for those who had an average compression depth greater than 50 mm. However, both the compression depth (49.7 ± 4.2 vs 46.5 ± 4.1 mm, P = .003) and the proportion of chest compressions with an appropriate depth (47.6% ± 27.8% vs 28.0% ± 23.4%, P = .006) were significantly higher when the chest was compressed with the dominant hand against the sternum for those who performed suboptimal CPR. Chest compression quality significantly improved when the dominant hand was placed against the sternum for those who performed suboptimal compressions during conventional CPR. Copyright © 2015 Elsevier Inc. All rights reserved.
Cluster compression algorithm: A joint clustering/data compression concept
NASA Technical Reports Server (NTRS)
Hilbert, E. E.
1977-01-01
The Cluster Compression Algorithm (CCA), which was developed to reduce the costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data, is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to represent the information in the image data more efficiently. The format of the preprocessed data enables simple look-up-table decoding and direct use of the extracted features, reducing user computation for either image reconstruction or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data that describe the spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented using source-encoding concepts. Various forms of the CCA are defined, and experimental results are presented to show the trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multispectral images from LANDSAT and other sources.
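The pipeline the abstract outlines (cluster spectral vectors into features, map each pixel to a cluster index, decode by table lookup) can be sketched as follows; plain k-means and the toy image below are illustrative stand-ins for the CCA's actual spatially local clustering:

```python
import numpy as np

def cluster_compress(image, n_clusters=2, n_iters=10):
    """Toy stand-in for the CCA idea: extract cluster centroids as the
    spectral 'features' and reduce each pixel to a cluster index (the
    'feature map'), which a source encoder would then represent compactly."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    centroids = np.unique(pixels, axis=0)[:n_clusters]  # deterministic init
    for _ in range(n_iters):  # plain k-means, not the CCA's local clustering
        labels = np.argmin(((pixels[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = pixels[labels == k].mean(axis=0)
    return centroids, labels.reshape(image.shape[:-1])

def lookup_decode(centroids, feature_map):
    """Reconstruction is a pure table lookup, as the abstract notes."""
    return centroids[feature_map]

# 8x8 'image' with 3 spectral bands and only two distinct spectral signatures
image = np.tile(np.array([[10.0, 10.0, 10.0],
                          [200.0, 200.0, 200.0]]), (32, 1)).reshape(8, 8, 3)
centroids, fmap = cluster_compress(image)
recon = lookup_decode(centroids, fmap)
```

With two spectral signatures the feature map needs only one bit per pixel plus the small centroid table, which is where the compression comes from.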
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaotic systems carry security risks and suffer data expansion when nonlinear transformations are applied directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by the hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. As a nonlinear encryption system, the proposed cryptosystem simultaneously decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
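The cycle-shift re-encryption stage can be sketched as follows; the logistic map is a simple stand-in for the hyper-chaotic system (whose equations the abstract does not give), and the chaos-derived shift amounts act as the key material shared by encryption and decryption:

```python
import numpy as np

def chaotic_shifts(n_rows, x0=0.37, r=3.99):
    """Logistic map iterates turned into integer shift amounts.
    A stand-in for the paper's hyper-chaotic sequence; x0 plays the
    role of the secret key."""
    shifts, x = [], x0
    for _ in range(n_rows):
        x = r * x * (1 - x)
        shifts.append(int(x * 1e6) % 256)
    return shifts

def cycle_shift(img, shifts, decrypt=False):
    """Circularly shift each row by its chaos-derived amount; shifting
    back by the same amounts inverts the operation exactly."""
    sign = -1 if decrypt else 1
    out = img.copy()
    for i, s in enumerate(shifts):
        out[i] = np.roll(out[i], sign * s)
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
shifts = chaotic_shifts(8)
enc = cycle_shift(img, shifts)
dec = cycle_shift(enc, shifts, decrypt=True)
```

Decryption requires regenerating the identical chaotic sequence, so only holders of the seed can invert the shifts.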
MP3 compression of Doppler ultrasound signals.
Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W
2003-01-01
The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of 11 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in parentheses): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
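The quoted compression ratios follow directly from the bitrates: each is the uncompressed rate divided by the MP3 rate, rounded. A quick check (1400 kbps is the uncompressed rate stated in the abstract):

```python
# Nominal uncompressed rate from the abstract (44.1 kHz digitization)
uncompressed_kbps = 1400

# Ratio for each tested MP3 grade, rounded to the nearest integer
ratios = {kbps: round(uncompressed_kbps / kbps) for kbps in (128, 64, 32)}
for kbps, ratio in ratios.items():
    print(f"{kbps} kbps MP3 -> about {ratio}:1 compression")
# 128 kbps -> about 11:1, 64 kbps -> about 22:1, 32 kbps -> about 44:1
```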
A New Compression Method for FITS Tables
NASA Technical Reports Server (NTRS)
Pence, William; Seaman, Rob; White, Richard L.
2010-01-01
As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
47 CFR 73.6018 - Digital Class A TV station protection of DTV stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Digital Class A TV station protection of DTV... RADIO SERVICES RADIO BROADCAST SERVICES Class A Television Broadcast Stations § 73.6018 Digital Class A TV station protection of DTV stations. Digital Class A TV stations must protect the DTV service that...
47 CFR 73.6018 - Digital Class A TV station protection of DTV stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Digital Class A TV station protection of DTV... RADIO SERVICES RADIO BROADCAST SERVICES Class A Television Broadcast Stations § 73.6018 Digital Class A TV station protection of DTV stations. Digital Class A TV stations must protect the DTV service that...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... DEPARTMENT OF ENERGY Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops AGENCY: Fuel Cell Technologies Program, Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of meeting. SUMMARY: The Systems Integration group of...
Multichannel Compression, Temporal Cues, and Audibility.
ERIC Educational Resources Information Center
Souza, Pamela E.; Turner, Christopher W.
1998-01-01
The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…
1970-01-01
This is an illustration of the Space Base concept. In-house work of the Marshall Space Flight Center, as well as a Phase B contract with the McDonnell Douglas Astronautics Company, resulted in a preliminary design for a space station in 1969 and 1970. The Marshall-McDonnell Douglas approach envisioned the use of two common modules as the core configuration of a 12-man space station. Each common module was 33 feet in diameter and 40 feet in length and provided the building blocks, not only for the space station, but also for a 50-man space base. Coupled together, the two modules would form a four-deck facility: two decks for laboratories and two decks for operations and living quarters. Zero gravity would be the normal mode of operation, although the station would have an artificial-gravity capability. This general-purpose orbital facility was to provide wide-ranging research capabilities. The design of the facility was driven by the need to accommodate a broad spectrum of activities in support of astronomy, astrophysics, aerospace medicine, biology, materials processing, space physics, and space manufacturing. To serve the needs of Earth observations, the station was to be placed in a 242-nautical-mile orbit at a 55-degree inclination. An Intermediate-21 vehicle (composed of Saturn S-IC and S-II stages) would have launched the station in 1977.
Compression-sensitive magnetic resonance elastography
NASA Astrophysics Data System (ADS)
Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf
2013-08-01
Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.
Chaudhary, R S; Patel, C; Sevak, V; Chan, M
2018-01-01
The study evaluates the use of Kollidon VA® 64, alone and in combination with Kollidon VA® 64 Fine, as an excipient in the direct-compression tableting process. The combination of the two grades is evaluated for capping, lamination, and excessive friability. Interparticulate void space is high for this excipient because of the hollow structure of the Kollidon VA® 64 particles; during tablet compression, air remains trapped in the blend, resulting in poor compression and compromised physical properties of the tablets. The composition of Kollidon VA® 64 and Kollidon VA® 64 Fine is evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA® 64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA® 64 and with two mixes containing Kollidon VA® 64 and Kollidon VA® 64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the DoE results identified the optimum composition for direct tablet compression as a 77:23 combination of Kollidon VA® 64 and Kollidon VA® 64 Fine. Compressed with the predicted parameters from the statistical model (main compression force between 5 and 15 kN, pre-compression force between 2 and 3 kN, feeder speed fixed at 25 rpm, and compression speed of 45-49 rpm), this combination produced tablets with hardness between 19 and 21 kp and no friability, capping, or lamination issues.
GPU Lossless Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.
2014-01-01
Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
Lossless compression of otoneurological eye movement signals.
Tossavainen, Timo; Juhola, Martti
2002-12-01
We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals, their compression had not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and an entropy coder to encode the residual. We found that these eye movement signals, recorded at 400 Hz with 13-bit amplitude resolution, could be compressed losslessly at a compression ratio of about 2.7.
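The predictor-plus-entropy-coder model the authors describe can be sketched with a first-order (delta) predictor; the predictors and coders they actually tested are not specified in this abstract:

```python
import numpy as np

def delta_predict(signal):
    # Order-1 predictor: each sample is predicted by its predecessor;
    # the residual is what the entropy coder would actually encode.
    return np.diff(signal, prepend=0)

def delta_reconstruct(residual):
    # Cumulative sum exactly inverts the differencing: lossless round trip.
    return np.cumsum(residual)

sig = np.array([100, 101, 103, 104, 104, 102], dtype=np.int64)
res = delta_predict(sig)   # [100, 1, 2, 1, 0, -2]: residuals cluster near zero
rec = delta_reconstruct(res)
```

The residuals concentrate near zero, so their entropy is lower than that of the raw samples, which is what lets the entropy coder shrink the data without losing a single bit.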
Widefield compressive multiphoton microscopy.
Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A
2018-06-15
A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.
Magnetized Plasma Compression for Fusion Energy
NASA Astrophysics Data System (ADS)
Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David
2013-10-01
Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to un-magnetized inertial fusion energy (IFE)--to microseconds and centimeters vs nanoseconds and sub-millimeter scales. MPC also greatly reduces the required confinement time relative to magnetic fusion energy (MFE)--to microseconds vs minutes. Proof of principle can be demonstrated or refuted using high-current pulsed-power-driven compression of magnetized plasmas, using magnetic-pressure-driven implosions of metal shells known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for un-magnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed power driver, if feasible, would require engineering development of transient, rapidly replaceable transmission lines such as those envisioned by Sandia National Laboratories. Supported by DOE-OFES.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
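Combinatorial (enumerative) coding of a binary string can be sketched as transmitting the count of ones plus the lexicographic rank of the pattern among all strings with that count; this illustrates the general technique, not C4's actual implementation:

```python
from math import comb

def enumerative_encode(bits):
    """Rank of `bits` among all length-n binary strings with the same
    number of ones. (n, k, rank) fully determines the string, and rank
    fits in ceil(log2(C(n, k))) bits -- the entropy of the source."""
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b:
            # skip every string that has a 0 at this position instead
            rank += comb(n - i - 1, ones_left)
            ones_left -= 1
    return n, k, rank

def enumerative_decode(n, k, rank):
    """Invert the ranking bit by bit."""
    bits, ones_left = [], k
    for i in range(n):
        c = comb(n - i - 1, ones_left)
        if rank >= c:
            bits.append(1)
            rank -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits

n, k, rank = enumerative_encode([1, 0, 1, 1, 0])
decoded = enumerative_decode(n, k, rank)
```

Here C(5, 3) = 10, so the rank needs only about 3.3 bits instead of 5 raw bits; like arithmetic coding, the method approaches the source entropy, but encoding and decoding use only integer arithmetic.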
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
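The fidelity figure quoted above ("root mean square error of approximately one or two gray levels") is the standard per-pixel RMS error; a minimal sketch of that metric, with a toy image pair:

```python
import numpy as np

def rms_error(original, reconstructed):
    """Root-mean-square error in gray levels, the fidelity measure used
    in the CRAF/Cassini evaluation (~1-2 levels on an 8-bit scale)."""
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

orig = np.array([[10, 12], [14, 16]], dtype=np.uint8)
recon = np.array([[11, 12], [13, 17]], dtype=np.uint8)
err = rms_error(orig, recon)   # errors of +/-1 on three of four pixels
```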
Compression of rehydratable vegetables and cereals
NASA Technical Reports Server (NTRS)
Burns, E. E.
1978-01-01
Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.
Memory hierarchy using row-based compression
Loh, Gabriel H.; O'Connor, James M.
2016-10-25
A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
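A software sketch of the row organization the claim describes, with zlib standing in for the unspecified hardware compressor and a toy `Tag` carrying the offset and length that the tag blocks would encode (both names are illustrative, not from the patent):

```python
import zlib
from dataclasses import dataclass

@dataclass
class Tag:
    """One tag per compressed block: where the block lives in the row and
    how many bytes it occupies, since compressed sizes are non-uniform."""
    block_id: int
    offset: int
    length: int

class CompressedRow:
    """Toy model of one cache row: a byte buffer of variable-size
    compressed data blocks described by a parallel list of tags."""
    def __init__(self):
        self.data = bytearray()
        self.tags = []

    def store(self, block_id, block):
        comp = zlib.compress(block)            # stand-in compressor
        self.tags.append(Tag(block_id, len(self.data), len(comp)))
        self.data += comp

    def load(self, block_id):
        tag = next(t for t in self.tags if t.block_id == block_id)
        return zlib.decompress(bytes(self.data[tag.offset:tag.offset + tag.length]))

row = CompressedRow()
row.store(0, b"\x00" * 64)       # highly compressible block
row.store(1, bytes(range(64)))   # less compressible block
```

The tags are what make random access work: a lookup reads the tag, then decompresses only that block rather than the whole row.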
Corneal Staining and Hot Black Tea Compresses.
Achiron, Asaf; Birger, Yael; Karmona, Lily; Avizemer, Haggay; Bartov, Elisha; Rahamim, Yocheved; Burgansky-Eliash, Zvia
2017-03-01
Warm compresses are widely touted as an effective treatment for ocular surface disorders. Black tea compresses are a common household remedy, although there is no evidence in the medical literature proving their effect and their use may lead to harmful side effects. To describe a case in which the application of black tea to an eye with a corneal epithelial defect led to anterior stromal discoloration; evaluate the prevalence of hot tea compress use; and analyze, in vitro, the discoloring effect of tea compresses on a model of a porcine eye. We assessed the prevalence of hot tea compresses in our community and explored the effect of warm tea compresses on the cornea when the corneal epithelium's integrity is disrupted. An in vitro experiment in which warm compresses were applied to 18 fresh porcine eyes was performed. In half the eyes a corneal epithelial defect was created and in the other half the epithelium was intact. Both groups were divided into subgroups of three eyes each and treated experimentally with warm black tea compresses, pure water, or chamomile tea compresses. We also performed a study in patients with a history of tea compress use. Brown discoloration of the anterior stroma appeared only in the porcine corneas that had an epithelial defect and were treated with black tea compresses. No other eyes from any group showed discoloration. Of the patients included in our survey, approximately 50% had applied some sort of tea ingredient as a solid compressor or as the hot liquid. An intact corneal epithelium serves as an effective barrier against tea-stain discoloration. Only when this layer is disrupted does the damage occur. Therefore, direct application of black tea (Camellia sinensis) to a cornea with an epithelial defect should be avoided.
Deregulation and Station Trafficking.
ERIC Educational Resources Information Center
Bates, Benjamin J.
To test whether the revocation of the Federal Communications Commission's "Anti-Trafficking" rule (requiring television station owners to keep a station for three years before transferring its license to another party) impacted station owner behavior, a study compared the behavior of television station "traffickers" (owners…
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case, image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
Design of a monitor and simulation terminal (master) for space station telerobotics and telescience
NASA Technical Reports Server (NTRS)
Lopez, L.; Konkel, C.; Harmon, P.; King, S.
1989-01-01
Based on Space Station and planetary spacecraft communication time delays and bandwidth limitations, it will be necessary to develop an intelligent, general purpose ground monitor terminal capable of sophisticated data display and control of on-orbit facilities and remote spacecraft. The basic elements that make up a Monitor and Simulation Terminal (MASTER) include computer overlay video, data compression, forward simulation, mission resource optimization and high level robotic control. Hardware and software elements of a MASTER are being assembled for testbed use. Applications of Neural Networks (NNs) to some key functions of a MASTER are also discussed. These functions are overlay graphics adjustment, object correlation and kinematic-dynamic characterization of the manipulator.
Friction of Compression-ignition Engines
NASA Technical Reports Server (NTRS)
Moore, Charles S; Collins, John H , Jr
1936-01-01
The cost in mean effective pressure of generating air flow in the combustion chambers of single-cylinder compression-ignition engines was determined for the prechamber and the displaced-piston types of combustion chamber. For each type a wide range of air-flow quantities, speeds, and boost pressures was investigated. Supplementary tests were made to determine the effect of lubricating-oil temperature, cooling-water temperature, and compression ratio on the friction mean effective pressure of the single-cylinder test engine. Friction curves are included for two 9-cylinder, radial, compression-ignition aircraft engines. The results indicate that generating the optimum forced air flow increased the motoring losses approximately 5 pounds per square inch mean effective pressure regardless of chamber type or engine speed. With a given type of chamber, the rate of increase in friction mean effective pressure with engine speed is independent of the air-flow speed. The effect of boost pressure on the friction cannot be predicted because the friction was decreased, unchanged, or increased depending on the combustion-chamber type and design details. High compression ratio accounts for approximately 5 pounds per square inch mean effective pressure of the friction of these single-cylinder compression-ignition engines. The single-cylinder test engines used in this investigation had a much higher friction mean effective pressure than conventional aircraft engines or than the 9-cylinder, radial, compression-ignition engines tested so that performance should be compared on an indicated basis.
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
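The quantize-with-dithering idea can be sketched as subtractive dithering of scaled integers; the uniform dither and the scale below are illustrative choices, not the exact parameters of the FITS tiled-image convention:

```python
import numpy as np

def quantize(values, scale, seed=0):
    """Scaled-integer quantization with subtractive dithering: add uniform
    noise before rounding and subtract the same noise on restore, so the
    quantization error is decorrelated from the signal (no systematic bias
    toward particular fractional values)."""
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-0.5, 0.5, size=values.shape)
    q = np.round(values / scale + dither).astype(np.int64)
    return q, dither

def restore(q, dither, scale):
    # The decoder regenerates the same dither from a shared seed,
    # so only the integers q need to be stored/compressed.
    return (q - dither) * scale

vals = np.array([1.234, 5.678, -3.1])
scale = 0.01
q, d = quantize(vals, scale)
approx = restore(q, d, scale)   # each value recovered to within scale/2
```

Only the integer array `q` (plus the seed and scale) is entropy-coded, which is why the scheme is lossy yet bounds the per-pixel error by half the quantization step.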
LOW-VELOCITY COMPRESSIBLE FLOW THEORY
The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...
1991-01-01
This artist's concept depicts the Space Station Freedom as it would look orbiting the Earth, illustrated by Marshall Space Flight Center artist Tom Buzbee. Scheduled to be completed in late 1999, this smaller configuration of the Space Station featured a horizontal truss structure that supported U.S., European, and Japanese Laboratory Modules; the U.S. Habitation Module; and three sets of solar arrays. The Space Station Freedom was an international, permanently manned, orbiting base to be assembled in orbit by a series of Space Shuttle missions that were to begin in the mid-1990s.
NASA Technical Reports Server (NTRS)
Mulloth, Lila; LeVan, Douglas
2002-01-01
The current CO2 removal technology of NASA is very energy intensive and contains many non-optimized subsystems. This paper discusses the concept of a next-generation, membrane-integrated adsorption processor for CO2 removal and compression in closed-loop air revitalization systems. This processor will use many times less power than NASA's current CO2 removal technology and will be capable of maintaining a lower CO2 concentration in the cabin than can be achieved by the existing CO2 removal systems. The compact, consolidated configuration of gas dryer, CO2 separator, and CO2 compressor will allow continuous recycling of humid air in the cabin and supply of compressed CO2 to the reduction unit for oxygen recovery. The device has potential application to the International Space Station and future long-duration transit and planetary missions.
Flour, Mieke; Clark, Michael; Partsch, Hugo; Mosti, Giovanni; Uhl, Jean-Francois; Chauveau, Michel; Cros, Francois; Gelade, Pierre; Bender, Dean; Andriessen, Anneke; Schuren, Jan; Cornu-Thenard, André; Arkans, Ed; Milic, Dragan; Benigni, Jean-Patrick; Damstra, Robert; Szolnoky, Gyozo; Schingale, Franz
2013-10-01
The International Compression Club (ICC) is a partnership between academics, clinicians and industry focused upon understanding the role of compression in the management of different clinical conditions. The ICC meets regularly and from these meetings has produced a series of eight consensus publications upon topics ranging from evidence-based compression to compression trials for arm lymphoedema. All of the current consensus documents can be accessed on the ICC website (http://www.icc-compressionclub.com/index.php). In May 2011, the ICC met in Brussels during the European Wound Management Association (EWMA) annual conference. With almost 50 members in attendance, the day-long ICC meeting challenged a series of dogmas and myths that exist when considering compression therapies. In preparation for a discussion on beliefs surrounding compression, a forum was established on the ICC website where presenters were able to display a summary of their thoughts upon each dogma to be discussed during the meeting. Members of the ICC could then provide comments on each topic, thereby widening the discussion to the entire membership of the ICC rather than simply those attending the EWMA conference. This article presents an extended report of the issues that were discussed, with each dogma covered in a separate section. The ICC discussed 12 'dogmas', with areas 1 through 7 dedicated to materials and application techniques used to apply compression and the remaining topics (8 through 12) related to the indications for using compression. © 2012 The Authors. International Wound Journal © 2012 John Wiley & Sons Ltd and Medicalhelplines.com Inc.
Real-Time Aggressive Image Data Compression
1990-03-31
implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates... Project Summary. Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. Institution... Summary: The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression
Transverse compression of PPTA fibers
NASA Astrophysics Data System (ADS)
Singletary, James
2000-07-01
Results of single transverse compression testing of PPTA and PIPD fibers, using a novel test device, are presented and discussed. In the tests, short lengths of single fibers are compressed between two parallel, stiff platens. The fiber elastic deformation is analyzed as a Hertzian contact problem. The inelastic deformation is analyzed by elastic-plastic FE simulation and by laser-scanning confocal microscopy of the compressed fibers ex post facto. The results obtained are compared to those in the literature and to the theoretical predictions of PPTA fiber transverse elasticity based on PPTA crystal elasticity.
Data Compression Using the Dictionary Approach Algorithm
1990-12-01
Compression Technique: The LZ77 is an OPM/L data compression scheme suggested by Ziv and Lempel. A slightly modified... June 1984. 12. Witten I. H., Neal R. M. and Cleary J. G., Arithmetic Coding for Data Compression, Communications of the ACM, June 1987. 13. Ziv J. and Lempel A... Naval Postgraduate School, Monterey, California. Thesis: Data Compression Using the Dictionary Approach Algorithm
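The LZ77 dictionary scheme referenced in this thesis record can be illustrated with a toy encoder. This sketch is not the thesis implementation, just the classic form that emits (offset, length, next_byte) triples from a sliding window:

```python
def lz77_compress(data: bytes, window: int = 255):
    """Toy LZ77: greedy longest match in a sliding window,
    emitted as (offset, length, next_byte) triples."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        for off in range(1, i - start + 1):
            length = 0
            # Matches may overlap the current position (period = off).
            while (length < len(data) - i - 1
                   and data[i - off + (length % off)] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = off, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    out = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            out.append(out[-off])   # copy from the already-decoded window
        out.append(nxt)
    return bytes(out)
```

Repeated phrases collapse into single triples, which is the dictionary effect the thesis studies; production coders replace the brute-force match search with hash chains or trees.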
A hybrid data compression approach for online backup service
NASA Astrophysics Data System (ADS)
Wang, Hua; Zhou, Ke; Qin, MingKang
2009-08-01
With the popularity of SaaS (Software as a Service), backup service is becoming a hot topic of storage application. Due to the numerous backup users, how to reduce the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects: for example, data stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data; neither alone can meet the compression-efficiency needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file compression. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the adaptability of each algorithm to particular situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
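The two-level idea (global inter-file de-duplication plus per-chunk stream compression) can be sketched roughly as follows. The fixed-size chunking, SHA-256 hashing, and zlib back-end here are illustrative assumptions, not the paper's design:

```python
import hashlib
import zlib

def hybrid_compress(files, chunk_size=4096):
    """Two-level sketch: global chunk de-duplication across all users'
    files, then zlib stream compression of each unique chunk."""
    store = {}     # chunk digest -> compressed chunk (global level)
    recipes = {}   # file name -> ordered list of chunk digests
    for name, data in files.items():
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in store:                   # inter-file redundancy
                store[h] = zlib.compress(chunk)  # intra-chunk compression
            digests.append(h)
        recipes[name] = digests
    return store, recipes

def restore(store, recipe):
    return b"".join(zlib.decompress(store[h]) for h in recipe)
```

Identical chunks appearing in many users' backups are stored once, while each unique chunk still benefits from stream compression internally.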
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for the mine and minefield detection application. Of particular interest is assessing the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop and in the performance of the near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compression are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection across different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
Lossy compression of weak lensing data
Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...
2011-07-12
Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 x 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
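The square-root algorithm rests on variance stabilization: after the mapping v → 2√v, Poisson shot noise has roughly unit sigma, so rounding to integers discards less information than the noise itself. A toy sketch under that assumption, with synthetic counts and zlib as a stand-in back-end coder (not the authors' pipeline):

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
# Synthetic photon-count image with Poisson shot noise.
counts = rng.poisson(900, size=(128, 128)).astype(np.int32)

# Variance-stabilising digitisation: after v -> 2*sqrt(v) the shot
# noise has roughly unit sigma, so rounding to integers discards less
# information than the Poisson error itself.
coded = np.round(2.0 * np.sqrt(counts)).astype(np.int32)
restored = (coded.astype(np.float64) / 2.0) ** 2
```

The digitization error stays below the shot noise (roughly 0.5·√N per pixel), while the coded values span a far smaller range than the raw counts and therefore compress much harder.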
High-performance compression of astronomical images
NASA Technical Reports Server (NTRS)
White, Richard L.
1993-01-01
Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
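The key to the H-transform's lossless mode is that integer arithmetic can be made exactly reversible. A one-dimensional integer Haar (S-transform) lifting step, shown here as an illustrative sketch rather than the paper's full 2D H-transform, demonstrates the idea:

```python
import numpy as np

def s_forward(x):
    """One exactly reversible integer Haar (S-transform) step on pairs:
    an integer difference plus a lifted floor-average; no information
    is lost despite the divide-by-two."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b          # detail coefficients
    s = b + (d >> 1)   # smooth coefficients, floor((a + b) / 2)
    return s, d

def s_inverse(s, d):
    b = s - (d >> 1)
    a = d + b
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out
```

Applying such steps along rows and columns and recursing on the smooth band gives the 2D multiresolution structure; lossy compression then quantizes the detail coefficients, which for noisy astronomical images carry mostly noise.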
ERGC: an efficient referential genome compression algorithm
Saha, Subrata; Rajasekaran, Sanguthevar
2015-01-01
Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exist a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem, achieving compression ratios better than those of the currently best-performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
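Reference-based compression generally encodes the target genome as copy operations against the reference plus literal fallbacks. A greedy, k-mer-seeded toy version (not the ERGC algorithm itself; the k-mer size and greedy extension are illustrative choices):

```python
def ref_compress(reference: str, target: str, k: int = 8):
    """Greedy referential sketch: encode the target as (ref_pos, length)
    copies from the reference plus literal fallbacks."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], i)   # first k-mer occurrence
    ops, i = [], 0
    while i < len(target):
        j = index.get(target[i:i + k])
        if j is None:
            ops.append(("L", target[i]))          # literal character
            i += 1
        else:
            length = k                            # extend the seed match
            while (j + length < len(reference) and i + length < len(target)
                   and reference[j + length] == target[i + length]):
                length += 1
            ops.append(("M", j, length))          # copy from reference
            i += length
    return ops

def ref_decompress(reference: str, ops) -> str:
    parts = []
    for op in ops:
        if op[0] == "L":
            parts.append(op[1])
        else:
            _, j, length = op
            parts.append(reference[j:j + length])
    return "".join(parts)
```

Because two genomes of the same species are nearly identical, long stretches collapse into single copy operations, which is why referential schemes beat generic compressors on this data.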
Compression and fast retrieval of SNP data
Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio
2014-01-01
Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564
Improved compression technique for multipass color printers
NASA Astrophysics Data System (ADS)
Honsinger, Chris
1998-01-01
A multipass color printer prints a color image by printing one color plane at a time in a prescribed order; e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data for each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory but still takes advantage of the correlation between the color planes. The compression scheme is based on a block-adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block-adaptive decorrelation operations can be performed efficiently in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
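The benefit of inter-plane decorrelation can be demonstrated with a toy example: subtract an already-printed plane from the current one and compress the residual. Here synthetic planes and zlib stand in for the paper's block-adaptive, DCT-domain method:

```python
import zlib
import numpy as np

rng = np.random.default_rng(4)
# Two strongly correlated color planes (e.g. cyan and magenta
# separations of the same page content).
base = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
cyan = np.clip(base + rng.integers(-5, 6, size=base.shape), 0, 255).astype(np.int16)
magenta = np.clip(base + rng.integers(-5, 6, size=base.shape), 0, 255).astype(np.int16)

# Compress the plane directly vs. after inter-plane decorrelation.
direct = zlib.compress(magenta.tobytes())
residual = (magenta - cyan).astype(np.int16)   # small-valued residual
decorr = zlib.compress(residual.tobytes())
```

The residual plane has far lower entropy than the plane itself, so the decorrelated stream compresses noticeably better, which is the effect the paper exploits across a print job's color planes.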
47 CFR 73.6016 - Digital Class A TV station protection of TV broadcast stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Digital Class A TV station protection of TV...) BROADCAST RADIO SERVICES RADIO BROADCAST SERVICES Class A Television Broadcast Stations § 73.6016 Digital Class A TV station protection of TV broadcast stations. Digital Class A TV stations must protect...
47 CFR 73.6016 - Digital Class A TV station protection of TV broadcast stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Digital Class A TV station protection of TV...) BROADCAST RADIO SERVICES RADIO BROADCAST SERVICES Class A Television Broadcast Stations § 73.6016 Digital Class A TV station protection of TV broadcast stations. Digital Class A TV stations must protect...
Simulating compressible-incompressible two-phase flows
NASA Astrophysics Data System (ADS)
Denner, Fabian; van Wachem, Berend
2017-11-01
Simulating compressible gas-liquid flows, e.g. air-water flows, presents considerable numerical issues and requires substantial computational resources, particularly because of the stiff equation of state for the liquid and the different Mach number regimes. Treating the liquid phase (low Mach number) as incompressible, yet concurrently considering the gas phase (high Mach number) as compressible, can improve the computational performance of such simulations significantly without sacrificing important physical mechanisms. A pressure-based algorithm for the simulation of two-phase flows is presented, in which a compressible and an incompressible fluid are separated by a sharp interface. The algorithm is based on a coupled finite-volume framework, discretised in conservative form, with a compressive VOF method to represent the interface. The bulk phases are coupled via a novel acoustically-conservative interface discretisation method that retains the acoustic properties of the compressible phase and does not require a Riemann solver. Representative test cases are presented to scrutinize the proposed algorithm, including the reflection of acoustic waves at the compressible-incompressible interface, shock-drop interaction and gas-liquid flows with surface tension. Financial support from the EPSRC (Grant EP/M021556/1) is gratefully acknowledged.
Broadcasting Stations of the World; Part III. Frequency Modulation Broadcasting Stations.
ERIC Educational Resources Information Center
Foreign Broadcast Information Service, Washington, DC.
This third part of "Broadcasting Stations of the World", which lists all reported radio broadcasting and television stations, with the exception of those in the United States which broadcast on domestic channels, covers frequency modulation broadcasting stations. It contains two sections: one indexed alphabetically by country and city, and the…
Distributed Coding of Compressively Sensed Sources
NASA Astrophysics Data System (ADS)
Goukhshtein, Maxim
In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.
Exploring compression techniques for ROOT IO
NASA Astrophysics Data System (ADS)
Zhang, Z.; Bockelman, B.
2017-10-01
ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
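The compression-level tradeoff described above can be illustrated with Python's zlib (DEFLATE), independent of ROOT itself; the payload below is a synthetic stand-in for event data, not a real ROOT basket:

```python
import zlib

# Semi-structured payload standing in for serialized event records.
payload = b"".join(
    b"evt:%06d|pt=%08d|eta=%05d;" % (i, (i * 37) % 9973, (i * 91) % 541)
    for i in range(20000)
)

# Compare fast (1), default (6), and maximum-effort (9) DEFLATE levels.
sizes, restored = {}, {}
for level in (1, 6, 9):
    comp = zlib.compress(payload, level)
    sizes[level] = len(comp)
    restored[level] = zlib.decompress(comp)
```

Higher levels spend more CPU searching for matches to shave bytes; whether that trade is worth it depends on whether a file is written once and read many times (favor high levels) or produced and consumed transiently (favor low levels).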
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
Homogenous charge compression ignition engine having a cylinder including a high compression space
Agama, Jorge R.; Fiveland, Scott B.; Maloney, Ronald P.; Faletti, James J.; Clarke, John M.
2003-12-30
The present invention relates generally to the field of homogeneous charge compression engines. In these engines, fuel is injected upstream or directly into the cylinder when the power piston is relatively close to its bottom dead center position. The fuel mixes with air in the cylinder as the power piston advances to create a relatively lean homogeneous mixture that preferably ignites when the power piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. Thus, the present invention divides the homogeneous charge between a controlled volume higher compression space and a lower compression space to better control the start of ignition.
Psychophysical Comparisons in Image Compression Algorithms.
1999-03-01
Leister, M., "Lossy Lempel-Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression," IEEE Proceedings, v. I, pp. 225-228, September... 1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master's Thesis, Naval... Naval Postgraduate School, Monterey, California. Thesis: Psychophysical Comparisons in Image Compression Algorithms, by Christopher J. Bodine, March
The effects of compressive preloads on the compression-after-impact strength of carbon/epoxy
NASA Technical Reports Server (NTRS)
Nettles, A. T.; Lance, D. G.
1992-01-01
A preloading device was used to examine the effects of compressive prestress on the compression-after-impact (CAI) strength of 16-ply, quasi-isotropic carbon/epoxy test coupons. T300/934 material was evaluated at preloads from 200 to 4000 lb at impact energies from 1 to 9 joules. IM7/8551-7 material was evaluated at preloads from 4000 to 10,000 lb at impact energies from 4 to 16 joules. Advanced design-of-experiments methodology was used to design and evaluate the test matrices. The results showed that no statistically significant change in CAI strength could be attributed to the amount of compressive preload applied to the specimen.
Space Station Freedom Utilization Conference
NASA Technical Reports Server (NTRS)
1992-01-01
The topics addressed in the Space Station Freedom Utilization Conference are: (1) Space Station Freedom overview and research capabilities; (2) Space Station Freedom research plans and opportunities; (3) life sciences research on Space Station Freedom; (4) technology research on Space Station Freedom; (5) microgravity research and biotechnology on Space Station Freedom; and (6) closing plenary.
Development Status of the International Space Station Urine Processor Assembly
NASA Technical Reports Server (NTRS)
Holder, Donald W.; Hutchens, Cindy F.
2003-01-01
NASA, Marshall Space Flight Center (MSFC) is developing a Urine Processor Assembly (UPA) for the International Space Station (ISS). The UPA uses Vapor Compression Distillation (VCD) technology to reclaim water from pre-treated urine. This water is further processed by the Water Processor Assembly (WPA) to potable quality standards for use on the ISS. NASA has developed this technology over the last 25-30 years. Over this history, many technical issues were solved with thousands of hours of ground testing that demonstrate the ability of the UPA technology to reclaim water from urine. In recent years, NASA MSFC has been responsible for taking the UPA technology to "flight design" maturity. This paper will give a brief overview of the UPA design and a status of the major design and development efforts completed recently to mature the UPA to a flight level.
Preliminary Test Results of Heshe Hydrogeological Experimental Well Station in Taiwan
NASA Astrophysics Data System (ADS)
Chuang, P.; Liu, C.; Lin, M.; Chan, W.; Lee, T.; Chia, Y.; Teng, M.; Liu, C.
2013-12-01
Safe disposal of radioactive waste is a critical issue for the development of nuclear energy. The design of the final disposal system is based on the concept of multiple barriers, which integrates natural barriers and engineered barriers for long-term isolation of radioactive wastes. As groundwater is the major medium that can transport radionuclides to our living environment, it is essential to characterize groundwater flow at the disposal site. Taiwan is located at the boundary between the Eurasian plate and the Philippine Sea plate. Geologic formations are often fractured due to tectonic compression and extension. In this study, a well station for the research and development of hydrogeological techniques was established at the Experimental Forest of the National Taiwan University in central Taiwan. There are 10 testing wells, ranging in depth from 25 m to 100 m, at the station. The bedrock beneath the regolith is highly fractured mudstone. As fractures are the preferential pathways of groundwater flow, the focus of the in-situ tests is to investigate the location and connectivity of permeable fractures. Several field tests have been conducted, including geophysical logging, heat-pulse flowmeter, hydraulic test, tracer test and double packer test, for the development of advanced technologies to detect preferential groundwater flow in fractured rocks.
An efficient compression scheme for bitmap indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie
2004-04-13
When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH-compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH-compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH-compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH-compressed indices is much faster than with BBC-compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query
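The word-aligned idea can be sketched as follows: the bitmap is split into 31-bit groups; runs of identical all-zero or all-one groups collapse into fill words (most significant bit set, next bit the fill value, remainder the run length), and mixed groups stay as literal words. This toy encoder illustrates the scheme, not the authors' implementation:

```python
def wah_encode(bits):
    """Sketch of WAH coding for a list of 0/1 values using 32-bit words."""
    groups = []
    for i in range(0, len(bits), 31):
        g = list(bits[i:i + 31])
        g += [0] * (31 - len(g))                  # pad the final group
        groups.append(int("".join(map(str, g)), 2))
    ALL_ONES = (1 << 31) - 1
    words, i = [], 0
    while i < len(groups):
        if groups[i] in (0, ALL_ONES):            # run of identical groups
            fill = 1 if groups[i] else 0
            n = 1
            while i + n < len(groups) and groups[i + n] == groups[i]:
                n += 1
            words.append((1 << 31) | (fill << 30) | n)   # fill word
            i += n
        else:
            words.append(groups[i])               # literal word (MSB = 0)
            i += 1
    return words

def wah_decode(words, nbits):
    bits = []
    for w in words:
        if w >> 31:                               # fill word
            fill = (w >> 30) & 1
            n = w & ((1 << 30) - 1)
            bits.extend([fill] * (31 * n))
        else:                                     # literal word
            bits.extend(int(b) for b in format(w, "031b"))
    return bits[:nbits]
```

Because everything is word-aligned, logical AND/OR of two compressed bitmaps can be done directly on fill and literal words without unpacking to individual bits, which is the CPU-friendliness the paper is after.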
A New Approach for Fingerprint Image Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a scheme of compression based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme allows compression ratios of 20:1 to be achieved without any perceptible loss of quality. The FBI's publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.
A biological compression model and its applications.
Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd
2011-01-01
A biological compression model, the expert model, is presented that is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.
Wearable EEG via lossless compression.
Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2016-08-01
This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on one previously reported by the authors, exploits the temporal correlation between samples at different sampling times, and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
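The two correlations the algorithm exploits can be illustrated with a toy predictive-coding sketch (synthetic data and simple predictors of our own, not the authors' algorithm): residuals from temporal and spatial prediction have lower empirical entropy than the raw samples, which is what a lossless entropy coder then exploits.

```python
import numpy as np

def entropy_bits(x):
    """Empirical entropy (bits/sample) of an integer sequence."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
t = np.arange(1000)
slow = (50 * np.sin(t / 20)).astype(int)     # shared slow wave
ch1 = slow + rng.integers(-2, 3, t.size)     # channel 1
ch2 = slow + rng.integers(-2, 3, t.size)     # nearby channel 2

temporal_residual = np.diff(ch1)   # predict each sample from the previous one
spatial_residual = ch2 - ch1       # predict a channel from its neighbour
# both residuals have lower entropy than the raw channels
```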
47 CFR 95.139 - Adding a small base station or a small control station.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Adding a small base station or a small control... base station or a small control station. (a) Except for a GMRS system licensed to a non-individual, one or more small base stations or a small control station may be added to a GMRS system at any point...
Wavelet compression of noisy tomographic images
NASA Astrophysics Data System (ADS)
Kappeler, Christian; Mueller, Stefan P.
1995-09-01
3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets; a typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore are usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate of not inducing image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
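The compress-by-thresholding step can be sketched with a one-level Haar transform standing in for the Daubechies-12 wavelet (our toy example; the compression factor is estimated as the ratio of total to retained coefficients):

```python
import numpy as np

def haar_rows(x):
    """One level of the Haar transform along rows: averages then differences."""
    return np.hstack([(x[:, 0::2] + x[:, 1::2]) / 2,
                      (x[:, 0::2] - x[:, 1::2]) / 2])

def haar2(img):
    """Separable 2-D transform: rows, then columns."""
    return haar_rows(haar_rows(img).T).T

img = np.add.outer(np.arange(32.0), np.arange(32.0))  # smooth test image
coef = haar2(img)
kept = np.abs(coef) > 0.6            # threshold small detail coefficients
factor = coef.size / kept.sum()      # achievable compression factor
```

For this smooth ramp only the coarse averages survive the threshold, giving a factor of 4 at one decomposition level; deeper decompositions and real wavelets reach the thirty-to-fifty range reported above.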
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
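The sample-selection problem can be sketched as a dynamic program: choose k samples (endpoints forced) minimizing the total squared error of piecewise-linear interpolation. This is our illustrative O(n²k) formulation, not the authors' algorithm:

```python
import numpy as np

def best_samples(x, k):
    """Optimal k-sample subset for piecewise-linear reconstruction."""
    n = len(x)
    seg = np.zeros((n, n))           # seg[i, j]: error of spanning i..j by a line
    for i in range(n):
        for j in range(i + 2, n):
            line = np.linspace(x[i], x[j], j - i + 1)
            seg[i, j] = float(((x[i:j + 1] - line) ** 2).sum())
    cost = np.full((n, k), np.inf)   # cost[j, m]: best error keeping m+1 samples,
    back = np.zeros((n, k), dtype=int)  # last kept sample at index j
    cost[0, 0] = 0.0
    for j in range(1, n):
        for m in range(1, k):
            for i in range(j):
                c = cost[i, m - 1] + seg[i, j]
                if c < cost[j, m]:
                    cost[j, m], back[j, m] = c, i
    idx, j, m = [n - 1], n - 1, k - 1   # backtrack the kept indices
    while m > 0:
        j = int(back[j, m])
        idx.append(j)
        m -= 1
    return idx[::-1], float(cost[n - 1, k - 1])
```

A linear ramp is reconstructed exactly from its two endpoints, while a signal with a spike needs its extra budget spent around the spike, mirroring how the exact method concentrates samples where heuristics lose accuracy.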
Nonpainful wide-area compression inhibits experimental pain
Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena
2016-01-01
Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM. PMID:27152691
ERGC: an efficient referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2015-11-01
Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
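The idea of reference-based compression can be illustrated with a minimal diff encoder (a toy sketch under the simplifying assumption of equal-length sequences; ERGC itself handles real genomes, including insertions and deletions):

```python
def ref_encode(target, reference):
    """Toy reference-based encoder: keep only the positions where the
    target sequence differs from the reference (equal lengths assumed)."""
    return [(i, t) for i, (t, r) in enumerate(zip(target, reference)) if t != r]

def ref_decode(diffs, reference):
    """Rebuild the target by patching the reference with the stored diffs."""
    seq = list(reference)
    for i, base in diffs:
        seq[i] = base
    return "".join(seq)
```

Since two genomes of the same species are nearly identical, the diff list is tiny relative to the raw sequence, which is the source of the compression gain.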
Classification Techniques for Digital Map Compression
1989-03-01
classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
Compression of surface myoelectric signals using MP3 encoding.
Chan, Adrian D C
2011-01-01
The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
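The distortion measure used for the comparison, percent residual difference (PRD), is straightforward to compute:

```python
import numpy as np

def prd(original, reconstructed):
    """Percent residual difference between a signal and its decoded version,
    the distortion measure commonly used in biomedical signal compression."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(((original - reconstructed) ** 2).sum()
                           / (original ** 2).sum())
```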
Near-wall modeling of compressible turbulent flow
NASA Technical Reports Server (NTRS)
So, Ronald M. C.
1991-01-01
A near-wall two-equation model for compressible flows is proposed. The model is formulated by relaxing the assumption of dynamic field similarity between compressible and incompressible flows. A postulate is made to justify the extension of incompressible models to account for compressibility effects. This requires formulating the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilatational part, which is directly affected by these changes. A model with an explicit dependence on the turbulent Mach number is proposed for the dilatational dissipation rate.
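The splitting of the dissipation can be written compactly. The dilatational closure shown below is the widely used Sarkar-type model with an explicit turbulent-Mach-number dependence, given for illustration only; the exact model proposed in this work may differ in form and coefficients:

```latex
\varepsilon = \varepsilon_s + \varepsilon_d , \qquad
\varepsilon_d = \alpha_1 M_t^2 \, \varepsilon_s , \qquad
M_t = \frac{\sqrt{2k}}{a} ,
```

where \(\varepsilon_s\) is the solenoidal dissipation rate, \(\varepsilon_d\) the dilatational part, \(k\) the turbulent kinetic energy, \(a\) the local speed of sound, and \(\alpha_1\) a model constant.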
Advances in high throughput DNA sequence data compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz
2016-06-01
Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.
Compression and fast retrieval of SNP data.
Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio
2014-11-01
The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
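As an illustration of one of the entropy coders compared above, a compact Huffman code-length computation (a generic sketch, not the authors' implementation):

```python
import heapq
from collections import Counter

def huffman_lengths(data):
    """Huffman code length per symbol, from repeated merging of the two
    least-frequent subtrees; lengths satisfy the Kraft equality."""
    heap = [[w, [sym, 0]] for sym, w in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:] + hi[1:]:
            pair[1] += 1                     # one level deeper in the tree
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])
```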
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then the result of the processing is re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation to the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it allows de-noising and enhancing the contours of this image.
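The screening operation itself is a per-pixel comparison against a tiled threshold mask. The sketch below shows it in the pixel domain for clarity; the contribution of the paper is computing the equivalent threshold in the DCT domain of the JPEG stream:

```python
import numpy as np

def screen_halftone(img, mask):
    """Halftoning by screening: tile the threshold mask over the image
    and emit 1 where the pixel exceeds its local threshold."""
    h, w = img.shape
    mh, mw = mask.shape
    tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
    return (img > tiled).astype(np.uint8)
```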
Data Compression With Application to Geo-Location
2010-08-01
Geo-location in a wireless sensor network requires the estimation of time-difference-of-arrival (TDOA) parameters using data collected by a set of spatially separated sensors. Compressing the data that is shared among the sensors can provide tremendous savings in terms of energy and transmission latency. Traditional MSE- and perceptual-based data compression schemes fail to accurately capture the effects of compression on the TDOA estimation task; therefore, it is necessary to investigate compression algorithms suitable for TDOA parameter estimation. This thesis explores the
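The TDOA estimate that compression must preserve is typically taken from the peak of the cross-correlation between sensor recordings; a minimal sketch with synthetic signals:

```python
import numpy as np

def tdoa_samples(ref, meas):
    """Delay (in samples) of `meas` relative to `ref`, from the peak of
    their full cross-correlation -- the basic TDOA computation."""
    corr = np.correlate(meas, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

rng = np.random.default_rng(1)
ref = rng.standard_normal(256)
meas = np.roll(ref, 7)       # second sensor sees the signal 7 samples late
```

Any lossy coder for this task should be judged by how much it perturbs this peak location, not by MSE alone, which is the point the thesis makes.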
Chattoraj, Sayantan; Sun, Changquan Calvin
2018-04-01
Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale-up or scale-down, consistent product quality, a small operational footprint, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
1972-01-01
A user's handbook for the modular space station concept is presented. The document is designed to acquaint science personnel with the overall modular space station program, the general nature and capabilities of the station itself, some of the scientific opportunities presented by the station, the general policy governing its operation, and the relationship between the program and participants from the scientific community.
Temporal compressive imaging for video
NASA Astrophysics Data System (ADS)
Zhou, Qun; Zhang, Linxia; Ke, Jun
2018-01-01
In many situations imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
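The TCI forward model can be sketched directly: the single measured frame is the mask-weighted sum of T=8 high-speed frames (illustrative dimensions below; the paper works with 256×256 frames split into 8×8 patches):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16                     # temporal compression ratio T = 8
video = rng.random((T, H, W))           # 8 high-speed frames (the unknowns)
masks = rng.integers(0, 2, (T, H, W))   # per-frame binary coded apertures
snapshot = (masks * video).sum(axis=0)  # the single compressive measurement
# reconstruction (TwIST, GMM, ...) inverts this under-determined linear model
```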
Compressed gas fuel storage system
Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.
2001-01-01
A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.
NASA Astrophysics Data System (ADS)
Butler, G. V.
1981-04-01
Early space station designs are considered, taking into account Herman Oberth's first space station, the London Daily Mail study, the first major space station design developed during the Moon missions, and the DOD Manned Orbiting Laboratory program. Attention is given to Skylab, new space station studies, the Shuttle and Spacelab, communication satellites, solar power satellites, a 30-meter-diameter radiometer for geological measurements and agricultural assessments, the mining of the Moon, and questions of international cooperation. It is thought very probable that there will be very large space stations at some time in the future. For the more immediate future, however, a step-by-step development is envisaged, starting with Spacelab stations crewed by 3-4 men.
30 CFR 57.13020 - Use of compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...
30 CFR 57.13020 - Use of compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...
30 CFR 57.13020 - Use of compressed air.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...
NASA Astrophysics Data System (ADS)
Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.
2018-05-01
A major experimental research area in material equation-of-state studies today involves the use of off-Hugoniot measurements rather than shock experiments, which give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter, including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures ~2 × 10^11 Pa without any complex target designs.
Centrifugal Gas Compression Cycle
NASA Astrophysics Data System (ADS)
Fultun, Roy
2002-11-01
A centrifuged gas of kinetic, elastic hard spheres compresses isothermally and without flow of heat in a process that reverses free expansion. This theorem follows from stated assumptions via a collection of thought experiments, theorems and other supporting results, and it excludes application of the reversible mechanical adiabatic power law in this context. The existence of an isothermal adiabatic centrifugal compression process makes a three-process cycle possible using a fixed sample of the working gas. The three processes are: adiabatic mechanical expansion and cooling against a piston, isothermal adiabatic centrifugal compression back to the original volume, and isochoric temperature rise back to the original temperature due to an influx of heat. This cycle forms the basis for a Thomson perpetuum mobile that induces a loop of energy flow in an isolated system consisting of a heat bath connectable by a thermal path to the working gas, a mechanical extractor of the gas's internal energy, and a device that uses that mechanical energy and dissipates it as heat back into the heat bath. We present a simple experimental procedure to test the assertion that adiabatic centrifugal compression is isothermal. An energy budget for the cycle provides a criterion for breakeven in the conversion of heat to mechanical energy.
CoGI: Towards Compressing Genomes as an Image.
Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong
2015-01-01
Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
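The sequence-to-bitmap step can be sketched with a simple 2-bit encoding of the four bases (our toy mapping, chosen for illustration; CoGI's actual transform may differ in detail):

```python
import numpy as np

# two bits per base, laid out row by row into a binary image
BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def genome_to_bitmap(seq, width):
    """Map a DNA string to a binary image of the given row width,
    zero-padding the final row; the bitmap is then amenable to
    image coders such as rectangular partition coding."""
    flat = [bit for base in seq for bit in BITS[base]]
    flat += [0] * (-len(flat) % width)      # pad to a full last row
    return np.array(flat, dtype=np.uint8).reshape(-1, width)
```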
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
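The controlled-lossy idea can be sketched as uniform quantization with a guaranteed per-value error bound (our sketch of the principle, not the actual LERC format):

```python
import numpy as np

def lossy_quantize(block, max_error):
    """Quantize a block so every decoded value is within max_error of the
    original: step = 2*max_error guarantees |x - decoded(x)| <= max_error."""
    step = 2.0 * max_error
    base = float(block.min())
    q = np.round((block - base) / step).astype(np.uint32)
    return base, step, q          # small integers in q entropy-code well

def dequantize(base, step, q):
    return base + q * step
```

The user-chosen `max_error` is what distinguishes controlled lossy compression from perceptual codecs: the bound holds for every pixel or elevation value, which matters for analysis.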
Aerodynamics inside a rapid compression machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Gaurav; Sung, Chih-Jen
2006-04-15
The aerodynamics inside a rapid compression machine after the end of compression are investigated using planar laser-induced fluorescence (PLIF) of acetone. To study the effect of reaction chamber configuration on the resulting aerodynamics and temperature field, experiments are conducted and compared using a creviced piston and a flat piston under varying conditions. Results show that the flat piston design leads to significant mixing of the cold vortex with the hot core region, which causes alternating hot and cold regions inside the combustion chamber. At higher pressures, the effect of the vortex is reduced. The creviced piston head configuration is demonstrated to result in a drastic reduction of the effect of the vortex. Experimental conditions are also simulated using the Star-CD computational fluid dynamics package. Computed results closely match the experimental observations. Numerical results indicate that with a flat piston design, gas velocity after compression is very high and the core region shrinks quickly due to rapid entrainment of cold gases, whereas for a creviced piston head design, gas velocity after compression is significantly lower and the core region remains unaffected for a long duration. As a consequence, for the flat piston, the adiabatic core assumption can significantly overpredict the maximum temperature after the end of compression. For the creviced piston, the adiabatic core assumption is found to be valid even up to 100 ms after compression. This work therefore experimentally and numerically substantiates the importance of piston head design for achieving a homogeneous core region inside a rapid compression machine.
Chaos-Based Simultaneous Compression and Encryption for Hadoop.
Usama, Muhammad; Zakaria, Nordin
2017-01-01
Data compression and encryption are key components of commonly deployed platforms such as Hadoop. Numerous data compression and encryption tools are presently available on such platforms and the tools are characteristically applied in sequence, i.e., compression followed by encryption or encryption followed by compression. This paper focuses on the open-source Hadoop framework and proposes a data storage method that efficiently couples data compression with encryption. A simultaneous compression and encryption scheme is introduced that addresses an important implementation issue of source coding based on Tent Map and Piece-wise Linear Chaotic Map (PWLM), which is the infinite precision of real numbers that result from their long products. The approach proposed here solves the implementation issue by removing fractional components that are generated by the long products of real numbers. Moreover, it incorporates a stealth key that performs a cyclic shift in PWLM without compromising compression capabilities. In addition, the proposed approach implements a masking pseudorandom keystream that enhances encryption quality. The proposed algorithm demonstrated a congruent fit within the Hadoop framework, providing robust encryption security and compression.
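The building blocks named in the abstract can be sketched in a few lines: a piece-wise linear chaotic map (PWLM) iterated from a secret seed, with each state truncated to an integer byte (discarding the troublesome fractional precision) and used as a masking keystream. This is a hedged illustration of those ingredients, not the paper's exact scheme; the parameter values are arbitrary.

```python
def pwlm(x: float, p: float) -> float:
    """Piece-wise linear chaotic map on (0, 1) with control parameter p in (0, 0.5)."""
    if x < p:
        return x / p
    elif x < 0.5:
        return (x - p) / (0.5 - p)
    else:
        return pwlm(1.0 - x, p)          # mirror the upper half onto the lower

def keystream(seed: float, p: float, n: int) -> bytes:
    """Derive n pseudorandom bytes by iterating the map and keeping 8 bits of
    each state (illustrative keystream, not the published construction)."""
    x, out = seed, []
    for _ in range(n):
        x = pwlm(x, p)
        out.append(int(x * 256) & 0xFF)  # truncation removes the fractional part
    return bytes(out)

def mask(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

ct = mask(b"hadoop", keystream(0.37, 0.23, 6))   # encrypt
pt = mask(ct, keystream(0.37, 0.23, 6))          # XOR masking is self-inverse
assert pt == b"hadoop"
```

The same seed and parameter must be shared by both sides; the paper's "stealth key" cyclic shift and the coupling to source coding are omitted here.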
Method for preventing jamming conditions in a compression device
Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.
2002-06-18
A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented at a generally right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.
Multiresolution Distance Volumes for Progressive Surface Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, D E; Bertram, M; Duchaineau, M A
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high-genus surfaces.
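The O(n) distance transform mentioned above can be illustrated in one dimension with the classic two-pass chamfer scan: a forward pass propagates distances left-to-right and a backward pass right-to-left, touching each cell once. This is a generic sketch of the linear-time idea, not the paper's 3-D signed-distance implementation.

```python
def distance_transform_1d(occupied):
    """Two-pass O(n) distance transform on a 1-D grid: for each cell, the
    distance to the nearest occupied cell (generic sketch of the idea)."""
    n = len(occupied)
    INF = n + 1
    d = [0 if o else INF for o in occupied]
    for i in range(1, n):                 # forward pass: nearest seed on the left
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(n - 2, -1, -1):        # backward pass: nearest seed on the right
        d[i] = min(d[i], d[i + 1] + 1)
    return d

assert distance_transform_1d([0, 0, 1, 0, 0, 0, 1]) == [2, 1, 0, 1, 2, 1, 0]
```

A signed variant negates distances inside the surface; the 3-D case runs the same passes along each axis.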
Compressive Properties of Metal Matrix Syntactic Foams in Free and Constrained Compression
NASA Astrophysics Data System (ADS)
Orbulov, Imre Norbert; Májlinger, Kornél
2014-06-01
Metal matrix syntactic foam (MMSF) blocks were produced by an inert gas-assisted pressure infiltration technique. MMSFs are advanced hollow sphere reinforced-composite materials having promising application in the fields of aviation, transport, and automotive engineering, as well as in civil engineering. The produced blocks were investigated in free and constrained compression modes, and besides the characteristic mechanical properties, their deformation mechanisms and failure modes were studied. In the tests, the chemical composition of the matrix material, the size of the reinforcing ceramic hollow spheres, the applied heat treatment, and the compression mode were considered as investigation parameters. The monitored mechanical properties were the compressive strength, the fracture strain, the structural stiffness, the fracture energy, and the overall absorbed energy. These characteristics were strongly influenced by the test parameters. By the proper selection of the matrix and the reinforcement and by proper design, the mechanical properties of the MMSFs can be effectively tailored for specific and given applications.
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
NASA Technical Reports Server (NTRS)
Noll, Carey E.; Pearlman, Michael Reisman; Torrence, Mark H.
2013-01-01
Network stations provided system configuration documentation upon joining the ILRS. This information, found in the various site and system log files available on the ILRS website, is essential to the ILRS analysis centers, combination centers, and general user community. Therefore, it is imperative that the station personnel inform the ILRS community in a timely fashion when changes to the system occur. This poster provides some information about the various documentation that must be maintained. The ILRS network consists of over fifty global sites actively ranging to over sixty satellites as well as five lunar reflectors. Information about these stations is available on the ILRS website (http://ilrs.gsfc.nasa.gov/network/stations/index.html). The ILRS Analysis Centers must have current information about the stations and their system configuration in order to use their data in the generation of derived products. However, not all information available on the ILRS website is as up-to-date as necessary for correct analysis of their data.
Perceptually lossless fractal image compression
NASA Astrophysics Data System (ADS)
Lin, Huawu; Venetsanopoulos, Anastasios N.
1996-02-01
According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
Data compression using Chebyshev transform
NASA Technical Reports Server (NTRS)
Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)
2007-01-01
The present invention is a method, system, and computer program product for implementation of a capable, general-purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
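The approach of approximating a serial data stream by Chebyshev polynomials and keeping only the coefficients can be sketched with NumPy's Chebyshev utilities: a smooth 200-sample segment collapses to 16 coefficients with small loss. This is an illustrative sketch of the general technique, not the patented implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_compress(signal: np.ndarray, degree: int) -> np.ndarray:
    """Fit a segment with a low-degree Chebyshev polynomial and keep only the
    coefficients (sketch of the general idea, not the patented codec)."""
    t = np.linspace(-1.0, 1.0, len(signal))   # map samples onto the Chebyshev domain
    return C.chebfit(t, signal, degree)

def chebyshev_decompress(coeffs: np.ndarray, n: int) -> np.ndarray:
    t = np.linspace(-1.0, 1.0, n)
    return C.chebval(t, coeffs)

t = np.linspace(-1, 1, 200)
signal = np.sin(2 * np.pi * t) + 0.1 * t      # smooth time-series segment
coeffs = chebyshev_compress(signal, degree=15)   # 200 samples -> 16 coefficients
recon = chebyshev_decompress(coeffs, 200)
assert np.max(np.abs(recon - signal)) < 1e-3
```

Because Chebyshev coefficients of smooth signals decay rapidly, truncating the series gives a near-optimal trade of fidelity for size; rougher data would be split into shorter segments first.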
Compression fractures detection on CT
NASA Astrophysics Data System (ADS)
Bar, Amir; Wolf, Lior; Bergman Amitai, Orna; Toledano, Eyal; Elnekave, Eldad
2017-03-01
The presence of a vertebral compression fracture is highly indicative of osteoporosis and represents the single most robust predictor for development of a second osteoporotic fracture in the spine or elsewhere. Less than one third of vertebral compression fractures are diagnosed clinically. We present an automated method for detecting spine compression fractures in Computed Tomography (CT) scans. The algorithm is composed of three processes. First, the spinal column is segmented and sagittal patches are extracted. The patches are then binary classified using a Convolutional Neural Network (CNN). Finally a Recurrent Neural Network (RNN) is utilized to predict whether a vertebral fracture is present in the series of patches.
Task-oriented lossy compression of magnetic resonance images
NASA Astrophysics Data System (ADS)
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
Intelligent Virtual Station (IVS)
NASA Technical Reports Server (NTRS)
2002-01-01
The Intelligent Virtual Station (IVS) is enabling the integration of design, training, and operations capabilities into an intelligent virtual station for the International Space Station (ISS). A viewgraph of the IVS Remote Server is presented.
46 CFR 197.338 - Compressed gas cylinders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... STANDARDS GENERAL PROVISIONS Commercial Diving Operations Equipment § 197.338 Compressed gas cylinders. Each compressed gas cylinder must— (a) Be stored in a ventilated area; (b) Be protected from excessive heat; (c... 46 Shipping 7 2010-10-01 2010-10-01 false Compressed gas cylinders. 197.338 Section 197.338...
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to quickly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
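The run-length-limited encoding used for the quality score stream can be sketched as follows: consecutive identical scores collapse into (symbol, run-length) pairs, with runs capped so each length fits a fixed-width field. This is a minimal illustration of the idea, not LW-FQZip's actual encoder.

```python
def rle_encode(qual: str, max_run: int = 255):
    """Run-length-limited encoding of a quality string: (char, length) pairs,
    runs capped at max_run so lengths fit one byte (illustrative sketch)."""
    runs, i = [], 0
    while i < len(qual):
        j = i
        while j < len(qual) and qual[j] == qual[i] and j - i < max_run:
            j += 1
        runs.append((qual[i], j - i))
        i = j
    return runs

def rle_decode(runs) -> str:
    return "".join(ch * n for ch, n in runs)

q = "IIIIIIIFFFF###"                       # FASTQ-style Phred quality symbols
runs = rle_encode(q)
assert runs == [("I", 7), ("F", 4), ("#", 3)]
assert rle_decode(runs) == q               # lossless round trip
```

Quality strings from real runs tend to have long plateaus, which is why this stream compresses so well before the final LZMA pass.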
Compression failure of composite laminates
NASA Technical Reports Server (NTRS)
Pipes, R. B.
1983-01-01
This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept that has come to the forefront due to the seminal work of Candès [5]. Since then, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also recently been applied to electron tomography [6] and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
Compressed Air/Vacuum Transportation Techniques
NASA Astrophysics Data System (ADS)
Guha, Shyamal
2011-03-01
A general theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a "c-shaped" plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In the type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on a rail track). The proposed transportation system has the following merits: it is virtually accident-free, highly energy efficient, and pollution-free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.
Compression selective solid-state chemistry
NASA Astrophysics Data System (ADS)
Hu, Anguang
Compression-selective solid-state chemistry refers to mechanically induced selective reactions of solids under thermomechanical extreme conditions. Advanced quantum solid-state chemistry simulations, based on density functional theory with localized basis functions, were performed to provide remarkable insight into the bonding pathways of high-pressure chemical reactions, in agreement with experiments. These pathways clearly demonstrate reaction mechanisms in unprecedented structural detail, showing not only the chemical identity of reactive intermediates but also how atoms move along the reaction coordinate associated with a specific vibrational mode, directed by the induced chemical stress that occurs during bond breaking and forming. This indicates that chemical bonds in solids can be broken and formed precisely under compression, as we wish. It can be realized through strong coupling of mechanical work to an initiation vibrational mode while all other modes are suppressed under compression, allowing ultrafast reactions to take place isothermally in a few femtoseconds. Thermodynamically, such reactions correspond to an entropy-minimum process on an isotherm where compression can force the thermal expansion coefficient to zero. By combining a significantly brief reaction process with specific mode selectivity, both statistical laws and the quantum uncertainty principle can be bypassed to break chemical bonds precisely, establishing the fundamental principles of compression-selective solid-state chemistry. Naturally, this leads to an understanding of the "alchemy" needed to purify, grow, and perfect certain materials, such as emerging novel disruptive energetic materials.
Space Station crew workload - Station operations and customer accommodations
NASA Technical Reports Server (NTRS)
Shinkle, G. L.
1985-01-01
The features of the Space Station which permit crew members to utilize work time for payload operations are discussed. The user orientation, modular design, nonstressful flight regime, in space construction, on board control, automation and robotics, and maintenance and servicing of the Space Station are examined. The proposed crew size, skills, and functions as station operator and mission specialists are described. Mission objectives and crew functions, which include performing material processing, life science and astronomy experiments, satellite and payload equipment servicing, systems monitoring and control, maintenance and repair, Orbital Maneuvering Vehicle and Mobile Remote Manipulator System operations, on board planning, housekeeping, and health maintenance and recreation, are studied.
Compression failure mechanisms of composite structures
NASA Technical Reports Server (NTRS)
Hahn, H. T.; Sohi, M.; Moon, S.
1986-01-01
An experimental and analytical study was conducted to delineate the compression failure mechanisms of composite structures. The present report summarizes further results on kink band formation in unidirectional composites. In order to assess the compressive strengths and failure modes of the fibers themselves, a fiber bundle was embedded in an epoxy casting and tested in compression. A total of six different fibers were used together with two resins of different stiffnesses. The failure of highly anisotropic fibers such as Kevlar 49 and P-75 graphite was due to kinking of fibrils. However, the remaining fibers--T300 and T700 graphite, E-glass, and alumina--failed by localized microbuckling. The compressive strengths of the latter group of fibers were not fully utilized in their respective composites. In addition, acoustic emission monitoring revealed that fiber-matrix debonding did not occur gradually but suddenly at final failure. Kink band formation in unidirectional composites under compression was studied analytically and through microscopy. The material combinations selected include seven graphite/epoxy composites, two graphite/thermoplastic-resin composites, one Kevlar 49/epoxy composite, and one S-glass/epoxy composite.
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
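The recommendation's two stages (a 2-D discrete wavelet transform followed by progressive bit-plane coding of the coefficients) can be sketched with a single-level Haar transform standing in for the CCSDS 9/7 wavelet; truncating the emitted planes trades fidelity for data volume, which is how a user controls rate. The code below is a simplified illustration, not the standard's algorithm.

```python
import numpy as np

def haar2d(img: np.ndarray) -> np.ndarray:
    """One level of a 2-D Haar transform (simplified stand-in for the
    CCSDS 9/7 DWT): averages and details along rows, then along columns."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2
    d = (img[:, 0::2] - img[:, 1::2]) / 2
    rows = np.hstack([a, d])
    a2 = (rows[0::2, :] + rows[1::2, :]) / 2
    d2 = (rows[0::2, :] - rows[1::2, :]) / 2
    return np.vstack([a2, d2])

def bit_planes(coeffs: np.ndarray, n_planes: int):
    """Emit coefficient magnitude bits from the most significant plane down,
    so truncating the stream yields a coarser but usable reconstruction."""
    mags = np.abs(coeffs).astype(np.int64)
    return [(mags >> b) & 1 for b in range(n_planes - 1, -1, -1)]

img = np.arange(16, dtype=float).reshape(4, 4)
coeffs = haar2d(img)
planes = bit_planes(coeffs, n_planes=4)
assert len(planes) == 4 and planes[0].shape == (4, 4)
```

The real recommendation applies three transform levels, encodes sign and refinement bits per plane, and entropy-codes the result; sending planes most-significant-first is what makes the stream progressive.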
Wave energy devices with compressible volumes.
Kurniawan, Adi; Greaves, Deborah; Chaplin, John
2014-12-08
We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m3 and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.
Fast lossless compression via cascading Bloom filters
2014-01-01
Background: Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results: We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions: Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time
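The alignment-free encoding at the heart of BARCODE can be sketched directly: hash every read into a Bloom filter (the filter itself is the compressed payload), then decode by sliding read-length windows over the reference and querying membership. Bloom filters never produce false negatives, so every true read is recovered; the cascade described above exists to weed out occasional false positives. This is a toy sketch with error-free reads, not the published implementation.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter: m bits, k hash functions derived from SHA-256."""
    def __init__(self, m: int, k: int):
        self.bits, self.m, self.k = bytearray(m // 8 + 1), m, k
    def _idx(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item: str):
        for p in self._idx(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._idx(item))

# Encode: hash every read into the filter (the filter is the compressed payload).
reference = "ACGTTGCAACGTAGCTAGCTAACG"
reads = [reference[i:i + 8] for i in (0, 5, 12)]   # toy error-free reads
bf = Bloom(m=1 << 16, k=4)
for r in reads:
    bf.add(r)

# Decode: slide read-length windows over the reference and query the filter.
recovered = {reference[i:i + 8] for i in range(len(reference) - 7)
             if reference[i:i + 8] in bf}
assert set(reads) <= recovered   # no false negatives; rare false positives possible
```

Decoding costs one pass over the reference instead of hours of alignment, which is where the order-of-magnitude speedup comes from.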
Fast lossless compression via cascading Bloom filters.
Rozov, Roye; Shamir, Ron; Halperin, Eran
2014-01-01
Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
A zero-error operational video data compression system
NASA Technical Reports Server (NTRS)
Kutz, R. L.
1973-01-01
A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.
Internal combustion engine with compressed air collection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P.W.
1988-08-23
This patent describes an internal combustion engine comprising cylinders respectively including a pressure port, pistons respectively movable in the cylinders through respective compression strokes, fuel injectors respectively connected to the cylinders and operative to supply, from a fuel source to the respective cylinders, a metered quantity of fuel conveyed by compressed gas in response to fuel injector operation during the compression strokes of the respective cylinders, a storage tank for accumulating and storing compressed gas, means for selectively connecting the pressure ports to the storage tank only during the compression strokes of the respective cylinders, and duct means connecting the storage tank to the fuel injectors for supplying the fuel injectors with compressed gas in response to fuel injector operation.
Evaluation of NDI compressed air foam system (cafs) applied as a retrofit. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, S.
1994-08-01
Army Engineer Firefighting Detachments require increased firefighting capability to compensate for deficiencies in fighting structural, brush or wildland, and large petroleum storage site fires. Additionally, Army fire departments responsible for protection and prevention on posts, camps, and stations have difficulty accessing new or emerging technology and do not possess state-of-the-art equipment. The results of this evaluation, and of subsequent projects, will be reported throughout the Army in an attempt to mitigate operational deficiencies and widen the scope of knowledge in the Army fire service. The evaluation of the non-developmental retrofitted compressed air foam system showed that its suppressive efficiency exceeds that of water alone. Retrofitting the equipment was neither easy nor inexpensive, but it was very successful.
Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.
2011-01-01
Over the past three decades we have steadily increased our knowledge of the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high-throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
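The group-testing side of the scheme can be illustrated with a minimal pooling sketch. The pool counts and carrier set below are hypothetical, and the decoder shown is the simple COMP (combinatorial orthogonal matching pursuit) rule, not the paper's actual protocol:

```python
import random

random.seed(7)
num_samples, num_pools, pools_per_sample = 60, 12, 4

# Pooling design: each sample is mixed into a few random pools.
membership = {s: set(random.sample(range(num_pools), pools_per_sample))
              for s in range(num_samples)}

carriers = {5, 23}  # rare carriers of a variant: the sparse signal to recover
positive_pools = {p for s in carriers for p in membership[s]}

# COMP decoding: any sample appearing in a negative pool cannot be a carrier.
candidates = {s for s in range(num_samples)
              if membership[s] <= positive_pools}
assert carriers <= candidates  # decoding never clears a true carrier
```

With enough pools per sample the candidate set shrinks toward the true carriers; the paper's framework additionally has to cope with sequencing noise and diploid genotypes, which this sketch ignores.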
Pressure Oscillations in Adiabatic Compression
ERIC Educational Resources Information Center
Stout, Roland
2011-01-01
After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…
NASA Technical Reports Server (NTRS)
Elden, N. C.; Winkler, H. E.; Price, D. F.; Reysa, R. P.
1983-01-01
Water recovery subsystems are being tested at the NASA Lyndon B. Johnson Space Center for Space Station use to process waste water generated from urine and wash water collection facilities. These subsystems are being integrated into a water management system that will incorporate wash water and urine processing through the use of hyperfiltration and vapor compression distillation subsystems. Other hardware in the water management system includes a whole body shower, a clothes washing facility, a urine collection and pretreatment unit, a recovered water post-treatment system, and a water quality monitor. This paper describes the integrated test configuration, pertinent performance data, and feasibility and design compatibility conclusions of the integrated water management system.
1989-08-01
In response to President Reagan's directive to NASA to develop a permanent manned Space Station within a decade, part of the State of the Union message to Congress on January 25, 1984, NASA and the Administration adopted a phased approach to Station development. This approach provided an initial capability at reduced costs, to be followed by an enhanced Space Station capability in the future. This illustration depicts the baseline configuration, which features a 110-meter-long horizontal boom with four pressurized modules attached in the middle. Located at each end are four photovoltaic arrays generating a total of 75-kW of power. Two attachment points for external payloads are provided along this boom. The four pressurized modules include the following: A laboratory and habitation module provided by the United States; two additional laboratories, one each provided by the European Space Agency (ESA) and Japan; and an ESA-provided Man-Tended Free Flyer, a pressurized module capable of operations both attached to and separate from the Space Station core. Canada was expected to provide the first increment of a Mobile Servicing System.
Optimal wavelets for biomedical signal compression.
Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario
2006-07-01
Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
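The threshold-and-reconstruct core of wavelet compression can be sketched with a fixed Haar wavelet. This is a plain stdlib illustration; the paper optimizes the mother wavelet per signal and uses embedded zerotree coding, neither of which is shown here:

```python
import math

def haar_forward(x):
    # Full multilevel orthonormal Haar DWT; len(x) must be a power of two.
    out = list(x)
    n = len(out)
    while n > 1:
        avg = [(out[2*i] + out[2*i+1]) / math.sqrt(2) for i in range(n // 2)]
        det = [(out[2*i] - out[2*i+1]) / math.sqrt(2) for i in range(n // 2)]
        out[:n] = avg + det
        n //= 2
    return out

def haar_inverse(c):
    out = list(c)
    n = 1
    while n < len(out):
        avg, det = out[:n], out[n:2*n]
        merged = []
        for a, d in zip(avg, det):
            merged += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
        out[:2*n] = merged
        n *= 2
    return out

signal = [math.sin(2 * math.pi * i / 32) for i in range(64)]
coeffs = haar_forward(signal)

# "Compress" by zeroing the smaller half of the coefficients (50% compression rate).
threshold = sorted(map(abs, coeffs))[len(coeffs) // 2]
kept = [c if abs(c) >= threshold else 0.0 for c in coeffs]
recon = haar_inverse(kept)

# Relative RMS distortion of the reconstruction.
distortion = math.sqrt(sum((a - b) ** 2 for a, b in zip(signal, recon))
                       / sum(a ** 2 for a in signal))
assert distortion < 0.2  # half the coefficients carry most of the energy
```

The distortion figure is exactly the kind of quantity the paper minimizes when it searches over wavelet parameters; here the wavelet is simply fixed.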
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into block that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
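The transform-quantize round trip the abstract describes can be sketched on a single 8×8 block. A single uniform quantization step is assumed here for brevity; JPEG itself uses a per-frequency quantization table:

```python
import math

N = 8

def dct_1d(v):
    # Orthonormal DCT-II of length N.
    out = []
    for k in range(N):
        s = sum(v[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)))
    return out

def idct_1d(c):
    # Inverse transform (DCT-III with matching scales).
    return [c[0] / math.sqrt(N)
            + sum(c[k] * math.sqrt(2 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                  for k in range(1, N))
            for n in range(N)]

def transform_2d(block, f):
    # Apply the 1D transform along rows, then along columns.
    rows = [f(row) for row in block]
    cols = [f([rows[i][j] for i in range(N)]) for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]

# A smooth 8x8 block: its energy concentrates in low-frequency coefficients.
block = [[128 + 40 * math.cos(math.pi * (i + j) / 14) for j in range(N)]
         for i in range(N)]

step = 16  # uniform quantization step
coeffs = transform_2d(block, dct_1d)
quantized = [[round(c / step) for c in row] for row in coeffs]  # the lossy step
dequant = [[q * step for q in row] for row in quantized]
recon = transform_2d(dequant, idct_1d)

zeros = sum(q == 0 for row in quantized for q in row)
rms = math.sqrt(sum((block[i][j] - recon[i][j]) ** 2
                    for i in range(N) for j in range(N)) / N ** 2)
assert zeros > 32       # most coefficients vanish, lowering the data entropy
assert rms <= step / 2  # by Parseval, RMS error is bounded by half the step
```

The run of zeros in `quantized` is what makes the subsequent entropy coding effective; the quantization step controls the trade-off between that sparsity and the reconstruction error.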
Station Set Residual: Event Classification Using Historical Distribution of Observing Stations
NASA Astrophysics Data System (ADS)
Procopio, Mike; Lewis, Jennifer; Young, Chris
2010-05-01
Analysts working at the International Data Centre in support of treaty monitoring through the Comprehensive Nuclear-Test-Ban Treaty Organization spend a significant amount of time reviewing hypothesized seismic events produced by an automatic processing system. When reviewing these events to determine their legitimacy, analysts take a variety of approaches that rely heavily on training and past experience. One method used by analysts to gauge the validity of an event involves examining the set of stations involved in the detection of an event. In particular, leveraging past experience, an analyst can say that an event located in a certain part of the world is expected to be detected by Stations A, B, and C. Implicit in this statement is that such an event would usually not be detected by Stations X, Y, or Z. For some well understood parts of the world, the absence of one or more "expected" stations—or the presence of one or more "unexpected" stations—is correlated with a hypothesized event's legitimacy and to its survival to the event bulletin. The primary objective of this research is to formalize and quantify the difference between the observed set of stations detecting some hypothesized event, versus the expected set of stations historically associated with detecting similar nearby events close in magnitude. This Station Set Residual can be quantified in many ways, some of which are correlated with the analysts' determination of whether or not the event is valid. We propose that this Station Set Residual score can be used to screen out certain classes of "false" events produced by automatic processing with a high degree of confidence, reducing the analyst burden. Moreover, we propose that the visualization of the historically expected distribution of detecting stations can be immediately useful as an analyst aid during their review process.
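One simple way to quantify such a residual is a direct set comparison between the observed and historically expected stations. The station names and the Jaccard-based score below are illustrative assumptions, not the authors' formulation:

```python
# Historically detecting stations for events in this region (hypothetical).
expected = {"A", "B", "C"}
# Stations that detected the hypothesized event under review.
observed = {"A", "B", "X"}

missing = expected - observed       # expected but absent     -> {"C"}
unexpected = observed - expected    # present but unexpected  -> {"X"}

# One candidate residual: 0 for a perfect match, 1 for disjoint sets.
jaccard = len(expected & observed) / len(expected | observed)
residual = 1.0 - jaccard

assert missing == {"C"} and unexpected == {"X"}
assert round(residual, 2) == 0.5
```

A score like this can be thresholded to screen out classes of "false" automatic events before they ever reach an analyst, which is the screening use the abstract proposes.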
Digital Data Registration and Differencing Compression System
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1996-01-01
A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1992-01-01
A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three dimensional model, which three dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
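The core benefit of differencing before compression is easy to demonstrate: a registered difference image is mostly zeros and compresses far better than the subject image itself. Synthetic byte arrays stand in for the modeled images here:

```python
import random
import zlib

random.seed(0)
# A hypothetical "reference image" and a "subject image" that differs only
# sparsely, standing in for the registered/modeled images of the patent.
reference = bytes(random.randrange(256) for _ in range(4096))
subject = bytearray(reference)
for i in range(0, 4096, 97):          # sparse differences after registration
    subject[i] = (subject[i] + 3) % 256

# The differenced image: almost entirely zero bytes.
difference = bytes((s - r) % 256 for s, r in zip(subject, reference))

direct = len(zlib.compress(bytes(subject)))
differenced = len(zlib.compress(difference))
assert differenced < direct           # a mostly-zero residual compresses far better

# Reconstruction at the receiver: add the difference back onto the shared reference.
restored = bytes((r + d) % 256 for r, d in zip(reference, difference))
assert restored == bytes(subject)
```

With real images, the registration and 3-D modeling steps of the patent are what make the difference sparse; without them the residual would be as incompressible as the subject image.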
Ultrahigh Pressure Dynamic Compression
NASA Astrophysics Data System (ADS)
Duffy, T. S.
2017-12-01
Laser-based dynamic compression provides a new opportunity to study the lattice structure and other properties of geological materials to ultrahigh pressure conditions ranging from 100 - 1000 GPa (1 TPa) and beyond. Such studies have fundamental applications to understanding the Earth's core as well as the interior structure of super-Earths and giant planets. This talk will review recent dynamic compression experiments using high-powered lasers on materials including Fe-Si, MgO, and SiC. Experiments were conducted at the Omega laser (University of Rochester) and the Linac Coherent Light Source (LCLS, Stanford). At Omega, laser drives as large as 2 kJ are applied over 10 ns to samples that are 50 microns thick. At peak compression, the sample is probed with quasi-monochromatic X-rays from a laser-plasma source and diffraction is recorded on image plates. At LCLS, shock waves are driven into the sample using a 40-J laser with a 10-ns pulse. The sample is probed with X-rays from the LCLS free electron laser providing 10^12 photons in a monochromatic pulse near 10 keV energy. Diffraction is recorded using pixel array detectors. By varying the delay between the laser and the x-ray beam, the sample can be probed at various times relative to the shock wave transiting the sample. By controlling the shape and duration of the incident laser pulse, either shock or ramp (shockless) loading can be produced. Ramp compression produces less heating than shock compression, allowing samples to be probed to ultrahigh pressures without melting. Results for iron alloys, oxides, and carbides provide new constraints on equations of state and phase transitions that are relevant to the interior structure of large, extrasolar terrestrial-type planets.
Leadership at Antarctic Stations.
1987-03-01
expeditioners, and amongst OICs themselves. Leadership in Antarctica stirs images associated with names such as Scott, Shackleton and Mawson, of men... operates three Antarctic stations - Casey, Davis, and Mawson - and one sub-Antarctic station - Macquarie Island. Station populations vary, but are
Texture Studies and Compression Behaviour of Apple Flesh
NASA Astrophysics Data System (ADS)
James, Bryony; Fonseca, Celia
Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types, hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose-designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.
2014-01-15
in a Light Duty Engine Under Conventional Diesel, Homogeneous Charge Compression Ignition, and Reactivity Controlled Compression Ignition... Conventional Diesel (CDC), Homogeneous Charge Compression Ignition (HCCI), and Reactivity Controlled Compression Ignition (RCCI) combustion... low temperature combustion (LTC) regimes, including reactivity controlled compression ignition (RCCI), partially premixed combustion (PPC), and homogeneous charge compression
Compression failure of angle-ply laminates
NASA Technical Reports Server (NTRS)
Peel, Larry D.; Hyer, Michael W.; Shuart, Mark J.
1991-01-01
The present work deals with modes and mechanisms of failure in compression of angle-ply laminates. Experimental results were obtained from 42 angle-ply IM7/8551-7a specimens with a lay-up of ((plus or minus theta)/(plus or minus theta)) sub 6s where theta, the off-axis angle, ranged from 0 degrees to 90 degrees. The results showed four failure modes, these modes being a function of off-axis angle. Failure modes include fiber compression, inplane transverse tension, inplane shear, and inplane transverse compression. Excessive interlaminar shear strain was also considered as an important mode of failure. At low off-axis angles, experimentally observed values were considerably lower than published strengths. It was determined that laminate imperfections in the form of layer waviness could be a major factor in reducing compression strength. Previously developed linear buckling and geometrically nonlinear theories were used, with modifications and enhancements, to examine the influence of layer waviness on compression response. The wavy layer is described by a wave amplitude and a wave length. Linear elastic stress-strain response is assumed. The geometrically nonlinear theory, in conjunction with the maximum stress failure criterion, was used to predict compression failure and failure modes for the angle-ply laminates. A range of wave lengths and amplitudes was used. It was found that for 0 less than or equal to theta less than or equal to 15 degrees failure was most likely due to fiber compression. For 15 degrees less than theta less than or equal to 35 degrees, failure was most likely due to inplane transverse tension. For 35 degrees less than theta less than or equal to 70 degrees, failure was most likely due to inplane shear. For theta greater than 70 degrees, failure was most likely due to inplane transverse compression. The fiber compression and transverse tension failure modes depended more heavily on wave length than on wave amplitude. Thus using a single
NASA Technical Reports Server (NTRS)
Cohen, Marc M. (Editor); Eichold, Alice (Editor); Heers, Susan (Editor)
1987-01-01
Articles are presented on a space station architectural elements model study, space station group activities habitability module study, full-scale architectural simulation techniques for space stations, and social factors in space station interiors.
Outer planet Pioneer imaging communications system study. [data compression
NASA Technical Reports Server (NTRS)
1974-01-01
The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.
NASA Astrophysics Data System (ADS)
Hernandez, C.
2010-09-01
The weakness of small island electrical grids is a handicap for electricity generation from renewable energy sources. To maximize the installation of photovoltaic generators in the Canary Islands, a solar forecasting system is needed that indicates in advance how much PV-generated electricity the installed PV power plants on each island will feed into the grid. The forecasting tools need real-time feedback of weather data from remote weather stations. However, transferring these data to the computation servers is very complicated with the old point-to-point telecommunication systems, which allow neither simultaneous data transfer from several remote weather stations nor high-frequency sampling of weather parameters, owing to the slowness of the connection. This project has developed a telecommunications infrastructure that allows sensor-equipped remote stations to send their readings, once every minute and simultaneously, to the computation server running the solar forecasting numerical models. To this end, the Canary Islands Institute of Technology has added a sophisticated communications network to its 30 weather stations measuring irradiation at strategic sites: areas with high penetration of photovoltaic generation or with the potential to host grid-connected photovoltaic power plants in the future. Each station was fitted with instruments measuring irradiance on an inclined silicon cell, global radiation on a horizontal surface, and ambient temperature. Mobile telephone devices were installed and programmed at each weather station to transfer the data over the UMTS service offered by the local telephone operator. Every minute the computer server running the numerical weather forecasting models receives data inputs from 120 instruments distributed
Fuels for high-compression engines
NASA Technical Reports Server (NTRS)
Sparrow, Stanwood W
1926-01-01
From theoretical considerations one would expect an increase in power and thermal efficiency to result from increasing the compression ratio of an internal combustion engine. In reality it is upon the expansion ratio that the power and thermal efficiency depend, but since in conventional engines this is equal to the compression ratio, it is generally understood that a change in one ratio is accompanied by an equal change in the other. Tests over a wide range of compression ratios (extending to ratios as high as 14.1) have shown that ordinarily an increase in power and thermal efficiency is obtained as expected provided serious detonation or preignition does not result from the increase in ratio.
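The theoretical expectation mentioned above is the air-standard Otto cycle result, in which ideal thermal efficiency grows with the expansion (equal to the compression) ratio r as eta = 1 - r^(1 - gamma):

```python
def otto_efficiency(r, gamma=1.4):
    # Ideal (air-standard) Otto cycle thermal efficiency.
    # r is the compression (= expansion) ratio; gamma = 1.4 for air.
    return 1.0 - r ** (1.0 - gamma)

eff_low, eff_high = otto_efficiency(5.0), otto_efficiency(14.0)
assert eff_high > eff_low  # raising the ratio raises ideal efficiency
assert abs(otto_efficiency(8.0) - 0.5647) < 1e-3
```

Real engines fall short of this ideal, and the abstract's caveat about detonation and preignition is exactly what limits how far the ratio can usefully be raised in practice.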
Rectal perforation by compressed air.
Park, Young Jin
2017-07-01
As the use of compressed air in industrial work has increased, so has the risk of pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. A colleague had triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable, but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.
NASA Technical Reports Server (NTRS)
Hodge, Andrew J.; Nettles, Alan T.; Jackson, Justin R.
2011-01-01
Notched (open hole) composite laminates were tested in compression. The effect on strength of various sizes of through holes was examined. Results were compared to the average stress criterion model. Additionally, laminated sandwich structures were damaged from low-velocity impact with various impact energy levels and different impactor geometries. The compression strength relative to damage size was compared to the notched compression result strength. Open-hole compression strength was found to provide a reasonable bound on compression after impact.
NASA Technical Reports Server (NTRS)
Neiner, G. H.; Cole, G. L.; Arpasi, D. J.
1972-01-01
Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.
Stokes Profile Compression Applied to VSM Data
NASA Astrophysics Data System (ADS)
Toussaint, W. A.; Henney, C. J.; Harvey, J. W.
2012-02-01
The practical details of applying the Expansion in Hermite Functions (EHF) method to compression of full-disk full-Stokes solar spectroscopic data from the SOLIS/VSM instrument are discussed in this paper. The algorithm developed and discussed here preserves the 630.15 and 630.25 nm Fe i lines, along with the local continuum and telluric lines. This compression greatly reduces the amount of space required to store these data sets while maintaining the quality of the data, allowing these observations to be archived and made publicly available with limited bandwidth. Applying EHF to the full-Stokes profiles and saving the coefficient files with Rice compression reduces the disk space required to store these observations by a factor of 20, while maintaining the quality of the data, with a total compression time only 35% longer than the standard gzip (GNU zip) compression.
The Capabilities of Space Stations
NASA Technical Reports Server (NTRS)
1995-01-01
Over the past two years the U.S. space station program has evolved to a three-phased international program, with the first phase consisting of the use of the U.S. Space Shuttle and the upgrading and use of the Russian Mir Space Station, and the second and third phases consisting of the assembly and use of the new International Space Station. Projected capabilities for research, and plans for utilization, have also evolved and it has been difficult for those not directly involved in the design and engineering of these space stations to learn and understand their technical details. The Committee on the Space Station of the National Research Council, with the concurrence of NASA, undertook to write this short report in order to provide concise and objective information on space stations and platforms -- with emphasis on the Mir Space Station and International Space Station -- and to supply a summary of the capabilities of previous, existing, and planned space stations. In keeping with the committee charter and with the task statement for this report, the committee has summarized the research capabilities of five major space platforms: the International Space Station, the Mir Space Station, the Space Shuttle (with a Spacelab or Spacehab module in its cargo bay), the Space Station Freedom (which was redesigned to become the International Space Station in 1993 and 1994), and Skylab. By providing the summary, together with brief descriptions of the platforms, the committee hopes to assist interested readers, including scientists and engineers, government officials, and the general public, in evaluating the utility of each system to meet perceived user needs.
Logarithmic compression methods for spectral data
Dunham, Mark E.
2003-01-01
A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
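The transmit-above-threshold scheme can be sketched with a plain DFT standing in for the log Gabor transform. This is an illustrative simplification of the patent's method: the signal, threshold, and sizes are made up, and only the compress/expand round trip is shown:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(c):
    n = len(c)
    return [sum(c[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

# A sparse-spectrum test signal: two tones.
n = 64
x = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 7 * t / n)
     for t in range(n)]

spectrum = dft(x)
# "Compress": keep (log-magnitude, phase) only for bins above a magnitude threshold.
threshold = 1.0
kept = {k: (math.log(abs(c)), cmath.phase(c))
        for k, c in enumerate(spectrum) if abs(c) > threshold}

# "Expand": rebuild the spectrum from the transmitted pairs; other bins are zero.
rebuilt = [cmath.rect(math.exp(kept[k][0]), kept[k][1]) if k in kept else 0j
           for k in range(n)]
recon = idft(rebuilt)

assert len(kept) == 4  # two tones -> four conjugate-symmetric bins survive
assert max(abs(a - b) for a, b in zip(x, recon)) < 1e-6
```

The `kept` dictionary is the compressed representation; for sparse-spectrum signals almost nothing is lost, while thresholding is what makes the scheme lossy in general.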
Machine compliance in compression tests
NASA Astrophysics Data System (ADS)
Sousa, Pedro; Ivens, Jan; Lomov, Stepan V.
2018-05-01
The compression behavior of a material cannot be accurately determined unless the machine compliance is accounted for prior to the measurements. This work discusses machine compliance during compressibility tests on fiberglass fabrics. The thickness variation was measured during loading and unloading cycles separated by a 30-minute relaxation stage. The measurements used an indirect technique based on comparing the displacement in a free compression cycle (no sample) with the displacement with a sample present. In the free test, no machine relaxation was observed during the relaxation stage. Whether relaxation is considered or not, the characteristic curves for a free compression cycle overlap precisely at the majority of points. In the compression test with a sample, a non-physical thickness decrease of about 30 µm was observed during the relaxation stage, which can be explained by the fabric relaxing more than the machine. In addition to the technique normally used, a second technique was applied that holds the thickness constant during relaxation: the machine displacement with a sample is simply reduced by the machine displacement without a sample, imposed as a constant. If imposed as a constant, the correction remains constant during the relaxation stage and decreases suddenly after relaxation; if recalculated continuously, it decreases gradually during the relaxation stage. Whichever technique is used, the final result is unchanged. The uncertainty introduced by this imprecision is about ±15 µm.
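The indirect correction amounts to subtracting the machine's own load-displacement curve from the measurement taken with a sample. All numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Load steps (N) and crosshead displacements (mm) at those loads.
loads = [0.0, 50.0, 100.0, 150.0, 200.0]
disp_free = [0.000, 0.012, 0.023, 0.033, 0.042]    # machine alone (compliance)
disp_sample = [0.000, 0.310, 0.455, 0.540, 0.600]  # machine + fabric stack

# Corrected sample deformation: subtract the machine's own deflection
# at the same load from the measured crosshead displacement.
sample_only = [s - f for s, f in zip(disp_sample, disp_free)]

assert all(c <= s for c, s in zip(sample_only, disp_sample))
assert abs(sample_only[-1] - 0.558) < 1e-9  # 0.600 - 0.042 mm at 200 N
```

Without this subtraction the fabric would appear stiffer to compress than it is, since part of the recorded displacement belongs to the machine frame.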
Cardiovascular causes of airway compression.
Kussman, Barry D; Geva, Tal; McGowan, Francis X
2004-01-01
Compression of the paediatric airway is a relatively common and often unrecognized complication of congenital cardiac and aortic arch anomalies. Airway obstruction may be the result of an anomalous relationship between the tracheobronchial tree and vascular structures (producing a vascular ring) or the result of extrinsic compression caused by dilated pulmonary arteries, left atrial enlargement, massive cardiomegaly, or intraluminal bronchial obstruction. A high index of suspicion of mechanical airway compression should be maintained in infants and children with recurrent respiratory difficulties, stridor, wheezing, dysphagia, or apnoea unexplained by other causes. Prompt diagnosis is required to avoid death and minimize airway damage. In addition to plain chest radiography and echocardiography, diagnostic investigations may consist of barium oesophagography, magnetic resonance imaging (MRI), computed tomography, cardiac catheterization and bronchoscopy. The most important recent advance is MRI, which can produce high quality three-dimensional reconstruction of all anatomic elements allowing for precise anatomic delineation and improved surgical planning. Anaesthetic technique will depend on the type of vascular ring and the presence of any congenital heart disease or intrinsic lesions of the tracheobronchial tree. Vascular rings may be repaired through a conventional posterolateral thoracotomy, or utilizing video-assisted thoracoscopic surgery (VATS) or robotic endoscopic surgery. Persistent airway obstruction following surgical repair may be due to residual compression, secondary airway wall instability (malacia), or intrinsic lesions of the airway. Simultaneous repair of cardiac defects and vascular tracheobronchial compression carries a higher risk of morbidity and mortality.
H.264/AVC Video Compression on Smartphones
NASA Astrophysics Data System (ADS)
Sharabayko, M. P.; Markov, N. G.
2017-01-01
In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of the tools is used, meaning there is still potential to achieve higher compression efficiency within the H.264/AVC standard, although the most advanced smartphones are already approaching the compression efficiency limit of H.264/AVC.
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
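The second of these approaches, decorrelating the spectral dimension before coding each band image, can be sketched in pure Python. This is a toy illustration with hypothetical data; a one-level Haar transform stands in for the discrete wavelet transform used in the study:

```python
def haar_1d(x):
    """One-level Haar transform: pairwise averages, then differences."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    diff = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + diff

def inverse_haar_1d(c):
    half = len(c) // 2
    out = []
    for a, d in zip(c[:half], c[half:]):
        out.extend([a + d, a - d])
    return out

# Toy "hyperspectral cube": 2x2 spatial cells, 4 contiguous spectral bands.
# Adjacent bands are highly correlated, as in real hyperspectral data.
cube = [[[10, 11, 30, 31], [20, 21, 40, 41]],
        [[15, 16, 35, 36], [25, 26, 45, 46]]]

# Transform along the spectral axis only: the difference (detail)
# coefficients come out small, which is what makes the data compressible.
transformed = [[haar_1d(spectrum) for spectrum in row] for row in cube]
```

After this step, each transformed "band" can be handed to an ordinary 2-D image coder, which is exactly the second approach's structure.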
Lossless medical image compression with a hybrid coder
NASA Astrophysics Data System (ADS)
Way, Jing-Dar; Cheng, Po-Yuen
1998-10-01
The volume of medical image data is expected to increase dramatically in the next decade due to the growing use of radiological images in medical diagnosis. The economics of distributing medical images dictate that data compression is essential. Although lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the user, to avoid misdiagnosis caused by lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes are used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
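The lossy-plus-residual structure described above can be sketched minimally as follows. All names and data here are hypothetical, and simple scalar quantization stands in for the embedded wavelet coder:

```python
def lossy_quantize(pixels, step):
    """Stand-in for the lossy wavelet coder: coarse scalar quantization."""
    return [round(p / step) * step for p in pixels]

def rle_encode(data):
    """Run-length encode a list as (count, value) pairs."""
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def rle_decode(runs):
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

pixels = [100, 101, 102, 150, 151, 152, 152, 152]
coarse = lossy_quantize(pixels, step=4)
# Residual between original and lossy version: small, highly repetitive.
residual = [p - c for p, c in zip(pixels, coarse)]
encoded = rle_encode(residual)
# Lossless round trip: coarse image + decoded residual == original.
restored = [c + r for c, r in zip(coarse, rle_decode(encoded))]
```

The residual is where the losslessness is recovered: because it is small and repetitive, it run-length encodes well, which is the premise of the hybrid design.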
Space station mobile transporter
NASA Technical Reports Server (NTRS)
Renshall, James; Marks, Geoff W.; Young, Grant L.
1988-01-01
The first quarter of the next century will see an operational space station that will provide a permanently manned base for satellite servicing, multiple strategic scientific and commercial payload deployment, and Orbital Maneuvering Vehicle/Orbital Transfer Vehicle (OMV/OTV) retrieval, replenishment, and deployment. The space station, as conceived, is constructed in orbit and will be maintained in orbit. The construction, servicing, maintenance and deployment tasks, when coupled with the size of the station, dictate that some form of transportation and manipulation device be conceived. The Transporter described will work in conjunction with the Orbiter and an Assembly Work Platform (AWP) to construct the space station. The Transporter will also work in conjunction with the Mobile Remote Servicer to service and install payloads; retrieve, service and deploy satellites; and service and maintain the station itself. The Transporter is shown engaged in station construction while mounted on the AWP, and later supporting a maintenance or inspection task with the Mobile Remote Servicer and the Flight Telerobotic Servicer.
Comparison of two SVD-based color image compression schemes.
Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli
2017-01-01
Color image compression is a commonly used process for representing image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C from the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, in which the quaternion SVD is computed using the real structure-preserving algorithm, in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much shorter operation time, but slightly lower PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows prominent advantages in both operation time and PSNR.
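The real-SVD idea can be illustrated with a pure-Python toy: power iteration extracts the dominant singular triplet of a small matrix, and keeping only that triplet gives the best rank-1 approximation. The paper keeps several triplets per channel matrix; this sketch is not the structure-preserving quaternion algorithm, just the truncation principle:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def norm(v):
    return sum(x * x for x in v) ** 0.5

def dominant_singular_triplet(A, iters=100):
    """Power iteration on A^T A for the leading singular value/vectors."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(At, matvec(A, v))
        n = norm(w)
        v = [x / n for x in w]
    Av = matvec(A, v)
    sigma = norm(Av)
    u = [x / sigma for x in Av]
    return sigma, u, v

# An exactly rank-1 matrix: the rank-1 truncation reconstructs it fully.
A = [[3.0, 4.0, 5.0], [6.0, 8.0, 10.0]]
sigma, u, v = dominant_singular_triplet(A)
approx = [[sigma * ui * vj for vj in v] for ui in u]
```

Storing sigma, u and v instead of the full matrix is the compression: for an m-by-n channel with k retained triplets, k(m + n + 1) numbers replace mn.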
Tseng, Yun-Hua; Lu, Chih-Wen
2017-01-01
Compressed sensing (CS) is a promising approach to the compression and reconstruction of electrocardiogram (ECG) signals. It has been shown that following reconstruction, most of the changes between the original and reconstructed signals are distributed in the Q, R, and S waves (QRS) region. Furthermore, any increase in the compression ratio tends to increase the magnitude of the change. This paper presents a novel approach integrating the near-precise compressed (NPC) and CS algorithms. The simulation results presented notable improvements in signal-to-noise ratio (SNR) and compression ratio (CR). The efficacy of this approach was verified by fabricating a highly efficient low-cost chip using the Taiwan Semiconductor Manufacturing Company’s (TSMC) 0.18-μm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The proposed core has an operating frequency of 60 MHz and gate counts of 2.69 K. PMID:28991216
Lossless Astronomical Image Compression and the Effects of Random Noise
NASA Technical Reports Server (NTRS)
Pence, William
2009-01-01
In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
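The scaled-integer quantization idea can be sketched as follows. This is an illustration of the principle only, with hypothetical data; the actual fpack implementation differs (for example, it works tile by tile):

```python
def quantize(pixels, noise_sigma, q=4.0):
    """Choose the quantization step as a fraction (1/q) of the noise
    level, then round to integers. Bits encoding variations far below
    the noise carry no information and are discarded."""
    scale = q / noise_sigma
    return scale, [round(p * scale) for p in pixels]

def dequantize(scale, ints):
    return [i / scale for i in ints]

# Hypothetical float pixels: a smooth signal plus noise of sigma ~ 0.5,
# so digits far below 0.125 (= 0.5/4) are pure noise.
pixels = [100.1234, 100.5877, 99.9012, 101.3456, 100.7654]
scale, ints = quantize(pixels, noise_sigma=0.5)
restored = dequantize(scale, ints)
```

The maximum round-trip error is half the quantization step, well inside the noise; the resulting integer stream is then suitable for a lossless coder such as Rice or GZIP.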
Method for compression of binary data
Berlin, G.J.
1996-03-26
The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.
Method for compression of binary data
Berlin, Gary J.
1996-01-01
The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
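The flag-separation idea can be sketched with a toy LZSS-style coder in Python. The layout below (a token-count header, then tokens, then the packed flag bits) is an illustrative assumption, not the exact format claimed in the patent; the point is that the decompressor reads bytes and bit-twiddles only in the flag area:

```python
MIN_MATCH, MAX_MATCH, WINDOW = 3, 18, 255

def compress(data):
    tokens, flags = bytearray(), []
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - WINDOW), i):
            length = 0
            while (length < MAX_MATCH and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= MIN_MATCH:
            flags.append(1)                      # pointer token
            tokens += bytes([best_off, best_len])
            i += best_len
        else:
            flags.append(0)                      # literal token
            tokens.append(data[i])
            i += 1
    # Key idea: flag bits are buffered separately during compression and
    # appended after all tokens instead of being interleaved with them.
    flag_bytes = bytearray()
    for k in range(0, len(flags), 8):
        chunk, b = flags[k:k + 8], 0
        for bit in chunk:
            b = (b << 1) | bit
        flag_bytes.append(b << (8 - len(chunk)))
    return len(flags).to_bytes(4, 'big') + bytes(tokens) + bytes(flag_bytes)

def decompress(blob):
    n = int.from_bytes(blob[:4], 'big')
    nflag = (n + 7) // 8
    tokens = blob[4:len(blob) - nflag]
    flag_bytes = blob[len(blob) - nflag:]
    flags = [(b >> k) & 1 for b in flag_bytes for k in range(7, -1, -1)][:n]
    out, pos = bytearray(), 0
    for f in flags:
        if f:
            off, length = tokens[pos], tokens[pos + 1]
            pos += 2
            for _ in range(length):
                out.append(out[-off])            # overlapping copies work
        else:
            out.append(tokens[pos])
            pos += 1
    return bytes(out)
```

Because the flags sit in one contiguous tail block, the token loop reads literals and pointers with plain byte indexing, which mirrors the patent's rationale for faster decompression.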
Breaking of rod-shaped model material during compression
NASA Astrophysics Data System (ADS)
Lukas, Kulaviak; Vera, Penkavova; Marek, Ruzicka; Miroslav, Puncochar; Petr, Zamostny; Zdenek, Grof; Frantisek, Stepanek; Marek, Schongut; Jaromir, Havlica
2017-06-01
The breakage of a model anisometric dry granular material caused by uniaxial compression was studied. A bed of uniform rod-like pasta particles (8 mm long, aspect ratio 1:8) was compressed (Gamlen Tablet Press), and the particle size distribution was measured after each run (dynamic image analysis). The compression dynamics were recorded, and the effect of several parameters was tested (rate of compression, volume of the granular bed, and the magnitude and mode of application of the pressure). Besides the experiments, numerical modelling of the compressed breakable material was performed, employing the Discrete Element Method (DEM). The comparison between the data and the model looks promising.
NRGC: a novel referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2016-11-15
Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost-effective but can also be done in a laboratory environment. State-of-the-art sequence assemblers then construct the whole genomic sequence from these reads, and current computing technology makes it possible to build genomic sequences from billions of reads at minimal cost and in minimal time. As a consequence, we have seen an explosion of biological sequences in recent times; in turn, the cost of storing the sequences in physical memory or transmitting them over the Internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We need data compression algorithms that can exploit the inherent structure of biological sequences; although standard data compression algorithms are prevalent, they are not suited to compressing biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to compress genomic sequences effectively and efficiently. We have performed rigorous experiments to evaluate NRGC on a set of real human genomes. The results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most cases, with very impressive compression and decompression times. The implementations are freely available for non-commercial purposes. They can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip CONTACT: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
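The general idea of referential compression, encoding a target genome as matches against a reference plus occasional literals, can be sketched as follows. This is a greedy toy under assumed conventions, not the actual NRGC algorithm:

```python
def ref_compress(reference, target, min_match=4):
    """Encode target as ('M', ref_pos, length) matches against the
    reference, plus ('L', base) literals where no long match exists."""
    tokens, i = [], 0
    while i < len(target):
        chunk = target[i:i + min_match]
        pos = reference.find(chunk) if len(chunk) == min_match else -1
        if pos != -1:
            length = min_match
            # Greedily extend the match while it still occurs in the reference.
            while i + length < len(target):
                nxt = reference.find(target[i:i + length + 1])
                if nxt == -1:
                    break
                pos, length = nxt, length + 1
            tokens.append(('M', pos, length))
            i += length
        else:
            tokens.append(('L', target[i]))
            i += 1
    return tokens

def ref_decompress(reference, tokens):
    out = []
    for token in tokens:
        if token[0] == 'M':
            _, pos, length = token
            out.append(reference[pos:pos + length])
        else:
            out.append(token[1])
    return ''.join(out)

reference = "ACGTACGTGGTTACGA"
target = "ACGTGGTTXACGTACG"
tokens = ref_compress(reference, target)
```

Because two genomes of the same species are nearly identical, real referential compressors replace long stretches of the target with a handful of (position, length) pairs, which is where the large compression ratios come from.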
Compression mechanisms in the plasma focus pinch
NASA Astrophysics Data System (ADS)
Lee, S.; Saw, S. H.; Ali, Jalil
2017-03-01
The compression of the plasma focus pinch is a dynamic process, governed by the electrodynamics of pinch elongation and opposed by the negative rate of change of current dI/dt associated with the current dip. The compressibility of the plasma is influenced by the thermodynamics, primarily the specific heat ratio: compressibility increases as the specific heat ratio γ decreases with the increasing degrees of freedom f of the plasma ensemble, due to the ionization energy of the higher-Z (atomic number) gases. The most drastic compression occurs when the emitted radiation of a high-Z plasma dominates the dynamics, leading in extreme cases to radiative collapse, which is terminated only when the compressed density is high enough for the inevitable self-absorption of radiation to occur. We discuss the central pinch equation, which contains the basic electrodynamic terms with built-in thermodynamic factors and a dQ/dt term, with Q made up of a Joule heating component and absorption-corrected radiative terms. Deuterium is considered as a thermodynamic reference (fully ionized perfect gas with f = 3) as well as a zero-radiation reference (bremsstrahlung only, with radiation power negligible compared with electrodynamic power). Higher-Z gases are then considered, and regimes of thermodynamic enhancement of compression are systematically identified, as are regimes of radiation enhancement. The code incorporating all these effects is used to compute pinch radius ratios in various gases as a measure of compression. Systematic numerical experiments reveal increasingly severe radiation enhancement of the compression as atomic number increases. The work progresses towards a scaling law for radiative collapse and a generalized specific heat ratio incorporating radiation.
1985-12-01
Skylab's success proved that scientific experimentation in a low-gravity environment was essential to scientific progress, and a more permanent structure was needed to provide such a space laboratory. On January 25, 1984, during his State of the Union address, President Ronald Reagan declared that the United States should exploit the new frontier of space and directed NASA to build a permanently manned space station within a decade. The space station would not only be used as a laboratory for the advancement of science and medicine, but would also provide a staging area for building a lunar base and for manned expeditions to Mars and elsewhere in the solar system. President Reagan invited the international community to join the United States in this endeavour, and NASA and several countries moved forward with the concept. By December 1985, the first phase of the space station was well underway, with the design concept for the crew compartments and laboratories. Pictured are two NASA astronauts at Marshall Space Flight Center's (MSFC) Neutral Buoyancy Simulator (NBS), practicing construction techniques they later used to construct the space station after it was deployed.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1990-01-01
A process is disclosed for X-ray registration and differencing that results in more efficient compression. Differencing of a registered, modeled subject image with a modeled reference image forms a differenced image that is compressed with conventional compression algorithms. Obtaining the modeled reference image involves mapping a relatively unrelated standard reference image onto a three-dimensional model, which is also used to model the subject image. The registration process translationally correlates the modeled subject and reference images in the spatial and spectral dimensions. Prior to compression, the portion of the image falling outside a designated area of interest may be eliminated, to be replenished later from a standard reference image. The compressed differenced image may then be transmitted and/or stored, and subsequently decompressed and added to a standard reference image to form a reconstituted or approximated subject image at a remote location and/or at a later time. Overall effective compression ratios of 100:1 are possible for digital thoracic X-ray images.
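The differencing step is the heart of the scheme: after registration, subject minus reference is mostly near zero and compresses far better than the raw image. A minimal stdlib sketch, with zlib standing in for the "conventional compression algorithms" and hypothetical 8-bit image data:

```python
import zlib

def diff_compress(subject, reference):
    """Compress a subject image as its difference from a registered
    reference image (modulo-256 residual of 8-bit pixels)."""
    residual = bytes((s - r) & 0xFF for s, r in zip(subject, reference))
    return zlib.compress(residual, 9)

def diff_decompress(blob, reference):
    residual = zlib.decompress(blob)
    return bytes((r + d) & 0xFF for r, d in zip(reference, residual))

# Hypothetical "images": the subject differs from the reference only
# slightly, as it would after successful registration.
reference = bytes(range(256)) * 8
subject = bytes((b + (1 if b % 64 == 0 else 0)) & 0xFF for b in reference)
compressed = diff_compress(subject, reference)
restored = diff_decompress(compressed, reference)
```

The near-zero residual compresses to a small fraction of what the subject image alone would, which is the mechanism behind the 100:1 effective ratios quoted for thoracic images.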
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array, based upon the discrete cosine transform statistics, for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array, such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided, along with methods for generating a rate-distortion-optimal quantization table, performing discrete cosine transform-based digital image compression, and operating such a compression and decompression system.
NASA Technical Reports Server (NTRS)
Lundebjerg, Kristen
2016-01-01
The STEM on Station team is part of Education, which is part of the External Relations Organization (ERO). ERO has traditional goals based around a BHAG (Big Hairy Audacious Goal), simplified to a saying: everything we do stimulates actions by others to advance human space exploration. The STEM on Station education initiative is a project focused on bringing off-the-Earth research and learning into classrooms. Educational resources such as lesson plans, activities connecting with the space station, and STEM-related contests are hosted by the STEM on Station team along with partners such as Texas Instruments. These educational activities engage teachers and students in the current happenings aboard the International Space Station, inspiring the next generation of space explorers.
Some Results Relevant to Statistical Closures for Compressible Turbulence
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.
1998-01-01
For weakly compressible turbulent fluctuations there exists a small parameter, the square of the fluctuating Mach number, that allows an investigation using a perturbative treatment. The consequences of such a perturbative analysis in three different subject areas are described: 1) initial conditions in direct numerical simulations, 2) an explanation for the oscillations seen in the compressible pressure in the direct numerical simulations of homogeneous shear, and 3) for turbulence closures accounting for the compressibility of velocity fluctuations. Initial conditions consistent with small turbulent Mach number asymptotics are constructed. The importance of consistent initial conditions in the direct numerical simulation of compressible turbulence is dramatically illustrated: spurious oscillations associated with inconsistent initial conditions are avoided, and the fluctuating dilatational field is some two orders of magnitude smaller for a compressible isotropic turbulence. For the isotropic decay it is shown that the choice of initial conditions can change the scaling law for the compressible dissipation. A two-time expansion of the Navier-Stokes equations is used to distinguish compressible acoustic and compressible advective modes. A simple conceptual model for weakly compressible turbulence - a forced linear oscillator is described. It is shown that the evolution equations for the compressible portions of turbulence can be understood as a forced wave equation with refraction. Acoustic modes of the flow can be amplified by refraction and are able to manifest themselves in large fluctuations of the compressible pressure.
NASA Astrophysics Data System (ADS)
Koziel, Michal; Amar-Youcef, Samir; Bialas, Norbert; Deveaux, Michael; Fröhlich, Ingo; Klaus, Philipp; Michel, Jan; Milanović, Borislav; Müntz, Christian; Stroth, Joachim; Tischler, Tobias; Weirich, Roland; Wiebusch, Michael
2017-02-01
The Compressed Baryonic Matter (CBM) Experiment is one of the core experiments of the future FAIR facility near Darmstadt (Germany). The fixed-target experiment will explore the phase diagram of strongly interacting matter in the regime of high net baryon densities with numerous probes, among them open charm mesons. The Micro Vertex Detector (MVD) will provide a secondary vertex resolution of ∼ 50 μm along the beam axis, and will contribute to background rejection in dielectron spectroscopy and to the reconstruction of weak decays. The detector comprises four stations placed 5, 10, 15, and 20 cm downstream of the target and inside the target vacuum. The stations will be populated with highly granular CMOS Monolithic Active Pixel Sensors, which will feature a spatial resolution of < 5 μm, a non-ionizing radiation tolerance of > 10^13 n_eq/cm^2, an ionizing radiation tolerance of ∼ 3 Mrad, and a readout speed of a few tens of μs per frame. This work introduces the MVD-PRESTO project, which aims at integrating a precursor of the second station of the CBM-MVD that meets the following requirements: a material budget of x/X_0 < 0.5 %, vacuum compatibility, double-sided sensor integration on a Thermal Pyrolytic Graphite (TPG) carrier, and heat evacuation of about 350 mW/cm^2 per sensor with a temperature gradient of a few K/cm.
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
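The DCT-plus-quantization-matrix pipeline this patent builds on can be sketched in pure Python. The matrix values below are illustrative assumptions, not the perceptually optimized matrix of the invention:

```python
import math

N = 8

def dct2(block):
    """2-D DCT-II of an N x N block (direct, unoptimized form)."""
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

# Quantization matrix: fine steps near DC, coarse steps at the high
# frequencies the eye is least sensitive to (illustrative values only).
Q = [[16 + 6 * (u + v) for v in range(N)] for u in range(N)]

# A high-frequency checkerboard block: after quantization, almost all of
# its energy is judged invisible and rounds away to zero.
block = [[128 + 16 * ((x + y) % 2) for y in range(N)] for x in range(N)]
coeffs = dct2(block)
quantized = [[round(coeffs[u][v] / Q[u][v]) for v in range(N)]
             for u in range(N)]
```

Dividing each coefficient by its matrix entry and rounding is where both the quality and the bit rate are set; adapting Q per image, as the patent does, shifts that trade-off toward minimum perceptual error.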
Rectal perforation by compressed air
2017-01-01
As the use of compressed air in industrial work has increased, so has the risk of pneumatic injury from its improper use. However, injury to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension after a colleague had triggered a compressed-air nozzle over his buttock. On arrival, vital signs were stable, but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed. PMID:28706893
ZFP compression plugin (filter) for HDF5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Mark C.
H5Z-ZFP is a compression plugin (filter) for the HDF5 library based upon the ZFP-0.5.0 compression library. It supports 4- or 8-byte integer or floating point HDF5 datasets of any dimension but partitioned in 1, 2, or 3 dimensional chunks. It supports ZFP's four fundamental modes of operation; rate, precision, accuracy or expert. It is a lossy compression plugin.
Sandia 25-meter compressed helium/air gun
NASA Astrophysics Data System (ADS)
Setchell, R. E.
1982-04-01
For nearly twenty years the Sandia 25-meter compressed gas gun has been an important tool for studying condensed materials subjected to transient shock compression. Major system modifications are now in progress to provide new control, instrumentation, and data acquisition capabilities. These features will ensure that the facility can continue as an effective means of investigating a variety of physical and chemical processes in shock-compressed solids.
On Compression of a Heavy Compressible Layer of an Elastoplastic or Elastoviscoplastic Medium
NASA Astrophysics Data System (ADS)
Kovtanyuk, L. V.; Panchenko, G. L.
2017-11-01
The problem of deformation of a horizontal plane layer of a compressible material is solved in the framework of the theory of small strains. The upper boundary of the layer is under the action of shear and compressing loads, and the no-slip condition is satisfied on the lower boundary of the layer. The loads increase in absolute value with time, then become constant, and then decrease to zero. Various plasticity conditions are considered with regard to the material compressibility, namely, the Coulomb-Mohr plasticity condition, the von Mises-Schleicher plasticity condition, and the same conditions with the viscous properties of the material taken into account. To solve the system of partial differential equations for the components of irreversible strains, a finite-difference scheme is developed for a spatial domain increasing with time. The laws of motion of elastoplastic boundaries are presented, the stresses, strains, rates of strain, and displacements are calculated, and the residual stresses and strains are found.
47 CFR 74.783 - Station identification.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Booster Stations § 74.783 Station identification. (a) Each low power TV and TV translator station not... suffix “-LP.” (f) TV broadcast booster station shall be identified by their primary stations by...
47 CFR 74.783 - Station identification.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Booster Stations § 74.783 Station identification. (a) Each low power TV and TV translator station not... suffix “-LP.” (f) TV broadcast booster station shall be identified by their primary stations by...
47 CFR 74.783 - Station identification.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Booster Stations § 74.783 Station identification. (a) Each low power TV and TV translator station not... suffix “-LP.” (f) TV broadcast booster station shall be identified by their primary stations by...
Code of Federal Regulations, 2013 CFR
2013-10-01
... by coast stations and coast earth stations. 80.1119 Section 80.1119 Telecommunication FEDERAL... § 80.1119 Receipt and acknowledgement of distress alerts by coast stations and coast earth stations. (a... for coast stations.) (b) Coast earth stations in receipt of distress alerts must ensure that they are...
47 CFR 74.1283 - Station identification.
Code of Federal Regulations, 2012 CFR
2012-10-01
... FM Broadcast Booster Stations § 74.1283 Station identification. (a) The call sign of an FM broadcast... of an FM booster station will consist of the call sign of the primary station followed by the letters “FM” and the number of the booster station being authorized, e.g., WFCCFM-1. (c) A translator station...
Dissipative processes under the shock compression of glass
NASA Astrophysics Data System (ADS)
Savinykh, A. S.; Kanel, G. I.; Cherepanov, I. A.; Razorenov, S. V.
2016-03-01
New experimental data on the behavior of the K8 and TF1 glasses under shock-wave loading conditions are obtained. The propagation of shock waves is found to be close to self-similar over the maximum compression stress range of 4-12 GPa. Deviations from a general deformation diagram, related to viscous dissipation, occur as the final state of compression is approached. The parameter region in which failure waves form in glass is found not to be limited to the elastic compression stress range, as was previously thought. The failure front velocity increases with the shock compression stress. Outside the region covered by a failure wave, the glasses demonstrate a high dynamic tensile strength (6-7 GPa) under elastic compression, and this strength remains very high after transition through the elastic limit in a compression wave.
NASA Technical Reports Server (NTRS)
Winters, AL
1990-01-01
Viewgraphs on space station fluid resupply are presented. Space Station Freedom is resupplied with supercritical O2 and N2 for the ECLSS and USL on a 180 day resupply cycle. Resupply fluids are stored in the subcarriers on station between resupply cycles and transferred to the users as required. ECLSS contingency fluids (O2 and N2) are supplied and stored on station in a gaseous state. Efficiency and flexibility are major design considerations. Subcarrier approach allows multiple manifest combinations. Growth is achieved by adding modular subcarriers.
High-speed and high-ratio referential genome compression.
Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan
2017-11-01
The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand for high compression ratios due to the intrinsically challenging features of DNA sequences, such as the small alphabet size and frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with a high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than the best competing algorithm in its best case, and our compression speed is at least 2.9 times faster. HiRGC is stable and robust across different reference genomes, whereas the competing methods' performance varies widely depending on the reference genome. Further experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use at https://github.com/yuansliu/HiRGC. Contact: jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online.
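The 2-bit encoding that underlies such referential compressors can be illustrated with a short sketch. This is our own minimal packing routine, not code from HiRGC; all names are ours:

```python
# Minimal 2-bit nucleotide packing (illustrative sketch, not HiRGC itself).
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    """Pack an ACGT string into 2 bits per base (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))  # left-align a short final group
        out.append(b)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from packed bytes."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 3])
    return "".join(bases[:n])

seq = "ACGTTGCA"
assert unpack(pack(seq), len(seq)) == seq  # lossless round trip, 4x smaller
```

Packing alone gives a fixed 4x reduction; the large ratios reported above come from additionally storing only greedy-matched differences against the reference genome.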
Television image compression and small animal remote monitoring
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Jackson, Robert W.
1990-01-01
It was shown that a subject can reliably discriminate differences in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, these discriminations are significantly influenced by whether the TV camera is stable or moving and whether the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, about 18 percent of the so-called normal TV resolution of 8.4 MHz. Since this video rate was judged acceptable by 27 of the 34 subjects (79 percent) for monitoring the general health and status of small animals within their illuminated (lights-on) cages, regardless of whether the camera was stable or moved, it suggests that an immediate Space Station Freedom-to-ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.
International Space Station (ISS)
1997-07-20
Photograph shows the International Space Station Laboratory Module under fabrication at Marshall Space Flight Center (MSFC), Building 4708 West High Bay. Although management of the U.S. elements for the Station were consolidated in 1994, module and node development continued at MSFC by Boeing Company, the prime contractor for the Space Station.
Fluffy dust forms icy planetesimals by static compression
NASA Astrophysics Data System (ADS)
Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji
2013-09-01
Context. Several barriers have been proposed in planetesimal formation theory: bouncing, fragmentation, and radial drift problems. Understanding the structure evolution of dust aggregates is a key in planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they are fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals. Aims: We aim to reveal the pathway of dust structure evolution from dust grains to compact planetesimals. Methods: Using the compressive strength formula, we analytically investigate how fluffy dust aggregates are compressed by static compression due to ram pressure of the disk gas and self-gravity of the aggregates in protoplanetary disks. Results: We reveal the pathway of the porosity evolution from dust grains via fluffy aggregates to form planetesimals, circumventing the barriers in planetesimal formation. The aggregates are compressed by the disk gas to a density of 10-3 g/cm3 in coagulation, which is more compact than is the case with collisional compression. Then, they are compressed more by self-gravity to 10-1 g/cm3 when the radius is 10 km. Although the gas compression decelerates the growth, the aggregates grow rapidly enough to avoid the radial drift barrier when the orbital radius is ≲6 AU in a typical disk. Conclusions: We propose a fluffy dust growth scenario from grains to planetesimals. It enables icy planetesimal formation in a wide range beyond the snowline in protoplanetary disks. This result proposes a concrete initial condition of planetesimals for the later stages of the planet formation.
NASA Technical Reports Server (NTRS)
Smith, M.; Barratt, M.; Lloyd, C.
1992-01-01
Because of the time and distance involved in returning a patient from space to a definitive medical care facility, the capability for Advanced Cardiac Life Support (ACLS) exists onboard Space Station Freedom. Methods: In order to evaluate the effectiveness of terrestrial ACLS protocols in microgravity, a medical team conducted simulations during parabolic flights onboard the KC-135 aircraft. The hardware planned for use during the MTC phase of the space station was utilized to increase the fidelity of the scenario and to evaluate the prototype equipment. Based on initial KC-135 testing of CPR and ACLS, changes were made to the ventricular fibrillation algorithm in order to accommodate the space environment. Other constraints on the delivery of ACLS onboard the space station include crew size, minimal training, crew deconditioning, and limited supplies and equipment. Results: The delivery of ACLS in microgravity is hindered by the environment, but should be adequate. Factors specific to microgravity were identified for inclusion in the protocol, including immediate restraint of the patient and early intubation to secure the airway. External cardiac compressions of adequate force and frequency were administered using various methods. The more significant limiting factors appear to be crew training, crew size, and limited supplies. Conclusions: Although ACLS is possible in the microgravity environment, future evaluations are necessary to further refine the protocols. Proper patient and medical officer restraint is crucial prior to advanced procedures. Emphasis should also be placed on early intubation for airway management and drug administration. Preliminary results and further testing will be utilized in the design of medical hardware, determination of crew training, and medical operations for space station and beyond.
NASA Astrophysics Data System (ADS)
Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian
2017-04-01
Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm in traditional hyperspectral systems and in CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data is up to an order of magnitude less than that in conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster in CS-MUSI data.
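The matched filter evaluated in this study is the standard spectral detector; a minimal sketch follows, with synthetic data and names of our own choosing (this is not the CS-MUSI pipeline itself):

```python
import numpy as np

# Hedged sketch of a spectral matched-filter detector: whiten by the
# background covariance and correlate with the target signature.
def matched_filter_scores(cube, target):
    """cube: (n_pixels, n_bands) spectra; target: (n_bands,) signature.
    Returns one detection score per pixel."""
    mu = cube.mean(axis=0)
    X = cube - mu
    sigma = X.T @ X / len(cube) + 1e-6 * np.eye(cube.shape[1])  # regularized covariance
    w = np.linalg.solve(sigma, target - mu)                     # filter weights
    return X @ w

rng = np.random.default_rng(0)
bands = 8
background = rng.normal(size=(500, bands))
target = np.ones(bands) * 3.0
cube = np.vstack([background, target])        # plant one target pixel at index 500
scores = matched_filter_scores(cube, target)
assert scores.argmax() == 500                 # the planted target scores highest
```

The same scoring rule applies whether the spectra come from a conventional cube or from reconstructed multiplexed measurements; only the provenance of `cube` changes.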
NASA Technical Reports Server (NTRS)
Bailey, Andrea; Kietzman, John; King, Shirlyn; Stover, Rae; Wegner, Torsten
1992-01-01
The objective of this project was to design an onboard operator station for the conceptual Lunar Work Vehicle (LWV). The LWV would be used in the colonization of a lunar outpost. The details that follow, however, are for an Earth-bound model. The operator station is designed to be dimensionally correct for an astronaut wearing the current space shuttle EVA suit (which includes life support). The proposed operator station will support and restrain an astronaut as well as provide protection from the hazards of vehicle rollover. The threat of suit puncture is eliminated by rounding all corners and edges. A step-plate, located at the front of the vehicle, provides excellent ease of entry and exit. The operator station weight requirements are met by making efficient use of rigid members, semi-rigid members, and woven fabrics.
Breast compression in mammography: how much is enough?
Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert
2003-06-01
The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated film. The results indicated that 24% of women did not experience a difference in thickness when the compression was reduced. This is an important new finding because the aim of breast compression is to reduce breast thickness. If breast thickness is not reduced when compression force is applied then discomfort is increased with no benefit in image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.
Space Station Freedom as an engineering experiment station: An overview
NASA Technical Reports Server (NTRS)
Rose, M. Frank
1992-01-01
In this presentation, the premise that Space Station Freedom has great utility as an engineering experiment station will be explored. There are several modes in which it can be used for this purpose. The most obvious are space qualification, process development, in-space satellite repair, and materials engineering. The engineering experiments which can be done at Space Station Freedom run the gamut from small process-oriented experiments to full exploratory development models. A sampling of typical engineering experiments is discussed in this session. First and foremost, Space Station Freedom is an elaborate experiment itself, which, if properly instrumented, will provide engineering guidelines for even larger structures which must surely be built if humankind is truly 'outward bound.' Secondly, there is the test, evaluation and space qualification of advanced electric thruster concepts, advanced power technology and protective coatings which must of necessity be tested in the vacuum of space. The current approach to testing these technologies is to do exhaustive laboratory simulation followed by shuttle or unmanned flights. Third, the advanced development models of life support systems intended for future space stations, manned Mars missions, and lunar colonies can be tested for operation in a low gravity environment. Fourth, it will be necessary to develop new protective coatings, establish construction techniques, and evaluate new materials to be used in the upgrading and repair of Space Station Freedom. Finally, the industrial sector, if it is ever to build facilities for the production of commercial products, must have all the engineering aspects of the process evaluated in space prior to a commitment to such a facility.
NASA Technical Reports Server (NTRS)
Fahnestock, R. J.; Renzetti, N. A.
1975-01-01
The Madrid space station, operated under bilateral agreements between the governments of the United States and Spain, is described in both Spanish and English. The space station utilizes two tracking and data acquisition networks: the Deep Space Network (DSN) of the National Aeronautics and Space Administration and the Spaceflight Tracking and Data Network (STDN) operated under the direction of the Goddard Space Flight Center. The station, which is staffed by Spanish employees, comprises four facilities: Robledo 1, Cebreros, and Fresnedillas-Navalagamella, all with 26-meter-diameter antennas, and Robledo 2, with a 64-meter antenna.
NASA Technical Reports Server (NTRS)
Munoz, Abraham
1988-01-01
Conceived since the beginning of time, living in space is no longer a dream but rather a very near reality. The concept of a Space Station is not a new one, but a redefined one. Many investigations on the kinds of experiments and work assignments the Space Station will need to accommodate have been completed, but NASA specialists are constantly talking with potential users of the Station to learn more about the work they, the users, want to do in space. Present configurations are examined along with possible new ones.
Fundamental study of compression for movie files of coronary angiography
NASA Astrophysics Data System (ADS)
Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie
2005-04-01
When network distribution of movie files is considered, lossy-compressed movie files with small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal, and fast heart rates. MPEG-1, DivX 5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies. Five kinds of movies, the four kinds of compressed movies and the uncompressed AVI (used in place of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. In the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. In the virtual normal movie, a different compression technique was best for each evaluation factor. In the virtual tachycardia movie, MPEG-1 was best on all factors except contrast. The best compression format thus depends on the speed of the movie, which we attribute to differences between compression algorithms: movie compression combines inter-frame and intra-frame compression, and because each method balances these differently, the relation between the compression algorithm and our results requires further examination.
Combined Industry, Space and Earth Science Data Compression Workshop
NASA Technical Reports Server (NTRS)
Kiely, Aaron B. (Editor); Renner, Robert L. (Editor)
1996-01-01
The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution and archival systems.
Free-beam soliton self-compression in air
NASA Astrophysics Data System (ADS)
Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.
2018-02-01
We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.
Interactive computer graphics applications for compressible aerodynamics
NASA Technical Reports Server (NTRS)
Benson, Thomas J.
1994-01-01
Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
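The isentropic-flow relations evaluated by the first application are standard gas-dynamics formulas; a minimal calculator sketch (the function name and default gamma = 1.4 for air are our choices):

```python
# Standard isentropic flow relations: total-to-static ratios at Mach M.
def isentropic(M, gamma=1.4):
    """Return (T0/T, p0/p) for isentropic flow of a perfect gas at Mach M."""
    T0_T = 1.0 + 0.5 * (gamma - 1.0) * M * M          # stagnation temperature ratio
    p0_p = T0_T ** (gamma / (gamma - 1.0))            # stagnation pressure ratio
    return T0_T, p0_p

T0_T, p0_p = isentropic(1.0)
print(round(T0_T, 3), round(p0_p, 3))  # 1.2 1.893
```

A full calculator of the kind described would add normal-shock and oblique-shock relations on top of these ratios.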
Tether applications for space station
NASA Technical Reports Server (NTRS)
Nobles, W.
1986-01-01
A wide variety of space station applications for tethers was reviewed. Many will affect the operation of the station itself, while others are in the category of research or scientific platforms. One of the most expensive aspects of operating the space station will be the continuing shuttle traffic to transport logistic supplies and payloads to it. If tethers can be used to improve that transportation operation, the operating efficiency of the system will increase and the overall cost of the space station will fall. The concept studied consists of using a tether to lower the shuttle from the space station, resulting in a transfer of angular momentum and energy from the orbiter to the space station. The consequences of this transfer are studied, along with how beneficial use can be made of it.
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that would not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
Towards Natural Transition in Compressible Boundary Layers
2016-06-29
AFRL-AFOSR-CL-TR-2016-0011. Final report under grant FA9550-11-1-0354, covering the period to 29-03-2016. Principal Investigator: Marcello Augusto Faraco de Medeiros, with Germán Andrés Gaviria; FUNDACAO PARA O INCREMENTO DA... Distribution unlimited.
POLYCOMP: Efficient and configurable compression of astronomical timelines
NASA Astrophysics Data System (ADS)
Tomasi, M.
2016-07-01
This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited to compressing smooth, noiseless streams of data such as pointing information, as one of the algorithms it implements applies a combination of least-squares polynomial fitting and discrete Chebyshev transforms that achieves a compression ratio Cr of up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20 with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making the timelines small enough to be kept on a portable hard drive.
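The polynomial-fitting half of the scheme can be sketched as a toy re-implementation (chunk size, degree, and error bound are our own parameter choices, not polycomp's):

```python
import numpy as np

# Toy chunked polynomial compression: fit each chunk with a low-degree
# least-squares polynomial and keep only the coefficients when the
# worst-case reconstruction error stays below a user-set bound.
def compress_chunks(y, chunk=64, degree=4, max_err=1e-3):
    x = np.linspace(-1.0, 1.0, chunk)
    out = []
    for i in range(0, len(y) - chunk + 1, chunk):
        seg = y[i:i + chunk]
        c = np.polyfit(x, seg, degree)
        if np.max(np.abs(np.polyval(c, x) - seg)) <= max_err:
            out.append(("poly", c))       # degree+1 floats instead of `chunk` samples
        else:
            out.append(("raw", seg))      # fall back to raw storage
    return out

t = np.linspace(0, 1, 640)
smooth = np.sin(2 * np.pi * t)            # smooth, noiseless pointing-like stream
blocks = compress_chunks(smooth)
```

On this smooth stream every chunk collapses to 5 coefficients in place of 64 samples (roughly Cr ≈ 13 before entropy coding), which is the mechanism behind the large ratios quoted above; the error bound makes the loss of information explicitly user-configurable.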
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field of view. High quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
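For reference, the simpler global SVD coil compression that location-wise methods refine can be sketched as follows; array shapes and names are illustrative, and the data here are synthetic:

```python
import numpy as np

# Generic SVD coil compression: project many physical channels onto the
# dominant right singular vectors to obtain a few virtual coils.
def compress_coils(data, n_virtual):
    """data: (n_samples, n_channels) k-space samples across coils.
    Returns (compressed data, compression matrix)."""
    _, _, vh = np.linalg.svd(data, full_matrices=False)
    A = vh[:n_virtual].conj().T           # (n_channels, n_virtual)
    return data @ A, A

rng = np.random.default_rng(1)
mix = rng.normal(size=(4, 32))            # 32 channels driven by 4 latent signals
data = rng.normal(size=(1000, 4)) @ mix   # effectively rank-4 coil data
virt, A = compress_coils(data, 6)
recon = virt @ A.conj().T
assert np.allclose(recon, data)           # rank-4 data survives 6 virtual coils
```

The paper's contribution is to compute such a compression per spatial location along the fully sampled directions and then align the resulting matrices so the virtual coil sensitivities vary smoothly.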
NASA Astrophysics Data System (ADS)
Secchi, Paolo
2005-05-01
We introduce the main known results of the theory of incompressible and compressible vortex sheets. Moreover, we present recent results obtained by the author with J. F. Coulombel about supersonic compressible vortex sheets in two space dimensions. The problem is a nonlinear free boundary hyperbolic problem with two difficulties: the free boundary is characteristic and the Lopatinski condition holds only in a weak sense, yielding losses of derivatives. Under a supersonic condition that precludes violent instabilities, we prove an energy estimate for the boundary value problem obtained by linearization around an unsteady piecewise solution.
Compression device for feeding a waste material to a reactor
Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.
2001-08-21
A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented at generally right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.
High-quality lossy compression: current and future trends
NASA Astrophysics Data System (ADS)
McLaughlin, Steven W.
1995-01-01
This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework where each can be characterized in terms of three well-defined advantages: cell-shape, region-shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.
Prechamber Compression-Ignition Engine Performance
NASA Technical Reports Server (NTRS)
Moore, Charles S.; Collins, John H., Jr.
1938-01-01
Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of the prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution; size, shape, and direction of the passage connecting the cylinder and prechamber; shape of prechamber; cylinder clearance; compression ratio; and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.
Compression of the Global Land 1-km AVHRR dataset
Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.
1996-01-01
Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas, which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first-order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.
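The quadtree coding of uniform masked regions described above can be sketched as follows. This is an illustrative toy, assuming a simple three-token alphabet; the actual AVHRR codec's format details are not shown in the abstract:

```python
import numpy as np

def quadtree_encode(mask):
    """Recursively encode a square binary mask as a quadtree.

    Emits 'T' for an all-True block, 'F' for an all-False block, and
    'S' for a block that is split into four quadrants (encoded
    depth-first). Uniform regions collapse to a single token, which is
    why large non-land areas compress so well.
    """
    if mask.all():
        return ['T']
    if not mask.any():
        return ['F']
    h2, w2 = mask.shape[0] // 2, mask.shape[1] // 2
    tokens = ['S']
    for quad in (mask[:h2, :w2], mask[:h2, w2:],
                 mask[h2:, :w2], mask[h2:, w2:]):
        tokens += quadtree_encode(quad)
    return tokens

mask = np.zeros((16, 16), dtype=bool)
mask[:8, :8] = True          # one uniform "land" quadrant
tokens = quadtree_encode(mask)
print(tokens)
```

A 16x16 mask with one uniform quadrant collapses to five tokens, illustrating how the masked 80 per cent of each image shrinks to a tiny fraction of its raw size.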
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
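A full-reference semantic measure of the kind described above can be sketched by comparing feature vectors from the original and compressed images. The cosine-similarity scoring below is a minimal stand-in; the actual ISQA feature extractor (OCR front-end responses over text regions) is assumed, not shown:

```python
import numpy as np

def isqa_score(feat_ref, feat_cmp):
    """Toy full-reference semantic quality score: cosine similarity
    between features extracted from the original image (feat_ref) and
    the compressed image (feat_cmp). 1.0 means the compressed image
    preserves the semantic features perfectly."""
    num = float(np.dot(feat_ref, feat_cmp))
    den = float(np.linalg.norm(feat_ref) * np.linalg.norm(feat_cmp)) + 1e-12
    return num / den

f_ref = np.array([1.0, 0.5, 0.2])   # hypothetical text-region features
f_cmp = np.array([1.0, 0.5, 0.2])   # unchanged by compression
print(round(isqa_score(f_ref, f_cmp), 3))
```

Unlike PSNR, such a score is unaffected by pixel-level distortions that leave the task-relevant features intact, which is the paper's central argument.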
Monitoring compaction and compressibility changes in offshore chalk reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, G.; Hardy, R.; Eltvik, P.
1994-03-01
Some of the North Sea's largest and most important oil fields are in chalk reservoirs. In these fields, it is important to measure reservoir compaction and compressibility because compaction can result in platform subsidence. Also, compaction drive is a main drive mechanism in these fields, so an accurate reserves estimate cannot be made without first measuring compressibility. Estimating compaction and reserves is difficult because compressibility changes throughout field life. Installation of accurate, permanent downhole pressure gauges on offshore chalk fields makes it possible to use a new method to monitor compressibility -- measurement of reservoir pressure changes caused by the tide. This tidal-monitoring technique is an in-situ method that can greatly increase compressibility information. It can be used to estimate compressibility and to measure compressibility variation over time. This paper concentrates on application of the tidal-monitoring technique to North Sea chalk reservoirs. However, the method is applicable to any tidal offshore area and can be applied whenever necessary to monitor in-situ rock compressibility. One such application would be if platform subsidence were expected.
Compressible homogeneous shear: Simulation and modeling
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.
1992-01-01
Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.
Compressible homogeneous shear - Simulation and modeling
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.
1991-01-01
Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.
Effects of compressibility on turbulent relative particle dispersion
NASA Astrophysics Data System (ADS)
Shivamoggi, Bhimsen K.
2016-08-01
In this paper, phenomenological developments are used to explore the effects of compressibility on the relative particle dispersion (RPD) in three-dimensional (3D) fully developed turbulence (FDT). The role played by the compressible FDT cascade physics underlying this process is investigated. Compressibility effects are found to lead to reduction of RPD, development of the ballistic regime and particle clustering, corroborating the laboratory experiment and numerical simulation results (Cressman J. R. et al., New J. Phys., 6 (2004) 53) on the motion of Lagrangian tracers on a surface flow that constitutes a 2D compressible subsystem. These formulations are developed from the scaling relations for compressible FDT and are validated further via an alternative dimensional/scaling development for compressible FDT similar to the one given for incompressible FDT by Batchelor and Townsend (Surveys in Mechanics (Cambridge University Press) 1956, p. 352). The rationale for spatial intermittency effects is legitimized via the nonlinear scaling dependence of RPD on the kinetic-energy dissipation rate.
Effects of video compression on target acquisition performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Cha, Jae; Preece, Bradley
2008-04-01
The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To obviate this problem, still and moving imagery can be compressed, often resulting in greater than 100 fold decrease in required bandwidth. Compression, however, is generally not error-free and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.
Multivariable control of vapor compression systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, X.D.; Liu, S.; Asada, H.H.
1999-07-01
This paper presents the results of a study of multi-input multi-output (MIMO) control of vapor compression cycles that have multiple actuators and sensors for regulating multiple outputs, e.g., superheat and evaporating temperature. The conventional single-input single-output (SISO) control was shown to have very limited performance. A low-order lumped-parameter model was developed to describe the significant dynamics of vapor compression cycles. Dynamic modes were analyzed based on the low-order model to provide physical insight into system dynamic behavior. To synthesize a MIMO control system, the Linear-Quadratic Gaussian (LQG) technique was applied to coordinate compressor speed and expansion valve opening with guaranteed stability robustness in the design. Furthermore, to control a vapor compression cycle over a wide range of operating conditions where system nonlinearities become evident, a gain scheduling scheme was used so that the MIMO controller could adapt to changing operating conditions. Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor compression cycles compared to the conventional SISO control scheme. The MIMO control proposed in this paper could be extended to the control of vapor compression cycles in a variety of HVAC and refrigeration applications to improve system performance and energy efficiency.
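The MIMO synthesis step underlying such an LQG design can be sketched as a discrete-time LQR gain computed by Riccati iteration. The 2x2 plant below (states standing in for superheat and evaporating temperature, inputs for compressor speed and valve opening) is a made-up illustration, not the paper's identified cycle model:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via fixed-point iteration of the
    Riccati recursion P = Q + A'P(A - BK), K = (R + B'PB)^-1 B'PA.
    The resulting state feedback u = -Kx trades state regulation (Q)
    against actuator effort (R)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical 2-state, 2-input stand-in for the vapor compression cycle
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.eye(2)
Q = np.eye(2)
R = 0.1 * np.eye(2)
K = dlqr(A, B, Q, R)
Acl = A - B @ K                      # closed-loop dynamics
print(np.max(np.abs(np.linalg.eigvals(Acl))) < 1.0)  # True: stable
```

Gain scheduling, as the paper describes, would repeat this synthesis at several operating points and interpolate the resulting gains.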
Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility
NASA Astrophysics Data System (ADS)
Herzke, Tobias; Hohmann, Volker
2005-12-01
The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase in intelligibility
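The core operation in such a scheme, instantaneous (memoryless) compression of the envelope in each frequency band, can be sketched as a sample-by-sample power-law gain. The threshold and exponent below are illustrative placeholders, not the study's loudness-scaling-derived gain characteristics, and the gammatone filterbank is assumed rather than shown:

```python
import numpy as np

def compress_band(x, threshold=0.1, exponent=0.5):
    """Instantaneous amplitude compression of one band signal:
    samples at or below `threshold` pass unchanged; above it, the
    magnitude is mapped to threshold * (|x|/threshold)**exponent,
    shrinking the dynamic range (exponent < 1)."""
    mag = np.abs(x)
    gain = np.ones_like(mag)
    hot = mag > threshold
    gain[hot] = (mag[hot] / threshold) ** (exponent - 1.0)
    return np.sign(x) * mag * gain

band = np.array([0.05, 0.1, 0.4, 0.8])
out = compress_band(band)
print(out)
```

Applying this independently in each gammatone band and resynthesizing yields the kind of instantaneous multiband compression the study evaluates against linear amplification.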
An Optimal Seed Based Compression Algorithm for DNA Sequences
Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan
2016-01-01
This paper proposes a seed-based lossless compression algorithm to compress a DNA sequence, which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868
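The repeat-substitution idea can be sketched as a toy LZ-style coder: each k-mer already seen is replaced by a back-reference, otherwise literal bases are emitted. A real coder of the kind the paper describes would also record near-matches with their mismatch positions; that refinement is omitted here:

```python
def seed_compress(seq, k=8):
    """Toy seed-based coder for a DNA string: scan in k-base steps,
    emitting ('R', offset, k) when the k-mer was seen before (a
    back-reference to its first occurrence) and ('L', bases)
    otherwise. Lossless: the sequence is recoverable from the tokens."""
    index = {}          # k-mer -> position of first occurrence
    out = []
    i = 0
    while i < len(seq):
        kmer = seq[i:i + k]
        if len(kmer) == k and kmer in index:
            out.append(('R', index[kmer], k))   # back-reference
        else:
            if len(kmer) == k:
                index[kmer] = i
            out.append(('L', kmer))             # literal bases
        i += k
    return out

tokens = seed_compress("ACGTACGT" * 3, k=8)
print(tokens)
```

A perfect triple repeat compresses to one literal plus two short back-references, which is the gain the offline repeat dictionary is designed to capture.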
Compressive Classification for TEM-EELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Weituo; Stevens, Andrew; Yang, Hao
Electron energy loss spectroscopy (EELS) is typically conducted in STEM mode with a spectrometer, or in TEM mode with energy selection. These methods produce a 3D data set (x, y, energy). Some compressive sensing [1,2] and inpainting [3,4,5] approaches have been proposed for recovering a full set of spectra from compressed measurements. In many cases the final form of the spectral data is an elemental map (an image with channels corresponding to elements). This means that most of the collected data is unused or summarized. We propose a method to directly recover the elemental map with reduced dose and acquisition time. We have designed a new computational TEM sensor for compressive classification [6,7] of energy loss spectra called TEM-EELS.
Compressibility characteristics of Sabak Bernam Marine Clay
NASA Astrophysics Data System (ADS)
Lat, D. C.; Ali, N.; Jais, I. B. M.; Baharom, B.; Yunus, N. Z. M.; Salleh, S. M.; Azmi, N. A. C.
2018-04-01
This study is carried out to determine the geotechnical properties and compressibility characteristics of marine clay collected at Sabak Bernam. The compressibility characteristics of this soil are determined from a 1-D consolidation test and verified against existing correlations by other researchers. No literature has been found on the compressibility characteristics of Sabak Bernam Marine Clay. It is important to carry out this study since this type of marine clay covers a large coastal area of west coast Malaysia. This type of marine clay was found on the main road connecting Klang to Perak, and the road keeps experiencing undulation and uneven settlement which jeopardise the safety of the road users. The soil is indicated in the Generalised Soil Map of Peninsular Malaysia as a CLAY with alluvial soil on recent marine and riverine alluvium. Based on the British Standard Soil Classification and Plasticity Chart, the soil is classified as a CLAY with very high plasticity (CV). Results from laboratory tests on physical properties and compressibility parameters show that Sabak Bernam Marine Clay (SBMC) is highly compressible and has low permeability and poor drainage characteristics. The compressibility parameters obtained for SBMC are in good agreement with those reported by other researchers in the same field.
Aging and compressibility of municipal solid wastes.
Chen, Y M; Zhan, Tony L T; Wei, H Y; Ke, H
2009-01-01
The expansion of a municipal solid waste (MSW) landfill requires the ability to predict settlement behavior of the existing landfill. The practice of using a single compressibility value when performing a settlement analysis may lead to inaccurate predictions. This paper gives consideration to changes in the mechanical compressibility of MSW as a function of the fill age of MSW as well as the embedding depth of MSW. Borehole samples representative of various fill ages were obtained from five boreholes drilled to the bottom of the Qizhishan landfill in Suzhou, China. Thirty-one borehole samples were used to perform confined compression tests. Waste composition and volume-mass properties (i.e., unit weight, void ratio, and water content) were measured on all the samples. The test results showed that the compressible components of the MSW (i.e., organics, plastics, paper, wood and textiles) decreased with an increase in the fill age. The in situ void ratio of the MSW was shown to decrease with depth into the landfill. The compression index, Cc, was observed to decrease from 1.0 to 0.3 with depth into the landfill. Settlement analyses were performed on the existing landfill, demonstrating that the variation of MSW compressibility with fill age or depth should be taken into account in the settlement prediction.
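The effect of a depth- or age-varying compression index on predicted settlement can be illustrated with the standard one-dimensional consolidation formula S = Cc/(1+e0) * H * log10(sigma_f/sigma_0). The layer thickness, void ratio, and stresses below are illustrative numbers, not measured Qizhishan values; only the Cc range (1.0 fresh vs. 0.3 aged) comes from the abstract:

```python
import math

def consolidation_settlement(Cc, e0, H, sigma0, sigma_f):
    """Primary consolidation settlement of a layer of thickness H (m)
    as effective stress rises from sigma0 to sigma_f (same units),
    with compression index Cc and initial void ratio e0."""
    return Cc / (1.0 + e0) * H * math.log10(sigma_f / sigma0)

# Same hypothetical layer, fresh waste (Cc = 1.0) vs. aged waste (Cc = 0.3)
fresh = consolidation_settlement(1.0, 2.0, 10.0, 50.0, 100.0)
aged = consolidation_settlement(0.3, 2.0, 10.0, 50.0, 100.0)
print(round(fresh, 2), round(aged, 2))
```

Using a single fresh-waste Cc for the whole fill would overpredict the settlement of the aged, deeper layers by more than a factor of three in this example, which is the paper's point about fill-age-dependent compressibility.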
Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro
2008-04-01
This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for training, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than the conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.
Fpack and Funpack Utilities for FITS Image Compression and Uncompression
NASA Technical Reports Server (NTRS)
Pence, W.
2008-01-01
Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images; the library currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
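The tiled-compression idea fpack implements can be sketched as follows: split the image into independent row tiles, compress each tile separately, and keep the compressed tiles in a table so a reader can later decompress only the rows it needs. This toy uses raw zlib on a NumPy array; fpack itself uses the FITS tiled-image convention with Rice, GZIP, H-compress, or PLIO codecs inside a binary table, which is not reproduced here:

```python
import zlib
import numpy as np

def tile_compress(img, tile_rows=1):
    """Compress an image as independent deflated row tiles
    (tile_rows rows per tile), returned as a list of byte strings."""
    return [zlib.compress(img[r:r + tile_rows].tobytes())
            for r in range(0, img.shape[0], tile_rows)]

def tile_decompress(blobs, shape, dtype):
    """Restore the full image from the compressed tile table."""
    data = b"".join(zlib.decompress(b) for b in blobs)
    return np.frombuffer(data, dtype=dtype).reshape(shape)

img = np.arange(64, dtype=np.int32).reshape(8, 8)
blobs = tile_compress(img, tile_rows=2)
restored = tile_decompress(blobs, img.shape, img.dtype)
print(np.array_equal(restored, img))  # True: lossless round trip
```

Because each tile is self-contained, partial reads need only decompress the tiles overlapping the requested rows, which is the access-pattern benefit the convention is designed for.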
47 CFR 80.109 - Transmission to a plurality of mobile stations by a public coast station.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Transmission to a plurality of mobile stations... Procedures Operating Procedures-Land Stations § 80.109 Transmission to a plurality of mobile stations by a... mobile stations. ...
Compression Fracture of CFRP Laminates Containing Stress Intensifications.
Leopold, Christian; Schütt, Martin; Liebig, Wilfried V; Philipkowski, Timo; Kürten, Jonas; Schulte, Karl; Fiedler, Bodo
2017-09-05
For brittle fracture behaviour of carbon fibre reinforced plastics (CFRP) under compression, several approaches exist which describe different mechanisms during failure, especially at stress intensifications. The failure process is not only initiated by buckling fibres; a shear-driven fibre compressive failure also favours or initiates the formation of fibres into a kink-band. Starting from this kink-band, further damage can be detected, which leads to the final failure. The subject of this work is an experimental investigation of the influence of ply thickness and stacking sequence in quasi-isotropic CFRP laminates containing stress intensifications under compression loading. The different effects that influence the compression failure and the role the stacking sequence plays in damage development and the resulting compressive strength are identified and discussed. The influence of stress intensifications is investigated in detail at a hole in open hole compression (OHC) tests. A proposed interrupted test approach allows identifying the mechanisms of damage initiation and propagation from the free edge of the hole by causing a distinct damage state and examining it at a precise instant of time during the fracture process. Compression after impact (CAI) tests are executed in order to compare the OHC results to a different type of stress intensification. Unnotched compression tests are carried out for comparison as a reference. With this approach, a more detailed description of the failure mechanisms during the sudden compression failure of CFRP is achieved. By microscopic examination of single plies from various specimens, the different effects that influence the compression failure are identified. First damage of fibres always occurs in the 0°-ply. Fibre shear failure leads to local microbuckling and the formation and growth of a kink-band as final failure mechanisms. The formation of a kink-band and finally steady-state kinking is shifted to higher compressive strains
Compression Fracture of CFRP Laminates Containing Stress Intensifications
Schütt, Martin; Philipkowski, Timo; Kürten, Jonas; Schulte, Karl
2017-01-01
For brittle fracture behaviour of carbon fibre reinforced plastics (CFRP) under compression, several approaches exist which describe different mechanisms during failure, especially at stress intensifications. The failure process is not only initiated by buckling fibres; a shear-driven fibre compressive failure also favours or initiates the formation of fibres into a kink-band. Starting from this kink-band, further damage can be detected, which leads to the final failure. The subject of this work is an experimental investigation of the influence of ply thickness and stacking sequence in quasi-isotropic CFRP laminates containing stress intensifications under compression loading. The different effects that influence the compression failure and the role the stacking sequence plays in damage development and the resulting compressive strength are identified and discussed. The influence of stress intensifications is investigated in detail at a hole in open hole compression (OHC) tests. A proposed interrupted test approach allows identifying the mechanisms of damage initiation and propagation from the free edge of the hole by causing a distinct damage state and examining it at a precise instant of time during the fracture process. Compression after impact (CAI) tests are executed in order to compare the OHC results to a different type of stress intensification. Unnotched compression tests are carried out for comparison as a reference. With this approach, a more detailed description of the failure mechanisms during the sudden compression failure of CFRP is achieved. By microscopic examination of single plies from various specimens, the different effects that influence the compression failure are identified. First damage of fibres always occurs in the 0°-ply. Fibre shear failure leads to local microbuckling and the formation and growth of a kink-band as final failure mechanisms. The formation of a kink-band and finally steady-state kinking is shifted to higher compressive strains
46 CFR 147.60 - Compressed gases.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...
46 CFR 147.60 - Compressed gases.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...
46 CFR 147.60 - Compressed gases.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...
46 CFR 147.60 - Compressed gases.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...
A customer-friendly Space Station
NASA Technical Reports Server (NTRS)
Pivirotto, D. S.
1984-01-01
This paper discusses the relationship of customers to the Space Station Program currently being defined by NASA. Emphasis is on definition of the Program such that the Space Station will be conducive to use by customers, that is by people who utilize the services provided by the Space Station and its associated platforms and vehicles. Potential types of customers are identified. Scenarios are developed for ways in which different types of customers can utilize the Space Station. Both management and technical issues involved in making the Station 'customer friendly' are discussed.
Space Station transition through Spacelab
NASA Technical Reports Server (NTRS)
Craft, Harry G., Jr.; Wicks, Thomas G.
1990-01-01
It is appropriate that NASA's Office of Space Science and Application's science management structures and processes that have proven successful on Spacelab be applied and extrapolated to Space Station utilization, wherever practical. Spacelab has many similarities and complementary aspects to Space Station Freedom. An understanding of the similarities and differences between Spacelab and Space Station is necessary in order to understand how to transition from Spacelab to Space Station. These relationships are discussed herein as well as issues which must be dealt with and approaches for transition and evolution from Spacelab to Space Station.
47 CFR 74.682 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Station identification. 74.682 Section 74.682... Stations § 74.682 Station identification. (a) Each television broadcast auxiliary station operating with a transmitter output power of 1 watt or more must, when actually transmitting programs, transmit station...
Pulse compression and prepulse suppression apparatus
Dane, Clifford B.; Hackel, Lloyd A.; George, Edward V.; Miller, John L.; Krupke, William F.
1993-01-01
A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).
Pulse compression and prepulse suppression apparatus
Dane, C.B.; Hackel, L.A.; George, E.V.; Miller, J.L.; Krupke, W.F.
1993-11-09
A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).
Compression asphyxia from a human pyramid.
Tumram, Nilesh Keshav; Ambade, Vipul Namdeorao; Biyabani, Naushad
2015-12-01
In compression asphyxia, respiration is stopped by external forces on the body. It is usually due to an external force compressing the trunk, such as a heavy weight on the chest or abdomen, and is associated with internal injuries. In the present case, the victim was trapped and crushed under persons falling from a human pyramid formed for the "Dahi Handi" festival. There was neither any severe blunt force injury nor any significant pathological natural disease contributing to the cause of death. The victim was unable to remove himself from the situation because his cognitive responses and coordination were impaired by alcohol intake. The victim died from asphyxia due to compression of his chest and abdomen. Compression asphyxia resulting from the collapse of a human pyramid, and the dynamics of its impact force in these circumstances, is very rare and, to the best of our knowledge, has not been reported previously. © The Author(s) 2015.
Garner, Alan A; Hsu, Jeremy; McShane, Anne; Sroor, Adam
Increased fracture displacement has previously been described with the application of pelvic circumferential compression devices (PCCDs) in patients with lateral compression-type pelvic fracture. We describe the first reported case of hemodynamic deterioration temporally associated with the prehospital application of a PCCD in a patient with a complex acetabular fracture with medial displacement of the femoral head. Active hemorrhage from a site adjacent to the acetabular fracture was subsequently demonstrated on angiography. Caution in the application of PCCDs to patients with lateral compression-type fractures is warranted. Copyright © 2017 Air Medical Journal Associates. All rights reserved.
The analysis and modelling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1991-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
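The decomposition described in this abstract can be written schematically as follows (generic notation chosen for illustration, not necessarily the paper's own symbols):

```latex
% Total turbulent dissipation splits into solenoidal and dilatational parts:
\varepsilon = \varepsilon_s + \varepsilon_c,
\qquad
\varepsilon_c \approx \alpha_1 \, M_t^{2} \, \varepsilon_s,
\qquad
M_t = \frac{q}{\bar{c}}
```

Here \(\varepsilon_s\) is the solenoidal (incompressible) dissipation, \(\varepsilon_c\) the compressible dissipation, \(M_t\) a turbulent Mach number built from an rms turbulent velocity \(q\) and the mean sound speed \(\bar{c}\), and \(\alpha_1\) a constant calibrated against the DNS decay rates; this is the general algebraic form of such compressible-dissipation models.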
The analysis and modeling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1989-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
46 CFR 112.50-7 - Compressed air starting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...
46 CFR 112.50-7 - Compressed air starting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma
2016-11-01
To address the problem that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT) is proposed, named FFT-CGI. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. Then the receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method improves the quality of large images compared with conventional ghost imaging and achieves imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced by the combination of compressive sensing and the FFT, and the security level of ghost images is improved against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.
Case study: the introduction of stereoscopic games on the Sony PlayStation 3
NASA Astrophysics Data System (ADS)
Bickerstaff, Ian
2012-03-01
A free stereoscopic firmware update on Sony Computer Entertainment's PlayStation® 3 console provides the potential to increase enormously the popularity of stereoscopic 3D in the home. For this to succeed though, a large selection of content has to become available that exploits 3D in the best way possible. In addition to the existing challenges found in creating 3D movies and television programmes, the stereography must compensate for the dynamic and unpredictable environments found in games. The software must automatically map the depth range of the scene into the display's comfort zone while minimising depth compression. This paper presents a range of techniques developed to solve this problem and the challenge of creating twice as many images as the 2D version without excessively compromising the frame rate or image quality. At the time of writing, over 80 stereoscopic PlayStation 3 games have been released and notable titles are used as examples to illustrate how the techniques have been adapted for different game genres. Since the firmware's introduction in 2010, the industry has matured with a large number of developers now producing increasingly sophisticated 3D content. New technologies such as viewer head tracking and head-mounted displays should increase the appeal of 3D in the home still further.
Space Station thermal storage/refrigeration system research and development
NASA Astrophysics Data System (ADS)
Dean, W. G.; Karu, Z. S.
1993-02-01
Space Station thermal loading conditions represent an order of magnitude increase over current and previous spacecraft such as Skylab, Apollo, Pegasus III, the Lunar Rover Vehicle, and Lockheed TRIDENT missiles. Thermal storage units (TSUs) were successfully used on these as well as in many ground-based solar energy storage applications. It is desirable to store thermal energy during peak loading conditions as an alternative to providing increased radiator surface area, which adds to the weight of the system. Basically, TSUs store heat by melting a phase change material (PCM) such as a paraffin. The physical property data for the PCMs used in the design of these TSUs are well defined in the literature. Design techniques are generally well established for TSUs. However, the Space Station provides a new challenge in the application of these data and techniques because of three factors: the large size of the TSU required, the integration of the TSU into the Space Station thermal management concept with its diverse opportunities for storage application, and the TSU's interface with a two-phase (liquid/vapor) thermal bus/central heat rejection system. The objective of the thermal storage research and development task was to design, fabricate, and test a demonstration unit. One test article was to be a passive thermal storage unit capable of storing frozen food at -20 F for a minimum of 90 days. A second unit was to be capable of storing frozen biological samples at -94 F, again for a minimum of 90 days. The articles developed were compatible with shuttle mission conditions, including safety and handling by astronauts. Further, storage rack concepts were presented so that these units can be integrated into Space Station logistics module storage racks. The extreme sensitivity of spacecraft radiator system design to heat rejection temperature requirements is well known. A large radiator area penalty is incurred if low temperatures are accommodated via a
Space Station thermal storage/refrigeration system research and development
NASA Technical Reports Server (NTRS)
Dean, W. G.; Karu, Z. S.
1993-01-01
Space Station thermal loading conditions represent an order of magnitude increase over current and previous spacecraft such as Skylab, Apollo, Pegasus III, the Lunar Rover Vehicle, and Lockheed TRIDENT missiles. Thermal storage units (TSUs) were successfully used on these as well as in many ground-based solar energy storage applications. It is desirable to store thermal energy during peak loading conditions as an alternative to providing increased radiator surface area, which adds to the weight of the system. Basically, TSUs store heat by melting a phase change material (PCM) such as a paraffin. The physical property data for the PCMs used in the design of these TSUs are well defined in the literature. Design techniques are generally well established for TSUs. However, the Space Station provides a new challenge in the application of these data and techniques because of three factors: the large size of the TSU required, the integration of the TSU into the Space Station thermal management concept with its diverse opportunities for storage application, and the TSU's interface with a two-phase (liquid/vapor) thermal bus/central heat rejection system. The objective of the thermal storage research and development task was to design, fabricate, and test a demonstration unit. One test article was to be a passive thermal storage unit capable of storing frozen food at -20 F for a minimum of 90 days. A second unit was to be capable of storing frozen biological samples at -94 F, again for a minimum of 90 days. The articles developed were compatible with shuttle mission conditions, including safety and handling by astronauts. Further, storage rack concepts were presented so that these units can be integrated into Space Station logistics module storage racks. The extreme sensitivity of spacecraft radiator system design to heat rejection temperature requirements is well known. A large radiator area penalty is incurred if low temperatures are accommodated via a
Agricultural Experiment Stations and Branch Stations in the United States
ERIC Educational Resources Information Center
Pearson, Calvin H.; Atucha, Amaya
2015-01-01
In 1887, Congress passed the Hatch Act, which formally established and provided a funding mechanism for agricultural experiment stations in each state and territory in the United States. The main purpose of agricultural experiment stations is to conduct agricultural research to meet the needs of the citizens of the United States. The objective of…
Space station propulsion requirements study
NASA Technical Reports Server (NTRS)
Wilkinson, C. L.; Brennan, S. M.
1985-01-01
Propulsion system requirements to support Low Earth Orbit (LEO) manned space station development and evolution over a wide range of potential capabilities and for a variety of STS servicing and space station operating strategies are described. For the purpose of this report, the term space station and the overall space station configuration refer to a group of potential LEO spacecraft that support the overall space station mission. The group consisted of the central space station at 28.5 deg or 90 deg inclinations, unmanned free-flying spacecraft that are both tethered and untethered, a short-range servicing vehicle, and a longer range servicing vehicle capable of GEO payload transfer. The time phasing for preferred propulsion technology approaches is also investigated, as well as the high-leverage, state-of-the-art advancements needed, and the qualitative and quantitative benefits of these advancements on STS/space station operations. The time frame of propulsion technologies applicable to this study is the early 1990's to approximately the year 2000.
Apollo experience report: Crew station integration. Volume 1: Crew station design and development
NASA Technical Reports Server (NTRS)
Allen, L. D.; Nussman, D. A.
1976-01-01
An overview of the evolution of the design and development of the Apollo command module and lunar module crew stations is given, with emphasis placed on the period from 1964 to 1969. The organizational planning, engineering techniques, and documentation involved are described, and a detailed chronology of the meetings, reviews, and exercises is presented. Crew station anomalies for the Apollo 7 to 11 missions are discussed, and recommendations for the solution of recurring problems of crew station acoustics, instrument glass failure, and caution and warning system performance are presented. Photographs of the various crew station configurations are also provided.
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2014 CFR
2014-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2013 CFR
2013-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2011 CFR
2011-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2012 CFR
2012-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
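As a rough illustration of the one-pass, source-adaptive idea (a hypothetical sketch, not the authors' actual LAVQ algorithm; the function name and thresholds are invented here), an adaptive vector quantizer can grow its codebook on the fly as it scans the data:

```python
import numpy as np

def lavq_encode(vectors, max_codebook=256, dist_thresh=10.0):
    """One-pass adaptive VQ sketch: emit the index of the nearest
    codeword, growing the codebook when no codeword is close enough."""
    codebook, indices = [], []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        if codebook:
            d = [np.linalg.norm(v - c) for c in codebook]
            best = int(np.argmin(d))
            # Reuse an existing codeword if it is close, or if we are full.
            if d[best] <= dist_thresh or len(codebook) >= max_codebook:
                indices.append(best)
                continue
        codebook.append(v)                 # new codeword learned from the data
        indices.append(len(codebook) - 1)
    return codebook, indices
```

Because the codebook is built from the data itself in a single pass, no prior source statistics are needed; the modifications the abstract lists (nonlinear quantization, codebook coarsening, lossless output coding) would be layered on top of this core loop.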
Squish: Near-Optimal Compression for Archival of Relational Datasets
Gao, Yihan; Parameswaran, Aditya
2017-01-01
Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028
Bitshuffle: Filter for improving compression of typed binary data
NASA Astrophysics Data System (ADS)
Masui, Kiyoshi
2017-12-01
Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except that it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in a row, etc. This transposition is performed within blocks of data roughly 8kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
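The bit-level transposition described above can be sketched with NumPy (an illustrative re-implementation of the idea, not the library's optimized SIMD code, and without the 8 kB blocking):

```python
import numpy as np

def bitshuffle(data: np.ndarray) -> np.ndarray:
    """Rearrange a typed array so equally significant bits are contiguous."""
    elem_size = data.dtype.itemsize
    # One row per element, one column per bit within the element.
    bits = np.unpackbits(data.view(np.uint8).reshape(-1, elem_size), axis=1)
    # Transpose so each bit position forms a long run, then repack into
    # bytes; a downstream compressor (e.g. LZ4 inside HDF5) follows this.
    return np.packbits(bits.T.flatten())
```

On low-entropy numeric data the transposed stream contains long runs of identical bytes (slowly varying high bits all land together), which byte-oriented compressors exploit far better than the original interleaved layout.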
Heterogeneous Compression of Large Collections of Evolutionary Trees.
Matthews, Suzanne J
2015-01-01
Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists that handles large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
Single-Station Sigma for the Iranian Strong Motion Stations
NASA Astrophysics Data System (ADS)
Zafarani, H.; Soghrat, M. R.
2017-11-01
In the development of ground motion prediction equations (GMPEs), the residuals are assumed to have a log-normal distribution with a zero mean and a standard deviation, designated as sigma. Sigma has a significant effect on the evaluation of seismic hazard for designing important infrastructures such as nuclear power plants and dams. Both aleatory and epistemic uncertainties are involved in the sigma parameter. However, ground-motion observations over long time periods are not available at specific sites, and GMPEs have been derived using observed data from multiple sites for a small number of well-recorded earthquakes. Therefore, sigma is dominantly related to the statistics of the spatial variability of ground motion instead of the temporal variability at a single point (the ergodic assumption). The main purpose of this study is to reduce the variability of the residuals so as to handle it as epistemic uncertainty. In this regard, we attempt to partially relax the ergodic assumption by removing repeatable site effects from the total variability of six GMPEs derived from local, Europe-Middle East, and worldwide data. For this purpose, we used 1837 acceleration time histories from 374 shallow earthquakes with moment magnitudes ranging from Mw 4.0 to 7.3, recorded at 370 stations with at least two recordings per station. According to the estimated single-station sigma for the Iranian strong motion stations, the ratio of the event-corrected single-station standard deviation (Φss) to the within-event standard deviation (Φ) is about 0.75. In other words, removing the ergodic assumption on site response resulted in a 25% reduction of the within-event standard deviation, which reduced the total standard deviation by about 15%.
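The quoted ~15% total-sigma reduction follows from combining the variance components in quadrature; the sketch below uses illustrative component values (the tau and phi numbers are assumptions for demonstration, not figures from the study):

```python
import math

def total_sigma(tau: float, phi: float) -> float:
    # Total GMPE standard deviation combines the between-event (tau)
    # and within-event (phi) components in quadrature.
    return math.sqrt(tau**2 + phi**2)

# Hypothetical component values with tau/phi ~ 0.76 (illustrative only):
tau, phi = 0.35, 0.46
phi_ss = 0.75 * phi  # 25% reduction in within-event sigma (the study's ratio)
reduction = 1 - total_sigma(tau, phi_ss) / total_sigma(tau, phi)
print(f"total-sigma reduction: {reduction:.0%}")  # prints "total-sigma reduction: 15%"
```

Because tau is untouched, a 25% cut in phi translates into a smaller cut in the total sigma, consistent with the abstract's 25% vs. 15% figures.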
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper describes the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper develops the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results are presented comparing images compressed using MRC, JPEG, and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
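The layered model can be sketched as a simple decomposition step (a minimal threshold-based illustration; real MRC encoders use far more sophisticated segmentation, and the function name is invented here):

```python
import numpy as np

def mrc_decompose(img: np.ndarray, thresh: int = 128):
    """Split a grayscale compound image into three MRC-style layers:
    a binary selector mask plus foreground and background planes."""
    mask = img < thresh            # True where dark "text" pixels sit
    fg = np.where(mask, img, 0)    # text layer -> binary/text-oriented codec
    bg = np.where(mask, 255, img)  # picture layer -> JPEG/wavelet codec
    return mask, fg, bg
```

Each layer then goes to the codec best suited to it (e.g. a bilevel coder for the mask, a continuous-tone coder for the background), which is exactly the multiple-algorithm combination the abstract describes.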
Compression in wearable sensor nodes: impacts of node topology.
Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther
2014-04-01
Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online and low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission/on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can highly benefit from the use of compressive sensing and now can achieve power consumptions comparable to, or better than, the use of local memory.
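The on-node half of compressive sensing is essentially one small matrix multiply, which is why it suits low-power microcontrollers; a minimal NumPy sketch (sizes and the sensing-matrix construction are illustrative assumptions, and an MCU would use a fixed-point equivalent):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                        # signal length vs. measurements kept
phi = rng.standard_normal((m, n))     # random sensing matrix, known off-node
x = np.zeros(n)
x[[10, 50, 200]] = 1.0                # sparse signal (few active components)
y = phi @ x                           # compressed measurements: 4x less data
# Reconstruction (e.g. l1 minimization) runs off-node, not on the sensor.
```

Only `y` is stored or transmitted, so the storage/radio cost drops by n/m regardless of whether the node streams wirelessly or logs to local memory, which is the trade-off the paper evaluates across node topologies.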
47 CFR 97.209 - Earth station.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Earth station. 97.209 Section 97.209... SERVICE Special Operations § 97.209 Earth station. (a) Any amateur station may be an Earth station. A holder of any class operator license may be the control operator of an Earth station, subject to the...
47 CFR 97.209 - Earth station.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Earth station. 97.209 Section 97.209... SERVICE Special Operations § 97.209 Earth station. (a) Any amateur station may be an Earth station. A holder of any class operator license may be the control operator of an Earth station, subject to the...
47 CFR 97.209 - Earth station.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 5 2012-10-01 2012-10-01 false Earth station. 97.209 Section 97.209... SERVICE Special Operations § 97.209 Earth station. (a) Any amateur station may be an Earth station. A holder of any class operator license may be the control operator of an Earth station, subject to the...
47 CFR 97.209 - Earth station.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Earth station. 97.209 Section 97.209... SERVICE Special Operations § 97.209 Earth station. (a) Any amateur station may be an Earth station. A holder of any class operator license may be the control operator of an Earth station, subject to the...
47 CFR 97.209 - Earth station.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Earth station. 97.209 Section 97.209... SERVICE Special Operations § 97.209 Earth station. (a) Any amateur station may be an Earth station. A holder of any class operator license may be the control operator of an Earth station, subject to the...
47 CFR 97.109 - Station control.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Station control. 97.109 Section 97.109... SERVICE Station Operation Standards § 97.109 Station control. (a) Each amateur station must have at least one control point. (b) When a station is being locally controlled, the control operator must be at the...
47 CFR 97.109 - Station control.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Station control. 97.109 Section 97.109... SERVICE Station Operation Standards § 97.109 Station control. (a) Each amateur station must have at least one control point. (b) When a station is being locally controlled, the control operator must be at the...
47 CFR 97.109 - Station control.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Station control. 97.109 Section 97.109... SERVICE Station Operation Standards § 97.109 Station control. (a) Each amateur station must have at least one control point. (b) When a station is being locally controlled, the control operator must be at the...
47 CFR 97.109 - Station control.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Station control. 97.109 Section 97.109... SERVICE Station Operation Standards § 97.109 Station control. (a) Each amateur station must have at least one control point. (b) When a station is being locally controlled, the control operator must be at the...
Self-Similar Compressible Free Vortices
NASA Technical Reports Server (NTRS)
vonEllenrieder, Karl
1998-01-01
Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependence of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.
Shock-wave studies of anomalous compressibility of glassy carbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molodets, A. M., E-mail: molodets@icp.ac.ru; Golyshev, A. A.; Savinykh, A. S.
2016-02-15
The physico-mechanical properties of amorphous glassy carbon are investigated under shock compression up to 10 GPa. Experiments are carried out on the continuous recording of the mass velocity of compression pulses propagating in glassy carbon samples with initial densities of 1.502(5) g/cm³ and 1.55(2) g/cm³. It is shown that, in both cases, a compression wave in glassy carbon contains a leading precursor with an amplitude of 0.135(5) GPa. It is established that, in the range of pressures up to 2 GPa, a shock discontinuity in glassy carbon is transformed into a broadened compression wave, and shock waves are formed in the release wave, which generally means the anomalous compressibility of the material in both the compression and release waves. It is shown that, at pressures higher than 3 GPa, the anomalous behavior turns into normal behavior, accompanied by the formation of a shock compression wave. In the investigated pressure range, possible structural changes in glassy carbon under shock compression have a reversible character. A physico-mechanical model of glassy carbon is proposed that involves the equation of state and a constitutive relation for Poisson's ratio and allows the numerical simulation of physico-mechanical and thermophysical properties of glassy carbon of different densities in the region of its anomalous compressibility.
Alkaline RFC Space Station prototype - 'Next step Space Station'. [Regenerative Fuel Cells]
NASA Technical Reports Server (NTRS)
Hackler, I. M.
1986-01-01
The regenerative fuel cell, a candidate technology for the Space Station's energy storage system, is described. An advanced development program was initiated to design, manufacture, and integrate a regenerative fuel cell Space Station prototype (RFC SSP). The RFC SSP incorporates long-life fuel cell technology, increased cell area for the fuel cells, and high voltage cell stacks for both units. The RFC SSP's potential for integration with the Space Station's life support and propulsion systems is discussed.
Self-diffusion in compressively strained Ge
NASA Astrophysics Data System (ADS)
Kawamura, Yoko; Uematsu, Masashi; Hoshi, Yusuke; Sawano, Kentarou; Myronov, Maksym; Shiraki, Yasuhiro; Haller, Eugene E.; Itoh, Kohei M.
2011-08-01
Under a compressive biaxial strain of ~0.71%, Ge self-diffusion has been measured using an isotopically controlled Ge single-crystal layer grown on a relaxed Si0.2Ge0.8 virtual substrate. The self-diffusivity is enhanced by the compressive strain, and its behavior is fully consistent with a theoretical prediction of a generalized activation volume model of simple vacancy-mediated diffusion reported by Aziz et al. [Phys. Rev. B 73, 054101 (2006)]. The activation volume of (-0.65±0.21) times the Ge atomic volume quantitatively describes the observed enhancement due to the compressive biaxial strain very well.
Survived ileocecal blowout from compressed air.
Weber, Marco; Kolbus, Frank; Dressler, Jan; Lessig, Rüdiger
2011-03-01
Industrial accidents in which compressed air enters the gastro-intestinal tract are often fatal. The pressures involved usually far exceed those used in medical applications such as colonoscopy and lead to extensive injuries of the intestines with high mortality. The case described in this report is of a 26-year-old man who was injured by compressed air that entered through the anus. He survived because of a prompt emergency operation. This case underlines the necessity of explicit instruction about the hazards of handling compressed-air devices to maintain safety at work. Further, our observations support the hypothesis that the mucosa is the most elastic layer of the intestinal wall.
Code of Federal Regulations, 2014 CFR
2014-10-01
... by ship stations and ship earth stations. 80.1121 Section 80.1121 Telecommunication FEDERAL... § 80.1121 Receipt and acknowledgement of distress alerts by ship stations and ship earth stations. (a) Ship or ship earth stations that receive a distress alert must, as soon as possible, inform the master...
Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo
2016-01-01
Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate practised in training before the performance, and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools in South Korea. Participants 42 senior students of a medical school and two paramedic schools were enrolled, but five dropped out due to physical constraints. Intervention Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression at three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Results Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). Conclusions The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. PMID:26873050
Lagrangian statistics in compressible isotropic homogeneous turbulence
NASA Astrophysics Data System (ADS)
Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi
2011-11-01
In this work we conducted direct numerical simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, i.e., statistics computed along passive tracer trajectories. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. Lagrangian probability density functions (p.d.f.s) were then calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to split the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities are also discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
Brophy-Williams, Ned; Driller, Matthew William; Shing, Cecilia Mary; Fell, James William; Halson, Shona Leigh
2015-01-01
The purpose of this investigation was to measure the interface pressure exerted by lower body sports compression garments, in order to assess the effect of garment type, size and posture in athletes. Twelve national-level boxers were fitted with sports compression garments (tights and leggings), each in three different sizes (undersized, recommended size and oversized). Interface pressure was assessed across six landmarks on the lower limb (ranging from medial malleolus to upper thigh) as athletes assumed sitting, standing and supine postures. Sports compression leggings exerted a significantly higher mean pressure than sports compression tights (P < 0.001). Oversized tights applied significantly less pressure than manufacturer-recommended size or undersized tights (P < 0.001), yet no significant differences were apparent between different-sized leggings. Standing posture resulted in significantly higher mean pressure application than a seated posture for both tights and leggings (P < 0.001 and P = 0.002, respectively). Pressure was different across landmarks, with analyses revealing a pressure profile that was neither strictly graduated nor progressive in nature. The pressure applied by sports compression garments is significantly affected by garment type, size and posture assumed by the wearer.
Rupture of esophagus by compressed air.
Wu, Jie; Tan, Yuyong; Huo, Jirong
2016-11-01
Beverages bottled under pressure, such as cola and champagne, are widely consumed in daily life. Improper ways of opening the bottle, usually with the teeth, can lead to injury and even rupture of the esophagus. This letter to the editor describes a case of esophageal rupture caused by compressed air.
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
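The mean-subtraction and sign-magnitude step described above can be illustrated in a few lines. This is a hedged sketch of the idea, not the ICER-3D implementation; the function name and the flat-list plane representation are ours:

```python
def prepare_subband(plane):
    """Subtract the (rounded) mean from a spatially low-pass subband plane,
    then convert the residuals to sign-magnitude form, the representation
    used before bit-plane entropy coding in ICER-like coders."""
    mean = round(sum(plane) / len(plane))
    residuals = [v - mean for v in plane]
    # sign bit 0 for non-negative, 1 for negative; magnitude is |residual|
    return mean, [(0 if r >= 0 else 1, abs(r)) for r in residuals]
```

For the plane [10, 12, 8, 10] this yields mean 10 and pairs [(0, 0), (0, 2), (1, 2), (0, 0)]; the mean is transmitted once so the decoder can undo the subtraction.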
Compressive stress system for a gas turbine engine
Hogberg, Nicholas Alvin
2015-03-24
The present application provides a compressive stress system for a gas turbine engine. The compressive stress system may include a first bucket attached to a rotor, a second bucket attached to the rotor, the first and the second buckets defining a shank pocket therebetween, and a compressive stress spring positioned within the shank pocket.
1971-01-01
This is an artist's concept of the Research and Applications Modules (RAM). Evolutionary growth was an important consideration in space station planning, and another project was undertaken in 1971 to facilitate such growth. The RAM study, conducted through a Marshall Space Flight Center contract with General Dynamics Convair Aerospace, resulted in the conceptualization of a series of RAM payload carrier-sortie laboratories, pallets, free-flyers, and payload and support modules. The study considered two basic manned systems. The first would use RAM hardware for sortie missions, where laboratories were carried into space and remained attached to the Shuttle for operational periods of up to 7 days. The second envisioned a modular space station capability that could be evolved by mating RAM modules to the space station core configuration. The RAM hardware was to be built by Europeans, thus fostering international participation in the space program.
The effect of compression on individual pressure vessel nickel/hydrogen components
NASA Technical Reports Server (NTRS)
Manzo, Michelle A.; Perez-Davis, Marla E.
1988-01-01
Compression tests were performed on representative Individual Pressure Vessel (IPV) Nickel/Hydrogen cell components in an effort to better understand the effects of force on component compression and the interactions of components under compression. It appears that the separator is the most easily compressed of all of the stack components. It will typically partially compress before any of the other components begin to compress. The compression characteristics of the cell components in assembly differed considerably from what would be predicted based on individual compression characteristics. Component interactions played a significant role in the stack response to compression. The results of the compression tests were factored into the design and selection of Belleville washers added to the cell stack to accommodate nickel electrode expansion while keeping the pressure on the stack within a reasonable range of the original preset.
High-quality compressive ghost imaging
NASA Astrophysics Data System (ADS)
Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun
2018-04-01
We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
Results of subscale MTF compression experiments
NASA Astrophysics Data System (ADS)
Howard, Stephen; Mossman, A.; Donaldson, M.; General Fusion Team
2016-10-01
In magnetized target fusion (MTF) a magnetized plasma torus is compressed in a time shorter than its own energy confinement time, thereby heating to fusion conditions. Understanding plasma behavior and scaling laws is needed to advance toward a reactor-scale demonstration. General Fusion is conducting a sequence of subscale experiments of compact toroid (CT) plasmas being compressed by chemically driven implosion of an aluminum liner, providing data on several key questions. CT plasmas are formed by a coaxial Marshall gun, with magnetic fields supported by internal plasma currents and eddy currents in the wall. Configurations that have been compressed so far include decaying and sustained spheromaks and an ST that is formed into a pre-existing toroidal field. Diagnostics measure B, ne, visible and x-ray emission, Ti and Te. Before compression the CT has an energy of 10 kJ magnetic and 1 kJ thermal, with Te of 100-200 eV and ne ≈ 5×10²⁰ m⁻³. Plasma was stable during a compression factor R0/R > 3 on the best shots. A reactor-scale demonstration would require 10× higher initial B and ne but similar Te. Liner improvements have minimized ripple, tearing and ejection of micro-debris. Plasma-facing surfaces have included plasma-sprayed tungsten, bare Cu and Al, and gettering with Ti and Li.
External Compression Headaches
... People likely to get external compression headaches include construction workers, people in the military, police officers and ... If protective headwear, such as a sports or construction helmet, is necessary, make sure it fits properly ...
NASA Technical Reports Server (NTRS)
Hurst, Victor, IV; West, Sarah; Austin, Paul; Branson, Richard; Beck, George
2006-01-01
Astronaut crew medical officers (CMO) aboard the International Space Station (ISS) receive 40 hours of medical training during the 18 months preceding each mission. Part of this training includes two-person cardiopulmonary resuscitation (CPR) per training guidelines from the American Heart Association (AHA). Recent studies concluded that the use of metronomic tones improves the coordination of CPR by trained clinicians. Similar data for bystander or "trained lay people" (e.g., CMO) performance of CPR (BCPR) have been limited. The purpose of this study was to evaluate whether the use of timing devices, such as audible metronomic tones, would improve BCPR performance by trained bystanders. Twenty pairs of bystanders trained in two-person BCPR performed BCPR for 4 minutes on a simulated cardiopulmonary arrest patient using three interventions: 1) BCPR with no timing devices, 2) BCPR plus metronomic tones for coordinating compression rate only, 3) BCPR with a timing device and metronome for coordinating ventilation and compression rates, respectively. Bystanders were evaluated on their ability to meet international and AHA CPR guidelines. Bystanders failed to provide the recommended number of breaths and number of compressions in the absence of a timing device and in the presence of audible metronomic tones coordinating compression rate only. Bystanders using timing devices to coordinate both components of BCPR provided the recommended number of breaths and were closer to providing the recommended number of compressions compared with the other interventions. Survey results indicated that bystanders preferred to use a metronome for delivery of compressions during BCPR. BCPR performance is improved by timing devices that coordinate both compressions and breaths.
46 CFR 108.633 - Fire stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Fire stations. 108.633 Section 108.633 Shipping COAST... Equipment Markings and Instructions § 108.633 Fire stations. Each fire station must be identified by marking: “FIRE STATION NO. __;” next to the station in letters and numbers at least 5 centimeters (2 inches) high. ...
47 CFR 74.1281 - Station records.
Code of Federal Regulations, 2011 CFR
2011-10-01
... FM Broadcast Booster Stations § 74.1281 Station records. (a) The licensee of a station authorized... booster, except that the station records of a booster or translator licensed to the licensee of the...
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
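As a toy illustration of the correlation half of such a scheme, a coder can store a frame in full only when it is poorly predicted by its predecessor. The threshold, function names and flat frame representation below are hypothetical, not the paper's algorithm:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length pixel sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

def store_in_full(prev_frame, frame, threshold=0.99):
    """Keep a full copy of `frame` only if it correlates weakly with the
    previous frame; highly correlated frames can be coded by reference."""
    return pearson(prev_frame, frame) < threshold
```

In a real microscopy coder the decision would be made per region and combined with morphology-based masking of the moving beads, but the correlation test above is the core of exploiting frame-to-frame redundancy.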
Lossless compression techniques for maskless lithography data
NASA Astrophysics Data System (ADS)
Dai, Vito; Zakhor, Avideh
2002-07-01
Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25-to-1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet-based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels and to 32-level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
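The LZ77 decoding step at the heart of the ZIP-based decoder can be sketched as follows. The token format here (literal bytes and (offset, length) pairs) is a simplification for illustration; a real DEFLATE stream adds Huffman coding and bit-packing on top:

```python
def lz77_decompress(tokens):
    """Decode a stream of LZ77 tokens: an int is a literal byte, a tuple
    (offset, length) copies `length` bytes starting `offset` bytes back
    in the already-decoded output."""
    out = bytearray()
    for tok in tokens:
        if isinstance(tok, tuple):
            offset, length = tok
            for _ in range(length):
                # byte-by-byte copy so a match may overlap its own output
                out.append(out[-offset])
        else:
            out.append(tok)
    return bytes(out)
```

For example, [97, 98, (2, 4)] decodes to b'ababab': the back-reference repeats a two-byte pattern while reading bytes it has just written, and this cheap inner loop is what makes an LZ77 decompressor attractive to put in hardware.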
Pressure prediction model for compression garment design.
Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q
2010-01-01
Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garment manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose design procedures for clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
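A minimal numeric sketch of the Laplace's-law relation described above, treating the limb as a cylinder. The function names, the particular tension form T = E·ε·A/w, and the strain-from-reduction-factor relation are our assumptions for illustration, not the paper's exact equation:

```python
import math

def strain_from_reduction_factor(rf):
    """Strain when a garment cut rf (e.g. 0.20) smaller than the body is
    stretched back to the body circumference: (C - (1-rf)C) / ((1-rf)C)."""
    return rf / (1.0 - rf)

def garment_pressure(young_modulus, strain, fabric_area, fabric_width, circumference):
    """Laplace's law P = T / r for a cylindrical limb: fabric tension per
    unit length T = E * strain * A / w, radius r = C / (2*pi). SI units."""
    tension = young_modulus * strain * fabric_area / fabric_width
    radius = circumference / (2.0 * math.pi)
    return tension / radius
```

With these formulas a 20% reduction factor gives a strain of 0.25, and the predicted pressure is proportional to the Young's modulus measured at the garment's bias angle, which is why the 29-360% modulus variation reported above matters for sizing.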
Modular space station mass properties
NASA Technical Reports Server (NTRS)
1972-01-01
An update of the space station mass properties is presented. Included are the final status update of the Initial Space Station (ISS) modules and logistic module plus incorporation of the Growth Space Station (GSS) module additions.
JPEG XS-based frame buffer compression inside HEVC for power-aware video compression
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël; Pellegrin, Pascal; Macq, Benoit
2017-09-01
With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms further reducing the FB's bandwidth and inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (JPEG XS Test Model). Through this paper, the architecture of our HEVC with JPEG XS-based frame buffer compression is described. Then its performance is compared to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.
Calculation methods for compressible turbulent boundary layers, 1976
NASA Technical Reports Server (NTRS)
Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.
1977-01-01
Equations and closure methods for compressible turbulent boundary layers are discussed. Flow phenomena peculiar to the calculation of these boundary layers were considered, along with calculations of three-dimensional compressible turbulent boundary layers. Procedures for calculating nonsimilar two- and three-dimensional compressible turbulent boundary layers were appended, including finite difference, finite element, and mass-weighted residual methods.
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
Chest compression rates and survival following out-of-hospital cardiac arrest.
Idris, Ahamed H; Guffey, Danielle; Pepe, Paul E; Brown, Siobhan P; Brooks, Steven C; Callaway, Clifton W; Christenson, Jim; Davis, Daniel P; Daya, Mohamud R; Gray, Randal; Kudenchuk, Peter J; Larsen, Jonathan; Lin, Steve; Menegazzi, James J; Sheehan, Kellie; Sopko, George; Stiell, Ian; Nichol, Graham; Aufderheide, Tom P
2015-04-01
Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions/min. A recent clinical study reported optimal return of spontaneous circulation with rates between 100 and 120/min during cardiopulmonary resuscitation for out-of-hospital cardiac arrest. However, the relationship between compression rate and survival is still undetermined. Prospective, observational study. Data are from the Resuscitation Outcomes Consortium Prehospital Resuscitation IMpedance threshold device and Early versus Delayed analysis clinical trial. Adults with out-of-hospital cardiac arrest treated by emergency medical service providers. None. Data were abstracted from monitor-defibrillator recordings for the first five minutes of emergency medical service cardiopulmonary resuscitation. Multiple logistic regression assessed odds ratios for survival by compression rate categories (<80, 80-99, 100-119, 120-139, ≥140), both unadjusted and adjusted for sex, age, witnessed status, attempted bystander cardiopulmonary resuscitation, location of arrest, chest compression fraction and depth, first rhythm, and study site. Compression rate data were available for 10,371 patients; 6,399 also had chest compression fraction and depth data. Age (mean±SD) was 67±16 years. Chest compression rate was 111±19 per minute, compression fraction was 0.70±0.17, and compression depth was 42±12 mm. Circulation was restored in 34%; 9% survived to hospital discharge. After adjustment for covariates without chest compression depth and fraction (n=10,371), a global test found no significant relationship between compression rate and survival (p=0.19). However, after adjustment for covariates including chest compression depth and fraction (n=6,399), the global test found a significant relationship between compression rate and survival (p=0.02), with the reference group (100-119 compressions/min) having the greatest likelihood for survival. After adjustment for chest