Sample records for compressed process stream

  1. Apparatus for the liquefaction of natural gas and methods relating to same

    DOEpatents

    Wilding, Bruce M [Idaho Falls, ID]; McKellar, Michael G [Idaho Falls, ID]; Turner, Terry D [Ammon, ID]; Carney, Francis H [Idaho Falls, ID]

    2009-09-29

    An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through an expander creating work output. A compressor may be driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is divided into first and second portions with the first portion being expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. The second portion of the cooled, compressed process stream is also expanded and used to cool the compressed process stream.
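
    The expander-drives-compressor coupling described in this record (and its near-duplicates below) can be illustrated with a back-of-the-envelope energy balance. The sketch below is not from the patent: it assumes ideal-gas isentropic relations, rough methane properties, and placeholder letdown-station pressures, temperatures, and stream split.

    ```python
    # Hedged illustration (not from the patent): an ideal-gas estimate of how much
    # compression work the expander on the cooling stream can supply.
    # All numerical values are placeholder assumptions.

    GAMMA = 1.31   # heat-capacity ratio of methane (approximate)
    CP = 2220.0    # J/(kg K), specific heat of methane (approximate)

    def isentropic_expander_work(t_in_k, p_in, p_out, eff=0.85):
        """Specific work output (J/kg) of an expander, ideal-gas isentropic basis."""
        return eff * CP * t_in_k * (1.0 - (p_out / p_in) ** ((GAMMA - 1.0) / GAMMA))

    # Cooling stream expands from assumed pipeline pressure (~6 MPa) to ~0.6 MPa.
    w_exp = isentropic_expander_work(280.0, 6.0e6, 0.6e6)

    # Work balance: expander output is the budget for the process-stream compressor.
    m_cool, m_proc = 3.0, 1.0                      # kg/s, assumed stream split
    w_comp_budget = m_cool * w_exp / m_proc        # J per kg of process gas
    print(f"expander work: {w_exp/1e3:.0f} kJ/kg, "
          f"compressor budget: {w_comp_budget/1e3:.0f} kJ/kg of process gas")
    ```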

  2. Apparatus for the liquefaction of natural gas and methods relating to same

    DOEpatents

    Wilding, Bruce M [Idaho Falls, ID]; Bingham, Dennis N [Idaho Falls, ID]; McKellar, Michael G [Idaho Falls, ID]; Turner, Terry D [Ammon, ID]; Raterman, Kevin T [Idaho Falls, ID]; Palmer, Gary L [Shelley, ID]; Klingler, Kerry M [Idaho Falls, ID]; Vranicar, John J [Concord, CA]

    2007-05-22

    An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through a turbo expander creating work output. A compressor is driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is divided into first and second portions with the first portion being expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. The second portion of the cooled, compressed process stream is also expanded and used to cool the compressed process stream. Additional features and techniques may be integrated with the liquefaction process including a water clean-up cycle and a carbon dioxide (CO2) clean-up cycle.

  3. Apparatus For The Liquefaction Of Natural Gas And Methods Relating To Same

    DOEpatents

    Wilding, Bruce M.; Bingham, Dennis N.; McKellar, Michael G.; Turner, Terry D.; Raterman, Kevin T.; Palmer, Gary L.; Klingler, Kerry M.; Vranicar, John J.

    2005-11-08

    An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through a turbo expander creating work output. A compressor is driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is divided into first and second portions with the first portion being expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. The second portion of the cooled, compressed process stream is also expanded and used to cool the compressed process stream. Additional features and techniques may be integrated with the liquefaction process including a water clean-up cycle and a carbon dioxide (CO2) clean-up cycle.

  4. Apparatus For The Liquefaction Of Natural Gas And Methods Relating To Same

    DOEpatents

    Wilding, Bruce M.; Bingham, Dennis N.; McKellar, Michael G.; Turner, Terry D.; Raterman, Kevin T.; Palmer, Gary L.; Klingler, Kerry M.; Vranicar, John J.

    2005-05-03

    An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through a turbo expander creating work output. A compressor is driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is divided into first and second portions with the first portion being expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. The second portion of the cooled, compressed process stream is also expanded and used to cool the compressed process stream. Additional features and techniques may be integrated with the liquefaction process including a water clean-up cycle and a carbon dioxide (CO2) clean-up cycle.

  5. Apparatus For The Liquefaction Of Natural Gas And Methods Relating To Same

    DOEpatents

    Wilding, Bruce M.; Bingham, Dennis N.; McKellar, Michael G.; Turner, Terry D.; Raterman, Kevin T.; Palmer, Gary L.; Klingler, Kerry M.; Vranicar, John J.

    2003-06-24

    An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through a turbo expander creating work output. A compressor is driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is divided into first and second portions with the first portion being expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. The second portion of the cooled, compressed process stream is also expanded and used to cool the compressed process stream. Additional features and techniques may be integrated with the liquefaction process including a water clean-up cycle and a carbon dioxide (CO2) clean-up cycle.

  6. Optimized heat exchange in a CO2 de-sublimation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, Larry; Terrien, Paul; Tessier, Pascal

    The present invention is a process for removing carbon dioxide from a compressed gas stream including cooling the compressed gas in a first heat exchanger, introducing the cooled gas into a de-sublimating heat exchanger, thereby producing a first solid carbon dioxide stream and a first carbon dioxide poor gas stream, expanding the carbon dioxide poor gas stream, thereby producing a second solid carbon dioxide stream and a second carbon dioxide poor gas stream, combining the first solid carbon dioxide stream and the second solid carbon dioxide stream, thereby producing a combined solid carbon dioxide stream, and indirectly exchanging heat between the combined solid carbon dioxide stream and the compressed gas in the first heat exchanger.

  7. Apparatus for the liquefaction of natural gas and methods relating to same

    DOEpatents

    Turner, Terry D [Ammon, ID]; Wilding, Bruce M [Idaho Falls, ID]; McKellar, Michael G [Idaho Falls, ID]

    2009-09-22

    An apparatus and method for producing liquefied natural gas. A liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream passes through an expander creating work output. A compressor may be driven by the work output and compresses the process stream. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is expanded to liquefy the natural gas. A gas-liquid separator separates a vapor from the liquid natural gas. A portion of the liquid gas is used for additional cooling. Gas produced within the system may be recompressed for reintroduction into a receiving line or recirculation within the system for further processing.

  8. Apparatus for the liquefaction of a gas and methods relating to same

    DOEpatents

    Turner, Terry D [Idaho Falls, ID]; Wilding, Bruce M [Idaho Falls, ID]; McKellar, Michael G [Idaho Falls, ID]

    2009-12-29

    Apparatuses and methods are provided for producing liquefied gas, such as liquefied natural gas. In one embodiment, a liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream may be sequentially pass through a compressor and an expander. The process stream may also pass through a compressor. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. A portion of the liquid gas may be used for additional cooling. Gas produced within the system may be recompressed for reintroduction into a receiving line.

  9. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.
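
    As a rough illustration of the approach described above (objective features in, perceived-quality estimate out), the sketch below substitutes a plain feedforward regressor for the paper's circular back-propagation network; the feature names, data, and scores are invented placeholders.

    ```python
    # Hedged stand-in for the paper's circular back-propagation network: a plain
    # feedforward regressor mapping bitstream features to a quality score.
    # Features and training data are synthetic, for illustration only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Objective features extracted from the compressed stream (e.g. mean quantizer
    # scale, bits per macroblock, motion-vector magnitude) -- assumed examples.
    X = rng.random((200, 3))
    # Synthetic "subjective scores": quality drops as quantization rises.
    y = 5.0 - 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:150], y[:150])
    print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
    ```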

  10. An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1

    DTIC Science & Technology

    1992-11-01

    Lempel, Ziv, Welch (LZW) Compression ... Lossless Compression Test Results ... since IBM holds the patent for this technique. The LZW compression is related to two compression techniques known as ... compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a
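
    Since the snippet above centers on LZW, a minimal textbook LZW encoder may help; this is the generic dictionary-growing algorithm, not the specific implementation the report tested.

    ```python
    def lzw_compress(data: bytes) -> list[int]:
        """Textbook LZW: grow a phrase dictionary, emit codes for longest matches."""
        dictionary = {bytes([i]): i for i in range(256)}  # seed with single bytes
        next_code = 256
        phrase = b""
        codes = []
        for byte in data:
            candidate = phrase + bytes([byte])
            if candidate in dictionary:
                phrase = candidate                 # extend the current match
            else:
                codes.append(dictionary[phrase])   # emit code for longest match
                dictionary[candidate] = next_code  # learn the new phrase
                next_code += 1
                phrase = bytes([byte])
        if phrase:
            codes.append(dictionary[phrase])
        return codes

    print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
    ```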

  11. System using data compression and hashing adapted for use for multimedia encryption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffland, Douglas R

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
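
    The record describes three steps: compress the media signal, select a subset of the compressed stream, and hash that subset into a keyword. A minimal sketch follows; the selection rule (every 16th byte), zlib, and SHA-256 are illustrative stand-ins, since the abstract does not name them.

    ```python
    # Minimal sketch of the three steps in the abstract: compress, select, hash.
    # The selection rule and hash choice are assumptions, not the patent's.
    import hashlib
    import zlib

    def derive_keyword(media: bytes) -> bytes:
        compressed = zlib.compress(media)          # 1. compress the media signal
        selected = compressed[::16]                # 2. select a set of data from it
        return hashlib.sha256(selected).digest()   # 3. hash the set into a keyword

    key = derive_keyword(b"example media payload" * 100)
    print(key.hex()[:32])
    ```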

  12. Partial oxidation power plant with reheating and method thereof

    DOEpatents

    Newby, Richard A.; Yang, Wen-Ching; Bannister, Ronald L.

    1999-01-01

    A system and method for generating power having an air compression/partial oxidation system, a turbine, and a primary combustion system. The air compression/partial oxidation system receives a first air stream and a fuel stream and produces a first partially oxidized fuel stream and a first compressed air stream therefrom. The turbine expands the first partially oxidized fuel stream while being cooled by the first compressed air stream to produce a heated air stream. The heated air stream is injected into the expanding first partially oxidized fuel stream, thereby reheating it in the turbine. A second partially oxidized fuel stream is emitted from the turbine. The primary combustion system receives said second partially oxidized fuel stream and a second air stream, combusts said second partially oxidized fuel stream, and produces rotating shaft power and an emission stream therefrom.

  13. Partial oxidation power plant with reheating and method thereof

    DOEpatents

    Newby, R.A.; Yang, W.C.; Bannister, R.L.

    1999-08-10

    A system and method are disclosed for generating power having an air compression/partial oxidation system, a turbine, and a primary combustion system. The air compression/partial oxidation system receives a first air stream and a fuel stream and produces a first partially oxidized fuel stream and a first compressed air stream therefrom. The turbine expands the first partially oxidized fuel stream while being cooled by the first compressed air stream to produce a heated air stream. The heated air stream is injected into the expanding first partially oxidized fuel stream, thereby reheating it in the turbine. A second partially oxidized fuel stream is emitted from the turbine. The primary combustion system receives said second partially oxidized fuel stream and a second air stream, combusts said second partially oxidized fuel stream, and produces rotating shaft power and an emission stream therefrom. 2 figs.

  14. Light-weight reference-based compression of FASTQ data.

    PubMed

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, and reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to map them quickly against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with a general-purpose compression algorithm such as LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression and contributes to state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
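
    The first stage described above, parsing a FASTQ input into metadata, read, and quality streams, is simple to sketch. The per-stream encoders (incremental, run-length-limited, read mapping) are omitted here, and lzma stands in for the general-purpose back end; this is an illustration, not LW-FQZip itself.

    ```python
    # Sketch of the stream-splitting stage only; downstream encoders omitted.
    import lzma

    def split_fastq(text: str):
        meta, reads, quals = [], [], []
        lines = text.strip().splitlines()
        for i in range(0, len(lines), 4):          # FASTQ records are 4 lines each
            meta.append(lines[i])                  # @ metadata line
            reads.append(lines[i + 1])             # short read (bases)
            quals.append(lines[i + 3])             # quality score string
        return meta, reads, quals

    record = "@read1\nACGTACGT\n+\nIIIIHHHH\n"
    meta, reads, quals = split_fastq(record * 3)
    packed = lzma.compress("\n".join(meta + reads + quals).encode())
    print(len(packed), "bytes packed")
    ```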

  15. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-07-01

    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented. By means of the proposed hyperchaotic system, a new pseudorandom key stream generator is constructed. The algorithm adopts a diffusion and confusion structure to perform encryption, which is based on the key stream generator and the proposed hyperchaotic system. The key sequence used for image encryption is related to the plaintext. By means of the second generation curvelet transform, run-length coding, and Huffman coding, the image data are compressed. Compression and encryption are performed jointly in a single process. The security test results indicate the proposed methods have high security and good compression performance.
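
    A heavily simplified stand-in can show the keystream-plus-diffusion shape of such schemes: below, a logistic map replaces the paper's five-dimensional hyperchaotic generator, and a plaintext-dependent seed stands in for the plaintext-related key sequence. This is illustrative only, not secure, and not the authors' construction.

    ```python
    # Illustrative chaotic-keystream XOR cipher; NOT the paper's scheme, NOT secure.
    def logistic_keystream(seed: float, n: int):
        x = seed
        out = []
        for _ in range(n):
            x = 3.99 * x * (1.0 - x)           # chaotic iteration (logistic map)
            out.append(int(x * 256) % 256)     # quantize state to a key byte
        return out

    def encrypt(plaintext: bytes, key_seed: float) -> bytes:
        # Make the key stream depend on the plaintext, as the abstract notes.
        seed = (key_seed + sum(plaintext) / (255.0 * len(plaintext) + 1)) % 1.0
        ks = logistic_keystream(seed or 0.5, len(plaintext))
        return bytes(p ^ k for p, k in zip(plaintext, ks))

    print(encrypt(b"compressed image data", 0.3141).hex())
    ```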

  16. An Assessment of the Effect of Compressibility on Dynamic Stall

    NASA Technical Reports Server (NTRS)

    Carr, Lawrence W.; Chandrasekhara, M. S.; Davis, Sanford S. (Technical Monitor)

    1994-01-01

    Compressibility plays a significant role in the development of separation on airfoils experiencing unsteady motion, even at moderately compressible free-stream flow velocities. This effect can result in completely changed stall characteristics compared to those observed at incompressible speeds, and can dramatically affect techniques used to control separation. There has been a significant effort in recent years directed toward better understanding of this process, and its impact on possible techniques for control of separation in this complex environment. A review of existing research in this area will be presented, with emphasis on the physical mechanisms that play such an important role in the development of separation on airfoils. The increasing impact of compressibility on the stall process will be discussed as a function of free-stream Mach number, and an analysis of the changing flow physics will be presented. Examples of the effect of compressibility on dynamic stall will be selected from both recent and historical efforts by members of the aerospace community, as well as from the ongoing research program of the present authors. This will include a presentation of a sample of high-speed filming of compressible dynamic stall which has recently been created using real-time interferometry.

  17. Low energy consumption method for separating gaseous mixtures and in particular for medium purity oxygen production

    DOEpatents

    Juhasz, Albert J.; Burkhart, James A.; Greenberg, Ralph

    1988-01-01

    A method for the separation of gaseous mixtures such as air and for producing medium purity oxygen, comprising compressing the gaseous mixture in a first compressor to about 3.9-4.1 atmospheres pressure, passing said compressed gaseous mixture in heat exchange relationship with sub-ambient temperature gaseous nitrogen, dividing the cooled, pressurized gaseous mixture into first and second streams, introducing the first stream into the high pressure chamber of a double rectification column, separating the gaseous mixture in the rectification column into a liquid oxygen-enriched stream and a gaseous nitrogen stream and supplying the gaseous nitrogen stream for cooling the compressed gaseous mixture, removing the liquid oxygen-enriched stream from the low pressure chamber of the rectification column and pumping the liquid, oxygen-enriched stream to a predetermined pressure, cooling the second stream, condensing the cooled second stream and evaporating the oxygen-enriched stream in an evaporator-condenser, delivering the condensed second stream to the high pressure chamber of the rectification column, and heating the oxygen-enriched stream and blending the oxygen-enriched stream with a compressed blend-air stream to the desired oxygen concentration.

  18. Application of Porous Polydimethylsiloxane (PDMS) in oil absorption

    NASA Astrophysics Data System (ADS)

    Norfatriah, Abdullah; Syamaizar, Ahmad Sabli Ahmad; Samah Zuruzi, Abu

    2018-04-01

    Porous polydimethylsiloxane (PDMS) displays both hydrophobic and oleophilic behaviour, which makes it a suitable material to absorb oil from an aqueous stream. Furthermore, its elastomeric nature means that porous PDMS can serve as a reusable sorbent for oil. For such applications, porous PDMS has to (i) absorb oil from the aqueous stream quickly and (ii) discharge oil rapidly when compressed. In this study, porous PDMS was fabricated using a sugar templating method. The ability of porous PDMS to absorb olive, sunflower and vegetable oils with and without vibration was investigated. Small amplitude vibration was found to accelerate the oil uptake process, speeding the absorption of olive and vegetable oil by 2.5 and 3 times, respectively. Compressive stress-strain curves over compression rates between 2 and 100 mm per min are similar, indicating that the mechanical properties of porous PDMS do not vary significantly and that it can be rapidly compressed.

  19. A new display stream compression standard under development in VESA

    NASA Astrophysics Data System (ADS)

    Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James

    2017-09-01

    The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. Functionality of the ADSC codec is described in this paper, and subjective trials results are provided using the ISO 29170-2 testing protocol.

  20. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, and finding the optimal quality/volume ratio for a video encoding method is a pressing problem given the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent a video stream, effectively reducing the bandwidth required for transmission and storage. When television measuring systems are used, the uncertainties introduced by compression of the video signal must be taken into account. Many digital compression methods exist; the aim of this work is to research the influence of video compression on measurement error in television systems. Measurement error of an object parameter is the main characteristic of a television measuring system, and accuracy characterizes the difference between the measured value and the actual parameter value. The optical system is one source of error in television-system measurements; the method of processing the received video signal is another. With compression at a constant data stream rate, errors lead to large distortions; with compression at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are uncorrelated with each other, and entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. One can select a transformation such that, for typical images, most of the matrix coefficients are almost zero; excluding these zero coefficients reduces the digital stream further. The discrete cosine transform is the most widely used such orthogonal transformation. This paper analyzes the errors of television measuring systems and data compression protocols. The main characteristics of measuring systems are described, the sources of their errors are identified, and the most effective methods of video compression are determined. The influence of video compression error on television measuring systems was researched, and the results obtained will increase the accuracy of such systems. A television image quality measuring system must account both for distortions identical to those in analog systems and for the specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. Distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because playback quality at the receiver depends on the random pre- and post-history of the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.

  1. A packet data compressor

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Choi, Junho

    1995-01-01

    We are in the preliminary stages of creating an operational system for losslessly compressing packet data streams. The end goal is to reduce costs. Real world constraints include transmission in the presence of error, tradeoffs between the costs of compression and the costs of transmission and storage, and imperfect knowledge of the data streams to be transmitted. The overall method is to bring together packets of similar type, split the data into bit fields, and test a large number of compression algorithms. Preliminary results are very encouraging, typically offering compression factors substantially higher than those obtained with simpler generic byte stream compressors, such as Unix Compress and HA 0.98.
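
    The "test a large number of compression algorithms" step is easy to sketch: run each candidate codec over one bit field's byte stream and keep the smallest output. The candidate set below (zlib, bz2, lzma) is an assumption; the paper's actual candidates are not listed in this abstract.

    ```python
    # Try several codecs on a field's byte stream and keep the smallest result.
    import bz2, lzma, zlib

    CANDIDATES = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

    def best_codec(field_bytes: bytes):
        results = {name: len(fn(field_bytes)) for name, fn in CANDIDATES.items()}
        winner = min(results, key=results.get)
        return winner, results[winner]

    # Bytes from one bit field, gathered across packets of the same type (assumed).
    stream = bytes(range(64)) * 100
    print(best_codec(stream))
    ```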

  2. New Algorithms and Lower Bounds for Sequential-Access Data Compression

    NASA Astrophysics Data System (ADS)

    Gagie, Travis

    2009-02-01

    This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.

  3. Process for CO2 capture using zeolites from high pressure and moderate temperature gas streams

    DOEpatents

    Siriwardane, Ranjani V [Morgantown, WV]; Stevens, Robert W [Morgantown, WV]

    2012-03-06

    A method for separating CO2 from a gas stream comprised of CO2 and other gaseous constituents using a zeolite sorbent in a swing-adsorption process, producing a high temperature CO2 stream at a higher CO2 pressure than the input gas stream. The method utilizes CO2 desorption in a CO2 atmosphere and effectively integrates heat transfers to optimize overall efficiency. H2O adsorption does not preclude effective operation of the sorbent. The cycle may be incorporated in an IGCC plant for efficient pre-combustion CO2 capture. A particular application operates on shifted syngas at a temperature exceeding 200 °C and produces a dry CO2 stream at low temperature and high CO2 pressure, greatly reducing any subsequent compression energy requirements.

  4. A comparative study of several compressibility corrections to turbulence models applied to high-speed shear layers

    NASA Technical Reports Server (NTRS)

    Viegas, John R.; Rubesin, Morris W.

    1991-01-01

    Several recently published compressibility corrections to the standard k-epsilon turbulence model are used with the Navier-Stokes equations to compute the mixing region of a large variety of high speed flows. These corrections, specifically developed to address the inability of higher order turbulence models to accurately predict the spread rate of compressible free shear flows, are applied to two-stream flows of the same gas mixing under a large variety of free stream conditions. Results are presented for two types of flows: unconfined streams with either (1) matched total temperatures and static pressures, or (2) matched static temperatures and pressures, and a confined stream.

  5. A numerical study of axisymmetric compressible non-isothermal and reactive swirling flow

    NASA Astrophysics Data System (ADS)

    Tavernetti, William E.; Hafez, Mohamed M.

    2017-09-01

    Non-linear dynamical phenomena in combustion processes are an active area of experimental and theoretical research, in large part due to increasingly strict environmental pressures to make gas turbine engines and industrial burners more efficient. Using numerical methods for steady and unsteady confined and unconfined compressible flow, this study examines the modeling influence of compressibility for axisymmetric swirling flow. The compressible reactive Navier-Stokes equations in terms of stream function, vorticity, and circulation are used. Results, details of the numerical algorithms, as well as numerical verification techniques and validation against sources from the literature, will be presented. Understanding how vortex breakdown phenomena are affected by modeling reactant consumption with compressibility effects is the main goal of this study.

  6. Effect of initial conditions on constant pressure mixing between two turbulent streams

    NASA Astrophysics Data System (ADS)

    Kangovi, S.

    1983-02-01

    It is pointed out that a study of the process of mixing between two dissimilar streams has varied applications in different fields, including the design of an afterburner in a high-bypass-ratio aircraft engine and the disposal of effluents in a stream. The mixing process determines important quantities related to the energy transfer from the main stream to the secondary stream, the temperature and velocity profiles, the local kinematic and dissipative structure within the mixing region, and the growth of the mixing layer. Hill and Page (1968) proposed an 'assumed epsilon' method in which the eddy viscosity model of Goertler (1942) is modified to account for the initial boundary layer. The present investigation is concerned with the application of the assumed-epsilon technique to the study of the effect of initial conditions on the development of the turbulent mixing layer between two compressible, nonisoenergetic streams at constant pressure.

  7. Large Amplitude IMF Fluctuations in Corotating Interaction Regions: Ulysses at Midlatitudes

    NASA Technical Reports Server (NTRS)

    Tsurutani, Bruce T.; Ho, Christian M.; Arballo, John K.; Goldstein, Bruce E.; Balogh, Andre

    1995-01-01

    Corotating Interaction Regions (CIRs), formed by high-speed corotating streams interacting with slow-speed streams, have been examined from -20 deg to -36 deg heliolatitudes. The high-speed streams emanate from a polar coronal hole in which Ulysses eventually becomes fully embedded as it travels towards the south pole. We find that the trailing portion of the CIR, from the interface surface (IF) to the reverse shock (RS), contains both large amplitude transverse fluctuations and magnitude fluctuations. Similar fluctuations have been previously noted to exist within CIRs detected in the ecliptic plane, but their existence has not been explained. The normalized magnetic field component variances within this portion of the CIR and in the trailing high-speed stream are approximately the same, indicating that the fluctuations in the CIR are compressed Alfven waves. Mirror mode structures with lower intensities are also observed in the trailing portion of the CIR, presumably generated from a local instability driven by free energy associated with compression of the high-speed solar wind plasma. The mixture of these two modes (compressed Alfven waves and mirror modes), plus other modes generated by three-wave processes (wave-shock interactions), leads to a lower Alfvenicity within the trailing portion of the CIR than in the high-speed stream proper. The results presented in this paper suggest a mechanism for generation of large amplitude B(sub z) fluctuations within CIRs. Such phenomena have been noted to be responsible for the generation of moderate geomagnetic storms during the declining phase of the solar cycle.

  8. No Bit Left Behind: The Limits of Heap Data Compression

    DTIC Science & Technology

    2008-06-01

    Lempel-Ziv compression is non-lossy; in other words, the original data can be fully recovered by decompression. Unlike the data representations for most ... of the other models, Lempel-Ziv compressed data does not permit random access, let alone in-place update. To compute this model as accurately as ... of the collection, we print the size of the full stream, i.e., all live data in the heap. We then apply Lempel-Ziv compression to the stream

  9. Apparatus and process for the separation of gases using supersonic expansion and oblique wave compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VanOsdol, John G.

    The disclosure provides an apparatus and method for gas separation through the supersonic expansion and subsequent deceleration of a gaseous stream. The gaseous constituent changes phase from the gaseous state by desublimation or condensation during the acceleration, producing a collectible constituent, and an oblique shock diffuser decelerates the gaseous stream to a subsonic velocity while maintaining the collectible constituent in the non-gaseous state. Following deceleration, the carrier gas and the collectible constituent at the subsonic velocity are separated by a separation means, such as a centrifugal, electrostatic, or impingement separator. In an embodiment, the gaseous stream issues from a combustion process and is comprised of N2 and CO2.

  10. Finite Element Modeling and Analysis of Powder Stream in Low Pressure Cold Spray Process

    NASA Astrophysics Data System (ADS)

    Goyal, Tarun; Walia, Ravinderjit Singh; Sharma, Prince; Sidhu, Tejinder Singh

    2016-07-01

    Low pressure cold gas dynamic spray (LPCGDS) is a coating process that utilizes low-pressure gas (5-10 bar instead of 25-30 bar) and radial injection of powder instead of axial injection, with particle sizes in the range 1-50 μm. In the LPCGDS process, pressurized compressed gas is accelerated to the critical velocity, which depends on the length of the divergent section of the nozzle, the propellant gas and particle characteristics, and the ratio of the inlet and outlet diameters. This paper presents finite element modeling (FEM) of the powder stream in a supersonic nozzle, wherein adiabatic gas flow and expansion of the gas occur in a uniform manner, and the model is used to evaluate the resultant temperature and velocity contours during the coating process. FEM analyses were performed using the commercial finite volume package ANSYS CFD FLUENT. The results are helpful to predict the characteristics of the powder stream at the exit of the supersonic nozzle.

  11. Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link

    DOEpatents

    Rush, Bobby G.; Riley, Robert

    2015-09-29

    Pre-processing is applied to a raw VideoSAR (or similar near-video rate) product to transform the image frame sequence into a product that resembles more closely the type of product for which conventional video codecs are designed, while sufficiently maintaining utility and visual quality of the product delivered by the codec.

  12. A hybrid data compression approach for online backup service

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup service has become a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for the system designer, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects: for example, data stream compression can only realize intra-file compression, de-duplication is used to eliminate inter-file redundant data, and the compression efficiency cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the adaptability of different algorithms to particular situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
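
    The two-level structure described above can be sketched in a few lines: content hashes de-duplicate whole chunks across users (global level), and each unique chunk is compressed on its own (block level). Chunk size, SHA-256, and zlib are assumptions for illustration, not the paper's exact choices.

    ```python
    # Two-level backup sketch: global chunk dedup + per-chunk compression.
    import hashlib
    import zlib

    store: dict[str, bytes] = {}            # global chunk store, keyed by digest

    def backup(data: bytes, chunk_size: int = 4096) -> list[str]:
        """Return the recipe (list of digests) needed to restore `data`."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:                   # inter-file / inter-user dedup
                store[digest] = zlib.compress(chunk)  # intra-chunk compression
            recipe.append(digest)
        return recipe

    r1 = backup(b"shared document body " * 500)
    r2 = backup(b"shared document body " * 500)     # second user, same content
    print(len(store), "unique chunks stored for two identical backups")
    ```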

  13. Chemically assisted mechanical refrigeration process

    DOEpatents

    Vobach, Arnold R.

    1987-01-01

    There is provided a chemically assisted mechanical refrigeration process including the steps of: mechanically compressing a refrigerant stream which includes vaporized refrigerant; contacting the refrigerant with a solvent in a mixer (11) at a pressure sufficient to promote substantial dissolving of the refrigerant in the solvent in the mixer (11) to form a refrigerant-solvent solution while concurrently placing the solution in heat exchange relation with a working medium to transfer energy to the working medium, said refrigerant-solvent solution exhibiting a negative deviation from Raoult's Law; reducing the pressure over the refrigerant-solvent solution in an evaporator (10) to allow the refrigerant to vaporize and substantially separate from the solvent while concurrently placing the evolving refrigerant-solvent solution in heat exchange relation with a working medium to remove energy from the working medium to thereby form a refrigerant stream and a solvent stream; and passing the solvent and refrigerant stream from the evaporator.

  14. Chemically assisted mechanical refrigeration process

    DOEpatents

    Vobach, Arnold R.

    1987-01-01

    There is provided a chemically assisted mechanical refrigeration process including the steps of: mechanically compressing a refrigerant stream which includes vaporized refrigerant; contacting the refrigerant with a solvent in a mixer (11) at a pressure sufficient to promote substantial dissolving of the refrigerant in the solvent in the mixer (11) to form a refrigerant-solvent solution while concurrently placing the solution in heat exchange relation with a working medium to transfer energy to the working medium, said refrigerant-solvent solution exhibiting a negative deviation from Raoult's Law; reducing the pressure over the refrigerant-solvent solution in an evaporator (10) to allow the refrigerant to vaporize and substantially separate from the solvent while concurrently placing the evolving refrigerant-solvent solution in heat exchange relation with a working medium to remove energy from the working medium to thereby form a refrigerant stream and a solvent stream; and passing the solvent and refrigerant stream from the evaporator.

  15. Chemically assisted mechanical refrigeration process

    DOEpatents

    Vobach, A.R.

    1987-06-23

    There is provided a chemically assisted mechanical refrigeration process including the steps of: mechanically compressing a refrigerant stream which includes vaporized refrigerant; contacting the refrigerant with a solvent in a mixer at a pressure sufficient to promote substantial dissolving of the refrigerant in the solvent in the mixer to form a refrigerant-solvent solution while concurrently placing the solution in heat exchange relation with a working medium to transfer energy to the working medium, said refrigerant-solvent solution exhibiting a negative deviation from Raoult's Law; reducing the pressure over the refrigerant-solvent solution in an evaporator to allow the refrigerant to vaporize and substantially separate from the solvent while concurrently placing the evolving refrigerant-solvent solution in heat exchange relation with a working medium to remove energy from the working medium to thereby form a refrigerant stream and a solvent stream; and passing the solvent and refrigerant stream from the evaporator. 5 figs.

  16. Chemically assisted mechanical refrigeration process

    DOEpatents

    Vobach, A.R.

    1987-11-24

    There is provided a chemically assisted mechanical refrigeration process including the steps of: mechanically compressing a refrigerant stream which includes vaporized refrigerant; contacting the refrigerant with a solvent in a mixer at a pressure sufficient to promote substantial dissolving of the refrigerant in the solvent in the mixer to form a refrigerant-solvent solution while concurrently placing the solution in heat exchange relation with a working medium to transfer energy to the working medium, said refrigerant-solvent solution exhibiting a negative deviation from Raoult's Law; reducing the pressure over the refrigerant-solvent solution in an evaporator to allow the refrigerant to vaporize and substantially separate from the solvent while concurrently placing the evolving refrigerant-solvent solution in heat exchange relation with a working medium to remove energy from the working medium to thereby form a refrigerant stream and a solvent stream; and passing the solvent and refrigerant stream from the evaporator. 5 figs.

  17. Supercritical fluid crystallization of griseofulvin: crystal habit modification with a selective growth inhibitor.

    PubMed

    Jarmer, Daniel J; Lengsfeld, Corinne S; Anseth, Kristi S; Randolph, Theodore W

    2005-12-01

    Poly(sebacic anhydride) (PSA) was used as a growth inhibitor to selectively modify the habit of griseofulvin crystals formed via the precipitation with a compressed-fluid antisolvent (PCA) process. PSA and griseofulvin were coprecipitated within a PCA injector, which provided efficient mixing between the solution and compressed-antisolvent process streams. Griseofulvin crystal habit was modified from acicular to bipyramidal when the mass ratio of PSA/griseofulvin in the solution feed stream was

  18. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.

  19. A perioperative echocardiographic reporting and recording system.

    PubMed

    Pybus, David A

    2004-11-01

    Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.

  20. The Linear Perturbation Theory of Axially Symmetric Compressible Flow with Application to the Effect of Compressibility on the Pressure Coefficient at the Surface of a Body of Revolution

    DTIC Science & Technology

    1947-07-18

    ... which ψ = constant constitute a surface which may be called a stream surface. The stream surface is in turn made up of streamlines. If a ... potential and stream function would be, respectively, V₀x and ½V₀r². The stream surfaces would be right circular cylinders with axes along the x ... there is a double infinity of methods. In general, in transforming from the compressible-flow field to the incompressible-flow field, stream

  1. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of the data stream, from the sensor downlink to electronic delivery of browse data products, are explored. The factors influencing the design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  2. Data Streaming for Metabolomics: Accelerating Data Processing and Analysis from Days to Minutes

    PubMed Central

    2016-01-01

    The speed and throughput of analytical platforms has been a driving force in recent years in the “omics” technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Further, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition. PMID:27983788

  3. Data streaming for metabolomics: Accelerating data processing and analysis from days to minutes

    DOE PAGES

    Montenegro-Burke, J. Rafael; Aisporna, Aries E.; Benton, H. Paul; ...

    2016-12-16

    The speed and throughput of analytical platforms has been a driving force in recent years in the “omics” technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Here, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition.

  4. Data Streaming for Metabolomics: Accelerating Data Processing and Analysis from Days to Minutes.

    PubMed

    Montenegro-Burke, J Rafael; Aisporna, Aries E; Benton, H Paul; Rinehart, Duane; Fang, Mingliang; Huan, Tao; Warth, Benedikt; Forsberg, Erica; Abe, Brian T; Ivanisevic, Julijana; Wolan, Dennis W; Teyton, Luc; Lairson, Luke; Siuzdak, Gary

    2017-01-17

    The speed and throughput of analytical platforms has been a driving force in recent years in the "omics" technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Further, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition.
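
    The just-in-time idea common to the three XCMS Stream records above, compressing and shipping each file while acquisition continues rather than transferring everything after the run, can be sketched with a worker thread and a queue. The file names and the print-as-upload stand-in are illustrative; this is not the XCMS Stream implementation.

    ```python
    # Compress-while-acquiring sketch: a sender thread drains a queue of
    # freshly compressed files as the acquisition loop keeps producing them.
    import queue
    import threading
    import zlib

    outbox: queue.Queue = queue.Queue()

    def sender():
        while True:
            name, blob = outbox.get()
            if name is None:                # sentinel: acquisition finished
                break
            # In the real system this would be an upload to a processing server.
            print(f"streamed {name}: {len(blob)} compressed bytes")

    t = threading.Thread(target=sender)
    t.start()
    for i in range(3):                      # acquisition loop: one file per sample
        raw = (f"sample-{i} " * 10000).encode()
        outbox.put((f"sample-{i}.mzML", zlib.compress(raw)))   # hypothetical name
    outbox.put((None, None))
    t.join()
    ```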

  5. Process for recovering organic vapors from air

    DOEpatents

    Baker, Richard W.

    1985-01-01

    A process for recovering and concentrating organic vapor from a feed stream of air having an organic vapor content of no more than 20,000 ppm by volume. A thin semipermeable membrane is provided which has a feed side and a permeate side, a selectivity for organic vapor over air of at least 50, as measured by the ratio of organic vapor permeability to nitrogen permeability, and a permeability of organic vapor of at least 3×10⁻⁷ cm³(STP)·cm/(cm²·s·cmHg). The feed stream is passed across the feed side of the thin semipermeable membrane while providing a pressure on the permeate side which is lower than the feed side by creating a partial vacuum on the permeate side so that organic vapor passes preferentially through the membrane to form an organic vapor depleted air stream on the feed side and an organic vapor enriched stream on the permeate side. The organic vapor which has passed through the membrane is compressed and condensed to recover the vapor as a liquid.

  6. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
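
    A minimal sketch of the block-wise Chebyshev fit described above (and in the related Chebyshev-transform record earlier in this section): keep only the fitted coefficients per block and reconstruct by evaluating the polynomial. Block length and degree are tunable assumptions.

    ```python
    # Block-wise Chebyshev compression sketch: samples -> few coefficients -> samples.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def compress_block(samples: np.ndarray, degree: int = 8) -> np.ndarray:
        x = np.linspace(-1.0, 1.0, len(samples))   # map the interval onto [-1, 1]
        return C.chebfit(x, samples, degree)       # keep degree+1 coefficients

    def decompress_block(coeffs: np.ndarray, n: int) -> np.ndarray:
        return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

    t = np.linspace(0, 4 * np.pi, 256)             # one block of time-series data
    block = np.sin(t) + 0.05 * t
    coeffs = compress_block(block)                 # 256 samples -> 9 coefficients
    err = np.max(np.abs(decompress_block(coeffs, 256) - block))
    print(f"compression 256->{coeffs.size} values, max error {err:.4f}")
    ```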

  7. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
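
    The controlled-lossy balance discussed above can be sketched roughly in LERC's spirit: quantize values so the reconstruction error stays within a user-set bound, then pack the small integer codes (zlib stands in below for LERC's bit-stuffing). This illustrates the idea, not the actual LERC format.

    ```python
    # Controlled-lossy sketch: quantize within max_error, then pack the codes.
    import zlib
    import numpy as np

    def compress(values: np.ndarray, max_error: float) -> tuple[bytes, float]:
        base = float(values.min())
        codes = np.round((values - base) / (2.0 * max_error)).astype(np.uint32)
        return zlib.compress(codes.tobytes()), base

    def decompress(blob: bytes, base: float, max_error: float, n: int) -> np.ndarray:
        codes = np.frombuffer(zlib.decompress(blob), dtype=np.uint32)[:n]
        return base + codes * (2.0 * max_error)

    elev = np.random.default_rng(2).uniform(100.0, 500.0, 10000)  # synthetic DEM
    blob, base = compress(elev, max_error=0.1)
    restored = decompress(blob, base, 0.1, elev.size)
    print(len(blob), "bytes; max error:", float(np.abs(restored - elev).max()))
    ```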

  8. Dynamic modeling of sludge compaction and consolidation processes in wastewater secondary settling tanks.

    PubMed

    Abusam, A; Keesman, K J

    2009-01-01

    The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates of, in particular, kinetic parameters related to biomass growth. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.
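
    For context, the double exponential (Takács) settling-velocity function that the paper extends can be sketched in Python as below; the parameter values are illustrative defaults from the activated-sludge literature, not values taken from this paper.

        import numpy as np

        def takacs_settling_velocity(X, v0=474.0, v0_max=250.0,
                                     r_h=5.76e-4, r_p=2.86e-3, X_min=12.0):
            # Settling velocity (m/d) vs. solids concentration X (g/m^3):
            # the difference of a hindered-settling and a low-concentration
            # exponential, clipped to the interval [0, v0_max].
            v = v0 * (np.exp(-r_h * (X - X_min)) - np.exp(-r_p * (X - X_min)))
            return np.clip(v, 0.0, v0_max)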

  9. A Streaming PCA VLSI Chip for Neural Data Compression.

    PubMed

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

    Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 with reconstruction errors as low as 1% and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction errors and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
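
    A software sketch of the streaming-PCA idea (an Oja-style subspace update that never stores a full data matrix); this is a generic stand-in rather than the authors' on-chip algorithm, and the learning rate and dimensions are illustrative:

        import numpy as np

        def oja_streaming_pca(samples, n_channels, n_components=4, lr=1e-3, seed=0):
            # Track an orthonormal basis W of the top principal subspace,
            # updating it one multichannel sample at a time.
            rng = np.random.default_rng(seed)
            W, _ = np.linalg.qr(rng.standard_normal((n_channels, n_components)))
            for x in samples:
                y = W.T @ x                    # low-dim feature: what gets transmitted
                W += lr * (np.outer(x, y) - W @ np.outer(y, y))  # Oja update
                W, _ = np.linalg.qr(W)         # keep the basis orthonormal
            return W

        # Compression: send y = W.T @ x (n_components values per sample)
        # instead of the raw n_channels values.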

  10. Corotating pressure waves without streams in the solar wind

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.

    1983-01-01

    Voyager 1 and 2 magnetic field and plasma data are presented which demonstrate the existence of large scale, corotating, non-linear pressure waves between 2 AU and 4 AU that are not accompanied by fast streams. The pressure waves are presumed to be generated by corotating streams near the Sun. For two of the three pressure waves that are discussed, the absence of a stream is probably a real, physical effect, viz., a consequence of deceleration of the stream by the associated compression wave. For the third pressure wave, the apparent absence of a stream may be a geometrical effect; it is likely that the stream was at latitudes just above those of the spacecraft, while the associated shocks and compression wave extended over a broader range of latitudes so that they could be observed by the spacecraft. It is suggested that the development of large-scale non-linear pressure waves at the expense of the kinetic energy of streams produces a qualitative change in the solar wind in the outer heliosphere. Within a few AU the quasi-stationary solar wind structure is determined by corotating streams whose structure is determined by the boundary conditions near the Sun.

  11. A Streaming Language Implementation of the Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Knight, Timothy

    2005-01-01

    We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.

  12. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L(infinity) coding sense.

  13. Solid oxide fuel cell power plant with an anode recycle loop turbocharger

    DOEpatents

    Saito, Kazuo; Skiba, Tommy; Patel, Kirtikumar H.

    2015-07-14

    An anode exhaust recycle turbocharger (100) has a turbocharger turbine (102) secured in fluid communication with a compressed oxidant stream within an oxidant inlet line (218) downstream from a compressed oxidant supply (104), and the anode exhaust recycle turbocharger (100) also includes a turbocharger compressor (106) mechanically linked to the turbocharger turbine (102) and secured in fluid communication with a flow of anode exhaust passing through an anode exhaust recycle loop (238) of the solid oxide fuel cell power plant (200). All or a portion of compressed oxidant within an oxidant inlet line (218) drives the turbocharger turbine (102) to thereby compress the anode exhaust stream in the recycle loop (238). A high-temperature, automotive-type turbocharger (100) replaces a recycle loop blower-compressor (52).

  14. Solid oxide fuel cell power plant with an anode recycle loop turbocharger

    DOEpatents

    Saito, Kazuo; Skiba, Tommy; Patel, Kirtikumar H.

    2016-09-27

    An anode exhaust recycle turbocharger (100) has a turbocharger turbine (102) secured in fluid communication with a compressed oxidant stream within an oxidant inlet line (218) downstream from a compressed oxidant supply (104), and the anode exhaust recycle turbocharger (100) also includes a turbocharger compressor (106) mechanically linked to the turbocharger turbine (102) and secured in fluid communication with a flow of anode exhaust passing through an anode exhaust recycle loop (238) of the solid oxide fuel cell power plant (200). All or a portion of compressed oxidant within an oxidant inlet line (218) drives the turbocharger turbine (102) to thereby compress the anode exhaust stream in the recycle loop (238). A high-temperature, automotive-type turbocharger (100) replaces a recycle loop blower-compressor (52).

  15. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
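
    A toy illustration of the fixed-rate property in Python: every block is stored in the same number of bits (here one shared scale plus a fixed-width integer per value), so block k sits at a known offset and can be read or written independently. The paper's lifted orthogonal transform and embedded coding are not reproduced in this sketch.

        import numpy as np

        def encode_block(block, bits=8):
            # One shared maximum magnitude per block plus 'bits'-bit integers:
            # a fixed-size record regardless of content.
            emax = float(np.max(np.abs(block)))
            scale = (2 ** (bits - 1) - 1) / emax if emax > 0 else 0.0
            q = np.round(block * scale).astype(np.int32)
            return emax, q

        def decode_block(emax, q, bits=8):
            scale = (2 ** (bits - 1) - 1) / emax if emax > 0 else 0.0
            return q / scale if scale else np.zeros(q.shape)

        blk = np.random.randn(4, 4)          # one 4^d block with d = 2
        emax, q = encode_block(blk)
        print("max error:", np.max(np.abs(decode_block(emax, q) - blk)))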

  16. Acoustic streaming in the cochlea under compressive bone conduction excitation

    NASA Astrophysics Data System (ADS)

    Aho, Katherine; Sunny, Megha; Nabat, Taoufik; Au, Jenny; Thompson, Charles

    2012-02-01

    This work examines acoustic streaming in the cochlea. A model is developed to examine the steady flow over a flexible boundary that is induced by compressive excitation of the cochlear capsule. A stokeslet-based analysis of oscillatory flows is used to model the fluid motion. The influence of evanescent modes on the pressure field is considered in the limit as the aspect ratio epsilon approaches zero. We show a uniformly valid solution in space.

  17. Heat Transfer in the Turbulent Boundary Layer of a Compressible Gas at High Speeds

    NASA Technical Reports Server (NTRS)

    Frankl, F.

    1942-01-01

    The Reynolds law of heat transfer from a wall to a turbulent stream is extended to the case of flow of a compressible gas at high speeds. The analysis is based on the modern theory of the turbulent boundary layer with a laminar sublayer. The investigation is carried out for the case of a plate situated in a parallel stream. The results are obtained independently of the velocity distribution in the turbulent boundary layer.

  18. Innovative hyperchaotic encryption algorithm for compressed video

    NASA Astrophysics Data System (ADS)

    Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang

    2002-12-01

    Stream cryptosystems, which encrypt only selected parts of the block data and header information of a compressed video stream, are accepted as achieving good real-time performance and flexibility. A chaotic random number generator, for example the logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the logistic map with a linear congruential algorithm over the field Z(2^32 - 1) to strengthen the security of mono-chaotic cryptography while maintaining the real-time performance and flexibility of chaotic sequence cryptography. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication of the control parameters at the initialization phase. According to the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In this hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability, and security. The proposed hyperchaotic cryptography proves robust security under cryptanalysis and shows good real-time performance and flexible implementation capability through arithmetic evaluation and testing.
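
    A minimal sketch of the chaotic-keystream building block (the bare logistic map, which the paper strengthens precisely because this plain form is weak); the key value and byte extraction are illustrative, and this is not secure cryptography:

        def logistic_keystream(x0, n_bytes, r=3.99):
            # Iterate x <- r*x*(1-x) and keep the low byte of each iterate.
            x = x0
            out = bytearray()
            for _ in range(n_bytes):
                x = r * x * (1.0 - x)
                out.append(int(x * 256) & 0xFF)
            return bytes(out)

        def xor_cipher(data, key_x0):
            # Stream cipher: XOR the payload with the keystream (self-inverse).
            ks = logistic_keystream(key_x0, len(data))
            return bytes(a ^ b for a, b in zip(data, ks))

        ct = xor_cipher(b"compressed video payload", 0.61803398875)
        assert xor_cipher(ct, 0.61803398875) == b"compressed video payload"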

  19. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs, and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be time-critical, even a matter of life or death. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. While compressing a video sequence, the proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored. A search for a specific vehicle in the compressed video stream is then performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
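
    A structural sketch of the triggering idea in Python; the detector flags and frame-type hooks are hypothetical placeholders (a real system would hand the frame-type decisions to an H.264/HEVC encoder):

        def assign_frame_types(frames, vehicle_at_trigger):
            # Force an I-frame wherever a vehicle was detected at the trigger
            # position; all other frames become predicted (P) frames.
            return ["I" if hit else "P"
                    for _, hit in zip(frames, vehicle_at_trigger)]

        def search_candidates(frames, types):
            # A vehicle search decodes and inspects only the I-frames,
            # never the full sequence.
            return [f for f, t in zip(frames, types) if t == "I"]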

  20. Onboard Processor for Compressing HSI Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joe; Day, John H. (Technical Monitor)

    2002-01-01

    With EO-1 Hyperion and MightySat in orbit, NASA and the DoD are showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor greater than 100 while retaining the spectral fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our initial spectral compression experiments leverage commercial-off-the-shelf (COTS) spectral exploitation algorithms for segmentation, material identification, and spectral compression that ASIT has developed. ASIT will also support the modification and integration of this COTS software into the OBP. Other commercially available COTS software for spatial compression will also be employed as part of the overall compression processing sequence. Over the next year, elements of a high-performance reconfigurable OBP will be developed to implement proven preprocessing steps that distill the HSI data stream in both spectral and spatial dimensions. The system will intelligently reduce the volume of data that must be stored, transmitted to the ground, and processed, while minimizing the loss of information.

  1. A three-dimensional model of corotating streams in the solar wind. 1: Theoretical foundations

    NASA Technical Reports Server (NTRS)

    Pizzo, V. J.

    1978-01-01

    The theoretical and mathematical background pertinent to the study of steady, corotating solar wind structure in all three spatial dimensions (3-D) is discussed. The dynamical evolution of the plasma in interplanetary space (defined as the region beyond roughly 35 solar radii where the flow is supersonic) is approximately described by the nonlinear, single fluid, polytropic (magneto-) hydrodynamic equations. Efficient numerical techniques for solving this complex system of coupled, hyperbolic partial differential equations are outlined. The formulation is inviscid and nonmagnetic, but methods allow for the potential inclusion of both features with only modest modifications. One simple, highly idealized, hydrodynamic model stream is examined to illustrate the fundamental processes involved in the 3-D dynamics of stream evolution. Spatial variations in the rotational stream interaction mechanism were found to produce small nonradial flows on a global scale that lead to the transport of mass, energy, and momentum away from regions of relative compression and into regions of relative rarefaction.

  2. Shock-capturing parabolized Navier-Stokes model /SCIPVIS/ for the analysis of turbulent underexpanded jets

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Wolf, D. E.

    1983-01-01

    A new computational model, SCIPVIS, has been developed to predict the multiple-cell wave/shock structure in under- or over-expanded turbulent jets. SCIPVIS solves the parabolized Navier-Stokes jet mixing equations utilizing a shock-capturing approach in supersonic regions of the jet and a pressure-split approach in subsonic regions. Turbulence processes are represented by the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive turbulent mixing process occurring behind the disc are handled in a detailed fashion. SCIPVIS presently analyzes jets exhausting into a quiescent or supersonic external stream, for which a single-pass spatial marching solution can be obtained. The iterative coupling of SCIPVIS with a potential flow solver for the analysis of subsonic/transonic external streams is under development.

  3. Energetic approach of biomass hydrolysis in supercritical water.

    PubMed

    Cantero, Danilo A; Vaquerizo, Luis; Mato, Fidel; Bermejo, M Dolores; Cocero, M José

    2015-03-01

    Cellulose hydrolysis can be performed in supercritical water with a high selectivity toward soluble sugars. The process produces high-pressure steam that can be integrated, from an energy point of view, with the whole biomass-treating process. This work investigates the integration of biomass hydrolysis reactors with commercial combined heat and power (CHP) schemes, with special attention to reactor outlet streams. The innovation developed in this work allows adequate energy integration for heating and compression by using the high temperature of the flue gases and direct shaft work from the turbine. The integration of biomass hydrolysis with a CHP process allows the selective conversion of biomass into sugars with low heat requirements. By integrating these two processes, the CHP scheme yield is enhanced by around 10% by injecting water into the gas turbine. Furthermore, the hydrolysis reactor can be held at 400°C and 23 MPa using only the gas turbine outlet streams.

  4. Video Analysis in Multi-Intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Key, Everett Kiusan; Van Buren, Kendra Lu; Warren, Will

    This is a project that was performed by a recently graduated high school student at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher is using DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. The distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as when the gate is functioning or non-functioning.

  5. Performance Assessment of the Exploration Water Recovery System

    NASA Technical Reports Server (NTRS)

    Carter, D. Layne; Tabb, David; Perry, Jay

    2008-01-01

    A new water recovery system architecture designed to fulfill the National Aeronautics and Space Administration's (NASA) Space Exploration Policy has been tested at the Marshall Space Flight Center (MSFC). This water recovery system architecture evolved from the current state-of-the-art system developed for the International Space Station (ISS). Through novel integration of proven technologies for air and water purification, this system promises to elevate existing system optimization. The novel aspect of the system is twofold. First, volatile organic compounds (VOCs) are removed from the cabin air via catalytic oxidation in the vapor phase, prior to their absorption into the aqueous phase. Second, vapor compression distillation (VCD) technology processes the condensate and hygiene waste streams in addition to the urine waste stream. Oxidation kinetics dictate that removing VOCs from the vapor phase is more efficient. Treating the various waste streams by VCD reduces the load on the expendable ion exchange and adsorption media which follow, as well as on the aqueous-phase catalytic oxidation process further downstream. This paper documents the results of testing this new architecture.

  6. Improved heat recovery and high-temperature clean-up for coal-gas fired combustion turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barthelemy, N.M.; Lynn, S.

    1991-07-01

    This study investigates the performance of an Improved Heat Recovery Method (IHRM) applied to a coal-gas-fired power-generating system using high-temperature clean-up. This heat recovery process has been described by Higdon and Lynn (1990). The IHRM is an integrated heat-recovery network that significantly increases the thermal efficiency of a gas turbine in the generation of electric power. Its main feature is to recover both low- and high-temperature heat reclaimed from various gas streams by evaporating heated water into the combustion air in an air saturation unit. This unit is a packed column in which compressed air flows countercurrently to the heated water prior to being sent to the combustor, where it is mixed with coal-gas and burned. The high water content of the air stream thus obtained reduces the amount of excess air required to control the firing temperature of the combustor, which in turn lowers the total work of compression and results in a high thermal efficiency. Three designs of the IHRM were developed to accommodate three different gasifying processes. The performances of those designs were evaluated and compared using computer simulations. The efficiencies obtained with the IHRM are substantially higher than those yielded by other heat-recovery technologies using the same gasifying processes. The study also revealed that the IHRM compares advantageously to most advanced power-generation technologies currently available or tested commercially. 13 refs., 34 figs., 10 tabs.

  7. A systems approach for data compression and latency reduction in cortically controlled brain machine interfaces.

    PubMed

    Oweiss, Karim G

    2006-07-01

    This paper suggests a new approach for data compression during extracutaneous transmission of neural signals recorded by a high-density microelectrode array in the cortex. The approach is based on exploiting the temporal and spatial characteristics of the neural recordings in order to strip the redundancy and infer the useful information early in the data stream. The proposed signal processing algorithms augment current filtering and amplification capability and may be a viable replacement for the on-chip spike detection and sorting currently employed to remedy bandwidth limitations. Temporal processing is devised by exploiting the sparseness capabilities of the discrete wavelet transform, while spatial processing exploits the reduction in the number of physical channels through quasi-periodic eigendecomposition of the data covariance matrix. Our results demonstrate that substantial improvements are obtained in terms of lower transmission bandwidth, reduced latency, and optimized processor utilization. We also demonstrate the improvements qualitatively in terms of superior denoising capabilities and higher fidelity of the obtained signals.
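
    A minimal sketch of the temporal half of the idea, assuming a Haar wavelet and a keep-fraction threshold chosen purely for illustration; most wavelet coefficients of a smooth neural trace are near zero, so only the few retained ones need to be transmitted:

        import numpy as np

        def haar_dwt(signal, levels=3):
            # Multi-level Haar DWT; length must be divisible by 2**levels.
            coeffs, approx = [], signal.astype(float)
            for _ in range(levels):
                a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
                d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
                coeffs.append(d)
                approx = a
            coeffs.append(approx)
            return coeffs

        def sparsify(coeffs, keep_frac=0.1):
            # Keep only the largest-magnitude fraction of coefficients:
            # the sparse representation that is actually transmitted.
            flat = np.abs(np.concatenate(coeffs))
            thresh = np.quantile(flat, 1.0 - keep_frac)
            return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]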

  8. Dual pressure-dual temperature isotope exchange process

    DOEpatents

    Babcock, D.F.

    1974-02-12

    A liquid and a gas stream, each containing a desired isotope, flow countercurrently through two liquid-gas contacting towers maintained at different temperatures and pressures. The liquid is enriched in the isotope in one tower while the gas is enriched within the other, and a portion of at least one of the enriched streams is withdrawn from the system for use or further enrichment. The tower operated at the lower temperature is also maintained at the lower pressure to prevent formation of solid solvates. Gas flow between the towers passes through an expander-compressor apparatus to recover work from the expansion of gas to the lower pressure and thereby compress the gas returning to the tower of higher pressure. (Official Gazette)

  9. Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining

    NASA Astrophysics Data System (ADS)

    Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio

    2013-12-01

    Nowadays, networks and terminals with diverse bandwidth characteristics and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video content is compressed to save storage capacity and to reduce the bandwidth required for its transmission. If these video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and to a wide range of terminals. Since the majority of multimedia content is compressed using H.264/AVC, it cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in the baseline and main profiles, by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while coding efficiency is maintained.

  10. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically motivated algorithm that selects visually salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from four to six human observers on 50 video clips (synthetic stimuli, video games, outdoor day and night home video, television newscasts, sports, talk shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 of 50 clips with the other. Substantial reductions in compressed file size, to about half on average, are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios for unconstrained video.

  11. Lossy compression for Animated Web Visualisation

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.

    2017-12-01

    This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high-definition videos. This enabled us to achieve high rates of compression while being compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.

  12. The Parker Instability with Cosmic-Ray Streaming

    NASA Astrophysics Data System (ADS)

    Heintz, Evan; Zweibel, Ellen G.

    2018-06-01

    Recent studies have found that cosmic-ray transport plays an important role in feedback processes such as star formation and the launching of galactic winds. Although cosmic-ray buoyancy is widely held to be a destabilizing force in galactic disks, the effect of cosmic-ray transport on the stability of stratified systems has yet to be analyzed. We perform a stability analysis of a stratified layer for three different cosmic-ray transport models: decoupled (Classic Parker), coupled with γ_c = 4/3 but not streaming (Modified Parker), and finally coupled with streaming at the Alfvén speed. When the compressibility of the cosmic rays is decreased, the system becomes much more stable, but the addition of cosmic-ray streaming to the Parker instability severely destabilizes it. Through comparison of these three cases and analysis of the work contributions for the perturbed quantities of each system, we demonstrate that cosmic-ray heating of the gas is responsible for the destabilization of the system. We find that a 3D system is unstable over a larger range of wavelengths than the 2D system. Therefore, the Parker instability with cosmic-ray streaming may play an important role in cosmic-ray feedback.

  13. Stackable middleware services for advanced multimedia applications. Final report for period July 14, 1999 - July 14, 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Wu-chi; Crawfis, Roger; Weide, Bruce

    2002-02-01

    In this project, the authors propose the research, development, and distribution of a stackable, component-based multimedia streaming protocol middleware service. The goals of this stackable middleware interface include: (1) The middleware service will provide application writers and scientists easy-to-use interfaces that support their visualization needs. (2) The middleware service will support a variety of image compression modes. Currently, many of the network adaptation protocols for video have been developed with DCT-based compression algorithms like H.261, MPEG-1, or MPEG-2 in mind. It is expected that with advanced scientific computing applications the lossy compression of the image data will be unacceptable in certain instances. The middleware service will support several in-line lossless compression modes for error-sensitive scientific visualization data. (3) The middleware service will support two different types of streaming video modes: one for interactive collaboration of scientists and a stored-video streaming mode for viewing prerecorded animations. The use of two different streaming types will allow the quality of the video delivered to the user to be maximized. Most importantly, this service will operate transparently to the user (with some basic controls exported to the user for domain-specific tweaking). In the spirit of layered network protocols (like ISO and TCP/IP), application writers should not have to know a large amount about lower-level network details. Currently, many example video streaming players have their congestion management techniques tightly integrated into the video player itself and are, for the most part, ''one-off'' applications. As more networked multimedia and video applications are written in the future, a larger percentage of these programmers and scientists will likely know little about the underlying networking layer. By providing a simple, powerful, and semi-transparent middleware layer, the successful completion of this project will help serve as a catalyst for future video-based applications, particularly those of advanced scientific computing applications.

  14. Compressible liquid flow in nano- or micro-sized circular tubes considering wall-liquid Lifshitz-van der Waals interaction

    NASA Astrophysics Data System (ADS)

    Zhang, Xueling; Zhu, Weiyao; Cai, Qiang; Shi, Yutao; Wu, Xuehong; Jin, Tingxiang; Yang, Lianzhi; Song, Hongqing

    2018-06-01

    Although nano- and micro-scale phenomena for fluid flows are ubiquitous in tight oil reservoirs or in nano- or micro-sized channels, the mechanisms behind them remain unclear. In this study, we consider the wall-liquid interaction to investigate the flow mechanisms behind a compressible liquid flow in nano- or micro-sized circular tubes. We assume that the liquid is attracted by the wall surface primarily by the Lifshitz-van der Waals (LW) force, whereas electrostatic forces are negligible. The long-range LW force is thus introduced into the Navier-Stokes equations. The nonlinear equations of motion are decoupled by using the hydrodynamic vorticity-stream functions, from which an approximate analytical perturbation solution is obtained. The proposed model considers the LW force and liquid compressibility to obtain the velocity and pressure fields, which are consistent with experimentally observed micro-size effects. A smaller tube radius implies smaller dimensionless velocity, and when the tube radius decreases to a certain radius Rm, a fluid no longer flows, where Rm is the lower limit of the movable-fluid radius. The radius Rm is calculated, and the results are consistent with previous experimental results. These results reveal that micro-size effects are caused by liquid compressibility and wall-liquid interactions, such as the LW force, for a liquid flowing in nano- or micro-sized channels or pores. The attractive LW force enhances the flow's radial resistance, and the liquid compressibility transmits the radial resistance to the streaming direction via volume deformation, thereby decreasing the streaming velocity.

  15. Inviscid spatial stability of a compressible mixing layer. II - The flame sheet model

    NASA Technical Reports Server (NTRS)

    Jackson, T. L.; Grosch, C. E.

    1990-01-01

    The results of an inviscid spatial stability calculation for a compressible reacting mixing layer are reported. The limit of infinite activation energy is taken, and the diffusion flame is approximated by a flame sheet. Results are reported for the phase speeds of the neutral waves and the maximum growth rates of the unstable waves as a function of the parameters of the problem: the ratio of the temperature of the stationary stream to that of the moving stream, the Mach number of the moving stream, the heat release per unit mass fraction of the reactant, the equivalence ratio of the reaction, and the frequency of the disturbance. These results are compared to the phase speeds and growth rates of the corresponding nonreacting mixing layer. We show that the addition of combustion has important and complex effects on the flow stability.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindstrom, P; Cohen, J D

    We present a streaming geometry compression codec for multiresolution, uniformly gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU, with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and the remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
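
    A minimal sketch of the linear-prediction-plus-residual idea on a uniform height grid, using the standard plane predictor (west + north - northwest) as an illustrative choice rather than the paper's exact predictor; residuals cluster near zero and entropy-code far better than raw heights:

        import numpy as np

        def height_residuals(grid):
            # Predict each height from its west, north, and northwest
            # neighbors; transmit only the (small) integer residuals.
            g = grid.astype(np.int64)
            pred = np.zeros_like(g)
            pred[1:, 1:] = g[1:, :-1] + g[:-1, 1:] - g[:-1, :-1]
            pred[0, 1:] = g[0, :-1]      # first row: predict from west
            pred[1:, 0] = g[:-1, 0]      # first column: predict from north
            return g - pred

        def reconstruct(res):
            # Invert the prediction in a row-major scan (lossless).
            g = res.copy()
            for i in range(g.shape[0]):
                for j in range(g.shape[1]):
                    if i == 0 and j > 0:
                        g[i, j] += g[i, j - 1]
                    elif i > 0 and j == 0:
                        g[i, j] += g[i - 1, j]
                    elif i > 0 and j > 0:
                        g[i, j] += g[i, j - 1] + g[i - 1, j] - g[i - 1, j - 1]
            return g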

  17. Navigation domain representation for interactive multiview imaging.

    PubMed

    Maugey, Thomas; Daribo, Ismael; Cheung, Gene; Frossard, Pascal

    2013-09-01

    Enabling users to interactively navigate through different viewpoints of a static scene is an interesting new functionality in 3D streaming systems. While it opens exciting perspectives toward rich multimedia applications, it requires the design of novel representations and coding techniques to solve the new challenges imposed by interactive navigation. In particular, the encoder must prepare a priori a compressed media stream that is flexible enough to enable the free selection of multiview navigation paths by different streaming media clients. Interactivity clearly brings new design constraints: the encoder is unaware of the exact decoding process, while the decoder has to reconstruct information from incomplete subsets of data, since the server generally cannot transmit images for all possible viewpoints due to resource constraints. In this paper, we propose a novel multiview data representation that permits us to satisfy bandwidth and storage constraints in an interactive multiview streaming system. In particular, we partition the multiview navigation domain into segments, each of which is described by a reference image (color and depth data) and some auxiliary information. The auxiliary information enables the client to recreate any viewpoint in the navigation segment via view synthesis. The decoder is then able to navigate freely in the segment without further data requests to the server; it requests additional data only when it moves to a different segment. We discuss the benefits of this novel representation in interactive navigation systems and further propose a method to optimize the partitioning of the navigation domain into independent segments, under bandwidth and storage constraints. Experimental results confirm the potential of the proposed representation; namely, our system achieves compression performance similar to classical inter-view coding, while providing the high level of flexibility that is required for interactive streaming. Because of these unique properties, our new framework represents a promising solution for 3D data representation in novel interactive multimedia services.

  18. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in outdoor applications of WVSNs is limited to batteries, and frequent replacement of batteries is usually not desirable, so both the processing and the communication energy consumption of the VSN need to be optimized so that the network remains functional for a long duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must therefore be designed to be computationally simple while providing a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and bi-level image coding methods then efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm, so there is a need for other intelligent and efficient algorithms that are computationally less complex and provide a better compression rate than bi-level image coding. Change coding is one such algorithm: it is computationally simple (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame further reduces the information amount in the change frame. However, if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e., image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the results of the three techniques. In this way, the compression performance of BVC will never become worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
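
    A minimal sketch of the two core moves, assuming bi-level frames as NumPy integer arrays and pre-encoded bitstreams as byte strings; the actual BVC bitstream formats are not reproduced:

        import numpy as np

        def change_frame(prev, curr):
            # Change coding: XOR against the previous bi-level frame leaves
            # 1s only where pixels changed, so slowly varying scenes yield
            # very sparse frames to encode.
            return np.bitwise_xor(prev, curr)

        def bvc_select(image_bits, change_bits, roi_bits):
            # BVC's selection rule: run all three coders and keep the
            # shortest bitstream, so BVC is never worse than image coding.
            return min((image_bits, change_bits, roi_bits), key=len)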

  19. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35-μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
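
    For illustration, a minimal Golomb-Rice encoder in Python (unary quotient, k-bit binary remainder), with the usual zigzag map from signed prediction residuals to non-negative integers; the chip's exact code parameters are not given here:

        def zigzag(residual):
            # Map signed residuals to non-negative integers:
            # 0, -1, 1, -2, ... -> 0, 1, 2, 3, ...
            return 2 * residual if residual >= 0 else -2 * residual - 1

        def rice_encode(value, k=2):
            # Unary-coded quotient, then a k-bit binary remainder; simple
            # enough to share circuitry with a single-slope ADC.
            q, r = value >> k, value & ((1 << k) - 1)
            return "1" * q + "0" + format(r, "0{}b".format(k))

        bits = "".join(rice_encode(zigzag(r)) for r in [0, -1, 3, -4, 2])
        print(bits)   # concatenated bitstream for the residual sequence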

  20. Coding for Efficient Image Transmission

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    NASA publication second in series on data-coding techniques for noiseless channels. Techniques usable even in noisy channels, provided data further processed with Reed-Solomon or other error-correcting code. Techniques discussed in context of transmission of monochrome imagery from Voyager II spacecraft but applicable to other streams of data. Objective of this type of coding to "compress" data; that is, to transmit using as few bits as possible by omitting as much as possible of portion of information repeated in subsequent samples (or picture elements).

  1. Review: Water recovery from brines and salt-saturated solutions: operability and thermodynamic efficiency considerations for desalination technologies

    PubMed Central

    Vane, Leland M.

    2017-01-01

    BACKGROUND When water is recovered from a saline source, a brine concentrate stream is produced. Management of the brine stream can be problematic, particularly in inland regions. An alternative to brine disposal is recovery of water and possibly salts from the concentrate. RESULTS This review provides an overview of desalination technologies and discusses the thermodynamic efficiencies and operational issues associated with the various technologies particularly with regard to high salinity streams. CONCLUSION Due to the high osmotic pressures of the brine concentrates, reverse osmosis, the most common desalination technology, is impractical. Mechanical vapor compression which, like reverse osmosis, utilizes mechanical work to operate, is reported to have the highest thermodynamic efficiency of the desalination technologies for treatment of salt-saturated brines. Thermally-driven processes, such as flash evaporation and distillation, are technically able to process saturated salt solutions, but suffer from low thermodynamic efficiencies. This inefficiency could be offset if an inexpensive source of waste or renewable heat could be used. Overarching issues posed by high salinity solutions include corrosion and the formation of scales/precipitates. These issues limit the materials, conditions, and unit operation designs that can be used. PMID:29225395

  2. Review: Water recovery from brines and salt-saturated solutions: operability and thermodynamic efficiency considerations for desalination technologies.

    PubMed

    Vane, Leland M

    2017-03-08

    When water is recovered from a saline source, a brine concentrate stream is produced. Management of the brine stream can be problematic, particularly in inland regions. An alternative to brine disposal is recovery of water and possibly salts from the concentrate. This review provides an overview of desalination technologies and discusses the thermodynamic efficiencies and operational issues associated with the various technologies particularly with regard to high salinity streams. Due to the high osmotic pressures of the brine concentrates, reverse osmosis, the most common desalination technology, is impractical. Mechanical vapor compression which, like reverse osmosis, utilizes mechanical work to operate, is reported to have the highest thermodynamic efficiency of the desalination technologies for treatment of salt-saturated brines. Thermally-driven processes, such as flash evaporation and distillation, are technically able to process saturated salt solutions, but suffer from low thermodynamic efficiencies. This inefficiency could be offset if an inexpensive source of waste or renewable heat could be used. Overarching issues posed by high salinity solutions include corrosion and the formation of scales/precipitates. These issues limit the materials, conditions, and unit operation designs that can be used.

  3. The Infrared Astronomical Satellite /IRAS/ Scientific Data Analysis System /SDAS/ sky flux subsystem

    NASA Technical Reports Server (NTRS)

    Stagner, J. R.; Girard, M. A.

    1980-01-01

    The sky flux subsystem of the Infrared Astronomical Satellite Scientific Data Analysis System is described. Its major output capabilities are (1) the all-sky lune maps (8-arcminute pixel size), (2) galactic plane maps (2-arcminute pixel size) and (3) regional maps of small areas such as extended sources greater than 1-degree in extent. The major processing functions are to (1) merge the CRDD and pointing data, (2) phase the detector streams, (3) compress the detector streams in the in-scan and cross-scan directions, and (4) extract data. Functional diagrams of the various capabilities of the subsystem are given. Although this device is inherently nonimaging, various calibrated and geometrically controlled imaging products are created, suitable for quantitative and qualitative scientific interpretation.

  4. A Water Recovery System Evolved for Exploration

    NASA Technical Reports Server (NTRS)

    O'Rourke, Mary Jane E.; Perry, Jay L.; Carter, Donald L.

    2006-01-01

    A new water recovery system designed towards fulfillment of NASA's Vision for Space Exploration is presented. This water recovery system is an evolution of the current state-of-the-art system. Through novel integration of proven technologies for air and water purification, this system promises to elevate existing technology to higher levels of optimization. The novel aspect of the system is twofold: Volatile organic contaminants will be removed from the cabin air via catalytic oxidation in the vapor phase, prior to their absorption into the aqueous phase, and vapor compression distillation technology will be used to process the condensate and hygiene waste streams in addition to the urine waste stream. Oxidation kinetics dictate that removal of volatile organic contaminants from the vapor phase is more efficient. Treatment of the various waste streams by VCD will reduce the load on the expendable ion exchange and adsorption media which follow, and on the aqueous-phase volatile removal assembly further downstream. Incorporating these advantages will reduce the weight, volume, and power requirements of the system, as well as resupply.

  5. Low-speed performance of an axisymmetric, mixed-compression, supersonic inlet with auxiliary inlets

    NASA Technical Reports Server (NTRS)

    Trefny, C. J.; Wasserbauer, J. W.

    1986-01-01

    A test program was conducted to determine the aerodynamic performance and acoustic characteristics associated with the low-speed operation of a supersonic, axisymmetric, mixed-compression inlet with auxiliary inlets. Blow-in auxiliary doors were installed on the NASA Ames P inlet, one door per quadrant, located on the cowl in the subsonic diffuser section of the inlet. Auxiliary inlets with areas of 20 and 40 percent of the inlet capture area were tested statically and at free-stream Mach numbers of 0.1 and 0.2. The effects of boundary layer bleed inflow were investigated. A JT8D fan simulator driven by compressed air was used to pump inlet flow and to provide a characteristic noise signature. Baseline data were obtained at static free-stream conditions with the sharp P-inlet cowl lip replaced by a blunt lip. The auxiliary inlets increased overall total pressure recovery on the order of 10 percent.

  6. An Interactive, Design and Educational Tool for Supersonic External-Compression Inlets

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    A workstation-based interactive design tool called VU-INLET was developed for the inviscid flow in rectangular, supersonic, external-compression inlets. VU-INLET solves for the flow conditions from free stream, through the supersonic compression ramps, across the terminal normal shock region and the subsonic diffuser to the engine face. It calculates the shock locations, the capture streamtube, and the additive drag of the inlet. The inlet geometry can be modified using a graphical user interface and the new flow conditions recalculated interactively. Free stream conditions and engine airflow can also be interactively varied and off-design performance evaluated. Flow results from VU-INLET can be saved to a file for a permanent record, and a series of help screens make the simulator easy to learn and use. This paper will detail the underlying assumptions of the models and the numerical methods used in the simulator.

  7. On the flow of a compressible fluid by the hodograph method I : unification and extension of present-day results

    NASA Technical Reports Server (NTRS)

    Garrick, I E; Kaplan, Carl

    1944-01-01

    Elementary basic solutions of the equations of motion of a compressible fluid in the hodograph variables are developed and used to provide a basis for comparison, in the form of velocity correction formulas, of corresponding compressible and incompressible flows. The known approximate results of Chaplygin, Von Karman and Tsien, Temple and Yarwood, and Prandtl and Glauert are unified by means of the analysis of the present paper. Two new types of approximations, obtained from the basic solutions, are introduced; they possess certain desirable features of the other approximations and appear preferable as a basis for extrapolation into the range of high stream Mach numbers and large disturbances to the main stream. Tables and figures giving velocity and pressure-coefficient correction factors are included in order to facilitate the practical application of the results.
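
    For reference, two of the correction rules unified in the paper, written here in their standard modern forms (textbook notation, not necessarily the paper's): the Prandtl-Glauert and Karman-Tsien rules relate the compressible pressure coefficient to its incompressible counterpart at free-stream Mach number M_infinity.

        % Prandtl-Glauert correction
        C_p = \frac{C_{p,\mathrm{inc}}}{\sqrt{1 - M_\infty^2}}

        % Karman-Tsien correction
        C_p = \frac{C_{p,\mathrm{inc}}}
                   {\sqrt{1 - M_\infty^2}
                    + \dfrac{M_\infty^2}{1 + \sqrt{1 - M_\infty^2}}
                      \dfrac{C_{p,\mathrm{inc}}}{2}}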

  8. Inviscid spatial stability of a compressible mixing layer. Part 2: The flame sheet model

    NASA Technical Reports Server (NTRS)

    Jackson, T. L.; Grosch, C. E.

    1989-01-01

    The results of an inviscid spatial stability calculation for a compressible reacting mixing layer are reported. The limit of infinite activation energy is taken, and the diffusion flame is approximated by a flame sheet. Results are reported for the phase speeds of the neutral waves and the maximum growth rates of the unstable waves as a function of the parameters of the problem: the ratio of the temperature of the stationary stream to that of the moving stream, the Mach number of the moving stream, the heat release per unit mass fraction of the reactant, the equivalence ratio of the reaction, and the frequency of the disturbance. These results are compared to the phase speeds and growth rates of the corresponding nonreacting mixing layer. We show that the addition of combustion has important and complex effects on the flow stability.

  9. Corrections on the Thermometer Reading in an Air Stream

    NASA Technical Reports Server (NTRS)

    Van Der Maas, H J; Wynia, S

    1940-01-01

    A method is described for checking a correction formula, based partly on theoretical considerations, for adiabatic compression and friction in flight tests, and for determining the value of the constant. A threefold correction must be applied to each thermometer reading: a correction for adiabatic compression, one for friction, and one for time lag.
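
    The standard modern form of the adiabatic-compression correction (again in textbook notation rather than the report's): a thermometer in a stream of Mach number M reads a recovery temperature between the static and total temperatures, with a recovery factor r absorbing the friction effects.

        % Recovery temperature of a thermometer in an air stream
        T_\mathrm{ind} = T_\mathrm{static}\left(1 + r\,\frac{\gamma - 1}{2}\,M^2\right),
        \qquad 0 \le r \le 1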

  10. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of inverse tone mapping the base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. A perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human vision system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
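
    A structural sketch of the two-layer split in Python; the gamma curve here is a hypothetical tone-mapping operator standing in for the paper's operators, and a real system would feed base and residual to separate video encoders:

        import numpy as np

        def encode_layers(hdr, gamma=2.2):
            # Base layer: an 8-bit tone-mapped frame, decodable on legacy gear.
            base = np.clip((hdr / hdr.max()) ** (1.0 / gamma) * 255.0, 0, 255)
            base = base.astype(np.uint8)
            # Enhancement layer: residual between the inverse-tone-mapped
            # base and the original HDR frame.
            prediction = (base / 255.0) ** gamma * hdr.max()
            return base, hdr - prediction

        def decode_hdr(base, residual, peak, gamma=2.2):
            # A legacy decoder plays 'base' alone; an HDR decoder adds the
            # residual back (peak = hdr.max() must travel with the stream).
            return (base / 255.0) ** gamma * peak + residual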

  11. Some Practical Universal Noiseless Coding Techniques

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1994-01-01

    Report discusses noiseless data-compression-coding algorithms, performance characteristics, and practical considerations in implementation of algorithms in coding modules composed of very-large-scale integrated circuits. Report also has value as tutorial document on data-compression-coding concepts. Coding techniques and concepts in question are "universal" in the sense that, in principle, they are applicable to data streams from a variety of sources. However, discussion is oriented toward compression of high-rate data generated by spaceborne sensors for lower-rate transmission back to Earth.

  12. RAZOR: A Compression and Classification Solution for the Internet of Things

    PubMed Central

    Danieletto, Matteo; Bui, Nicola; Zorzi, Michele

    2014-01-01

    The Internet of Things is expected to increase the amount of data produced and exchanged in the network, due to the huge number of smart objects that will interact with one another. The related information management and transmission costs are increasing and becoming an almost unbearable burden, due to the unprecedented number of data sources and the intrinsic vastness and variety of the datasets. In this paper, we propose RAZOR, a novel lightweight algorithm for data compression and classification, which is expected to alleviate both aspects by leveraging the advantages offered by data mining methods for optimizing communications and by enhancing information transmission to simplify data classification. In particular, RAZOR leverages the concept of motifs, recurrent features used for signal categorization, in order to compress data streams: in such a way, it is possible to achieve compression levels of up to an order of magnitude, while maintaining the signal distortion within acceptable bounds and allowing for simple lightweight distributed classification. In addition, RAZOR is designed to keep the computational complexity low, in order to allow its implementation in the most constrained devices. The paper provides results about the algorithm configuration and a performance comparison against state-of-the-art signal processing techniques. PMID:24451454
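
    As an illustration of the motif idea (a simplified reading of RAZOR, not the published algorithm: the motif dictionary, window length, and distance metric below are all assumptions), each fixed-length window of the signal is replaced by the index of its nearest motif, so a handful of small integers stand in for the raw samples:

        import numpy as np

        def motif_encode(signal, motifs, win):
            # Replace each fixed-length window with the index of the closest
            # motif (Euclidean distance); only the indices are transmitted.
            idx = []
            for start in range(0, len(signal) - win + 1, win):
                seg = signal[start:start + win]
                idx.append(int(np.argmin([np.linalg.norm(seg - m) for m in motifs])))
            return idx

        def motif_decode(indices, motifs):
            # Reconstruction simply concatenates the chosen motifs.
            return np.concatenate([motifs[i] for i in indices])

        win = 8
        motifs = [np.sin(np.linspace(0, np.pi * k, win)) for k in range(1, 5)]
        signal = np.concatenate([motifs[2], motifs[0], motifs[3]])
        signal = signal + 0.01 * np.random.randn(signal.size)
        codes = motif_encode(signal, motifs, win)  # 3 integers for 24 samples
        print(codes, float(np.linalg.norm(signal - motif_decode(codes, motifs))))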

  13. Development of a fast framing detector for electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Ian J.; Bustillo, Karen C.; Ciston, Jim

    2016-10-01

    A high frame rate detector system is described that enables fast real-time data analysis of scanning diffraction experiments in scanning transmission electron microscopy (STEM). This is an end-to-end development that encompasses the data-producing detector, data transportation, and real-time processing of data. The detector will consist of a central pixel sensor that is surrounded by annular silicon diodes. Both components of the detector system will synchronously capture data at almost 100 kHz frame rate, which produces an approximately 400 Gb/s data stream. Low-level preprocessing will be implemented in firmware before the data is streamed from the National Center for Electron Microscopy (NCEM) to the National Energy Research Scientific Computing Center (NERSC). Live data processing, before it lands on disk, will happen on the Cori supercomputer and aims to present scientists with prompt experimental feedback. This online analysis will provide rough information about the sample that can be utilized for sample alignment, sample monitoring, and verification that the experiment is set up correctly. Only a compressed version of the relevant data is then selected for more in-depth processing.

  14. Deterring watermark collusion attacks using signal processing techniques

    NASA Astrophysics Data System (ADS)

    Lemma, Aweke N.; van der Veen, Michiel

    2007-02-01

    Collusion attack is a malicious watermark removal attack in which the hacker has access to multiple copies of the same content with different watermarks and tries to remove the watermark by averaging. In the literature, several solutions to collusion attacks have been reported. The mainstream solutions aim at designing watermark codes that are inherently resistant to collusion attacks. The other approaches propose signal-processing-based solutions that aim at modifying the watermarked signals in such a way that averaging multiple copies of the content leads to a significant degradation of the content quality. In this paper, we present a signal-processing-based technique that may be deployed to deter collusion attacks. We formulate the problem in the context of electronic music distribution, where the content is generally available in the compressed domain. Thus, we first extend the collusion resistance principles to bit-stream signals and then present an experimentally based analysis to estimate a bound on the maximum number of modified versions of a content that satisfy the good-perceptibility requirement on the one hand and the destructive-averaging property on the other.
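
    The averaging attack itself is easy to demonstrate with synthetic data (a sketch of the 1/N power law being exploited, not of the paper's countermeasure): averaging N copies carrying independent watermarks attenuates the embedded mark's power roughly N-fold, which is why the defense must force averaging to also destroy content quality:

        import numpy as np

        rng = np.random.default_rng(0)
        host = rng.standard_normal(10_000)   # host signal, e.g. one audio frame

        # Eight copies of the same content, each with an independent watermark.
        copies = [host + 0.05 * rng.standard_normal(host.size) for _ in range(8)]

        # The collusion attack: average the differently watermarked copies.
        colluded = np.mean(copies, axis=0)

        wm_power_single = np.mean((copies[0] - host) ** 2)
        wm_power_after = np.mean((colluded - host) ** 2)
        print(wm_power_single / wm_power_after)   # close to N = 8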

  15. The Compressibility Burble

    NASA Technical Reports Server (NTRS)

    Stack, John

    1935-01-01

    Simultaneous air-flow photographs and pressure-distribution measurements have been made of the NACA 4412 airfoil at high speeds in order to determine the physical nature of the compressibility burble. The flow photographs were obtained by the Schlieren method and the pressures were simultaneously measured for 54 stations on the 5-inch-chord wing by means of a multiple-tube photographic manometer. Pressure-measurement results and typical Schlieren photographs are presented. The general nature of the phenomenon called the "compressibility burble" is shown by these experiments. The source of the increased drag is the compression shock that occurs, the excess drag being due to the conversion of a considerable amount of the air-stream kinetic energy into heat at the compression shock.

  16. Transients which are born on the way from the Sun to Earth

    NASA Astrophysics Data System (ADS)

    Yermolaev, Yuri; Nikolaeva, Nadezhda; Lodkina, Irina; Yermolaev, Michael

    2016-07-01

    As is well known, only disturbed types of solar wind (SW) streams can contain an IMF component perpendicular to the ecliptic plane (in particular, the southward IMF component) and be geoeffective. The disturbed types are the following SW streams: the interplanetary manifestation of a coronal mass ejection (ICME), including magnetic clouds (MC) and Ejecta; the Sheath, the compression region before an ICME; and the corotating interaction region (CIR), the compression region before a high-speed stream (HSS) of solar wind. The role of solar transients, CMEs and ICMEs, in the generation of geomagnetic disturbances and in space weather prediction is intensively studied by many researchers. However, the transients Sheath and CIR, which are born on the way from the Sun to Earth due to the corresponding high-speed piston (the fast ICME for the Sheath and the HSS from a coronal hole for the CIR), are investigated less intensively, and their contribution to geoeffectiveness is underestimated. For example, on 19 December 1980 the southward IMF component Bz increased up to 30 nT, and the compressed Sheath region before an MC induced a strong magnetic storm with Dst ~ -250 nT. We present and discuss statistical data on Sheath and CIR geoeffectiveness. The work was supported by the Russian Foundation for Basic Research, project 16-02-00125, and by the Program of the Presidium of the Russian Academy of Sciences.

  17. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    PubMed

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ the selective encryption method, encrypting only important and sensitive parts of the video information in order to ensure real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to its high computational overhead. In this paper, we propose an encryption selection control module, governed by a chaotic pseudorandom sequence, that encrypts video syntax elements dynamically. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
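
    A minimal sketch of keystream-driven selective encryption, assuming a plain logistic map as a stand-in for the paper's spatiotemporal chaos system and treating the chosen syntax elements as raw bytes:

        def logistic_keystream(x0, nbytes, r=3.99):
            # Iterate a logistic map and binarize by thresholding at 0.5; this
            # stands in for the paper's spatiotemporal chaos system.
            bits, x = [], x0
            for _ in range(nbytes * 8):
                x = r * x * (1.0 - x)
                bits.append(1 if x > 0.5 else 0)
            return bytes(sum(b << (7 - i) for i, b in enumerate(bits[k:k + 8]))
                         for k in range(0, len(bits), 8))

        def selective_encrypt(elements, x0=0.3141):
            # XOR only the chosen syntax elements (e.g. sign bits or motion
            # vector residuals), leaving the bitstream structure parseable.
            ks = logistic_keystream(x0, len(elements))
            return bytes(e ^ k for e, k in zip(elements, ks))

        chosen = bytes([0x12, 0x7F, 0x03, 0xA0])   # hypothetical syntax elements
        enc = selective_encrypt(chosen)
        print(enc.hex(), selective_encrypt(enc) == chosen)  # XOR is self-inverse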

  18. Calculation of two-dimensional inlet flow fields in a supersonic free stream: Program documentation and test cases

    NASA Technical Reports Server (NTRS)

    Biringen, S. H.; Mcmillan, O. J.

    1980-01-01

    The use of a computer code for the calculation of two-dimensional inlet flow fields in a supersonic free stream and a nonorthogonal mesh-generation code are illustrated by specific examples. Input, output, and program operation and use are given and explained for the case of supercritical inlet operation at a subdesign Mach number (free-stream Mach number M = 2.09) for an isentropic-compression, drooped-cowl inlet. Source listings of the computer codes are also provided.

  19. Design of joint source/channel coders

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The need to transmit large amounts of data over a band-limited channel has led to the development of various data compression schemes. Many of these schemes function by attempting to remove redundancy from the data stream. An unwanted side effect of this approach is to make the information transfer process more vulnerable to channel noise. Efforts at protecting against errors involve the reinsertion of redundancy and an increase in bandwidth requirements. The papers presented within this document attempt to deal with these problems through a number of different approaches.

  20. The Theory of a Free Jet of a Compressible Gas

    NASA Technical Reports Server (NTRS)

    Abramovich, G. N.

    1944-01-01

    In the present report the theory of free turbulence propagation and the boundary layer theory are developed for a plane-parallel free stream of a compressible fluid. In constructing the theory, use was made of Taylor's turbulence hypothesis (transport of vorticity), which gives the best agreement with test results for problems involving heat transfer in free jets.

  1. A defect stream function, law of the wall/wake method for compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Barnwell, Richard W.; Dejarnette, Fred R.; Wahls, Richard A.

    1989-01-01

    The application of the defect stream function to the solution of the two-dimensional, compressible boundary layer is examined. A law of the wall/law of the wake formulation for the inner part of the boundary layer is presented which greatly simplifies the computational task near the wall and eliminates the need for an eddy viscosity model in this region. The eddy viscosity model in the outer region is arbitrary. The modified Crocco temperature-velocity relationship is used as a simplification of the differential energy equation. Formulations for both equilibrium and nonequilibrium boundary layers are presented including a constrained zero-order form which significantly reduces the computational workload while retaining the significant physics of the flow. A formulation for primitive variables is also presented. Results are given for the constrained zero-order and second-order equilibrium formulations and are compared with experimental data. A compressible wake function valid near the wall has been developed from the present results.

  2. Direct numerical simulation of transition and turbulence in a spatially evolving boundary layer

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Moin, Parviz

    1991-01-01

    A high-order-accurate finite-difference approach to direct simulations of transition and turbulence in compressible flows is described. Attention is given to the high-free-stream-disturbance case, in which transition to turbulence occurs close to the leading edge and the computational requirements are correspondingly reduced. A method for numerically generating free-stream disturbances is presented.

  3. Block selective redaction for minimizing loss during de-identification of burned in text in irreversibly compressed JPEG medical images.

    PubMed

    Clunie, David A; Gebow, Dan

    2015-01-01

    Deidentification of medical images requires attention both to header information and to the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and, if necessary, the pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can selectively be confined to only those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described. The process can be applied either to standalone JPEG images or to JPEG bit streams encapsulated in other formats, which in the case of medical images is usually DICOM.
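
    Since baseline JPEG transforms the image in independent 8x8 DCT blocks, the set of blocks that must be redacted and re-encoded can be computed directly from the text region's pixel rectangle (a sketch of the block arithmetic only; handling of DC-coefficient prediction and multi-block MCUs is omitted):

        def blocks_to_redact(text_rect, block=8):
            # Map a pixel-space rectangle (left, top, right, bottom; right and
            # bottom exclusive) to the grid of 8x8 JPEG blocks that overlap it.
            # Only these blocks need re-encoding; the rest pass through intact.
            x0, y0, x1, y1 = text_rect
            xs = range(x0 // block, (x1 - 1) // block + 1)
            ys = range(y0 // block, (y1 - 1) // block + 1)
            return [(by, bx) for by in ys for bx in xs]

        # A 40x16-pixel burned-in text banner touches 18 blocks of the image.
        print(blocks_to_redact((10, 4, 50, 20)))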

  4. A burst compression and expansion technique for variable-rate users in satellite-switched TDMA networks

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1990-01-01

    A burst compression and expansion technique is described for asynchronously interconnecting variable-data-rate users with cost-efficient ground terminals in a satellite-switched, time-division-multiple-access (SS/TDMA) network. Compression and expansion buffers in each ground terminal convert between lower-rate, asynchronous, continuous user data streams and higher-rate TDMA bursts synchronized with the satellite-switched timing. The technique described uses a first-in, first-out (FIFO) memory approach which enables the use of inexpensive clock sources by both the users and the ground terminals and obviates the need for elaborate user clock synchronization processes. A continuous range of data rates from kilobits per second up to rates approaching the modulator burst rate (hundreds of megabits per second) can be accommodated. The technique was developed for use in the NASA Lewis Research Center System Integration, Test, and Evaluation (SITE) facility. Some key features of the technique have also been implemented in the ground terminals developed at NASA Lewis for use in on-orbit evaluation of the Advanced Communications Technology Satellite (ACTS) high burst rate (HBR) system.
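
    A toy model of the FIFO idea (illustrative only; burst sizing, clocking, and guard-time handling in the real terminals are far more involved): bits arrive continuously at the user's clock rate and are released only as complete fixed-size bursts aligned with the TDMA timing:

        from collections import deque

        class BurstCompressor:
            # FIFO that absorbs a continuous low-rate user stream and emits
            # fixed-size high-rate bursts once enough bits have accumulated.
            def __init__(self, burst_size):
                self.fifo = deque()
                self.burst_size = burst_size

            def write(self, bits):
                # Continuous-rate side, driven by the user's own clock.
                self.fifo.extend(bits)

            def read_burst(self):
                # Burst-rate side, driven by the satellite-switched timing.
                if len(self.fifo) < self.burst_size:
                    return None   # not enough data buffered for a full burst
                return [self.fifo.popleft() for _ in range(self.burst_size)]

        bc = BurstCompressor(burst_size=16)
        for tick in range(40):            # one bit arrives per user-clock tick
            bc.write([tick % 2])
            burst = bc.read_burst()
            if burst is not None:
                print("burst at tick", tick, burst)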

  5. Experiments in MPEG-4 content authoring, browsing, and streaming

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Schmidt, Robert L.; Basso, Andrea; Civanlar, Mehmet R.

    2000-12-01

    In this paper, within the context of the MPEG-4 standard, we report on preliminary experiments in three areas: authoring of MPEG-4 content, a player/browser for MPEG-4 content, and streaming of MPEG-4 content. MPEG-4 is a new standard for coding of audiovisual objects; the core of the MPEG-4 standard is complete, while amendments are in various stages of completion. MPEG-4 addresses compression of audio and visual objects, their integration by scene description, and interactivity of users with such objects. The MPEG-4 scene description is based on a VRML-like language for 3D scenes, extended to 2D scenes, and supports integration of 2D and 3D scenes. This scene description language is called BIFS. First, we introduce the basic concepts behind BIFS and then show, with an example, textual authoring of the different components needed to describe an audiovisual scene in BIFS; the textual BIFS is then saved as compressed binary file(s) for storage or transmission. Then, we discuss a high level design of an MPEG-4 player/browser that uses the main components from authoring, such as the encoded BIFS stream, the media files it refers to, and the multiplexed object descriptor stream, to play an MPEG-4 scene. We also discuss our extensions to such a player/browser. Finally, we present our work on streaming of MPEG-4: the payload format, modifications to the client MPEG-4 player/browser, the server-side infrastructure, and example content used in our MPEG-4 streaming experiments.

  6. Investigation of Mixing a Supersonic Stream with the Flow Downstream of a Wedge

    NASA Technical Reports Server (NTRS)

    Sheeley, Joseph

    1997-01-01

    The flow characteristics in the base region of a two-dimensional supersonic compression ramp are investigated. A stream-wise oriented air jet, M = 1.75, is injected through a thin horizontal slot into a supersonic air main flow, M = 2.3, at the end of a two-dimensional compression ramp. The velocity profile and basic characteristics of the flow in the base region immediately following the ramp are determined. Visualization of the flowfield for qualitative observations is accomplished via Dark Central Ground Interferometry (DCGI). Two-dimensional velocity profiles are obtained using Laser Doppler Velocimetry (LDV). The study is the initial phase of a four-year investigation of base flow mixing. The current study is to provide more details of the flowfield.

  7. Technology Directions for the 21st Century. Volume 4

    NASA Technical Reports Server (NTRS)

    Crimi, Giles; Verheggen, Henry; Botta, Robert; Paul, Heywood; Vuong, Xuyen

    1998-01-01

    Data compression is an important tool for reducing the bandwidth of communications systems, and thus for reducing the size, weight, and power of spacecraft systems. For data requiring lossless transmissions, including most science data from spacecraft sensors, small compression factors of two to three may be expected. Little improvement can be expected over time. For data that is suitable for lossy compression, such as video data streams, much higher compression factors can be expected, such as 100 or more. More progress can be expected in this branch of the field, since there is more hidden redundancy and many more ways to exploit that redundancy.

  8. Effect of water temperature and air stream velocity on performance of direct evaporative air cooler for thermal comfort

    NASA Astrophysics Data System (ADS)

    Aziz, Azridjal; Mainil, Rahmat Iman; Mainil, Afdhal Kurniawan; Listiono, Hendra

    2017-01-01

    The aim of this work was to determine the effects of water temperature and air stream velocity on the performance of a direct evaporative air cooler (DEAC) for thermal comfort. A DEAC system has a lower operating cost than a vapor compression refrigeration system (VCRS), because a VCRS uses a compressor to circulate refrigerant while a DEAC uses only a pump to circulate water in the cooling process. The study was conducted by varying the water temperature (10°C, 20°C, 30°C, 40°C, and 50°C) at different air stream velocities (2.93 m/s, 3.9 m/s, and 4.57 m/s). The results show that the relative humidity (RH) in the test room tends to increase with increasing water temperature, because the amount of water that evaporates increases with the water temperature; under variation of air stream velocity, RH remains constant at a given water temperature. The cooling effectiveness (CE) increases with increasing air stream velocity, and the highest CE was obtained at the lowest water temperature (10°C) with high air velocity (4.57 m/s). The lowest room temperature (26°C) was achieved at a water temperature of 10°C and an air stream velocity of 4.57 m/s, with a relative humidity of 85.87%. A DEAC can be successfully used in rooms that have smooth air circulation to fulfill indoor thermal comfort requirements.

  9. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
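
    A sketch of the frame-differencing idea mentioned above (the block size and threshold are arbitrary choices here): only blocks that changed relative to the previous frame are encoded and sent, which is what keeps software-only decoding cheap:

        import numpy as np

        def frame_diff_encode(prev, cur, block=8, thresh=2.0):
            # Keep only the blocks whose mean absolute difference from the
            # previous frame exceeds the threshold.
            updates = []
            for y in range(0, cur.shape[0], block):
                for x in range(0, cur.shape[1], block):
                    c = cur[y:y + block, x:x + block]
                    p = prev[y:y + block, x:x + block]
                    if np.abs(c.astype(int) - p.astype(int)).mean() > thresh:
                        updates.append((y, x, c.copy()))
            return updates

        def frame_diff_decode(prev, updates):
            # Patch the changed blocks onto the previously decoded frame.
            out = prev.copy()
            for y, x, blk in updates:
                out[y:y + blk.shape[0], x:x + blk.shape[1]] = blk
            return out

        rng = np.random.default_rng(1)
        f0 = rng.integers(0, 256, (32, 32), dtype=np.uint8)
        f1 = f0.copy()
        f1[8:16, 8:16] += 40            # only one 8x8 region changes
        ups = frame_diff_encode(f0, f1)
        print(len(ups), np.array_equal(frame_diff_decode(f0, ups), f1))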

  10. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    PubMed

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the stored or transmitted bit stream. I applied it to compression of multichannel ECG data, and I also present a specific procedure based on the modified algorithm for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for compression of multichannel ECG data. Furthermore, in order to compress a signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  11. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  12. OTEC gas desorption studies

    NASA Astrophysics Data System (ADS)

    Chen, F. C.; Golshani, A.

    1982-02-01

    Experiments on deaeration in packed columns and barometric intake systems, and with hydraulic air compression for open-cycle OTEC systems, are reported. A gas desorption test loop consisting of water storage tanks, a vacuum system, a liquid recirculating system, an air supply, a column test section, and two barometric leg test sections was used to perform the tests. The aerated water was directed through columns filled with either ceramic Raschig rings or plastic pall rings, and the system vacuum pressure, which drives the deaeration process, was found to be dependent on water velocity and intake pipe height. The addition of a barometric intake pipe increased the deaeration effect by 10%, and further tests were run with lengths of PVC pipe as a potential means for noncondensibles disposal through hydraulic air compression. Using the kinetic energy of the effluent flow to condense steam in the noncondensible stream improved the system efficiency.

  13. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi-view coding. Though these prediction structures, along with QP cascading schemes, provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method is also proposed for identifying the temporal identifier of legacy H.264/AVC base layer packets. Simulation results also show that this enables the scenario where the enhancement views can be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time per view component of only 0.38 ms.
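
    The temporal-identifier layering can be pictured with a dyadic hierarchical GOP (a sketch under the usual hierarchical-B assumption; the paper's MVC prediction structures are more elaborate): dropping every frame above a chosen layer thins the stream to 1/2, 1/4, ... of the full frame rate:

        def temporal_id(poc, gop=8):
            # Dyadic layering: a frame's picture order count determines the
            # lowest temporal layer whose sub-stream still contains it.
            tid, step = 0, gop
            while poc % step:
                step //= 2
                tid += 1
            return tid

        frames = range(16)
        print([temporal_id(p) for p in frames])  # [0, 3, 2, 3, 1, 3, 2, 3, 0, ...]

        # Bit rate adaptation: keep layers 0..2 for half the full frame rate.
        print([p for p in frames if temporal_id(p) <= 2])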

  14. An immersive surgery training system with live streaming capability.

    PubMed

    Yang, Yang; Guo, Xinqing; Yu, Zhan; Steiner, Karl V; Barner, Kenneth E; Bauer, Thomas L; Yu, Jingyi

    2014-01-01

    Providing real-time, interactive immersive surgical training has been a key research area in telemedicine. Earlier approaches have mainly adopted videotaped training that can only show imagery from a fixed viewpoint. Recent advances in commodity 3D imaging have enabled a new paradigm for immersive surgical training by acquiring nearly complete 3D reconstructions of actual surgical procedures. However, unlike 2D videotaping, which can easily stream data in real time, 3D-imaging-based solutions have so far required pre-capturing and processing the data; surgical training using the data has to be conducted offline after the acquisition. In this paper, we present a new real-time immersive 3D surgical training system. Our solution builds upon the recent multi-Kinect based surgical training system [1] that can acquire and display high fidelity 3D surgical procedures using only a small number of Microsoft Kinect sensors. We build on top of the system a client-server model for real-time streaming. On the server front, we efficiently fuse multiple Kinect data streams acquired from different viewpoints, then compress and stream the data to the client. On the client front, we build an interactive space-time navigator to allow remote users (e.g., trainees) to witness the surgical procedure in real time as if they were present in the room.

  15. Reduction of aerobic and lactic acid bacteria in dairy desludge using an integrated compressed CO2 and ultrasonic process.

    PubMed

    Overton, Tim W; Lu, Tiejun; Bains, Narinder; Leeke, Gary A

    Current treatment routes are not suitable for reducing and stabilising the bacterial content of some dairy process streams, such as separator and bactofuge desludges, which currently present a major emission problem for dairy producers. In this study, a novel method for the processing of desludge was developed. The new method, elevated pressure sonication (EPS), uses a combination of low frequency ultrasound (20 kHz) and elevated CO2 pressure (50 to 100 bar). Process conditions (pressure, sonicator power, processing time) were optimised for batch and continuous EPS processes to reduce viable numbers of aerobic and lactic acid bacteria in bactofuge desludge by at least three orders of magnitude (≥3-log). Coagulation of proteins present in the desludge also occurred, causing separation of solid (curd) and liquid (whey) fractions. The proposed process offers a 10-fold reduction in energy compared to high temperature short time (HTST) treatment of milk.

  16. Extended frequency turbofan model

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Park, J. W.; Jaekel, R. F.

    1980-01-01

    The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.

  17. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
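
    A compact sketch of that layout, assuming a deliberately simplified LZSS (single-byte offsets and lengths, no overlapping matches); the point is the flag bits accumulating in their own buffer and being appended, with their count, after all the byte-aligned literals and pointers:

        def lzss_compress(data, window=255, min_match=3):
            out, flags, i = bytearray(), [], 0
            while i < len(data):
                best_off = best_len = 0
                for j in range(max(0, i - window), i):
                    k = 0
                    while (i + k < len(data) and j + k < i
                           and data[j + k] == data[i + k]):
                        k += 1
                    if k > best_len:
                        best_off, best_len = i - j, k
                if best_len >= min_match:
                    flags.append(1)                 # flag 1: (offset, length)
                    out += bytes((best_off, best_len))
                    i += best_len
                else:
                    flags.append(0)                 # flag 0: raw literal byte
                    out.append(data[i])
                    i += 1
            packed = bytearray()                    # pack flags 8 per byte,
            for k in range(0, len(flags), 8):       # left-aligned in each byte
                byte = 0
                for b in flags[k:k + 8]:
                    byte = (byte << 1) | b
                packed.append(byte << (8 - len(flags[k:k + 8])))
            # Flags (plus their count) go after the byte-aligned data, so the
            # decoder touches individual bits only in this trailing section.
            return bytes(out) + bytes(packed) + len(flags).to_bytes(4, "big")

        print(lzss_compress(b"abcabcabcabcxyz").hex())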

  18. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence it can be implemented very easily in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those zeros that form all-zero branches. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
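
    A small sketch of the zerotree pruning on a toy three-level quadtree pyramid (the symbol alphabet and scanning order of the actual coder are richer than this): a zero coefficient whose descendants are all zero is emitted as a single 'Z' symbol, so whole branches of zeros are never transmitted:

        import numpy as np

        def children(level, y, x, levels):
            # Quadtree parent-child links: coefficient (y, x) at `level` has
            # four children at the next finer level of the pyramid.
            if level + 1 >= levels:
                return []
            return [(level + 1, 2 * y + dy, 2 * x + dx)
                    for dy in (0, 1) for dx in (0, 1)]

        def is_zerotree(pyr, level, y, x):
            if pyr[level][y, x] != 0:
                return False
            return all(is_zerotree(pyr, *c)
                       for c in children(level, y, x, len(pyr)))

        def zt_encode(pyr, level, y, x, out):
            if is_zerotree(pyr, level, y, x):
                out.append('Z')               # one symbol covers the subtree
            else:
                out.append(pyr[level][y, x])  # value, then visit children
                for c in children(level, y, x, len(pyr)):
                    zt_encode(pyr, *c, out)

        pyr = [np.zeros((2, 2)), np.zeros((4, 4)), np.zeros((8, 8))]
        pyr[0][0, 0], pyr[1][0, 0] = 9, 5     # only two nonzero coefficients
        out = []
        for y in range(2):
            for x in range(2):
                zt_encode(pyr, 0, y, x, out)
        print(len(out), "symbols for", sum(a.size for a in pyr), "coefficients")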

  19. NW Lab for Integrated Systems

    DTIC Science & Technology

    1991-05-24

    hardware data compressors. [BuBo89, BuBo90, BuBo91] The data compression scheme of Ziv and Lempel repeatedly matches the input stream to words contained ... most significantly reduce dictionary size requirements in practical Ziv-Lempel encoders, without compromising compression. However, the additional ... achieve a fixed 20 MB/sec data rate. Thus, our Ziv-Lempel implementation realizes a speed improvement of 10 to 20 times that of the fastest recent

  20. Compressed-air flow control system.

    PubMed

    Bong, Ki Wan; Chapin, Stephen C; Pregibon, Daniel C; Baah, David; Floyd-Smith, Tamara M; Doyle, Patrick S

    2011-02-21

    We present the construction and operation of a compressed-air driven flow system that can be used for a variety of microfluidic applications that require rapid dynamic response and precise control of multiple inlet streams. With the use of inexpensive and readily available parts, we describe how to assemble this versatile control system and further explore its utility in continuous- and pulsed-flow microfluidic procedures for the synthesis and analysis of microparticles.

  1. Stability analysis applied to the early stages of viscous drop breakup by a high-speed gas stream

    NASA Astrophysics Data System (ADS)

    Padrino, Juan C.; Longmire, Ellen K.

    2013-11-01

    The instability of a liquid drop suddenly exposed to a high-speed gas stream behind a shock wave is studied by considering the gas-liquid motion at the drop interface. The discontinuous velocity profile given by the uniform, parallel flow of an inviscid, compressible gas over a viscous liquid is considered, and drop acceleration is included. Our analysis considers compressibility effects not only in the base flow, but also in the equations of motion for the perturbations. Recently published high-resolution images of the process of drop breakup by a passing shock have provided experimental evidence supporting the idea that a critical gas dynamic pressure can be found above which drop piercing by the growth of acceleration-driven instabilities gives way to drop breakup by liquid entrainment resulting from the gas shearing action. For a set of experimental runs from the literature, results show that, for shock Mach numbers >= 2, a band of rapidly growing waves forms in the region well upstream of the drop's equator at the location where the base flow passes from subsonic to supersonic, in agreement with experimental images. Also, the maximum growth rate can be used to predict the transition of the breakup mode from Rayleigh-Taylor piercing to shear-induced entrainment. The authors acknowledge support of the NSF (DMS-0908561).

  2. Polymeric compositions and their method of manufacture. [forming filled polymer systems using cryogenics

    NASA Technical Reports Server (NTRS)

    Moser, B. G.; Landel, R. F. (Inventor)

    1972-01-01

    Filled polymer compositions are made by dissolving the polymer binder in a suitable sublimable solvent, mixing the filler material with the polymer and its solvent, freezing the resultant mixture, and subliming the frozen solvent out of the mixture. The remaining composition is suitable for conventional processing such as compression molding or extruding. A particular feature of the method of manufacture is pouring the mixed solution slowly in a continuous stream into a cryogenic bath, wherein frozen particles of the mixture result. The frozen individual particles are then subjected to sublimation.

  3. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.

  4. Inlet Development for a Rocket Based Combined Cycle, Single Stage to Orbit Vehicle Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    DeBonis, J. R.; Trefny, C. J.; Steffen, C. J., Jr.

    1999-01-01

    Design and analysis of the inlet for a rocket based combined cycle engine is discussed. Computational fluid dynamics was used in both the design and subsequent analysis. Reynolds-averaged Navier-Stokes simulations were performed using both perfect gas and real gas assumptions. An inlet design that operates over the required Mach number range from 0 to 12 was produced. Performance data for cycle analysis were post-processed using a stream thrust averaging technique. A detailed performance database for cycle analysis is presented. The effect of vehicle forebody compression on air capture is also examined.

  5. Self-Regulating Water-Separator System for Fuel Cells

    NASA Technical Reports Server (NTRS)

    Vasquez, Arturo; McCurdy, Kerri; Bradley, Karla F.

    2007-01-01

    The proposed system would perform multiple coordinated functions in regulating the pressure of the oxidant gas (usually, pure oxygen) flowing to a fuel-cell stack and in removing excess product water that is generated in normal fuel-cell operation. The system could function in the presence or absence of gravitation, and in any orientation in a gravitational field. Unlike some prior systems for removing product water, the proposed system would not depend on the hydrophobicity or hydrophilicity of surfaces that are subject to fouling and, consequently, to gradual deterioration in performance. Also unlike some prior systems, the proposed system would not include actively controlled electric motors for pumping; instead, motive power for separation and pumping away of product water would be derived primarily from the oxidant flow and perhaps secondarily from the fuel flow. The net effect of these and other features would be to make the proposed system more reliable and safer relative to the prior systems. The proposed system (see figure) would include a pressure regulator and sensor in the oxidant supply just upstream of an ejector reactant pump. The pressure of the oxidant supply would depend on the consumption flow. In one of two control subsystems, the pressure of oxidant flowing from the supply to the ejector would be sensed and used to control the speed of a reciprocating constant-displacement pump so that the volumetric flow of nominally incompressible water away from the system would slightly exceed the rate at which water was produced by the fuel cell(s). The two-phase (gas/liquid water) outlet stream from the fuel cell(s) would enter the water separator, a turbine-like centrifugal separator machine driven primarily by the oxidant gas stream. A second control subsystem would utilize feedback derived from the compressibility of the outlet stream: as the separator was emptied of liquid water, the compressibility of the pumped stream would increase. The compressibility would be sensed, and an increase in compressibility beyond a preset point (signifying a decrease in water content below an optimum low level) would cause the outflow from the reciprocating pump to be diverted back to the separator to recycle some water.

  6. Solar Wind Features Responsible for Magnetic Storms and Substorms During the Declining Phase of the Solar Cycle: 1974

    NASA Technical Reports Server (NTRS)

    Tsurutani, B.; Arballo, J.

    1994-01-01

    We examine interplanetary data and geomagnetic activity indices during 1974 when two long-lasting solar wind corotating streams existed. We find that only 3 major storms occurred during 1974, and all were associated with coronal mass ejections. Each high speed stream was led by a shock, so the three storms had sudden commencements. Two of the 1974 major storms were associated with shock compression of preexisting southward fields and one was caused by southward fields within a magnetic cloud. Corotating streams were responsible for recurring moderate to weak magnetic storms.

  7. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique, in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove the redundancy of inter-frames along the temporal direction using motion compensated temporal filtering; thus high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.

  8. Treatment of low level radioactive liquid waste containing appreciable concentration of TBP degraded products.

    PubMed

    Valsala, T P; Sonavane, M S; Kore, S G; Sonar, N L; De, Vaishali; Raghavendra, Y; Chattopadyaya, S; Dani, U; Kulkarni, Y; Changrani, R D

    2011-11-30

    The acidic and alkaline low level radioactive liquid waste (LLW) generated during the concentration of high level radioactive liquid waste (HLW) prior to vitrification and during ion exchange treatment of intermediate level radioactive liquid waste (ILW), respectively, are decontaminated by chemical co-precipitation before discharge to the environment. The LLW stream generated from the ion exchange treatment of ILW contained high concentrations of carbonates, tributyl phosphate (TBP) degraded products, and problematic radionuclides like (106)Ru and (99)Tc. The presence of TBP degraded products interfered with the co-precipitation process. In view of this, a modified chemical treatment scheme was formulated for the treatment of this waste stream. By mixing the acidic LLW and alkaline LLW, the carbonates in the alkaline LLW were destroyed and the TBP degraded products separated as a layer at the top of the vessel. By making use of the modified co-precipitation process, the effluent stream (1-2 μCi/L) became dischargeable to the environment after appropriate dilution. Based on the lab scale studies, about 250 m³ of LLW was treated in the plant. The relatively high activity of the separated TBP degraded products was due to the short-lived (90)Y isotope. The cement waste product prepared using the TBP degraded products had good chemical durability and compressive strength.

  9. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
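
    A sketch of the quantization step with subtractive dithering, assuming NumPy (the Rice entropy-coding stage is omitted; in the actual tiled-image convention the dither sequence is regenerated from a stored seed rather than transmitted):

        import numpy as np

        rng = np.random.default_rng(42)
        pixels = rng.normal(1000.0, 50.0, size=100_000).astype(np.float32)

        scale = 0.5                        # quantization step: precision vs. ratio
        dither = rng.random(pixels.size)   # one uniform offset per pixel

        # Convert floats to scaled integers; these compress well with Rice coding.
        q = np.floor(pixels / scale + dither).astype(np.int32)

        # Subtractive dithering: remove the same offset when dequantizing, which
        # decorrelates the quantization error from the signal and preserves
        # photometric precision without adding noise.
        restored = (q - dither + 0.5) * scale
        print(bool(np.abs(restored - pixels).max() <= 0.5 * scale + 1e-9))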

  10. Hey! A Chigger Bit Me!

    MedlinePlus

    ... over the place, including in grassy fields, along lakes and streams, and in forests. There are adult ... some calamine lotion or a cold compress (like ice wrapped in a clean towel) on the area. ...

  11. Plating by glass-bead peening

    NASA Technical Reports Server (NTRS)

    Babecki, A. J.; Haehner, C. L.

    1971-01-01

    Technique permits plating of primarily metallic substrates with either metals or nonmetals at normal temperature. Peening uses compressed air to apply concurrent streams of small glass beads and powdered plating material to the substrate.

  12. Experimental and numerical investigations of resonant acoustic waves in near-critical carbon dioxide.

    PubMed

    Hasan, Nusair; Farouk, Bakhtier

    2015-10-01

    Flow and transport induced by resonant acoustic waves in a near-critical-fluid-filled cylindrical enclosure are investigated both experimentally and numerically. Supercritical carbon dioxide (near the critical or pseudo-critical states) in a confined resonator is subjected to an acoustic field created by an electro-mechanical acoustic transducer, and the induced pressure waves are measured by a fast-response pressure field microphone. The frequency of the acoustic transducer is chosen such that the lowest acoustic mode propagates along the enclosure. For numerical simulations, a real-fluid computational fluid dynamics model representing the thermo-physical and transport properties of the supercritical fluid is considered. The simulated acoustic field in the resonator is compared with measurements. The formation of acoustic streaming structures in the highly compressible medium is revealed by time-averaging the numerical solutions over a given period. Due to the diverging thermo-physical properties of a supercritical fluid near the critical point, large-scale oscillations are generated even for small sound field intensities. The strength of the acoustic wave field is found to be in direct relation with the thermodynamic state of the fluid. The effects of near-critical property variations and of the operating pressure on the formation of the streaming structures are also investigated. Irregular streaming patterns with significantly higher streaming velocities are observed for near-pseudo-critical states at operating pressures close to the critical pressure. However, these structures quickly re-orient to the typical Rayleigh streaming patterns as the operating pressure increases.

  13. SOLIDIFICATION OF THE HANFORD LAW WASTE STREAM PRODUCED AS A RESULT OF NEAR-TANK CONTINUOUS SLUDGE LEACHING AND SODIUM HYDROXIDE RECOVERY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reigel, M.; Johnson, F.; Crawford, C.

    2011-09-20

    The U.S. Department of Energy (DOE), Office of River Protection (ORP), is responsible for the remediation and stabilization of the Hanford Site tank farms, including 53 million gallons of highly radioactive mixed waste contained in 177 underground tanks. The plan calls for all waste retrieved from the tanks to be transferred to the Waste Treatment Plant (WTP). The WTP will consist of three primary facilities, including pretreatment facilities for Low Activity Waste (LAW) to remove aluminum, chromium, and other solids and radioisotopes that are undesirable in the High Level Waste (HLW) stream. Removal of aluminum from HLW sludge can be accomplished through continuous sludge leaching of the aluminum from the HLW sludge as sodium aluminate; however, this process will introduce a significant amount of sodium hydroxide into the waste stream and consequently will increase the volume of waste to be dispositioned. A sodium recovery process is needed to remove the sodium hydroxide and recycle it back to the aluminum dissolution process. The resulting LAW stream has a high concentration of aluminum and sodium and will require alternative immobilization methods. Five waste forms were evaluated for immobilization of LAW at Hanford after the sodium recovery process. The waste forms considered for these waste streams include low temperature processes (Saltstone/Cast Stone and geopolymers), intermediate temperature processes (steam reforming and phosphate glasses), and high temperature processes (vitrification). These immobilization methods and the waste forms produced were evaluated for (1) compliance with the Performance Assessment (PA) requirements for disposal at the IDF, (2) waste form volume (waste loading), and (3) compatibility with the tank farms and systems. The iron phosphate glasses tested using the product consistency test had normalized release rates lower than the waste form requirements, although the CCC glasses had higher release rates than the quenched glasses. However, the waste form failed to meet the vapor hydration test criteria listed in the WTP contract. In addition, the waste loadings in the phosphate glasses were not as high as in other candidate waste forms. Vitrification of HLW waste as borosilicate glass is a proven process; however, the HLW and LAW streams at Hanford can vary significantly from waste currently being immobilized. The CCC glasses show lower release rates for B and Na than the quenched glasses, and all glasses meet the acceptance criterion of < 4 g/L. Glass samples spiked with Re2O7 also passed the PCT test. However, further vapor hydration testing must be performed, since all the samples cracked and the test could not be completed. The waste loadings of the iron phosphate and borosilicate glasses are approximately 20 and 25%, respectively. The steam reforming process produced the predicted waste form for both the high and low aluminate waste streams. The predicted waste loading for the monolithic samples is approximately 39%, which is higher than that of the glass waste forms; however, at the time of this report, no monolithic samples had been made and therefore compliance with the PA cannot be determined. The waste loading in the geopolymer is approximately 40% but can vary with the sodium hydroxide content of the waste stream. Initial geopolymer mixes showed compressive strengths greater than 500 psi for the low aluminate mixes and less than 500 psi for the high aluminate mixes. Further testing needs to be performed to formulate a geopolymer waste form made using a high aluminate salt solution. A cementitious waste form has the advantage that the process is performed at ambient conditions and is a proven process currently in use for LAW disposal. The Saltstone/Cast Stone formulated using low and high aluminate salt solutions retained at least 97% of the Re that was added to the mix as a dopant. While these data are promising, additional leach testing must be performed to show compliance with the PA. Compressive strength tests must also be performed on the Cast Stone monoliths to verify PA compliance. Based on the testing performed for this report, borosilicate glass and Cast Stone are the recommended waste forms for further testing. Both are proven technologies for radioactive waste disposal, and the initial testing using simulated Hanford LAW waste shows compliance with the PA. Both are resistant to leaching and have greater than 25% waste loading.

  14. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.

  15. MPEG content summarization based on compressed domain feature analysis

    NASA Astrophysics Data System (ADS)

    Sugano, Masaru; Nakajima, Yasuyuki; Yanagihara, Hiromasa

    2003-11-01

    This paper addresses automatic summarization of MPEG audiovisual content in the compressed domain. By analyzing semantically important low-level and mid-level audiovisual features, our method universally summarizes MPEG-1/-2 content in the form of a digest or highlights. The former is a shortened version of the original, while the latter is an aggregation of important or interesting events. In our proposal, the incoming MPEG stream is first segmented into shots and the above features are derived from each shot. The features are then adaptively evaluated in an integrated manner, and finally the qualifying shots are aggregated into a summary. Since all the processing is performed entirely in the compressed domain, summarization is achieved at very low computational cost. The experimental results show that news highlights and sports highlights in TV baseball games can be successfully extracted according to simple shot transition models. As for digest extraction, subjective evaluation proves that meaningful shots are extracted from content without a priori knowledge, even if it contains multiple genres of programs. Our method also has the advantage of generating an MPEG-7 based description, such as summary and audiovisual segments, in the course of summarization.
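
    The shot-aggregation step can be pictured as a greedy selection under a duration budget (a toy stand-in for the paper's adaptive evaluation; the scores here would come from the compressed-domain features):

        def summarize(shots, budget):
            # Rank shots by importance score, keep as many as fit in the time
            # budget, then restore presentation order for the final digest.
            ranked = sorted(shots, key=lambda s: s["score"], reverse=True)
            digest, used = [], 0.0
            for s in ranked:
                if used + s["dur"] <= budget:
                    digest.append(s)
                    used += s["dur"]
            return sorted(digest, key=lambda s: s["start"])

        shots = [{"start": 0, "dur": 5, "score": 0.9},
                 {"start": 5, "dur": 8, "score": 0.2},
                 {"start": 13, "dur": 4, "score": 0.7}]
        print(summarize(shots, budget=10))   # keeps the two highest-scoring shots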

  16. Discontinuity minimization for omnidirectional video projections

    NASA Astrophysics Data System (ADS)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head mounted devices and for television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the stage of conversion from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, a projection origin rotation may be found that provides optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  17. Equilibrium and rate data for the extraction of lipids using compressed carbon dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, M.B.; Bott, T.R.; Barr, M.J.

    1987-01-01

Equilibrium data are given for the solubilities in compressed CO2 of the lipid components in freshly ground rape seed and of glycerol trioleate (a typical constituent of rape oil) at pressures up to 200 bar and temperatures from 25 to 75°C. Continuous-flow tests in which a bed of ground rape seed was contacted with a stream of liquid CO2 at 25°C under varied flow conditions are also reported. The results are collated in terms of an empirical mass transfer coefficient. A sharp change took place in the lipid concentration in the extractant stream leaving the bed when about 65% of the available oil had been extracted. This, and changes in the composition of the extract, are discussed, together with the use of this type of data for design purposes.

  18. Computer program for calculating laminar, transitional, and turbulent boundary layers for a compressible axisymmetric flow

    NASA Technical Reports Server (NTRS)

    Albers, J. A.; Gregg, J. L.

    1974-01-01

    A finite-difference program is described for calculating the viscous compressible boundary layer flow over either planar or axisymmetric surfaces. The flow may be initially laminar and progress through a transitional zone to fully turbulent flow, or it may remain laminar, depending on the imposed boundary conditions, laws of viscosity, and numerical solution of the momentum and energy equations. The flow may also be forced into a turbulent flow at a chosen spot by the data input. The input may contain the factors of arbitrary Reynolds number, free-stream Mach number, free-stream turbulence, wall heating or cooling, longitudinal wall curvature, wall suction or blowing, and wall roughness. The solution may start from an initial Falkner-Skan similarity profile, an approximate equilibrium turbulent profile, or an initial arbitrary input profile.

  19. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    PubMed

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  20. Electron heating within interaction zones of simple high-speed solar wind streams

    NASA Technical Reports Server (NTRS)

    Feldman, W. C.; Asbridge, J. R.; Bame, S. J.; Gosling, J. T.; Lemons, D. S.

    1978-01-01

    In the present paper, electron heating within the high-speed portions of three simple stream-stream interaction zones is studied to further our understanding of the physics of heat flux regulation in interplanetary space. To this end, the thermal signals present in the compressions at the leading edges of the simple high-speed streams are analyzed, showing that the data are inconsistent with the Spitzer conductivity. Instead, a polynomial law is found to apply. Its implication concerning the mechanism of interplanetary heat conduction is discussed, and the results of applying this conductivity law to high-speed flows inside of 1 AU are studied. A self-consistent model of the radial evolution of electrons in the high-speed solar wind is proposed.

  1. THRSTER: A THRee-STream Ejector Ramjet Analysis and Design Tool

    NASA Technical Reports Server (NTRS)

    Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.

    2000-01-01

An engineering tool for analyzing ejectors in rocket-based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space-marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn State and NASA-ART RBCC experiments were compared with the results generated by the code. The calculated solutions were generally found to be in satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters in which the code generates valid results is presented and discussed.
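
    The stipulated-diffusion bookkeeping can be pictured with a toy space-march over mass flow alone (the real tool integrates momentum and energy as well; these names and rates are invented for illustration):

        def march_mass(mdot_p, mdot_s, sigma_p, sigma_s, length, dx=0.01):
            """Toy quasi-1-D march: the primary and secondary streams bleed mass
            into the mixed stream at stipulated rates sigma_* (kg/s per meter)."""
            mdot_m, x = 0.0, 0.0
            while x < length:
                dm_p = min(sigma_p * dx, mdot_p)   # primary -> mixed
                dm_s = min(sigma_s * dx, mdot_s)   # secondary -> mixed
                mdot_p -= dm_p
                mdot_s -= dm_s
                mdot_m += dm_p + dm_s
                x += dx
            return mdot_p, mdot_s, mdot_m          # total mass flow is conserved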

  2. THRSTER: A Three-Stream Ejector Ramjet Analysis and Design Tool

    NASA Technical Reports Server (NTRS)

    Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.; Komar, D. R. (Technical Monitor)

    2000-01-01

An engineering tool for analyzing ejectors in rocket-based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space-marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn State and NASA-ART RBCC experiments were compared with the results generated by the code. The calculated solutions were generally found to be in satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters in which the code generates valid results is presented and discussed.

  3. Retrofit device and method to improve humidity control of vapor compression cooling systems

    DOEpatents

    Roth, Robert Paul; Hahn, David C.; Scaringe, Robert P.

    2016-08-16

A method and device for improving the moisture-removal capacity of a vapor compression system are disclosed. The vapor compression system is started up with the evaporator blower initially set to a high speed. The relative humidity in the return air stream is measured with the evaporator blower operating at the high speed. If the measured humidity is above a predetermined high relative-humidity value, the evaporator blower speed is reduced from the initially set high speed to the lowest possible speed. The device is a control board connected to the blower; it uses a predetermined change in measured relative humidity to control the blower motor speed.
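
    The control logic reduces to a few lines; the setpoint value and the speed encoding below are illustrative, not taken from the patent:

        def dehumidify_startup(read_return_air_rh, set_blower_speed,
                               rh_high_limit=60.0, high_speed="HIGH", low_speed="LOW"):
            """Start the evaporator blower at high speed, then drop to the lowest
            speed if return-air relative humidity exceeds the high limit."""
            set_blower_speed(high_speed)       # start-up condition
            rh = read_return_air_rh()          # RH measured while running at high speed
            if rh > rh_high_limit:
                set_blower_speed(low_speed)    # slower air over the coil condenses more moisture
            return rh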

  4. Interactive browsing of 3D environment over the Internet

    NASA Astrophysics Data System (ADS)

    Zhang, Cha; Li, Jin

    2000-12-01

In this paper, we describe a system for wandering in a realistic environment over the Internet. The environment is captured by the concentric mosaic, compressed via the reference block coder (RBC), and accessed and delivered over the Internet through the virtual media (Vmedia) access protocol. Capturing the environment through the concentric mosaic is easy: we mount a camera at the end of a level beam and shoot images as the beam rotates. The huge dataset of the concentric mosaic is then compressed through the RBC, which is specifically designed for both high compression efficiency and just-in-time (JIT) rendering. Through the JIT rendering function, only a portion of the RBC bitstream is accessed, decoded and rendered for each virtual view. A multimedia communication protocol, the Vmedia protocol, is then proposed to deliver the compressed concentric mosaic data over the Internet. Only the bitstream segments corresponding to the current view are streamed over the Internet. Moreover, the delivered bitstream segments are managed by a local Vmedia cache so that frequently used bitstream segments need not be streamed over the Internet repeatedly, and Vmedia is able to handle an RBC bitstream larger than its memory capacity. A Vmedia concentric mosaic interactive browser is developed in which the user can freely wander in a realistic environment, e.g., rotate around, walk forward/backward and sidestep, even under a tight bandwidth of 33.6 kbps.
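
    The cache behavior can be approximated with a least-recently-used policy (an assumption; the paper does not state its eviction rule):

        from collections import OrderedDict

        class SegmentCache:
            """Hypothetical Vmedia-style cache: bitstream segments fetched for a
            view are retained so repeated views need no re-transmission."""

            def __init__(self, capacity_bytes: int):
                self.cap, self.used = capacity_bytes, 0
                self.store = OrderedDict()

            def get(self, seg_id, fetch):
                if seg_id in self.store:
                    self.store.move_to_end(seg_id)       # mark as recently used
                    return self.store[seg_id]
                data = fetch(seg_id)                     # stream over the network
                self.store[seg_id] = data
                self.used += len(data)
                while self.used > self.cap:              # evict least-recently-used
                    _, old = self.store.popitem(last=False)
                    self.used -= len(old)
                return data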

  5. Strong wave/mean-flow coupling in baroclinic acoustic streaming

    NASA Astrophysics Data System (ADS)

    Chini, Greg; Michel, Guillaume

    2017-11-01

    Recently, Chini et al. demonstrated the potential for large-amplitude acoustic streaming in compressible channel flows subjected to strong background cross-channel density variations. In contrast with classic Rayleigh streaming, standing acoustic waves of O (ɛ) amplitude acquire vorticity owing to baroclinic torques acting throughout the domain rather than via viscous torques acting in Stokes boundary layers. More significantly, these baroclinically-driven streaming flows have a magnitude that also is O (ɛ) , i.e. comparable to that of the sound waves. In the present study, the consequent potential for fully two-way coupling between the waves and streaming flows is investigated using a novel WKBJ analysis. The analysis confirms that the wave-driven streaming flows are sufficiently strong to modify the background density gradient, thereby modifying the leading-order acoustic wave structure. Simulations of the wave/mean-flow system enabled by the WKBJ analysis are performed to illustrate the nature of the two-way coupling, which contrasts sharply with classic Rayleigh streaming, for which the waves can first be determined and the streaming flows subsequently computed.

  6. Development of a defect stream function, law of the wall/wake method for compressible turbulent boundary layers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wahls, Richard A.

    1990-01-01

    The method presented is designed to improve the accuracy and computational efficiency of existing numerical methods for the solution of flows with compressible turbulent boundary layers. A compressible defect stream function formulation of the governing equations assuming an arbitrary turbulence model is derived. This formulation is advantageous because it has a constrained zero-order approximation with respect to the wall shear stress and the tangential momentum equation has a first integral. Previous problems with this type of formulation near the wall are eliminated by using empirically based analytic expressions to define the flow near the wall. The van Driest law of the wall for velocity and the modified Crocco temperature-velocity relationship are used. The associated compressible law of the wake is determined and it extends the valid range of the analytical expressions beyond the logarithmic region of the boundary layer. The need for an inner-region eddy viscosity model is completely avoided. The near-wall analytic expressions are patched to numerically computed outer region solutions at a point determined during the computation. A new boundary condition on the normal derivative of the tangential velocity at the surface is presented; this condition replaces the no-slip condition and enables numerical integration to the surface with a relatively coarse grid using only an outer region turbulence model. The method was evaluated for incompressible and compressible equilibrium flows and was implemented into an existing Navier-Stokes code using the assumption of local equilibrium flow with respect to the patching. The method has proven to be accurate and efficient.

  7. The Formation of CIRs at Stream-Stream Interfaces and Resultant Geomagnetic Activity

    NASA Technical Reports Server (NTRS)

    Richardson, I. G.

    2005-01-01

    Corotating interaction regions (CIRs) are regions of compressed plasma formed at the leading edges of corotating high-speed solar wind streams originating in coronal holes as they interact with the preceding slow solar wind. Although particularly prominent features of the solar wind during the declining and minimum phases of the 11-year solar cycle, they may also be present at times of higher solar activity. We describe how CIRs are formed, and their geomagnetic effects, which principally result from brief southward interplanetary magnetic field excursions associated with Alfven waves. Seasonal and long-term variations in these effects are briefly discussed.

  8. Apparatus for dispensing compressed natural gas and liquified natural gas to natural gas powered vehicles

    DOEpatents

    Bingham, Dennis A.; Clark, Michael L.; Wilding, Bruce M.; Palmer, Gary L.

    2007-05-29

A fueling facility and method for dispensing liquid natural gas (LNG), compressed natural gas (CNG) or both on-demand. The fueling facility may include a source of LNG, such as a cryogenic storage vessel. A low volume high pressure pump is coupled to the source of LNG to produce a stream of pressurized LNG. The stream of pressurized LNG may be selectively directed through an LNG flow path or to a CNG flow path which includes a vaporizer configured to produce CNG from the pressurized LNG. A portion of the CNG may be drawn from the CNG flow path and introduced into the LNG flow path to control the temperature of LNG flowing therethrough. Similarly, a portion of the LNG may be drawn from the LNG flow path and introduced into the CNG flow path to control the temperature of CNG flowing therethrough.

  9. Method and apparatus for dispensing compressed natural gas and liquified natural gas to natural gas powered vehicles

    DOEpatents

    Bingham, Dennis A.; Clark, Michael L.; Wilding, Bruce M.; Palmer, Gary L.

    2005-05-31

A fueling facility and method for dispensing liquid natural gas (LNG), compressed natural gas (CNG) or both on-demand. The fueling facility may include a source of LNG, such as a cryogenic storage vessel. A low volume high pressure pump is coupled to the source of LNG to produce a stream of pressurized LNG. The stream of pressurized LNG may be selectively directed through an LNG flow path or to a CNG flow path which includes a vaporizer configured to produce CNG from the pressurized LNG. A portion of the CNG may be drawn from the CNG flow path and introduced into the LNG flow path to control the temperature of LNG flowing therethrough. Similarly, a portion of the LNG may be drawn from the LNG flow path and introduced into the CNG flow path to control the temperature of CNG flowing therethrough.

  10. Similar solutions for the compressible laminar boundary layer with heat transfer and pressure gradient

    NASA Technical Reports Server (NTRS)

    Cohen, Clarence B; Reshotko, Eli

    1956-01-01

Stewartson's transformation is applied to the laminar compressible boundary-layer equations, and the requirement of similarity is introduced, resulting in a set of ordinary nonlinear differential equations previously quoted by Stewartson but unsolved. The requirements of the system are a Prandtl number of 1.0, a linear viscosity-temperature relation across the boundary layer, an isothermal surface, and the particular distributions of free-stream velocity consistent with similar solutions. This system admits axial pressure gradients of arbitrary magnitude, heat flux normal to the surface, and arbitrary Mach numbers. The system of differential equations is transformed to an integral system, with the velocity ratio as the independent variable. For this system, solutions are found by digital computation for pressure gradients varying from that causing separation to the infinitely favorable gradient and for wall temperatures from absolute zero to twice the free-stream stagnation temperature. Some solutions for separated flows are also presented.

  11. Subliminal speech priming.

    PubMed

    Kouider, Sid; Dupoux, Emmanuel

    2005-08-01

    We present a novel subliminal priming technique that operates in the auditory modality. Masking is achieved by hiding a spoken word within a stream of time-compressed speechlike sounds with similar spectral characteristics. Participants were unable to consciously identify the hidden words, yet reliable repetition priming was found. This effect was unaffected by a change in the speaker's voice and remained restricted to lexical processing. The results show that the speech modality, like the written modality, involves the automatic extraction of abstract word-form representations that do not include nonlinguistic details. In both cases, priming operates at the level of discrete and abstract lexical entries and is little influenced by overlap in form or semantics.

  12. Method for compression of data using single pass LZSS and run-length encoding

    DOEpatents

    Berlin, G.J.

    1994-01-01

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.
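
    A sketch of the run-encoding branch in Python; the one-byte marker values standing in for the "cleared" and "set" offsets are guesses for illustration:

        def encode_run(out: bytearray, run_byte: int, run_len: int) -> None:
            """Emit one run using the two-way scheme described in the abstract.
            (Runs long enough that the quotient exceeds 255 would need a further split.)"""
            if run_len < 18:
                out.append(0x00)             # cleared offset
                out.append(run_len)          # actual run-length
            else:
                out.append(0x01)             # set offset
                out.append(run_len // 255)   # encoded run-length, part 1: quotient
                out.append(run_len % 255)    # encoded run-length, part 2: remainder
            out.append(run_byte)             # the repeated data byte itself

        buf = bytearray()
        encode_run(buf, 0xFF, 1000)          # 1000 = 3 * 255 + 235 -> 0x01, 3, 235, 0xFF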

  13. Method for compression of data using single pass LZSS and run-length encoding

    DOEpatents

    Berlin, Gary J.

    1997-01-01

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.

  14. Compressed ultrasound video image-quality evaluation using a Likert scale and Kappa statistical analysis

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Carter, Stephen J.; Langer, Steven G.; Andrew, Rex K.

    1998-06-01

Experiments using NASA's Advanced Communications Technology Satellite were conducted to provide an estimate of the compressed video quality required for preservation of clinically relevant features for the detection of trauma. Bandwidth rates of 128, 256 and 384 kbps were used. A five-point Likert scale (1 = no useful information, 5 = good diagnostic quality) was used for a subjective preference questionnaire to evaluate the quality of the compressed ultrasound imagery at the three compression rates for several anatomical regions of interest. At 384 kbps the Likert scores (mean ± SD) were abdomen (4.45 ± 0.71), carotid artery (4.70 ± 0.36), kidney (5.0 ± 0.0), liver (4.67 ± 0.58) and thyroid (4.03 ± 0.74). Due to the volatile nature of the H.320 compressed digital video stream, no statistically significant results can be derived through this methodology. As the MPEG standard has at its roots many of the same intraframe and motion-vector compression algorithms as H.261 (such as that used in the previous ACTS/AMT experiments), we are using the MPEG compressed video sequences to best gauge what minimum bandwidths are necessary for preservation of clinically relevant features for the detection of trauma. We have been using an MPEG codec board to collect losslessly compressed video clips from high-quality S-VHS tapes and through direct digitization of S-video. Due to the large number of video clips and questions to be presented to the radiologists, and for ease of application, we have developed a web browser interface for this visual perception study. Because of the large numbers of observations required to reach statistical significance in most ROC studies, Kappa statistical analysis is used to analyze the degree of agreement between observers and between viewing assessments. If the degree of agreement among readers is high, then there is a possibility that the ratings (i.e., average Likert score at each bandwidth) do in fact reflect the dimension they are purported to reflect (video quality versus bandwidth). It is then possible to make an intelligent choice of bandwidth for streaming compressed video and compressed video clips.
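
    For the agreement analysis, the standard two-rater Cohen's kappa over Likert scores can be computed as follows (a generic implementation, not the study's code):

        import numpy as np

        def cohens_kappa(rater1, rater2, levels: int = 5) -> float:
            """Cohen's kappa for two raters scoring on a 1..levels Likert scale."""
            c = np.zeros((levels, levels))
            for a, b in zip(rater1, rater2):
                c[a - 1, b - 1] += 1                    # agreement (confusion) matrix
            c /= c.sum()
            p_observed = np.trace(c)                    # proportion of exact agreement
            p_expected = c.sum(axis=1) @ c.sum(axis=0)  # agreement expected by chance
            return (p_observed - p_expected) / (1.0 - p_expected)

        print(cohens_kappa([5, 4, 4, 5, 3], [5, 4, 3, 5, 3]))  # ~0.71 with one disagreement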

  15. A conservative staggered-grid Chebyshev multidomain method for compressible flows

    NASA Technical Reports Server (NTRS)

    Kopriva, David A.; Kolias, John H.

    1995-01-01

    We present a new multidomain spectral collocation method that uses staggered grids for the solution of compressible flow problems. The solution unknowns are defined at the nodes of a Gauss quadrature rule. The fluxes are evaluated at the nodes of a Gauss-Lobatto rule. The method is conservative, free-stream preserving, and exponentially accurate. A significant advantage of the method is that subdomain corners are not included in the approximation, making solutions in complex geometries easier to compute.

  16. Concepts for on board satellite image registration. Volume 4: Impact of data set selection on satellite on board signal processing

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Aanstoos, J. V.; Snyder, W. E.

    1982-01-01

The NASA NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. This volume addresses the impact of data set selection on the data formatting required for efficient telemetering of the acquired satellite sensor data. More specifically, the FILE algorithm developed by Martin-Marietta provides a means for determining which pixels in the data stream effect an improvement in the achievable system throughput. It will be seen that, based on the lack of statistical stationarity in cloud-cover spatial distribution, periods exist where data acquisition rates exceed the throughput capability. The study therefore addresses various approaches to data compression and truncation as applicable to this sensor mission.

  17. POLYCOMP: Efficient and configurable compression of astronomical timelines

    NASA Astrophysics Data System (ADS)

    Tomasi, M.

    2016-07-01

This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data, such as pointing information, as one of the algorithms it implements applies a combination of least-squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr of up to ≈40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the use of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼6.5 TB to ≈0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
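
    The core transform can be sketched with NumPy (a simplification of polycomp's chunked format; the parameters are illustrative and the error bound is the user-configured quantity mentioned above):

        import numpy as np

        def compress_chunk(samples: np.ndarray, deg: int = 4, max_err: float = 1e-3):
            """Least-squares polynomial fit plus a truncated Chebyshev expansion
            of the residuals, keeping only enough coefficients to honor max_err."""
            x = np.linspace(-1.0, 1.0, samples.size)
            poly = np.polynomial.polynomial.polyfit(x, samples, deg)
            residual = samples - np.polynomial.polynomial.polyval(x, poly)
            cheb = np.polynomial.chebyshev.chebfit(x, residual, samples.size - 1)
            for n in range(1, cheb.size + 1):   # smallest truncation within the bound
                approx = np.polynomial.chebyshev.chebval(x, cheb[:n])
                if np.max(np.abs(residual - approx)) <= max_err:
                    return poly, cheb[:n]       # store these instead of the raw samples
            return poly, cheb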

  18. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean-up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
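
    The pass count is easy to make concrete (a one-liner shown only to pin down the arithmetic):

        def tier1_passes(bit_planes: int) -> int:
            """One clean-up pass for the MSB plane, three passes for every other plane."""
            return 1 + 3 * (bit_planes - 1)    # = 3M - 2

        assert tier1_passes(8) == 22           # e.g., 8 bit planes -> 22 coding passes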

  19. HPC enabled real-time remote processing of laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.

    2016-03-01

Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system. The video streams generate approximately 360 megabytes of data per second. Real-time processing of this large stream of data on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process these data on remote HPC clusters at the typical rate of 30 frames per second, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability using replication of computation. We will securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance, high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time processing of laparoscopic image data.

  20. Dynamic quality of service model for improving performance of multimedia real-time transmission in industrial networks.

    PubMed

    Gopalakrishnan, Ravichandran C; Karunakaran, Manivannan

    2014-01-01

Nowadays, quality of service (QoS) is very popular in various research areas like distributed systems, multimedia real-time applications, and networking. The requirements of these systems are to satisfy reliability, uptime, security constraints, and throughput, as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints across networks without creating any intervention over control flows. In particular, video compressors produce variable-bit-rate streams that mismatch the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. Thus, it is necessary to enlarge the communication bandwidth to transfer compressed multimedia streams using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE provides automation to calculate the compression level and change the bandwidth of the stream. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic quality-of-service (QoS) management. The proposed framework deals with dynamic QoS for multimedia transmission over Ethernet with the FTT-ESE protocol. This paper also presents distinct QoS metrics based on both image quality and network features. Some experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution, we have designed and implemented a simulator based on Matlab/Simulink, which is a tool to evaluate different network architectures using Simulink blocks.

  1. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  2. Dynamics of Large-Scale Solar-Wind Streams Obtained by the Double Superposed Epoch Analysis: 2. Comparisons of CIRs vs. Sheaths and MCs vs. Ejecta

    NASA Astrophysics Data System (ADS)

    Yermolaev, Y. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Y.

    2017-12-01

This work is a continuation of our previous article (Yermolaev et al. in J. Geophys. Res. 120, 7094, 2015), which describes the average temporal profiles of interplanetary plasma and field parameters in large-scale solar-wind (SW) streams: corotating interaction regions (CIRs), interplanetary coronal mass ejections (ICMEs, including both magnetic clouds (MCs) and ejecta), and sheaths, as well as interplanetary shocks (ISs). As in the previous article, we use the data of the OMNI database, our catalog of large-scale solar-wind phenomena during 1976 - 2000 (Yermolaev et al. in Cosmic Res., 47, 2, 81, 2009), and the method of double superposed epoch analysis (Yermolaev et al. in Ann. Geophys., 28, 2177, 2010a). We rescale the duration of all types of structures in such a way that the beginnings and endings for all of them coincide. We present new detailed results comparing paired phenomena: 1) both types of compression regions (i.e., CIRs vs. sheaths) and 2) both types of ICMEs (MCs vs. ejecta). The obtained data allow us to suggest that the formation of the two types of compression regions is governed by the same physical mechanism, regardless of the type of piston (high-speed stream (HSS) or ICME); the differences are connected to the geometry (i.e., the angle between the speed gradient in front of the piston and the satellite trajectory) and the jumps in speed at the edges of the compression regions. In our opinion, one of the possible reasons behind the observed differences in the parameters of MCs and ejecta is that when ejecta are observed, the satellite passes farther from the nose of the ICME than when MCs are observed.

  3. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  4. Method for compression of data using single pass LZSS and run-length encoding

    DOEpatents

    Berlin, G.J.

    1997-12-23

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data is disclosed. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer. 3 figs.

  5. An Extensible Processing Framework for Eddy-covariance Data

    NASA Astrophysics Data System (ADS)

    Durden, D.; Fox, A. M.; Metzger, S.; Sturtevant, C.; Durden, N. P.; Luo, H.

    2016-12-01

The evolution of large data-collecting networks has led not only to an increase in available information, but also to greater complexity in analyzing the observations. Timely dissemination of readily usable data products necessitates a streaming processing framework that is both automatable and flexible. Tower networks, such as ICOS, Ameriflux, and NEON, exemplify this issue by requiring large amounts of data to be processed from dispersed measurement sites. Eddy-covariance data from across the NEON network are expected to amount to 100 gigabytes per day. The complexity of the algorithmic processing necessary to produce high-quality data products, together with the continued development of new analysis techniques, led to the development of a modular R package, eddy4R. This allows algorithms provided by NEON and the larger community to be deployed in streaming processing and to be used by community members alike. In order to control the processing environment, provide a proficient parallel-processing structure, and ensure that dependencies are available during processing, we chose Docker as our "Development and Operations" (DevOps) platform. The Docker framework allows our processing algorithms to be developed, maintained and deployed at scale. Additionally, the eddy4R-Docker framework fosters community use and extensibility via pre-built Docker images and the GitHub distributed version control system. The capability to process large data sets relies upon efficient input and output of data, data compressibility to reduce compute-resource loads, and the ability to easily package metadata. The Hierarchical Data Format (HDF5) is a file format that can meet these needs. A NEON-standard HDF5 file structure and metadata attributes allow users to explore larger data sets in an intuitive "directory-like" structure adopting the NEON data product naming conventions.
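
    A minimal h5py sketch of such a file; the group path, dataset name, and attributes are invented for illustration and do not reproduce NEON's actual schema:

        import h5py
        import numpy as np

        with h5py.File("tower_site.h5", "w") as f:
            grp = f.create_group("SITE01/dp0p/soni")      # "directory-like" hierarchy
            data = np.random.rand(72000, 3)               # e.g., one hour of 20 Hz wind vectors
            dset = grp.create_dataset("velo_xyz", data=data,
                                      compression="gzip", compression_opts=4)
            dset.attrs["unit"] = "m s-1"                  # metadata travels with the data
            dset.attrs["samp_freq_hz"] = 20.0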

  6. Stream dynamics between 1 AU and 2 AU: A detailed comparison of observations and theory

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Pizzo, V.; Lazarus, A.; Gazis, P. R.

    1984-01-01

    A radial alignment of three solar wind stream structures observed by IMP-7 and -8 (at 1.0 AU) and Voyager 1 and 2 (in the range 1.4 to 1.8 AU) in late 1977 is presented. It is demonstrated that several important aspects of the observed dynamical evolution can be both qualitatively and quantitatively described with a single-fluid 2-D MHD numerical model of quasi-steady corotating flow, including accurate prediction of: (1) the formation of a corotating shock pair at 1.75 AU in the case of a simple, quasi-steady stream; (2) the coalescence of the thermodynamic and magnetic structures associated with the compression regions of two neighboring, interacting, corotating streams; and (3) the dynamical destruction of a small (i.e., low velocity-amplitude, short spatial-scale) stream by its overtaking of a slower moving, high-density region associated with a preceding transient flow. The evolution of these flow systems is discussed in terms of the concepts of filtering and entrainment.

  7. The causes of recurrent geomagnetic storms

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Lepping, R. P.

    1976-01-01

    The causes of recurrent geomagnetic activity were studied by analyzing interplanetary magnetic field and plasma data from earth-orbiting spacecraft in the interval from November 1973 to February 1974. This interval included the start of two long sequences of geomagnetic activity and two corresponding corotating interplanetary streams. In general, the geomagnetic activity was related to an electric field which was due to two factors: (1) the ordered, mesoscale pattern of the stream itself, and (2) random, smaller-scale fluctuations in the southward component of the interplanetary magnetic field Bz. The geomagnetic activity in each recurrent sequence consisted of two successive stages. The first stage was usually the most intense, and it occurred during the passage of the interaction region at the front of a stream. These large amplitudes of Bz were primarily produced in the interplanetary medium by compression of ambient fluctuations as the stream steepened in transit to 1 A.U. The second stage of geomagnetic activity immediately following the first was associated with the highest speeds in the stream.

  8. The Importance of Reconnection at Sector Boundaries: Another Space Weather Hazard?

    NASA Astrophysics Data System (ADS)

    Qi, Y.; Lai, H.; Russell, C. T.

    2017-12-01

Sector boundaries are interfaces between nearly oppositely directed magnetic flux in the solar wind. When the leading solar wind stream is moving more slowly than the following stream, a high-pressure ridge appears at the interface that compresses the plasma, sometimes leading to a forward and reverse shock pair that slows the fast stream and accelerates the slow stream. If reconnection at the interface between the streams occurs, part of the magnetic flux will be annihilated, but the plasma once associated with that magnetic flux remains near the interface, causing a sometimes significant, short-lived dynamic-pressure increase. The declining phase of solar cycle 24 exhibits several examples of the phenomenon, with densities reaching over 80 protons cm-3 at speeds of about 400 km s-1. We examine the solar wind context of the phenomenon and the consequences at the magnetosphere using space-based and ground-based observations, and comment on their possible generation of geomagnetically induced currents.

  9. Oxy-fuel combustion with integrated pollution control

    DOEpatents

    Patrick, Brian R [Chicago, IL; Ochs, Thomas Lilburn [Albany, OR; Summers, Cathy Ann [Albany, OR; Oryshchyn, Danylo B [Philomath, OR; Turner, Paul Chandler [Independence, OR

    2012-01-03

An oxygen-fueled integrated pollutant-removal and combustion system includes a combustion system and an integrated pollutant removal system. The combustion system includes a furnace having at least one burner that is configured to substantially prevent the introduction of air. An oxygen supply supplies oxygen at a predetermined purity greater than 21 percent, and a carbon-based fuel supply supplies a carbon-based fuel. Oxygen and fuel are fed into the furnace in controlled proportion to each other, and combustion is controlled to produce a flame temperature in excess of 3000 degrees F and a flue gas stream containing CO2 and other gases. The flue gas stream is substantially void of non-fuel-borne nitrogen-containing combustion-produced gaseous compounds. The integrated pollutant removal system includes at least one direct-contact heat exchanger for bringing the flue gas into intimate contact with a cooling liquid, to produce a pollutant-laden liquid stream and a stripped flue gas stream, and at least one compressor for receiving and compressing the stripped flue gas stream.

  10. Cascaded recompression closed brayton cycle system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasch, James J.

    The present disclosure is directed to a cascaded recompression closed Brayton cycle (CRCBC) system and method of operation thereof, where the CRCBC system includes a compressor for compressing the system fluid, a separator for generating fluid feed streams for each of the system's turbines, and separate segments of a heater that heat the fluid feed streams to different feed temperatures for the system's turbines. Fluid exiting each turbine is used to preheat the fluid to the turbine. In an embodiment, the amount of heat extracted is determined by operational costs.

  11. Cascaded recompression closed Brayton cycle system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasch, James Jay

    The present disclosure is directed to a cascaded recompression closed Brayton cycle (CRCBC) system and method of operation thereof, where the CRCBC system includes a compressor for compressing the system fluid, a separator for generating fluid feed streams for each of the system's turbines, and separate segments of a heater that heat the fluid feed streams to different feed temperatures for the system's turbines. Fluid exiting each turbine is used to preheat the fluid to the turbine. In an embodiment, the amount of heat extracted is determined by operational costs.

  12. Independent transmission of sign language interpreter in DVB: assessment of image compression

    NASA Astrophysics Data System (ADS)

Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

Sign language on television provides information to deaf viewers that they cannot get from the audio content. If we consider the transmission of the sign language interpreter over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at a minimum bit rate. The work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open-source library. The results of this approach are verified in subjective tests with deaf participants. The tests examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various resolutions of the image with the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.

  13. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial-frequency coefficients obtained using the Discrete Cosine Transform (DCT). The DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
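
    The mechanics of table-based quantization are simple; the uniform table below is a stand-in for a customized one (illustrative values only):

        import numpy as np
        from scipy.fft import dctn, idctn

        def quantize_block(block: np.ndarray, qtable: np.ndarray) -> np.ndarray:
            """JPEG-style quantization of one 8x8 block: level-shift, forward
            2-D DCT, divide by the (possibly customized) table, round."""
            return np.round(dctn(block - 128.0, norm="ortho") / qtable)

        def dequantize_block(q: np.ndarray, qtable: np.ndarray) -> np.ndarray:
            return idctn(q * qtable, norm="ortho") + 128.0

        qtable = np.full((8, 8), 16.0)   # a customized table would vary entry by entry
        block = np.random.randint(0, 256, (8, 8)).astype(float)
        rec = dequantize_block(quantize_block(block, qtable), qtable)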

  14. Gas turbine power plant with supersonic shock compression ramps

    DOEpatents

    Lawlor, Shawn P [Bellevue, WA; Novaresi, Mark A [San Diego, CA; Cornelius, Charles C [Kirkland, WA

    2008-10-14

    A gas turbine engine. The engine is based on the use of a gas turbine driven rotor having a compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp) which compresses inlet gas against a stationary sidewall. The supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the rim of the rotor, the strakes, and a stationary external housing. Part load efficiency is enhanced by use of a lean pre-mix system, a pre-swirl compressor, and a bypass stream to bleed a portion of the gas after passing through the pre-swirl compressor to the combustion gas outlet. Use of a stationary low NOx combustor provides excellent emissions results.

  15. System-Level Design of a 64-Channel Low Power Neural Spike Recording Sensor.

    PubMed

    Delgado-Restituto, Manuel; Rodriguez-Perez, Alberto; Darie, Angela; Soto-Sanchez, Cristina; Fernandez-Jover, Eduardo; Rodriguez-Vazquez, Angel

    2017-04-01

This paper reports an integrated 64-channel neural spike recording sensor, together with all the circuitry to process and configure the channels, process the neural data, transmit the information via a wireless link, and receive the required instructions. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration algorithm which individually configures the transfer characteristics of the recording site. The system has two transmission modes: in one case the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by the embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.

  16. The passage of an infinite swept airfoil through an oblique gust. [approximate solution for aerodynamic response

    NASA Technical Reports Server (NTRS)

    Adamczyk, J. L.

    1974-01-01

    An approximate solution is reported for the unsteady aerodynamic response of an infinite swept wing encountering a vertical oblique gust in a compressible stream. The approximate expressions are of closed form and do not require excessive computer storage or computation time, and further, they are in good agreement with the results of exact theory. This analysis is used to predict the unsteady aerodynamic response of a helicopter rotor blade encountering the trailing vortex from a previous blade. Significant effects of three dimensionality and compressibility are evident in the results obtained. In addition, an approximate solution for the unsteady aerodynamic forces associated with the pitching or plunging motion of a two dimensional airfoil in a subsonic stream is presented. The mathematical form of this solution approaches the incompressible solution as the Mach number vanishes, the linear transonic solution as the Mach number approaches one, and the solution predicted by piston theory as the reduced frequency becomes large.

  17. Poromechanics of compressible charged porous media using the theory of mixtures.

    PubMed

    Huyghe, J M; Molenaar, M M; Baajens, F P T

    2007-10-01

Osmotic, electrostatic, and/or hydrational swellings are essential mechanisms in the deformation behavior of porous media, such as biological tissues, synthetic hydrogels, and clay-rich rocks. Present theories are restricted to incompressible constituents. This assumption typically fails for bone, in which electrokinetic effects are closely coupled to deformation. An electrochemomechanical formulation of quasistatic finite deformation of compressible charged porous media is derived from the theory of mixtures. The model consists of a compressible charged porous solid saturated with a compressible ionic solution. Four constituents following different kinematic paths are identified: a charged solid and three streaming constituents carrying either a positive, negative, or no electrical charge, which are the cations, anions, and fluid, respectively. The finite deformation model is reduced to infinitesimal theory. In the limiting case without ionic effects, the presented model is consistent with Biot's theory. Viscous drag compression is computed under closed-circuit and open-circuit conditions. Viscous drag compression is shown to be independent of the storage modulus. A compressible version of the electrochemomechanical theory is formulated. Using material parameter values for bone, the theory predicts a substantial influence of density changes on a viscous drag compression simulation. In the context of quasistatic deformations, conflicts between poromechanics and mixture theory are only semantic in nature.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Todd, Terry A.; Gray, Kimberly D.

The U.S. Department of Energy, Office of Nuclear Energy has chartered an effort to develop technologies to enable safe and cost-effective recycle of commercial used nuclear fuel (UNF) in the U.S. Part of this effort includes the evaluation of existing waste management technologies for effective treatment of wastes in the context of current U.S. regulations and the development of waste forms and processes with significant cost and/or performance benefits over those existing. This study summarizes the results of these ongoing efforts with a focus on the highly radioactive primary waste streams. The primary streams considered and the recommended waste forms include:

    • Tritium separated from either a low-volume gas stream or a high-volume water stream. The recommended waste form is low-water cement in high-integrity containers.

    • Iodine-129 separated from off-gas streams in aqueous processing. There are a range of potentially suitable waste forms. As a reference case, a glass composite material (GCM) formed by the encapsulation of the silver mordenite (AgZ) getter material in a low-temperature glass is assumed. A number of alternatives with distinct advantages are also considered, including a fused-silica waste form with encapsulated nano-sized AgI crystals.

    • Carbon-14 separated from LWR fuel treatment off-gases and immobilized as CaCO3 in a cement waste form.

    • Krypton-85 separated from LWR and SFR fuel treatment off-gases and stored as a compressed gas.

    • An aqueous reprocessing high-level waste (HLW) raffinate which is immobilized by the vitrification process in one of three forms: a single-phase borosilicate glass, a borosilicate-based glass ceramic, or a multi-phased titanate ceramic [e.g., synthetic rock (Synroc)].

    • An undissolved solids (UDS) fraction from aqueous reprocessing of LWR fuel that is either included in the borosilicate HLW glass or immobilized in the form of a metal alloy in the case of glass ceramics or titanate ceramics.

    • Zirconium-based LWR fuel cladding hulls and stainless steel (SS) fuel assembly hardware that are washed and super-compacted for disposal or, as an alternative, Zr purification and reuse (or disposal as low-level waste, LLW) by reactive gas separations.

    • Electrochemical process salt HLW which is immobilized in a glass-bonded sodalite waste form known as the ceramic waste form (CWF).

    • Electrochemical process UDS and SS cladding hulls which are melted into an iron-based alloy waste form.

    Mass and volume estimates for each of the recommended waste forms, based on the source terms from a representative flowsheet, are reported.

  19. Vibration-based monitoring and diagnostics using compressive sensing

    NASA Astrophysics Data System (ADS)

    Ganesan, Vaahini; Das, Tuhin; Rahnavard, Nazanin; Kauffman, Jeffrey L.

    2017-04-01

    Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in high volume data and rely on sensors being powered for prolonged durations. Furthermore, for spatial resolution, structures are instrumented with a large array of sensors. This paper shows that both volume of data and number of sensors can be reduced significantly by applying Compressive Sensing (CS) in vibration monitoring applications. The reduction is achieved by using random sampling and capitalizing on the sparsity of vibration signals in the frequency domain. Preliminary experimental results validating CS-based frequency recovery are also provided. By exploiting the sparsity of mode shapes, CS can also enable efficient spatial reconstruction using fewer spatially distributed sensors. CS can thereby reduce the cost and power requirement of sensing as well as streamline data storage and processing in monitoring applications. In well-instrumented structures, CS can enable continued monitoring in case of sensor or computational failures.
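    The random-sampling-plus-sparse-recovery idea the paper builds on can be illustrated in a few lines. Below is a minimal sketch, not the authors' implementation: it recovers the frequency content of a two-tone vibration signal from a small set of random time samples using orthogonal matching pursuit against a DFT dictionary; all parameters (N, M, tone bins, sparsity) are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 512                                  # full record length
    t = np.arange(N)
    x = np.sin(2*np.pi*13*t/N) + 0.5*np.sin(2*np.pi*57*t/N)  # sparse in frequency

    M = 80                                   # compressed measurements, M << N
    idx = rng.choice(N, M, replace=False)    # random (non-uniform) time sampling
    y = x[idx].astype(complex)

    # DFT synthesis dictionary restricted to the sampled rows: y = A c
    F = np.exp(2j*np.pi*np.outer(t, np.arange(N))/N) / np.sqrt(N)
    A = F[idx, :]

    def omp(A, y, k):
        """Greedy recovery of a k-sparse coefficient vector from y = A c."""
        r, support = y.copy(), []
        c = np.zeros(A.shape[1], complex)
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.conj().T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ coef
        c[support] = coef
        return c

    c = omp(A, y, k=4)                       # two real tones -> four complex bins
    print(np.sort(np.argsort(np.abs(c))[-4:]))  # expect bins {13, 57} and mirrors {455, 499}
    ```

    With 80 samples instead of 512, the dominant frequency bins are still identified exactly, which is the effect exploited above to cut both data volume and sensor power.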

  20. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, an objective method for evaluating the image quality of lossy-compressed medical images has yet to be established. In this paper, we present an approach to evaluating image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves obtained from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. With each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.

  1. Fast acoustic streaming in standing waves: generation of an additional outer streaming cell.

    PubMed

    Reyt, Ida; Daru, Virginie; Bailliet, Hélène; Moreau, Solène; Valière, Jean-Christophe; Baltean-Carlès, Diana; Weisman, Catherine

    2013-09-01

    Rayleigh streaming in a cylindrical acoustic standing waveguide is studied both experimentally and numerically for nonlinear Reynolds numbers from 1 to 30 [Re_NL = (U0/c0)^2 (R/δν)^2, with U0 the acoustic velocity amplitude at the velocity antinode, c0 the speed of sound, R the tube radius, and δν the acoustic boundary layer thickness]. Streaming velocity is measured by means of laser Doppler velocimetry in a cylindrical resonator filled with air at atmospheric pressure at high-intensity sound levels. The compressible Navier-Stokes equations are solved numerically with high-resolution finite difference schemes. The resonator is excited by shaking it along its axis at an imposed frequency. Results of measurements and of numerical calculations are compared with results given in the literature and with each other. As expected, the measured and calculated axial streaming velocities agree reasonably well with slow streaming theory for small Re_NL but deviate significantly from such predictions for fast streaming (Re_NL > 1). Both experimental and numerical results show that when Re_NL is increased, the centers of the outer streaming cells are pushed toward the acoustic velocity nodes until counter-rotating additional vortices are generated near the acoustic velocity antinodes.
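    As a quick worked example of the nonlinear Reynolds number defined above, the sketch below evaluates Re_NL for air; the drive frequency, tube radius, and velocity amplitude are assumed values for illustration, not the experimental conditions.

    ```python
    import math

    f = 1000.0      # drive frequency, Hz (assumed)
    c0 = 343.0      # speed of sound in air, m/s
    nu = 1.5e-5     # kinematic viscosity of air, m^2/s
    R = 0.02        # tube radius, m (assumed)
    U0 = 2.0        # acoustic velocity amplitude at the antinode, m/s (assumed)

    delta_nu = math.sqrt(2*nu / (2*math.pi*f))   # acoustic boundary-layer thickness
    Re_NL = (U0/c0)**2 * (R/delta_nu)**2
    print(f"delta_nu = {delta_nu*1e6:.0f} um, Re_NL = {Re_NL:.1f}")  # ~2.9: fast streaming
    ```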

  2. Does the presence of cosmic dust influence the displacement of the Earth's Magnetopause?

    NASA Astrophysics Data System (ADS)

    Mann, I.; Hamrin, M.

    2012-04-01

    In a recent paper, Treumann and Baumjohann propose that dust particles in interplanetary space occasionally cause large compressions of the magnetopause that, in the absence of coronal mass ejections, are difficult to explain by other mechanisms (R.A. Treumann and W. Baumjohann, Ann. Geophys. 30, 119-130, 2012). They suggest that enhanced dust number density raises the contribution of the dust component to the solar wind dynamic pressure and hence to the pressure balance that determines the extension of the magnetopause. They quantify the influence of the dust component in terms of a variation of the magnetopause stagnation point distance. As a possible trigger for the compressions, they propose encounters with meteoroid dust streams along Earth's orbit. We investigate the conditions under which these compressions may occur. The estimate by Treumann and Baumjohann of the magnetopause variation presupposes that the dust particles have reached solar wind speed. Acceleration by electromagnetic forces is efficient in the solar wind for dust particles that have a sufficiently large ratio of surface charge to mass (Mann et al., Plasma Phys. Contr. Fusion, Vol. 52, 124012, 2010). This applies to small dust particles that contribute little to the total dust mass in meteoroid streams. The major fraction of dust particles that reach high speed in the solar wind are nanometer-sized dust particles that form and are accelerated in the inner solar system (Czechowski and Mann, ApJ, Vol. 714, 89, 2010). Observations suggest that the flux of these nanodust particles near 1 AU is highly time-variable (Meyer-Vernet et al., Solar Physics, Vol. 256, 463, 2009). We estimate the possible variation of the magnetopause stagnation point distance caused by these nanodust fluxes and by the dust associated with meteoroid streams. We conclude that the Earth's encounters with meteoroid dust streams are not likely to strongly influence the magnetopause through the proposed effect. We further use the expression for the magnetopause stagnation point distance used by Treumann and Baumjohann to investigate the possible influence of time-variable nanodust fluxes on the magnetopause.
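    The sensitivity involved can be seen from the standard Chapman-Ferraro scaling: balancing a dipole field (B ~ r^-3) against the dynamic pressure gives a stagnation distance r ~ p_dyn^(-1/6). The sketch below is a back-of-the-envelope illustration with an assumed dust share of the dynamic pressure, not the paper's actual estimate.

    ```python
    # pressure balance: B(r)^2/(2*mu0) ~ p_dyn, with dipole B ~ r^-3  =>  r ~ p_dyn^(-1/6)
    p_dust_fraction = 0.10        # assumed dust share of the dynamic pressure
    r0 = 10.0                     # nominal stagnation distance, Earth radii (typical value)

    r = r0 * (1.0 + p_dust_fraction) ** (-1.0 / 6.0)
    print(f"standoff moves from {r0:.2f} to {r:.2f} R_E "
          f"({100*(r0 - r)/r0:.1f}% inward)")   # ~1.6% for a 10% pressure increase
    ```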

  3. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension of the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require real-time adaptation of the streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and to error propagation over more than 130 pictures following the one in which the loss occurred. This work is one of the earliest studies in this cutting-edge area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality, and it offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.

  4. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low-bit-rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, so that a given code contains all lower-rate encodings of the same image. Therefore, precise bit rate control is achievable, and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image-adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
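    A minimal sketch of the spectral decorrelation step (the KLT, i.e., PCA across bands) follows; the wavelet/EZW coding stage is omitted, and the random cube merely stands in for co-registered Landsat bands.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    bands, H, W = 6, 64, 64
    cube = rng.normal(size=(bands, H, W))
    cube += rng.normal(size=(1, H, W))          # shared structure -> spectral correlation

    X = cube.reshape(bands, -1)                 # one row per spectral band
    X = X - X.mean(axis=1, keepdims=True)
    C = (X @ X.T) / X.shape[1]                  # band-to-band covariance
    w, V = np.linalg.eigh(C)                    # KLT basis = covariance eigenvectors
    order = np.argsort(w)[::-1]
    w, V = w[order], V[:, order]

    Y = V.T @ X                                 # decorrelated spectral components
    print("energy fraction per KLT component:", np.round(w / w.sum(), 3))
    # most energy collapses into the first component; each row of Y would then be
    # reshaped to an image and coded independently with the zerotree wavelet coder
    ```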

  5. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is used to train on and recognize the visual words to build the initial classification model of pornographic images. The experimental results show that the proposed method achieves a higher recognition rate and shorter recognition times in the compressed domain.
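    A rough sketch of step (3): an incrementally updated linear classifier (scikit-learn's SGDClassifier) standing in for the paper's covering-algorithm model; the visual-word histograms and labeling rule below are synthetic assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(5)
    clf = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])                       # 0 = benign, 1 = pornographic

    for _ in range(10):                              # new labeled samples keep arriving
        X = rng.random((64, 500))                    # 500-bin visual-word histograms
        y = (X[:, :10].sum(axis=1) > 5).astype(int)  # synthetic labeling rule
        clf.partial_fit(X, y, classes=classes)       # adjust the rules incrementally

    X_test = rng.random((256, 500))
    y_test = (X_test[:, :10].sum(axis=1) > 5).astype(int)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```

    The point of partial_fit here is that the model is refined batch by batch as new samples arrive, without retraining from scratch, which mirrors the incremental adjustment of classification rules described above.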

  6. Bypass transition in compressible boundary layers

    NASA Technical Reports Server (NTRS)

    Vandervegt, J. J.

    1992-01-01

    Transition to turbulence in aerospace applications usually occurs in a strongly disturbed environment. For instance, free-stream turbulence, roughness, and obstacles in the boundary layer strongly influence transition. Proper understanding of the mechanisms leading to transition is crucial in the design of aircraft wings and gas turbine blades, because lift, drag, and heat transfer strongly depend on the state of the boundary layer, laminar or turbulent. Unfortunately, most transition research, both theoretical and experimental, has focused on natural transition. Many practical flows, however, defy theoretical analysis and are extremely difficult to measure. Morkovin introduced in his review paper the concept of bypass transition for those forms of transition which bypass the known mechanisms of linear and nonlinear transition theories and are currently not understood from experiments. In an effort to better understand the mechanisms leading to transition in a disturbed environment, experiments have been conducted on simpler cases, viz. the effects of free-stream turbulence on transition on a flat plate. These experiments turn out to be very difficult to conduct, because generating free-stream turbulence with sufficiently high fluctuation levels and reasonable homogeneity is non-trivial; for a discussion see Morkovin. Serious problems also arise because, at high Reynolds numbers, the boundary layers are very thin, especially in the nose region of the plate where transition occurs, which makes the use of very small probes necessary. The effects of free-stream turbulence on transition are the subject of this research and are especially important in a gas turbine environment, where turbulence intensities between 5 and 20 percent are measured (Wang et al.). Because the Reynolds number for turbine blades is considerably lower than for aircraft wings, a larger portion of the blade will generally be in a laminar or transitional state, and many such applications operate in the transonic regime; due to the nature of the numerical scheme used in earlier work, a non-conservation formulation of the Navier-Stokes equations, extension to the transonic regime is non-trivial. This project therefore aims at better understanding the effects of large free-stream turbulence in compressible boundary layers, in both the subsonic and transonic regimes, using direct numerical simulations of the flow over a flat plate and a curved surface. This research will provide data which can be used to clarify mechanisms leading to transition in an environment with high free-stream turbulence. This information is useful for the development of turbulence models, which are of great importance for CFD applications and are currently unreliable for more complex flows, such as transitional flows.

  7. Ignition and structure of a laminar diffusion flame in a compressible mixing layer with finite rate chemistry

    NASA Technical Reports Server (NTRS)

    Grosch, C. E.; Jackson, T. L.

    1991-01-01

    The ignition and structure of a reacting compressible mixing layer, lying between two streams of reactants with different freestream speeds and temperatures, is considered using finite rate chemistry. Numerical integration of the governing equations shows that the structure of the reacting flow can be quite complicated, depending on the magnitude of the Zeldovich number. An analysis of both the ignition and diffusion flame regimes is presented using a combination of large Zeldovich number asymptotics and numerics. This allows the behavior of these regimes to be analyzed as a function of the parameters of the problem.

  8. Data compression and information retrieval via symbolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, X.Z.; Tracy, E.R.

    Converting a continuous signal into a multisymbol stream is a simple method of data compression which preserves much of the dynamical information present in the original signal. The retrieval of selected types of information from symbolic data involves binary operations and is therefore optimal for digital computers. For example, correlation time scales can be easily recovered, even at high noise levels, by varying the time delay for symbolization. Also, the presence of periodicity in the signal can be reliably detected even if it is weak and masked by a dominant chaotic/stochastic background.
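    A small sketch of the idea: threshold a noisy signal into a 1-bit symbol stream, then recover a weak hidden period from the symbol autocorrelation. The tone amplitude, period, and record length are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 4096
    x = 0.5*np.sin(2*np.pi*np.arange(n)/50) + rng.normal(size=n)  # weak period-50 tone

    symbols = (x > np.median(x)).astype(np.int8)     # 1-bit symbol stream
    s = symbols - symbols.mean()
    ac = np.correlate(s, s, mode="full")[n-1:]       # autocorrelation, lags >= 0
    lag = 10 + int(np.argmax(ac[10:80]))             # skip trivial small lags
    print("detected period:", lag)                   # expect ~50 despite the noise
    ```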

  9. Application of M-JPEG compression hardware to dynamic stimulus production.

    PubMed

    Mulligan, J B

    1997-01-01

    Inexpensive circuit boards have appeared on the market which transform a normal microcomputer's disk drive into a video disk capable of playing extended video sequences in real time. This technology enables experiments which were previously impossible, or at least prohibitively expensive. The new technology achieves this capability by using special-purpose hardware to compress and decompress individual video frames, enabling a video stream to be transferred over relatively low-bandwidth disk interfaces. This paper describes the use of such devices for visual psychophysics and presents the technical issues that must be considered when evaluating individual products.

  10. Disk-based compression of data from genome sequencing.

    PubMed

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More promising solutions to this problem are disk-based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose ORCOM (overlapping reads compression with minimizers), a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.
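    The minimizer idea can be sketched in a few lines. For simplicity the example below uses a whole-read lexicographic minimizer rather than the canonical windowed variant, and the reads are made up: reads that overlap tend to share their smallest k-mer, so bucketing by minimizer groups redundant reads where they are easy to compress together.

    ```python
    from collections import defaultdict

    def minimizer(read: str, k: int = 8) -> str:
        """Lexicographically smallest k-mer of the read (whole-read minimizer)."""
        return min(read[i:i+k] for i in range(len(read) - k + 1))

    reads = [
        "TGGTCAAAACCCCGGTT",
        "GTCAAAACCCCGGTTAC",   # overlaps the first read
        "TTTTGGGGCACGTGCAT",   # unrelated read
    ]

    buckets = defaultdict(list)
    for r in reads:
        buckets[minimizer(r)].append(r)

    for m, group in buckets.items():
        print(m, "->", group)  # the two overlapping reads share bucket "AAAACCCC"
    ```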

  11. Compressibility effects in the shear layer over a rectangular cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beresh, Steven J.; Wagner, Justin L.; Casper, Katya M.

    2016-10-26

    We studied the influence of compressibility on the shear layer over a rectangular cavity of variable width in a free-stream Mach number range of 0.6-2.5, using particle image velocimetry data in the streamwise centre plane. As the Mach number increases, the vertical component of the turbulence intensity diminishes modestly in the widest cavity, but the two narrower cavities show a more substantial drop in all three components as well as in the turbulent shear stress. This contrasts with canonical free shear layers, which show significant compressibility-induced reductions in only the vertical component and the turbulent shear stress. The vorticity thickness of the cavity shear layer grows rapidly as it initially develops, then transitions to a slower growth rate once its instability saturates. When normalized by their estimated incompressible values, the growth rates prior to saturation display the classic compressibility effect of suppression as the convective Mach number rises, in excellent agreement with comparable free shear layer data. The specific trend of the reduction in growth rate due to compressibility is modified by the cavity width.

  12. Linearized compressible-flow theory for sonic flight speeds

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Lomax, Harvard; Spreiter, John R

    1950-01-01

    The partial differential equation for the perturbation velocity potential is examined for free-stream Mach numbers close to and equal to one. It is found that, under the assumptions of linearized theory, solutions can be found consistent with the theory for lifting-surface problems both in stationary three-dimensional flow and in unsteady two-dimensional flow. Several examples are solved, including a three-dimensional swept-back wing and a two-dimensional harmonically oscillating wing, both for a free-stream Mach number equal to one. Momentum relations for the evaluation of wave and vortex drag are also discussed.

  13. Satellite/Terrestrial Networks: End-to-End Communication Interoperability Quality of Service Experiments

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    1998-01-01

    Various issues associated with satellite/terrestrial end-to-end communication interoperability are presented in viewgraph form. Specific topics include: 1) quality of service; 2) ATM performance characteristics; 3) MPEG-2 transport stream mapping to AAL-5; 4) observation and discussion of compressed video tests over ATM; 5) digital video over satellites status; 6) satellite link configurations; 7) MPEG-2 over ATM with binomial errors; 8) MPEG-2 over ATM channel characteristics; 9) MPEG-2 over ATM over emulated satellites; 10) MPEG-2 transport stream with errors; and 11) a dual decoder test.

  14. A compressible near-wall turbulence model for boundary layer calculations

    NASA Technical Reports Server (NTRS)

    So, R. M. C.; Zhang, H. S.; Lai, Y. G.

    1992-01-01

    A compressible near-wall two-equation model is derived by relaxing the assumption of dynamical field similarity between compressible and incompressible flows. This requires justifications for extending the incompressible models to compressible flows and the formulation of the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilatational part, which is directly affected by these changes. This approach isolates terms with explicit dependence on compressibility so that they can be modeled accordingly. An equation that governs the transport of the solenoidal dissipation rate, with additional terms that are explicitly dependent on compressibility effects, is derived similarly. A model with an explicit dependence on the turbulent Mach number is proposed for the dilatational dissipation rate. Thus formulated, all near-wall incompressible flow models can be expressed in terms of the solenoidal dissipation rate and straightforwardly extended to compressible flows, and the incompressible equations are recovered correctly in the limit of constant density. The two-equation model and the assumption of a constant turbulent Prandtl number are used to calculate compressible boundary layers on a flat plate with different wall thermal boundary conditions and free-stream Mach numbers. The calculated results, including the near-wall distributions of turbulence statistics and their limiting behavior, are in good agreement with measurements. In particular, the near-wall asymptotic properties are found to be consistent with incompressible behavior, suggesting that turbulent flows in the viscous sublayer are not much affected by compressibility effects.

  15. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, the deployment of adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression on the volume data are conducted. Favorable results are obtained, showing that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g., CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression on the diagnostic and aesthetic appearance of medical imaging.
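    The 30 dB PSNR floor quoted above refers to the usual definition, sketched below for 8-bit data; the noise-corrupted image merely stands in for a lossy compression round trip.

    ```python
    import numpy as np

    def psnr(original: np.ndarray, compressed: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
        mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64))**2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

    rng = np.random.default_rng(3)
    img = rng.integers(0, 256, size=(128, 128))
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)  # stand-in for lossy loss
    print(f"PSNR = {psnr(img, noisy):.1f} dB")  # ~34 dB, above the 30 dB tolerance cited
    ```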

  16. Local flow measurements at the inlet spike tip of a Mach 3 supersonic cruise airplane

    NASA Technical Reports Server (NTRS)

    Johnson, H. J.; Montoya, E. J.

    1973-01-01

    The flow field at the left inlet spike tip of a YF-12A airplane was examined using a 26° included-angle conical flow sensor to obtain measurements at free-stream Mach numbers from 1.6 to 3.0. Local flow angularity, Mach number, impact pressure, and mass flow were determined and compared with free-stream values. Local flow changes occurred at the same time as free-stream changes. The local flow usually approached the spike centerline from the upper outboard side because of spike cant and toe-in. Free-stream Mach number influenced the local flow angularity; as Mach number increased above 2.2, local angle of attack increased and local sideslip angle decreased. Local Mach number was generally 3 percent less than free-stream Mach number. Impact-pressure ratio and mass flow ratio increased as free-stream Mach number increased above 2.2, indicating a beneficial forebody compression effect. No degradation of the spike tip instrumentation was observed after more than 40 flights in the high-speed thermal environment encountered by the airplane. The sensor is rugged, simple, and sensitive to small flow changes. It can provide the accurate inputs necessary to control an inlet.

  17. Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing

    DTIC Science & Technology

    2012-12-14

    Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing. Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica. From the report abstract (recovered in part): current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of ...

  18. Similarities and distinctions of CIR and Sheath

    NASA Astrophysics Data System (ADS)

    Yermolaev, Yuri; Lodkina, Irina; Nikolaeva, Nadezhda; Yermolaev, Michael

    2016-04-01

    On the basis of OMNI data and our catalog of large-scale solar wind (SW) streams during 1976-2000 [Yermolaev et al., 2009], we study the average temporal profiles of two types of compressed regions: CIR (corotating interaction region, the compressed region before a High Speed Stream (HSS)) and Sheath (the compressed region before fast Interplanetary CMEs (ICMEs), including Magnetic Clouds (MC) and Ejecta). As has been shown by Nikolaeva et al. [2015], the efficiency of magnetic storm generation is ~50% higher for Sheath and CIR than for ICME (MC and Ejecta); that is, the reaction of the magnetosphere depends on the type of driver. To take into account the different durations of the SW types, we use the double superposed epoch analysis (DSEA) method: rescaling the duration of the interval for all types in such a manner that the beginnings and ends of all intervals of a selected type coincide [Yermolaev et al., 2010; 2015]. The obtained data allow us to suggest that the formation of all types of compression regions has the same physical mechanism irrespective of the piston type (HSS or ICME), and that the differences are connected with the geometry and the full speed jumps at the edges of the compression regions. Making the natural assumption that the speed gradient is directed approximately along the normal to the piston, CIR has the largest angle between the speed gradient and the direction of the average SW speed, and ICME the smallest. The work was supported by the Russian Foundation for Basic Research, projects 13-02-00158 and 16-02-00125, and by the Program of the Presidium of the Russian Academy of Sciences. References: Nikolaeva, N. S., Yu. I. Yermolaev, and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Research, Vol. 53, No. 2, pp. 119-127. Yermolaev, Yu. I., N. S. Nikolaeva, I. G. Lodkina, and M. Yu. Yermolaev (2009), Catalog of large-scale solar wind phenomena during 1976-2000, Cosmic Research, Vol. 47, No. 2, pp. 81-94. Yermolaev, Yu. I., N. S. Nikolaeva, I. G. Lodkina, and M. Yu. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, pp. 2177-2186. Yermolaev, Yu. I., I. G. Lodkina, N. S. Nikolaeva, and M. Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis, J. Geophys. Res. Space Physics, 120, doi:10.1002/2015JA021274.

  19. The Exploration Water Recovery System

    NASA Technical Reports Server (NTRS)

    ORourke, Mary Jane E.; Carter, Layne; Holder, Donald W.; Tomes, Kristin M.

    2006-01-01

    The Exploration Water Recovery System is designed toward fulfillment of NASA's Vision for Space Exploration, which will require elevating existing technologies to higher levels of optimization. This new system, designed for application to the Exploration infrastructure, presents a novel combination of proven air and water purification technologies. The integration of unit operations is modified from that of the current state-of-the-art water recovery system so as to optimize treatment of the various waste water streams, contaminant loads, and flow rates. Optimization is achieved primarily through the removal of volatile organic contaminants from the vapor phase prior to their absorption into the liquid phase. In the current state-of-the-art system, the water vapor in the cabin atmosphere is condensed, and the volatile organic contaminants present in that atmosphere are absorbed into the aqueous phase. Removal of contaminants then occurs via catalytic oxidation in the liquid phase. Oxidation kinetics, however, dictate that removal of volatile organic contaminants from the vapor phase is inherently more efficient than their removal from the aqueous phase. Taking advantage of this efficiency reduces the complexity of the water recovery system. This reduction in system complexity is accompanied by reductions in the weight, volume, power, and resupply requirements of the system. Vapor compression distillation technology is used to treat the urine, condensate, and hygiene waste streams. This contributes to the reduction in resupply, as incorporation of vapor compression distillation technology at this point in the process reduces reliance on the expendable ion exchange and adsorption media used in the current state-of-the-art water recovery system. Other proven technologies that are incorporated into the Exploration Water Recovery System include the Trace Contaminant Control System and the Volatile Removal Assembly.

  20. Efficient 3D Watermarked Video Communication with Chaotic Interleaving, Convolution Coding, and LMMSE Equalization

    NASA Astrophysics Data System (ADS)

    El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.

    2017-06-01

    Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-motivated bit errors. The robust performance of 3D-MVV transmission schemes over wireless channels has therefore become a considerable research issue, owing to restricted resources and the presence of severe channel errors. A 3D-MVV stream is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, highly compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are first converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It reduces the channel effects on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD watermarked 3D-MVV frames were executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios, and watermark extraction remains possible in the proposed framework.

  1. Optimization of the oxidant supply system for combined cycle MHD power plants

    NASA Technical Reports Server (NTRS)

    Juhasz, A. J.

    1982-01-01

    An in-depth study was conducted to determine what improvements, if any, could be made to the oxidant supply system for combined cycle MHD power plants that would be reflected in higher thermal efficiency and a reduction in the cost of electricity (COE). A systematic analysis of air separation process variations showed that the specific energy consumption is minimized when the product stream oxygen concentration is about 70 mole percent. The use of advanced air compressors, having variable speed and guide vane position control, results in additional power savings. The study also led to the conceptual design of a new air separation process, sized for a 500 MWe MHD plant, referred to as internal compression. In addition to its lower overall energy consumption, potential capital cost savings were identified for air separation plants using this process when constructed as a single large air separation train rather than the multiple parallel trains typical of conventional practice.

  2. Process and apparatus for afterburning of combustible pollutants from an internal combustion engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurent, P.A.

    1978-07-04

    In a process for the afterburning of the combustible pollutants from an internal combustion engine, in order to automatically reduce the secondary induction rate when power increases without using a controlling valve actuated by the carburetor venturi depression, there is provided a secondary air pump, linked to and activated by the engine, whose volumetric efficiency decreases when the ratio between its back pressure and suction pressure increases. This reduction is achieved through the proper selection of the pump volumetric compression ratio r: between 0.6 c and 1.3 c when a steeply decreasing trend is required, and above 1.3 c if a more gradual decreasing trend is required. To perform this process, an afterburner apparatus has a nitrogen oxide reducing catalyst placed inside the afterburner reactor in the gas stream immediately at the outlet of a torus, in which the gases are homogenized and their reaction with preinjection air is completed.

  3. n-Gram-Based Text Compression.

    PubMed

    Nguyen, Vu H; Nguyen, Hien T; Duong, Hieu N; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigrams to five-grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigrams to five-grams, achieving dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.
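    A toy sketch of the greedy sliding-window encoding described above, with a hypothetical miniature dictionary (the real dictionaries are built from the 2.5 GB corpus): the longest matching n-gram is tried first, with raw words as a fallback.

    ```python
    ngram_dict = {("xin", "chao"): 0,
                  ("xin", "chao", "cac", "ban"): 1,
                  ("hen", "gap", "lai"): 2}             # n-gram -> code (made up)

    def encode(words):
        out, i = [], 0
        while i < len(words):
            for n in range(5, 1, -1):                   # longest match first
                gram = tuple(words[i:i+n])
                if len(gram) == n and gram in ngram_dict:
                    out.append(("NGRAM", ngram_dict[gram]))  # 2-4 bytes in the paper
                    i += n
                    break
            else:
                out.append(("RAW", words[i]))           # literal fallback
                i += 1
        return out

    print(encode("xin chao cac ban hen gap lai".split()))
    # -> [('NGRAM', 1), ('NGRAM', 2)]
    ```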

  4. Compression technique for large statistical data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggers, S.J.; Olken, F.; Shoshani, A.

    1981-03-01

    The compression of large statistical databases is explored, and techniques are proposed for organizing the compressed data such that the time required to access the data is logarithmic. The techniques exploit special characteristics of statistical databases, namely, variation in the space required for the natural encoding of integer attributes, a prevalence of a few repeating values or constants, and the clustering of both data of the same length and constants in long, separate series. The techniques are variations of run-length encoding, in which modified run-lengths for the series are extracted from the data stream and stored in a header, which is used to form the base level of a B-tree index into the database. The run-lengths are cumulative, and therefore the access time of the data is logarithmic in the size of the header. The details of the compression scheme and its implementation are discussed, several special cases are presented, and an analysis is given of the relative performance of the various versions.
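    A minimal sketch of the cumulative run-length header idea: storing one (value, cumulative end) pair per run lets record lookup become a binary search over the header, i.e., logarithmic in the number of runs, instead of a scan of the data stream. The data below are illustrative.

    ```python
    import bisect
    from itertools import groupby

    data = [7]*1000 + [0]*5000 + [42]*250        # long constant runs

    # header: run value plus cumulative end position (one past the run's last index)
    header_vals, header_ends, pos = [], [], 0
    for val, run in groupby(data):
        pos += sum(1 for _ in run)
        header_vals.append(val)
        header_ends.append(pos)

    def fetch(i: int):
        """Value of logical record i via binary search on the cumulative ends."""
        return header_vals[bisect.bisect_right(header_ends, i)]

    assert all(fetch(i) == data[i] for i in (0, 999, 1000, 5999, 6000, 6249))
    print(len(header_ends), "runs index", len(data), "records")  # 3 runs for 6250 records
    ```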

  5. Numerical solutions of the Navier-Stokes equations for the supersonic laminar flow over a two-dimensional compression corner

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1972-01-01

    Numerical solutions have been obtained for the supersonic, laminar flow over a two-dimensional compression corner. These solutions were obtained as steady-state solutions to the unsteady Navier-Stokes equations using the finite difference method of Brailovskaya, which has second-order accuracy in the spatial coordinates. Good agreement was obtained between the computed results and experimentally measured wall pressure distributions for Mach numbers of 4 and 6.06, with the respective Reynolds numbers based on free-stream conditions and the distance from the leading edge to the corner. In those calculations, as well as in others, sufficient resolution was obtained to show the streamline pattern in the separation bubble. Upstream boundary conditions for the compression corner flow were provided by numerically solving the unsteady Navier-Stokes equations for the flat plate flow field, beginning at the leading edge. The compression corner flow field was enclosed by a computational boundary, with the unknown boundary conditions supplied by extrapolation from internally computed points.

  6. n-Gram-Based Text Compression

    PubMed Central

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigrams to five-grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigrams to five-grams, achieving dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. PMID:27965708

  7. Compressible flow about symmetrical Joukowski profiles

    NASA Technical Reports Server (NTRS)

    Kaplan, Carl

    1938-01-01

    The method of Poggi is employed for the determination of the effects of compressibility upon the flow past an obstacle. A general expression for the velocity increment due to compressibility is obtained. The general result holds whatever the shape of the obstacle; but, in order to obtain the complete solution, it is necessary to know a certain Fourier expansion of the square of the velocity of flow past the obstacle. An application is made to the case of flow past a symmetrical Joukowski profile with a sharp trailing edge, fixed in a stream at an arbitrary angle of attack and with the circulation determined by the Kutta condition. The results are obtained in closed form and are exact insofar as the second approximation to the compressible flow is concerned, the first approximation being the result for the corresponding incompressible flow. Formulas for lift and moment analogous to the Blasius formulas in incompressible flow are developed and applied to thin symmetrical Joukowski profiles at small angles of attack.

  8. Potential of two-stage membrane system with recycle stream for CO{sub 2} capture from postcombustion gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongxiao Yang; Zhi Wang; Jixiao Wang

    2009-09-15

    In order to restrict greenhouse gas emissions, CO2 should be captured from postcombustion gas for further treatment, for example, geosequestration. In this work, the separation performance of a two-stage membrane system with a recycle stream was investigated using the cross-flow model. For the larger CO2/N2 selectivities that can be achieved in the lab, for example, a selectivity of 52, the separation target of CO2 purity >95% and CO2 recovery >90% can be fulfilled by the two-stage system. The process cost of the two-stage membrane process was investigated. There is an optimum pressure ratio at which the capital cost and the energy cost can be balanced to minimize the total cost. Using the optimum pressure ratios and efficient membranes, the total cost of the two-stage system can be reduced to a range that is competitive with the process cost of the traditional chemical absorption method. For example, with feed compression applied, the total cost of the two-stage membrane system using the membrane with CO2/N2 selectivity of 52 and CO2 permeance of 3.12 x 10^-3 m^3 (STP) m^-2 s^-1 MPa^-1 is estimated to be $47.9/(ton CO2 recovered). 22 refs., 11 figs., 3 tabs.

  9. MPEG-1 low-cost encoder solution

    NASA Astrophysics Data System (ADS)

    Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven

    1995-02-01

    A solution for real-time compression of digital YCrCb video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size, and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 x 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit, and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied, which can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio, and system data can easily be performed by the host computer. Having relatively low complexity and only small DRAM requirements, the developed solution can be applied to low-cost encoding products for consumer electronics.

  10. Sensitivity Analysis in RIPless Compressed Sensing

    DTIC Science & Technology

    2014-10-01

    Sensitivity Analysis of Compressive Sensing Solutions (report title). From the report abstract (recovered in part): The compressive sensing framework finds a wide range of applications in signal processing and analysis. More specifically, we show that in a noiseless and RIP-less setting [11], the recovery process of a compressed sensing framework is ...

  11. An Evaluation of the Vapor Phase Catalytic Ammonia Removal Process for Use in a Mars Transit Vehicle

    NASA Technical Reports Server (NTRS)

    Flynn, Michael; Borchers, Bruce

    1998-01-01

    An experimental program has been developed to evaluate the potential of the Vapor Phase Catalytic Ammonia Removal (VPCAR) technology for use as a Mars Transit Vehicle water purification system. Design modifications that will be required to ensure proper operation of the VPCAR system in reduced gravity are also evaluated. The VPCAR system is an integrated wastewater treatment technology that combines a distillation process with high-temperature catalytic oxidation. The distillation portion of the system utilizes a vapor compression distillation process to provide an energy-efficient phase-change separation. This portion of the system removes inorganic salts and large-molecular-weight (i.e., non-volatile) organic contaminants from the product water stream and concentrates them into a byproduct stream. To oxidize the volatile organic compounds and ammonia, a vapor-phase, high-temperature catalytic oxidizer is used. This catalytic system converts these compounds, along with the aqueous product, into CO2, H2O, and N2O. A secondary catalytic bed can then be used to reduce the N2O to nitrogen and oxygen (although this was not evaluated in this study). This paper describes the design specification of the VPCAR process, the relative benefits of its utilization in a Mars Transit Vehicle, and the design modifications that will be required to ensure its proper operation in reduced gravity. In addition, the results of an experimental evaluation of the processor are presented. This evaluation characterizes the processor's performance in terms of product water purity, water recovery rate, and power consumption.

  12. Method for enhanced atomization of liquids

    DOEpatents

    Thompson, Richard E.; White, Jerome R.

    1993-01-01

    In a process for atomizing a slurry or liquid process stream in which a slurry or liquid is passed through a nozzle to provide a primary atomized process stream, an improvement which comprises subjecting the liquid or slurry process stream to microwave energy as the liquid or slurry process stream exits the nozzle, wherein sufficient microwave heating is provided to flash vaporize the primary atomized process stream.

  13. Method of Separating Oxygen From Spacecraft Cabin Air to Enable Extravehicular Activities

    NASA Technical Reports Server (NTRS)

    Graf, John C.

    2013-01-01

    Extravehicular activities (EVAs) require high-pressure, high-purity oxygen. Shuttle EVAs use oxygen that is stored and transported as a cryogenic fluid. EVAs on the International Space Station (ISS) presently use the Shuttle cryogenic O2, which is transported to the ISS using a transfer hose. The fluid is compressed to elevated pressures and stored as a high-pressure gas. With the retirement of the Shuttle, NASA has been searching for ways to deliver oxygen to fill the high-pressure oxygen tanks on the ISS. A method was developed using low-pressure oxygen generated onboard the ISS and released into ISS cabin air: filtering the oxygen from ISS cabin air using a pressure swing adsorber to generate a low-pressure (high-purity) oxygen stream, compressing the oxygen with a mechanical compressor, and transferring the high-pressure, high-purity oxygen to ISS storage tanks. The pressure swing adsorber (PSA) can be either a two-stage device or a single-stage device, depending on the type of sorbent used. The key is to produce a stream with oxygen purity greater than 99.5 percent. The separator can be a PSA device or a VPSA device (one that uses both vacuum and pressure for the gas separation). The compressor is a multi-stage mechanical compressor. If the gas flow rates are on the order of 5 to 10 lb (2.3 to 4.6 kg) per day, the compressor can be relatively small [about 3 x 16 x 16 in. (8 x 41 x 41 cm)]. Any spacecraft system, or other remote location, that has a supply of low-pressure oxygen, a method of separating oxygen from cabin air, and a method of compressing the enriched oxygen stream has the possibility of a regenerable supply of high-pressure, high-purity oxygen that is compact, simple, and safe. If the cabin air is modified so that there is very little argon, the separator can be smaller and simpler and use less power.

  14. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques, covering four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.

  15. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments through a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because 3D video data contains depth as well as color information. Our goal is to explore a different part of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserve enough information to communicate the 3D images effectively (min. PSNR > 40), and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
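    A toy sketch in the spirit of the second scheme's depth path: run-length encode a mostly smooth 16-bit depth map, then entropy-code the runs, here with zlib standing in for the paper's Huffman coder. The synthetic depth map is an assumption, not captured data.

    ```python
    import zlib
    import numpy as np

    depth = np.full((240, 320), 1000, dtype=np.uint16)   # background at 1000 mm
    depth[60:180, 80:240] = 850                          # a foreground region

    def rle(a: np.ndarray) -> bytes:
        """Byte-oriented run-length coder: (run length, 16-bit value) records."""
        flat, out, i = a.ravel(), bytearray(), 0
        while i < flat.size:
            j = i
            while j < flat.size and flat[j] == flat[i] and j - i < 255:
                j += 1
            out += bytes([j - i]) + int(flat[i]).to_bytes(2, "little")
            i = j
        return bytes(out)

    raw = depth.tobytes()
    packed = zlib.compress(rle(depth), 9)
    print(f"raw {len(raw)} B -> {len(packed)} B (ratio {len(raw)/len(packed):.0f}:1)")
    ```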

  16. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
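    Rice coding itself is simple enough to sketch: each value is split into a unary-coded quotient and a k-bit binary remainder. The round trip below is a plain sequential illustration; the patent's GPU-parallel decompression strategy is not reproduced here.

    ```python
    def rice_encode(values, k):
        """Encode non-negative ints as unary quotient + k-bit remainder."""
        bits = []
        for v in values:
            q, r = v >> k, v & ((1 << k) - 1)
            bits += [1]*q + [0]                      # unary quotient, 0-terminated
            bits += [(r >> (k-1-i)) & 1 for i in range(k)]  # remainder, MSB first
        return bits

    def rice_decode(bits, k, count):
        out, pos = [], 0
        for _ in range(count):
            q = 0
            while bits[pos] == 1:                    # read the unary part
                q += 1; pos += 1
            pos += 1                                 # skip the 0 terminator
            r = 0
            for _ in range(k):                       # read the k-bit remainder
                r = (r << 1) | bits[pos]; pos += 1
            out.append((q << k) | r)
        return out

    vals = [3, 18, 0, 7, 42]
    assert rice_decode(rice_encode(vals, k=3), k=3, count=len(vals)) == vals
    print("round-trip ok")
    ```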

  17. Processing and properties of a solid energy fuel from municipal solid waste (MSW) and recycled plastics.

    PubMed

    Gug, JeongIn; Cacciola, David; Sobkowicz, Margaret J

    2015-01-01

    Diversion of waste streams such as plastics, wood, paper, and other solid trash from municipal landfills, and extraction of useful materials from landfills, is an area of increasing interest, especially in densely populated areas. One promising technology for recycling municipal solid waste (MSW) is to burn the high-energy-content components in a standard coal power plant. This research aims to reform wastes into briquettes that are compatible with typical coal combustion processes. In order to comply with the standards of coal-fired power plants, the feedstock must be mechanically robust, free of hazardous contaminants, and moisture resistant, while retaining high fuel value. This study investigates the effects of processing conditions and added recyclable plastics on the properties of MSW solid fuels. A well-sorted waste stream high in paper and fiber content was combined with controlled levels of the recyclable plastics PE, PP, PET, and PS and formed into briquettes using a compression molding technique. The effects of added plastics and moisture content on binding attraction and energy efficiency were investigated. The stability of the briquettes under moisture exposure, the fuel composition by proximate analysis, the briquette mechanical strength, and the burning efficiency were evaluated. It was found that high processing temperature ensures better product properties, and that addition of milled mixed plastic waste leads to better encapsulation as well as greater calorific value. Partial (but not complete) moisture removal also improves the compacting process and results in a higher heating value. Analysis of the post-processing water uptake and compressive strength showed a correlation between density and stability under both mechanical stress and humid environments. Proximate analysis indicated heating values comparable to coal. The results showed that mechanical and moisture-uptake stability were improved when the moisture and air contents were optimized. Moreover, the briquette composition was similar to biomass fuels but had significant advantages due to the addition of waste plastics, which have high energy content compared to other waste types. Addition of PP and HDPE presented greater benefits than addition of PET due to lower softening temperature and lower oxygen content. It should be noted that while harmful emissions such as dioxins, furans, and mercury can result from burning plastics, WTE facilities have been able to control these emissions to meet US EPA standards. This research provides a drop-in coal replacement that reduces demand on landfill space and replaces a significant fraction of fossil-derived fuel with a renewable alternative.

  18. RF-photonic chirp encoder and compressor for seamless analysis of information flow.

    PubMed

    Zalevsky, Zeev; Shemer, Amir; Zach, Shlomo

    2008-05-26

    In this paper we realize an RF photonic chirp compression system that compresses a continuous stream of incoming RF data (modulated on top of an optical carrier) into a train of short temporal pulses. Each pulse in the train can be separated and treated individually while being sampled by a low-rate optical switch and without temporal losses of the incoming flow of information. Each such pulse can be filtered and analyzed differently. The main advantage of the proposed system is its capability to handle, seamlessly, high-rate information flow with all-optical means and with low-rate optical switches.

  19. Modern CFD applications for the design of a reacting shear layer facility

    NASA Technical Reports Server (NTRS)

    Yu, S. T.; Chang, C. T.; Marek, C. J.

    1991-01-01

    The RPLUS2D code, capable of calculating high speed reacting flows, was adopted to design a compressible shear layer facility. In order to create reacting shear layers at high convective Mach numbers, hot air streams at supersonic speeds, produced by converging-diverging nozzles, must be provided. A finite rate chemistry model is used to simulate the nozzle flows. Results are compared with one-dimensional solutions at chemical equilibrium. Additionally, a two-equation turbulence model with compressibility effects was successfully incorporated into the RPLUS code. The model was applied to simulate a supersonic shear layer. Preliminary results show favorable comparisons with the experimental data.

  20. Near-wall modelling of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    So, Ronald M. C.

    1990-01-01

    Work was carried out to formulate near-wall models for the equations governing the transport of the temperature-variance and its dissipation rate. With these equations properly modeled, a foundation is laid for their extension together with the heat-flux equations to compressible flows. This extension is carried out in a manner similar to that used to extend the incompressible near-wall Reynolds-stress models to compressible flows. The methodology used to accomplish the extension of the near-wall Reynolds-stress models is examined and the actual extension of the models for the Reynolds-stress equations and the near-wall dissipation-rate equation to compressible flows is given. Then the formulation of the near-wall models for the equations governing the transport of the temperature variance and its dissipation rate is discussed. Finally, a sample calculation of a flat plate compressible turbulent boundary-layer flow with adiabatic wall boundary condition and a free-stream Mach number of 2.5 using a two-equation near-wall closure is presented. The results show that the near-wall two-equation closure formulated for compressible flows is quite valid and the calculated properties are in good agreement with measurements. Furthermore, the near-wall behavior of the turbulence statistics and structure parameters is consistent with that found in incompressible flows.

  1. Statistical properties of MHD fluctuations associated with high speed streams from HELIOS 2 observations

    NASA Technical Reports Server (NTRS)

    Bavassano, B.; Dobrowolny, H.; Fanfoni, G.; Mariani, F.; Ness, N. F.

    1981-01-01

    Helios 2 magnetic data were used to obtain several statistical properties of MHD fluctuations associated with the trailing edge of a given stream observed in different solar rotations. Eigenvalues and eigenvectors of the variance matrix, total power, and degree of compressibility of the fluctuations were derived and discussed both as a function of distance from the Sun and as a function of the frequency range included in the sample. The results obtained add new information to the picture of MHD turbulence in the solar wind. In particular, a dependence on frequency range of the radial gradients of various statistical quantities is obtained.

  2. A simple apparatus for the experimental study of non-steady flow thrust-augmenter ejector configurations

    NASA Technical Reports Server (NTRS)

    Khare, J. M.; Kentfield, J. A. C.

    1979-01-01

    A flexible, and easily modified, test rig is described which allows a one-dimensional nonsteady flow stream to be generated economically from a steady-flow source of compressed air. This nonsteady flow is used as the primary stream in a nonsteady-flow ejector constituting part of the test equipment. Standard piezoelectric pressure transducers allow local pressures to be studied, as functions of time, in both the primary and secondary (mixed) flow portions of the apparatus. Provision is also made for measuring the primary and secondary mass flows and the thrust generated. Sample results obtained with the equipment are presented.

  3. Field Testing of Cryogenic Carbon Capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayre, Aaron; Frankman, Dave; Baxter, Andrew

    Sustainable Energy Solutions has been developing Cryogenic Carbon Capture™ (CCC) since 2008. In that time two processes have been developed, the External Cooling Loop and Compressed Flue Gas Cryogenic Carbon Capture processes (CCC ECL™ and CCC CFG™ respectively). The CCC ECL™ process has been scaled up to a 1 TPD CO2 system. In this process the flue gas is cooled by an external refrigerant loop. SES has tested CCC ECL™ on real flue gas slip streams from subbituminous coal, bituminous coal, biomass, natural gas, shredded tires, and municipal waste fuels at field sites that include utility power stations, heating plants, cement kilns, and pilot-scale research reactors. The CO2 concentrations from these tests ranged from 5 to 22% on a dry basis. CO2 capture ranged from 95-99+% during these tests. Several other condensable species were also captured including NO2, SO2 and PMxx at 95+%. NO was also captured at a modest rate. The CCC CFG™ process has been scaled up to a 0.25 ton per day system. This system has been tested on real flue gas streams including subbituminous coal, bituminous coal and natural gas at field sites that include utility power stations, heating plants, and pilot-scale research reactors. CO2 concentrations for these tests ranged from 5 to 15% on a dry basis. CO2 capture ranged from 95-99+% during these tests. Several other condensable species were also captured including NO2, SO2 and PMxx at 95+%. NO was also captured at 90+%. Hg capture was also verified and the resulting effluent from CCC CFG™ was below a 1 ppt concentration. This paper will focus on discussion of the capabilities of CCC, the results of field testing and the future steps surrounding the development of this technology.

  4. Extended Sleeve Products Allow Control and Monitoring of Process Fluid Flows Inside Shielding, Behind Walls and Beneath Floors - 13041

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, Mark W.

    2013-07-01

    Throughout power generation, delivery and waste remediation, the ability to control process streams in difficult or impossible locations becomes increasingly necessary as the complexity of processes increases. Example applications include radioactive environments, inside concrete installations, buried in dirt, or inside a shielded or insulated pipe. In these situations, it is necessary to implement innovative solutions to tackle such issues as valve maintenance, valve control from remote locations, equipment cleaning in hazardous environments, and flow stream analysis. The Extended Sleeve family of products provides a scalable solution to tackle some of the most challenging applications in hazardous environments which require flow stream control and monitoring. The Extended Sleeve family of products is defined in three groups: Extended Sleeve (ESV), Extended Bonnet (EBV) and Instrument Enclosure (IE). Each of the products provides a variation on the same requirements: to provide access to the internals of a valve, or to monitor the fluid passing through the pipeline through shielding around the process pipe. The shielding can be as simple as a grout filled pipe covering a process pipe or as complex as a concrete deck protecting a room in which the valves and pipes pass through at varying elevations. Extended Sleeves are available between roughly 30 inches and 18 feet of distance between the pipeline centerline and the top of the surface to which it mounts. The Extended Sleeve provides features such as ± 1.5 inches of adjustment between the pipeline and deck location, internal flush capabilities, automatic alignment of the internal components during assembly and integrated actuator mounting pads. The Extended Bonnet is a shorter fixed height version of the Extended Sleeve which has a removable deck flange to facilitate installation through walls, and is delivered fully assembled. The Instrument Enclosure utilizes many of the same components as an Extended Sleeve, yet allows the installation of process monitoring instruments, such as a turbidity meter to be placed in the flow stream. The basis of the design is a valve body, which, rather than having a directly mounted bonnet has lengths of concentric pipe added, which move the bonnet away from the valve body. The pipe is conceptually similar to an oil field well, with the various strings of casing, and tubing installed. Each concentric pipe provides a required function, such as the outermost pipes, the valve sleeve and penetration sleeve, which provide structural support to the deck flange. For plug valve based designs, the next inner pipe provides compression on the environmental seals at the top of the body to bonnet joint, followed by the innermost pipe which provides rotation of the plug, in the same manner as an extended stem. Ball valve ESVs have an additional pipe to provide compressive loading on the stem packing. Due to the availability of standard pipe grades and weights, the product can be configured to fit a wide array of valve sizes, and application lengths, with current designs as short as seven inches and as tall as 18 feet. Central to the design is the requirement for no special tools or downhole tools to remove parts or configure the product. Off the shelf wrenches, sockets or other hand tools are all that is required.
Compared to other products historically available, this design offers a lightweight option, which, while not as rigidly stiff, can deflect compliantly under extreme seismic loading, rather than break. Application conditions vary widely, as the base product is 316 and 304 stainless steel, but utilizes 17-4PH and other alloys as needed based on the temperature range and mechanical requirements. Existing designs are installed in applications as hot as 1400 deg. F, at low pressure, and separately in highly radioactive environments. The selection of plug versus ball valve, metal versus soft seats, and the material of the seals and seats is all dependent on the application requirements. The design of the Extended Sleeve family of products provides a platform which solves a variety of accessibility problems associated with controlling process flow streams in remote, hard to reach locations in harsh environments. Installation of the equipment described has been shown to allow access to flow streams that otherwise would require exceptional means to access and control. The Extended Sleeve family of products provides a scalable solution to both control and monitor process fluid flow through shielding, walls or floors when direct connection is advantageous. (authors)

  5. Methods of natural gas liquefaction and natural gas liquefaction plants utilizing multiple and varying gas streams

    DOEpatents

    Wilding, Bruce M; Turner, Terry D

    2014-12-02

    A method of natural gas liquefaction may include cooling a gaseous NG process stream to form a liquid NG process stream. The method may further include directing a first tail gas stream out of a plant at a first pressure and directing a second tail gas stream out of the plant at a second pressure. An additional method of natural gas liquefaction may include separating CO.sub.2 from a liquid NG process stream and processing the CO.sub.2 to provide a CO.sub.2 product stream. Another method of natural gas liquefaction may include combining a marginal gaseous NG process stream with a secondary substantially pure NG stream to provide an improved gaseous NG process stream. Additionally, a NG liquefaction plant may include a first tail gas outlet, and at least a second tail gas outlet, the at least a second tail gas outlet separate from the first tail gas outlet.

  6. Back-end and interface implementation of the STS-XYTER2 prototype ASIC for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Kasinski, K.; Szczygiel, R.; Zabolotny, W.

    2016-11-01

    Each front-end readout ASIC for High-Energy Physics experiments requires a robust and effective hit data streaming and control mechanism. The new STS-XYTER2 full-size prototype chip for the Silicon Tracking System and Muon Chamber detectors in the Compressed Baryonic Matter experiment at the Facility for Antiproton and Ion Research (FAIR, Germany) is a 128-channel time and amplitude measuring solution for silicon microstrip and gas detectors. It operates at a 250 kHit/s/channel hit rate, each hit producing 27 bits of information (5-bit amplitude, 14-bit timestamp, position and diagnostics data). The chip back-end implements fast front-end channel read-out, timestamp-wise hit sorting, and data streaming via a scalable interface implementing a dedicated protocol (STS-HCTSP) for chip control and hit transfer with data bandwidth from 9.7 MHit/s up to 47 MHit/s. It also includes multiple options for link diagnostics, failure detection, and throttling features. The back-end is designed to operate with the data acquisition architecture based on the CERN GBTx transceivers. This paper presents the details of the back-end and interface design and its implementation in the UMC 180 nm CMOS process.

  7. Mpeg2 codec HD improvements with medical and robotic imaging benefits

    NASA Astrophysics Data System (ADS)

    Picard, Wayne F. J.

    2010-02-01

    In this report, we propose an efficient scheme to use High Definition Television (HDTV) in a console or notebook format as a computer terminal in addition to its role as a TV display unit. In the proposed scheme, we assume that the main computer is situated at a remote location. The computer raster in the remote server is compressed using an HD E->Mpeg2 encoder and transmitted to the terminal at home. The built-in E->Mpeg2 decoder in the terminal decompresses the compressed bit stream and displays the raster. The terminal will be fitted with a mouse and keyboard, through which interaction with the remote computer server can be performed via a communications back channel. The terminal in a notebook format can thus be used as a high resolution computer and multimedia device. We will consider developments such as the required HD enhanced Mpeg2 resolution (E->Mpeg2) and its medical ramifications due to improvements in compressed image quality with 2D to 3D conversion (Mpeg3), and using the compressed Discrete Cosine Transform coefficients in the reality compression of vision and control of medical robotic surgeons.

  8. A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless

    DTIC Science & Technology

    1993-12-01

    [OCR of a scanned report cover; only the legible fields are kept.] Thesis, Naval Postgraduate School, Monterey, California. Title: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless. Author: Abbott, Walter D., III. Approved for public release; distribution is unlimited.
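
    The thesis's own method is not legible in the scan above. Purely as an illustration of the idea in the title (turning a lossy process into a lossless one), a common construction keeps the lossy output and losslessly compresses the residual against the original:

    ```python
    # Generic residual-coding sketch (illustrative only, not the thesis's method):
    # lossless result = lossy output + losslessly compressed residual.
    import zlib
    import numpy as np

    def to_lossless(original: np.ndarray, lossy: np.ndarray) -> bytes:
        residual = original.astype(np.int16) - lossy.astype(np.int16)
        return zlib.compress(residual.tobytes())      # small, low-entropy overhead

    def reconstruct(lossy: np.ndarray, blob: bytes) -> np.ndarray:
        residual = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
        return (lossy.astype(np.int16) + residual.reshape(lossy.shape)).astype(np.uint8)

    x = np.arange(256, dtype=np.uint8).reshape(16, 16)
    xq = (x // 16) * 16                  # stand-in lossy step: coarse quantization
    blob = to_lossless(x, xq)
    assert np.array_equal(reconstruct(xq, blob), x)   # bit-exact recovery
    ```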

  9. Changes In the Pickup Ion Cutoff Under Variable Solar Wind Conditions

    NASA Astrophysics Data System (ADS)

    Bower, J.; Moebius, E.; Taut, A.; Berger, L.; Drews, C.; Lee, M. A.; Farrugia, C. J.

    2017-12-01

    We present the first systematic analysis to determine pickup ion (PUI) cutoff speed variations, both during compression regions, identified by their structure, and during times of highly variable solar wind (SW) speed or magnetic field strength. This study is motivated by the attempt to remove or correct these effects on the determination of the longitude of the interstellar neutral gas flow from the flow pattern related variation of the PUI cutoff with ecliptic longitude. At the same time, this study sheds light on the physical mechanisms that lead to energy transfer between the SW and the embedded PUI population. Using 2007-2014 STEREO A PLASTIC observations we identify compression regions in the solar wind and analyze the PUI velocity distribution function (VDF). We developed a routine to identify stream interaction regions and CIRs, by identifying the stream interface and the successive velocity increase in the solar wind speed and density. Characterizing these individual compression events and combining them in a superposed epoch analysis allows us to analyze the PUI population in similar conditions and find the local cutoff shift with adequate statistics. The result of this method yields cutoff shifts for compression regions with large solar wind speed gradients. Additionally, through sorting the entire set of PUI VDFs at high time resolution we obtain a noticeable correlation of the cutoff shift with gradients in the SW speed and interplanetary magnetic field strength. We will discuss implications for the understanding of the PUI VDF evolution and the PUI cutoff analysis of the interstellar gas flow.

  10. Novel concepts for the compression of large volumes of carbon dioxide-phase III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, J. Jeffrey; Allison, Timothy C.; Evans, Neal D.

    In the effort to reduce the release of CO2 greenhouse gases to the atmosphere, sequestration of CO2 from Integrated Gasification Combined Cycle (IGCC) and Oxy-Fuel power plants is being pursued. This approach, however, requires significant compression power to boost the pressure to typical pipeline levels. The penalty can be as high as 8-12% on a typical IGCC plant. The goal of this research is to reduce this penalty through novel compression concepts and integration with existing IGCC processes. The primary objective of the study of novel CO2 compression concepts is to reliably boost the pressure of CO2 to pipeline pressures with the minimal amount of energy required. Fundamental thermodynamics were studied to explore pressure rise in both liquid and gaseous states. For gaseous compression, the project investigated novel methods to compress CO2 while removing the heat of compression internal to the compressor. The high pressure ratio, due to the delivery pressure of the CO2 for enhanced oil recovery, results in significant heat of compression. Since less energy is required to boost the pressure of a cooler gas stream, both upstream and inter-stage cooling is desirable. While isothermal compression has been utilized in some services, it has not been optimized for the IGCC environment. Phase I of this project determined the optimum compressor configuration and developed technology concepts for internal heat removal. Other compression options using liquefied CO2 and cryogenic pumping were explored as well. Preliminary analysis indicated up to a 35% reduction in power is possible with the new concepts being considered. In the Phase II program, two experimental test rigs were developed to investigate the two concepts further. A new pump loop facility was constructed to qualify a cryogenic turbopump for use on liquid CO2. Also, an internally cooled compressor diaphragm was developed and tested in a closed loop compressor facility using CO2. Both test programs successfully demonstrated good performance and mechanical behavior. In Phase III, a pilot compression plant consisting of a multi-stage centrifugal compressor with cooled diaphragm technology has been designed, constructed, and tested. Comparative testing of adiabatic and cooled tests at equivalent inlet conditions shows that the cooled diaphragms reduce power consumption by 3-8% when the compressor is operated as a back-to-back unit and by up to 9% when operated as a straight-through compressor with no intercooler. The power savings, heat exchanger effectiveness, and temperature drops for the cooled diaphragm were all slightly higher than predicted values but showed the same trends.
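
    The power saving from cooled, near-isothermal compression follows from the ideal-gas work expressions for a given pressure ratio (shown here for intuition only; CO2 near its critical point departs from ideal-gas behavior):

    ```latex
    % Compression work per mole for pressure ratio p2/p1:
    W_{\mathrm{isothermal}} = R T_1 \ln\frac{p_2}{p_1}
      \;<\;
    W_{\mathrm{adiabatic}} = \frac{\gamma R T_1}{\gamma - 1}
      \left[\left(\frac{p_2}{p_1}\right)^{(\gamma-1)/\gamma} - 1\right]
    ```

    Removing the heat of compression inside the machine, as the cooled diaphragms do, pushes each stage from the adiabatic figure toward the lower isothermal bound.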

  11. Removal of hydrogen sulfide as ammonium sulfate from hydropyrolysis product vapors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marker, Terry L.; Felix, Larry G.; Linck, Martin B.

    A system and method for processing biomass into hydrocarbon fuels that includes processing a biomass in a hydropyrolysis reactor resulting in hydrocarbon fuels and a process vapor stream and cooling the process vapor stream to a condensation temperature resulting in an aqueous stream. The aqueous stream is sent to a catalytic reactor where it is oxidized to obtain a product stream containing ammonia and ammonium sulfate. A resulting cooled product vapor stream includes non-condensable process vapors comprising H.sub.2, CH.sub.4, CO, CO.sub.2, ammonia and hydrogen sulfide.

  12. Removal of hydrogen sulfide as ammonium sulfate from hydropyrolysis product vapors

    DOEpatents

    Marker, Terry L; Felix, Larry G; Linck, Martin B; Roberts, Michael J

    2014-10-14

    A system and method for processing biomass into hydrocarbon fuels that includes processing a biomass in a hydropyrolysis reactor resulting in hydrocarbon fuels and a process vapor stream and cooling the process vapor stream to a condensation temperature resulting in an aqueous stream. The aqueous stream is sent to a catalytic reactor where it is oxidized to obtain a product stream containing ammonia and ammonium sulfate. A resulting cooled product vapor stream includes non-condensable process vapors comprising H.sub.2, CH.sub.4, CO, CO.sub.2, ammonia and hydrogen sulfide.

  13. Prioritized Contact Transport Stream

    NASA Technical Reports Server (NTRS)

    Hunt, Walter Lee, Jr. (Inventor)

    2015-01-01

    A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.
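
    As an illustration only (field names, the priority scale, and the trimming rule below are assumptions, not the patent's claims), the prioritize-then-remove-data pipeline might look like:

    ```python
    # Hypothetical sketch of building a prioritized contact stream.
    from dataclasses import dataclass, field

    @dataclass
    class ContactRecord:
        contact_id: str
        priority: int                                # assigned by the prioritization process
        essential: dict                              # always streamed
        detail: dict = field(default_factory=dict)   # dropped for low-priority contacts

    def build_contact_stream(records: list[ContactRecord],
                             min_detail_priority: int) -> list[dict]:
        """Remove data from low-priority records, then order the stream."""
        out = []
        for rec in sorted(records, key=lambda r: r.priority, reverse=True):
            msg = {"id": rec.contact_id, **rec.essential}
            if rec.priority >= min_detail_priority:  # keep detail only when priority allows
                msg.update(rec.detail)
            out.append(msg)
        return out

    stream = build_contact_stream(
        [ContactRecord("c1", 9, {"pos": (1, 2)}, {"track_history": [(0, 0), (1, 1)]}),
         ContactRecord("c2", 2, {"pos": (5, 7)}, {"track_history": [(4, 6)]})],
        min_detail_priority=5)   # threshold could vary with available bandwidth
    ```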

  14. Robust audio-visual speech recognition under noisy audio-video conditions.

    PubMed

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either or both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.

  15. Feature integration and object representations along the dorsal stream visual hierarchy

    PubMed Central

    Perry, Carolyn Jeane; Fallah, Mazyar

    2014-01-01

    The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147

  16. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming nowadays being state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs, and slower growth in the pixel count of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  17. Competitive Parallel Processing For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Antony R. H.

    1990-01-01

    Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited band-width. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.

  18. HVS-based quantization steps for validation of digital cinema extended bitrates

    NASA Astrophysics Data System (ADS)

    Larabi, M.-C.; Pellegrin, P.; Anciaux, G.; Devaux, F.-O.; Tulet, O.; Macq, B.; Fernandez, C.

    2009-02-01

    In Digital Cinema, the video compression must be as transparent as possible to provide the best image quality to the audience. The goal of compression is to simplify transport, storage, distribution and projection of films. For all those tasks, equipment needs to be developed. It is thus mandatory to reduce the complexity of the equipment by imposing limitations in the specifications. In this sense, the DCI has fixed the maximum bitrate for a compressed stream to 250 Mbps independently of the input format (4K/24fps, 2K/48fps or 2K/24fps). This parameter is discussed in this paper because it is not consistent to double or quadruple the input rate without increasing the output rate. The work presented in this paper is intended to define quantization steps ensuring visually lossless compression. Two steps are followed: first, the effect of each subband is evaluated separately; then the scaling ratio is found. The obtained results show that it is necessary to increase the bitrate limit for cinema material in order to achieve the visually lossless goal.

  19. Device for staged carbon monoxide oxidation

    DOEpatents

    Vanderborgh, Nicholas E.; Nguyen, Trung V.; Guante, Jr., Joseph

    1993-01-01

    A method and apparatus for selectively oxidizing carbon monoxide in a hydrogen rich feed stream. The method comprises mixing a feed stream consisting essentially of hydrogen, carbon dioxide, water and carbon monoxide with a first predetermined quantity of oxygen (air). The temperature of the mixed feed/oxygen stream is adjusted in a first heat exchanger assembly (20) to a first temperature. The mixed feed/oxygen stream is sent to reaction chambers (30,32) having an oxidation catalyst contained therein. The carbon monoxide of the feed stream preferentially absorbs on the catalyst at the first temperature to react with the oxygen in the chambers (30,32) with minimal simultaneous reaction of the hydrogen to form an intermediate hydrogen rich process stream having a lower carbon monoxide content than the feed stream. The elevated outlet temperature of the process stream is carefully controlled in a second heat exchanger assembly (42) to a second temperature above the first temperature. The process stream is then mixed with a second predetermined quantity of oxygen (air). The carbon monoxide of the process stream preferentially reacts with the second quantity of oxygen in a second stage reaction chamber (56) with minimal simultaneous reaction of the hydrogen in the process stream. The reaction produces a hydrogen rich product stream having a lower carbon monoxide content than the process stream. The product stream is then cooled in a third heat exchanger assembly (72) to a third predetermined temperature. Three or more stages may be desirable, each with metered oxygen injection.

  20. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
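
    The mean-subtraction and sign-magnitude steps named above are simple to state in code. A minimal numpy sketch, with an assumed (bands, H, W) integer layout for a spatially low-pass subband; ICER-3D's context modeling and entropy coding of the resulting bits are omitted:

    ```python
    # Sketch of the preprocessing described above, shapes assumed.
    import numpy as np

    def preprocess_lowpass(subband: np.ndarray):
        planes = subband.astype(np.int32)
        means = planes.mean(axis=(1, 2)).round().astype(np.int32)
        centered = planes - means[:, None, None]   # mean-subtract each spatial plane
        sign = (centered < 0).astype(np.uint8)     # one sign bit per coefficient...
        magnitude = np.abs(centered)               # ...plus the magnitude to encode
        return sign, magnitude, means              # means are kept so a decoder can undo
    ```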

  1. Effect of compressibility at high subsonic velocities on the lifting force acting on an elliptic cylinder

    NASA Technical Reports Server (NTRS)

    Kaplan, Carl

    1946-01-01

    An extended form of the Ackeret iteration method, applicable to arbitrary profiles, is utilized to calculate the compressible flow at high subsonic velocities past an elliptic cylinder. The angle of attack to the direction of the undisturbed stream is small and the circulation is fixed by the Kutta condition at the trailing end of the major axis. The expression for the lifting force on the elliptic cylinder is derived and shows a first-step improvement of the Prandtl-Glauert rule. It is further shown that the expression for the lifting force, although derived specifically for an elliptic cylinder, may be extended to arbitrary symmetrical profiles.

  2. Real-Time SCADA Cyber Protection Using Compression Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyle G. Roybal; Gordon H Rueff

    2013-11-01

    The Department of Energy’s Office of Electricity Delivery and Energy Reliability (DOE-OE) has a critical mission to secure the energy infrastructure from cyber attack. Through DOE-OE’s Cybersecurity for Energy Delivery Systems (CEDS) program, the Idaho National Laboratory (INL) has developed a method to detect malicious traffic on Supervisory Control and Data Acquisition (SCADA) networks using a data compression technique. SCADA network traffic is often repetitive, with only minor differences between packets. Research performed at the INL showed that SCADA network traffic has traits desirable for using compression analysis to identify abnormal network traffic. An open source implementation of a Lempel-Ziv-Welch (LZW) lossless data compression algorithm was used to compress and analyze surrogate SCADA traffic. Infected SCADA traffic was found to have statistically significant differences in compression when compared against normal SCADA traffic at the packet level. The initial analyses and results are clearly able to identify malicious network traffic from normal traffic at the packet level with a very high confidence level across multiple ports and traffic streams. Statistical differentiation between infected and normal traffic was possible using a modified data compression technique at the 99% probability level for all data analyzed. However, the conditions tested were rather limited in scope and need to be expanded into more realistic simulations of hacking events using techniques and approaches that better represent a real-world attack on a SCADA system. Nonetheless, the use of compression techniques to identify malicious traffic on SCADA networks in real time appears to have significant merit for infrastructure protection.
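
    A toy version of the detection idea, with zlib's DEFLATE standing in for the LZW implementation used at INL and an illustrative (not tuned) threshold: repetitive normal SCADA packets compress well, so a packet that compresses poorly stands out.

    ```python
    # Compression-ratio anomaly flagging; zlib and the margin are stand-ins.
    import zlib

    def compressed_ratio(packet: bytes) -> float:
        return len(zlib.compress(packet)) / len(packet)

    def is_suspicious(packet: bytes, baseline: float, margin: float = 0.15) -> bool:
        """Flag packets that compress much worse than the normal-traffic baseline."""
        return compressed_ratio(packet) > baseline + margin

    normal = b"READ coil=12 value=0\n" * 20
    baseline = compressed_ratio(normal)
    print(is_suspicious(normal, baseline))                  # False: matches baseline
    print(is_suspicious(bytes(range(256)) * 2, baseline))   # True: incompressible payload
    ```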

  3. Experimental investigation of a transonic potential flow around a symmetric airfoil

    NASA Technical Reports Server (NTRS)

    Hiller, W. J.; Meier, G. E. A.

    1981-01-01

    Experimental flow investigations on smooth airfoils were carried out using numerical solutions for transonic airfoil flow with a shockless supersonic region. The experimental flow reproduced essential sections of the theoretically computed frictionless solution. Agreement is better in the expansion part of the flow than in the compression part. The flow was nearly stationary in the entire velocity range investigated.

  4. Novel methodology for wide-ranged multistage morphing waverider based on conical theory

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Liu, Jun; Ding, Feng; Xia, Zhixun

    2017-11-01

    This study proposes the wide-ranged multistage morphing waverider design method. The flow field structure and aerodynamic characteristics of multistage waveriders are also analyzed. In this method, the multistage waverider is generated in the same conical flowfield, which contains a free-stream surface and different compression-stream surfaces. The obtained results show that the introduction of the multistage waverider design method can solve the problem of aerodynamic performance deterioration in the off-design state and allow the vehicle to always maintain the optimal flight state. The multistage waverider design method, combined with transfiguration flight strategy, can lead to greater design flexibility and the optimization of hypersonic wide-ranged waverider vehicles.

  5. TERMINAL ELECTRON ACCEPTING PROCESSES IN THE ALLUVIAL SEDIMENTS OF A HEADWATER STREAM

    EPA Science Inventory

    Chemical fluxes between catchments and streams are influenced by biochemical processes in the groundwater-stream water (GW-SW) ecotone, the interface between stream surface water and groundwater. Terminal electron accepting processes (TEAPs) that are utilized in respiration of ...

  6. Indexing and retrieval of MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.

    1998-04-01

    To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
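
    As an illustration of compressed-domain indexing (the feature and distance below are simplified stand-ins for the paper's representation), one can index clips by a normalized histogram of macroblock coding modes and retrieve by vector distance:

    ```python
    # Hypothetical compressed-domain index: macroblock-mode histograms.
    import numpy as np

    MODES = ["intra", "forward", "backward", "bi", "skipped"]

    def feature_vector(mb_modes: list[str]) -> np.ndarray:
        hist = np.array([mb_modes.count(m) for m in MODES], dtype=float)
        return hist / hist.sum()                     # low-dimensional index key

    def retrieve(query: np.ndarray, index: dict[str, np.ndarray], k: int = 3) -> list[str]:
        ranked = sorted(index, key=lambda clip: np.linalg.norm(index[clip] - query))
        return ranked[:k]                            # nearest clips by feature distance

    index = {"clip_a": feature_vector(["intra"] * 8 + ["skipped"] * 2),
             "clip_b": feature_vector(["forward"] * 6 + ["bi"] * 4)}
    print(retrieve(feature_vector(["intra"] * 9 + ["bi"]), index, k=1))  # ['clip_a']
    ```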

  7. Processing and properties of a solid energy fuel from municipal solid waste (MSW) and recycled plastics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gug, JeongIn, E-mail: Jeongin_gug@student.uml.edu; Cacciola, David, E-mail: david_cacciola@student.uml.edu; Sobkowicz, Margaret J., E-mail: Margaret_sobkowiczkline@uml.edu

    Highlights: • Briquetting was used to produce solid fuels from municipal solid waste and recycled plastics. • Optimal drying, processing temperature and pressure were found to produce stable briquettes. • Addition of waste plastics yielded heating values comparable with typical coal feedstocks. • This processing method improves utilization of paper and plastic diverted from landfills. - Abstract: Diversion of waste streams such as plastics, woods, papers and other solid trash from municipal landfills and extraction of useful materials from landfills is an area of increasing interest especially in densely populated areas. One promising technology for recycling municipal solid waste (MSW) is to burn the high-energy-content components in a standard coal power plant. This research aims to reform wastes into briquettes that are compatible with typical coal combustion processes. In order to comply with the standards of coal-fired power plants, the feedstock must be mechanically robust, free of hazardous contaminants, and moisture resistant, while retaining high fuel value. This study aims to investigate the effects of processing conditions and added recyclable plastics on the properties of MSW solid fuels. A well-sorted waste stream high in paper and fiber content was combined with controlled levels of recyclable plastics PE, PP, PET and PS and formed into briquettes using a compression molding technique. The effects of added plastics and moisture content on binding attraction and energy efficiency were investigated. The stability of the briquettes to moisture exposure, the fuel composition by proximate analysis, briquette mechanical strength, and burning efficiency were evaluated. It was found that high processing temperature ensures better properties of the product; addition of milled mixed plastic waste leads to better encapsulation as well as to greater calorific value. Also, partial (but not complete) moisture removal improves the compacting process and results in a higher heating value. Analysis of the post-processing water uptake and compressive strength showed a correlation between density and stability to both mechanical stress and a humid environment. Proximate analysis indicated heating values comparable to coal. The results showed that mechanical and moisture uptake stability were improved when the moisture and air contents were optimized. Moreover, the briquette sample composition was similar to biomass fuels but had significant advantages due to the addition of waste plastics that have high energy content compared to other waste types. Addition of PP and HDPE presented better benefits than addition of PET due to lower softening temperature and lower oxygen content. It should be noted that while harmful emissions such as dioxins, furans and mercury can result from burning plastics, WTE facilities have been able to control these emissions to meet US EPA standards. This research provides a drop-in coal replacement that reduces demand on landfill space and replaces a significant fraction of fossil-derived fuel with a renewable alternative.

  8. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high performance block-based wavelet image coder which is designed to be of very low implementational complexity yet with rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block only consists of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has very good coding performance even when the block size is (16,16).
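
    A bare-bones illustration of the bitplane decomposition underlying such coding; LCBiD's context modeling and entropy coding are omitted, so this only shows how a coefficient block is ordered into binary planes from most to least significant:

    ```python
    # Bitplane split of a (sign-separated) coefficient block, MSB plane first.
    import numpy as np

    def bitplanes(block: np.ndarray, nplanes: int) -> list[np.ndarray]:
        """Split |coefficients| into binary planes, most significant first."""
        mag = np.abs(block.astype(np.int32))
        return [(mag >> p) & 1 for p in range(nplanes - 1, -1, -1)]

    block = np.array([[5, -3], [0, 12]])
    for p, plane in enumerate(bitplanes(block, nplanes=4)):
        print(f"plane {p}:", plane.flatten())   # |12| = 1100 reads 1,1,0,0 top-down
    ```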

  9. Visual and visuomotor processing of hands and tools as a case study of cross talk between the dorsal and ventral streams.

    PubMed

    Almeida, Jorge; Amaral, Lénia; Garcea, Frank E; Aguiar de Sousa, Diana; Xu, Shan; Mahon, Bradford Z; Martins, Isabel Pavão

    2018-05-24

    A major principle of organization of the visual system is between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams. Here we focus on how the two streams interact. We tested neurologically-intact and impaired participants in an object categorization task over two classes of objects that depend on processing within both streams: hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). Our findings with neurologically-intact participants demonstrated that processing an image of a hand hampers the subsequent processing of an image of a tool, and vice versa. These results were not present in apraxic patients (N = 3). These findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.

  10. Systems Analysis of Physical Absorption of CO2 in Ionic Liquids for Pre-Combustion Carbon Capture.

    PubMed

    Zhai, Haibo; Rubin, Edward S

    2018-04-17

    This study develops an integrated technical and economic modeling framework to investigate the feasibility of ionic liquids (ILs) for precombustion carbon capture. The IL 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide is modeled as a potential physical solvent for CO2 capture at integrated gasification combined cycle (IGCC) power plants. The analysis reveals that the energy penalty of the IL-based capture system comes mainly from compression of the process and product streams and from solvent pumping, while the major capital cost components are the compressors and absorbers. On the basis of the plant-level analysis, the cost of CO2 avoided by the IL-based capture and storage system is estimated to be $63 per tonne of CO2. Technical and economic comparisons between IL- and Selexol-based capture systems at the plant level show that an IL-based system could be a feasible option for CO2 capture. Improving the CO2 solubility of ILs can simplify the capture process configuration and lower the process energy and cost penalties to further enhance the viability of this technology.
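
    For reference, the "cost of CO2 avoided" quoted above is conventionally defined as follows (a standard metric; the paper's exact accounting may differ):

    ```latex
    \mathrm{Cost\ of\ CO_2\ avoided\ [\$/t]} =
      \frac{(COE)_{\mathrm{capture}} - (COE)_{\mathrm{ref}}}
           {(e_{CO_2})_{\mathrm{ref}} - (e_{CO_2})_{\mathrm{capture}}}
    ```

    where COE is the levelized cost of electricity ($/MWh) and e_CO2 is the CO2 emission rate (t/MWh) of the reference and capture plants.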

  11. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2018-07-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further increasing combustion efficiency.

  12. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further increasing combustion efficiency.

  13. An improvement analysis on video compression using file segmentation

    NASA Astrophysics Data System (ADS)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades the rapid evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which makes up the bulk of data traffic in general. Because video accounts for so much of the data on the World Wide Web, reducing both the burden on the Internet and the bandwidth consumed by video helps users access video data more easily. For this, many video codecs have been developed, such as HEVC/H.265 and VP9, although comparing codecs like these raises the question of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression for video applications such as ad hoc video conferencing/streaming or surveillance monitoring. It also benchmarks the HEVC and VP9 video compression techniques through subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing the video file into several segments for compression and reassembling them afterwards, to improve the efficiency of video compression on the web as well as in offline mode.

  14. Photogrammetric point cloud compression for tactical networks

    NASA Astrophysics Data System (ADS)

    Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.

    2017-05-01

    We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness for Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.

  15. An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi

    H.264/AVC is the newest video coding standard. There are many new features in it which can easily be used for video encryption. In this paper, we propose a new scheme to do video encryption for the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strength) to different parts of the compressed video data. This USE scheme includes two parts: video data classification and unequal secure video data encryption. Firstly, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition has small size with high secure protection, while the unimportant data partition has large size with low secure protection. Secondly, we use AES as a block cipher to encrypt the important data partition and use LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cryptography algorithm and can ensure high security. LEX is a new stream cipher which is based on AES and whose computational cost is much lower than AES. In this way, our scheme can achieve both high security and low computational cost. Besides the USE scheme, we propose a low cost design of a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost for the hybrid AES/LEX module is 4678 gates and the AES encryption throughput is about 50 Mbps.
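
    A sketch of the USE split under stated substitutions: LEX is not available in common crypto libraries, so AES-CTR stands in for the fast stream cipher on the unimportant partition, with AES-CBC on the important partition; key, IV, and padding handling are simplified for illustration.

    ```python
    # Unequal Secure Encryption sketch (AES-CTR substitutes for LEX).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_use(important: bytes, unimportant: bytes, key: bytes):
        iv = os.urandom(16)
        pad = 16 - len(important) % 16                       # PKCS#7-style padding
        block = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        strong = block.update(important + bytes([pad]) * pad) + block.finalize()

        nonce = os.urandom(16)
        stream = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        fast = stream.update(unimportant) + stream.finalize()  # cheap bulk encryption
        return iv, strong, nonce, fast

    key = os.urandom(16)
    hdrs, coeffs = b"slice headers, motion vectors", b"residual coefficients" * 100
    print([len(x) for x in encrypt_use(hdrs, coeffs, key)])
    ```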

  16. Reuse of Aluminum Dross as an Engineered Product

    NASA Astrophysics Data System (ADS)

    Dai, Chen; Apelian, Diran

    To prevent the leaching of landfilled aluminum dross waste and to save the energy consumed in recovering metallic aluminum from dross, aluminum dross is reused directly as an engineered product rather than "refurbished" ineffectively. The concept is to reduce waste and to reuse. Two kinds of aluminum dross from industrial streams were selected and characterized. We have shown that dross can be used directly, or after a simple conditioning process, to manufacture refractory components. Dross particles below 50 mesh are most effective. Mechanical property evaluations revealed the possibility for dross waste to be utilized as filler in concrete, resulting in up to 40% higher flexural strength and 10% higher compressive strength compared to pure cement, as well as cement with sand additions. The potential usage of aluminum dross as a raw material for such engineering applications is presented and discussed.

  17. Direct compression of chitosan: process and formulation factors to improve powder flow and tablet performance.

    PubMed

    Buys, Gerhard M; du Plessis, Lissinda H; Marais, Andries F; Kotze, Awie F; Hamman, Josias H

    2013-06-01

    Chitosan is a polymer derived from chitin that is widely available at relatively low cost, but due to compression challenges it has limited application in the production of direct compression tablets. The aim of this study was to use certain process and formulation variables to improve the manufacture of tablets containing chitosan as bulking agent. Chitosan particle size and flow properties were determined, including bulk density, tapped density, compressibility and moisture uptake. The effect of process variables (i.e. compression force, punch depth, percentage compaction in a novel double fill compression process) and formulation variables (i.e. type of glidant, citric acid, pectin, coating with Eudragit S®) on chitosan tablet performance (i.e. mass variation, tensile strength, dissolution) was investigated. The moisture content of the chitosan powder, the particle size and the inclusion of glidants had a pronounced effect on its flowability. Varying the percentage compaction during the first cycle of a double fill compression process produced chitosan tablets with more acceptable tensile strength and dissolution rate properties. The inclusion of citric acid and pectin in the formulation significantly decreased the dissolution rate of isoniazid from the tablets due to gel formation. Direct compression of chitosan powder into tablets can be significantly improved by the investigated process and formulation variables as well as by applying a double fill compression process.

  18. Valve For Extracting Samples From A Process Stream

    NASA Technical Reports Server (NTRS)

    Callahan, Dave

    1995-01-01

    Valve for extracting samples from process stream includes cylindrical body bolted to pipe that contains stream. Opening in valve body matched and sealed against opening in pipe. Used to sample process streams in variety of facilities, including cement plants, plants that manufacture and reprocess plastics, oil refineries, and pipelines.

  19. Citizen Hydrology and Compressed-Air Hydropower for Rural Electrification in Haiti

    NASA Astrophysics Data System (ADS)

    Allen, S. M.

    2015-12-01

    At the present time, only one in eight residents of Haiti has access to electricity. Two recent engineering and statistical innovations have the potential to vastly reduce the cost of installing hydropower in Haiti and the rest of the developing world. The engineering innovation is that wind, solar and fluvial energy have been used to compress air for generation of electricity for only $20 per megawatt-hour, in contrast to the conventional World Bank practice of funding photovoltaic cells at $156 per megawatt-hour. The installation of hydropower requires a record of stream discharge, which is conventionally obtained by installing a gaging station that automatically monitors gage height (the height of the water surface above a fixed datum). An empirical rating curve is then used to convert gage height to stream discharge. The multiple field measurements of gage height and discharge, over a wide range of discharge values, that are required to develop and maintain a rating curve demand a workforce of hydrologic technicians that is prohibitive in remote and impoverished areas of the world. The statistical innovation is that machine learning has been applied to the USGS database of nearly four million simultaneous measurements of gage height and discharge to develop a new classification of rivers, so that a rating curve can be developed solely from the stream slope, channel geometry, horizontal and vertical distances to the nearest upstream and downstream confluences, and two pairs of discharge-gage height measurements. The objective of this study is to organize local residents to monitor gage height at ten stream sites in the northern peninsula of Haiti over a one-year period in preparation for installation of hydropower at one of the sites. The necessary baseline discharge measurements and channel surveying are being carried out for conversion of gage height to discharge. Results will be reported at the meeting.
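
    The rating-curve construction described above can be illustrated with the standard power-law form Q = a(h - h0)^b, which two gage-height/discharge pairs suffice to pin down once h0 is known. The measurements below are hypothetical, and this sketch omits the machine-learned classification that supplies the curve's structure in the study.

      # Fit a power-law rating curve Q = a*(h - h0)**b from two
      # gage-height/discharge pairs, assuming h0 (gage height of zero flow).
      import math

      def rating_curve(h1, q1, h2, q2, h0=0.0):
          b = math.log(q2 / q1) / math.log((h2 - h0) / (h1 - h0))
          a = q1 / (h1 - h0) ** b
          return lambda h: a * (h - h0) ** b

      # Hypothetical field measurements: (gage height m, discharge m^3/s).
      q_of_h = rating_curve(0.8, 2.1, 1.5, 9.4)
      print(f"Estimated discharge at h = 1.2 m: {q_of_h(1.2):.2f} m^3/s")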

  20. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then the result of the processing is re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated with examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low quality image, is also described; it allows de-noising the image and enhancing its contours.
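
    For contrast with the paper's compressed-domain method, the following sketch shows conventional screening in the uncompressed (spatial) domain: each pixel is thresholded against a tiled halftone mask. The paper's contribution is to apply the equivalent threshold operation to the JPEG DCT coefficients directly; the Bayer mask below is a generic stand-in for the paper's halftone masks.

      # Spatial-domain screening by thresholding against a tiled dither mask.
      import numpy as np

      # 4x4 Bayer dither mask, scaled to thresholds in 0..255.
      BAYER4 = (np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]]) + 0.5) * (255 / 16)

      def screen(gray: np.ndarray) -> np.ndarray:
          h, w = gray.shape
          mask = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
          return (gray > mask).astype(np.uint8)  # 1 = white dot, 0 = ink

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(64, 64))   # stand-in grayscale image
      halftone = screen(img)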

  1. Particle dispersing system and method for testing semiconductor manufacturing equipment

    DOEpatents

    Chandrachood, Madhavi; Ghanayem, Steve G.; Cantwell, Nancy; Rader, Daniel J.; Geller, Anthony S.

    1998-01-01

    The system and method prepare a gas stream comprising particles at a known concentration using a particle disperser for moving particles from a reservoir of particles into a stream of flowing carrier gas. The electrostatic charges on the particles entrained in the carrier gas are then neutralized or otherwise altered, and the resulting particle-laden gas stream is then diluted to provide an acceptable particle concentration. The diluted gas stream is then split into a calibration stream and the desired output stream. The particles in the calibration stream are detected to provide an indication of the actual size distribution and concentration of particles in the output stream that is supplied to a process chamber being analyzed. Particles flowing out of the process chamber within a vacuum pumping system are detected, and the output particle size distribution and concentration are compared with the particle size distribution and concentration of the calibration stream in order to determine the particle transport characteristics of a process chamber, or to determine the number of particles lodged in the process chamber as a function of manufacturing process parameters such as pressure, flowrate, temperature, process chamber geometry, particle size, particle charge, and gas composition.

  2. Viscoelastic behavior of basaltic ash from Stromboli volcano inferred from intermittent compression experiments

    NASA Astrophysics Data System (ADS)

    Kurokawa, A. K.; Miwa, T.; Okumura, S.; Uesugi, K.

    2017-12-01

    After an ash-dominated Strombolian eruption, a considerable amount of ash falls back into the volcanic conduit, forming a dense near-surface region compacted by its own weight and that of other fallback clasts (Patrick et al., 2007). Gas accumulation below this dense cap causes a substantial increase in pressure within the conduit, shifting the volcanic activity toward the preliminary stages of a forthcoming eruption (Del Bello et al., 2015). Under such conditions, the rheology of the fallback ash plays an important role because it controls whether the fallback ash can form the cap. However, little attention has been given to this point. We examined the rheology of ash collected at Stromboli volcano via intermittent compression experiments in which temperature and compression time/rate were varied. The ash deformed at a constant rate during the compression process and was then held under load without further deformation during the rest process. The compression and rest processes were repeated during each experiment to follow rheological variations with the progression of compaction. Viscoelastic changes during the experiment were estimated with a Maxwell model. The results show that both elasticity and viscosity increase with decreasing porosity. The elasticity shows strong rate-dependence in both the compression and rest processes, while the viscosity depends dominantly on temperature, although the compression rate also affects the viscosity during the compression process. Thus, the ash behaves either elastically or viscously depending on the experimental process, temperature, and compression rate/time. These viscoelastic characteristics can be explained by the magnitude relationships between the characteristic relaxation times and the durations of the compression and rest processes. This indicates that the balance of these time scales is key to determining the rheological behavior, and whether the ash behaves elastically or viscously may control cyclic Strombolian eruptions.
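
    The Maxwell-model estimation mentioned above can be sketched with its closed-form response over one compression/rest cycle. The modulus, viscosity, strain rate, and durations below are assumed for illustration and are not the measured values from the experiments.

      # Maxwell element: dsigma/dt = E*deps/dt - (E/eta)*sigma, tau = eta/E.
      import math

      E, eta = 1e9, 5e9            # assumed elastic modulus [Pa], viscosity [Pa s]
      tau = eta / E                # characteristic relaxation time [s]
      rate, t_comp, t_rest = 1e-4, 30.0, 60.0   # assumed strain rate, durations [s]

      # Constant-rate compression: stress grows toward the viscous plateau eta*rate.
      sigma_end = eta * rate * (1.0 - math.exp(-t_comp / tau))
      # Rest at constant strain: stress relaxes exponentially with time constant tau.
      sigma_rest = sigma_end * math.exp(-t_rest / tau)

      print(f"tau = {tau:.0f} s, stress after compression = {sigma_end:.3e} Pa,")
      print(f"stress after rest = {sigma_rest:.3e} Pa")

    Comparing tau with the compression and rest durations reproduces the paper's point: when the process time is short relative to tau the response is elastic, and when it is long the response is viscous.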

  3. Study of a Compression-Molding Process for Ultraviolet Light-Emitting Diode Exposure Systems via Finite-Element Analysis

    PubMed Central

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-01-01

    Although wafer-level camera lenses are a very promising technology, problems such as warpage over time and non-uniform product thickness still exist. In this study, finite element simulation was performed to model the compression molding process, acquiring the pressure distribution on the product at the completion of the process and predicting the deformation with respect to that pressure distribution. Results show that the single-gate compression molding process significantly increases the pressure at the center of the product, whereas the multi-gate compression molding process can effectively distribute the pressure. This study evaluated the non-uniform thickness of the product and changes in the process parameters through computer simulations, which could help to improve the compression molding process. PMID:28617315

  4. Semidiscrete Galerkin modelling of compressible viscous flow past a circular cone at incidence. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Meade, Andrew James, Jr.

    1989-01-01

    A numerical study of the laminar, compressible boundary layer about a circular cone in a supersonic free stream is presented. It is thought that if accurate and efficient numerical schemes can be produced to solve the boundary layer equations, they can be joined to numerical codes that solve the inviscid outer flow. The combination of these numerical codes is competitive with the accurate, but computationally expensive, Navier-Stokes schemes. The primary goal is to develop a finite element method for the calculation of the 3-D compressible laminar boundary layer about a yawed cone. The proposed method can, in principle, be extended to apply to the 3-D boundary layer of pointed bodies of arbitrary cross section. The 3-D boundary layer equations governing supersonic free stream flow about a cone are examined. The 3-D partial differential equations are reduced to 2-D integral equations by applying the Howarth, Mangler, and Crocco transformations, a linear viscosity-temperature relation, and a Blasius-type similarity variable. This is equivalent to a Dorodnitsyn-type formulation. The reduced equations are independent of density and curvature effects, and resemble the weak form of the 2-D incompressible boundary layer equations in Cartesian coordinates. In addition, the coordinate normal to the wall has been stretched, which reduces the gradients across the layer and provides high resolution near the surface. Utilizing the parabolic nature of the boundary layer equations, a finite element method is applied to the Dorodnitsyn formulation. The formulation is presented in a Petrov-Galerkin finite element form and discretized across the layer using linear interpolation functions. The finite element discretization yields a system of ordinary differential equations in the circumferential direction. The circumferential derivatives are solved by an implicit and noniterative finite difference marching scheme. Solutions are presented for a 15 deg half angle cone at angles of attack of 5 and 10 deg. The numerical solutions assume a laminar boundary layer with a free stream Mach number of 7. Results include circumferential distributions of skin friction and surface heat transfer, and cross flow velocity distributions across the layer.

  5. Evaluating the Effects of Culvert Designs on Ecosystem Processes in Northern Wisconsin Streams

    Treesearch

    J. C. Olson; A. M. Marcarelli; A.L. Timm; S.L. Eggert; R.K. Kolka

    2017-01-01

    Culvert replacements are commonly undertaken to restore aquatic organism passage and stream hydrologic and geomorphic conditions, but their effects on ecosystem processes are rarely quantified. The objective of this study was to investigate the effects of two culvert replacement designs on stream ecosystem processes. The stream simulation design, where culverts...

  6. Apparatus and process for the refrigeration, liquefaction and separation of gases with varying levels of purity

    DOEpatents

    Bingham, Dennis N.; Wilding, Bruce M.; McKellar, Michael G.

    2002-01-01

    A process for the separation and liquefaction of component gases from a pressurized mixed gas stream is disclosed. The process involves cooling the pressurized mixed gas stream in a heat exchanger so as to condense one or more of the gas components having the highest condensation point; separating the condensed components from the remaining mixed gas stream in a gas-liquid separator; cooling the separated condensed component stream by passing it through an expander; and passing the cooled component stream back through the heat exchanger such that the cooled component stream functions as the refrigerant for the heat exchanger. The cycle is then repeated for the remaining mixed gas stream so as to draw off the next component gas and further cool the remaining mixed gas stream. The process continues until all of the component gases are separated from the desired gas stream. The final gas stream is then passed through a final heat exchanger and expander. The expander decreases the pressure on the gas stream, thereby cooling the stream and causing a portion of the gas stream to liquefy within a tank. The portion of the gas which is not liquefied is passed back through each of the heat exchangers, where it functions as a refrigerant.
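
    A rough sketch of why the expander step cools the stream, using the ideal-gas isentropic relation; real liquefier design would use real-gas properties, and the inlet state, pressure ratio, and gas properties below are illustrative assumptions, not values from the patent.

      # Isentropic expansion: T2 = T1*(P2/P1)**((gamma-1)/gamma); the shaft
      # work w = cp*(T1 - T2) is the kind of output used to drive compression.
      gamma, cp = 1.31, 2.22e3           # rough values for methane [-, J/(kg K)]
      T1, P1, P2 = 280.0, 6.0e6, 1.5e6   # assumed inlet state and outlet pressure

      T2 = T1 * (P2 / P1) ** ((gamma - 1.0) / gamma)
      w = cp * (T1 - T2)                 # specific work output [J/kg]

      print(f"Outlet temperature: {T2:.1f} K, expander work: {w/1e3:.1f} kJ/kg")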

  7. Apparatus and process for the refrigeration, liquefaction and separation of gases with varying levels of purity

    DOEpatents

    Bingham, Dennis N.; Wilding, Bruce M.; McKellar, Michael G.

    2000-01-01

    A process for the separation and liquefaction of component gases from a pressurized mixed gas stream is disclosed. The process involves cooling the pressurized mixed gas stream in a heat exchanger so as to condense one or more of the gas components having the highest condensation point; separating the condensed components from the remaining mixed gas stream in a gas-liquid separator; cooling the separated condensed component stream by passing it through an expander; and passing the cooled component stream back through the heat exchanger such that the cooled component stream functions as the refrigerant for the heat exchanger. The cycle is then repeated for the remaining mixed gas stream so as to draw off the next component gas and further cool the remaining mixed gas stream. The process continues until all of the component gases are separated from the desired gas stream. The final gas stream is then passed through a final heat exchanger and expander. The expander decreases the pressure on the gas stream, thereby cooling the stream and causing a portion of the gas stream to liquefy within a tank. The portion of the gas which is not liquefied is passed back through each of the heat exchangers, where it functions as a refrigerant.

  8. An Improved Analytical Model of the Local Interstellar Magnetic Field: The Extension to Compressibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleimann, Jens; Fichtner, Horst; Röken, Christian

    A previously published analytical magnetohydrodynamic model for the local interstellar magnetic field in the vicinity of the heliopause (Röken et al. 2015) is extended from incompressible to compressible, yet predominantly subsonic flow, considering both isothermal and adiabatic equations of state. Exact expressions and suitable approximations for the density and the flow velocity are derived and discussed. In addition to the stationary induction equation, these expressions also satisfy the momentum balance equation along streamlines. The practical usefulness of the corresponding, still exact, analytical magnetic field solution is assessed by comparing it quantitatively to results from a fully self-consistent magnetohydrodynamic simulation of the interstellar magnetic field draping around the heliopause.

  9. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of numerically intensive systems over loosely coupled multiprocessor architectures. Such analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the cooperating processors of a parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
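
    A toy sketch of the scheduling step: simulated annealing maps task-graph vertices to processors under a simple cost that combines the maximum processor load with a communication penalty. The random graph and cost weights are assumptions; the paper's actual cost function also models network topology and congestion.

      # Simulated-annealing task mapping onto N_PROCS processors.
      import math, random

      random.seed(1)
      N_TASKS, N_PROCS = 20, 4
      work = [random.uniform(1.0, 5.0) for _ in range(N_TASKS)]         # task costs
      edges = [(i, random.randrange(N_TASKS)) for i in range(N_TASKS)]  # comm pairs

      def cost(assign):
          # Makespan proxy (max processor load) plus cross-processor comm penalty.
          loads = [0.0] * N_PROCS
          for task, proc in enumerate(assign):
              loads[proc] += work[task]
          comm = sum(1.0 for a, b in edges if a != b and assign[a] != assign[b])
          return max(loads) + 0.5 * comm

      assign = [random.randrange(N_PROCS) for _ in range(N_TASKS)]
      cur, temp = cost(assign), 10.0
      while temp > 1e-3:
          task = random.randrange(N_TASKS)
          old = assign[task]
          assign[task] = random.randrange(N_PROCS)
          new = cost(assign)
          # Metropolis rule: keep improvements, sometimes accept uphill moves.
          if new <= cur or random.random() < math.exp((cur - new) / temp):
              cur = new
          else:
              assign[task] = old
          temp *= 0.995                 # geometric cooling schedule

      print(f"final mapping cost: {cur:.2f}")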

  10. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  11. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  12. Development of Novel Carbon Sorbents for CO{sub 2} Capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Gopala; Hornbostel, Marc; Bao, Jianer

    2013-11-30

    An innovative, low-cost, and low-energy-consuming carbon dioxide (CO{sub 2}) capture technology was developed, based on CO{sub 2} adsorption on a high-capacity and durable carbon sorbent. This report describes the (1) performance of the concept on a bench-scale system; (2) results of parametric tests to determine the optimum operating conditions; (3) results of testing with a flue gas from coal-fired boilers; and (4) evaluation of the technical and economic viability of the technology. The process uses a falling bed of carbon sorbent microbeads to separate the flue gas into two streams: a CO{sub 2}-lean flue gas stream from which > 90% of the CO{sub 2} is removed and a pure stream of CO{sub 2} that is ready for compression and sequestration. The carbon sorbent microbeads have several unique properties, such as high CO{sub 2} capacity, low heat of adsorption and desorption (25 to 28 kJ/mole), mechanical robustness, and rapid adsorption and desorption rates. The capture of CO{sub 2} from the flue gas is performed at near ambient temperatures, with the sorbent microbeads flowing down by gravity counter-current to the up-flow of the flue gas. The adsorbed CO{sub 2} is stripped by heating the CO{sub 2}-loaded sorbent to ~100°C, in contact with low-pressure (~5 psig) steam in a section at the bottom of the adsorber. The regenerated sorbent is dehydrated of adsorbed moisture, cooled, and lifted back to the adsorber. The CO{sub 2} from the desorber is essentially pure and can be dehydrated, compressed, and transported to a sequestration site. Bench-scale tests using a simulated flue gas showed that the integrated system can be operated to provide > 90% CO{sub 2} capture from a 15% CO{sub 2} stream in the adsorber and produce > 98% CO{sub 2} at the outlet of the stripper. Long-term tests (1,000 cycles) showed that the system can be operated reliably without sorbent agglomeration or attrition. The bench-scale reactor was also operated using a flue gas stream from a coal-fired boiler at the University of Toledo campus for about 135 h, comprising 7,000 cycles of adsorption and desorption using the desulfurized flue gas, which contained only 4.5% v/v CO{sub 2}. A capture efficiency of 85 to 95% CO{sub 2} was achieved under steady-state conditions. The CO{sub 2} adsorption capacity did not change significantly during the field test, as determined from the CO{sub 2} adsorption isotherms of fresh and used sorbents. The process is also being tested using the flue gas from a PC-fired power plant at the National Carbon Capture Center (NCCC), Wilsonville, AL. The cost of electricity was calculated for CO{sub 2} capture using the carbon sorbent and compared with no-CO{sub 2} capture and CO{sub 2} capture with an amine-based system. The increase in the levelized cost of electricity (L-COE) is about 37% for CO{sub 2} capture using the carbon sorbent, in comparison to 80% for an amine-based system, demonstrating the economic advantage of CO{sub 2} capture using the carbon sorbent. The 37% increase in the L-COE corresponds to a cost of capture of $30/ton of CO{sub 2}, including compression costs, capital cost for the capture system, and increased plant operating and capital costs to make up for reduced plant efficiency. Preliminary sensitivity analyses showed that capital costs, pressure drops in the adsorber, and the steam requirement for the regenerator are the major variables in determining the cost of CO{sub 2} capture. The results indicate that further long-term testing with a flue gas from a pulverized coal-fired boiler should be performed to obtain additional data relating to the effects of flue gas contaminants, the ability to reduce pressure drop by using alternate structural packing, and the use of low-cost construction materials.

  13. REVIEWS OF TOPICAL PROBLEMS: Free convection in geophysical processes

    NASA Astrophysics Data System (ADS)

    Alekseev, V. V.; Gusev, A. M.

    1983-10-01

    A highly significant geophysical process, free convection, is examined. Thermal convection often controls the dynamical behavior in several of the earth's envelopes: the atmosphere, ocean, and mantle. Section 2 sets forth the thermohydrodynamic equations that describe convection in a compressible or incompressible fluid, thermochemical convection, and convection in the presence of thermal diffusion. Section 3 reviews the mechanisms for the origin of the global atmospheric and oceanic circulation. Interlatitudinal convection and jet streams are discussed, as well as monsoon circulation and the mean meridional circulation of ocean waters due to the temperature and salinity gradients. Also described are the hypotheses for convective motion in the mantle and the thermal-wave (moving flame) mechanism for inducing global circulation (the atmospheres of Venus and Mars provide illustrations). Eddy formation by convection in a centrifugal force field is considered. Section 4 deals with medium- and small-scale convective processes, including hurricane systems with phase transitions, cellular cloud structure, and convection penetrating into the ocean, with its stepped vertical temperature and salinity microstructure. Self-oscillatory processes involving convection in fresh-water basins are discussed, including effects due to the anomalous (p,T) relation for water.

  14. Dual-stream modulation failure: a novel hypothesis for the formation and maintenance of delusions in schizophrenia.

    PubMed

    Speechley, William J; Ngan, Elton T C

    2008-01-01

    Delusions, a cardinal feature of schizophrenia, are characterized by the development and preservation of false beliefs despite reason and evidence to the contrary. A number of cognitive models have made important contributions to our understanding of delusions, though it remains unclear which core cognitive processes are malfunctioning to enable individuals with delusions to form and maintain erroneous beliefs. We propose a modified dual-stream processing model that provides a viable and testable mechanism that can account for this debilitating symptom. Dual-stream models divide decision-making into two streams: a fast, intuitive and automatic form of processing (Stream 1); and a slower, conscious and deliberative process (Stream 2). Our novel model proposes two key influences on the way these streams interact in everyday decision-making: conflict and emotion. Conflict: in most decision-making scenarios one obvious answer presents itself and the two streams converge onto the same conclusion. However, in instances where there are competing alternative possibilities, an individual often experiences dissonance, or a sense of conflict. The detection of this conflict biases processing towards the more deliberative Stream 2. Emotion: highly emotional states can result in behavior that is reflexive and action-oriented. This may be due to the power of emotionally valenced stimuli to bias reasoning towards Stream 1. We propose that in schizophrenia, an abnormal response to these two influences results in a pathological schism between Stream 1 and Stream 2, enabling erroneous intuitive explanations to coexist with contrary logical explanations of the same event. Specifically, we suggest that delusions are the result of a failure to reconcile the two streams due to both a failure of conflict to bias decision-making towards Stream 2 and an accentuated emotional bias towards Stream 1.

  15. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on 'a priori' information. It accesses out-the-window 'snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a 'clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.

  16. A comparison of the experimental subsonic pressure distributions about several bodies of revolution with pressure distributions computed by means of the linearized theory

    NASA Technical Reports Server (NTRS)

    Matthews, Clarence W

    1953-01-01

    An analysis is made of the effects of compressibility on the pressure coefficients about several bodies of revolution by comparing experimentally determined pressure coefficients with corresponding pressure coefficients calculated by the use of the linearized equations of compressible flow. The results show that the theoretical methods predict the subsonic pressure-coefficient changes over the central part of the body but do not predict the pressure-coefficient changes near the nose. Extrapolation of the linearized subsonic theory into the mixed subsonic-supersonic flow region fails to predict a rearward movement of the negative pressure-coefficient peak which occurs after the critical stream Mach number has been attained. Two equations developed from a consideration of the subsonic compressible flow about a prolate spheroid are shown to predict, approximately, the change with Mach number of the subsonic pressure coefficients for regular bodies of revolution of fineness ratio 6 or greater.
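
    The paper's two prolate-spheroid equations are not reproduced in the abstract; as a reference point, the classic linearized subsonic correction (the Prandtl-Glauert rule) scales an incompressible pressure coefficient by 1/sqrt(1 - M^2), and a minimal sketch looks like this.

      # Prandtl-Glauert rule: Cp = Cp0 / sqrt(1 - M**2), valid for subsonic,
      # linearized flow; it fails as M approaches 1, consistent with the
      # paper's finding that extrapolation into mixed flow breaks down.
      import math

      def prandtl_glauert(cp0: float, mach: float) -> float:
          if not 0.0 <= mach < 1.0:
              raise ValueError("linearized subsonic rule valid only for M < 1")
          return cp0 / math.sqrt(1.0 - mach * mach)

      for m in (0.0, 0.4, 0.7, 0.85):
          print(f"M = {m:.2f}: Cp = {prandtl_glauert(-0.30, m):+.3f}")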

  17. Investigation of Compressibility Effect for Aeropropulsive Shear Flows

    NASA Technical Reports Server (NTRS)

    Balasubramanyam, M. S.; Chen, C. P.

    2005-01-01

    Rocket Based Combined Cycle (RBCC) engines operate within a wide range of Mach numbers and altitudes. Fundamental fluid dynamic mechanisms involve complex choking, mass entrainment, stream mixing and wall interactions. The Propulsion Research Center at the University of Alabama in Huntsville is involved in an ongoing experimental and numerical modeling study of non-axisymmetric ejector-based combined cycle propulsion systems. This paper addresses the modeling issues related to mixing and shear layer/wall interaction in a supersonic Strutjet/ejector flow field. Reynolds Averaged Navier-Stokes (RANS) solutions incorporating turbulence models are sought and compared to experimental measurements to characterize the detailed flow dynamics. The effect of compressibility on fluid mixing and wall interactions was investigated using an existing CFD methodology. Based on 2-D simulation results, a compressibility correction to conventional incompressible two-equation models is found to be necessary for the supersonic mixing aspect of the ejector flows. 3-D strut-base flows involving flow separation were also investigated.
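
    The abstract does not say which compressibility correction was used; Sarkar's dilatational-dissipation model is one widely used option and is sketched below, with the flow quantities assumed purely for illustration.

      # Sarkar-style correction: augment the solenoidal dissipation eps_s by
      # eps_d = xi * M_t**2 * eps_s, where M_t = sqrt(2k)/a is the turbulent
      # Mach number. xi = 1.0 follows Sarkar's original proposal; other
      # formulations (e.g. Zeman's) differ.
      import math

      def corrected_dissipation(k, eps_s, a, xi=1.0):
          m_t2 = 2.0 * k / a**2          # squared turbulent Mach number
          return eps_s * (1.0 + xi * m_t2)

      # Assumed values: TKE [m^2/s^2], dissipation [m^2/s^3], sound speed [m/s].
      k, eps_s, a = 5000.0, 4.0e5, 340.0
      print(f"M_t = {math.sqrt(2*k)/a:.3f}, "
            f"corrected eps = {corrected_dissipation(k, eps_s, a):.3e}")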

  18. The liquid fuel jet in subsonic crossflow

    NASA Technical Reports Server (NTRS)

    Nguyen, T. T.; Karagozian, A. R.

    1990-01-01

    An analytical/numerical model is described which predicts the behavior of nonreacting and reacting liquid jets injected transversely into a subsonic cross flow. The compressible flowfield about the elliptical jet cross section is solved at various locations along the jet trajectory: by analytical means where the local free-stream Mach number perpendicular to the jet cross section is below 0.3, and by numerical means where it lies in the range 0.3-1.0. External and internal boundary layers along the jet cross section are solved by integral and numerical methods, and the mass losses due to boundary layer shedding, evaporation, and combustion are calculated and incorporated into the trajectory calculation. Predicted trajectories are compared with limited experimental observations.

  19. Analytical and numerical performance models of a Heisenberg Vortex Tube

    NASA Astrophysics Data System (ADS)

    Bunge, C. D.; Cavender, K. A.; Matveev, K. I.; Leachman, J. W.

    2017-12-01

    Analytical and numerical investigations of a Heisenberg Vortex Tube (HVT) are performed to estimate the cooling potential with cryogenic hydrogen. The Ranque-Hilsch Vortex Tube (RHVT) is a device that tangentially injects a compressed fluid stream into a cylindrical geometry to promote enthalpy streaming and temperature separation between inner and outer flows. The HVT is the result of lining the inside of a RHVT with a hydrogen catalyst. This is the first concept to utilize the endothermic heat of para-orthohydrogen conversion to aid primary cooling. A review of first-order vortex tube models available in the literature is presented and adapted to accommodate cryogenic hydrogen properties. These first-order model predictions are compared with 2-D axisymmetric Computational Fluid Dynamics (CFD) simulations.

  20. Using high-frequency nitrogen and carbon measurements to decouple temporal dynamics of catchment and in-stream transport and reaction processes in a headwater stream

    NASA Astrophysics Data System (ADS)

    Blaen, P.; Riml, J.; Khamis, K.; Krause, S.

    2017-12-01

    Within river catchments across the world, headwater streams represent important sites of nutrient transformation and uptake due to their high rates of microbial community processing and relative abundance in the landscape. However, separating the combined influence of in-stream transport and reaction processes from the overall catchment response can be difficult due to spatio-temporal variability in nutrient and organic matter inputs, flow regimes, and reaction rates. Recent developments in optical sensor technologies enable high-frequency, in situ nutrient measurements, and thus provide opportunities for greater insights into in-stream processes. Here, we use in-stream observations of hourly nitrate (NO3-N), dissolved organic carbon (DOC) and dissolved oxygen (DO) measurements from paired in situ sensors that bound a 1 km headwater stream reach in a mixed-use catchment in central England. We employ a spectral approach to decompose (1) variances in solute loading from the surrounding landscape, and (2) variances in reach-scale in-stream nutrient transport and reaction processes. In addition, we estimate continuous rates of reach-scale NO3-N and DOC assimilation/dissimilation, ecosystem respiration and primary production. Comparison of these results over a range of hydrological conditions (baseflow, variable storm events) and timescales (event-based, diel, seasonal) facilitates new insights into the physical and biogeochemical processes that drive in-stream nutrient dynamics in headwater streams.

  1. Separation process using pervaporation and dephlegmation

    DOEpatents

    Vane, Leland M.; Mairal, Anurag P.; Ng, Alvin; Alvarez, Franklin R.; Baker, Richard W.

    2004-06-29

    A process for treating liquids containing organic compounds and water. The process includes a pervaporation step in conjunction with a dephlegmation step to treat at least a portion of the permeate vapor from the pervaporation step. The process yields a membrane residue stream, a stream enriched in the more volatile component (usually the organic) as the overhead stream from the dephlegmator and a condensate stream enriched in the less volatile component (usually the water) as a bottoms stream from the dephlegmator. Any of these may be the principal product of the process. The membrane separation step may also be performed in the vapor phase, or by membrane distillation.

  2. Novel modes and adaptive block scanning order for intra prediction in AV1

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Shleifer, Ariel; Mukherjee, Debargha; Joshi, Urvang; Mazar, Itai; Yuzvinsky, Michael; Tavor, Nitzan; Itzhak, Nati; Birman, Raz

    2017-09-01

    The demand for streaming video content is on the rise and growing exponentially. Network bandwidth is very costly, and therefore there is a constant effort to improve video compression rates and enable the sending of reduced data volumes while retaining quality of experience (QoE). One basic feature that utilizes the spatial correlation of pixels for video compression is Intra-Prediction, which determines the codec's compression efficiency. Intra prediction enables significant reduction of the Intra-Frame (I frame) size and, therefore, contributes to efficient exploitation of bandwidth. In this presentation, we propose new Intra-Prediction algorithms that improve the AV1 prediction model and provide better compression ratios. Two types of methods are considered: (1) a new scanning order method that maximizes spatial correlation in order to reduce prediction error; and (2) new Intra-Prediction modes implemented in AV1. Modern video coding standards, including the AV1 codec, utilize fixed scan orders in processing blocks during intra coding. The fixed scan orders typically result in residual blocks with high prediction error, mainly in blocks with edges. This means that the fixed scan orders cannot fully exploit the content-adaptive spatial correlations between adjacent blocks, so the bitrate after compression tends to be large. To reduce the bitrate induced by inaccurate intra prediction, the proposed approach adaptively chooses the scanning order of blocks, predicting first those blocks with the maximum number of surrounding, already Inter-Predicted blocks; a sketch of the idea follows. Using the modified scanning order method and the new modes has reduced the MSE by up to five times compared to the conventional TM mode with raster scan, and up to two times compared to the conventional CALIC mode with raster scan, depending on the image characteristics (which determine the percentage of blocks predicted with Inter-Prediction, which in turn impacts the efficiency of the new scanning method). For the same cases, the PSNR was shown to improve by up to 7.4 dB and up to 4 dB, respectively. The new modes have yielded a 5% improvement in BD-Rate over traditionally used modes when run on K-frames, which is expected to yield about 1% overall improvement.
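
    A toy sketch of the adaptive scanning idea: rather than raster order, greedily code next the block with the most already-coded neighbors, so intra prediction always has maximal context. The grid size and neighbor rule are assumptions for illustration, not the AV1 implementation.

      # Greedy adaptive block scan: most already-coded neighbors first.
      import itertools

      ROWS, COLS = 4, 6
      coded = set()

      def neighbors(block):
          r, c = block
          for dr, dc in itertools.product((-1, 0, 1), repeat=2):
              if (dr, dc) != (0, 0) and 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
                  yield (r + dr, c + dc)

      order = []
      remaining = [(r, c) for r in range(ROWS) for c in range(COLS)]
      while remaining:
          # Most coded neighbors wins; raster position breaks ties.
          nxt = max(remaining, key=lambda b: (sum(n in coded for n in neighbors(b)),
                                              -b[0], -b[1]))
          remaining.remove(nxt)
          coded.add(nxt)
          order.append(nxt)

      print(order[:8])   # first blocks chosen by the adaptive scan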

  3. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse-induced processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and the parylene top packing blocked out humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical detection and by multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized by digital microscopy and a precision linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make the nano-sensor IR camera practical. To explore more of the infrared light field, we employ compressive sensing algorithms in light field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, is extracted and expressed in a compressive approach. Computational algorithms are then applied to reconstruct images beyond 2-D static information. Super-resolution signal processing is then used to enhance and improve the spatial resolution of the images. The whole camera system provides deeply detailed content for infrared spectrum sensing.

  4. Compressing Aviation Data in XML Format

    NASA Technical Reports Server (NTRS)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of the data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running it. This, in turn, depends on the nature of the data. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data, along with the optimal settings to use with it.
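
    The advisory tool itself is not public; the following minimal sketch of the idea benchmarks standard-library codecs on a sample in place of Xmill/XMLPPM and recommends the best ratio. The sample record is a made-up stand-in for real aviation data.

      # Benchmark candidate compressors on a sample and recommend the best.
      import bz2, lzma, zlib

      sample = (b"<flight><id>42</id><alt unit='ft'>35000</alt>"
                b"<spd unit='kts'>450</spd></flight>" * 500)

      candidates = {
          "zlib-6": lambda d: zlib.compress(d, 6),
          "zlib-9": lambda d: zlib.compress(d, 9),
          "bz2-9":  lambda d: bz2.compress(d, 9),
          "lzma":   lambda d: lzma.compress(d),
      }

      results = {name: len(fn(sample)) for name, fn in candidates.items()}
      for name, size in sorted(results.items(), key=lambda kv: kv[1]):
          print(f"{name:8s} ratio = {len(sample) / size:6.1f}x")
      print("recommended:", min(results, key=results.get))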

  5. A failure of conflict to modulate dual-stream processing may underlie the formation and maintenance of delusions.

    PubMed

    Speechley, W J; Murray, C B; McKay, R M; Munz, M T; Ngan, E T C

    2010-03-01

    Dual-stream information processing proposes that reasoning is composed of two interacting processes: a fast, intuitive system (Stream 1) and a slower, more logical process (Stream 2). In non-patient controls, divergence of these streams may result in the experience of conflict, modulating decision-making towards Stream 2, and initiating a more thorough examination of the available evidence. In delusional schizophrenia patients, a failure of conflict to modulate decision-making towards Stream 2 may reduce the influence of contradictory evidence, resulting in a failure to correct erroneous beliefs. Delusional schizophrenia patients and non-patient controls completed a deductive reasoning task requiring logical validity judgments of two-part conditional statements. Half of the statements were characterized by a conflict between logical validity (Stream 2) and content believability (Stream 1). Patients were significantly worse than controls in determining the logical validity of both conflict and non-conflict conditional statements. This between-groups difference was significantly greater for the conflict condition. The results are consistent with the hypothesis that delusional schizophrenia patients fail to use conflict to modulate towards Stream 2 when the two streams of reasoning arrive at incompatible judgments. This finding provides encouraging preliminary support for the Dual-Stream Modulation Failure model of delusion formation and maintenance.

  6. Recurrent solar wind streams observed by interplanetary scintillation of 3C 48

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Kakinuma, T.

    1972-10-01

    The interplanetary scintillation of 3C 48 was observed by two spaced receivers (69.3 MHz) during February and March 1971. The recurrent property of the observed velocity increases of the solar wind is clearly seen, with a recurrence period of 24 to 25 days. This value is shorter than the synodic period of 27 days, but the deviation may be explained by the displacement of the closest point to the Sun on the line of sight for 3C 48. A comparison with wind velocity data obtained by space probes shows that the observed enhancements are associated with two high-velocity streams corotating around the Sun. The enhancements of the scintillation index precede the velocity enhancements by about two days, and it may be concluded that such enhancement of the scintillation index results from the compressed region of interplanetary plasma formed in front of the high-velocity corotating stream.

  7. Process for recovering organic components from liquid streams

    DOEpatents

    Blume, Ingo; Baker, Richard W.

    1991-01-01

    A separation process for recovering organic components from liquid streams. The process is a combination of pervaporation and decantation. In cases where the liquid stream contains the organic to be separated in dissolved form, the pervaporation step is used to concentrate the organic to a point above the solubility limit, so that a two-phase permeate is formed and then decanted. In cases where the liquid stream is a two-phase mixture, the decantation step is performed first, to remove the organic product phase, and the residue from the decanter is then treated by pervaporation. The condensed permeate from the pervaporation unit is sufficiently concentrated in the organic component to be fed back to the decanter. The process can be tailored to produce only two streams: an essentially pure organic product stream suitable for reuse, and a residue stream for discharge or reuse.

  8. CONCEPTUAL DESIGN AND ECONOMICS OF THE ADVANCED CO2 HYBRID POWER CYCLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Nehrozoglu

    2004-12-01

    Research has been conducted under United States Department of Energy Contract DEFC26-02NT41621 to analyze the feasibility of a new type of coal-fired plant for electric power generation. This new type of plant, called the Advanced CO{sub 2} Hybrid Power Plant, offers the promise of efficiencies nearing 36 percent, while concentrating CO{sub 2} for 100% sequestration. Other pollutants, such as SO{sub 2} and NOx, are sequestered along with the CO{sub 2}, yielding a zero emissions coal plant. The CO{sub 2} Hybrid is a gas turbine-steam turbine combined cycle plant that uses CO{sub 2} as its working fluid to facilitate carbon sequestration. The key components of the plant are a cryogenic air separation unit (ASU), a pressurized circulating fluidized bed gasifier, a CO{sub 2} powered gas turbine, a circulating fluidized bed boiler, and a super-critical pressure steam turbine. The gasifier generates a syngas that fuels the gas turbine and a char residue that, together with coal, fuels a CFB boiler to power the supercritical pressure steam turbine. Both the gasifier and the CFB boiler use a mix of ASU oxygen and recycled boiler flue gas as their oxidant. The resulting CFB boiler flue gas is essentially a mixture of oxygen, carbon dioxide and water. Cooling the CFB flue gas to 80 deg. F condenses most of the moisture and leaves a CO{sub 2} rich stream containing 3%v oxygen. Approximately 30% of this flue gas stream is further cooled, dried, and compressed for pipeline transport to the sequestration site (the small amount of oxygen in this stream is released and recycled to the system when the CO{sub 2} is condensed after final compression and cooling). The remaining 70% of the flue gas stream is mixed with oxygen from the ASU and is ducted to the gas turbine compressor inlet. As a result, the gas turbine compresses a mixture of carbon dioxide (ca. 64%v) and oxygen (ca. 32.5%v) rather than air. This carbon dioxide rich mixture then becomes the gas turbine working fluid and also becomes the oxidant in the gasification and combustion processes. As a result, the plant provides CO{sub 2} for sequestration without the performance and economic penalties associated with water gas shifting and separating CO{sub 2} from gas streams containing nitrogen. The cost estimate of the reference plant (the Foster Wheeler combustion hybrid) was based on a detailed prior study of a nominal 300 MWe demonstration plant with a 6F turbine. Therefore, the reference plant capital costs were found to be 30% higher than an estimate for a 425 MW fully commercial IGCC with an H class turbine (1438 $/kW vs. 1111 $/kW). Consequently, the capital cost of the CO{sub 2} hybrid plant was found to be 25% higher than that of the IGCC with pre-combustion CO{sub 2} removal (1892 $/kW vs. 1510 $/kW), and the levelized cost of electricity (COE) was found to be 20% higher (7.53 c/kWh vs. 6.26 c/kWh). Although the final costs for the CO{sub 2} hybrid are higher, the study confirms that the relative change in cost (or mitigation cost) will be lower. The conceptual design of the plant and its performance and cost, including losses due to CO{sub 2} sequestration, is reported. Comparison with other proposed power plant CO{sub 2} removal techniques reported by a December 2000 EPRI report is shown. This project supports the DOE research objective of development of concepts for the capture and storage of CO{sub 2}.
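
    The mitigation-cost comparison rests on the standard cost-of-CO2-avoided metric, sketched below. The COE figures come from this abstract; the emission rates are assumed for illustration only and are not taken from the report.

      # Cost of CO2 avoided = (COE_capture - COE_ref) / (em_ref - em_capture).
      coe_ref, coe_capture = 62.6, 75.3   # $/MWh (6.26 and 7.53 c/kWh above)
      em_ref, em_capture = 0.80, 0.05     # assumed t CO2 per MWh

      avoided_cost = (coe_capture - coe_ref) / (em_ref - em_capture)
      print(f"cost of CO2 avoided: ${avoided_cost:.0f}/t")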

  9. Naval Postgraduate School Research. Volume 9, Number 1, February 1999

    DTIC Science & Technology

    1999-02-01

    before the digitization, since these add noise and nonlinear distortion to the signal. After digitization by the digital antenna, the data stream can be...like pulse compression. (Generally, few experiments have measured the jitter of the lasers.) From the data, we note that the pulse width require

  10. External Catalyst Breakup Phenomena

    DTIC Science & Technology

    1975-09-01

    thruster exposure. * Erosion by a pulsed liquid stream at high velocity. * Thermal shock from liquid quench cooldown. * Erosion resulting from solid...the liquid velocity. During a cold start, contact with hydrazine leading to liquid wetting can lead to very high internal pressures as a result of the...compression and final dilation, suggest benefits from reducing this variable. * Isolating the catalyst particles from one another so as to avoid high

  11. Data compression/error correction digital test system. Appendix 2: Theory of operation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An overall block diagram of the DC/EC digital test system is shown. The system is divided into two major units: the transmitter and the receiver. In operation, the transmitter and receiver are connected only by a real or simulated transmission link. The system inputs consist of: (1) standard format TV video, (2) two channels of analog voice, and (3) one serial PCM bit stream.

  12. Investigation of flow turning phenomenon - Effect of upstream and downstream propagation

    NASA Astrophysics Data System (ADS)

    Baum, Joseph D.

    1988-01-01

    Upstream acoustic-wave propagation in flow injected laterally through the boundary layer of a tube (simulating the flow in a solid-rocket motor) is investigated analytically. A noniterative linearized-block implicit scheme is used to solve the time-dependent compressible Navier-Stokes equations, and the results are presented in extensive graphs and characterized. Acoustic streaming interaction is shown to be significantly greater for upstream than for downstream propagation.

  13. Compressive Information Extraction: A Dynamical Systems Approach

    DTIC Science & Technology

    2016-01-24

    sparsely encoded in very large data streams. (a) Target tracking in an urban canyon; (b) and (c) sample frames showing contextually abnormal events: onset...extraction to identify contextually abnormal sequences (see section 2.2.3). Formally, the problem of interest can be stated as establishing whether a noisy...relaxations with optimality guarantees can be obtained using tools from semi-algebraic geometry. 2.2 Application: Detecting Contextually Abnormal Events

  14. A Conformal, Fully-Conservative Approach for Predicting Blast Effects on Ground Vehicles

    DTIC Science & Technology

    2014-04-01

    time integration; Approximate Riemann Fluxes (HLLE, HLLC); Robust mixture model for multi-material flows; Multiple Equations of State (Perfect Gas...). Loci/CHEM: Chemically reacting compressible flow solver, currently in production use by NASA for the simulation of rocket motors, plumes, and...vehicles. Loci/DROPLET: Eulerian and Lagrangian multiphase solvers. Loci/STREAM: pressure-based solver, developed by Streamline Numerics and

  15. HIGH TEMPERATURE THERMOCOUPLE

    DOEpatents

    Eshayu, A.M.

    1963-02-12

    This invention contemplates a high temperature thermocouple for use in an inert or a reducing atmosphere. The thermocouple limbs are made of rhenium and graphite and these limbs are connected at their hot ends in compressed removable contact. The rhenium and graphite are of high purity and are substantially stable and free from diffusion into each other even without shielding. Also, the graphite may be thick enough to support the thermocouple in a gas stream. (AEC)

  16. Prospects for Nonlinear Laser Diagnostics in the Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Herring, Gregory C.; Hart, Roger C.; Fletcher, Mark T.; Balla, R. Jeffrey; Henderson, Brenda S.

    2007-01-01

    Two experiments were conducted to test whether optical methods, which rely on laser beam coherence, would be viable for off-body flow measurement in high-density, compressible-flow wind tunnels. These tests measured the effects of large, unsteady density gradients on laser diagnostics like laser-induced thermal acoustics (LITA). The first test was performed in the Low Speed Aeroacoustics Wind Tunnel (LSAWT) of NASA Langley Research Center's Jet Noise Laboratory (JNL). This flow facility consists of a dual-stream jet engine simulator (with electric heat and propane burners) exhausting into a simulated flight stream, reaching Mach numbers up to 0.32. A laser beam transited the LSAWT flow field and was imaged with a high-speed gated camera to measure beam steering and transverse mode distortion. A second, independent test was performed on a smaller laboratory jet (Mach number < 1.2 and mass flow rate < 0.1 kg/sec). In this test, time-averaged LITA velocimetry and thermometry were performed at the jet exit plane, where the effect of unsteady density gradients is observed on the LITA signal. Both experiments show that LITA (and other diagnostics relying on beam overlap or coherence) faces significant hurdles in the high-density, compressible, and turbulent flow environments similar to those of the JNL.

  17. Autosophy: an alternative vision for satellite communication, compression, and archiving

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus; Holtz, Eric; Kalienky, Diana

    2006-08-01

    Satellite communication and archiving systems are now designed according to an outdated Shannon information theory where all data is transmitted in meaningless bit streams. Video bit rates, for example, are determined by screen size, color resolution, and scanning rates. The video "content" is irrelevant, so that totally random images require the same bit rates as blank images. An alternative system design, based on the newer Autosophy information theory, is now evolving, which transmits data "content" or "meaning" in a universally compatible 64-bit format. This would allow mixing all multimedia transmissions in the Internet's packet stream. The new system design uses self-assembling data structures, which grow like data crystals or data trees in electronic memories, for both communication and archiving. The advantages for satellite communication and archiving may include: very high lossless image and video compression, unbreakable encryption, resistance to transmission errors, universally compatible data formats, self-organizing error-proof mass memories, immunity to the Internet's Quality of Service problems, and error-proof secure communication protocols. Legacy data transmission formats can be converted by simple software patches or integrated chipsets to be forwarded through any media - satellites, radio, Internet, cable - without needing to be reformatted. This may result in orders-of-magnitude improvements for all communication and archiving systems.

  18. Methods of producing alkylated hydrocarbons from an in situ heat treatment process liquid

    DOEpatents

    Roes, Augustinus Wilhelmus Maria [Houston, TX; Mo, Weijian [Sugar Land, TX; Muylle, Michel Serge Marie [Houston, TX; Mandema, Remco Hugo [Houston, TX; Nair, Vijay [Katy, TX

    2009-09-01

    A method for producing alkylated hydrocarbons is disclosed. Formation fluid is produced from a subsurface in situ heat treatment process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes olefins. The liquid stream is fractionated to produce at least a second gas stream including hydrocarbons having a carbon number of at least 3. The first gas stream and the second gas stream are introduced into an alkylation unit to produce alkylated hydrocarbons. At least a portion of the olefins in the first gas stream enhances alkylation.

  19. Potential for real-time understanding of coupled hydrologic and biogeochemical processes in stream ecosystems: Future integration of telemetered data with process models for glacial meltwater streams

    NASA Astrophysics Data System (ADS)

    McKnight, Diane M.; Cozzetto, Karen; Cullis, James D. S.; Gooseff, Michael N.; Jaros, Christopher; Koch, Joshua C.; Lyons, W. Berry; Neupauer, Roseanna; Wlostowski, Adam

    2015-08-01

    While continuous monitoring of streamflow and temperature has been common for some time, there is great potential to expand continuous monitoring to include water quality parameters such as nutrients, turbidity, oxygen, and dissolved organic material. In many systems, distinguishing between watershed and stream ecosystem controls can be challenging. The usefulness of such monitoring can be enhanced by the application of quantitative models to interpret observed patterns in real time. Examples are discussed primarily from the glacial meltwater streams of the McMurdo Dry Valleys, Antarctica. Although the Dry Valley landscape is barren of plants, many streams harbor thriving cyanobacterial mats. Whereas a daily cycle of streamflow is controlled by the surface energy balance on the glaciers and the temporal pattern of solar exposure, the daily signal for biogeochemical processes controlling water quality is generated along the stream. These features result in an excellent outdoor laboratory for investigating fundamental ecosystem process and the development and validation of process-based models. As part of the McMurdo Dry Valleys Long-Term Ecological Research project, we have conducted field experiments and developed coupled biogeochemical transport models for the role of hyporheic exchange in controlling weathering reactions, microbial nitrogen cycling, and stream temperature regulation. We have adapted modeling approaches from sediment transport to understand mobilization of stream biomass with increasing flows. These models help to elucidate the role of in-stream processes in systems where watershed processes also contribute to observed patterns, and may serve as a test case for applying real-time stream ecosystem models.

  20. Application of the Hydroecological Integrity Assessment Process for Missouri Streams

    USGS Publications Warehouse

    Kennen, Jonathan G.; Henriksen, James A.; Heasley, John; Cade, Brian S.; Terrell, James W.

    2009-01-01

    Natural flow regime concepts and theories have established the justification for maintaining or restoring the range of natural hydrologic variability so that physiochemical processes, native biodiversity, and the evolutionary potential of aquatic and riparian assemblages can be sustained. A synthesis of recent research advances in hydroecology, coupled with stream classification using hydroecologically relevant indices, has produced the Hydroecological Integrity Assessment Process (HIP). HIP consists of (1) a regional classification of streams into hydrologic stream types based on flow data from long-term gaging-station records for relatively unmodified streams, (2) an identification of stream-type specific indices that address 11 subcomponents of the flow regime, (3) an ability to establish environmental flow standards, (4) an evaluation of hydrologic alteration, and (5) a capacity to conduct alternative analyses. The process starts with the identification of a hydrologic baseline (reference condition) for selected locations, uses flow data from a stream-gage network, and proceeds to classify streams into hydrologic stream types. Concurrently, the analysis identifies a set of non-redundant and ecologically relevant hydrologic indices for 11 subcomponents of flow for each stream type. Furthermore, regional hydrologic models for synthesizing flow conditions across a region and the development of flow-ecology response relations for each stream type can be added to further enhance the process. The application of HIP to Missouri streams identified five stream types ((1) intermittent, (2) perennial runoff-flashy, (3) perennial runoff-moderate baseflow, (4) perennial groundwater-stable, and (5) perennial groundwater-super stable). Two Missouri-specific computer software programs were developed: (1) a Missouri Hydrologic Assessment Tool (MOHAT) which is used to establish a hydrologic baseline, provide options for setting environmental flow standards, and compare past and proposed hydrologic alterations; and (2) a Missouri Stream Classification Tool (MOSCT) designed for placing previously unclassified streams into one of the five pre-defined stream types.

  1. Stream dynamics: An overview for land managers

    Treesearch

    Burchard H. Heede

    1980-01-01

    Concepts of stream dynamics are demonstrated through discussion of processes and process indicators; theory is included only where helpful to explain concepts. Present knowledge allows only qualitative prediction of stream behavior. However, such predictions show how management actions will affect the stream and its environment.

  2. Leaf litter processing in West Virginia mountain streams: effects of temperature and stream chemistry

    Treesearch

    Jacquelyn M. Rowe; William B. Perry; Sue A. Perry

    1996-01-01

    Climate change has the potential to alter detrital processing in headwater streams, which receive the majority of their nutrient input as terrestrial leaf litter. Early placement of experimental leaf packs in streams, one month prior to most abscission, was used as an experimental manipulation to increase stream temperature during leaf pack breakdown. We studied leaf...

  3. Riparian communities associated with pacific northwest headwater streams: assemblages, processes, and uniqueness.

    Treesearch

    John S. Richardson; Robert J. Naiman; Frederick J. Swanson; David E. Hibbs

    2005-01-01

    Riparian areas of large streams provide important habitat to many species and control many instream processes - but is the same true for the margins of small streams? This review considers riparian areas alongside small streams in forested, mountainous areas of the Pacific Northwest and asks if there are fundamental ecological differences from larger streams and from...

  4. Functional Process Zones Characterizing Aquatic Insect Communities in Streams of the Brazilian Cerrado.

    PubMed

    Godoy, B S; Simião-Ferreira, J; Lodi, S; Oliveira, L G

    2016-04-01

    Stream ecology studies seek to understand ecological dynamics in lotic systems. The characterization of streams into Functional Process Zones (FPZ) is currently debated in stream ecology because aquatic communities respond to the functional processes of river segments. We therefore tested whether different functional process zones host different numbers of genera and trophic structures, using the aquatic insect community of Neotropical streams. We also assessed whether physical and chemical variables may complement the FPZ approach in modeling communities of aquatic insects in Cerrado streams. This study was conducted in 101 streams or rivers from the central region of the state of Goiás, Brazil. We grouped the streams into six FPZs associated with the size of the river system, the presence of riparian forest, and riverbed heterogeneity. We used Bayesian models to compare the number of genera and the relative frequency of feeding groups between FPZs. Streams classified in different FPZs had different numbers of genera, and the largest and best-preserved rivers had an average of four additional genera. Trophic structure exhibited low variability among FPZs, with little difference both in the number of genera and in abundance. Using functional process zones in Cerrado streams yielded good results for Ephemeroptera, Plecoptera, and Trichoptera communities. Thus, species distribution and community structure in the river basin reflect functional processes and not necessarily the position of the community along the longitudinal dimension of the lotic system.

  5. The chemistry of iron, aluminum, and dissolved organic material in three acidic, metal-enriched, mountain streams, as controlled by watershed and in-stream processes

    USGS Publications Warehouse

    McKnight, Diane M.; Bencala, Kenneth E.

    1990-01-01

    Several studies were conducted in three acidic, metal-enriched, mountain streams, and the results are discussed together in this paper to provide a synthesis of watershed and in-stream processes controlling Fe, Al, and DOC (dissolved organic carbon) concentrations. One of the streams, the Snake River, is naturally acidic; the other two, Peru Creek and St. Kevin Gulch, receive acid mine drainage. Analysis of stream water chemistry data for the acidic headwaters of the Snake River shows that some trace metal solutes (Al, Mn, Zn) are correlated with major ions, indicating that watershed processes control their concentrations. Once in the stream, biogeochemical processes can control transport if they occur over time scales comparable to those for hydrologic transport. Examples of the following in-stream reactions are presented: (1) photoreduction and dissolution of hydrous iron oxides in response to an experimental decrease in stream pH, (2) precipitation of Al at three stream confluences, and (3) sorption of dissolved organic material by hydrous iron and aluminum oxides in a stream confluence. The extent of these reactions is evaluated using conservative tracers and a transport model that includes storage in the substream zone.

  6. Preliminary Investigation of an Underwater Ramjet Powered by Compressed Air

    NASA Technical Reports Server (NTRS)

    Mottard, Elmo J.; Shoemaker, Charles J.

    1961-01-01

    Part I contains the results of a preliminary experimental investigation of a particular design of an underwater ramjet or hydroduct powered by compressed air. The hydroduct is a propulsion device in which the energy of an expanding gas imparts additional momentum to a stream of water through mixing. The hydroduct model had a fineness ratio of 5.9, a maximum diameter of 3.2 inches, and a ratio of inlet area to frontal area of 0.32. The model was towed at a depth of 1 inch at forward speeds between 20 and 60 feet per second for airflow rates from 0.1 to 0.3 pound per second. Longitudinal force and pressures at the inlet and in the mixing chamber were determined. The hydroduct produced a positive thrust-minus-drag force at every test speed. The force and pressure coefficients were functions primarily of the ratio of weight airflow to free-stream velocity. The maximum propulsive efficiency based on the net internal thrust and an isothermal expansion of the air was approximately 53 percent at a thrust coefficient of 0.10. The performance of the test model may have been influenced by choking of the exit flow. Part II is a theoretical development of an underwater ramjet using air as "fuel." The basic assumption of the theoretical analysis is that a mixture of water and air can be treated as a compressible gas. More information on the properties of air-water mixtures is required to confirm this assumption or to suggest another approach. A method is suggested from which a more complete theoretical development, with the effects of choking included, may be obtained. An exploratory computation, in which this suggested method was used, indicated that the effect of choked flow on the thrust coefficient was minor.

  7. The high-rate data challenge: computing for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Friese, V.; CBM Collaboration

    2017-10-01

    The Compressed Baryonic Matter experiment (CBM) is a next-generation heavy-ion experiment to be operated at the FAIR facility, currently under construction in Darmstadt, Germany. A key feature of CBM is its very high interaction rate, exceeding those of contemporary nuclear collision experiments by several orders of magnitude. Such interaction rates forbid a conventional, hardware-triggered readout; instead, experiment data will be freely streaming from self-triggered front-end electronics. In order to reduce the huge raw data volume to a recordable rate, data will be selected exclusively on CPU, which necessitates partial event reconstruction in real time. Consequently, the traditional segregation of online and offline software vanishes; an integrated on- and offline data processing concept is called for. In this paper, we report on concepts and developments in computing for CBM as well as on the status of preparations for its first physics run.

  8. In situ surface roughness measurement using a laser scattering method

    NASA Astrophysics Data System (ADS)

    Tay, C. J.; Wang, S. H.; Quan, C.; Shang, H. M.

    2003-03-01

    In this paper, the design and development of an optical probe for in situ measurement of surface roughness are discussed. Based on the light-scattering principle, the probe, which consists of a laser diode, a measuring lens, and a linear photodiode array, is designed to capture the scattered light from a test surface over a relatively large scattering angle ϕ (=28°). This capability increases the measuring range and enhances the repeatability of the results. The coaxial arrangement, which incorporates a dual laser beam and a constant compressed-air stream, renders the proposed system insensitive to movement or vibration of the test surface as well as to surface conditions. Tests were conducted on workpieces mounted on a turning machine operated at different cutting speeds. Test specimens that underwent different machining processes and had different surface finishes were also studied. The results obtained demonstrate the feasibility of surface roughness measurement using the proposed method.

  9. Geometric optimization of thermal systems

    NASA Astrophysics Data System (ADS)

    Alebrahim, Asad Mansour

    2000-10-01

    The work in chapter 1 extends to three dimensions and to convective heat transfer the constructal method of minimizing the thermal resistance between a volume and one point. In the first part, the heat flow mechanism is conduction, and the heat generating volume is occupied by low-conductivity material (k0) and high-conductivity inserts (kp) that are shaped as constant-thickness disks mounted on a common stem of kp material. In the second part, the interstitial spaces once occupied by k0 material are bathed by forced convection. The internal and external geometric aspect ratios of the elemental volume and the first assembly are optimized numerically subject to volume constraints. Chapter 2 presents the constrained thermodynamic optimization of a cross-flow heat exchanger with ram air on the cold side, which is used in the environmental control systems of aircraft. Optimized geometric features such as the ratio of channel spacings and flow lengths are reported. It is found that the optimized features are relatively insensitive to changes in other physical parameters of the installation and relatively insensitive to the additional irreversibility due to discharging the ram-air stream into the atmosphere, emphasizing the robustness of the thermodynamic optimum. In chapter 3, the problem of maximizing exergy extraction from a hot stream by distributing streams over a heat transfer surface is studied. In the first part, the cold stream is compressed in an isothermal compressor, expanded in an adiabatic turbine, and discharged into the ambient. In the second part, the cold stream is compressed in an adiabatic compressor. Both designs are optimized with respect to the capacity-rate imbalance of the counter-flow and the pressure ratio maintained by the compressor. This study shows the tradeoff between simplicity and increased performance, and outlines the path for further conceptual work on the extraction of exergy from a hot stream that is being cooled gradually. The aim of chapter 4 was to optimize the performance of a boot-strap air cycle of an environmental control system (ECS) for aircraft. New in the present study was that the optimization refers to the performance of the entire ECS system, not to the performance of an individual component. Also, there were two heat exchangers, not one, and their relative positions and sizes were not specified in advance. This study showed that geometric optima can be identified when the optimization procedure refers to the performance of the entire ECS system rather than of an individual component. The optimized features were robust relative to some physical parameters; this robustness may be used to simplify future optimization of similar systems.

  10. Local growth of dust- and ice-mixed aggregates as cometary building blocks in the solar nebula

    NASA Astrophysics Data System (ADS)

    Lorek, S.; Lacerda, P.; Blum, J.

    2018-03-01

    Context. Comet formation by gravitational instability requires aggregates that trigger the streaming instability and cluster in pebble-clouds. These aggregates form as mixtures of dust and ice from (sub-)micrometre-sized dust and ice grains via coagulation in the solar nebula. Aims. We investigate the growth of aggregates from (sub-)micrometre-sized dust and ice monomer grains. We are interested in the properties of these aggregates: whether they might trigger the streaming instability, how they compare to pebbles found on comets, and what the implications are for comet formation in collapsing pebble-clouds. Methods: We used Monte Carlo simulations to study the growth of aggregates through coagulation locally in the comet-forming region at 30 au. We used a collision model that can accommodate sticking, bouncing, fragmentation, and porosity of dust- and ice-mixed aggregates. We compared our results to measurements of pebbles on comet 67P/Churyumov-Gerasimenko. Results: We find that aggregate growth becomes limited by radial drift towards the Sun for 1 μm sized monomers and by bouncing collisions for 0.1 μm sized monomers before the aggregates reach a Stokes number that would trigger the streaming instability (Stmin). We argue that in a bouncing-dominated system, aggregates can reach Stmin through compression in bouncing collisions if compression is faster than radial drift. In the comet-forming region (30 au), aggregates with Stmin have volume-filling factors of 10^-2 and radii of a few millimetres. These sizes are comparable to the sizes of pebbles found on comet 67P/Churyumov-Gerasimenko. The porosity of the aggregates formed in the solar nebula would imply that comets formed in pebble-clouds with masses equivalent to planetesimals of the order of 100 km in diameter.

  11. Revealing the dual streams of speech processing.

    PubMed

    Fridriksson, Julius; Yourganov, Grigori; Bonilha, Leonardo; Basilakos, Alexandra; Den Ouden, Dirk-Bart; Rorden, Christopher

    2016-12-27

    Several dual route models of human speech processing have been proposed suggesting a large-scale anatomical division between cortical regions that support motor-phonological aspects vs. lexical-semantic aspects of speech processing. However, to date, there is no complete agreement on what areas subserve each route or the nature of interactions across these routes that enables human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we used a data-driven approach using principal components analysis of lesion-symptom mapping to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal-frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization of the division in the data reflects both the overlap between tasks supported by the two streams as well as the observation that there is a bias for phonological production tasks supported by the dorsal stream and lexical-semantic comprehension tasks supported by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish between the two routes.

  12. Hamming and Accumulator Codes Concatenated with MPSK or QAM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel

    2009-01-01

    In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes - a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
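
    To make steps 1-4 concrete, here is a minimal, hedged sketch in Python of one possible encoder pipeline: the classic (7,4) Hamming code stands in for the unspecified constituent codes, a single accumulator is run over the interleaved stream for brevity (the scheme above runs a distinct accumulator per stream), and the final MPSK/QAM symbol mapping is omitted. All names are illustrative, not from the reported design.

        def hamming74_encode(d):
            """(7,4) Hamming codeword for 4 data bits (illustrative constituent code)."""
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p3 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]

        def accumulate(bits):
            """Rate-1, two-state accumulator: y[n] = x[n] XOR y[n-1]."""
            y, out = 0, []
            for x in bits:
                y ^= x
                out.append(y)
            return out

        def encode(bits, n_streams=4):
            bits = bits + [0] * (-len(bits) % (4 * n_streams))  # pad for even demux
            streams = [bits[i::n_streams] for i in range(n_streams)]      # step 1
            coded = [[c for j in range(0, len(s), 4)                      # step 2
                        for c in hamming74_encode(s[j:j + 4])] for s in streams]
            interleaved = [b for col in zip(*coded) for b in col]         # step 3
            return accumulate(interleaved)                                # step 4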

  13. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASICs (application-specific integrated circuits) and can be integrated as intellectual property (IP), e.g., as part of a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx Virtex IV LX25 device and ported to a Xilinx prototype board. The current implementation has a critical path of 29.5 ns, which dictated a clock speed of 33 MHz. The critical path delay is an end-to-end measurement between the uncompressed input data and the output compressed data stream. The implementation compresses one sample every clock cycle, which results in a speed of 33 Msample/s. The implementation has rather low device utilization on the Xilinx Virtex IV LX25, making the total power consumption of the implementation about 1.27 W.
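
    The heart of the FL approach described above - adaptive linear prediction with sign-algorithm weight updates - can be sketched in a few lines of Python. This is an illustrative model of the technique, not the flight or FPGA implementation; the residuals it returns are what an entropy coder would then compress.

        import numpy as np

        def sign_lms_residuals(samples, order=3, mu=0.01):
            """Adaptive linear prediction; weights adapt by the sign of the error."""
            w = np.zeros(order)              # predictor weights
            hist = np.zeros(order)           # the last `order` samples
            res = np.empty(len(samples))
            for n, s in enumerate(samples):
                e = s - w @ hist             # prediction error = residual to encode
                res[n] = e
                w += mu * np.sign(e) * hist  # sign algorithm update
                hist = np.roll(hist, 1)
                hist[0] = s
            return res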

  14. Processing Maple Syrup with a Vapor Compression Distiller: An Economic Analysis

    Treesearch

    Lawrence D. Garrett

    1977-01-01

    A test of vapor compression distillers for processing maple syrup revealed that: (1) vapor compression equipment tested evaporated 1 pound of water with 0.047 pounds of steam equivalent (electrical energy); open-pan evaporators of similar capacity required 1.5 pounds of steam equivalent (oil energy) to evaporate 1 pound of water; (2) vapor compression evaporation produced...

  15. Structures linking physical and biological processes in headwater streams of the Maybeso watershed, Southeast Alaska

    Treesearch

    Mason D. Bryant; Takashi Gomi; Jack J. Piccolo

    2007-01-01

    We focus on headwater streams originating in the mountainous terrain of northern temperate rain forests. These streams rapidly descend from gradients greater than 20% to less than 5% in U-shaped glacial valleys. We use a set of studies on headwater streams in southeast Alaska to define headwater stream catchments, link physical and biological processes, and describe...

  16. SAR correlation technique - An algorithm for processing data with large range walk

    NASA Technical Reports Server (NTRS)

    Jin, M.; Wu, C.

    1983-01-01

    This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with an extremely large range-migration effect that cannot be accommodated by the existing frequency-domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm performs azimuth correlation in two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency-domain range-migration correction approach for the azimuth compression. The secondary range compression can be processed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.

  17. Performance of rice husk ash produced using a new technology as a mineral admixture in concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nehdi, M.; Duquette, J.; El Damatty, A

    2003-08-01

    This article investigates the use of a new technique for the controlled combustion of Egyptian rice husk to mitigate the environmental concerns associated with its uncontrolled burning and provide a supplementary cementing material for the local construction industry. The reactor used provides efficient combustion of rice husk in a short residency time via the suspension of processed particles by jets of a process air stream that is forced through stationary angled blades at high velocity. Investigations on the rice husk ash (RHA) thus produced included oxide analysis, X-ray diffraction, carbon content, grindability, water demand, pozzolanic activity index, surface area, and particle size distribution measurements. In addition, concrete mixtures incorporating various proportions of silica fume (SF) and Egyptian RHA (EG-RHA) produced at different combustion temperatures were made and compared. The workability, superplasticizer and air-entraining admixture requirements, and compressive strength at various ages of these concrete mixtures were evaluated, and their resistance to rapid chloride penetrability and deicing salt surface scaling were examined. Test results indicate that, contrary to RHA produced using existing technology, the superplasticizer and air-entraining agent requirements did not increase drastically when the RHA developed in this study was used. Compressive strengths achieved by concrete mixtures incorporating the new RHA exceeded those of concretes containing similar proportions of SF. The resistance to surface scaling of RHA concrete was better than that of concrete containing similar proportions of SF. While the chloride penetrability was substantially decreased by RHA, it remained slightly higher than that achieved by SF concrete.

  18. Treatment of gas from an in situ conversion process

    DOEpatents

    Diaz, Zaida [Katy, TX; Del Paggio, Alan Anthony [Spring, TX; Nair, Vijay [Katy, TX; Roes, Augustinus Wilhelmus Maria [Houston, TX

    2011-12-06

    A method of producing methane is described. The method includes providing formation fluid from a subsurface in situ conversion process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes olefins. At least the olefins in the first gas stream are contacted with a hydrogen source in the presence of one or more catalysts and steam to produce a second gas stream. The second gas stream is contacted with a hydrogen source in the presence of one or more additional catalysts to produce a third gas stream. The third gas stream includes methane.

  19. Aqueous stream characterization from biomass fast pyrolysis and catalytic fast pyrolysis

    DOE PAGES

    Black, Brenna A.; Michener, William E.; Ramirez, Kelsey J.; ...

    2016-09-05

    Here, biomass pyrolysis offers a promising means to rapidly depolymerize lignocellulosic biomass for subsequent catalytic upgrading to renewable fuels. Substantial efforts are currently ongoing to optimize pyrolysis processes, including various fast pyrolysis and catalytic fast pyrolysis schemes. In all cases, complex aqueous streams are generated containing solubilized organic compounds that are not converted to target fuels or chemicals and are often slated for wastewater treatment, in turn creating an economic burden on the biorefinery. Valorization of the species in these aqueous streams, however, offers significant potential for substantially improving the economics and sustainability of thermochemical biorefineries. To that end, here we provide a thorough characterization of the aqueous streams from four pilot-scale pyrolysis processes: namely, from fast pyrolysis, fast pyrolysis with downstream fractionation, in situ catalytic fast pyrolysis, and ex situ catalytic fast pyrolysis. These configurations and processes represent characteristic pyrolysis processes undergoing intense development currently. Using a comprehensive suite of aqueous-compatible analytical techniques, we quantitatively characterize between 12 g kg^-1 of organic carbon in a highly aqueous catalytic fast pyrolysis stream and up to 315 g kg^-1 of organic carbon present in the fast pyrolysis aqueous streams. In all cases, mass closure ranges between 75 and 100%. The composition and stream properties closely match the nature of the pyrolysis processes, with high contents of carbohydrate-derived compounds in the fast pyrolysis aqueous phase, high acid content in nearly all streams, and mostly recalcitrant phenolics in the heavily deoxygenated ex situ catalytic fast pyrolysis stream. Overall, this work provides a detailed compositional analysis of aqueous streams from leading thermochemical processes -- analyses that are critical for subsequent development of selective valorization strategies for these waste streams.

  20. Aqueous stream characterization from biomass fast pyrolysis and catalytic fast pyrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Brenna A.; Michener, William E.; Ramirez, Kelsey J.

    Here, biomass pyrolysis offers a promising means to rapidly depolymerize lignocellulosic biomass for subsequent catalytic upgrading to renewable fuels. Substantial efforts are currently ongoing to optimize pyrolysis processes, including various fast pyrolysis and catalytic fast pyrolysis schemes. In all cases, complex aqueous streams are generated containing solubilized organic compounds that are not converted to target fuels or chemicals and are often slated for wastewater treatment, in turn creating an economic burden on the biorefinery. Valorization of the species in these aqueous streams, however, offers significant potential for substantially improving the economics and sustainability of thermochemical biorefineries. To that end, here we provide a thorough characterization of the aqueous streams from four pilot-scale pyrolysis processes: namely, from fast pyrolysis, fast pyrolysis with downstream fractionation, in situ catalytic fast pyrolysis, and ex situ catalytic fast pyrolysis. These configurations and processes represent characteristic pyrolysis processes undergoing intense development currently. Using a comprehensive suite of aqueous-compatible analytical techniques, we quantitatively characterize between 12 g kg^-1 of organic carbon in a highly aqueous catalytic fast pyrolysis stream and up to 315 g kg^-1 of organic carbon present in the fast pyrolysis aqueous streams. In all cases, mass closure ranges between 75 and 100%. The composition and stream properties closely match the nature of the pyrolysis processes, with high contents of carbohydrate-derived compounds in the fast pyrolysis aqueous phase, high acid content in nearly all streams, and mostly recalcitrant phenolics in the heavily deoxygenated ex situ catalytic fast pyrolysis stream. Overall, this work provides a detailed compositional analysis of aqueous streams from leading thermochemical processes -- analyses that are critical for subsequent development of selective valorization strategies for these waste streams.

  1. Galileo mission planning for Low Gain Antenna based operations

    NASA Technical Reports Server (NTRS)

    Gershman, R.; Buxbaum, K. L.; Ludwinski, J. M.; Paczkowski, B. G.

    1994-01-01

    The Galileo mission operations concept is undergoing substantial redesign, necessitated by the deployment failure of the High Gain Antenna while the spacecraft is on its way to Jupiter. The new design applies state-of-the-art technology and processes to increase the telemetry rate available through the Low Gain Antenna and to increase the information density of the telemetry. This paper describes the mission planning process being developed as part of this redesign. Principal topics include a brief description of the new mission concept and anticipated science return (these have been covered more extensively in earlier papers), identification of key drivers on the mission planning process, a description of the process and its implementation schedule, a discussion of the application of automated mission planning tools to the process, and a status report on mission planning work to date. Galileo enhancements include extensive reprogramming of on-board computers and substantial hardware and software upgrades for the Deep Space Network (DSN). The principal mode of operation will be onboard recording of science data followed by extended playback periods. A variety of techniques will be used to compress and edit the data both before recording and during playback. A highly compressed real-time science data stream will also be important. The telemetry rate will be increased using advanced coding techniques and advanced receivers. Galileo mission planning for orbital operations now involves partitioning of several scarce resources. Particularly difficult are the division of the telemetry among the many users (eleven instruments, radio science, engineering monitoring, and navigation) and the allocation of space on the tape recorder at each of the ten satellite encounters. The planning process is complicated by uncertainty in the forecast performance of the DSN modifications and the non-deterministic nature of the new data compression schemes. Key mission planning steps include quantifying the resources or capabilities to be allocated, prioritizing science observations and estimating resource needs for each, working inter- and intra-orbit trades of these resources among the Project elements, and planning real-time science activity. The first major mission planning activity, a high-level, orbit-by-orbit allocation of resources among science objectives, has already been completed, and results are illustrated in the paper. To make efficient use of limited resources, Galileo mission planning will rely on automated mission planning tools capable of dealing with interactions among time-varying downlink capability, real-time science and engineering data transmission, and playback of recorded data. A new generic mission planning tool is being adapted for this purpose.

  2. Galileo mission planning for Low Gain Antenna based operations

    NASA Astrophysics Data System (ADS)

    Gershman, R.; Buxbaum, K. L.; Ludwinski, J. M.; Paczkowski, B. G.

    1994-11-01

    The Galileo mission operations concept is undergoing substantial redesign, necessitated by the deployment failure of the High Gain Antenna while the spacecraft is on its way to Jupiter. The new design applies state-of-the-art technology and processes to increase the telemetry rate available through the Low Gain Antenna and to increase the information density of the telemetry. This paper describes the mission planning process being developed as part of this redesign. Principal topics include a brief description of the new mission concept and anticipated science return (these have been covered more extensively in earlier papers), identification of key drivers on the mission planning process, a description of the process and its implementation schedule, a discussion of the application of automated mission planning tools to the process, and a status report on mission planning work to date. Galileo enhancements include extensive reprogramming of on-board computers and substantial hardware and software upgrades for the Deep Space Network (DSN). The principal mode of operation will be onboard recording of science data followed by extended playback periods. A variety of techniques will be used to compress and edit the data both before recording and during playback. A highly compressed real-time science data stream will also be important. The telemetry rate will be increased using advanced coding techniques and advanced receivers. Galileo mission planning for orbital operations now involves partitioning of several scarce resources. Particularly difficult are the division of the telemetry among the many users (eleven instruments, radio science, engineering monitoring, and navigation) and the allocation of space on the tape recorder at each of the ten satellite encounters. The planning process is complicated by uncertainty in the forecast performance of the DSN modifications and the non-deterministic nature of the new data compression schemes. Key mission planning steps include quantifying the resources or capabilities to be allocated, prioritizing science observations and estimating resource needs for each, working inter- and intra-orbit trades of these resources among the Project elements, and planning real-time science activity. The first major mission planning activity, a high-level, orbit-by-orbit allocation of resources among science objectives, has already been completed, and results are illustrated in the paper. To make efficient use of limited resources, Galileo mission planning will rely on automated mission planning tools capable of dealing with interactions among time-varying downlink capability, real-time science and engineering data transmission, and playback of recorded data. A new generic mission planning tool is being adapted for this purpose.

  3. Data Package for Secondary Waste Form Down-Selection—Cast Stone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serne, R. Jeffrey; Westsik, Joseph H.

    2011-09-05

    Available literature on Cast Stone and Saltstone was reviewed with an emphasis on determining how Cast Stone and related grout waste forms performed against the various criteria that will be used to decide whether a specific type of waste form meets acceptance criteria for disposal in the Integrated Disposal Facility (IDF) at Hanford. After the critical review of the Cast Stone/Saltstone literature, we conclude that Cast Stone is a good candidate waste form for further consideration. Cast Stone meets the target IDF acceptance criteria: it has adequate compressive strength, no free liquids, TCLP leachate concentrations below the UTS permissible concentrations, and suitably low leach rates for Na and Tc-99. The cost of the starting ingredients and equipment necessary to generate Cast Stone waste forms from secondary waste streams is low, and the Cast Stone dry blend formulation can be tailored to accommodate variations in liquid waste stream compositions. The database for Cast Stone short-term performance is quite extensive compared to the other three candidate waste solidification processes, and the solidification of liquid wastes in Cast Stone is a mature process in comparison to the other three candidates. Successful production of Cast Stone or Saltstone has been demonstrated from lab-scale monoliths with volumes of cubic centimetres, through cubic-metre-sized blocks and 210-liter drums, all the way to large pours into vaults at Savannah River. To date, over 9 million gallons of low-activity liquid waste have been solidified and disposed in concrete vaults at Savannah River.

  4. Compressive behavior of laminated neoprene bridge bearing pads under thermal aging condition

    NASA Astrophysics Data System (ADS)

    Jun, Xie; Zhang, Yannian; Shan, Chunhong

    2017-10-01

    The present study was conducted to obtain a better understanding of the variation in mechanical properties of laminated neoprene bridge bearing pads under thermal aging, using compression tests. A total of 5 specimens were conditioned in a high-temperature chamber and then tested under axial load. The main parameter considered was the duration of the thermal aging treatment. The results of the compression tests show that the thermally aged specimens are more prone to brittle failure than the standard specimen. Moreover, exposure of the steel plates, cracking, and other failure phenomena are more severe than in the standard specimen. The compressive capacity, ultimate compressive strength, and compressive elastic modulus of the laminated neoprene bridge bearing pads decreased dramatically with increasing thermal aging time. The attenuation trends of the ultimate compressive strength and compressive elastic modulus of laminated neoprene bridge bearing pads under thermal aging follow a power function. The attenuation models were obtained by regressing the experimental data with the least-squares method. The models agree well with the observations, which indicates that this approach is applicable and promising for assessing the performance of laminated neoprene bridge bearing pads under thermal aging conditions.
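
    Fitting the power-function attenuation model mentioned above is a one-line least-squares problem in log-log space. A minimal Python sketch, with made-up aging times and strengths (the actual regression variables and data belong to the paper and are not reproduced here):

        import numpy as np

        def fit_power_law(t, y):
            """Least-squares fit of y = a * t**b via linear regression on logs."""
            b, log_a = np.polyfit(np.log(t), np.log(y), 1)
            return np.exp(log_a), b

        t = np.array([24.0, 48.0, 96.0, 192.0])  # aging time, h (illustrative)
        y = np.array([19.0, 17.2, 15.6, 14.1])   # ultimate strength, MPa (illustrative)
        a, b = fit_power_law(t, y)
        print(f"strength ~ {a:.1f} * t**({b:.3f})")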

  5. Quantitative measurement of stream respiration using the resazurin-resorufin system

    NASA Astrophysics Data System (ADS)

    Gonzalez Pinzon, R. A.; Acker, S.; Haggerty, R.; Myrold, D.

    2011-12-01

    After three decades of active research in hydrology and stream ecology, the relationship between stream solute transport, metabolism and nutrient dynamics is still unresolved. These knowledge gaps obscure the function of stream ecosystems and how they interact with other landscape processes. To date, measuring rates of stream metabolism is accomplished with techniques that have vast uncertainties and are not spatially representative. These limitations mask the role of metabolism in nutrient processing. Clearly, more robust techniques are needed to develop mechanistic relationships that will ultimately improve our fundamental understanding of in-stream processes and how streams interact with other ecosystems. We investigated the "metabolic window of detection" of the Resazurin (Raz)-Resorufin (Rru) system (Haggerty et al., 2008, 2009). Although previous results have shown that the transformation of Raz to Rru is strongly correlated with respiration, a quantitative relationship between them is needed. We investigated this relationship using batch experiments with pure cultures (aerobic and anaerobic) and flow-through columns with incubated sediments from four different streams. The results suggest that the Raz-Rru system is a suitable approach that will enable hydrologists and stream ecologists to measure in situ and in vivo respiration at different scales, thus opening a reliable alternative to investigate how solute transport and stream metabolism control nutrient processing.

  6. In Situ Measurement of Ground-Surface Flow Resistivity

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.

    1984-01-01

    New instrument allows in situ measurement of flow resistivity on Earth's ground surface. Nonintrusive instrument includes specimen holder inserted into ground. Flow resistivity measured by monitoring compressed air passing through flow-meters; pressure gages record pressure at ground surface. Specimen holder with knife-edged inner and outer cylinders easily driven into ground. Air-stream used in measuring flow resistivity of ground enters through quick-connect fitting and exits through screen and venthole.

  7. AFRESh: an adaptive framework for compression of reads and assembled sequences with random access functionality.

    PubMed

    Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter

    2017-05-15

    The past decade has seen the introduction of new technologies that have increasingly lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16% compared to 7-Zip at the Ultra setting. A Windows executable version can be downloaded at https://github.com/tparidae/AFresh .
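
    The "context-adaptive" half of a CABAC-style coder can be illustrated without the arithmetic-coding machinery. A hedged Python sketch, assuming a simple counts-based probability estimator (real CABAC uses a finite-state probability table, and none of this is the AFRESh code):

        class ContextModel:
            """Adaptive estimate of P(bit = 1) for one coding context."""
            def __init__(self):
                self.counts = [1, 1]          # Laplace-smoothed counts of 0s and 1s
            def p_one(self):
                return self.counts[1] / sum(self.counts)
            def update(self, bit):
                self.counts[bit] += 1

        # One model per context; here the previously coded bit selects the
        # model whose probability the binary arithmetic coder would use next.
        models = {0: ContextModel(), 1: ContextModel()}
        prev = 0
        for bit in [0, 1, 1, 1, 0, 1, 1, 1]:
            p = models[prev].p_one()   # probability handed to the arithmetic coder
            models[prev].update(bit)   # adapt after coding the bit
            prev = bit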

  8. Modeling nutrient retention at the watershed scale: Does small stream research apply to the whole river network?

    NASA Astrophysics Data System (ADS)

    Aguilera, Rosana; Marcé, Rafael; Sabater, Sergi

    2013-06-01

    Nutrients are conveyed from terrestrial and upstream sources through drainage networks. Streams and rivers contribute to regulating the material exported downstream by means of transformation, storage, and removal of nutrients. It has recently been suggested that the efficiency of process rates relative to available nutrient concentration in streams eventually declines, following efficiency loss (EL) dynamics. However, most of these predictions are based at the reach scale in pristine streams, failing to describe the role of entire river networks. Models provide the means to study nutrient cycling from the stream network perspective via upscaling to the watershed of the key mechanisms occurring at the reach scale. We applied a hybrid process-based and statistical model (SPARROW, Spatially Referenced Regression on Watershed Attributes) as a heuristic approach to describe in-stream nutrient processes in a highly impaired, high-stream-order watershed (the Llobregat River Basin, NE Spain). The in-stream decay specifications of the model were modified to include a partial saturation effect in uptake efficiency (expressed as a power law) and better capture biological nutrient retention in river systems under high anthropogenic stress. The stream decay coefficients were statistically significant in both the nitrate and phosphate models, indicating the potential role of in-stream processing in limiting nutrient export. However, the EL concept did not reliably describe the patterns of nutrient uptake efficiency over the concentration gradient and streamflow values found in the Llobregat River basin, casting doubt on its complete applicability to explain nutrient retention processes in stream networks comprising highly impaired rivers.
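
    The efficiency-loss modification can be written compactly; the symbols below are chosen for illustration and are not necessarily SPARROW's. With first-order in-stream decay, the fraction of nutrient load delivered through a reach of travel time τ and depth d is

        F = \exp\!\left(-\frac{v_f\,\tau}{d}\right),

    and the efficiency-loss idea lets the uptake velocity fall off as a power law of concentration,

        v_f = a\,C^{\,b-1}, \qquad 0 < b < 1,

    so that uptake efficiency declines as the available concentration C rises; setting b = 1 recovers the constant-efficiency, first-order case.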

  9. Visualization of Concrete Slump Flow Using the Kinect Sensor

    PubMed Central

    Park, Minbeom

    2018-01-01

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential for concrete quality management at construction sites. The conventional workability test methods are based on lengths and times measured with a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream of images is processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test is expected to stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow. PMID:29510510

  10. Visualization of Concrete Slump Flow Using the Kinect Sensor.

    PubMed

    Kim, Jung-Hoon; Park, Minbeom

    2018-03-03

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential to concrete quality management at construction sites. Conventional workability test methods are based on length and time measurements made with a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete with a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test should stimulate research on concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow.
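
    Neither record spells out the 4D slump processing algorithm, but the stated output (a dynamically spreading cross-section along the time axis) suggests reducing each depth frame to a radial height profile and stacking the profiles over time. A minimal sketch under that assumption, with synthetic frames standing in for Kinect data:

      import numpy as np

      def slump_image(depth_frames, cx, cy, n_bins=100):
          """Collapse a stream of depth frames into a single (time x radius)
          image: each row holds the mean concrete height at each radial
          distance from the slump-cone centre for one instant in time."""
          h, w = depth_frames[0].shape
          yy, xx = np.mgrid[0:h, 0:w]
          dist = np.hypot(xx - cx, yy - cy).ravel()
          bins = np.linspace(0.0, dist.max(), n_bins + 1)
          idx = np.clip(np.digitize(dist, bins) - 1, 0, n_bins - 1)
          rows = []
          for frame in depth_frames:
              sums = np.bincount(idx, weights=frame.ravel(), minlength=n_bins)
              cnts = np.bincount(idx, minlength=n_bins)
              rows.append(sums / np.maximum(cnts, 1))
          return np.asarray(rows)          # shape: (n_frames, n_bins)

      # Synthetic stand-in for Kinect depth data: a cone spreading over time.
      yy, xx = np.mgrid[0:64, 0:64]
      r = np.hypot(xx - 32, yy - 32)
      frames = [np.maximum(0.30 - (0.020 - 0.0004 * t) * r, 0.0) for t in range(30)]
      img = slump_image(frames, cx=32, cy=32)
      print(img.shape)                     # (30, 100): height over radius and time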

  11. General Mode Scanning Probe Microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somnath, Suhas; Jesse, Stephen

    A critical part of SPM measurements is the information transfer from the probe-sample junction to the measurement system. Current information transfer methods heavily compress the information-rich data stream by averaging the data over a time interval, or via heterodyne detection approaches such as lock-in amplifiers and phase-locked loops. As a consequence, highly valuable information at sub-microsecond time scales, or from frequencies outside the measurement band, is lost. We have developed a fundamentally new approach called General Mode (G-mode), in which we capture the complete information stream from the detectors in the microscope. The availability of the complete information allows the microscope operator to analyze the data via information-theory analysis or comprehensive physical models. Furthermore, the complete data stream enables advanced data-driven filtering algorithms, multi-resolution imaging, ultrafast spectroscopic imaging, spatial mapping of multidimensional variability in material properties, etc. Though we applied this approach to scanning probe microscopy, the general philosophy of G-mode can be applied to many other modes of microscopy. G-mode data are captured by fully custom software written in LabVIEW and Matlab. The software generates the waveforms to electrically, thermally, or mechanically excite the SPM probe. It handles real-time communications with the microscope software for operations such as moving the SPM probe position and also controls other instrumentation hardware. The software also controls multiple variants of high-speed data acquisition cards to excite the SPM probe with the excitation waveform and simultaneously measure multiple channels of information from the microscope detectors at sampling rates of 1-100 MHz. The software also saves the raw data to the computer and allows the microscope operator to visualize processed or filtered data during the experiment, all while offering a user-friendly interface.
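
    The information loss from averaging-based detection can be illustrated with a toy simulation (not the G-mode acquisition code): a lock-in-style estimate that demodulates and averages over a fixed window recovers the steady oscillation amplitude but smears out a transient shorter than the window, whereas the full-rate stream retains it. The sampling rate, drive frequency and transient below are invented for the demo.

      import numpy as np

      fs, f0 = 4e6, 50e3                  # 4 MHz sampling, 50 kHz drive (illustrative)
      t = np.arange(0, 2e-3, 1 / fs)      # 2 ms record
      amp = np.where((t > 1.0e-3) & (t < 1.01e-3), 2.0, 1.0)  # 10 us transient
      signal = amp * np.sin(2 * np.pi * f0 * t)

      # Lock-in style detection: demodulate, then average over 100 us windows.
      i = signal * np.sin(2 * np.pi * f0 * t)
      q = signal * np.cos(2 * np.pi * f0 * t)
      win = int(100e-6 * fs)
      n = len(t) // win
      r = 2 * np.hypot(i[:n * win].reshape(n, win).mean(1),
                       q[:n * win].reshape(n, win).mean(1))
      print("lock-in amplitudes:", np.round(r, 3))   # the transient is averaged away
      print("full-stream peak:  ", signal.max())     # raw capture retains it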

  12. Contrasting habitat associations of imperilled endemic stream fishes from a global biodiversity hot spot

    PubMed Central

    2012-01-01

    Background: Knowledge of the factors that drive species distributions provides a fundamental baseline for several areas of research including biogeography, phylogeography and biodiversity conservation. Data from 148 minimally disturbed sites across a large drainage system in the Cape Floristic Region of South Africa were used to test the hypothesis that stream fishes have similar responses to environmental determinants of species distribution. Two complementary statistical approaches, boosted regression trees and hierarchical partitioning, were used to model the responses of four fish species to 11 environmental predictors, and to quantify the independent explanatory power of each predictor. Results: Elevation, slope, stream size, depth and water temperature were identified by both approaches as the most important causal factors for the spatial distribution of the fishes. However, the species showed marked differences in their responses to these environmental variables. Elevation and slope were of primary importance for the laterally compressed Sandelia spp., which had an upstream boundary below 430 m above sea level. The fusiform shaped Pseudobarbus ‘Breede’ was strongly influenced by stream width and water temperature. The small anguilliform shaped Galaxias ‘nebula’ was more sensitive to stream size and depth, and also penetrated into reaches at higher elevation than Sandelia spp. and Pseudobarbus ‘Breede’. Conclusions: The hypothesis that stream fishes have a common response to environmental descriptors is rejected. The contrasting habitat associations of the stream fishes considered in this study could be a reflection of their morphological divergence, which may allow them to exploit specific habitats that differ in their environmental stressors. Findings of this study encourage wider application of complementary methods in ecological studies, as they provide more confidence and deeper insights into the variables that should be managed to achieve desired conservation outcomes. PMID:23009367

  13. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
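
    The work-farm pattern described here, a parallel set of worker objects with one input and one output stream, can be sketched generically; the Python queues below stand in for the MPPA's self-synchronizing channels and are not the platform's actual programming model.

      import threading, queue

      def worker(inp, out):
          """One worker object: read from the input channel, process, emit."""
          while True:
              item = inp.get()
              if item is None:          # poison pill: shut down
                  break
              out.put(item * item)      # stand-in for compression/graphics work

      inp, out = queue.Queue(), queue.Queue()
      workers = [threading.Thread(target=worker, args=(inp, out)) for _ in range(4)]
      for w in workers:
          w.start()
      for x in range(10):               # one input stream feeds the whole farm
          inp.put(x)
      for _ in workers:
          inp.put(None)
      for w in workers:
          w.join()
      results = sorted(out.get() for _ in range(10))  # one output stream
      print(results)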

  14. Refurbishment of one-person regenerative air revitalization system

    NASA Technical Reports Server (NTRS)

    Powell, Ferolyn T.

    1989-01-01

    Regenerative processes for the revitalization of spacecraft atmospheres and reclamation of waste waters are essential for making long-term manned space missions a reality. Processes studied include: static feed water electrolysis for oxygen generation, Bosch carbon dioxide reduction, electrochemical carbon dioxide concentration, vapor compression distillation water recovery, and iodine monitoring. The objectives were to: provide engineering support to Marshall Space Flight Center personnel throughout all phases of the test program, e.g., planning through data analysis; fabricate, test, and deliver to Marshall Space Flight Center an electrochemical carbon dioxide module and test stand; fabricate and deliver an iodine monitor; evaluate the electrochemical carbon dioxide concentrator subsystem configuration and its ability to ensure safe utilization of hydrogen gas; evaluate techniques for recovering oxygen from a product oxygen and carbon dioxide stream; and evaluate the performance of an electrochemical carbon dioxide concentrator module operating without hydrogen as a method of safe-haven operation. Each of the tasks was related in that all focused on providing a better understanding of the function, operation, and performance of developmental pieces of environmental control and life support system hardware.

  15. Heat Pump Clothes Dryer Model Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo

    A heat pump clothes dryer (HPCD) is an innovative appliance that uses a vapor compression system to dry clothes. Air circulates in a closed loop through the drum, so no vent is required. The condenser heats air to evaporate moisture out of the clothes, and the evaporator condenses water out of the air stream. As a result, the HPCD can achieve 50% energy savings compared to a conventional electric resistance dryer. We developed a physics-based, quasi-steady-state HPCD system model with detailed heat exchanger and compressor models. In a novel approach, we applied a heat and mass transfer effectiveness model to simulate the drying process of the clothes load in the drum. The system model is able to simulate the inherently transient HPCD drying process, to size components, and to reveal trends in key variables (e.g. compressor discharge temperature, power consumption, required drying time, etc.) The system model was calibrated using experimental data on a prototype HPCD. In the paper, the modeling method is introduced, and the model predictions are compared with experimental data measured on a prototype HPCD.

  16. An image compression survey and algorithm switching based on scene activity

    NASA Technical Reports Server (NTRS)

    Hart, M. M.

    1985-01-01

    Data compression techniques are presented. A description of these techniques is provided along with a performance evaluation. The complexity of the hardware resulting from their implementation is also addressed. The compression effect on channel distortion and the applicability of these algorithms to real-time processing are presented. Also included is a proposed new direction for an adaptive compression technique for real-time processing.

  17. Pervaporation process and use in treating waste stream from glycol dehydrator

    DOEpatents

    Kaschemekat, Jurgen; Baker, Richard W.

    1994-01-01

    Pervaporation processes and apparatus with few moving parts. Ideally, only one pump is used to provide essentially all of the motive power and driving force needed. The process is particularly useful for handling small streams with flow rates less than about 700 gpd. Specifically, the process can be used to treat waste streams from glycol dehydrator regeneration units.

  18. Current and potential uses of bioactive molecules from marine processing waste.

    PubMed

    Suleria, Hafiz Ansar Rasul; Masci, Paul; Gobe, Glenda; Osborne, Simone

    2016-03-15

    Food industries produce huge amounts of processing waste that are often disposed of incurring expenses and impacting upon the environment. For these and other reasons, food processing waste streams, in particular marine processing waste streams, are gaining popularity amongst pharmaceutical, cosmetic and nutraceutical industries as sources of bioactive molecules. In the last 30 years, there has been a gradual increase in processed marine products with a concomitant increase in waste streams that include viscera, heads, skins, fins, bones, trimmings and shellfish waste. In 2010, these waste streams equated to approximately 24 million tonnes of mostly unused resources. Marine processing waste streams not only represent an abundant resource, they are also enriched with structurally diverse molecules that possess a broad panel of bioactivities including anti-oxidant, anti-coagulant, anti-thrombotic, anti-cancer and immune-stimulatory activities. Retrieval and characterisation of bioactive molecules from marine processing waste also contributes valuable information to the vast field of marine natural product discovery. This review summarises the current use of bioactive molecules from marine processing waste in different products and industries. Moreover, this review summarises new research into processing waste streams and the potential for adoption by industries in the creation of new products containing marine processing waste bioactives. © 2015 Society of Chemical Industry.

  19. 2013 - JPL's Snow Data System Year in Review

    NASA Astrophysics Data System (ADS)

    Goodale, C. E.; Painter, T. H.; Mattmann, C. A.; Brodzik, M.; Rittger, K. E.; Burgess, A.

    2013-12-01

    2013 has been a big year for JPL's Snow Data Processing System. This year our efforts have been focused on supporting the Colorado Basin River Forecast Center, working on products in the Western United States and Alaska for the National Climate Assessment, as well as research efforts in the Hindu Kush Himalaya Region of Asia through the generation and publication of our MODSCAG, MODDRFS and MODICE products. We have revisited the processing stream for our snow properties products as we expand to global coverage, providing a higher-quality, consistent dataset. We have enabled lossless compression as well as more efficient data types for our data arrays. This has enabled our archive to expand its coverage while conserving disk space, with the added benefit of making the data downloads smaller and faster. Storage and compression aren't the only changes we have made; there have also been several improvements to cloud masking and missing-data identification. These improvements have resulted in products that are smaller and more concise. Our source for MOD09GA data is the LP DAAC. In June of this year the DAAC switched from an FTP server to HTTP. Our team adapted to this change by using NASA's ECHO Catalog to search for links to download the MOD09GA HDF files our processing pipeline uses to create the SCAG and DRFS products. We looked at using Reverb, but it limits search results to 2000 granules, whereas our typical granule count for a single tile exceeds 4000. We would like to share our improvements to the output products, the lessons we learned from the DAAC change to HTTP so that others can learn from us, and the tools we have created. Our plan is also to show our current archive coverage of products, and where we plan to process data in 2014 and beyond.
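
    Lossless compression and tighter data types of the kind mentioned above can both be expressed in a few lines with HDF5; the file name, dataset layout and chunk size below are illustrative assumptions, not the actual MODSCAG pipeline.

      import numpy as np
      import h5py

      # Fractional snow-covered area scaled to 0-100 fits in uint8, which both
      # shrinks the array 4x vs float32 and compresses better.
      fsca = (np.random.rand(2400, 2400) * 100).astype(np.uint8)

      with h5py.File("snow_tile.h5", "w") as f:
          f.create_dataset(
              "fsca",
              data=fsca,
              dtype="uint8",            # efficient data type
              chunks=(240, 240),        # chunking enables per-block compression
              compression="gzip",       # lossless DEFLATE
              compression_opts=4,       # moderate effort/ratio trade-off
          )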

  20. Flow directionality, mountain barriers and functional traits determine diatom metacommunity structuring of high mountain streams.

    PubMed

    Dong, Xiaoyu; Li, Bin; He, Fengzhi; Gu, Yuan; Sun, Meiqin; Zhang, Haomiao; Tan, Lu; Xiao, Wen; Liu, Shuoran; Cai, Qinghua

    2016-04-19

    Stream metacommunities are structured by a combination of local (environmental filtering) and regional (dispersal) processes. The unique characteristics of high mountain streams could potentially determine metacommunity structuring, which is currently poorly understood. Aiming to understand how these characteristics influence metacommunity structuring, we explored the relative importance of local environmental conditions and various dispersal processes, including geographical (overland), topographical (across mountain barriers) and network (along flow direction) pathways, in shaping benthic diatom communities. From a trait perspective, diatoms were categorized into high-profile, low-profile and motile guilds to examine the roles of functional traits. Our results indicated that both environmental filtering and dispersal processes influenced metacommunity structuring, with dispersal contributing more than environmental processes. Among the three pathways, stream corridors were the primary pathway. Deconstructive analysis suggested different responses to environmental and spatial factors for each of the three ecological guilds. However, regardless of traits, dispersal among streams was limited by mountain barriers, while dispersal along streams was promoted by rushing flow in high mountain streams. Our results highlighted that directional processes had prevailing effects on metacommunity structuring in high mountain streams. Flow directionality, mountain barriers and ecological guilds contributed to a better understanding of the roles that mountains play in structuring metacommunities.

  1. Chromium: A Stream-Processing Framework for Interactive Rendering on Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, G.; Houston, M.; Ng, Y.-R.

    2002-01-11

    We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
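
    Chromium's central abstraction, filters that transform a stream of graphics API commands, can be sketched generically. Below, commands are (name, args) tuples and each filter is a generator that consumes one stream and yields another; the specific filters are invented for illustration and are not Chromium SPUs.

      def tile_clip(stream, tile):
          """Sort-first style filter: pass only draws that touch this tile (toy test)."""
          for name, args in stream:
              if name != "draw" or args["x"] % 2 == tile:
                  yield name, args

      def log_filter(stream):
          """Side-effecting filter: observe commands without altering the stream."""
          for cmd in stream:
              print("cmd:", cmd)
              yield cmd

      commands = [("clear", {}), ("draw", {"x": 0}), ("draw", {"x": 1}), ("swap", {})]
      pipeline = log_filter(tile_clip(iter(commands), tile=0))
      list(pipeline)  # drain: node 0 renders only its share of the draws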

  2. Mass, energy and material balances of SRF production process. Part 1: SRF produced from commercial and industrial waste.

    PubMed

    Nasrullah, Muhammad; Vainikka, Pasi; Hannula, Janne; Hurme, Markku; Kärki, Janne

    2014-08-01

    This paper presents the mass, energy and material balances of a solid recovered fuel (SRF) production process. The SRF is produced from commercial and industrial waste (C&IW) through mechanical treatment (MT). In this work the various streams of material produced in the SRF production process are analyzed for their proximate and ultimate analysis. Based on this analysis and the composition of the process streams, mass, energy and material balances are established for the SRF production process. Here, mass balance describes the overall mass flow of the input waste material into the various output streams, whereas material balance describes the mass flow of the components of the input waste stream (such as paper and cardboard, wood, plastic (soft), plastic (hard), textile and rubber) into the various output streams of the SRF production process. A commercial-scale experimental campaign was conducted on an MT waste sorting plant to produce SRF from C&IW. All the process streams (input and output) produced in this MT plant were sampled and treated according to the CEN standard methods for SRF: EN 15442 and EN 15443. The results from the mass balance of the SRF production process showed that of the total input C&IW material to the MT waste sorting plant, 62% was recovered in the form of SRF, 4% as ferrous metal, 1% as non-ferrous metal, 21% was sorted out as reject material, 11.6% as fine fraction, and 0.4% as heavy fraction. The energy flow balance in the various process streams of this SRF production process showed that of the total input energy content of the C&IW to the MT plant, 75% was recovered in the form of SRF, 20% belonged to the reject material stream and the remaining 5% to the streams of fine fraction and heavy fraction. In the material balances, the mass fractions of plastic (soft), plastic (hard), paper and cardboard and wood recovered in the SRF stream were 88%, 70%, 72% and 60%, respectively, of their input masses to the MT plant. A high mass fraction of plastic (PVC), rubber material and non-combustibles (such as stone/rock and glass particles) was found in the reject material stream. Copyright © 2014 Elsevier Ltd. All rights reserved.
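
    The reported output shares can be checked for closure of the mass balance directly from the abstract's percentages:

      # Output streams of the C&IW SRF plant as % of input mass (from the abstract).
      outputs = {"SRF": 62, "ferrous metal": 4, "non-ferrous metal": 1,
                 "reject": 21, "fine fraction": 11.6, "heavy fraction": 0.4}
      assert abs(sum(outputs.values()) - 100) < 1e-9  # the balance closes
      print({k: f"{v}%" for k, v in outputs.items()})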

  3. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... each treatment process. (b) Control options: Group 1 wastewater streams for Table 9 compounds. The... section. (c) Control options: Group 1 wastewater streams for Table 8 compounds. The owner or operator...) Residuals. For each residual removed from a Group 1 wastewater stream, the owner or operator shall control...

  4. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... each treatment process. (b) Control options: Group 1 wastewater streams for Table 9 compounds. The... section. (c) Control options: Group 1 wastewater streams for Table 8 compounds. The owner or operator...) Residuals. For each residual removed from a Group 1 wastewater stream, the owner or operator shall control...

  5. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... each treatment process. (b) Control options: Group 1 wastewater streams for Table 9 compounds. The... section. (c) Control options: Group 1 wastewater streams for Table 8 compounds. The owner or operator...) Residuals. For each residual removed from a Group 1 wastewater stream, the owner or operator shall control...

  6. Increased functional connectivity in the ventral and dorsal streams during retrieval of novel words in professional musicians.

    PubMed

    Dittinger, Eva; Valizadeh, Seyed Abolfazl; Jäncke, Lutz; Besson, Mireille; Elmer, Stefan

    2018-02-01

    Current models of speech and language processing postulate the involvement of two parallel processing streams (the dual stream model): a ventral stream involved in mapping sensory and phonological representations onto lexical and conceptual representations and a dorsal stream contributing to sound-to-motor mapping, articulation, and to how verbal information is encoded and manipulated in memory. Based on previous evidence showing that music training has an influence on language processing, cognitive functions, and word learning, we examined EEG-based intracranial functional connectivity in the ventral and dorsal streams while musicians and nonmusicians learned the meaning of novel words through picture-word associations. In accordance with the dual stream model, word learning was generally associated with increased beta functional connectivity in the ventral stream compared to the dorsal stream. In addition, in the linguistically most demanding "semantic task," musicians outperformed nonmusicians, and this behavioral advantage was accompanied by increased left-hemispheric theta connectivity in both streams. Moreover, theta coherence in the left dorsal pathway was positively correlated with the number of years of music training. These results provide evidence for a complex interplay within a network of brain regions involved in semantic processing and verbal memory functions, and suggest that intensive music training can modify its functional architecture leading to advantages in novel word learning. © 2017 Wiley Periodicals, Inc.

  7. SCOPES: steganography with compression using permutation search

    NASA Astrophysics Data System (ADS)

    Boorboor, Sahar; Zolfaghari, Behrouz; Mozafari, Saadat Pour

    2011-10-01

    LSB (Least Significant Bit) is a widely used method for image steganography, which hides the secret message as a bit stream in the LSBs of pixel bytes in the cover image. This paper proposes a variant of LSB named SCOPES that encodes and compresses the secret message while it is being hidden, by storing addresses instead of message bytes. Reducing the length of the stored message improves the storage capacity and makes the stego image visually less suspicious to a third party. The main idea behind the SCOPES approach is dividing the message into 3-character segments, seeking each segment in the cover image and storing the address of the position containing the segment instead of the segment itself. In this approach, every permutation of the 3 bytes (if found) can be stored along with some extra bits indicating the permutation. In some rare cases a segment may not be found in the image, and this can cause the message to be expanded by some overhead bits instead of being compressed. But experimental results show that SCOPES performs better than traditional LSB overall, even in the worst cases.
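
    A minimal sketch of the SCOPES search step as described: divide the message into 3-character segments, look each segment (or any permutation of it) up in the cover image, and record an address plus a permutation tag instead of the bytes themselves. The record format and fallback marker are assumptions; a real implementation would additionally pack these records into the cover's LSBs.

      from itertools import permutations

      def scopes_encode(message: bytes, cover: bytes):
          """For each 3-byte segment, store (address, permutation-id) if some
          permutation of the segment occurs in the cover; else store raw bytes."""
          perms = list(permutations(range(3)))        # 6 orderings -> 3 tag bits
          # Index every 3-byte window of the cover for O(1) lookup.
          index = {}
          for i in range(len(cover) - 2):
              index.setdefault(cover[i:i + 3], i)
          out = []
          for s in range(0, len(message) - 2, 3):
              seg = message[s:s + 3]
              for pid, p in enumerate(perms):
                  pos = index.get(bytes(seg[j] for j in p))
                  if pos is not None:
                      out.append(("addr", pos, pid))  # compressed case
                      break
              else:
                  out.append(("raw", seg))            # rare fallback: expands
          return out

      cover = bytes(range(256)) * 4                   # stand-in for pixel bytes
      print(scopes_encode(b"abcfedxyz", cover))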

  8. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video is becoming more practical as a data-gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  9. Nonlinear Stability and Structure of Compressible Reacting Mixing Layers

    NASA Technical Reports Server (NTRS)

    Day, M. J.; Mansour, N. N.; Reynolds, W. C.

    2000-01-01

    The parabolized stability equations (PSE) are used to investigate issues of nonlinear flow development and mixing in compressible reacting shear layers. Particular interest is placed on investigating the change in flow structure that occurs when compressibility and heat release are added to the flow. These conditions allow the 'outer' instability modes, one associated with each of the fast and slow streams, to dominate over the 'central' Kelvin-Helmholtz mode that appears unaccompanied in incompressible nonreacting mixing layers. Analysis of scalar probability density functions in flows with dominant outer modes demonstrates the ineffective, one-sided nature of the mixing that accompanies these flow structures. Colayer conditions, where two modes have equal growth rate and the mixing layer is formed by two sets of vortices, offer some opportunity for mixing enhancement. Their extent, however, is found to be limited in the mixing layer's parameter space. Extensive validation of the PSE technique also provides a unique perspective on central-mode vortex pairing, further supporting the view that pairing is primarily governed by linear mechanisms. This perspective sheds insight on how linear stability theory is able to provide such an accurate prediction of an experimentally observed, fully nonlinear flow phenomenon.

  10. Complex Catchment Processes that Control Stream Nitrogen and Organic Matter Concentrations in a Northeastern USA Upland Catchment

    NASA Astrophysics Data System (ADS)

    Sebestyen, S. D.; Shanley, J. B.; Pellerin, B.; Saraceno, J.; Aiken, G. R.; Boyer, E. W.; Doctor, D. H.; Kendall, C.

    2009-05-01

    There is a need to understand the coupled biogeochemical and hydrological processes that control stream hydrochemistry in upland forested catchments. At watershed 9 (W-9) of the Sleepers River Research Watershed in the northeastern USA, we use high-frequency sampling, environmental tracers, end-member mixing analysis, and stream reach mass balances to understand the dynamic factors that affect the forms and concentrations of nitrogen and organic matter in streamflow. We found that rates of stream nitrate processing changed during autumn baseflow and that up to 70% of nitrate inputs to a stream reach were retained. At the same time, the stream reach was a net source of the dissolved organic carbon (DOC) and dissolved organic nitrogen (DON) fractions of dissolved organic matter (DOM). The in-stream nitrate loss and DOM gains are examples of hot moments of biogeochemical transformation during autumn, when deciduous litter fall increases DOM availability. As hydrological flowpaths changed during rainfall events, the sources and transformations of nitrate and DOM differed from baseflow. For example, during storm flow we measured direct inputs of unprocessed atmospheric nitrate to streams that were as large as 30% of the stream nitrate loading. At the same time, stream DOM composition shifted to reflect inputs of reactive organic matter from surficial upland soils. The transport of atmospheric nitrate and reactive DOM to streams underscores the importance of quantifying source variation during short-duration stormflow events. Building upon these findings, we present a conceptual model of interacting ecosystem processes that control the flow of water and nutrients to streams in a temperate upland catchment.

  11. Digital cinema system using JPEG2000 movie of 8-million pixel resolution

    NASA Astrophysics Data System (ADS)

    Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu

    2003-05-01

    We have developed a prototype digital cinema system that can store, transmit and display extra-high-quality movies of 8-million-pixel resolution, using the JPEG2000 coding algorithm. The format has four times the resolution of HDTV, and enables us to replace conventional films with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. The coded streams of 300-500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder can perform real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses 3 pieces of 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens on a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker while preserving compatibility with cinema movies of 24 frames per second.
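
    The quoted figures imply the compression ratio directly. Note that 3840 x 2048 pixels at 24 bits per pixel and 24 frames per second reproduces the stated 4.5 Gbps raw rate, suggesting the interface rate is counted at 24 bits per pixel; a quick check against the midpoint of the 300-500 Mbps coded range:

      pixels = 3840 * 2048                 # one ~8-megapixel frame
      fps = 24
      for bits_per_pixel in (24, 30):
          raw_bps = pixels * bits_per_pixel * fps
          print(f"{bits_per_pixel}-bit: raw = {raw_bps / 1e9:.2f} Gbps, "
                f"ratio vs 400 Mbps coded ~ {raw_bps / 400e6:.0f}:1")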

  12. Modeling and Simulation of Compression Molding Process for Sheet Molding Compound (SMC) of Chopped Carbon Fiber Composites

    DOE PAGES

    Li, Yang; Chen, Zhangxing; Xu, Hongyi; ...

    2017-01-02

    Compression molded SMC composed of chopped carbon fiber and resin polymer, which balances mechanical performance and manufacturing cost, presents a promising solution for vehicle lightweighting strategies. However, the performance of SMC molded parts depends strongly on the compression molding process and the local microstructure, which greatly increases the cost of part-level performance testing and lengthens the design cycle. ICME (Integrated Computational Material Engineering) approaches are thus necessary tools to reduce the number of experiments required during part design and speed up the deployment of SMC materials. As the fundamental stage of the ICME workflow, commercial software packages for SMC compression molding exist yet remain not fully validated, especially for chopped fiber systems. In this study, SMC plaques are prepared through the compression molding process. The corresponding simulation models are built in Autodesk Moldflow with the same part geometry and processing conditions as in the molding tests. The output variables of the compression molding simulations, including press force history and fiber orientation of the part, are compared with experimental data. The influence of the processing conditions on the fiber orientation of the SMC plaque is also discussed. It is found that Autodesk Moldflow can generally achieve a good simulation of the compression molding process for chopped carbon fiber SMC, yet quantitative discrepancies remain between predicted variables and experimental results.

  13. Modeling and Simulation of Compression Molding Process for Sheet Molding Compound (SMC) of Chopped Carbon Fiber Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang; Chen, Zhangxing; Xu, Hongyi

    Compression molded SMC composed of chopped carbon fiber and resin polymer, which balances mechanical performance and manufacturing cost, presents a promising solution for vehicle lightweighting strategies. However, the performance of SMC molded parts depends strongly on the compression molding process and the local microstructure, which greatly increases the cost of part-level performance testing and lengthens the design cycle. ICME (Integrated Computational Material Engineering) approaches are thus necessary tools to reduce the number of experiments required during part design and speed up the deployment of SMC materials. As the fundamental stage of the ICME workflow, commercial software packages for SMC compression molding exist yet remain not fully validated, especially for chopped fiber systems. In this study, SMC plaques are prepared through the compression molding process. The corresponding simulation models are built in Autodesk Moldflow with the same part geometry and processing conditions as in the molding tests. The output variables of the compression molding simulations, including press force history and fiber orientation of the part, are compared with experimental data. The influence of the processing conditions on the fiber orientation of the SMC plaque is also discussed. It is found that Autodesk Moldflow can generally achieve a good simulation of the compression molding process for chopped carbon fiber SMC, yet quantitative discrepancies remain between predicted variables and experimental results.

  14. The effect of compression speed on intelligibility: simulated hearing-aid processing with and without original temporal fine structure information.

    PubMed

    Hopkins, Kathryn; King, Andrew; Moore, Brian C J

    2012-09-01

    Hearing aids use amplitude compression to compensate for the effects of loudness recruitment. The compression speed that gives the best speech intelligibility varies among individuals. Moore [(2008). Trends Amplif. 12, 300-315] suggested that an individual's sensitivity to temporal fine structure (TFS) information may affect which compression speed gives most benefit. This hypothesis was tested using normal-hearing listeners with a simulated hearing loss. Sentences in a competing talker background were processed using multi-channel fast or slow compression followed by a simulation of threshold elevation and loudness recruitment. Signals were either tone vocoded with 1-ERB(N)-wide channels (where ERB(N) is the bandwidth of normal auditory filters) to remove the original TFS information, or not processed further. In a second experiment, signals were vocoded with either 1- or 2-ERB(N)-wide channels, to test whether the available spectral detail affects the optimal compression speed. Intelligibility was significantly better for fast than slow compression regardless of vocoder channel bandwidth. The results suggest that the availability of original TFS or detailed spectral information does not affect the optimal compression speed. This conclusion is tentative, since while the vocoder processing removed the original TFS information, listeners may have used the altered TFS in the vocoded signals.

  15. System for processing an encrypted instruction stream in hardware

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griswold, Richard L.; Nickless, William K.; Conrad, Ryan C.

    A system and method of processing an encrypted instruction stream in hardware is disclosed. Main memory stores the encrypted instruction stream and unencrypted data. A central processing unit (CPU) is operatively coupled to the main memory. A decryptor is operatively coupled to the main memory and located within the CPU. The decryptor decrypts the encrypted instruction stream upon receipt of an instruction fetch signal from a CPU core. Unencrypted data is passed through to the CPU core without decryption upon receipt of a data fetch signal.
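
    A toy software model of the fetch path described, in which the decryptor sits inside the CPU and acts only on instruction fetches while data passes through untouched. XOR stands in for the real cipher, and the memory map, key and byte values are invented for illustration.

      KEY = 0x5A  # toy key; a real design would use a proper block cipher

      class Memory:
          def __init__(self, encrypted_text: bytes, data: bytes):
              self.text = encrypted_text   # encrypted instruction stream
              self.data = data             # unencrypted data

      class CPU:
          def __init__(self, mem):
              self.mem = mem
          def fetch(self, addr, instruction_fetch: bool):
              if instruction_fetch:
                  # The decryptor acts only on the instruction-fetch path.
                  return self.mem.text[addr] ^ KEY
              return self.mem.data[addr]   # data passes through undecrypted

      plain = bytes([0x90, 0x0F, 0x05])                    # toy "instructions"
      mem = Memory(bytes(b ^ KEY for b in plain), b"hello")
      cpu = CPU(mem)
      print([hex(cpu.fetch(i, True)) for i in range(3)])   # decrypted opcodes
      print(bytes(cpu.fetch(i, False) for i in range(5)))  # raw data, no decryption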

  16. On the physical mechanisms governing the cloud lifecycle in the Central Molecular Zone of the Milky Way

    NASA Astrophysics Data System (ADS)

    Jeffreson, S. M. R.; Kruijssen, J. M. D.; Krumholz, M. R.; Longmore, S. N.

    2018-05-01

    We apply an analytic theory for environmentally-dependent molecular cloud lifetimes to the Central Molecular Zone of the Milky Way. Within this theory, the cloud lifetime in the Galactic centre is obtained by combining the time-scales for gravitational instability, galactic shear, epicyclic perturbations and cloud-cloud collisions. We find that at galactocentric radii ~45-120 pc, corresponding to the location of the '100-pc stream', cloud evolution is primarily dominated by gravitational collapse, with median cloud lifetimes between 1.4 and 3.9 Myr. At all other galactocentric radii, galactic shear dominates the cloud lifecycle, and we predict that molecular clouds are dispersed on time-scales between 3 and 9 Myr, without a significant degree of star formation. Along the outer edge of the 100-pc stream, between radii of 100 and 120 pc, the time-scales for epicyclic perturbations and gravitational free-fall are similar. This similarity of time-scales lends support to the hypothesis that, depending on the orbital geometry and timing of the orbital phase, cloud collapse and star formation in the 100-pc stream may be triggered by a tidal compression at pericentre. Based on the derived time-scales, this should happen in approximately 20 per cent of all accretion events onto the 100-pc stream.

  17. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision.

    PubMed

    Van Dromme, Ilse C; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-04-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.

  18. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision

    PubMed Central

    Van Dromme, Ilse C.; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-01-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. PMID:27082854

  19. Atypical energetic particle events observed prior to energetic particle enhancements associated with corotating interaction regions

    NASA Astrophysics Data System (ADS)

    Khabarova, Olga; Malandraki, Olga; Zank, Gary; Jackson, Bernard; Bisi, Mario; Desai, Mihir; Li, Gang; le Roux, Jakobus; Yu, Hsiu-Shan

    2017-04-01

    Recent studies of mechanisms of particle acceleration in the heliosphere have revealed the importance of the comprehensive analysis of stream-stream interactions as well as the heliospheric current sheet (HCS) - stream interactions that often occur in the solar wind, producing huge magnetic cavities bounded by strong current sheets. Such cavities are usually filled with small-scale magnetic islands that trap and re-accelerate energetic particles (Zank et al. ApJ, 2014, 2015; le Roux et al. ApJ, 2015, 2016; Khabarova et al. ApJ, 2015, 2016). Crossings of these regions are associated with unusual variations in the energetic particle flux up to several MeV/nuc near the Earth's orbit. These energetic particle flux enhancements, called "atypical energetic particle events" (AEPEs), are not associated with standard mechanisms of particle acceleration. The analysis of multi-spacecraft measurements of energetic particle flux, plasma and the interplanetary magnetic field shows that AEPEs have a local origin, as they are observed by different spacecraft with a time delay corresponding to the solar wind propagation from one spacecraft to another, which is a signature of local particle acceleration in a region embedded in the expanding and rotating background solar wind. AEPEs are often observed before the arrival of corotating interaction regions (CIRs) or stream interaction regions (SIRs) at the Earth's orbit. When fast solar wind streams catch up with slow solar wind, SIRs of compressed heated plasma or more regular CIRs are created at the leading edge of the high-speed stream. Since coronal holes are often long-lived structures, the same CIR often re-appears for several consecutive solar rotations. At low heliographic latitudes, such CIRs are typically bounded by forward and reverse waves on their leading and trailing edges, respectively, that steepen into shocks at heliocentric distances beyond 1 AU. Energetic ion increases have frequently been observed in association with CIR shocks, and these shocks are believed to accelerate ions up to several MeV per nucleon. In this paradigm, particle acceleration is commonly believed to occur mainly at the well-formed reverse shock at 2-3 AU, with particles streaming back from the shocks in the outer heliosphere to 1 AU (Malandraki et al., 2007). However, AEPEs observed for many hours before the crossing of the forward shock (or even before the leading edge of a CIR without a well-formed forward shock) cannot be explained within the framework of this paradigm. We have recently found that pre-CIR AEPEs occur mainly as a result of the formation of a region filled with magnetic islands compressed between the high-density leading edge of a CIR and the HCS (Khabarova et al. ApJ, 2016). We show here that any kind of complicated stream-CIR interaction may lead to the same effect due to the formation of magnetic cavities in front of CIRs. The analysis of in situ multi-spacecraft measurements often suggests very complicated ways of propagation of streams and current sheets that form magnetic cavities. In the case of multiple stream-stream interactions, comparisons of data from distant spacecraft may be puzzling and even useless for understanding the large-scale topology of the region of particle acceleration, because even several point measurements cannot reconstruct the approximate forms of the magnetic cavities and shed light on the pre-history of their origin and evolution.
We employ interplanetary scintillation tomographic data for reconstructions of the solar wind speed, density and interplanetary magnetic field profiles to understand the 3-D picture of the stream interactions responsible for pre-CIR AEPEs. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637324.

  20. Comparison of drinking water treatment process streams for optimal bacteriological water quality.

    PubMed

    Ho, Lionel; Braun, Kalan; Fabris, Rolando; Hoefel, Daniel; Morran, Jim; Monis, Paul; Drikas, Mary

    2012-08-01

    Four pilot-scale treatment process streams (Stream 1 - Conventional treatment (coagulation/flocculation/dual media filtration); Stream 2 - Magnetic ion exchange (MIEX)/Conventional treatment; Stream 3 - MIEX/Conventional treatment/granular activated carbon (GAC) filtration; Stream 4 - Microfiltration/nanofiltration) were commissioned to compare their effectiveness in producing high quality potable water prior to disinfection. Despite receiving highly variable source water quality throughout the investigation, each stream consistently reduced colour and turbidity to below Australian Drinking Water Guideline levels, with the exception of Stream 1 which was difficult to manage due to the reactive nature of coagulation control. Of particular interest was the bacteriological quality of the treated waters where flow cytometry was shown to be the superior monitoring tool in comparison to the traditional heterotrophic plate count method. Based on removal of total and active bacteria, the treatment process streams were ranked in the order: Stream 4 (average log removal of 2.7) > Stream 2 (average log removal of 2.3) > Stream 3 (average log removal of 1.5) > Stream 1 (average log removal of 1.0). The lower removals in Stream 3 were attributed to bacteria detaching from the GAC filter. Bacterial community analysis revealed that the treatments affected the bacteria present, with the communities in streams incorporating conventional treatment clustering with each other, while the community composition of Stream 4 was very different to those of Streams 1, 2 and 3. MIEX treatment was shown to enhance removal of bacteria due to more efficient flocculation which was validated through the novel application of the photometric dispersion analyser. Copyright © 2012 Elsevier Ltd. All rights reserved.
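
    Log removal values like those reported are computed as log10 of the ratio of influent to effluent cell counts; a minimal sketch with invented flow-cytometry counts:

      import math

      def log_removal(cells_in: float, cells_out: float) -> float:
          """Log10 reduction value: 2.7 means a ~500-fold drop in cell counts."""
          return math.log10(cells_in / cells_out)

      # Hypothetical flow-cytometry counts (cells/mL), not the study's data.
      print(round(log_removal(1.0e6, 2.0e3), 1))  # -> 2.7, Stream 4-like performance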

  1. Development of the Hydroecological Integrity Assessment Process for Determining Environmental Flows for New Jersey Streams

    USGS Publications Warehouse

    Kennen, Jonathan G.; Henriksen, James A.; Nieswand, Steven P.

    2007-01-01

    The natural flow regime paradigm and parallel stream ecological concepts and theories have established the benefits of maintaining or restoring the full range of natural hydrologic variation for physiochemical processes, biodiversity, and the evolutionary potential of aquatic and riparian communities. A synthesis of recent advances in hydroecological research coupled with stream classification has resulted in a new process to determine environmental flows and assess hydrologic alteration. This process has national and international applicability. It allows classification of streams into hydrologic stream classes and identification of a set of non-redundant and ecologically relevant hydrologic indices for 10 critical sub-components of flow. Three computer programs have been developed for implementing the Hydroecological Integrity Assessment Process (HIP): (1) the Hydrologic Indices Tool (HIT), which calculates 171 ecologically relevant hydrologic indices on the basis of daily-flow and peak-flow stream-gage data; (2) the New Jersey Hydrologic Assessment Tool (NJHAT), which can be used to establish a hydrologic baseline period, provide options for setting baseline environmental-flow standards, and compare past and proposed streamflow alterations; and (3) the New Jersey Stream Classification Tool (NJSCT), designed for placing unclassified streams into pre-defined stream classes. Biological and multivariate response models including principal-component, cluster, and discriminant-function analyses aided in the development of software and implementation of the HIP for New Jersey. A pilot effort is currently underway by the New Jersey Department of Environmental Protection in which the HIP is being used to evaluate the effects of past and proposed surface-water use, ground-water extraction, and land-use changes on stream ecosystems while determining the most effective way to integrate the process into ongoing regulatory programs. Ultimately, this scientifically defensible process will help to quantify the effects of anthropogenic changes and development on hydrologic variability and help planners and resource managers balance current and future water requirements with ecological needs.

  2. Metallic Filters

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Filtration technology originated in a mid-1960s NASA study. The results were distributed to the filter industry, and HR Textron responded, using the study as a point of departure for the development of its 421 Filter Media. The HR system is composed of ultrafine steel fibers metallurgically bonded and compressed so that the pore structure is locked in place. The filters are used to filter polyesters and plastics, to clean hydrocarbon streams, etc. Several major companies use the product in chemical applications, pollution control, etc.

  3. Chemical Reactions in Turbulent Mixing Flows

    DTIC Science & Technology

    1989-10-15

    response, following the program control actuation. 3 in^2 vs. roughly the 8 in^2 required for the M1 = 1.5 (high speed stream Mach number) flow planned for...Papamoschou & Roshko (1988) as the compressibility-effect parameter, on the growth rate of the mixing layers was studied. In a finite thickness...attained is at present controversial; an issue that will have to be resolved both theoretically as well as with the planned experiments. A sample plot

  4. Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis

    NASA Astrophysics Data System (ADS)

    Yermolaev, Yu. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Yu.

    2015-09-01

    Using the OMNI data for the period 1976-2000, we investigate the temporal profiles of 20 plasma and field parameters in the disturbed large-scale types of solar wind (SW): corotating interaction regions (CIR), interplanetary coronal mass ejections (ICME) (both magnetic clouds (MC) and Ejecta), and Sheath, as well as the interplanetary shock (IS). To take into account the different durations of the SW types, we use the double superposed epoch analysis (DSEA) method: rescaling the duration of each interval so that the beginnings and ends of all intervals of a selected type coincide. As the analyzed SW types can interact with each other and change parameters as a result of such interactions, we investigate separately eight sequences of SW types: (1) CIR, (2) IS/CIR, (3) Ejecta, (4) Sheath/Ejecta, (5) IS/Sheath/Ejecta, (6) MC, (7) Sheath/MC, and (8) IS/Sheath/MC. The main conclusion is that the behavior of parameters in Sheath and in CIR are very similar, both qualitatively and quantitatively. Both the high-speed stream (HSS) and the fast ICME play the role of pistons that push the plasma located ahead of them. The increase of speed in HSS and ICME leads first to the formation of compression regions (CIR and Sheath, respectively) and then to IS. The occurrence of compression regions and IS increases the probability of growth of magnetospheric activity.
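
    The rescaling step of the DSEA method can be sketched as interpolation of each variable-length interval onto a common normalized epoch axis before averaging across events; the event profiles below are synthetic stand-ins for OMNI parameters:

      import numpy as np

      def dsea_average(intervals, n_points=100):
          """Rescale variable-length intervals onto a common epoch axis (0..1,
          so all beginnings and ends coincide), then average across events."""
          grid = np.linspace(0.0, 1.0, n_points)
          rescaled = [np.interp(grid, np.linspace(0, 1, len(y)), y) for y in intervals]
          return grid, np.mean(rescaled, axis=0)

      # Synthetic "CIR" events of different durations (e.g. proton density profiles).
      events = [np.sin(np.linspace(0, np.pi, n)) + np.random.randn(n) * 0.1
                for n in (37, 52, 80, 66)]
      phase, profile = dsea_average(events)
      print(phase.shape, profile.shape)  # common epoch axis and mean profile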

  5. Machine vision system for automated detection of stained pistachio nuts

    NASA Astrophysics Data System (ADS)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  6. Ammonia Monitor

    NASA Technical Reports Server (NTRS)

    Sauer, Richard L. (Inventor); Akse, James R. (Inventor); Thompson, John O. (Inventor); Atwater, James E. (Inventor)

    1999-01-01

    An ammonia monitor and method of use are disclosed. Continuous, real-time determination of the concentration of ammonia in an aqueous process stream is possible over a wide dynamic range of concentrations. No reagents are required because pH is controlled by an in-line solid-phase base. Ammonia is selectively transported across a membrane from the process stream to an analytical stream under pH control. The specific electrical conductance of the analytical stream is measured and used to determine the concentration of ammonia.

  7. Man-made vitreous fiber produced from incinerator ash using the thermal plasma technique and application as reinforcement in concrete.

    PubMed

    Yang, Sheng-Fu; Wang, To-Mai; Lee, Wen-Cheng; Sun, Kin-Seng; Tzeng, Chin-Ching

    2010-10-15

    This study proposes using thermal plasma technology to treat municipal solid waste incinerator ashes. A feasible fiberization method was developed and applied to produce man-made vitreous fiber (MMVF) from plasma-vitrified slag. MMVF were obtained by directly blending the oxide melt stream with high-velocity compressed air. The basic technological characteristics of MMVF, including morphology, diameter, shot content, length and chemical resistance, are described in this work. Laboratory experiments were conducted on the fiber-reinforced concrete. The effects of fiber content on compressive strength and flexural strength are presented. The experimental results showed that a proper addition of MMVF to concrete can enhance its mechanical properties. MMVF products produced from incinerator ashes treated with the thermal plasma technique have great potential as reinforcement in concrete. 2010 Elsevier B.V. All rights reserved.

  8. Design of an H.264/SVC resilient watermarking scheme

    NASA Astrophysics Data System (ADS)

    Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter

    2010-01-01

    The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.

  9. Mass, energy and material balances of SRF production process. Part 2: SRF produced from construction and demolition waste.

    PubMed

    Nasrullah, Muhammad; Vainikka, Pasi; Hannula, Janne; Hurme, Markku; Kärki, Janne

    2014-11-01

    In this work, the fraction of construction and demolition waste (C&D waste) that is too complicated and not economically feasible to sort for recycling is used to produce solid recovered fuel (SRF) through mechanical treatment (MT). The paper presents the mass, energy and material balances of this SRF production process. All the process streams (input and output) produced in the MT waste sorting plant to produce SRF from C&D waste are sampled and treated according to CEN standard methods for SRF. Proximate and ultimate analyses of these streams are performed and their composition is determined. Based on this analysis and the composition of the process streams, the mass, energy and material balances of the SRF production process are established. Here, mass balance means the overall mass flow of the input waste material stream across the various output streams, and material balance means the mass flow of the components of the input waste material stream (such as paper and cardboard, wood, plastic (soft), plastic (hard), textile and rubber) across the various output streams of the SRF production process. The results from the mass balance showed that, of the total C&D waste material input to the MT waste sorting plant, 44% was recovered in the form of SRF, 5% as ferrous metal and 1% as non-ferrous metal, while 28% was sorted out as fine fraction, 18% as reject material and 4% as heavy fraction. The energy balance showed that, of the total input energy content of the C&D waste material, 74% was recovered in the form of SRF, 16% belonged to the reject material and the remaining 10% belonged to the streams of fine fraction and heavy fraction. From the material balances of this process, the mass fractions of plastic (soft), paper and cardboard, wood and plastic (hard) recovered in the SRF stream were 84%, 82%, 72% and 68%, respectively, of their input masses to the MT plant. A high mass fraction of plastic (PVC) and rubber material was found in the reject material stream. The streams of heavy fraction and fine fraction mainly contained non-combustible material (such as stone/rock, sand particles and gypsum material). Copyright © 2014 Elsevier Ltd. All rights reserved.
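
    As a quick plausibility check, the reported output fractions should close the mass balance, i.e., sum to 100% of the plant input. A minimal sketch (our illustration, using only the percentages quoted above):

```python
# Mass-balance closure check using the output fractions reported in the
# abstract (percent of total C&D waste input to the MT sorting plant).
output_fractions = {
    "SRF": 44,
    "ferrous metal": 5,
    "non-ferrous metal": 1,
    "fine fraction": 28,
    "reject material": 18,
    "heavy fraction": 4,
}

total = sum(output_fractions.values())
assert total == 100, f"mass balance does not close: {total}%"
print(f"mass balance closes: {total}% of input accounted for")
```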

  10. On the organization of the perisylvian cortex: Insights from the electrophysiology of language. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by M.A. Arbib

    NASA Astrophysics Data System (ADS)

    Brouwer, Harm; Crocker, Matthew W.

    2016-03-01

    The Mirror System Hypothesis (MSH) on the evolution of the language-ready brain draws upon the parallel dorsal-ventral stream architecture for vision [1]. The dorsal 'how' stream provides a mapping of parietally-mediated affordances onto the motor system (supporting preshape), whereas the ventral 'what' stream engages in object recognition and visual scene analysis (supporting pantomime and verbal description). Arbib attempts to integrate this MSH perspective with a recent conceptual dorsal-ventral stream model of auditory language comprehension [5] (henceforth, the B&S model). In the B&S model, the dorsal stream engages in time-dependent combinatorial processing, which subserves syntactic structuring and linkage to action, whereas the ventral stream performs time-independent unification of conceptual schemata. These streams are integrated in the left Inferior Frontal Gyrus (lIFG), which is assumed to subserve cognitive control but no linguistic processing functions. Arbib criticizes the B&S model on two grounds: (i) the time-independence of the semantic processing in the ventral stream (arguing that semantic processing is just as time-dependent as syntactic processing), and (ii) the absence of linguistic processing in the lIFG (reconciling syntactic and semantic representations is very much linguistic processing proper). Here, we provide further support for these two points of criticism on the basis of insights from the electrophysiology of language. In the course of our argument, we also sketch the contours of an alternative model that may prove better suited for integration with the MSH.

  11. Efficient gas-separation process to upgrade dilute methane stream for use as fuel

    DOEpatents

    Wijmans, Johannes G [Menlo Park, CA; Merkel, Timothy C [Menlo Park, CA; Lin, Haiqing [Mountain View, CA; Thompson, Scott [Brecksville, OH; Daniels, Ramin [San Jose, CA

    2012-03-06

    A membrane-based gas separation process for treating gas streams that contain methane in low concentrations. The invention involves flowing the stream to be treated across the feed side of a membrane and flowing a sweep gas stream, usually air, across the permeate side. Carbon dioxide permeates the membrane preferentially and is picked up in the sweep air stream on the permeate side; oxygen permeates in the other direction and is picked up in the methane-containing stream. The resulting residue stream is enriched in methane as well as oxygen and has an EMC value enabling it to be either flared or combusted by mixing with ordinary air.

  12. Compression of magnetized target in the magneto-inertial fusion

    NASA Astrophysics Data System (ADS)

    Kuzenov, V. V.

    2017-12-01

    This paper presents a mathematical model, a numerical method, and results of a computer analysis of the compression process and energy transfer in the target plasma used in magneto-inertial fusion. A computer simulation of the compression of a magnetized cylindrical target by a high-power laser pulse is presented.

  13. Platinum recovery from industrial process streams by halophilic bacteria: Influence of salt species and platinum speciation.

    PubMed

    Maes, Synthia; Claus, Mathias; Verbeken, Kim; Wallaert, Elien; De Smet, Rebecca; Vanhaecke, Frank; Boon, Nico; Hennebel, Tom

    2016-11-15

    The increased use and criticality of platinum call for the development of effective low-cost strategies for metal recovery from process and waste streams. Although biotechnological processes can be applied for the valorization of diluted aqueous industrial streams, investigations considering real stream conditions (e.g., high salt levels, acidic pH, metal speciation) are lacking. This study investigated the recovery of platinum by a halophilic microbial community in the presence of increased salt concentrations (10-80 g L⁻¹), different salt matrices (phosphate salts, sea salts and NH₄Cl) and a refinery process stream. The halophiles were able to recover 79-99% of the Pt at 10-80 g L⁻¹ salts and at pH 2.3. Transmission electron microscopy suggested a positive correlation between intracellular Pt cluster size and elevated salt concentrations. Furthermore, the halophiles recovered 46-95% of the Pt-amine complex Pt[NH₃]₄²⁺ from a process stream after the addition of an alternative Pt source (K₂PtCl₄, 0.1-1.0 g L⁻¹ Pt). Repeated Pt-tetraamine recovery from an industrial process stream was obtained after concomitant addition of fresh biomass and harvesting of Pt-saturated biomass. This study demonstrates how aqueous Pt streams can be transformed into Pt-rich biomass, which would be an interesting feed for a precious metals refinery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Using compression calorimetry to characterize powder compaction behavior of pharmaceutical materials.

    PubMed

    Buckner, Ira S; Friedman, Ross A; Wurster, Dale Eric

    2010-02-01

    The process by which pharmaceutical powders are compressed into cohesive compacts or tablets has been studied using a compression calorimeter. Relating the various thermodynamic results to relevant physical processes has been emphasized. Work, heat, and internal energy change values have been determined with the compression calorimeter for common pharmaceutical materials. A framework of equations has been proposed relating the physical processes of friction, reversible deformation, irreversible deformation, and inter-particle bonding to the compression calorimetry values. The results indicate that irreversible deformation dominated many of the thermodynamic values, especially the net internal energy change following the compression-decompression cycle. The relationships between the net work and the net heat from the complete cycle were very clear indicators of predominating deformation mechanisms. Likewise, the ratio of energy stored as internal energy to the initial work input distinguished the materials according to their brittle or plastic deformation tendencies. (c) 2009 Wiley-Liss, Inc. and the American Pharmacists Association.
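
    The underlying bookkeeping is the first law of thermodynamics applied to one compression-decompression cycle: the internal energy retained by the compact equals the net work input minus the heat released. A minimal sketch with hypothetical numbers (not the authors' data):

```python
def net_internal_energy_change(work_in_j: float, heat_out_j: float) -> float:
    """First-law bookkeeping for one compression-decompression cycle:
    energy retained by the compact = net work done on the powder bed
    minus heat released to the calorimeter (both in joules)."""
    return work_in_j - heat_out_j

# Hypothetical values: 10 J of net work with 7 J released as heat leaves
# 3 J stored in the compact; the stored/input ratio (0.3 here) is the kind
# of index used to separate brittle from plastic deformation tendencies.
delta_u = net_internal_energy_change(10.0, 7.0)
print(delta_u, delta_u / 10.0)
```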

  15. Acceleration of plasma electrons by intense nonrelativistic ion and electron beams propagating in background plasma due to two-stream instability

    NASA Astrophysics Data System (ADS)

    Kaganovich, Igor D.

    2015-11-01

    In this paper we study the effects of the two-stream instability on the propagation of intense nonrelativistic ion and electron beams in background plasma. Development of the two-stream instability between the beam ions and plasma electrons leads to beam breakup, a slowing down of the beam particles, acceleration of the plasma particles, and transfer of the beam energy to the plasma particles and wave excitations. Making use of the particle-in-cell codes EDIPIC and LSP, together with analytic theory, we have simulated the effects of the two-stream instability on beam propagation over a wide range of beam and plasma parameters. Because of the two-stream instability, the plasma electrons can be accelerated to velocities as high as twice the beam velocity. The resulting return current of the accelerated electrons may completely change the structure of the beam self-magnetic field, thereby changing its effect on the beam from focusing to defocusing. Therefore, previous theories of beam self-electromagnetic fields that did not take into account the effects of the two-stream instability must be significantly modified. This effect can be observed on the National Drift Compression Experiment-II (NDCX-II) facility by measuring the spot size of the extracted beamlet propagating through several meters of plasma. Particle-in-cell simulations, fluid simulations, and analytical theory also reveal the rich complexity of beam-plasma interaction phenomena: intermittency and multiple regimes of the two-stream instability in dc discharges; band structure of the growth rate of the two-stream instability of an electron beam propagating in a bounded plasma; and repeated acceleration of electrons in a finite system. In collaboration with E. Tokluoglu, D. Sydorenko, E. A. Startsev, J. Carlsson, and R. C. Davidson. Research supported by the U.S. Department of Energy.

  16. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
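
    The pre-trigger/post-trigger archiving idea can be sketched in software; the Python ring buffer below is our illustration only, since the actual system is a VLSI state machine with fuzzy-logic event detection:

```python
from collections import deque

class EventTriggeredRecorder:
    """Keep a rolling window of recent frames; when a trigger fires,
    archive the buffered pre-trigger frames plus a fixed number of
    post-trigger frames, discarding everything else."""

    def __init__(self, pre_frames: int, post_frames: int):
        self.pre = deque(maxlen=pre_frames)  # rolling pre-trigger history
        self.post_needed = post_frames
        self.post_count = 0
        self.triggered = False
        self.archive = []

    def push(self, frame, trigger: bool) -> None:
        if self.triggered:
            self.archive.append(frame)
            self.post_count += 1
            if self.post_count >= self.post_needed:
                self.triggered, self.post_count = False, 0
        elif trigger:
            # Event onset: the buffered history becomes the pre-trigger record.
            self.archive.extend(self.pre)
            self.archive.append(frame)
            self.pre.clear()
            self.triggered = True
        else:
            self.pre.append(frame)  # non-event frames eventually fall off
```

    In the hardware design, the trigger input would come from the change detector monitoring the image stream for the long- or short-term changes described above.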

  17. A Determinate Model of Thrust-Augmenting Ejectors

    NASA Astrophysics Data System (ADS)

    Whitley, N.; Krothapalli, A.; van Dommelen, L.

    1996-01-01

    A theoretical analysis of the compressible flow through a constant-area jet-engine ejector in which a primary jet mixes with ambient fluid from a uniform free stream is pursued. The problem is reduced to a determinate mathematical one by prescribing the ratios of stagnation properties between the primary and secondary flows. For some selections of properties and parameters more than one solution is possible and the meaning of these solutions is discussed by means of asymptotic expansions. Our results further show that while under stationary conditions the thrust-augmentation ratio assumes a value of 2 in the large area-ratio limit, for a free-stream Mach number greater than 0.6 very little thrust augmentation is left. Due to the assumptions made, the analysis provides idealized values for the thrust-augmentation ratio and the mass flux entrainment factor.

  18. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  19. Liquid secondary waste: Waste form formulation and qualification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cozzi, A. D.; Dixon, K. L.; Hill, K. A.

    The Hanford Site Effluent Treatment Facility (ETF) currently treats aqueous waste streams generated during site cleanup activities. When the Hanford Tank Waste Treatment and Immobilization Plant (WTP) begins operations, including Direct Feed Low Activity Waste (DFLAW) vitrification, a liquid secondary waste (LSW) stream from the WTP will need to be treated, and the volume of effluent for treatment at the ETF will increase significantly. The powdered salt waste form produced by the ETF will be replaced by a stabilized, solidified waste form for disposal in Hanford's Integrated Disposal Facility (IDF). Washington River Protection Solutions is implementing a Secondary Liquid Waste Immobilization Technology Development Plan to address the technology needs for a waste form and solidification process to treat the increased volume of waste planned for disposal at the IDF. Waste form testing to support this plan is composed of near-term work to provide data as input to a performance assessment (PA) for Hanford's IDF. In 2015, three Hanford liquid secondary waste simulants were developed based on existing and projected waste streams. Using these waste simulants, fourteen mixes of Hanford liquid secondary waste were prepared and tested, varying the waste simulant, the water-to-dry-materials ratio, and the dry materials blend composition. In FY16, testing was performed using a simulant of the EMF process condensate blended with the caustic scrubber (from the Low Activity Waste (LAW) melter) and processed through the ETF. The initial EMF-16 simulant will be based on modeling efforts performed to determine the mass balance of the ETF for the DFLAW. The compressive strength of all of the mixes exceeded the target of 3.4 MPa (500 psi) to meet the requirements identified as potential IDF Waste Acceptance Criteria in Table 1 of the Secondary Liquid Waste Immobilization Technology Development Plan. The hydraulic properties of the waste forms tested (hydraulic conductivity and water characteristic curves) were comparable to the properties measured on the Savannah River Site (SRS) Saltstone waste form. Future testing should include efforts to: 1) determine the rate and amount of ammonia released during each unit operation of the treatment process, to establish whether additional ammonia management is required; then 2) reduce the ammonia content of the ETF concentrated brine prior to solidification, making the waste more amenable to grouting; or 3) manage the release of ammonia during production and its ongoing release during storage of the waste form; or 4) develop a lower-pH process/waste form, thereby precluding ammonia release.

  20. Incorporation of additives into polymers

    DOEpatents

    McCleskey, T. Mark; Yates, Matthew Z.

    2003-07-29

    There has been invented a method for incorporating additives into polymers comprising: (a) forming an aqueous or alcohol-based colloidal system of the polymer; (b) emulsifying the colloidal system with a compressed fluid; and (c) contacting the colloidal polymer with the additive in the presence of the compressed fluid. The colloidal polymer can be contacted with the additive by having the additive in the compressed fluid used for emulsification or by adding the additive to the colloidal system before or after emulsification with the compressed fluid. The invention process can be carried out either as a batch process or as a continuous on-line process.

  1. Two stroke engine exhaust emissions separator

    DOEpatents

    Turner, Terry D.; Wilding, Bruce M.; McKellar, Michael G.; Raterman, Kevin T.

    2003-04-22

    A separator for substantially resolving at least one component of a process stream, such as from the exhaust of an internal combustion engine. The separator includes a body defining a chamber therein. A nozzle housing is located proximate the chamber. An exhaust inlet is in communication with the nozzle housing and the chamber. A nozzle assembly is positioned in the nozzle housing and includes a nozzle moveable within and relative to the nozzle housing. The nozzle includes at least one passage formed therethrough such that a process stream entering the exhaust inlet connection passes through the passage formed in the nozzle and imparts a substantially rotational flow to the process stream as it enters the chamber. A positioning member is configured to position the nozzle relative to the nozzle housing in response to changes in process stream pressure thereby adjusting flowrate of said process stream entering into the chamber.

  2. Two stroke engine exhaust emissions separator

    DOEpatents

    Turner, Terry D.; Wilding, Bruce M.; McKellar, Michael G.; Raterman, Kevin T.

    2002-01-01

    A separator for substantially resolving at least one component of a process stream, such as from the exhaust of an internal combustion engine. The separator includes a body defining a chamber therein. A nozzle housing is located proximate the chamber. An exhaust inlet is in communication with the nozzle housing and the chamber. A nozzle assembly is positioned in the nozzle housing and includes a nozzle moveable within and relative to the nozzle housing. The nozzle includes at least one passage formed therethrough such that a process stream entering the exhaust inlet connection passes through the passage formed in the nozzle, which imparts a substantially rotational flow to the process stream as it enters the chamber. A positioning member is configured to position the nozzle relative to the nozzle housing in response to changes in process stream pressure to adjust flowrate of said process stream entering into the chamber.

  3. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and their results corroborate the benefits of the proposed methodology.
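
    A minimal sketch of the two on-board degradation steps, assuming simple block averaging for both the spatial and the spectral operators (the abstract leaves the actual operators open, so this is only an illustration):

```python
import numpy as np

def compress_onboard(hsi, spatial_factor=4, out_bands=8):
    """Produce the two downlinked products from a hyperspectral cube of
    shape (rows, cols, bands): a spatially degraded hyperspectral image
    and a spectrally degraded (multispectral) image."""
    r, c, b = hsi.shape
    # Step 1: spatial degradation by averaging spatial_factor x spatial_factor tiles.
    lr_hsi = hsi.reshape(r // spatial_factor, spatial_factor,
                         c // spatial_factor, spatial_factor, b).mean(axis=(1, 3))
    # Step 2: spectral degradation by averaging contiguous band groups.
    hr_msi = hsi.reshape(r, c, out_bands, b // out_bands).mean(axis=3)
    return lr_hsi, hr_msi

cube = np.random.rand(64, 64, 128).astype(np.float32)
lr_hsi, hr_msi = compress_onboard(cube)
print(lr_hsi.shape, hr_msi.shape)  # (16, 16, 128) (64, 64, 8)
```

    With these factors the downlinked volume is fixed in advance, as the abstract notes: 16*16*128 + 64*64*8 = 65,536 samples against the original 64*64*128 = 524,288, i.e., an 8:1 ratio before any further coding.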

  4. DWPF RECYCLE EVAPORATOR FLOWSHEET EVALUATION (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M

    2005-04-30

    The Defense Waste Processing Facility (DWPF) converts the high level waste slurries stored at the Savannah River Site into borosilicate glass for long-term storage. The vitrification process results in the generation of approximately five gallons of dilute recycle streams for each gallon of waste slurry vitrified. This dilute recycle stream is currently transferred to the H-area Tank Farm and amounts to approximately 1,400,000 gallons of effluent per year. Process changes to incorporate salt waste could increase the amount of effluent to approximately 2,900,000 gallons per year. The recycle consists of two major streams and four smaller streams. The first major recycle stream is condensate from the Chemical Process Cell (CPC), and is collected in the Slurry Mix Evaporator Condensate Tank (SMECT). The second major recycle stream is the melter offgas which is collected in the Off Gas Condensate Tank (OGCT). The four smaller streams are the sample flushes, sump flushes, decon solution, and High Efficiency Mist Eliminator (HEME) dissolution solution. These streams are collected in the Decontamination Waste Treatment Tank (DWTT) or the Recycle Collection Tank (RCT). All recycle streams are currently combined in the RCT and treated with sodium nitrite and sodium hydroxide prior to transfer to the tank farm. Tank Farm space limitations and previous outages in the 2H Evaporator system due to deposition of sodium alumino-silicates have led to evaluation of alternative methods of dealing with the DWPF recycle. One option identified for processing the recycle was a dedicated evaporator to concentrate the recycle stream to allow the solids to be recycled to the DWPF Sludge Receipt and Adjustment Tank (SRAT) and the condensate from this evaporation process to be sent and treated in the Effluent Treatment Plant (ETP). In order to meet process objectives, the recycle stream must be concentrated to 1/30th of the feed volume during the evaporation process. The concentrated stream must be pumpable to the DWPF SRAT vessel and should not precipitate solids to avoid fouling the evaporator vessel and heat transfer coils. The evaporation process must not generate excessive foam and must have a high Decontamination Factor (DF) for many species in the evaporator feed to allow the condensate to be transferred to the ETP. An initial scoping study was completed in 2001 to evaluate the feasibility of the evaporator which concluded that the concentration objectives could be met. This initial study was based on initial estimates of recycle concentration and was based solely on OLI modeling of the evaporation process. The Savannah River National Laboratory (SRNL) has completed additional studies using simulated recycle streams and OLI® simulations. Based on this work, the proposed flowsheet for the recycle evaporator was evaluated for feasibility, evaporator design considerations, and impact on the DWPF process. This work was in accordance with guidance from DWPF-E and was performed in accordance with the Technical Task and Quality Assurance Plan.

  5. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Benton, Nathanael; Burns, Patrick

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: a high-efficiency/variable speed drive (VSD) compressor replacing a modulating, load/unload, or constant-speed compressor; and a compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  6. High rates of organic carbon processing in the hyporheic zone of intermittent streams.

    PubMed

    Burrows, Ryan M; Rutlidge, Helen; Bond, Nick R; Eberhard, Stefan M; Auhl, Alexandra; Andersen, Martin S; Valdez, Dominic G; Kennard, Mark J

    2017-10-16

    Organic carbon cycling is a fundamental process that underpins energy transfer through the biosphere. However, little is known about the rates of particulate organic carbon processing in the hyporheic zone of intermittent streams, which is often the only wetted environment remaining when surface flows cease. We used leaf litter and cotton decomposition assays, as well as rates of microbial respiration, to quantify rates of organic carbon processing in surface and hyporheic environments of intermittent and perennial streams under a range of substrate saturation conditions. Leaf litter processing was 48% greater, and cotton processing 124% greater, in the hyporheic zone compared to surface environments when calculated over multiple substrate saturation conditions. Processing was also greater in more saturated surface environments (i.e. pools). Further, rates of microbial respiration on incubated substrates in the hyporheic zone were similar to, or greater than, rates in surface environments. Our results highlight that intermittent streams are important locations for particulate organic carbon processing and that the hyporheic zone sustains this fundamental process even without surface flow. Not accounting for carbon processing in the hyporheic zone of intermittent streams may lead to an underestimation of its local ecological significance and collective contribution to landscape carbon processes.

  7. Where’s Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene

    PubMed Central

    Chang, Hung-Cheng; Grossberg, Stephen; Cao, Yongqiang

    2014-01-01

    The Where’s Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where’s Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC). PMID:24987339

  8. A very efficient RCS data compression and reconstruction technique, volume 4

    NASA Technical Reports Server (NTRS)

    Tseng, N. Y.; Burnside, W. D.

    1992-01-01

    A very efficient compression and reconstruction scheme for RCS measurement data was developed. The compression is done by isolating the scattering mechanisms on the target and recording their individual responses in the frequency and azimuth scans, respectively. The reconstruction, which is the inverse of the compression, is guaranteed by the sampling theorem. Two sets of data, the corner reflectors and the F-117 fighter model, were processed, and the results were shown to be convincing. The compression ratio can be as large as several hundred, depending on the target's geometry and scattering characteristics.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.

    While streaming data have become increasingly popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings, and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems, including C-SPARQL, use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival-time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance", which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
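
    The eviction idea can be sketched as follows; the class and its API are our illustration of combining source-assigned expiration with arrival order, not the authors' implementation:

```python
import time

class ExpiryAwareCache:
    """Bounded cache for stream items carrying expiration timestamps:
    expired items are dropped first, and only then does eviction fall
    back to the oldest arrival (as a plain arrival-time window would)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = {}  # key -> (arrival_ts, expiration_ts, value)

    def put(self, key, value, expiration_ts: float) -> None:
        now = time.time()
        # First eviction criterion: source-assigned expiration.
        self.items = {k: v for k, v in self.items.items() if v[1] > now}
        if len(self.items) >= self.capacity:
            # Fallback criterion: oldest arrival.
            oldest = min(self.items, key=lambda k: self.items[k][0])
            del self.items[oldest]
        self.items[key] = (now, expiration_ts, value)
```

    A "semantic importance" score could replace the arrival timestamp in the fallback step, so that items expected to matter for the reasoning task outlive merely older ones.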

  10. Geospatial Data Stream Processing in Python Using FOSS4G Components

    NASA Astrophysics Data System (ADS)

    McFerren, G.; van Zyl, T.

    2016-06-01

    One viewpoint of current and future IT systems holds that there is an increase in the scale and velocity at which data are acquired and analysed from heterogeneous, dynamic sources. In the earth observation and geoinformatics domains, this process is driven by the increase in number and types of devices that report location and the proliferation of assorted sensors, from satellite constellations to oceanic buoy arrays. Much of these data will be encountered as self-contained messages on data streams - continuous, infinite flows of data. Spatial analytics over data streams concerns the search for spatial and spatio-temporal relationships within and amongst data "on the move". In spatial databases, queries can assess a store of data to unpack spatial relationships; this is not the case on streams, where spatial relationships need to be established with the incomplete data available. Methods for spatially-based indexing, filtering, joining and transforming of streaming data need to be established and implemented in software components. This article describes the usage patterns and performance metrics of a number of well known FOSS4G Python software libraries within the data stream processing paradigm. In particular, we consider the RTree library for spatial indexing, the Shapely library for geometric processing and transformation and the PyProj library for projection and geodesic calculations over streams of geospatial data. We introduce a message oriented Python-based geospatial data streaming framework called Swordfish, which provides data stream processing primitives, functions, transports and a common data model for describing messages, based on the Open Geospatial Consortium Observations and Measurements (O&M) and Unidata Common Data Model (CDM) standards. We illustrate how the geospatial software components are integrated with the Swordfish framework. Furthermore, we describe the tight temporal constraints under which geospatial functionality can be invoked when processing high velocity, potentially infinite geospatial data streams. The article discusses the performance of these libraries under simulated streaming loads (size, complexity and volume of messages) and how they can be deployed and utilised with Swordfish under real load scenarios, illustrated by a set of Vessel Automatic Identification System (AIS) use cases. We conclude that the described software libraries are able to perform adequately under geospatial data stream processing scenarios - many real application use cases will be handled sufficiently by the software.
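
    A condensed example of the usage pattern described above, combining the three libraries over a stream of AIS-like position messages (the zone coordinates and vessel identifier are invented for illustration; the Swordfish framework itself is not shown):

```python
from rtree import index
from shapely.geometry import Point, Polygon
from pyproj import Geod

# One zone of interest indexed by its bounding box; a real deployment
# would index many such zones.
harbour = Polygon([(18.40, -33.90), (18.50, -33.90),
                   (18.50, -33.80), (18.40, -33.80)])
zones = index.Index()
zones.insert(0, harbour.bounds)

geod = Geod(ellps="WGS84")  # geodesic calculations on the WGS84 ellipsoid
last_position = {}          # vessel id -> (lon, lat)

def process_message(vessel_id, lon, lat):
    """Filter-and-transform step for one message on the stream."""
    # Cheap R-tree candidate lookup before the exact Shapely test.
    if list(zones.intersection((lon, lat, lon, lat))) and Point(lon, lat).within(harbour):
        print(f"{vessel_id} is inside the harbour zone")
    if vessel_id in last_position:
        lon0, lat0 = last_position[vessel_id]
        _, _, dist_m = geod.inv(lon0, lat0, lon, lat)
        print(f"{vessel_id} moved {dist_m:.0f} m since the last message")
    last_position[vessel_id] = (lon, lat)

process_message("ZA-001", 18.45, -33.85)
process_message("ZA-001", 18.46, -33.84)
```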

  11. The Role of Riparian Vegetation in Protecting and Improving Chemical Water Quality in Streams

    Treesearch

    Michael G. Dosskey; Philippe Vidon; Noel P. Gurwick; Craig J. Allan; Tim P. Duval; Richard Lowrance

    2010-01-01

    We review the research literature and summarize the major processes by which riparian vegetation influences chemical water quality in streams, as well as how these processes vary among vegetation types, and discuss how these processes respond to removal and restoration of riparian vegetation and thereby determine the timing and level of response in stream water quality...

  12. New metrics for evaluating channel networks extracted in grid digital elevation models

    NASA Astrophysics Data System (ADS)

    Orlandini, S.; Moretti, G.

    2017-12-01

    Channel networks are critical components of drainage basins and delta regions. Despite the important role played by these systems in hydrology and geomorphology, there are at present no well-defined methods to evaluate numerically how geometrically far apart two complex channel networks are. The present study introduces new metrics for numerically evaluating channel networks extracted in grid digital elevation models with respect to a reference channel network. Streams of the evaluated network (EN) are delineated as in the Horton ordering system and examined through a priority climbing algorithm based on the triple index (ID1, ID2, ID3), where ID1 is a stream identifier that increases as the elevation of the lower end of the stream increases, ID2 indicates the ID1 of the draining stream, and ID3 is the ID1 of the corresponding stream in the reference network (RN). Streams of the RN are identified by the double index (ID1, ID2). Streams of the EN are processed in order of increasing ID1. For each processed stream of the EN, the closest stream of the RN is sought by considering all the streams of the RN sharing the same ID2; this ID2 in the RN is equal, in the EN, to the ID3 of the stream draining the processed stream, the one having ID1 equal to the ID2 of the processed stream. The mean stream planar distance (MSPD) and the mean stream elevation drop (MSED) are computed as the mean distance and drop, respectively, between corresponding streams. The MSPD is shown to be useful for evaluating slope direction methods and thresholds for channel initiation, whereas the MSED indicates the ability of grid coarsening strategies to retain the profiles of observed channels. The developed metrics fill a gap in the existing literature by allowing hydrologists and geomorphologists to compare descriptions of a fixed physical system obtained by using different terrain analysis methods, or different physical systems described by using the same methods.
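
    Given matched stream pairs, the two metrics reduce to simple averages. The sketch below is our simplification, representing each matched stream by a single planar point and elevation rather than a full polyline:

```python
import math

def mspd_msed(pairs):
    """Mean stream planar distance (MSPD) and mean stream elevation drop
    (MSED) over pairs of corresponding streams, each given here as an
    (x, y, z) tuple for the evaluated and the reference network."""
    n = len(pairs)
    mspd = sum(math.hypot(e[0] - r[0], e[1] - r[1]) for e, r in pairs) / n
    msed = sum(abs(e[2] - r[2]) for e, r in pairs) / n
    return mspd, msed

matched = [((10.0, 5.0, 102.0), (12.0, 5.0, 100.0)),
           ((40.0, 8.0, 95.0), (40.0, 11.0, 96.5))]
print(mspd_msed(matched))  # (2.5, 1.75)
```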

  13. Transform-Based Channel-Data Compression to Improve the Performance of a Real-Time GPU-Based Software Beamformer.

    PubMed

    Lok, U-Wai; Li, Pai-Chi

    2016-03-01

    Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
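
    To convey the flavor of a nearly lossless transform-coding stage, the sketch below uses a DCT with uniform quantization as stand-ins; the paper's actual transform and the lossless entropy coder that would follow are not reproduced here:

```python
import numpy as np
from scipy.fft import dct, idct

def encode(channel, step=4):
    """Transform the channel data and coarsely quantize the coefficients,
    suppressing small amplitudes (the nearly lossless step)."""
    coeffs = dct(channel.astype(np.float64), norm="ortho")
    return np.round(coeffs / step).astype(np.int32)

def decode(quantized, step=4):
    return idct(quantized.astype(np.float64) * step, norm="ortho")

rf = (1000 * np.sin(np.linspace(0, 40, 2048))).astype(np.int16)
reconstructed = decode(encode(rf))
print(np.max(np.abs(reconstructed - rf)))  # small but nonzero error
```

    The quantization step trades a bounded reconstruction error against the entropy of the coefficient stream, which is the kind of trade-off that moves the compression ratio from the lossless ~1.8 toward 2.5.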

  14. VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans

    NASA Astrophysics Data System (ADS)

    Wang, Song; Gupta, Chetan; Mehta, Abhay

    There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.
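
    A toy illustration of the idea behind chain-style scheduling: decompose the DAG query plan into linear chains, each of which can then run as one pipeline. This is not the VPipe algorithm itself:

```python
from collections import defaultdict

def chain_decompose(edges):
    """Greedily split a DAG of operators into linear chains: walk from
    each source and extend the chain only while the plan stays linear
    (a single successor that has a single predecessor)."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))

    visited, chains = set(), []

    def walk(start):
        node, chain = start, []
        while node is not None and node not in visited:
            chain.append(node)
            visited.add(node)
            nxt = [s for s in succ[node] if s not in visited and indeg[s] == 1]
            node = nxt[0] if len(succ[node]) == 1 and nxt else None
        return chain

    for source in sorted(n for n in nodes if indeg[n] == 0):
        chains.append(walk(source))
    for n in sorted(nodes):  # fan-in operators left over start new chains
        if n not in visited:
            chains.append(walk(n))
    return chains

plan = [("scan", "filter"), ("filter", "join"), ("scan2", "join"), ("join", "sink")]
print(chain_decompose(plan))  # [['scan', 'filter'], ['scan2'], ['join', 'sink']]
```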

  15. Investigating category- and shape-selective neural processing in ventral and dorsal visual stream under interocular suppression.

    PubMed

    Ludwig, Karin; Kathmann, Norbert; Sterzer, Philipp; Hesselmann, Guido

    2015-01-01

    Recent behavioral and neuroimaging studies using continuous flash suppression (CFS) have suggested that action-related processing in the dorsal visual stream might be independent of perceptual awareness, in line with the "vision-for-perception" versus "vision-for-action" distinction of the influential dual-stream theory. It remains controversial if evidence suggesting exclusive dorsal stream processing of tool stimuli under CFS can be explained by their elongated shape alone or by action-relevant category representations in dorsal visual cortex. To approach this question, we investigated category- and shape-selective functional magnetic resonance imaging-blood-oxygen level-dependent responses in both visual streams using images of faces and tools. Multivariate pattern analysis showed enhanced decoding of elongated relative to non-elongated tools, both in the ventral and dorsal visual stream. The second aim of our study was to investigate whether the depth of interocular suppression might differentially affect processing in dorsal and ventral areas. However, parametric modulation of suppression depth by varying the CFS mask contrast did not yield any evidence for differential modulation of category-selective activity. Together, our data provide evidence for shape-selective processing under CFS in both dorsal and ventral stream areas and, therefore, do not support the notion that dorsal "vision-for-action" processing is exclusively preserved under interocular suppression. © 2014 Wiley Periodicals, Inc.

  16. Foundations for Streaming Model Transformations by Complex Event Processing.

    PubMed

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing allows to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
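
    A minimal sequence matcher conveys the core CEP ingredient of recognizing an ordered pattern over an event stream; the event encoding and predicates below are invented for illustration and are far simpler than the VIATRA pattern language:

```python
class SequenceMatcher:
    """Fire when events satisfying the pattern's predicates occur in
    order (a toy complex-event matcher, ignoring windows, negation
    and parameter binding)."""

    def __init__(self, pattern):
        self.pattern = pattern  # one boolean predicate per step
        self.state = 0

    def feed(self, event) -> bool:
        if self.pattern[self.state](event):
            self.state += 1
            if self.state == len(self.pattern):
                self.state = 0
                return True  # full pattern matched
        return False

# Pattern: a model element is created and later deleted.
matcher = SequenceMatcher([
    lambda e: e["op"] == "create",
    lambda e: e["op"] == "delete",
])
stream = [{"op": "create"}, {"op": "update"}, {"op": "delete"}]
print([matcher.feed(e) for e in stream])  # [False, False, True]
```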

  17. A Stream Morphology Classification for Eco-hydraulic Purposes Based on Geospatial Data: a Solute Transport Application Case

    NASA Astrophysics Data System (ADS)

    Jiménez Jaramillo, M. A.; Camacho Botero, L. A.; Vélez Upegui, J. I.

    2010-12-01

    Variation in stream morphology along a basin drainage network leads to different hydraulic patterns and sediment transport processes. Moreover, solute transport processes along streams, and stream habitats for fisheries and microorganisms, rely on stream corridor structure, including elements such as bed forms, channel patterns, riparian vegetation, and the floodplain. In this work, solute transport simulation and stream habitat identification are carried out at the basin scale. A reach-scale morphological classification system based on channel slope and specific stream power was implemented by using digital elevation models and hydraulic geometry relationships. Although the morphological framework allows identification of cascade, step-pool, plane-bed and pool-riffle morphologies along the drainage network, it still does not account for floodplain configuration or bed-form identification within those channel types. Hence, as a first application case, and in order to obtain parsimonious three-dimensional characterizations of drainage channels, the morphological framework has been updated by including topographical floodplain delimitation through assessment of a Multi-resolution Valley Bottom Flatness Index, and a stochastic bed-form representation of the step-pool morphology. Model outcomes were tested in relation to in-stream water storage for different flow conditions and representative travel times according to the Aggregated Dead Zone (ADZ) model conceptualization of solute transport processes.

  18. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  19. Sewage treatment method

    DOEpatents

    Fassbender, Alex G.

    1995-01-01

    The invention greatly reduces the amount of ammonia in sewage plant effluent. The process of the invention has three main steps. The first step is dewatering without first digesting, thereby producing a first ammonia-containing stream having a low concentration of ammonia, and a second solids-containing stream. The second step is sending the second solids-containing stream through a means for separating the solids from the liquid and producing an aqueous stream containing a high concentration of ammonia. The third step is removal of ammonia from the aqueous stream using a hydrothermal process.

  20. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
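
    The partitioned, independently compressed layout can be sketched with an off-the-shelf wavelet library; PyWavelets stands in for the ICER-3D transform here, and the context-modeling and entropy-coding stages are omitted:

```python
import numpy as np
import pywt

# Hypothetical hyperspectral cube: (bands, rows, cols).
cube = np.random.rand(32, 64, 64).astype(np.float32)

# Four spatial error-containment partitions, each extending through all
# wavelength bands as described above.
partitions = [cube[:, r:r + 32, c:c + 32] for r in (0, 32) for c in (0, 32)]

# Each partition is wavelet-decomposed independently, so loss or
# corruption of one partition's bitstream cannot affect the others.
coeff_sets = [pywt.wavedecn(p, wavelet="bior4.4", level=2) for p in partitions]
print(len(coeff_sets), coeff_sets[0][0].shape)  # 4 independent decompositions
```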

  1. Macrophyte presence is an indicator of enhanced denitrification and nitrification in sediments of a temperate restored agricultural stream

    EPA Science Inventory

    Stream macrophytes are often removed with their sediments to deepen stream channels, stabilize channel banks, or provide habitat for target species. These sediments may support enhanced nitrogen processing. To evaluate sediment nitrogen processing, identify seasonal patterns, and...

  2. Serial and Parallel Processing in the Primate Auditory Cortex Revisited

    PubMed Central

    Recanzone, Gregg H.; Cohen, Yale E.

    2009-01-01

    Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner in which there is a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the “what”/“where” processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case and how to better test this hypothesis is also discussed. PMID:19686779

  3. The OGC Innovation Program Testbeds - Advancing Architectures for Earth and Systems

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Percivall, G.; Simonis, I.; Serich, S.

    2017-12-01

    The OGC Innovation Program provides a collaborative agile process for solving challenging science problems and advancing new technologies. Since 1999, 100 initiatives have taken place, from multi-million dollar testbeds to small interoperability experiments. During these initiatives, sponsors and technology implementers (including academia and the private sector) come together to solve problems, produce prototypes, develop demonstrations, provide best practices, and advance the future of standards. This presentation will provide the latest system architectures that can be used for Earth and space systems as a result of OGC Testbed 13, including the following components: an elastic cloud autoscaler for Earth Observations (EO) using a WPS in an ESGF hybrid climate data research platform; accessibility of climate data for scientist and non-scientist users via on-demand models wrapped in WPS; standard descriptions for containerized applications to discover processes on the cloud, including using linked data, a WPS extension for hybrid clouds, and linking to hybrid big data stores; OpenID and OAuth to secure OGC services with built-in Attribute Based Access Control (ABAC) infrastructures leveraging GeoDRM patterns; publishing and access of vector tiles, including use of compression and attribute options reusing patterns from WMS, WMTS and WFS; servers providing 3D Tiles and streaming of data, including Indexed 3D Scene Layer (I3S), CityGML and Common DataBase (CDB); and asynchronous services with advanced push-notification strategies, with a filter language instead of simple topic subscriptions, that can be used across OGC services. Testbed 14 will continue advancing topics like Big Data, security, and streaming, as well as making OGC services easier to use (e.g., via RESTful APIs). The Call for Participation will be issued in December, and responses are due in mid-January 2018.

  4. The OGC Innovation Program Testbeds - Advancing Architectures for Earth and Systems

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Percivall, G.; Simonis, I.; Serich, S.

    2016-12-01

    The OGC Innovation Program provides a collaborative agile process for solving challenging science problems and advancing new technologies. Since 1999, 100 initiatives have taken place, from multi-million dollar testbeds to small interoperability experiments. During these initiatives, sponsors and technology implementers (including academia and the private sector) come together to solve problems, produce prototypes, develop demonstrations, provide best practices, and advance the future of standards. This presentation will provide the latest system architectures that can be used for Earth and space systems as a result of OGC Testbed 13, including the following components: an elastic cloud autoscaler for Earth Observations (EO) using a WPS in an ESGF hybrid climate data research platform; accessibility of climate data for scientist and non-scientist users via on-demand models wrapped in WPS; standard descriptions for containerized applications to discover processes on the cloud, including using linked data, a WPS extension for hybrid clouds, and linking to hybrid big data stores; OpenID and OAuth to secure OGC services with built-in Attribute Based Access Control (ABAC) infrastructures leveraging GeoDRM patterns; publishing and access of vector tiles, including use of compression and attribute options reusing patterns from WMS, WMTS and WFS; servers providing 3D Tiles and streaming of data, including Indexed 3D Scene Layer (I3S), CityGML and Common DataBase (CDB); and asynchronous services with advanced push-notification strategies, with a filter language instead of simple topic subscriptions, that can be used across OGC services. Testbed 14 will continue advancing topics like Big Data, security, and streaming, as well as making OGC services easier to use (e.g., via RESTful APIs). The Call for Participation will be issued in December, and responses are due in mid-January 2018.

  5. Geology of the Arabian Peninsula; shield area of western Saudi Arabia

    USGS Publications Warehouse

    Brown, Glen F.; Schmidt, Dwight L.; Huffman, A. Curtis

    1989-01-01

    A second stage of sea-floor spreading, beginning about 4-5 m.y. ago, produced the Red Sea axial trough, consisting of oceanic crust, as well as renewed uplift and tilting of the three tectonic provinces in response to compression from counterclockwise rotation against the Dead Sea Rift. This late movement caused widespread major stream capture, especially along the wadis that formerly drained southwesterly or northwesterly, their channels turning westward through narrow gorges to the coastal plain and the Red Sea.

  6. Influence of Spectral Transfer Processes in Compressible Low Frequency Plasma Turbulence on Scattering and Refraction of Electromagnetic Signals

    DTIC Science & Technology

    2015-01-01

    AFRL-RY-WP-TR-2014-0230, Influence of Spectral Transfer Processes in Compressible Low Frequency Plasma Turbulence on Scattering and Refraction of Electromagnetic Signals. The objective of this research is to analyze the influence of plasma turbulence on hypersonic sensor systems and NGOTHR applications and to meet the Air Force's ever-increasing...

  7. Rock Erodibility as a Dynamic Variable Driven by the Interplay between Erosion and Weathering in Bedrock Channels: Examples from Great Falls, Virginia, USA

    NASA Astrophysics Data System (ADS)

    Hancock, G. S.; Huettenmoser, J.; Shobe, C. M.; Eppes, M. C.

    2016-12-01

    Rock erodibility in channels is a primary control on the stresses required to erode bedrock (e.g., Sklar and Dietrich, 2001). Erodibility tends to be treated as a uniform and fixed variable at the scale of channel cross-sections, particularly in models of channel profile evolution. Here we present field data supporting the hypothesis (Hancock et al., 2011) that erodibility is a dynamic variable, driven by the interplay between erosion rate and weathering processes within cross-sections. We hypothesize that rock weathering varies within cross-sections from virtually unweathered in the thalweg, where frequent stripping removes weathered rock, to a degree of weathering determined by the frequency of erosive events higher on the channel margin. We test this hypothesis on three tributaries to the Potomac River underlain by similar bedrock but with varying erosion rates (about 0.01 to 0.8 m/ky). At multiple heights within three cross-sections on each tributary, we measured compressive strength with a Schmidt hammer, surface roughness with a contour gage, and the density and length of visible cracks. Compressive strength decreased with height in all nine cross-sections by 10% to 50%, and surface roughness increased with height in seven cross-sections by 25% to 45%, with the remaining two showing minimal change. Crack density increased with height in the three cross-sections measured. Taken together, these data demonstrate increases in weathering intensity, and presumably rock erodibility, with height. The y-intercepts of the relations between height and the three measured variables were nearly identical, suggesting that thalweg erodibility was similar on each channel, as predicted, even though erodibilities higher in the cross-sections were markedly different. The rate at which the three variables changed with height in each cross-section is strongly related to stream power. Assuming stream power is a reasonable surrogate for erosion rate, this result implies that erosion rate can be a primary influence on the distribution of erodibility within channel cross-sections. We conclude that the interplay between rates of erosion and weathering produces spatial as well as temporal variability in erodibility which, in turn, influences channel form and gradient.

  8. Ultrasonic manipulation of particles and cells. Ultrasonic separation of cells.

    PubMed

    Coakley, W T; Whitworth, G; Grundy, M A; Gould, R K; Allman, R

    1994-04-01

    Cells or particles suspended in a sonic standing wave field experience forces which concentrate them at positions separated by half a wavelength. The aims of the study were: (1) to optimise conditions and test theoretical predictions for ultrasonic concentration and separation of particles or cells; (2) to investigate the scale-up of experimental systems; (3) to establish the maximum acoustic pressure to which a suspension might be exposed without inducing order-disrupting cavitation; and (4) to compare the efficiencies of techniques for harvesting concentrated particles. The primary outcomes were: (1) The design of an acoustic pressure distribution within cylindrical containers which led to uniformly repeating sound pressure patterns throughout the containers in the standing wave mode, concentrated suspended eukaryotic cells or latex beads in clumps on the axis of wide containers, and provided a uniform response of all particle clumps to acoustic harvesting regimes. Theory for the behaviour (e.g. movement to different preferred sites) of particles as a function of specific gravity and compressibility in containers of different lateral dimensions was extended and confirmed experimentally. Convective streaming in the container was identified as a variable requiring control in the manipulation of particles of 1 micron or smaller size. (2) Consideration of scale-up from the model 10 ml volume led to the conclusion that flow systems in intermediate-volume containers have more promise than scaled-up batch systems. (3) The maximum acoustic pressures applicable to a suspension at 1 MHz and 3 MHz, without inducing order-disrupting cavitation or excessive convective streaming, induce a force equivalent to a centrifugal field of about 10^3 g. (4) The most efficient technique for harvesting concentrated particles was the introduction of a frequency increment between two transducers to form a slowly sweeping pseudo-standing wave. The attractive inter-droplet ultrasonic standing wave force was employed to enhance the rate of aqueous biphasic cell separation and harvesting. The results help clarify the particle size, concentration, density and compressibility for which standing wave separation techniques can contribute, either on a process engineering scale or on the scale of the manipulation of small particles for industrial and medical diagnostic procedures.

  9. EFFECTS OF HYDROLOGY ON NITROGEN PROCESSING IN A RESTORED URBAN STREAM

    EPA Science Inventory

    In 2001, EPA undertook an intensive research effort to evaluate the impact of stream restoration on water quality at a degraded stream in an urban watershed. An essential piece of this comprehensive study was to characterize, measure and quantify stream ground water/ stream wate...

  10. Experimental reductions in stream flow alter litter processing and consumer subsidies in headwater streams

    Treesearch

    Robert M. Northington; Jackson R. Webster

    2017-01-01

    Forested headwater streams are connected to their surrounding catchments by a reliance on terrestrial subsidies. Changes in precipitation patterns and stream flow represent a potential disruption in stream ecosystem function, as the delivery of terrestrial detritus to aquatic consumers and...

  11. 40 CFR 63.146 - Process wastewater provisions-reporting.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... wastewater provisions—reporting. (a) For each waste management unit, treatment process, or control device... for Group 2 wastewater streams. This paragraph does not apply to Group 2 wastewater streams that are used to comply with § 63.138(g). For Group 2 wastewater streams, the owner or operator shall include...

  12. 40 CFR 63.146 - Process wastewater provisions-reporting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... wastewater provisions—reporting. (a) For each waste management unit, treatment process, or control device... for Group 2 wastewater streams. This paragraph does not apply to Group 2 wastewater streams that are used to comply with § 63.138(g). For Group 2 wastewater streams, the owner or operator shall include...

  13. 40 CFR 63.146 - Process wastewater provisions-reporting.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... wastewater provisions—reporting. (a) For each waste management unit, treatment process, or control device... for Group 2 wastewater streams. This paragraph does not apply to Group 2 wastewater streams that are used to comply with § 63.138(g). For Group 2 wastewater streams, the owner or operator shall include...

  14. 40 CFR 63.146 - Process wastewater provisions-reporting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... wastewater provisions—reporting. (a) For each waste management unit, treatment process, or control device... for Group 2 wastewater streams. This paragraph does not apply to Group 2 wastewater streams that are used to comply with § 63.138(g). For Group 2 wastewater streams, the owner or operator shall include...

  15. Using Compressed Speech to Measure Simultaneous Processing in Persons with and without Visual Impairment

    ERIC Educational Resources Information Center

    Marks, William J.; Jones, W. Paul; Loe, Scott A.

    2013-01-01

    This study investigated the use of compressed speech as a modality for assessment of the simultaneous processing function in participants with visual impairment. A 24-item compressed speech test was created using a sound editing program to randomly remove sound elements from aural stimuli, holding pitch constant, with the objective of emulating the…

  16. Applications of data compression techniques in modal analysis for on-orbit system identification

    NASA Technical Reports Server (NTRS)

    Carlin, Robert A.; Saggio, Frank; Garcia, Ephrahim

    1992-01-01

    Data compression techniques have been investigated for use with modal analysis applications. A redundancy-reduction algorithm was used to compress frequency response functions (FRFs) in order to reduce the amount of disk space necessary to store the data and/or save time in processing it. Tests were performed for both single- and multiple-degree-of-freedom (SDOF and MDOF, respectively) systems, with varying amounts of noise. Analysis was done on both the compressed and uncompressed FRFs using an SDOF Nyquist curve fit as well as the Eigensystem Realization Algorithm. Significant savings were realized with minimal errors incurred by the compression process.
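
    A minimal sketch of one possible redundancy-reduction scheme for an FRF (not necessarily the report's algorithm, which is not specified here): drop samples that linear interpolation between retained neighbours can reproduce within a tolerance, so the stored curve keeps detail near the resonance and discards nearly flat regions.

        import numpy as np

        f = np.linspace(0.0, 100.0, 2001)                        # frequency, Hz
        h = 1.0 / (1.0 - (f / 40.0) ** 2 + 0.05j * (f / 40.0))   # SDOF-like FRF

        def compress(x, y, tol=1e-3):
            """Keep a sample only if interpolation cannot predict it within tol."""
            kept = [0]
            for i in range(1, len(x) - 1):
                j = kept[-1]
                pred = y[j] + (y[i + 1] - y[j]) * (x[i] - x[j]) / (x[i + 1] - x[j])
                if abs(pred - y[i]) > tol * abs(y[i]):
                    kept.append(i)
            kept.append(len(x) - 1)
            return np.array(kept)

        idx = compress(f, h)
        print("kept %d of %d samples" % (len(idx), len(f)))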

  17. Continuous-flow free acid monitoring method and system

    DOEpatents

    Strain, J.E.; Ross, H.H.

    1980-01-11

    A free acid monitoring method and apparatus is provided for continuously measuring the excess acid present in a process stream. The disclosed monitoring system and method is based on the relationship of the partial pressure ratio of water and acid in equilibrium with an acid solution at constant temperature. A portion of the process stream is pumped into and flows through the monitor under the influence of gravity and back to the process stream. A continuously flowing sample is vaporized at a constant temperature and the vapor is subsequently condensed. Conductivity measurements of the condensate produce a nonlinear response function from which the free acid molarity of the sample process stream is determined.

  18. Continuous-flow free acid monitoring method and system

    DOEpatents

    Strain, James E.; Ross, Harley H.

    1981-01-01

    A free acid monitoring method and apparatus is provided for continuously measuring the excess acid present in a process stream. The disclosed monitoring system and method is based on the relationship of the partial pressure ratio of water and acid in equilibrium with an acid solution at constant temperature. A portion of the process stream is pumped into and flows through the monitor under the influence of gravity and back to the process stream. A continuously flowing sample is vaporized at a constant temperature and the vapor is subsequently condensed. Conductivity measurements of the condensate produce a nonlinear response function from which the free acid molarity of the sample process stream is determined.
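
    A hypothetical calibration sketch for a monitor of this kind: fit the nonlinear response function relating condensate conductivity to free acid molarity from standards, then invert it for an unknown sample. The quadratic form and all numbers are illustrative assumptions, not values from the patent.

        import numpy as np

        molarity = np.array([0.1, 0.5, 1.0, 2.0, 4.0])           # standards, mol/L
        conductivity = np.array([4.2, 16.0, 27.5, 44.0, 62.0])   # mS/cm (made up)

        # One possible nonlinear form: conductivity as a quadratic in molarity.
        a, b, c = np.polyfit(molarity, conductivity, deg=2)

        def molarity_from_conductivity(kappa):
            """Invert the fitted response; keep the physically sensible root."""
            roots = np.roots([a, b, c - kappa])
            real = roots[np.isreal(roots)].real
            return float(real[real > 0].min())

        print("%.2f mol/L" % molarity_from_conductivity(30.0))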

  19. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or Ethernet, and over wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantage of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM (active appearance model) that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.
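
    As a rough illustration of such model-based coding (ordinary PCA on raw pixels, far simpler than the paper's AAM; all sizes and data below are synthetic stand-ins), the sender transmits a handful of model coefficients instead of pixels, and the receiver reconstructs from the shared offline model:

        import numpy as np

        rng = np.random.default_rng(0)
        faces = rng.normal(size=(200, 32 * 32))    # stand-in "face" image vectors
        mean = faces.mean(axis=0)
        # Principal components of the training set act as the shared offline model.
        _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
        basis = vt[:20]                            # keep 20 components

        def encode(img):
            """Sender: project onto the model; only these 20 floats are sent."""
            return basis @ (img - mean)

        def decode(coeffs):
            """Receiver: rebuild the image from the shared model + coefficients."""
            return mean + basis.T @ coeffs

        test = faces[0]
        print("pixels:", test.size, "-> coefficients:", 20,
              "| error: %.3f" % np.linalg.norm(test - decode(encode(test))))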

  20. Controlled temperature expansion in oxygen production by molten alkali metal salts

    DOEpatents

    Erickson, Donald C.

    1985-06-04

    A continuous process is set forth for the production of oxygen from an oxygen containing gas stream, such as air, by contacting a feed gas stream with a molten solution of an oxygen acceptor to oxidize the acceptor and cyclically regenerating the oxidized acceptor by releasing oxygen from the acceptor wherein the oxygen-depleted gas stream from the contact zone is treated sequentially to temperature reduction by heat exchange against the feed stream so as to condense out entrained oxygen acceptor for recycle to the process, combustion of the gas stream with fuel to elevate its temperature and expansion of the combusted high temperature gas stream in a turbine to recover power.

  1. Controlled temperature expansion in oxygen production by molten alkali metal salts

    DOEpatents

    Erickson, D.C.

    1985-06-04

    A continuous process is set forth for the production of oxygen from an oxygen containing gas stream, such as air, by contacting a feed gas stream with a molten solution of an oxygen acceptor to oxidize the acceptor and cyclically regenerating the oxidized acceptor by releasing oxygen from the acceptor wherein the oxygen-depleted gas stream from the contact zone is treated sequentially to temperature reduction by heat exchange against the feed stream so as to condense out entrained oxygen acceptor for recycle to the process, combustion of the gas stream with fuel to elevate its temperature and expansion of the combusted high temperature gas stream in a turbine to recover power. 1 fig.

  2. Methanation of gas streams containing carbon monoxide and hydrogen

    DOEpatents

    Frost, Albert C.

    1983-01-01

    Carbon monoxide-containing gas streams having a relatively high concentration of hydrogen are pretreated so as to remove the hydrogen in a recoverable form for use in the second step of a cyclic, essentially two-step process for the production of methane. The thus-treated streams are then passed over a catalyst to deposit a surface layer of active surface carbon thereon essentially without the formation of inactive coke. This active carbon is reacted with said hydrogen removed from the feed gas stream to form methane. The utilization of the CO in the feed gas stream is appreciably increased, enhancing the overall process for the production of relatively pure, low-cost methane from CO-containing waste gas streams.

  3. Method for removing undesired particles from gas streams

    DOEpatents

    Durham, M.D.; Schlager, R.J.; Ebner, T.G.; Stewart, R.M.; Hyatt, D.E.; Bustard, C.J.; Sjostrom, S.

    1998-11-10

    The present invention discloses a process for removing undesired particles from a gas stream including the steps of contacting a composition containing an adhesive with the gas stream; collecting the undesired particles and adhesive on a collection surface to form an aggregate comprising the adhesive and undesired particles on the collection surface; and removing the agglomerate from the collection zone. The composition may then be atomized and injected into the gas stream. The composition may include a liquid that vaporizes in the gas stream. After the liquid vaporizes, adhesive particles are entrained in the gas stream. The process may be applied to electrostatic precipitators and filtration systems to improve undesired particle collection efficiency. 11 figs.

  4. Method and apparatus for decreased undesired particle emissions in gas streams

    DOEpatents

    Durham, M.D.; Schlager, R.J.; Ebner, T.G.; Stewart, R.M.; Bustard, C.J.

    1999-04-13

    The present invention discloses a process for removing undesired particles from a gas stream including the steps of contacting a composition containing an adhesive with the gas stream; collecting the undesired particles and adhesive on a collection surface to form an aggregate comprising the adhesive and undesired particles on the collection surface; and removing the agglomerate from the collection zone. The composition may then be atomized and injected into the gas stream. The composition may include a liquid that vaporizes in the gas stream. After the liquid vaporizes, adhesive particles are entrained in the gas stream. The process may be applied to electrostatic precipitators and filtration systems to improve undesired particle collection efficiency. 5 figs.

  5. Method and apparatus for decreased undesired particle emissions in gas streams

    DOEpatents

    Durham, Michael Dean; Schlager, Richard John; Ebner, Timothy George; Stewart, Robin Michele; Bustard, Cynthia Jean

    1999-01-01

    The present invention discloses a process for removing undesired particles from a gas stream including the steps of contacting a composition containing an adhesive with the gas stream; collecting the undesired particles and adhesive on a collection surface to form an aggregate comprising the adhesive and undesired particles on the collection surface; and removing the agglomerate from the collection zone. The composition may then be atomized and injected into the gas stream. The composition may include a liquid that vaporizes in the gas stream. After the liquid vaporizes, adhesive particles are entrained in the gas stream. The process may be applied to electrostatic precipitators and filtration systems to improve undesired particle collection efficiency.

  6. Method for removing undesired particles from gas streams

    DOEpatents

    Durham, Michael Dean; Schlager, Richard John; Ebner, Timothy George; Stewart, Robin Michele; Hyatt, David E.; Bustard, Cynthia Jean; Sjostrom, Sharon

    1998-01-01

    The present invention discloses a process for removing undesired particles from a gas stream including the steps of contacting a composition containing an adhesive with the gas stream; collecting the undesired particles and adhesive on a collection surface to form an aggregate comprising the adhesive and undesired particles on the collection surface; and removing the agglomerate from the collection zone. The composition may then be atomized and injected into the gas stream. The composition may include a liquid that vaporizes in the gas stream. After the liquid vaporizes, adhesive particles are entrained in the gas stream. The process may be applied to electrostatic precipitators and filtration systems to improve undesired particle collection efficiency.

  7. Experimental service of 3DTV broadcasting relay in Korea

    NASA Astrophysics Data System (ADS)

    Hur, Namho; Ahn, Chung-Hyun; Ahn, Chieteuk

    2002-11-01

    This paper introduces 3D HDTV relay broadcasting experiments during the 2002 FIFA World Cup Korea/Japan using terrestrial and satellite networks. We developed 3D HDTV cameras, a 3D HDTV video multiplexer/demultiplexer, a 3D HDTV receiver, and a 3D HDTV OB van for field productions. Using the terrestrial and satellite networks, we distributed a compressed 3D HDTV signal to predetermined demonstration venues approved by Host Broadcast Services (HBS), KirchMedia, and FIFA. In this case, we transmitted a 40 Mbps MPEG-2 transport stream (DVB-ASI) over a DS-3 network specified in ITU-T Rec. G.703. The video and audio compression formats were MPEG-2 main profile at high level and Dolby Digital AC-3, respectively. At the venues, the left and right images recovered by the 3D HDTV receiver were displayed on a screen with polarized-beam projectors.
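
    For scale (format assumptions are ours, not figures from the paper), two uncompressed HD camera feeds far exceed the 40 Mbps transport stream mentioned above, so the codec must deliver a compression ratio on the order of 75:1:

        # Raw bitrate of an assumed stereo (left+right) HD feed versus the
        # 40 Mbps MPEG-2 transport stream; the pixel format is assumed.
        width, height, fps, bits_per_pixel = 1920, 1080, 30, 24  # 8-bit RGB assumed
        raw_stereo_bps = 2 * width * height * fps * bits_per_pixel
        print("raw stereo: %.2f Gbps" % (raw_stereo_bps / 1e9))
        print("needed compression vs 40 Mbps: %.0f:1" % (raw_stereo_bps / 40e6))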

  8. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article depicts insights into the design, creation and testing of a metric devised for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, with its core feature and functionality set to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective type of quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under strict provider evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  9. Computation of three-dimensional compressible boundary layers to fourth-order accuracy on wings and fuselages

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit

    1990-01-01

    A solution method, fourth-order accurate in the body-normal direction and second-order accurate in the stream surface directions, to solve the compressible 3-D boundary layer equations is presented. The transformation used, the discretization details, and the solution procedure are described. Ten validation cases of varying complexity are presented and results of calculation given. The results range from subsonic flow to supersonic flow and involve 2-D or 3-D geometries. Applications to laminar flow past wing and fuselage-type bodies are discussed. An interface procedure is used to solve the surface Euler equations with the inviscid flow pressure field as the input to assure accurate boundary conditions at the boundary layer edge. Complete details of the computer program used and information necessary to run each of the test cases are given in the Appendix.

  10. A Computational Study of Shear Layer Receptivity

    NASA Astrophysics Data System (ADS)

    Barone, Matthew; Lele, Sanjiva

    2002-11-01

    The receptivity of two-dimensional, compressible shear layers to local and external excitation sources is examined using a computational approach. The family of base flows considered consists of a laminar supersonic stream separated from nearly quiescent fluid by a thin, rigid splitter plate with a rounded trailing edge. The linearized Euler and linearized Navier-Stokes equations are solved numerically in the frequency domain. The flow solver is based on a high order finite difference scheme, coupled with an overset mesh technique developed for computational aeroacoustics applications. Solutions are obtained for acoustic plane wave forcing near the most unstable shear layer frequency, and are compared to the existing low frequency theory. An adjoint formulation to the present problem is developed, and adjoint equation calculations are performed using the same numerical methods as for the regular equation sets. Solutions to the adjoint equations are used to shed light on the mechanisms which control the receptivity of finite-width compressible shear layers.

  11. Control of shock wave-boundary layer interactions by bleed in supersonic mixed compression inlets

    NASA Technical Reports Server (NTRS)

    Fukuda, M. K.; Hingst, W. G.; Reshotko, E.

    1975-01-01

    An experimental investigation was conducted to determine the effect of bleed on a shock wave-boundary layer interaction in an axisymmetric mixed-compression supersonic inlet. The inlet was designed for a free-stream Mach number of 2.50 with 60-percent supersonic internal area contraction. The experiment was conducted in the NASA Lewis Research Center 10-Foot Supersonic Wind Tunnel. The effects of bleed amount and bleed geometry on the boundary layer after a shock wave-boundary layer interaction were studied. The effect of bleed on the transformed form factor is such that the full realizable reduction is obtained by bleeding of a mass flow equal to about one-half of the incident boundary layer mass flow. More bleeding does not yield further reduction. Bleeding upstream or downstream of the shock-induced pressure rise is preferable to bleeding across the shock-induced pressure rise.

  12. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial, and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs with only a modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
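
    The post-compression RD-optimization idea referenced above can be sketched in a few lines: for a Lagrange multiplier lam, each code-block is truncated at the pass minimizing D + lam*R, and lam is bisected until the summed rate meets the budget. The per-pass statistics below are invented for illustration; real PCRD-opt operates on measured pass rates and distortions.

        # Per block: (cumulative_rate_bytes, remaining_distortion) for each
        # candidate truncation point; (0, D0) means "send nothing".
        blocks = [
            [(0, 100.0), (100, 50.0), (180, 20.0), (240, 9.0), (280, 5.0)],
            [(0, 60.0), (80, 30.0), (150, 12.0), (210, 6.0)],
        ]

        def truncate(block, lam):
            """Truncation point minimizing the Lagrangian cost D + lam * R."""
            return min(block, key=lambda rd: rd[1] + lam * rd[0])

        def total_rate(lam):
            return sum(truncate(b, lam)[0] for b in blocks)

        # Bisect lam: larger lam penalizes rate more, so total_rate is non-increasing.
        lo, hi, budget = 0.0, 10.0, 400
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if total_rate(mid) > budget:
                lo = mid
            else:
                hi = mid
        print("lam=%.3f rate=%d bytes" % (hi, total_rate(hi)),
              [truncate(b, hi) for b in blocks])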

  13. The Contribution of Compressional Magnetic Pumping to the Energization of the Earth's Outer Electron Radiation Belt During High-Speed Stream-Driven Storms

    NASA Astrophysics Data System (ADS)

    Borovsky, Joseph E.; Horne, Richard B.; Meredith, Nigel P.

    2017-12-01

    Compressional magnetic pumping is an interaction between cyclic magnetic compressions and pitch angle scattering with the scattering acting as a catalyst to allow the cyclic compressions to energize particles. Compressional magnetic pumping of the outer electron radiation belt at geosynchronous orbit in the dayside magnetosphere is analyzed by means of computer simulations, wherein solar wind compressions of the dayside magnetosphere energize electrons with electron pitch angle scattering by chorus waves and by electromagnetic ion cyclotron (EMIC) waves. The magnetic pumping is found to produce a weak bulk heating of the electron radiation belt, and it also produces an energetic tail on the electron energy distribution. The amount of energization depends on the robustness of the solar wind compressions and on the amplitude of the chorus and/or EMIC waves. Chorus-catalyzed pumping is better at energizing medium-energy (50-200 keV) electrons than it is at energizing higher-energy electrons; at high energies (500 keV-2 MeV) EMIC-catalyzed pumping is a stronger energizer. The magnetic pumping simulation results are compared with energy diffusion calculations for chorus waves in the dayside magnetosphere; in general, compressional magnetic pumping is found to be weaker at accelerating electrons than is chorus-driven energy diffusion. In circumstances when solar wind compressions are robust and when EMIC waves are present in the dayside magnetosphere without the presence of chorus, EMIC-catalyzed magnetic pumping could be the dominant energization mechanism in the dayside magnetosphere, but at such times loss cone losses will be strong.

  14. Process and system for removing impurities from a gas

    DOEpatents

    Henningsen, Gunnar; Knowlton, Teddy Merrill; Findlay, John George; Schlather, Jerry Neal; Turk, Brian S

    2014-04-15

    A fluidized reactor system for removing impurities from a gas and an associated process are provided. The system includes a fluidized absorber for contacting a feed gas with a sorbent stream to reduce the impurity content of the feed gas; a fluidized solids regenerator for contacting an impurity loaded sorbent stream with a regeneration gas to reduce the impurity content of the sorbent stream; a first non-mechanical gas seal forming solids transfer device adapted to receive an impurity loaded sorbent stream from the absorber and transport the impurity loaded sorbent stream to the regenerator at a controllable flow rate in response to an aeration gas; and a second non-mechanical gas seal forming solids transfer device adapted to receive a sorbent stream of reduced impurity content from the regenerator and transfer the sorbent stream of reduced impurity content to the absorber without changing the flow rate of the sorbent stream.

  15. Tracking Training-Related Plasticity by Combining fMRI and DTI: The Right Hemisphere Ventral Stream Mediates Musical Syntax Processing.

    PubMed

    Oechslin, Mathias S; Gschwind, Markus; James, Clara E

    2018-04-01

    As a functional homolog of left-hemispheric syntax processing in language, neuroimaging studies have evidenced involvement of right prefrontal regions in musical syntax processing, whose underlying white matter connectivity has so far remained unexplored. In the current experiment, we investigated the underlying pathway architecture in subjects with three levels of musical expertise. Employing diffusion tensor imaging tractography, departing from seeds of our previous functional magnetic resonance imaging study on music syntax processing in the same participants, we identified a pathway in the right ventral stream that connects the middle temporal lobe with the inferior frontal cortex via the extreme capsule and corresponds to the left-hemisphere ventral stream classically attributed to syntax processing in language comprehension. Additional morphometric consistency analyses allowed dissociating the tract core from more dispersed fiber portions. Musical expertise was related to higher tract consistency of the right ventral stream pathway. Specifically, tract consistency in this pathway predicted sensitivity to musical syntax violations. We conclude that enduring musical practice sculpts ventral stream architecture. Our results suggest that training-related pathway plasticity facilitates information transfer in the right-hemisphere ventral stream, supporting an improved sound-to-meaning mapping in music.

  16. Data Streams: An Overview and Scientific Applications

    NASA Astrophysics Data System (ADS)

    Aggarwal, Charu C.

    In recent years, advances in hardware technology have facilitated the ability to collect data continuously. Simple transactions of everyday life such as using a credit card, a phone, or browsing the web lead to automated data storage. Similarly, advances in information technology have led to large flows of data across IP networks. In many cases, these large volumes of data can be mined for interesting and relevant information in a wide variety of applications. When the volume of the underlying data is very large, it leads to a number of computational and mining challenges: With increasing volume of the data, it is no longer possible to process the data efficiently by using multiple passes. Rather, one can process a data item at most once. This leads to constraints on the implementation of the underlying algorithms. Therefore, stream mining algorithms typically need to be designed so that the algorithms work with one pass of the data. In most cases, there is an inherent temporal component to the stream mining process. This is because the data may evolve over time. This behavior of data streams is referred to as temporal locality. Therefore, a straightforward adaptation of one-pass mining algorithms may not be an effective solution to the task. Stream mining algorithms need to be carefully designed with a clear focus on the evolution of the underlying data. Another important characteristic of data streams is that they are often mined in a distributed fashion. Furthermore, the individual processors may have limited processing and memory. Examples of such cases include sensor networks, in which it may be desirable to perform in-network processing of data streams with limited processing and memory [1, 2]. This chapter will provide an overview of the key challenges in stream mining algorithms which arise from the unique setup in which these problems are encountered. This chapter is organized as follows. In the next section, we will discuss the generic challenges that stream mining poses to a variety of data management and data mining problems. The next section also deals with several issues which arise in the context of data stream management. In Section 3, we discuss several mining algorithms on the data stream model. Section 4 discusses various scientific applications of data streams. Section 5 discusses the research directions and conclusions.
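
    A classic concrete instance of the one-pass constraint discussed above is reservoir sampling (Vitter's Algorithm R), which maintains a uniform random sample of an unbounded stream using O(k) memory and exactly one look at each item:

        import random

        def reservoir_sample(stream, k, seed=0):
            """Vitter's Algorithm R: uniform k-sample of a stream in one pass."""
            rng = random.Random(seed)
            sample = []
            for n, item in enumerate(stream):
                if n < k:
                    sample.append(item)
                else:
                    j = rng.randint(0, n)    # inclusive; P(j < k) = k / (n + 1)
                    if j < k:
                        sample[j] = item
            return sample

        print(reservoir_sample(range(10**6), 5))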

  17. Role of submerged vegetation in the retention processes of three plant protection products in flow-through stream mesocosms.

    PubMed

    Stang, Christoph; Wieczorek, Matthias Valentin; Noss, Christian; Lorke, Andreas; Scherr, Frank; Goerlitz, Gerhard; Schulz, Ralf

    2014-07-01

    Quantitative information on the processes leading to the retention of plant protection products (PPPs) in surface waters is not available, particularly for flow-through systems. The influence of aquatic vegetation on the hydraulic- and sorption-mediated mitigation processes of three PPPs (triflumuron, pencycuron, and penflufen; logKOW 3.3-4.9) in 45-m slow-flowing stream mesocosms was investigated. Peak reductions were 35-38% in an unvegetated stream mesocosm, 60-62% in a sparsely vegetated stream mesocosm (13% coverage with Elodea nuttallii), and in a similar range of 57-69% in a densely vegetated stream mesocosm (100% coverage). Between 89% and 93% of the measured total peak reductions in the sparsely vegetated stream can be explained by an increase of vegetation-induced dispersion (estimated with the one-dimensional solute transport model OTIS), while 7-11% of the peak reduction can be attributed to sorption processes. However, dispersion contributed only 59-71% of the peak reductions in the densely vegetated stream mesocosm, where 29% to 41% of the total peak reductions can be attributed to sorption processes. In the densely vegetated stream, 8-27% of the applied PPPs, depending on the logKOW values of the compounds, were temporarily retained by macrophytes. Increasing PPP recoveries in the aqueous phase were accompanied by a decrease of PPP concentrations in macrophytes indicating kinetic desorption over time. This is the first study to provide quantitative data on how the interaction of dispersion and sorption, driven by aquatic macrophytes, influences the mitigation of PPP concentrations in flowing vegetated stream systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
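
    The dispersion-driven part of the peak reduction can be illustrated with the plain 1-D advection-dispersion solution for an instantaneous release (a far simpler model than OTIS, which adds transient-storage terms; all numbers below are illustrative): the peak decays as 1/sqrt(4*pi*D*t), so stronger vegetation-induced dispersion lowers the peak.

        import math

        def peak_concentration(M, A, D, t):
            """Peak of C(x,t) = (M/A) / sqrt(4 pi D t) for an instantaneous release."""
            return (M / A) / math.sqrt(4.0 * math.pi * D * t)

        M, A, t = 1.0, 0.5, 600.0        # g released, channel area m^2, time s
        for D in (0.05, 0.2):            # dispersion coefficient, m^2/s
            print("D=%.2f m^2/s -> peak %.4f g/m^3"
                  % (D, peak_concentration(M, A, D, t)))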

  18. Solid-shape energy fuels from recyclable municipal solid waste and plastics

    NASA Astrophysics Data System (ADS)

    Gug, Jeongin

    Diversion of waste streams, such as plastics, wood and paper, from municipal landfills and extraction of useful materials from landfills is an area of increasing interest across the country, especially in densely populated areas. One promising technology for recycling MSW (municipal solid waste) is to burn the high-energy-content components in standard coal boilers. This research seeks to reform wastes into briquettes that are compatible with typical coal combustion processes. In order to comply with the standards of coal-fired power plants, the feedstock must be mechanically robust, moisture resistant, and retain a high fuel value. Household waste with high paper and fiber content was used as the base material for this study. It was combined with recyclable plastics such as PE, PP, PET and PS for enhanced binding and energy efficiency. Fuel pellets were processed using a compression molding technique. The resulting moisture absorption, proximate analysis from burning, and mechanical properties were investigated after sample production and then compared with reference data for commercial coals and biomass briquettes. The effects of moisture content, compression pressure and processing temperature were studied to identify the optimal processing conditions, with water-uptake tests for the durability of samples under humid conditions and burning tests to examine the composition of samples. Lastly, mechanical testing revealed the structural stability of the solid fuels. The properties of fuel briquettes produced from waste and recycled plastics improved with higher processing temperature, but without charring the material. Optimization of moisture content and removal of air bubbles increased the density, stability and mechanical strength. The sample composition was found to be more similar to biomass fuels than to coals because the majority of the starting material was paper-based solid waste. According to the proximate analysis results, the waste fuels can be expected to have low-temperature ignition, less char formation and reduced CO2 emissions, with a high heating value similar to that of coal. It is concluded that solid fuels from paper-based waste and plastics can be a good energy resource as an alternative and sustainable fuel, which may help to alleviate the environmental problems related to landfill space at the same time.

  19. Process for the physical segregation of minerals

    DOEpatents

    Yingling, Jon C.; Ganguli, Rajive

    2004-01-06

    With highly heterogeneous groups or streams of minerals, physical segregation using online quality measurements is an economically important first stage of the mineral beneficiation process. Segregation enables high quality fractions of the stream to bypass processing, such as cleaning operations, thereby reducing the associated costs and avoiding the yield losses inherent in any downstream separation process. The present invention includes various methods for reliably segregating a mineral stream into at least one fraction meeting desired quality specifications while at the same time maximizing yield of that fraction.

  20. Analytic Strategies of Streaming Data for eHealth.

    PubMed

    Yoon, Sunmoo

    2016-01-01

    New analytic strategies for streaming big data from wearable devices and social media are emerging in eHealth. We face challenges in finding meaningful patterns in big data because researchers have difficulty processing large volumes of streaming data using traditional processing applications. This introductory 180-minute tutorial offers hands-on instruction on analytics (e.g., topic modeling, social network analysis) of streaming data. The tutorial aims to provide practical strategies for reducing dimensionality, using examples from big data, and will highlight strategies for incorporating domain experts and a comprehensive approach to streaming social media data.

  1. Nitrogen processing by grazers in a headwater stream: riparian connections

    DOE PAGES

    Hill, Walter R.; Griffiths, Natalie A.

    2016-10-19

    Primary consumers play important roles in the cycling of nutrients in headwater streams, storing assimilated nutrients in growing tissue and recycling them through excretion. Though environmental conditions in most headwater streams and their surrounding terrestrial ecosystems vary considerably over the course of a year, relatively little is known about the effects of seasonality on consumer nutrient recycling in these streams. Here, we measured nitrogen accumulated through growth and excreted by the grazing snail Elimia clavaeformis (Pleuroceridae) over the course of 12 months in Walker Branch, identifying close connections between in-stream nitrogen processing and seasonal changes in the surrounding forest.

  2. Method and apparatus for separation of heavy and tritiated water

    DOEpatents

    Lee, Myung W.

    2001-01-01

    The present invention is a bi-thermal membrane process for separating and recovering hydrogen isotopes from a fluid containing hydrogen isotopes, such as water and hydrogen gas. The process in accordance with the present invention provides counter-current cold and hot streams of the fluid separated with a thermally insulating and chemically transparent proton exchange membrane (PEM). The two streams exchange hydrogen isotopes through the membrane: the heavier isotopes migrate into the cold stream, while the lighter isotopes migrate into the hot stream. The heavy and light isotopes are continuously withdrawn from the cold and hot streams respectively.

  3. Membrane Process to Capture CO{sub 2} from Coal-Fired Power Plant Flue Gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merkel, Tim; Wei, Xiaotong; Firat, Bilgen

    2012-03-31

    This final report describes work conducted for the U.S. Department of Energy National Energy Technology Laboratory (DOE NETL) on development of an efficient membrane process to capture carbon dioxide (CO{sub 2}) from power plant flue gas (award number DE-NT0005312). The primary goal of this research program was to demonstrate, in a field test, the ability of a membrane process to capture up to 90% of CO{sub 2} in coal-fired flue gas, and to evaluate the potential of a full-scale version of the process to perform this separation with less than a 35% increase in the levelized cost of electricity (LCOE). Membrane Technology and Research (MTR) conducted this project in collaboration with Arizona Public Services (APS), who hosted a membrane field test at their Cholla coal-fired power plant, and the Electric Power Research Institute (EPRI) and WorleyParsons (WP), who performed a comparative cost analysis of the proposed membrane CO{sub 2} capture process. The work conducted for this project included membrane and module development, slipstream testing of commercial-sized modules with natural gas and coal-fired flue gas, process design optimization, and a detailed systems and cost analysis of a membrane retrofit to a commercial power plant. The Polaris membrane developed over a number of years by MTR represents a step-change improvement in CO{sub 2} permeance compared to previous commercial CO{sub 2}-selective membranes. During this project, membrane optimization work resulted in a further doubling of the CO{sub 2} permeance of the Polaris membrane while maintaining the CO{sub 2}/N{sub 2} selectivity. This is an important accomplishment because increased CO{sub 2} permeance directly impacts the membrane skid cost and footprint: a doubling of CO{sub 2} permeance halves the skid cost and footprint. In addition to providing high CO{sub 2} permeance, flue gas CO{sub 2} capture membranes must be stable in the presence of contaminants, including SO{sub 2}. Laboratory tests showed no degradation in Polaris membrane performance during two months of continuous operation in a simulated flue gas environment containing up to 1,000 ppm SO{sub 2}. A successful slipstream field test at the APS Cholla power plant was conducted with commercial-size Polaris modules during this project. This field test is the first demonstration of stable performance by commercial-sized membrane modules treating actual coal-fired power plant flue gas. Process design studies show that selective recycle of CO{sub 2} using a countercurrent membrane module with air as a sweep stream can double the concentration of CO{sub 2} in coal flue gas with little energy input. This pre-concentration of CO{sub 2} by the sweep membrane reduces the minimum energy of CO{sub 2} separation in the capture unit by up to 40% for coal flue gas. Variations of this design may be even more promising for CO{sub 2} capture from NGCC flue gas, in which the CO{sub 2} concentration can be increased from 4% to 20% by selective sweep recycle. EPRI and WP conducted a systems and cost analysis of a base case MTR membrane CO{sub 2} capture system retrofitted to the AEP Conesville Unit 5 boiler. Some of the key findings from this study and a sensitivity analysis performed by MTR include: the MTR membrane process can capture 90% of the CO{sub 2} in coal flue gas and produce high-purity CO{sub 2} (>99%) ready for sequestration; CO{sub 2} recycle to the boiler appears feasible with minimal impact on boiler performance, although further study by a boiler OEM is recommended; for a membrane process built today using a combination of slight feed compression, permeate vacuum, and current compression equipment costs, the membrane capture process can be competitive with the base case MEA process at 90% CO{sub 2} capture from a coal-fired power plant, the incremental LCOE for the base case membrane process being about equal to that of a base case MEA process within the uncertainty of the analysis; with advanced membranes (5,000 gpu for CO{sub 2} and 50 for CO{sub 2}/N{sub 2}), operating with no feed compression and low-cost CO{sub 2} compression equipment, an incremental LCOE of $33/MWh at 90% capture can be achieved (40% lower than the advanced MEA case); even with lower cost compression, it appears unlikely that a membrane process using high feed compression (>5 bar) can be competitive with amine absorption, due to the capital cost and energy consumption of this equipment, and similarly, low vacuum pressure (<0.2 bar) cannot be used due to the poor efficiency and high cost of this equipment; high membrane permeance is important to reduce the capital cost and footprint of the membrane unit, while CO{sub 2}/N{sub 2} selectivity is less important because it is too costly to generate a pressure ratio at which high selectivity can be useful; and a potential cost "sweet spot" exists for membrane-based technology if 50-70% CO{sub 2} capture is acceptable, since the cost of CO{sub 2} avoided per ton has a minimum at 60% CO{sub 2} capture that is 20% lower than the cost at 90% capture, with membranes operating with no feed compression best suited for lower capture rates. Currently, it appears that the biggest hurdle to the use of membranes for post-combustion CO{sub 2} capture is compression equipment cost. An alternative approach is to use sweep membranes in parallel with another CO{sub 2} capture technology that does not require feed compression or vacuum equipment. Hybrid designs that utilize sweep membranes for selective CO{sub 2} recycle show potential to significantly reduce the minimum energy of CO{sub 2} separation.
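
    The permeance/footprint claim above follows from a simple scaling: at fixed CO2 flow and driving force, required membrane area is inversely proportional to permeance. A rough sketch with assumed values (the gpu-to-SI conversion factor is standard; the flow and pressure numbers are ours):

        # Membrane area scales inversely with permeance at fixed flow and
        # driving force. 1 gpu = 3.35e-10 mol/(m^2 s Pa); other values assumed.
        GPU = 3.35e-10
        co2_flow = 100.0   # mol/s of CO2 to permeate (hypothetical plant slice)
        dp = 5.0e4         # assumed mean CO2 partial-pressure difference, Pa

        for permeance_gpu in (1000.0, 2000.0):
            area = co2_flow / (permeance_gpu * GPU * dp)
            print("%4.0f gpu -> %6.0f m^2" % (permeance_gpu, area))
        # Doubling permeance halves the area, and with it the skid cost and footprint.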

  4. A multistream model of visual word recognition.

    PubMed

    Allen, Philip A; Smith, Albert F; Lien, Mei-Ching; Kaut, Kevin P; Canfield, Angie

    2009-02-01

    Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4)--suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream--as with mixed-case presentation.

  5. Video Data Link Provides Television Pictures In Near Real Time Via Tactical Radio And Satellite Channels

    NASA Astrophysics Data System (ADS)

    Hartman, Richard V.

    1987-02-01

    Advances in sophisticated algorithms and parallel VLSI processing have resulted in the capability for near real-time transmission of television pictures (optical and FLIR) via existing telephone lines, tactical radios, and military satellite channels. Concepts have been field demonstrated with production-ready engineering development models using transform compression techniques. Preliminary design has been completed for packaging an existing command post version into a 20-pound 1/2 ATR enclosure for use on jeeps, backpacks, RPVs, helicopters, and reconnaissance aircraft. The system will also have a built-in error correction code (ECC) unit, allowing operation via communications media exhibiting a bit error rate of 1 X 10- or better. In the past several years, two nearly simultaneous developments show promise of providing the breakthrough needed to give the operational commander a practical means for obtaining pictorial information from the battlefield, in near real time and using available communications channels--his long-sought-after pictorial force multiplier: • high-speed digital integrated circuitry that is affordable, and • an understanding of the practical applications of information theory. High-speed digital integrated circuits allow an analog television picture to be nearly instantaneously converted to a digital serial bit stream so that it can be transmitted as rapidly or slowly as desired, depending on the available transmission channel bandwidth. Perhaps more importantly, digitizing the picture allows it to be stored and processed in a number of ways. Most typically, processing is performed to reduce the amount of data that must be transmitted, while still maintaining maximum picture quality. Reducing the amount of data that must be transmitted is important since it allows a narrower bandwidth in the scarce frequency spectrum to be used for transmission of pictures, or, if only a narrow bandwidth is available, it takes less time for the picture to be transmitted. This process of reducing the amount of data that must be transmitted to represent a picture is called compression, truncation, or, most typically, video compression. Keep in mind that the pictures you see on your home TV are nothing more than a series of still pictures displayed at a rate of 30 frames per second. If you grabbed one of those frames, digitized it, stored it in memory, and then transmitted it at the most rapid rate the bandwidth of your communications channel would allow, you would be using the so-called slow scan techniques.
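
    A back-of-the-envelope illustration of the trade-off described above, with assumed frame size, channel rate, and compression ratio (none of these figures are from the paper):

        # Time to send one digitized frame over a narrowband channel, with and
        # without compression. Frame size, channel rate, and ratio are assumed.
        pixels = 512 * 512
        bits_per_pixel = 8
        channel_bps = 16_000        # e.g., a narrowband tactical radio channel
        raw_bits = pixels * bits_per_pixel

        for ratio in (1, 20):       # 20:1 is a plausible transform-coding ratio
            print("%2d:1 compression -> %6.1f s per frame"
                  % (ratio, raw_bits / ratio / channel_bps))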

  6. A theory of local and global processes which affect solar wind electrons. 2: Experimental support

    NASA Technical Reports Server (NTRS)

    Scudder, J. D.; Olbert, S.

    1979-01-01

    The microscopic characteristics of the Coulomb cross section show that there are three natural subpopulations for plasma electrons: the subthermals, with local kinetic energy E < kT sub c; the transthermals, with kT sub c < E < 7 kT sub c; and the extrathermals, with E > 7 kT sub c. Data from three experimental groups on three different spacecraft in the interplanetary medium over a radial range are presented to support the five interrelations projected between solar wind electron properties and changes in the interplanetary medium: (1) subthermals respond primarily to local changes (compressions and rarefactions) in stream dynamics; (2) the extrathermal fraction of the ambient electron density should be anti-correlated with the asymptotic bulk speed; (3) the extrathermal "temperature" should be anti-correlated with the local wind speed at 1 AU; (4) the heat flux carried by electrons should be anti-correlated with the local bulk speed; and (5) the extrathermal differential "temperature" should be nearly independent of radius within 1 AU.

  7. DCT-based cyber defense techniques

    NASA Astrophysics Data System (ADS)

    Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

    2015-09-01

    With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most attack algorithms are robust to basic image processing techniques such as filtering, compression, and noise addition. Hence, in this article two novel real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
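
    PSNR, the quality metric cited above, is straightforward to compute (SSIM is more involved and omitted here); a minimal sketch:

        import numpy as np

        def psnr(reference, distorted, peak=255.0):
            """Peak signal-to-noise ratio in dB for 8-bit images."""
            err = reference.astype(np.float64) - distorted.astype(np.float64)
            mse = np.mean(err ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64))
        noisy = np.clip(img + rng.normal(0.0, 10.0, size=img.shape), 0, 255)
        print("PSNR = %.2f dB" % psnr(img, noisy))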

  8. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    PubMed

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

    Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational foot print, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address power flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  9. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
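
    COLA's minimum-ratio cut subroutine and fusion constraints are not reproduced here; as a stand-in, the sketch below recursively bisects an operator graph with NetworkX's Kernighan-Lin heuristic (cutting low-traffic edges) until each fused part fits a per-PE cost budget. The graph, weights, and budget are illustrative assumptions, not System S values.

      import networkx as nx
      from networkx.algorithms.community import kernighan_lin_bisection

      def fuse_operators(g, budget):
          """Recursively split the operator graph into PEs whose total node
          (processing) cost fits the budget; KL bisection stands in for
          COLA's minimum-ratio cut subroutine."""
          cost = sum(g.nodes[n]['cost'] for n in g)
          if cost <= budget or g.number_of_nodes() <= 1:
              return [set(g.nodes)]              # one processing element
          a, b = kernighan_lin_bisection(g, weight='traffic')
          return (fuse_operators(g.subgraph(a).copy(), budget) +
                  fuse_operators(g.subgraph(b).copy(), budget))

      g = nx.Graph()
      g.add_nodes_from([(i, {'cost': 1.0}) for i in range(6)])
      g.add_edges_from([(0, 1, {'traffic': 5.0}), (1, 2, {'traffic': 1.0}),
                        (2, 3, {'traffic': 4.0}), (3, 4, {'traffic': 1.0}),
                        (4, 5, {'traffic': 6.0})])
      print(fuse_operators(g, budget=2.0))   # -> operator sets, each within budget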

  10. A kinematic eddy viscosity model including the influence of density variations and preturbulence

    NASA Technical Reports Server (NTRS)

    Cohen, L. S.

    1973-01-01

    A model for the kinematic eddy viscosity was developed which accounts for the turbulence produced as a result of jet interactions between adjacent streams as well as the turbulence initially present in the streams. In order to describe the turbulence contribution from jet interaction, the eddy viscosity suggested by Prandtl was adopted, and a modification was introduced to account for the effect of density variation through the mixing layer. The form of the modification was ascertained from a study of the compressible turbulent boundary layer on a flat plate. A kinematic eddy viscosity relation which corresponds to the initial turbulence contribution was derived by employing arguments used by Prandtl in his mixing length hypothesis. The resulting expression for self-preserving flow is similar to that which describes the mixing of a submerged jet. Application of the model has led to analytical predictions which are in good agreement with available turbulent mixing experimental data.
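
    For orientation, Prandtl's constant-exchange eddy viscosity for a free mixing layer, the model this abstract takes as its starting point, can be written as

      \epsilon = \kappa \, b \, \lvert U_1 - U_2 \rvert ,

    where b is the local mixing-layer width, U_1 and U_2 are the velocities of the adjacent streams, and \kappa is an empirical constant. A density-weighted variant such as

      \epsilon_\rho \approx \kappa \, b \, \frac{\lvert \rho_1 U_1 - \rho_2 U_2 \rvert}{\bar{\rho}}

    only illustrates the kind of compressibility correction described; it is an assumed form, not necessarily Cohen's exact modification.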

  11. Investigation of flow fields within large scale hypersonic inlet models

    NASA Technical Reports Server (NTRS)

    Gnos, A. V.; Watson, E. C.; Seebaugh, W. R.; Sanator, R. J.; Decarlo, J. P.

    1973-01-01

    Analytical and experimental investigations were conducted to determine the internal flow characteristics in model passages representative of hypersonic inlets for use at Mach numbers to about 12. The passages were large enough to permit measurements to be made in both the core flow and boundary layers. The analytical techniques for designing the internal contours and predicting the internal flow-field development accounted for coupling between the boundary layers and inviscid flow fields by means of a displacement-thickness correction. Three large-scale inlet models, each having a different internal compression ratio, were designed to provide high internal performance with an approximately uniform static-pressure distribution at the throat station. The models were tested in the Ames 3.5-Foot Hypersonic Wind Tunnel at a nominal free-stream Mach number of 7.4 and a unit free-stream Reynolds number of 8.86 × 10^6 per meter.

  12. Multivariate statistical analysis of stream-sediment geochemistry in the Grazer Paläozoikum, Austria

    USGS Publications Warehouse

    Weber, L.; Davis, J.C.

    1990-01-01

    The Austrian reconnaissance study of stream-sediment composition — more than 30,000 clay-fraction samples collected over an area of 40,000 km² — is summarized in an atlas of regional maps that show the distributions of 35 elements. These maps, rich in information, reveal complicated patterns of element abundance that are difficult to compare on more than a small number of maps at one time. In such a study, multivariate procedures such as simultaneous R-Q mode components analysis may be helpful. They can compress a large number of variables into a much smaller number of independent linear combinations. These composite variables may be mapped and relationships sought between them and geological properties. As an example, R-Q mode components analysis is applied here to the Grazer Paläozoikum, a tectonic unit northeast of the city of Graz, which is composed of diverse lithologies and contains many mineral deposits.

  13. Source Apportionment of Suspended Sediment Sources using 137Cs and 210Pbxs

    NASA Astrophysics Data System (ADS)

    Lamba, J.; Karthikeyan, K.; Thompson, A.

    2017-12-01

    A study was conducted in the Pleasant Valley Watershed (50 km²) in South Central Wisconsin to better understand sediment transport processes using the sediment fingerprinting technique. Previous studies conducted in this watershed showed that resuspension of fine sediment deposited on the stream bed is an important source of suspended sediment. To better understand the role of fine sediment deposited on the stream bed, the fallout radionuclides 137Cs and 210Pbxs were used to determine the relative contributions to suspended sediment from in-stream (stream bank and stream bed) and upland sediment sources. Suspended sediment samples were collected during the crop growing season. Potential sources of suspended sediment considered in this study included cropland, pasture, and in-stream (stream bed and stream bank) sources. Suspended sediment sources were determined at the subwatershed level. Results of this study showed that in-stream sediment sources are important sources of suspended sediment. Future research should be conducted to better understand the role of legacy sediment in watershed-level sediment transport processes.

  14. Stream-related preferences of inputs to the superior colliculus from areas of dorsal and ventral streams of mouse visual cortex.

    PubMed

    Wang, Quanxin; Burkhalter, Andreas

    2013-01-23

    Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.

  15. 40 CFR 65.149 - Boilers and process heaters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... stream is not introduced as or with the primary fuel, a temperature monitoring device in the fire box...-throughput transfer racks, as applicable, shall meet the requirements of this section. (2) The vent stream... thermal units per hour) or greater. (ii) A boiler or process heater into which the vent stream is...

  16. 40 CFR 63.1082 - What definitions do I need to know?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... includes direct-contact cooling water. Spent caustic waste stream means the continuously flowing process... compounds from process streams, typically cracked gas. The spent caustic waste stream does not include spent..., and the C4 butadiene storage equipment; and spent wash water from the C4 crude butadiene carbonyl wash...

  17. 40 CFR 63.1082 - What definitions do I need to know?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... includes direct-contact cooling water. Spent caustic waste stream means the continuously flowing process... compounds from process streams, typically cracked gas. The spent caustic waste stream does not include spent..., and the C4 butadiene storage equipment; and spent wash water from the C4 crude butadiene carbonyl wash...

  18. 40 CFR 63.1082 - What definitions do I need to know?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... includes direct-contact cooling water. Spent caustic waste stream means the continuously flowing process... compounds from process streams, typically cracked gas. The spent caustic waste stream does not include spent..., and the C4 butadiene storage equipment; and spent wash water from the C4 crude butadiene carbonyl wash...

  19. 40 CFR 63.1082 - What definitions do I need to know?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... includes direct-contact cooling water. Spent caustic waste stream means the continuously flowing process... compounds from process streams, typically cracked gas. The spent caustic waste stream does not include spent..., and the C4 butadiene storage equipment; and spent wash water from the C4 crude butadiene carbonyl wash...

  20. M-Stream Deficits and Reading-Related Visual Processes in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Boden, Catherine; Giaschi, Deborah

    2007-01-01

    Some visual processing deficits in developmental dyslexia have been attributed to abnormalities in the subcortical M stream and/or the cortical dorsal stream of the visual pathways. The nature of the relationship between these visual deficits and reading is unknown. The purpose of the present article was to characterize reading-related perceptual…

  1. Streaming data analytics via message passing with application to graph algorithms

    DOE PAGES

    Plimpton, Steven J.; Shead, Tim

    2014-05-06

    The need to process streaming data, which arrives continuously at high volume in real time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
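
    PHISH's actual API is not shown in this record; the pyzmq toy below merely illustrates the socket pattern underneath it, independent processes pushing datums downstream over ZMQ PUSH/PULL connections. The endpoint, message format, and sentinel convention are assumptions.

      import zmq

      ENDPOINT = 'tcp://127.0.0.1:5557'   # arbitrary illustrative endpoint

      def source(n=5):
          """Run in one process: emit n datums, then a None sentinel."""
          sock = zmq.Context.instance().socket(zmq.PUSH)
          sock.bind(ENDPOINT)
          for i in range(n):
              sock.send_pyobj(('datum', i))
          sock.send_pyobj(None)

      def worker():
          """Run in another process: consume datums until the sentinel."""
          sock = zmq.Context.instance().socket(zmq.PULL)
          sock.connect(ENDPOINT)
          while (msg := sock.recv_pyobj()) is not None:
              print('processed', msg)

      # launch source() and worker() in separate processes, e.g. with
      # multiprocessing, to form a minimal two-stage streaming pipeline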

  2. Method and apparatus for producing oxygen and nitrogen and membrane therefor

    DOEpatents

    Roman, I.C.; Baker, R.W.

    1985-09-17

    Process and apparatus for the separation and purification of oxygen and nitrogen as well as a novel membrane useful therein are disclosed. The process utilizes novel facilitated transport membranes to selectively transport oxygen from one gaseous stream to another, leaving nitrogen as a byproduct. In the method, an oxygen carrier capable of reversibly binding molecular oxygen is dissolved in a polar organic membrane which separates a gaseous feed stream such as atmospheric air and a gaseous product stream. The feed stream is maintained at a sufficiently high oxygen pressure to keep the oxygen carrier in its oxygenated form at the interface of the feed stream with the membrane, while the product stream is maintained at a sufficiently low oxygen pressure to keep the carrier in its deoxygenated form at the interface of the product stream with the membrane. In an alternate mode of operation, the feed stream is maintained at a sufficiently low temperature and high oxygen pressure to keep the oxygen carrier in its oxygenated form at the interface of the feed stream with the membrane and the product stream is maintained at a sufficiently high temperature to keep the carrier in its deoxygenated form at the interface of the product stream with the membrane. Under such conditions, the carrier acts as a shuttle, picking up oxygen at the feed side of the membrane, diffusing across the membrane as the oxygenated complex, releasing oxygen to the product stream, and then diffusing back to the feed side to repeat the process. Exceptionally and unexpectedly high O2/N2 selectivity, on the order of 10 to 30, is obtained, as well as exceptionally high oxygen permeability, on the order of 6 to 15 × 10^-8 cm^3·cm/(cm^2·s·cmHg), and a membrane life in excess of 3 months, making the process commercially feasible. 2 figs.

  3. Method and apparatus for producing oxygen and nitrogen and membrane therefor

    DOEpatents

    Roman, Ian C.; Baker, Richard W.

    1985-01-01

    Process and apparatus for the separation and purification of oxygen and nitrogen as well as a novel membrane useful therein are disclosed. The process utilizes novel facilitated transport membranes to selectively transport oxygen from one gaseous stream to another, leaving nitrogen as a byproduct. In the method, an oxygen carrier capable of reversibly binding molecular oxygen is dissolved in a polar organic membrane which separates a gaseous feed stream such as atmospheric air and a gaseous product stream. The feed stream is maintained at a sufficiently high oxygen pressure to keep the oxygen carrier in its oxygenated form at the interface of the feed stream with the membrane, while the product stream is maintained at a sufficiently low oxygen pressure to keep the carrier in its deoxygenated form at the interface of the product stream with the membrane. In an alternate mode of operation, the feed stream is maintained at a sufficiently low temperature and high oxygen pressure to keep the oxygen carrier in its oxygenated form at the interface of the feed stream with the membrane and the product stream is maintained at a sufficiently high temperature to keep the carrier in its deoxygenated form at the interface of the product stream with the membrane. Under such conditions, the carrier acts as a shuttle, picking up oxygen at the feed side of the membrane, diffusing across the membrane as the oxygenated complex, releasing oxygen to the product stream, and then diffusing back to the feed side to repeat the process. Exceptionally and unexpectedly high O2/N2 selectivity, on the order of 10 to 30, is obtained, as well as exceptionally high oxygen permeability, on the order of 6 to 15 × 10^-8 cm^3·cm/(cm^2·s·cmHg), and a membrane life in excess of 3 months, making the process commercially feasible.
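
    As a rough worked example of what the quoted permeability implies, the snippet below estimates oxygen flux through a hypothetical 100 micrometer membrane under a 20 cmHg oxygen partial-pressure difference; only the permeability range comes from the patent, the other numbers are assumed.

      # Solution-diffusion estimate: flux J = P * delta_p / L
      P_low, P_high = 6e-8, 15e-8   # permeability, cm^3(STP)*cm/(cm^2*s*cmHg), from the patent
      delta_p = 20.0                # O2 partial-pressure difference, cmHg (assumed)
      L = 100e-4                    # membrane thickness: 100 micrometers in cm (assumed)

      for P in (P_low, P_high):
          J = P * delta_p / L       # cm^3(STP) per cm^2 per second
          print(f"P = {P:.1e} -> J = {J:.1e} cm^3(STP)/cm^2/s")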

  4. 40 CFR 63.11970 - What are my initial compliance requirements for process wastewater?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... requirements for process wastewater? 63.11970 Section 63.11970 Protection of Environment ENVIRONMENTAL... What are my initial compliance requirements for process wastewater? (a) Demonstration of initial compliance for process wastewater streams that must be treated. For each process wastewater stream that must...

  5. 40 CFR 63.11970 - What are my initial compliance requirements for process wastewater?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... requirements for process wastewater? 63.11970 Section 63.11970 Protection of Environment ENVIRONMENTAL... What are my initial compliance requirements for process wastewater? (a) Demonstration of initial compliance for process wastewater streams that must be treated. For each process wastewater stream that must...

  6. 40 CFR 63.11970 - What are my initial compliance requirements for process wastewater?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... requirements for process wastewater? 63.11970 Section 63.11970 Protection of Environment ENVIRONMENTAL... What are my initial compliance requirements for process wastewater? (a) Demonstration of initial compliance for process wastewater streams that must be treated. For each process wastewater stream that must...

  7. Wavelet-based reversible watermarking for authentication

    NASA Astrophysics Data System (ADS)

    Tian, Jun

    2002-04-01

    In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become urgent problems for content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarking, reversible watermarking (also called lossless, invertible, or erasable watermarking) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back the unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and the coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
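
    The full integer-wavelet scheme is only summarized above; the sketch below isolates the core "expandable coefficient" trick in isolation: left-shifting an integer coefficient by one bit to make room for a payload bit, which is exactly invertible. Coefficient selection, the JBIG2-compressed location map, the arithmetic coding, and the SHA-256 hash are all omitted, and the function names are assumptions.

      def embed_bit(c, b):
          """Expand integer coefficient c to carry payload bit b (reversible,
          and correct for negative coefficients under Python's arithmetic
          shift semantics)."""
          return 2 * c + b

      def extract_bit(c_marked):
          """Recover the payload bit and the original coefficient."""
          return c_marked & 1, c_marked >> 1

      c = -13
      marked = embed_bit(c, 1)               # -> -25
      bit, restored = extract_bit(marked)
      assert (bit, restored) == (1, -13)     # exact recovery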

  8. Solar-powered compression-enhanced ejector air conditioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolov, M.; Hershgal, D.

    1993-09-01

    This article is an extension of an earlier investigation into the possibility of adapting the ejector refrigeration cycle to solar air-conditioning. In a previous work the ejector cycle was shown to be a viable option only for a limited number of cases, namely systems with combined (heating, cooling, and hot water supply) loads where means for obtaining a low condensing temperature are available. The purpose of this work is to extend the applicability of such systems by enhancing their efficiency and thereby improving their economic attractiveness. This is done by introducing the compression-enhanced ejector system, in which mechanical (rather than thermal) energy is used to boost the pressure of the secondary stream into the ejector. Such a boost improves the performance of the whole system. Like the conventional ejector, the compression-enhanced ejector system utilizes practically the same hardware for solar heating during the winter and for solar cooling during the summer. Thus, it is capable of providing year-round space air-conditioning. Optimization of the combination in which the solar and refrigeration systems couple through the vapor generator working temperature is also presented.

  9. Comparison of compression efficiency between HEVC/H.265 and VP9 based on subjective assessments

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Ebrahimi, Touradj

    2014-09-01

    The current increasing effort of broadcast providers to transmit UHD (Ultra High Definition) content is likely to increase demand for ultra high definition televisions (UHDTVs). To compress UHDTV content, several alternative encoding mechanisms exist. In addition to internationally recognized standards, open-access proprietary options, such as the VP9 video encoding scheme, have recently appeared and are gaining popularity. One of the main goals of these encoders is to efficiently compress video sequences beyond HDTV resolution for various scenarios, such as broadcasting or internet streaming. In this paper, a broadcast-scenario rate-distortion performance analysis and mutual comparison of one of the latest video coding standards, H.265/HEVC, with the recently released proprietary video coding scheme VP9 is presented. H.264/AVC, currently one of the most popular and most widely deployed encoders, has also been included in the evaluation to serve as a comparison baseline. The comparison is performed by means of subjective evaluations showing actual differences between encoding algorithms in terms of perceived quality. The results indicate a general dominance of the HEVC-based encoding algorithm in comparison to the other alternatives, with VP9 and AVC showing similar performance.

  10. Plasma waves associated with the AMPTE artificial comet

    NASA Technical Reports Server (NTRS)

    Gurnett, D. A.; Anderson, R. R.; Haeusler, B.; Haerendel, G.; Bauer, O. H.

    1985-01-01

    Numerous plasma wave effects were detected by the AMPTE/IRM spacecraft during the artificial comet experiment on December 27, 1984. As the barium ion cloud produced by the explosion expanded over the spacecraft, emissions at the electron plasma frequency and ion plasma frequency provided a determination of the local electron density. The electron density in the diamagnetic cavity produced by the ion cloud reached a peak of more than 5 × 10^5 per cubic centimeter, then decayed smoothly as the cloud expanded, varying approximately as t^(-2). As the cloud began to move due to interactions with the solar wind, a region of compressed plasma was encountered on the upstream side of the diamagnetic cavity. The peak electron density in the compression region was about 1.5 × 10^4 per cubic centimeter. Later, a very intense (140 mV/m) broadband burst of electrostatic noise was encountered on the sunward side of the compression region. This noise has characteristics very similar to noise observed in the earth's bow shock, and is believed to be a shocklike interaction produced by an ion beam-plasma instability between the nearly stationary barium ions and the streaming solar wind protons.

  11. Steady Secondary Flows Generated by Periodic Compression and Expansion of an Ideal Gas in a Pulse Tube

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey M.

    1999-01-01

    This study establishes a consistent set of differential equations for use in describing the steady secondary flows generated by periodic compression and expansion of an ideal gas in pulse tubes. Also considered is heat transfer between the gas and the tube wall of finite thickness. A small-amplitude series expansion solution in the inverse Strouhal number is proposed for the two-dimensional axisymmetric mass, momentum and energy equations. The anelastic approach applies when shock and acoustic energies are small compared with the energy needed to compress and expand the gas. An analytic solution to the ordered series is obtained in the strong temperature limit where the zeroth-order temperature is constant. The solution shows steady velocities increase linearly for small Valensi number and can be of order 1 for large Valensi number. A conversion of steady work flow to heat flow occurs whenever temperature, velocity or phase angle gradients are present. Steady enthalpy flow is reduced by heat transfer and is scaled by the product of the Prandtl and Valensi numbers. Particle velocities from a smoke-wire experiment were compared with predictions for the basic and orifice pulse tube configurations. The theory accurately predicted the observed steady streaming.
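
    In analyses of this kind the fields are expanded in the small parameter \varepsilon = 1/\mathrm{St} (the inverse Strouhal number), and the steady streaming emerges from the time average of the second-order terms; schematically (a generic form assumed here, not the paper's exact equations):

      u = u_0 + \varepsilon \, u_1 + \varepsilon^2 u_2 + \cdots ,
      \qquad
      u_{\mathrm{stream}} = \overline{u_2} ,
      \qquad
      \varepsilon = \mathrm{St}^{-1} ,

    since the time average of the first-order oscillatory field vanishes, \overline{u_1} = 0.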

  12. Integrating complex business processes for knowledge-driven clinical decision support systems.

    PubMed

    Kamaleswaran, Rishikesan; McGregor, Carolyn

    2012-01-01

    This paper presents in detail the component of the Complex Business Process for Stream Processing framework that is responsible for integrating complex business processes to enable knowledge-driven Clinical Decision Support System (CDSS) recommendations. CDSSs aid the clinician in supporting the care of patients by providing accurate data analysis and evidence-based recommendations. However, the incorporation of a dynamic knowledge-management system that supports the definition and enactment of complex business processes and real-time data streams has not been researched. In this paper we discuss the process web service as an innovative method of providing contextual information to a real-time data stream processing CDSS.

  13. Method for processing aqueous wastes

    DOEpatents

    Pickett, John B.; Martin, Hollis L.; Langton, Christine A.; Harley, Willie W.

    1993-01-01

    A method for treating waste water such as that from an industrial processing facility comprising the separation of the waste water into a dilute waste stream and a concentrated waste stream. The concentrated waste stream is treated chemically to enhance precipitation and then allowed to separate into a sludge and a supernate. The supernate is skimmed or filtered from the sludge and blended with the dilute waste stream to form a second dilute waste stream. The sludge remaining is mixed with cementitious material, rinsed to dissolve soluble components, then pressed to remove excess water and dissolved solids before being allowed to cure. The dilute waste stream is also chemically treated to decompose carbonate complexes and metal ions and then mixed with cationic polymer to cause the precipitated solids to flocculate. Filtration of the flocculant removes sufficient solids to allow the waste water to be discharged to the surface of a stream. The filtered material is added to the sludge of the concentrated waste stream. The method is also applicable to the treatment and removal of soluble uranium from aqueous streams, such that the treated stream may be used as a potable water supply.

  14. Parallel efficient rate control methods for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image, split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated, in order to stop the execution prematurely and save time. However, none of them was designed with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
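
    PCRD-Opt is essentially a Lagrangian truncation: each code block contributes candidate truncation points (cumulative rate, distortion), points off the convex hull are discarded, and every block is cut where its rate-distortion slope falls below a global lambda chosen (for example, by bisection) to meet the byte budget. A single-block flavor of that selection, with made-up numbers, might look like:

      def pick_truncation(points, lam):
          """points: [(cumulative_rate_bytes, distortion), ...] for one code
          block, starting at (0, D0), with rate increasing and distortion
          decreasing. Extend the kept prefix while the distortion saved per
          extra byte is at least lam; on a convex hull the slopes decrease
          monotonically, so this greedy cut is optimal."""
          cut = 0
          for i in range(1, len(points)):
              d_rate = points[i][0] - points[cut][0]
              d_dist = points[cut][1] - points[i][1]
              if d_rate > 0 and d_dist / d_rate >= lam:
                  cut = i
          return cut

      # toy truncation points on a convex hull (made-up numbers)
      pts = [(0, 100.0), (10, 60.0), (25, 35.0), (50, 25.0)]
      print(pick_truncation(pts, lam=1.0))   # -> 2; the last segment saves
                                             #    only 0.4 distortion per byte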

  15. The mathematical theory of signal processing and compression-designs

    NASA Astrophysics Data System (ADS)

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, also known as source coding. Source coding is concerned with compressing signal-source memory space, while processor coding deals with compressing signal-processor computational time. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  16. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams, and that the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  17. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system under those same parameters. The gathered optically sampled images are then processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. Image quality is assessed by just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment data sets can be validated against each other. The main conclusions are: image post-processing can improve image quality, even for lossy-compressed images, although image quality at higher compression ratios improves less than at lower ratios; and with the proposed post-processing method, image quality is better when the camera MTF lies within a small range.

  18. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  19. Geomorphic variation in riparian tree mortality and stream coarse woody debris recruitment from record flooding in a coastal plain stream

    Treesearch

    Brian J. Palik; Stephen W. Golladay; P. Charles Goebel; Brad W. Taylor

    1998-01-01

    Large floods are an important process controlling the structure and function of stream ecosystems. One of the ways floods affect streams is through the recruitment of coarse woody debris from stream-side forests. Stream valley geomorphology may mediate this interaction by altering flood velocity, depth, and duration. Little research has examined how floods and...

  20. A Review of Alfvénic Turbulence in High-Speed Solar Wind Streams: Hints From Cometary Plasma Turbulence

    NASA Astrophysics Data System (ADS)

    Tsurutani, Bruce T.; Lakhina, Gurbax S.; Sen, Abhijit; Hellinger, Petr; Glassmeier, Karl-Heinz; Mannucci, Anthony J.

    2018-04-01

    Solar wind turbulence within high-speed streams is reviewed from the point of view of embedded single nonlinear Alfvén wave cycles, discontinuities, magnetic decreases (MDs), and shocks. For comparison and guidance, cometary plasma turbulence is also briefly reviewed. It is demonstrated that cometary nonlinear magnetosonic waves phase-steepen, with a right-hand circularly polarized foreshortened front and an elongated, compressive trailing edge. The former part is a form of "wave breaking" and the latter that of "period doubling." Interplanetary nonlinear Alfvén waves, which are arc polarized, have a 180° foreshortened front and an elongated trailing edge. Alfvén waves have polarizations different from those of cometary magnetosonic waves, indicating that helicity is a durable feature of plasma turbulence. Interplanetary Alfvén waves are noted to be spherical waves, suggesting the possibility of additional local generation. They kinetically dissipate, forming MDs, indicating that the solar wind is partially "compressive" and static. The ~2 MeV protons can nonresonantly interact with MDs, leading to rapid cross-field (~5.5% Bohm) diffusion. The possibility of local (~1 AU) generation of Alfvén waves may make it difficult to forecast High-Intensity, Long-Duration AE Activity and relativistic magnetospheric electrons with great accuracy. The future Solar Orbiter and Solar Probe Plus missions should be able to not only test these ideas but also extend our knowledge of plasma turbulence evolution.

  1. A new neural framework for visuospatial processing.

    PubMed

    Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Mishkin, Mortimer

    2011-04-01

    The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.

  2. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.
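
    Of the evaluation criteria listed, the contingency-matrix comparison of a classification against ground truth is the simplest to illustrate. The sketch below (NumPy/SciPy rather than the original FORTRAN IV, with invented class labels) builds the matrix and derives a chi-square statistic from it.

      import numpy as np
      from scipy.stats import chi2_contingency

      truth = np.array([0, 0, 1, 1, 2, 2, 2, 1])   # ground-truth classes (invented)
      pred  = np.array([0, 1, 1, 1, 2, 2, 1, 1])   # classifier output   (invented)

      k = max(truth.max(), pred.max()) + 1
      table = np.zeros((k, k), dtype=int)          # contingency matrix
      np.add.at(table, (truth, pred), 1)           # count (truth, pred) pairs

      chi2, p, dof, _ = chi2_contingency(table)
      print(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")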

  3. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed-Solomon-Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed-Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed-Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
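
    The interleaver in such a concatenated scheme spreads channel burst errors across many Reed-Solomon code words, so the outer decoder sees only scattered symbol errors. A minimal block interleaver (symbols written row-wise, read column-wise) is sketched below; the depth and width are arbitrary choices, not the mission's parameters.

      import numpy as np

      def interleave(symbols, rows, cols):
          """Write row-wise, read column-wise; a burst of up to `rows`
          consecutive channel errors then lands in `rows` different
          code words."""
          return np.asarray(symbols).reshape(rows, cols).T.ravel()

      def deinterleave(symbols, rows, cols):
          """Invert interleave() by reshaping in the transposed order."""
          return np.asarray(symbols).reshape(cols, rows).T.ravel()

      data = np.arange(12)                   # 12 symbols, e.g. 3 RS words of 4
      tx = interleave(data, rows=3, cols=4)
      assert np.array_equal(deinterleave(tx, rows=3, cols=4), data)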

  4. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
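
    The exact pipeline (including the zero-padding replacement for thresholding and quantization) is not given in this record; the sketch below shows only the basic combination it describes: one DWT level with PyWavelets, then a 2-D DCT on the approximation subband. The wavelet choice is assumed and all coding steps are omitted.

      import numpy as np
      import pywt
      from scipy.fft import dctn, idctn

      def hybrid_forward(img, wavelet='haar'):
          """One DWT level, then 2-D DCT on the low-frequency subband."""
          cA, details = pywt.dwt2(img.astype(float), wavelet)
          return dctn(cA, norm='ortho'), details

      def hybrid_inverse(cA_dct, details, wavelet='haar'):
          """Invert the DCT, then the DWT."""
          cA = idctn(cA_dct, norm='ortho')
          return pywt.idwt2((cA, details), wavelet)

      img = np.random.default_rng(0).integers(0, 256, (64, 64))
      coeffs, det = hybrid_forward(img)
      rec = hybrid_inverse(coeffs, det)
      print(np.abs(rec - img).max())         # ~0: the transform pair is invertible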

  5. Both vision-for-perception and vision-for-action follow Weber's law at small object sizes, but violate it at larger sizes.

    PubMed

    Bruno, Nicola; Uccelli, Stefano; Viviani, Eva; de'Sperati, Claudio

    2016-10-01

    According to a previous report, the visual coding of size does not obey Weber's law when aimed at guiding a grasp (Ganel et al., 2008a). This result has been interpreted as evidence for a fundamental difference between sensory processing in vision-for-perception, which needs to compress a wide range of physical objects to a restricted range of percepts, and vision-for-action when applied to the much narrower range of graspable and reachable objects. We compared finger aperture in a motor task (precision grip) and perceptual task (cross modal matching or "manual estimation" of the object's size). Crucially, we tested the whole range of graspable objects. We report that both grips and estimations clearly violate Weber's law with medium-to-large objects, but are essentially consistent with Weber's law with smaller objects. These results differ from previous characterizations of perception-action dissociations in the precision of representations of object size. Implications for current functional interpretations of the dorsal and ventral processing streams in the human visual system are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Evaluation of Brine Processing Technologies for Spacecraft Wastewater

    NASA Technical Reports Server (NTRS)

    Shaw, Hali L.; Flynn, Michael; Wisniewski, Richard; Lee, Jeffery; Jones, Harry; Delzeit, Lance; Shull, Sarah; Sargusingh, Miriam; Beeler, David; Howard, Jeanie

    2015-01-01

    Brine drying systems may be used in spaceflight. Brine processing technologies offer several advantages for long-duration human missions, including reduced resupply requirements and high water recovery ratios. The objective of this project was to evaluate four technologies for the drying of spacecraft water recycling system brine byproducts. The technologies tested were NASA's Forward Osmosis Brine Drying (FOBD), Paragon's Ionomer Water Processor (IWP), NASA's Brine Evaporation Bag (BEB) System, and UMPQUA's Ultrasonic Brine Dewatering System (UBDS). The purpose of this work was to evaluate the hardware using feed streams composed of brines similar to those generated on board the International Space Station (ISS) and during future exploration missions. The brine formulations used for testing were the ISS Alternate Pretreatment and Solution 2 (Alt Pretreat). The brines were generated using the Wiped-film Rotating-disk (WFRD) evaporator, a vapor compression distillation system used to simulate the function of the ISS Urine Processor Assembly (UPA). Each system was evaluated based on the results from testing and Equivalent System Mass (ESM) calculations. A Quality Function Deployment (QFD) matrix was also developed as a method to compare the different technologies based on customer and engineering requirements.
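
    Equivalent System Mass folds a technology's volume, power, cooling, and crew-time demands into a single mass-like figure of merit via mission-specific equivalency factors. A generic form of the calculation is sketched below; the factors and inputs are invented placeholders, not the study's values.

      def esm(mass_kg, vol_m3, power_kw, cooling_kw, crew_hr,
              v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=1.0):
          """ESM = M + V*Veq + P*Peq + C*Ceq + CT*CTeq. The equivalency
          factors (kg/m^3, kg/kW, kg/kW, kg/crew-hr) are mission-dependent;
          the defaults here are invented placeholders."""
          return (mass_kg + vol_m3 * v_eq + power_kw * p_eq
                  + cooling_kw * c_eq + crew_hr * ct_eq)

      # invented inputs for one hypothetical brine dryer
      print(f"{esm(50.0, 0.3, 0.5, 0.5, 10.0):.1f} kg-equivalent")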

  7. The ecology and biogeochemistry of stream biofilms.

    PubMed

    Battin, Tom J; Besemer, Katharina; Bengtsson, Mia M; Romani, Anna M; Packmann, Aaron I

    2016-04-01

    Streams and rivers form dense networks, shape the Earth's surface and, in their sediments, provide an immensely large surface area for microbial growth. Biofilms dominate microbial life in streams and rivers, drive crucial ecosystem processes and contribute substantially to global biogeochemical fluxes. In turn, water flow and related deliveries of nutrients and organic matter to biofilms constitute major constraints on microbial life. In this Review, we describe the ecology and biogeochemistry of stream biofilms and highlight the influence of physical and ecological processes on their structure and function. Recent advances in the study of biofilm ecology may pave the way towards a mechanistic understanding of the effects of climate and environmental change on stream biofilms and the biogeochemistry of stream ecosystems.

  8. Cooling and solidification of heavy hydrocarbon liquid streams

    DOEpatents

    Antieri, Salvatore J.; Comolli, Alfred G.

    1983-01-01

    A process and apparatus for cooling and solidifying a stream of heavy hydrocarbon material normally boiling above about 850 °F, such as vacuum bottoms material from a coal liquefaction process. The hydrocarbon stream is dropped into a liquid bath, preferably water, which contains a screw conveyor device and the stream is rapidly cooled, solidified and broken therein to form discrete elongated particles. The solid extrudates or prills are then dried separately to remove substantially all surface moisture, and passed to further usage.

  9. Electrophysiological Evidence for Ventral Stream Deficits in Schizophrenia Patients

    PubMed Central

    Plomp, Gijs; Roinishvili, Maya; Chkonia, Eka; Kapanadze, George; Kereselidze, Maia; Brand, Andreas; Herzog, Michael H.

    2013-01-01

    Schizophrenic patients suffer from many deficits including visual, attentional, and cognitive ones. Visual deficits are of particular interest because they are at the fore-end of information processing and can provide clear examples of interactions between sensory, perceptual, and higher cognitive functions. Visual deficits in schizophrenic patients are often attributed to impairments in the dorsal (where) rather than the ventral (what) stream of visual processing. We used a visual-masking paradigm in which patients and matched controls discriminated small vernier offsets. We analyzed the evoked electroencephalography (EEG) responses and applied distributed electrical source imaging techniques to estimate activity differences between conditions and groups throughout the brain. Compared with controls, patients showed strongly reduced discrimination accuracy, confirming previous work. The behavioral deficits corresponded to pronounced decreases in the evoked EEG response at around 200 ms after stimulus onset. At this latency, patients showed decreased activity for targets in left parietal cortex (dorsal stream), but the decrease was most pronounced in lateral occipital cortex (in the ventral stream). These deficiencies occurred at latencies that reflect object processing and fine shape discriminations. We relate the reduced ventral stream activity to deficient top-down processing of target stimuli and provide a framework for relating the commonly observed dorsal stream deficiencies with the currently observed ventral stream deficiencies. PMID:22258884

  10. Electrophysiological evidence for ventral stream deficits in schizophrenia patients.

    PubMed

    Plomp, Gijs; Roinishvili, Maya; Chkonia, Eka; Kapanadze, George; Kereselidze, Maia; Brand, Andreas; Herzog, Michael H

    2013-05-01

    Schizophrenic patients suffer from many deficits including visual, attentional, and cognitive ones. Visual deficits are of particular interest because they are at the fore-end of information processing and can provide clear examples of interactions between sensory, perceptual, and higher cognitive functions. Visual deficits in schizophrenic patients are often attributed to impairments in the dorsal (where) rather than the ventral (what) stream of visual processing. We used a visual-masking paradigm in which patients and matched controls discriminated small vernier offsets. We analyzed the evoked electroencephalography (EEG) responses and applied distributed electrical source imaging techniques to estimate activity differences between conditions and groups throughout the brain. Compared with controls, patients showed strongly reduced discrimination accuracy, confirming previous work. The behavioral deficits corresponded to pronounced decreases in the evoked EEG response at around 200 ms after stimulus onset. At this latency, patients showed decreased activity for targets in left parietal cortex (dorsal stream), but the decrease was most pronounced in lateral occipital cortex (in the ventral stream). These deficiencies occurred at latencies that reflect object processing and fine shape discriminations. We relate the reduced ventral stream activity to deficient top-down processing of target stimuli and provide a framework for relating the commonly observed dorsal stream deficiencies with the currently observed ventral stream deficiencies.

  11. 40 CFR 63.11940 - What continuous monitoring requirements must I meet for control devices required to install CPMS...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... consistent with the manufacturer's recommendations within 15 days or by the next time any process vent stream... the manufacturer's recommendations within 15 days or by the next time any process vent stream is...) Determine gas stream flow using the design blower capacity, with appropriate adjustments for pressure drop...

  12. Carbon dioxide removal process

    DOEpatents

    Baker, Richard W.; Da Costa, Andre R.; Lokhandwala, Kaaeid A.

    2003-11-18

    A process and apparatus for separating carbon dioxide from gas, especially natural gas, that also contains C3+ hydrocarbons. The invention uses two or three membrane separation steps, optionally in conjunction with cooling/condensation under pressure, to yield a lighter, sweeter product natural gas stream, and/or a carbon dioxide stream of reinjection quality and/or a natural gas liquids (NGL) stream.

  13. Interaction of Substrate and Nutrient Availability on wood Biofilm Processes in Streams

    Treesearch

    Jennifer L. Tank; J.R. Webster

    1998-01-01

    We examined the effect of decomposing leaf litter and dissolved inorganic nutrients on the heterotrophic biofilm of submerged wood in streams with and without leaves. Leaf litter was excluded from one headwater stream in August 1993 at Coweeta Hydrologic Laboratory in the southern Appalachian Mountains. We compared microbial processes on wood in the litter-excluded...

  14. Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.

    PubMed

    Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao

    2017-07-01

    In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were prepared for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of these data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through the abovementioned conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via the filtration and digital merger compression algorithms. Therefore, the overall compression ratio exceeds 99.36%. The capacity of the formed QR code is around 0.5 KB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can thus form a QR code after data processing, and the QR code can serve as a carrier of P. ginseng authenticity and quality information. This study provides a theoretical basis for the development of a quality traceability system for traditional Chinese medicine based on two-dimensional codes.
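
    The GATC2Bytes algorithm is not specified in this record beyond its name; one plausible reading, packing each base into 2 bits (four bases per byte) before Zlib compression, is sketched below as an assumption, not the authors' published code.

      import zlib

      CODE = {'G': 0, 'A': 1, 'T': 2, 'C': 3}

      def pack_bases(seq):
          """Pack 4 bases per byte (2 bits each), then deflate with zlib."""
          bits = 0
          buf = bytearray()
          for i, base in enumerate(seq):
              bits = (bits << 2) | CODE[base]
              if i % 4 == 3:
                  buf.append(bits)
                  bits = 0
          if len(seq) % 4:                    # flush a partial final byte
              buf.append(bits << 2 * (4 - len(seq) % 4))
          return zlib.compress(bytes(buf))

      its2 = "GATTACA" * 40                   # stand-in for a real ITS2 sequence
      packed = pack_bases(its2)
      print(len(its2), '->', len(packed), 'bytes')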

  15. Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing-aid processing for children and adults with hearing loss

    PubMed Central

    Brennan, Marc A.; McCreery, Ryan; Kopun, Judy; Hoover, Brenda; Alexander, Joshua; Lewis, Dawna; Stelmachowicz, Patricia G.

    2014-01-01

    Background Preference for speech and music processed with nonlinear frequency compression and two controls (restricted and extended bandwidth hearing-aid processing) was examined in adults and children with hearing loss. Purpose Determine if stimulus type (music, sentences), age (children, adults) and degree of hearing loss influence listener preference for nonlinear frequency compression, restricted bandwidth and extended bandwidth. Research Design Within-subject, quasi-experimental study. Using a round-robin procedure, participants listened to amplified stimuli that were 1) frequency-lowered using nonlinear frequency compression, 2) low-pass filtered at 5 kHz to simulate the restricted bandwidth of conventional hearing aid processing, or 3) low-pass filtered at 11 kHz to simulate extended bandwidth amplification. The examiner and participants were blinded to the type of processing. Using a two-alternative forced-choice task, participants selected the preferred music or sentence passage. Study Sample Sixteen children (8–16 years) and 16 adults (19–65 years) with mild-to-severe sensorineural hearing loss. Intervention All subjects listened to speech and music processed using a hearing-aid simulator fit to the Desired Sensation Level algorithm v.5.0a (Scollie et al, 2005). Results Children and adults did not differ in their preferences. For speech, participants preferred extended bandwidth to both nonlinear frequency compression and restricted bandwidth. Participants also preferred nonlinear frequency compression to restricted bandwidth. Preference was not related to degree of hearing loss. For music, listeners did not show a preference. However, participants with greater hearing loss preferred nonlinear frequency compression to restricted bandwidth more than participants with less hearing loss. Conversely, participants with greater hearing loss were less likely to prefer extended bandwidth to restricted bandwidth. Conclusion Both age groups preferred access to high frequency sounds, as demonstrated by their preference for either the extended bandwidth or nonlinear frequency compression conditions over the restricted bandwidth condition. Preference for extended bandwidth can be limited for those with greater degrees of hearing loss, but participants with greater hearing loss may be more likely to prefer nonlinear frequency compression. Further investigation using participants with more severe hearing loss may be warranted. PMID:25514451

  16. Process and application of shock compression by nanosecond pulses of frequency-doubled Nd:YAG laser

    NASA Astrophysics Data System (ADS)

    Sano, Yuji; Kimura, Motohiko; Mukai, Naruhiko; Yoda, Masaki; Obata, Minoru; Ogisu, Tatsuki

    2000-02-01

    The authors have developed a new laser-induced shock compression process that introduces a residual compressive stress on a material surface, which is effective for preventing stress corrosion cracking (SCC) and enhancing the fatigue strength of metals. The process is unique and beneficial: it requires no surface pre-conditioning, whereas the conventional process requires a so-called sacrificial layer to protect the surface from damage, and it can be applied freely to water-immersed components because it uses the water-penetrable green light of a frequency-doubled Nd:YAG laser. The process has the potential to open up new high-power laser applications in manufacturing and maintenance technologies. Laser-induced shock compression processing (LSP) can be used to convert a residual stress field from tensile to compressive. To understand the physics and optimize the process, the propagation of the shock wave generated by the impulse of laser irradiation and the dynamic response of the material were analyzed by time-dependent elasto-plastic calculations with a finite element program, using the laser-induced plasma pressure as an external load. The analysis shows that a permanent strain and a residual compressive stress remain after the passage of a shock wave whose amplitude exceeds the yield strength of the material. A practical system implementing LSP was designed, manufactured, and tested to confirm its applicability to core components of light water reactors (LWRs). The system accesses the target component and remotely delivers laser pulses to the heat-affected zone (HAZ) along weld lines. Various functional tests were conducted in a full-scale mockup facility in which remote maintenance work in a reactor vessel could be simulated. The results showed that the system remotely accessed the target weld lines and successfully introduced a residual compressive stress. After sufficient training of operational personnel, the system was applied to the core shroud of an existing nuclear power plant.
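
    For a rough sense of the plasma pressures driving the process, a widely used confined-ablation estimate (the Fabbro model, which is not from this paper) relates peak pressure to laser intensity and the reduced acoustic impedance of the water/metal interface. A sketch with assumed material constants:

        import math

        # Typical handbook acoustic impedances in g cm^-2 s^-1 (assumed values).
        Z_WATER = 1.48e5   # water confinement layer
        Z_STEEL = 4.6e6    # stainless-steel target

        def peak_pressure_gpa(intensity_gw_cm2, alpha=0.2):
            # Fabbro-style confined-plasma estimate: P scales with the square root
            # of laser intensity and of the reduced impedance of the interface.
            # alpha, the fraction of energy heating the plasma, is assumed.
            z = 2.0 / (1.0 / Z_WATER + 1.0 / Z_STEEL)
            return (0.01 * math.sqrt(alpha / (2.0 * alpha + 3.0))
                    * math.sqrt(z) * math.sqrt(intensity_gw_cm2))

        # Residual compression requires the shock amplitude to exceed the
        # material's dynamic yield strength, as the FEM analysis above shows.
        print(f"~{peak_pressure_gpa(10.0):.1f} GPa at 10 GW/cm^2")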

  17. Liquid additives for particulate emissions control

    DOEpatents

    Durham, Michael Dean; Schlager, Richard John; Ebner, Timothy George; Stewart, Robin Michele; Hyatt, David E.; Bustard, Cynthia Jean; Sjostrom, Sharon

    1999-01-01

    The present invention discloses a process for removing undesired particles from a gas stream, including the steps of contacting a composition containing an adhesive with the gas stream; collecting the undesired particles and adhesive on a collection surface to form an aggregate comprising the adhesive and undesired particles; and removing the aggregate from the collection surface. The composition may be atomized and injected into the gas stream, and may include a liquid that vaporizes in the gas stream; after the liquid vaporizes, adhesive particles are entrained in the gas stream. The process may be applied to electrostatic precipitators and filtration systems to improve the collection efficiency of undesired particles.
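
    For a sense of the collection-efficiency claim, electrostatic precipitator efficiency is commonly estimated with the Deutsch-Anderson relation (a standard textbook model, not part of the patent). A sketch with illustrative numbers only:

        import math

        def esp_efficiency(w_m_s, plate_area_m2, gas_flow_m3_s):
            # Deutsch-Anderson estimate: eta = 1 - exp(-w * A / Q), where w is
            # the effective particle migration velocity toward the collector.
            return 1.0 - math.exp(-w_m_s * plate_area_m2 / gas_flow_m3_s)

        # Assumed values, showing how efficiency responds if an additive were
        # to raise the effective migration velocity w.
        print(esp_efficiency(0.05, 5000.0, 100.0))  # ~0.92
        print(esp_efficiency(0.08, 5000.0, 100.0))  # ~0.98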

  18. Combinations of NIR, Raman spectroscopy and physicochemical measurements for improved monitoring of solvent extraction processes using hierarchical multivariate analysis models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nee, K.; Bryan, S.; Levitskaia, T.

    The reliability of chemical processes can be greatly improved by implementing inline monitoring systems, and combining multivariate analysis with non-destructive sensors can enhance a process without interfering with its operation. Here, we present hierarchical models, using both principal component analysis and partial least squares analysis, developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data, as well as conductivity under variable temperature conditions, were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprising the complementary techniques NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
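
    A minimal scikit-learn sketch of the PCA-then-PLS idea described above, with synthetic stand-in spectra; the 380/95 sample split and the five modeled components come from the abstract, while the feature counts, component counts, and all data values are assumptions:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)

        # Stand-ins for fused NIR + Raman + conductivity features (sizes assumed).
        X_train = rng.normal(size=(380, 1200))
        X_val = rng.normal(size=(95, 1200))
        # Five modeled components: NO3-, H+, Nd3+, Na+, ionic strength.
        Y_train = rng.normal(size=(380, 5))

        # PCA truncates the data to its dominant features; PLS regresses on them.
        model = make_pipeline(PCA(n_components=10), PLSRegression(n_components=5))
        model.fit(X_train, Y_train)
        Y_pred = model.predict(X_val)  # external validation predictions
        print(Y_pred.shape)            # (95, 5)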

  19. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and cannot meet the real-time constraint on a general-purpose CPU. In this paper, the equations of the OpenCV GMM algorithm are optimized so that a lightweight, low-power implementation of the algorithm is obtained. The reported performance also results from the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
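
    In software, the OpenCV GMM variant that the cores implement is exposed as the MOG2 background subtractor. A minimal Python reference sketch, where the file name and parameter values are placeholders:

        import cv2

        # OpenCV's GMM background model (MOG2); parameter values are placeholders.
        subtractor = cv2.createBackgroundSubtractorMOG2(
            history=500, varThreshold=16, detectShadows=False)

        cap = cv2.VideoCapture("hd_1080p_sample.mp4")  # placeholder 1920x1080 clip
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Per pixel, MOG2 matches the sample against a mixture of Gaussians
            # and labels it foreground when no background component explains it.
            fg_mask = subtractor.apply(frame)
        cap.release()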

  20. Combinations of NIR, Raman spectroscopy and physicochemical measurements for improved monitoring of solvent extraction processes using hierarchical multivariate analysis models

    DOE PAGES

    Nee, K.; Bryan, S.; Levitskaia, T.; ...

    2017-12-28

    The reliability of chemical processes can be greatly improved by implementing inline monitoring systems, and combining multivariate analysis with non-destructive sensors can enhance a process without interfering with its operation. Here, we present hierarchical models, using both principal component analysis and partial least squares analysis, developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data, as well as conductivity under variable temperature conditions, were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprising the complementary techniques NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
