Science.gov

Sample records for compressible hydrodynamics codes

  1. VH-1: Multidimensional ideal compressible hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Hawley, John; Blondin, John; Lindahl, Greg; Lufkin, Eric

    2012-04-01

    VH-1 is a multidimensional ideal compressible hydrodynamics code written in FORTRAN for use on any computing platform, from desktop workstations to supercomputers. It uses a Lagrangian remap version of the Piecewise Parabolic Method developed by Paul Woodward and Phil Colella in their 1984 paper. VH-1 comes in a variety of versions, from a simple one-dimensional serial variant to a multi-dimensional version scalable to thousands of processors.
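
    The Lagrangian-remap idea is easiest to see in one dimension: a Lagrangian step moves cell edges with the flow (so cell masses stay fixed), then a remap step conservatively deposits the displaced cells back onto the fixed grid. The sketch below is our own piecewise-constant simplification, not VH-1 source code; PPM's defining feature is replacing the flat profiles with monotonicity-limited parabolae.

      import numpy as np

      def lagrangian_remap_step(rho, u_edge, x_edge, dt):
          # Lagrangian phase: cell edges move with the local flow velocity,
          # so cell masses stay constant while cell volumes change.
          mass = rho * np.diff(x_edge)
          x_lag = x_edge + u_edge * dt
          # Remap phase: deposit each Lagrangian cell's mass back onto the
          # fixed grid in proportion to geometric overlap (O(n^2) for clarity).
          rho_new = np.zeros_like(rho)
          for j in range(rho.size):
              xl, xr = x_lag[j], x_lag[j + 1]
              dens = mass[j] / (xr - xl)
              for i in range(rho.size):
                  overlap = min(x_edge[i + 1], xr) - max(x_edge[i], xl)
                  if overlap > 0.0:
                      rho_new[i] += dens * overlap / (x_edge[i + 1] - x_edge[i])
          return rho_new

      n = 200
      x_edge = np.linspace(0.0, 1.0, n + 1)
      xc = 0.5 * (x_edge[:-1] + x_edge[1:])
      rho = 1.0 + np.exp(-((xc - 0.3) / 0.05) ** 2)   # Gaussian density pulse
      u_edge = np.full(n + 1, 1.0)                    # uniform advection speed
      for _ in range(100):
          rho = lagrangian_remap_step(rho, u_edge, x_edge, dt=0.002)
      # after t = 0.2 the pulse sits near x = 0.5; interior mass is conserved

    Because masses are carried unchanged through both phases, the scheme conserves mass to round-off; the flat reconstruction is diffusive, which is precisely what PPM's parabolic profiles repair.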

  2. Pencil: Finite-difference Code for Compressible Hydrodynamic Flows

    NASA Astrophysics Data System (ADS)

    Brandenburg, Axel; Dobler, Wolfgang

    2010-10-01

    The Pencil code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields. It is highly modular and can easily be adapted to different types of problems. The code runs efficiently under MPI on massively parallel shared- or distributed-memory computers, such as large Beowulf clusters. The Pencil code is primarily designed to deal with weakly compressible turbulent flows. To achieve good parallelization, explicit (as opposed to compact) finite differences are used. Typical scientific targets include driven MHD turbulence in a periodic box, convection in a slab with non-periodic upper and lower boundaries, a convective star embedded in a fully nonperiodic box, accretion disc turbulence in the shearing sheet approximation, self-gravity, non-local radiation transfer, dust particle evolution with feedback on the gas, etc. A range of artificial viscosity and diffusion schemes can be invoked to deal with supersonic flows. For direct simulations, regular viscosity and diffusion are used. The code is written in well-commented Fortran90.
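
    The phrase "explicit (as opposed to compact) finite differences" is the key to the parallelization claim: an explicit stencil needs only a fixed-width halo of ghost zones exchanged between MPI subdomains, whereas a compact scheme couples every point along a grid line. A minimal sketch of a sixth-order explicit centered first derivative, the kind of stencil such high-order codes use (our illustration, not Pencil source code):

      import numpy as np

      def ddx6(f, dx):
          # Sixth-order centered first derivative on a periodic grid; the
          # stencil reaches only 3 cells per side, so MPI domain
          # decomposition needs just a thin ghost-zone halo exchange.
          return (-np.roll(f, 3) + 9*np.roll(f, 2) - 45*np.roll(f, 1)
                  + 45*np.roll(f, -1) - 9*np.roll(f, -2) + np.roll(f, -3)) / (60.0*dx)

      x = np.linspace(0.0, 2*np.pi, 128, endpoint=False)
      err = np.max(np.abs(ddx6(np.sin(x), x[1] - x[0]) - np.cos(x)))
      print(f"{err:.1e}")   # ~1e-10, shrinking as dx**6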

  3. Reliable estimation of shock position in shock-capturing compressible hydrodynamics codes

    SciTech Connect

    Nelson, Eric M

    2008-01-01

    The displacement method for estimating shock position in a shock-capturing compressible hydrodynamics code is introduced. Common estimates use simulation data within the captured shock, but the displacement method uses data behind the shock, making the estimate consistent with and as reliable as estimates of material parameters obtained from averages or fits behind the shock. The displacement method is described in the context of a steady shock in a one-dimensional Lagrangian hydrodynamics code, and demonstrated on a piston problem and a spherical blast wave. The displacement method's estimates of shock position are much better than common estimates in such applications.

  4. Compressible Astrophysics Simulation Code

    Energy Science and Technology Software Center (ESTSC)

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  5. CASTRO: A New AMR Radiation-Hydrodynamics Code for Compressible Astrophysics

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Bell, J.; Day, M.; Howell, L.; Joggerst, C.; Myra, E.; Nordhaus, J.; Singer, M.; Zingale, M.

    2010-01-01

    CASTRO is a new, multi-dimensional, Eulerian AMR radiation-hydrodynamics code designed for astrophysical simulations. The code includes routines for various equations of state and nuclear reaction networks, and can be used with Cartesian, cylindrical or spherical coordinates. Time integration of the hydrodynamics equations is based on a higher-order, unsplit Godunov scheme. Self-gravity can be calculated on the adaptive hierarchy using a simple monopole approximation or a full Poisson solve for the potential. CASTRO includes gray and multigroup radiation diffusion. Multi-species neutrino diffusion for supernovae is nearing completion. The adaptive framework of CASTRO is based on a time-evolving hierarchy of nested rectangular grids with refinement in both space and time; the entire implementation is designed to run on thousands of processors. We describe in more detail how CASTRO is implemented and can be used for a number of different simulations. Our initial applications of CASTRO include Type Ia and Type II supernovae. This work has been supported by the SciDAC Program of the DOE Office of Mathematics, Information, and Computational Sciences under contracts No. DE-AC02-05CH11231 (LBNL), No. DE-FC02-06ER41438 (UCSC), and No. DE-AC52-07NA27344 (LLNL); and LLNL contracts B582735 and B574691 (Stony Brook). Calculations shown were carried out on Franklin at NERSC.

  6. Shadowfax: Moving mesh hydrodynamical integration code

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Bert

    2016-05-01

    Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).

  7. TORUS: Radiation transport and hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Harries, Tim

    2014-04-01

    TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard GNU makefile. The code is parallelized using both MPI and OpenMP, and can use these parallel sections either separately or in a hybrid mode.

  8. An implicit Smooth Particle Hydrodynamic code

    SciTech Connect

    Charles E. Knapp

    2000-04-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. The results of a number of test cases are then discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single gas jet it has been demonstrated that the implicit code can run a problem in much less time than the explicit code. Although that problem was very unphysical, it does demonstrate the potential of the implicit code, and is a first step toward a useful implicit SPH code.

  9. Production code control system for hydrodynamics simulations

    SciTech Connect

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (pCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the pCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We will also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach pCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. pCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach pCCS about new codes, or changes to existing codes.

  10. Radiation hydrodynamics integrated in the PLUTO code

    NASA Astrophysics Data System (ADS)

    Kolb, Stefan M.; Stute, Matthias; Kley, Wilhelm; Mignone, Andrea

    2013-11-01

    Aims: The transport of energy through radiation is very important in many astrophysical phenomena. In dynamical problems the time-dependent equations of radiation hydrodynamics have to be solved. We present a newly developed radiation-hydrodynamics module specifically designed for the versatile magnetohydrodynamic (MHD) code PLUTO. Methods: The solver is based on the flux-limited diffusion approximation in the two-temperature approach. All equations are solved in the co-moving frame in the frequency-independent (gray) approximation. The hydrodynamics is solved by the different Godunov schemes implemented in PLUTO, and for the radiation transport we use a fully implicit scheme. The resulting system of linear equations is solved either using the successive over-relaxation (SOR) method (for testing purposes) or using matrix solvers that are available in the PETSc library. We state in detail the methodology and describe several test cases to verify the correctness of our implementation. The solver works in standard coordinate systems, such as Cartesian, cylindrical, and spherical, and also for non-equidistant grids. Results: We present a new radiation-hydrodynamics solver coupled to the MHD code PLUTO, a modern, versatile, and efficient module for treating complex radiation-hydrodynamical problems in astrophysics. As test cases, either purely radiative situations or full radiation-hydrodynamical setups (including radiative shocks and convection in accretion disks) were successfully studied. The new module scales very well on parallel computers using MPI. For problems in star or planet formation, we added the possibility of irradiation by a central source.
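
    In schemes of this type, the implicit radiation update reduces each time step to one sparse linear solve of a backward-Euler diffusion equation; that is the system handed to SOR or to PETSc. A minimal one-dimensional analogue (our sketch, not the PLUTO module), with SciPy's direct sparse solver standing in for the SOR/Krylov solver:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spsolve

      def diffusion_step(E, D, dx, dt):
          # Backward-Euler step for dE/dt = d/dx(D dE/dx): assemble the
          # tridiagonal system (I - dt*L) E_new = E_old and solve it.
          # Boundary rows are left as the identity (Dirichlet values held).
          n = E.size
          Df = 0.5 * (D[:-1] + D[1:])              # D at the n-1 cell faces
          w = dt / dx**2
          main = np.ones(n)
          main[1:-1] += w * (Df[:-1] + Df[1:])
          lo = np.zeros(n - 1); lo[:-1] = -w * Df[:-1]
          up = np.zeros(n - 1); up[1:] = -w * Df[1:]
          A = sp.diags([lo, main, up], [-1, 0, 1], format="csc")
          return spsolve(A, E)

      x = np.linspace(0.0, 1.0, 101)
      E = np.exp(-((x - 0.5) / 0.05) ** 2)         # initial radiation energy pulse
      for _ in range(50):
          # dt is ~20x the explicit stability limit; the implicit solve is
          # unconditionally stable, which is why the radiation step is implicit.
          E = diffusion_step(E, np.ones_like(x), x[1] - x[0], dt=1e-3)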

  11. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.

    2011-10-01

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.

  12. Building a Hydrodynamics Code with Kinetic Theory

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Bauer, Wolfgang; Colbry, Dirk; Pickett, Rodney; Strother, Terrance

    2013-08-01

    We report on the development of a test-particle based kinetic Monte Carlo code for large systems and its application to simulate matter in the continuum regime. Our code combines advantages of the Direct Simulation Monte Carlo and the Point-of-Closest-Approach methods to solve the collision integral of the Boltzmann equation. With that, we achieve a high spatial accuracy in simulations while maintaining computational feasibility when applying a large number of test-particles. The hybrid setup of our approach allows us to study systems which move in and out of the hydrodynamic regime, with low and high particle densities. To demonstrate our code's ability to reproduce hydrodynamic behavior we perform shock wave simulations and focus here on the Sedov blast wave test. The blast wave problem describes the evolution of a spherical expanding shock front and is an important verification problem for codes which are applied in astrophysical simulation, especially for approaches which aim to study core-collapse supernovae.
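
    The Sedov blast wave makes a good verification target because the shock radius follows a closed-form similarity law, R(t) = ξ0 (E t² / ρ0)^(1/5), with ξ0 a γ-dependent constant (approximately 1.03 for γ = 1.4 in spherical symmetry). A quick reference calculation to compare a code's shock front against (our sketch, using the textbook constants, not values from this paper):

      import numpy as np

      gamma = 1.4
      xi0 = 1.03                    # similarity constant, approx. value for gamma = 1.4
      E, rho0 = 1.0, 1.0            # blast energy and ambient density (dimensionless)

      t = np.array([0.01, 0.05, 0.10])
      R = xi0 * (E * t**2 / rho0) ** 0.2       # shock radius R(t)
      v_s = 0.4 * R / t                        # shock speed dR/dt = (2/5) R/t
      jump = (gamma + 1.0) / (gamma - 1.0)     # strong-shock density ratio = 6
      print(R, v_s, jump)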

  13. EUNHA: a New Cosmological Hydrodynamic Simulation Code

    NASA Astrophysics Data System (ADS)

    Shin, Jihye; Kim, Juhan; Kim, Sungsoo S.; Park, Changbom

    2014-06-01

    We develop a parallel cosmological hydrodynamic simulation code designed for the study of the formation and evolution of cosmological structures. The gravitational force is calculated using the TreePM method and the hydrodynamics is implemented based on smoothed particle hydrodynamics. The initial displacement and velocity of simulation particles are calculated according to second-order Lagrangian perturbation theory using the power spectra of dark matter and baryonic matter. The initial background temperature is given by Recfast, and the temperature fluctuations at the initial particle positions are assigned according to the adiabatic model. We use a time-limiter scheme over the individual time steps to capture shock-fronts and to ease the time-step tension between the shock and preshock particles. We also include the astrophysical gas processes of radiative heating/cooling, star formation, metal enrichment, and supernova feedback. We test the code in several standard cases such as one-dimensional Riemann problems, the Kelvin-Helmholtz instability, and the Sedov blast wave. Star formation on the galactic disk is investigated to check whether the Schmidt-Kennicutt relation is properly recovered. We also study the global star formation history at different simulation resolutions and compare them with observations.

  14. Superresonant instability of a compressible hydrodynamic vortex

    NASA Astrophysics Data System (ADS)

    Oliveira, Leandro A.; Cardoso, Vitor; Crispino, Luís C. B.

    2016-06-01

    We show that a purely circulating and compressible system, in an adiabatic regime of acoustic propagation, presents superresonant instabilities. To show the existence of these instabilities, we compute the quasinormal mode frequencies of this system numerically using two different frequency domain methods.

  15. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. III. MULTIGROUP RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.; Dolence, J.

    2013-01-15

    We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.

  16. CASTRO: A New Compressible Astrophysical Solver. III. Multigroup Radiation Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Howell, L.; Almgren, A.; Burrows, A.; Dolence, J.; Bell, J.

    2013-01-01

    We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.

  17. Compressible Lagrangian hydrodynamics without Lagrangian cells

    NASA Astrophysics Data System (ADS)

    Clark, Robert A.

    The partial differential equations (2.1, 2.2, and 2.3), along with the equation of state (2.4), which describe the time evolution of compressible fluid flow, can be solved without the use of a Lagrangian mesh. The method follows embedded fluid points and uses finite difference approximations to ∇P and ∇·u to update ρ, u, and e. We have demonstrated that the method can accurately calculate highly distorted flows without difficulty. The finite difference approximations are not unique, and improvements may be found in the near future. The neighbor selection is not unique, but the one being used at present appears to do an excellent job. The method could be directly extended to three dimensions. One drawback to the method is the failure to explicitly conserve mass, momentum, and energy. In fact, at any given time, the mass is not defined; we must perform an auxiliary calculation by integrating the density field over space to obtain mass, energy, and momentum. However, in all cases where we have done this, we have found the drift in these quantities to be no more than a few percent.

  18. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
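
    Progressive transmission falls out of subband coding naturally: the coarsest approximation band is sent first and each detail band refines it. A minimal sketch with the Haar filter pair (the paper evaluates more sophisticated filter banks and rate allocation; the function names here are ours):

      import numpy as np

      def haar_analysis(x, levels):
          # Split into one coarse approximation band plus `levels` detail bands.
          approx, details = x.astype(float), []
          for _ in range(levels):
              even, odd = approx[0::2], approx[1::2]
              details.append((even - odd) / np.sqrt(2.0))   # high-pass band
              approx = (even + odd) / np.sqrt(2.0)          # low-pass band
          return approx, details[::-1]                      # coarsest detail first

      def haar_synthesis(approx, details):
          # Reconstruct; zeroed detail bands yield a coarse progressive preview.
          for d in details:
              out = np.empty(2 * approx.size)
              out[0::2] = (approx + d) / np.sqrt(2.0)
              out[1::2] = (approx - d) / np.sqrt(2.0)
              approx = out
          return approx

      x = np.sin(np.linspace(0.0, 8*np.pi, 256))
      approx, details = haar_analysis(x, levels=3)
      preview = haar_synthesis(approx, [np.zeros_like(d) for d in details])  # coarse first pass
      exact = haar_synthesis(approx, details)                                # full refinement
      print(np.allclose(exact, x))   # True: perfect reconstruction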

  19. Motion-adaptive compressive coded apertures

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca

    2011-09-01

    This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.

  20. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are valued, but they occupy more storage space and consume more bandwidth when transferred over the Internet, so the study of image compression technology is necessary. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is described, a technology widely used to realize image compression. Second, a deeper understanding of the DCT is developed through Matlab, covering the process of image compression based on the DCT and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm for image compression, and further algorithms can be expected to produce compressed images of high quality; image compression technology is likely to be widely used in networks and communications in the future.
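
    The energy-compaction property that makes the DCT useful for compression is easy to demonstrate: most of a smooth block's energy lands in a few low-frequency coefficients, so the rest can be discarded. A small sketch using SciPy (JPEG proper divides by a quantization matrix and entropy-codes the result; this hard-threshold version is our simplification):

      import numpy as np
      from scipy.fft import dctn, idctn

      def compress_block(block, keep=6):
          # 2-D DCT of one 8x8 block, keeping only the `keep` largest
          # coefficients by magnitude, then inverse transform.
          c = dctn(block, type=2, norm="ortho")
          cutoff = np.sort(np.abs(c).ravel())[-keep]
          c[np.abs(c) < cutoff] = 0.0
          return idctn(c, type=2, norm="ortho")

      block = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth ramp block
      rec = compress_block(block)
      print(np.max(np.abs(rec - block)))   # small: a few of 64 coefficients suffice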

  1. KIVA reactive hydrodynamics code applied to detonations in high vacuum

    NASA Astrophysics Data System (ADS)

    Greiner, N. Roy

    1989-08-01

    The KIVA reactive hydrodynamics code was adapted for modeling detonation hydrodynamics in a high vacuum. Adiabatic cooling rapidly freezes detonation reactions as a result of free expansion into the vacuum. After further expansion, a molecular beam of the products is admitted without disturbance into a drift tube, where the products are analyzed with a mass spectrometer. How the model is used for interpretation and design of experiments for detonation chemistry is explained. Modeling of experimental hydrodynamic characterization by laser-schlieren imaging and model-aided mapping that will link chemical composition data to particular volume elements in the explosive charge are also discussed.

  2. Pulse compression using binary phase codes

    NASA Technical Reports Server (NTRS)

    Farley, D. T.

    1983-01-01

    In most MST applications pulsed radars are peak power limited and have excess average power capacity. Short pulses are required for good range resolution, but the problem of range ambiguity (signals received simultaneously from more than one altitude) sets a minimum limit on the interpulse period (IPP). Pulse compression is a technique which allows more of the transmitter average power capacity to be used without sacrificing range resolution. As the name implies, a pulse of power P and duration T is in a certain sense converted into one of power nP and duration T/n. In the frequency domain, compression involves manipulating the phases of the different frequency components of the pulse. One way to compress a pulse is via phase coding, especially binary phase coding, a technique which is particularly amenable to digital processing techniques. This method, which is used extensively in radar probing of the atmosphere and ionosphere, is discussed. Barker codes, complementary and quasi-complementary code sets, and cyclic codes are addressed.
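
    Binary phase coding can be demonstrated end to end in a few lines: transmit a 13-chip Barker-coded pulse, then correlate the noisy return with the code (matched filtering). The correlation peak is 13 while every sidelobe has magnitude at most 1, which is exactly the "power nP for duration T/n" compression described above. A small sketch (our illustration):

      import numpy as np

      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

      # Autocorrelation: mainlobe of 13, sidelobes of magnitude <= 1.
      print(np.correlate(barker13, barker13, mode="full"))

      # Detection: bury the coded pulse in noise, recover it by correlation.
      rng = np.random.default_rng(1)
      rx = 0.5 * rng.standard_normal(200)
      rx[60:73] += barker13                        # echo starting at sample 60
      delay = np.argmax(np.correlate(rx, barker13, mode="valid"))
      print(delay)                                 # 60: the echo's range bin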

  3. Compression of polyphase codes with Doppler shift

    NASA Astrophysics Data System (ADS)

    Wirth, W. D.

    It is shown that pulse compression with sufficient Doppler tolerance may be achieved with polyphase codes derived from linear frequency modulation (LFM) and nonlinear frequency modulation (NLFM). Low sidelobes in range and Doppler are required, especially for the radar search function. These may be achieved by an LFM-derived phase code together with Hamming weighting, or by applying a PNL polyphase code derived from NLFM. For a discrete and known Doppler frequency, a sidelobe reduction is possible with an expanded and mismatched reference vector; the compression is then achieved without a loss in resolution. The expanded reference can be set up either to give zero sidelobes in an interval around the signal peak or to minimize sidelobes in the least-squares sense over all range elements. This version may be useful for target tracking.

  4. FARGO3D: Hydrodynamics/magnetohydrodynamics code

    NASA Astrophysics Data System (ADS)

    Benítez Llambay, Pablo; Masset, Frédéric

    2015-09-01

    A successor of FARGO (ascl:1102.017), FARGO3D is a versatile HD/MHD code that runs on clusters of CPUs or GPUs, with special emphasis on protoplanetary disks. FARGO3D offers Cartesian, cylindrical or spherical geometry; 1-, 2- or 3-dimensional calculations; and orbital advection (aka FARGO) for HD and MHD calculations. As in FARGO, a simple Runge-Kutta N-body solver may be used to describe the orbital evolution of embedded point-like objects. There is no need to know CUDA; users can develop new functions in C and have them translated to CUDA automatically to run on GPUs.

  5. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  6. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra-dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  7. A new hydrodynamics code for Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Leung, S.-C.; Chu, M.-C.; Lin, L.-M.

    2015-12-01

    A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme, and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. That code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines, achieving an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.

  8. RAMSES: A new N-body and hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain

    2010-11-01

    A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 1997), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to accurately compute the thermal history of the fluid component. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low-density LCDM universe are finally reported, with 256^3 particles and 4.1×10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, the gas pressure power spectrum, and individual halo temperature profiles, shows that numerical results are converging down to the actual resolution limit of the code, and are well reproduced by recent analytical predictions in the framework of the halo model.

  9. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn much attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations of the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific species. We present the development procedure of this code and some test problems in order to verify and validate the developed code.

  10. Adding kinetics and hydrodynamics to the CHEETAH thermochemical code

    SciTech Connect

    Fried, L.E., Howard, W.M., Souers, P.C.

    1997-01-15

    In FY96 we released CHEETAH 1.40, which made extensive improvements on the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes, and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.

  11. A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.

    2000-10-01

    Numerical simulation of laser-driven Inertial Confinement Fusion (ICF) related experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism of these systems. Electron heat flow has been known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux-limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamics code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.

  12. External-Compression Supersonic Inlet Design Code

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2011-01-01

    A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.

  13. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I. (Institute for Advanced Study, Princeton)

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
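
    The core of such a scheme is the fifth-order WENO reconstruction: three third-order candidate stencils are blended with smoothness-dependent weights that collapse toward one-sided stencils near discontinuities. A scalar sketch of the classic Jiang-Shu formulas (RAM applies this characteristic-wise to the SRHD system; this standalone version is our own):

      import numpy as np

      def weno5(fm2, fm1, f0, fp1, fp2):
          # Fifth-order WENO value at interface i+1/2 from the five cell
          # averages f[i-2..i+2] (Jiang & Shu linear weights 0.1, 0.6, 0.3).
          p0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0      # three 3rd-order candidates
          p1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
          p2 = (2*f0 + 5*fp1 - fp2) / 6.0
          b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
          b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
          b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
          eps = 1e-6
          a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
          return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

      # Smooth data: the blend reproduces the high-order interface value.
      print(weno5(*np.sin([-0.2, -0.1, 0.0, 0.1, 0.2])))   # close to sin(0.05) = 0.04998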

  14. Ultraspectral sounder data compression using the Tunstall coding

    NASA Astrophysics Data System (ADS)

    Wei, Shih-Chieh; Huang, Bormin; Gu, Lingjia

    2007-09-01

    In an error-prone environment the compression of ultraspectral sounder data is vulnerable to error propagation. Tunstall coding is a variable-to-fixed length code which compresses data by mapping a variable number of source symbols to a fixed number of codewords. It avoids the resynchronization difficulty encountered in fixed-to-variable length codes such as Huffman coding and arithmetic coding. This paper explores the use of Tunstall coding in reducing the error propagation for ultraspectral sounder data compression. The results show that our Tunstall approach has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC and CCSDS IDC 5/3. It also has less error propagation compared with JPEG-2000.
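
    Tunstall coding is simple enough to sketch in full: the dictionary is a parse tree grown by repeatedly splitting the most probable leaf until 2^n leaves exist, and every leaf (a variable-length source string) maps to one fixed-length n-bit codeword, so a channel bit error corrupts one codeword without desynchronizing the stream. A toy construction for a binary source (our illustration; the probability is arbitrary):

      import heapq

      def tunstall_dictionary(p0, nbits):
          # Max-heap of leaves keyed by negated probability; split the most
          # probable leaf until up to 2**nbits leaves remain. Each leaf gets
          # one fixed-length nbits codeword.
          heap = [(-p0, "0"), (-(1.0 - p0), "1")]
          heapq.heapify(heap)
          while len(heap) + 1 <= 2**nbits:
              negp, s = heapq.heappop(heap)
              heapq.heappush(heap, (negp * p0, s + "0"))
              heapq.heappush(heap, (negp * (1.0 - p0), s + "1"))
          return sorted(s for _, s in heap)

      def string_prob(w, p0=0.9):
          out = 1.0
          for c in w:
              out *= p0 if c == "0" else 1.0 - p0
          return out

      words = tunstall_dictionary(0.9, nbits=3)      # 8 codewords of 3 bits each
      avg_parse = sum(string_prob(w) * len(w) for w in words)
      print(words)
      print(3.0 / avg_parse)   # ~0.57 coded bits per source symbol (entropy ~0.47)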

  15. Compressing industrial computed tomography images by means of contour coding

    NASA Astrophysics Data System (ADS)

    Jiang, Haina; Zeng, Li

    2013-10-01

    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction and then compression, which is detrimental to compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. In this way, the two steps of the traditional contour-based compression method are reduced to only one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method can obtain a good compression ratio while keeping satisfactory quality in the compressed images.

  16. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    An improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from a low-information-content source. The method of coding is implemented in relatively simple, high-speed arithmetic and logic circuits. It also increases coding efficiency beyond that of the established Huffman coding method in that the average number of bits per code symbol can be less than 1, which is the lower bound for Huffman codes.
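
    The "less than 1 bit per symbol" point is worth a concrete check: symbol-by-symbol Huffman coding can never go below 1 bit, but coding blocks of symbols (as extension-style low-entropy methods do) can. A quick computation (our illustration, not the coder in this record):

      import heapq
      from itertools import product

      def huffman_avg_bits(probs):
          # Average Huffman code length: repeatedly merge the two least
          # probable subtrees; every leaf in a merged subtree sinks one bit.
          heap = [(p, i, [i]) for i, p in enumerate(probs)]
          heapq.heapify(heap)
          depth = [0] * len(probs)
          uid = len(probs)
          while len(heap) > 1:
              p1, _, leaves1 = heapq.heappop(heap)
              p2, _, leaves2 = heapq.heappop(heap)
              for i in leaves1 + leaves2:
                  depth[i] += 1
              heapq.heappush(heap, (p1 + p2, uid, leaves1 + leaves2))
              uid += 1
          return sum(p * d for p, d in zip(probs, depth))

      p = 0.95                                       # low-entropy binary source
      print(huffman_avg_bits([p, 1 - p]))            # 1.0: Huffman's floor
      pairs = [a * b for a, b in product([p, 1 - p], repeat=2)]
      print(huffman_avg_bits(pairs) / 2)             # ~0.57 bits per source symbol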

  17. Modeling Relativistic Jets Using the Athena Hydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Pauls, David; Pollack, Maxwell; Wiita, Paul

    2014-11-01

    We used the Athena hydrodynamics code (Beckwith & Stone 2011) to model early-stage two-dimensional relativistic jets as approximations to the growth of radio-loud active galactic nuclei. We analyzed variability of the radio emission by calculating fluxes from a vertical strip of zones behind a standing shock, as discussed in the accompanying poster. We found the advance speed of the jet bow shock for various input jet velocities and jet-to-ambient density ratios. Faster jets and higher jet densities produce faster shock advances. We investigated the effects of parameters such as the Courant-Friedrichs-Lewy number, the input jet velocity, and the density ratio on the stability of the simulated jet, finding that numerical instabilities grow rapidly when the CFL number is above 0.1. We found that greater jet input velocities and higher density ratios lengthen the time the jet remains stable. We also examined the effects of the boundary conditions, the CFL number, the input jet velocity, the grid resolution, and the density ratio on the premature termination of the Athena code. We found that a grid of 1200 by 1000 zones allows the code to run with minimal errors, while still maintaining adequate resolution. This work is supported by the Mentored Undergraduate Summer Experience program at TCNJ.

  18. Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code

    SciTech Connect

    Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.

    1985-01-01

    In an effort to increase spatial resolution without adding additional meshes, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code, along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive mesh automatically generates a mesh based on smoothness and orthogonality, and at the same time tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.

  19. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.

  20. Compressed image transmission based on fountain codes

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Wu, Xinhong; Jiao, L. C.

    2011-11-01

    In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over wireless channels. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared to traditional erasure codes for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes are described in the paper, an EEP (Equal Error Protection) scheme and a UEP (Unequal Error Protection) scheme, with the UEP scheme performing better. The proposed scheme not only can adaptively adjust the length of the fountain code according to the channel loss rate but can also reconstruct the image even over a bad channel.
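
    The rateless property is easy to demonstrate: each fountain symbol is the XOR of a random subset of source blocks, the receiver collects symbols until a peeling decoder resolves everything, and the sender can always generate more symbols to match the channel loss rate. Our toy below uses a uniform degree 1-3 distribution; practical LT codes draw degrees from the robust soliton distribution:

      import numpy as np

      def lt_encode(blocks, n_out, rng):
          # Each fountain symbol: XOR of a random subset of source blocks.
          k = blocks.shape[0]
          syms = []
          for _ in range(n_out):
              idx = rng.choice(k, size=rng.integers(1, 4), replace=False)
              syms.append((set(idx.tolist()), np.bitwise_xor.reduce(blocks[idx], axis=0)))
          return syms

      def lt_decode(syms, k):
          # Peeling decoder: a degree-1 symbol reveals one source block;
          # substitute every known block into the remaining symbols; repeat.
          rec = [None] * k
          work = [(set(idx), s.copy()) for idx, s in syms]
          progress = True
          while progress:
              progress = False
              for idx, s in work:
                  if len(idx) == 1:
                      i = next(iter(idx))
                      if rec[i] is None:
                          rec[i] = s.copy()
                          progress = True
              for idx, s in work:
                  if len(idx) > 1:
                      for i in [i for i in idx if rec[i] is not None]:
                          s ^= rec[i]
                          idx.discard(i)
          return rec

      rng = np.random.default_rng(7)
      blocks = rng.integers(0, 256, size=(8, 16), dtype=np.uint8)   # 8 source blocks
      rec = lt_decode(lt_encode(blocks, n_out=24, rng=rng), k=8)    # rateless: send until enough
      print(all(r is not None and np.array_equal(r, b) for r, b in zip(rec, blocks)))
      # usually True once modestly more than k symbols arrive; if not, collect more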

  1. A 2-dimensional MHD code & survey of the ``buckling'' phenomenon in cylindrical magnetic flux compression experiments

    NASA Astrophysics Data System (ADS)

    Xiao, Bo; Wang, Ganghua; Gu, Zhuowei; Computational Physics Team

    2015-11-01

    We have developed a two-dimensional Lagrangian magnetohydrodynamics code. The code handles two kinds of magnetic configuration: an (x-y) plane with z-direction magnetic field Bz and an (r-z) plane with θ-direction magnetic field Bθ. The solution of the MHD equations is split into a pure dynamical step (i.e., ideal MHD) and a diffusion step. In the diffusion step, the Joule heat is calculated with a numerical scheme based on a specific form of the Joule heat production equation, ∂eJ/∂t = ∇·((η/μ0) B × (∇×B)) − ∂/∂t(B²/(2μ0)), where the term ∂/∂t(B²/(2μ0)) is the magnetic field energy variation caused solely by diffusion. This scheme ensures the equality of the total Joule heat produced and the total electromagnetic energy lost by the system. Material elastoplasticity is considered in the code. An external circuit is coupled to the magnetohydrodynamics, and a detonation module is also added to enhance the code's ability to simulate magnetically driven compression experiments. As a first application, the code was utilized to simulate a cylindrical magnetic flux compression experiment. The origin of the ``buckling'' phenomenon observed in the experiment is explored.

  2. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  3. Techniques for region coding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2004-01-01

    Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.

  4. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by a 2-D wavelet transform, and each block is quantized and coded with a Huffman coding scheme. A demonstration system is developed, showing that under real-time processing requirements the compression ratio can be very high with no significant loss of target signal in the restored radar image.

  5. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
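
    The mechanics are compact enough to show directly with the (7,4) Hamming code: compression stores only the 3-bit syndrome of each 7-bit source word, and decompression returns the minimum-weight pattern (coset leader) with that syndrome. This is exact whenever source words have weight at most 1 and introduces distortion otherwise, which is the sense in which distortion becomes arbitrarily small for low-entropy sources. Our sketch (names and the brute-force leader search are ours):

      import numpy as np
      from itertools import product

      # Parity-check matrix of the (7,4) Hamming code: column j is the
      # binary representation of j+1.
      H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in range(3)])

      def compress(x):
          return H @ x % 2                  # 7 source bits -> 3-bit syndrome

      def decompress(s):
          # Return the coset leader: the minimum-weight pattern with syndrome s.
          best = None
          for cand in product((0, 1), repeat=7):
              if np.array_equal(H @ np.array(cand) % 2, s):
                  if best is None or sum(cand) < sum(best):
                      best = cand
          return np.array(best)

      x = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse "error pattern" source word
      print(np.array_equal(decompress(compress(x)), x))   # True: 7 bits -> 3 bits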

  6. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  7. Description of a parallel, 3D, finite element, hydrodynamics-diffusion code

    SciTech Connect

    Milovich, J L; Prasad, M K; Shestakov, A I

    1999-04-11

    We describe a parallel, 3D, unstructured-grid, finite element, hydrodynamics-diffusion code for inertial confinement fusion (ICF) applications and the ancillary software used to run it. The code system is divided into two entities, a controller and a stand-alone physics code, which may reside on different computers; the controller on the user's workstation and the physics code on a supercomputer. The physics code is composed of separate hydrodynamic, equation-of-state, laser energy deposition, heat conduction, and radiation transport packages and is parallelized for distributed memory architectures. For parallelization, an SPMD model is adopted; the domain is decomposed into a disjoint collection of subdomains, one per processing element (PE). The PEs communicate using MPI. The code is used to simulate the hydrodynamic implosion of a spherical bubble.

  8. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms results obtained by previously published schemes. PMID:20608811
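
    A minimal version of this pipeline can be sketched with PyWavelets, substituting a single global percentile threshold for the paper's optimized per-band thresholds and position coder (function names and parameters below are ours):

      import numpy as np
      import pywt

      def wavelet_compress(sig, wavelet="db4", level=4, keep=0.05):
          # Transform, hard-threshold all but the largest 5% of coefficients
          # (the survivors and their positions are what would be Huffman-
          # coded), then reconstruct.
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          cutoff = np.quantile(np.abs(np.concatenate(coeffs)), 1.0 - keep)
          kept = [pywt.threshold(c, cutoff, mode="hard") for c in coeffs]
          return pywt.waverec(kept, wavelet)

      t = np.linspace(0.0, 1.0, 1024)
      sig = np.sin(2*np.pi*3*t) + 0.3*np.sin(2*np.pi*25*t)  # ECG stand-in signal
      rec = wavelet_compress(sig)
      prd = 100 * np.linalg.norm(rec - sig) / np.linalg.norm(sig)
      print(f"PRD = {prd:.2f}%")   # percent RMS difference, the usual ECG metric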

  9. Conditional entropy coding of DCT coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Sipitca, Mihai; Gillman, David W.

    2000-04-01

    We introduce conditional Huffman encoding of DCT run-length events to improve the coding efficiency of low- and medium-bit rate video compression algorithms. We condition the Huffman code for each run-length event on a classification of the current block. We classify blocks according to coding mode and signal type, which are known to the decoder, and according to energy, which the decoder must receive as side information. Our classification schemes improve coding efficiency with little or no increased running time and some increased memory use.

  10. Implementation of the Turn Function Method in a three-dimensional, parallelized hydrodynamics code

    NASA Astrophysics Data System (ADS)

    O'Rourke, P. J.; Fairfield, M. S.

    1992-08-01

    The implementation of the Turn Function Method in KIVA-F90, a version of the KIVA computer program written in the FORTRAN 90 programming language and used on some massively parallel computers, is described. The Turn Function Method solves both linear momentum and vorticity equations in numerical calculations of compressible fluid flow. Solving a vorticity equation allows vorticity to be both conserved and transported more accurately than in traditional methods for computing compressible flow. This first implementation of the Turn Function Method in a three-dimensional hydrodynamics code involved some modification of the original method and some numerical difference approximations. In particular, a penalty method is used to keep the divergence of the computed vorticity field close to zero. Difference operators are also defined in such a way that the finite difference analog of ∇·(∇×u) = 0 is exactly satisfied. Three example problems show the increased computational cost and the accuracy to be gained by using the Turn Function Method in calculations of flows with rotational motion. Use of the method can increase the computational time of the Euler equation solver in KIVA-F90 by 60 percent, but it is concluded that this increased cost is justified by the increased accuracy.
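
    The discrete identity mentioned above, the finite-difference analog of ∇·(∇×u) = 0, holds automatically when the same centered difference operators are used for both curl and divergence, because the shift operators commute. A quick numerical check on a periodic grid (our sketch; KIVA-F90's actual operators differ):

      import numpy as np

      def d(f, axis, h=1.0):
          # Centered difference on a periodic grid; these operators commute.
          return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)

      rng = np.random.default_rng(0)
      u, v, w = (rng.standard_normal((16, 16, 16)) for _ in range(3))

      curl = (d(w, 1) - d(v, 2), d(u, 2) - d(w, 0), d(v, 0) - d(u, 1))
      div_curl = d(curl[0], 0) + d(curl[1], 1) + d(curl[2], 2)
      print(np.max(np.abs(div_curl)))   # zero to round-off (~1e-16)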

  11. THEHYCO-3DT: Thermal hydrodynamic code for the 3 dimensional transient calculation of advanced LMFBR core

    SciTech Connect

    Vitruk, S.G.; Korsun, A.S.; Ushakov, P.A.

    1995-09-01

    The multilevel mathematical model of neutron and thermal hydrodynamic processes in a passive-safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new, effective discretization technique for the energy, momentum, and mass conservation equations is applied in hexagonal-z geometry. The adequacy and applicability of the model are presented. The results of the calculations show that the model and the computer code can be used in the conceptual design of advanced reactors.

  12. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), in which an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup. A commercial low-frequency electromagnetics solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for a static liner at various liner radii in order to derive the correction factors for the 1D field calculation. The liner dynamics results from the code are verified to be in good agreement with results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and with a previous liner experiment. The code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
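
    The correction-factor approach amounts to rescaling a cheap 1D field estimate by a factor interpolated from a table precomputed with a higher-fidelity solver. A hedged sketch with made-up numbers (in the experiment the table comes from ANSYS Maxwell runs):

      # Hedged sketch of a correction-factor table lookup for a 1D field model.
      import numpy as np

      mu0 = 4e-7 * np.pi

      def b_1d(current, radius):
          """Idealized 1D field estimate (line-current-like 1/r falloff), tesla."""
          return mu0 * current / (2.0 * np.pi * radius)

      # Hypothetical 3D/1D correction factors tabulated vs. liner radius (m).
      r_table = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
      f_table = np.array([0.78, 0.85, 0.91, 0.96, 0.99])

      def b_corrected(current, radius):
          return b_1d(current, radius) * np.interp(radius, r_table, f_table)

      print(b_corrected(1.0e6, 0.05))  # corrected field at r = 5 cm, 1 MA drive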

  13. Compressed data organization for high throughput parallel entropy coding

    NASA Astrophysics Data System (ADS)

    Said, Amir; Mahfoodh, Abo-Talib; Yea, Sehoon

    2015-09-01

    The difficulty of parallelizing entropy coding is increasingly limiting the data throughputs achievable in media compression. In this work we analyze the fundamental limitations, using finite-state-machine models to identify the best manner of separating tasks that can be processed independently while minimizing compression losses. This analysis confirms previous work showing that effective parallelization is feasible only if the compressed data is organized in a proper way, quite different from conventional formats. The proposed new formats exploit the fact that optimal compression is not affected by the arrangement of coded bits, and go further in exploiting the decreasing cost of data processing and memory. Additional advantages include the ability to use increasingly complex data modeling techniques within this framework, and the freedom to mix different types of coding. We confirm the parallelization effectiveness using coding simulations that run on multi-core processors, show how throughput scales with the number of cores, and analyze the additional bit-rate overhead.

  14. A Two-Dimensional Compressible Gas Flow Code

    Energy Science and Technology Software Center (ESTSC)

    1995-03-17

    F2D is a general purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm; the corrector step includes a Poisson pressure equation. The finite difference form of the equations is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code in problems ranging from free fluid flow to shock tubes and flow in heated porous media.
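
    The corrector's Poisson pressure equation can be sketched with a simple Jacobi iteration on a uniform grid; this is a generic stand-in, not F2D's actual discretization or its coupling to the momentum equations.

      # Hedged sketch: Jacobi solve of a 2D Poisson pressure equation.
      import numpy as np

      def solve_pressure(rhs, h, tol=1e-6, max_iter=20000):
          p = np.zeros_like(rhs)
          for _ in range(max_iter):
              p_new = p.copy()
              p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                          + p[1:-1, 2:] + p[1:-1, :-2]
                                          - h * h * rhs[1:-1, 1:-1])
              if np.abs(p_new - p).max() < tol:
                  return p_new
              p = p_new
          return p

      rhs = np.zeros((32, 32))
      rhs[16, 16] = 1.0                      # point source
      p = solve_pressure(rhs, h=1.0 / 31)    # Dirichlet p = 0 on the boundary
      print(p.min())                         # pressure well around the source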

  15. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The coder used for any given image region is selected through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method that allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  16. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  17. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  18. Incompressible-compressible flows with a transient discontinuous interface using smoothed particle hydrodynamics (SPH)

    NASA Astrophysics Data System (ADS)

    Lind, S. J.; Stansby, P. K.; Rogers, B. D.

    2016-03-01

    A new two-phase incompressible-compressible Smoothed Particle Hydrodynamics (SPH) method has been developed in which the interface is discontinuous in density; it is applied to water-air problems with a large density difference. The incompressible phase requires surface pressure from the compressible phase, and the compressible phase requires surface velocity from the incompressible phase. Compressible SPH is used for the air phase (with the isothermal stiffened ideal gas equation of state for low Mach numbers) and divergence-free (projection-based) incompressible SPH is used for the water phase, with Fickian shifting added to produce particle distributions homogeneous enough for stable, accurate, converged solutions without noise in the pressure field. Shifting is a purely numerical particle-regularisation device. The interface remains a true material discontinuity at a high density ratio, with continuous pressure and velocity across it. This approach, which represents the physics of both compressibility and incompressibility, is novel within SPH and is validated against semi-analytical results for a two-phase elongating and oscillating water drop, analytical results for low-amplitude inviscid standing waves, the Kelvin-Helmholtz instability, and a dam-break problem with high interface distortion and impact on a vertical wall for which experimental and other numerical results are available.

  19. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two-dimensional, compressible turbulent flows is described. Second-order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero- and two-equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  20. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  1. Hyperspectral pixel classification from coded-aperture compressive imaging

    NASA Astrophysics Data System (ADS)

    Ramirez, Ana; Arce, Gonzalo R.; Sadler, Brian M.

    2012-06-01

    This paper describes a new approach, and its associated theoretical performance guarantees, for supervised hyperspectral image classification from compressive measurements obtained by a Coded Aperture Snapshot Spectral Imaging System (CASSI). In one snapshot, the two-dimensional focal plane array (FPA) in the CASSI system captures the coded and spectrally dispersed source field of a three-dimensional data cube. Multiple snapshots are used to construct a set of compressive spectral measurements. The proposed approach is based on the concept that each pixel in the hyperspectral image lies in a low-dimensional subspace obtained from the training samples, and thus it can be represented as a sparse linear combination of vectors in the given subspace. The sparse vector representing the test pixel is recovered from the set of compressive spectral measurements and is used to determine the class label of the test pixel. The theoretical performance bounds of the classifier exploit the distance-preservation condition satisfied by the multiple-shot CASSI system and depend on the number of measurements collected, the coded aperture pattern, and the similarity between spectral signatures in the dictionary. Simulation experiments illustrate the performance of the proposed classification approach.
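
    The classification rule itself (recover a sparse coefficient vector from the compressive measurements, then assign the class whose training atoms best explain the pixel) can be sketched with off-the-shelf orthogonal matching pursuit; the dictionary and projection below are random stand-ins, not CASSI data.

      # Hedged sketch of sparse-representation classification from compressive
      # measurements, using random stand-ins for the dictionary and projection.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(1)
      n_bands, n_meas, per_class, n_classes = 64, 24, 10, 3

      D = rng.standard_normal((n_bands, per_class * n_classes))  # training atoms
      D /= np.linalg.norm(D, axis=0)
      H = rng.standard_normal((n_meas, n_bands)) / np.sqrt(n_meas)  # projection

      true_class = 1
      sl = slice(per_class * true_class, per_class * true_class + 3)
      x = D[:, sl] @ rng.random(3)           # test pixel in the class-1 subspace
      y = H @ x                              # compressive measurements

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
      alpha = omp.fit(H @ D, y).coef_

      residuals = []
      for c in range(n_classes):
          a = np.zeros_like(alpha)
          block = slice(per_class * c, per_class * (c + 1))
          a[block] = alpha[block]            # keep only class-c coefficients
          residuals.append(np.linalg.norm(y - H @ D @ a))
      print('predicted:', int(np.argmin(residuals)), 'true:', true_class)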

  2. Gaseous laser targets and optical diagnostics for studying compressible hydrodynamic instabilities

    SciTech Connect

    Edwards, J M; Robey, H; Mackinnon, A

    2001-06-29

    Explore the combination of optical diagnostics and gaseous targets to obtain information about compressible turbulent flows that cannot be derived from traditional laser experiments, for the purposes of verification and validation (V&V) of hydrodynamics models and of understanding scaling. First-year objectives: develop and characterize a blast wave-gas jet test bed; perform single-pulse shadowgraphy of the blast wave's interaction with a turbulent gas jet as a function of blast wave Mach number; explore double-pulse shadowgraphy and image correlation for extracting velocity spectra in the shock-turbulent flow interaction; and explore the use and adaptation of advanced diagnostics.

  3. Hydrodynamics of rotating stars and close binary interactions: Compressible ellipsoid models

    NASA Technical Reports Server (NTRS)

    Lai, Dong; Rasio, Frederic A.; Shapiro, Stuart L.

    1994-01-01

    We develop a new formalism to study the dynamics of fluid polytropes in three dimensions. The stars are modeled as compressible ellipsoids, and the hydrodynamic equations are reduced to a set of ordinary differential equations for the evolution of the principal axes and other global quantities. Both viscous dissipation and the gravitational radiation reaction are incorporated. We establish the validity of our approximations and demonstrate the simplicity and power of the method by rederiving a number of known results concerning the stability and dynamical oscillations of rapidly rotating polytropes. In particular, we present a generalization to compressible fluids of Chandrasekhar's classical results for the secular and dynamical instabilities of incompressible Maclaurin spheroids. We also present several applications of our method to astrophysical problems of great current interest, such as the tidal disruption of a star by a massive black hole, the coalescence of compact binaries driven by the emission of gravitational waves, and the development of instabilities in close binary systems.

  4. Parallelization of a three-dimensional compressible transition code

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Bokhari, Shahid H.

    1990-01-01

    The compressible, three-dimensional, time-dependent Navier-Stokes equations are solved on a 20 processor Flex/32 computer. The code is a parallel implementation of an existing code operational on the Cray-2 at NASA Ames, which performs direct simulations of the initial stages of the transition process of wall-bounded flow at supersonic Mach numbers. Spectral collocation in all three spatial directions (Fourier along the plate and Chebyshev normal to it) ensures high accuracy of the flow variables. By hiding most of the parallelism in low-level routines, the casual user is shielded from most of the nonstandard coding constructs. Speedups of 13 out of a maximum of 16 are achieved on the largest computational grids.

  5. Coded aperture design in mismatched compressive spectral imaging.

    PubMed

    Galvis, Laura; Arguello, Henry; Arce, Gonzalo R

    2015-11-20

    Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest-resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually given by the detector, in the CASSI the use of a low-resolution coded aperture implemented with a digital micromirror device (DMD), which induces the grouping of pixels into superpixels in the detector, is decisive to the final resolution. The mismatch arises from the difference in pitch between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and detector resolution and therefore reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI that takes the mismatch into account and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to resolve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel-grouping technique. PMID:26836551

  6. Block-based conditional entropy coding for medical image compression

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng

    2003-05-01

    In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation for pursuing conditional entropy coding is that the first-order conditional entropy is always theoretically less than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size for performing conditional entropy coding for various modalities, and we propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio, and we point toward a block-based conditional entropy coder with the potential to do so. Though we do not describe a method for achieving the first-order conditional entropy, a conditional adaptive arithmetic coder would come arbitrarily close to the theoretical conditional entropy. All results in this paper are based on medical image data sets of various bit depths and modalities.
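
    The motivating inequality H(X | C) <= H(X) is easy to verify numerically. A hedged sketch that conditions each pixel on its left neighbor in a synthetic 16-level image (not the medical data set used in the paper):

      # Compare unconditional and first-order conditional entropy on an image.
      import numpy as np

      def entropy(counts):
          p = counts / counts.sum()
          p = p[p > 0]
          return -(p * np.log2(p)).sum()

      rng = np.random.default_rng(0)
      ramp = np.linspace(0, 15, 256)[None, :]
      img = (ramp + rng.normal(0, 1, (256, 256))).astype(int) % 16

      x = img[:, 1:].ravel()                 # symbol
      c = img[:, :-1].ravel()                # context: left neighbor
      joint = np.zeros((16, 16))
      np.add.at(joint, (c, x), 1)            # joint histogram of (context, symbol)

      h_x = entropy(joint.sum(axis=0))
      h_x_given_c = entropy(joint) - entropy(joint.sum(axis=1))  # H(C,X) - H(C)
      print(f"H(X) = {h_x:.3f} bits, H(X|C) = {h_x_given_c:.3f} bits")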

  7. Magneto-hydrodynamic calculation of magnetic flux compression using imploding cylindrical liners

    NASA Astrophysics Data System (ADS)

    Zhao, Jibo; Sun, Chengwei; Gu, Zhuowei

    2015-06-01

    Based on the one-dimensional elastic-plastic reactive hydrodynamic code SSS, a one-dimensional magneto-hydrodynamics code, SSS/MHD, has been developed, and calculations are carried out for cylindrical magneto-cumulative generators (the MC-1 device). The diffusion of the magnetic field into the liner and sample tube is analyzed; the results show that the maximum magnetic induction 0.2 mm into the liner is only sixteen tesla, while that in the sample tube reaches several hundred tesla, a difference caused by the balance of the electromagnetic and imploding forces at the different velocities of the liner and sample tube. The calculated histories of the magnetic induction on the cavity axis and of the velocity at the sample tube wall accord with the experimental results. This work shows that SSS/MHD can be applied to experimental configurations involving detonation, shock, and electromagnetic loading, and to the improvement of design parameters; with it, experimental data can be estimated, analyzed, and checked, and the physics of the related devices can be understood more deeply. This work was supported by the special funds of the National Natural Science Foundation of China under Grant 11176002.

  8. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  9. CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.

    2011-01-01

    We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux-limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit solve of the hydrodynamic equations with shock-capturing schemes; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

  10. A compressible high-order unstructured spectral difference code for stratified convection in rotating spherical shells

    NASA Astrophysics Data System (ADS)

    Wang, Junfeng; Liang, Chunlei; Miesch, Mark S.

    2015-06-01

    We present a novel and powerful Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating thermal convection and related fluid dynamics in the interiors of stars and planets. The computational geometries are treated as rotating spherical shells filled with stratified gas. The hydrodynamic equations are discretized by a robust and efficient high-order Spectral Difference Method (SDM) on unstructured meshes. The computational stencil of the spectral difference method is compact and advantageous for parallel processing. CHORUS demonstrates excellent parallel performance for all test cases reported in this paper, scaling up to 12 000 cores on the Yellowstone High-Performance Computing cluster at NCAR. The code is verified by defining two benchmark cases for global convection in Jupiter and the Sun. CHORUS results are compared with results from the ASH code and good agreement is found. The CHORUS code creates new opportunities for simulating such varied phenomena as multi-scale solar convection, core convection, and convection in rapidly-rotating, oblate stars.

  11. PEGAS: Hydrodynamical code for numerical simulation of the gas components of interacting galaxies

    NASA Astrophysics Data System (ADS)

    Kulikov, Igor

    A new hydrodynamical code for the numerical simulation of gravitational gas dynamics is described. The code is based on the Fluid-in-Cell method with a Godunov-type scheme at the Eulerian stage. The numerical method has been adapted for GPU-based supercomputers. The performance of the code is demonstrated by simulating the collision of the gas components of two similar disc galaxies in a central, polar-direction encounter.

  12. EvoL: the new Padova Tree-SPH parallel code for cosmological simulations. I. Basic code: gravity and hydrodynamics

    NASA Astrophysics Data System (ADS)

    Merlin, E.; Buonomo, U.; Grassi, T.; Piovan, L.; Chiosi, C.

    2010-04-01

    Context. We present the new release of the Padova N-body code for cosmological simulations of galaxy formation and evolution, EvoL. The basic Tree + SPH code is presented and analysed, together with an overview of the software architecture. Aims: EvoL is a flexible parallel Fortran95 code, specifically designed for simulations of cosmological structure formation on cluster, galactic, and sub-galactic scales. Methods: EvoL is a fully Lagrangian, self-adaptive code, based on the classical oct-tree of Barnes & Hut (1986, Nature, 324, 446) and on the smoothed particle hydrodynamics algorithm (SPH, Lucy 1977, AJ, 82, 1013). It includes special features such as adaptive softening lengths with correcting extra-terms, and modern formulations of SPH and artificial viscosity. It is designed to run in parallel on multiple CPUs to optimise performance and save computational time. Results: We describe the code in detail and present the results of a number of standard hydrodynamical tests.

  13. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.

  14. Simulating hypervelocity impact effects on structures using the smoothed particle hydrodynamics code MAGI

    NASA Technical Reports Server (NTRS)

    Libersky, Larry; Allahdadi, Firooz A.; Carney, Theodore C.

    1992-01-01

    Analysis of the interaction occurring between space debris and orbiting structures is of great interest to the planning and survivability of space assets. Computer simulation of the impact events using hydrodynamic codes can provide some understanding of the processes, but the problems involved with this fundamental approach are formidable. First, any realistic simulation is necessarily three-dimensional, e.g., the impact and breakup of a satellite. Second, the thicknesses of important components such as satellite skins or bumper shields are small with respect to the dimension of the structure as a whole, presenting severe zoning problems for codes. Third, the debris cloud produced by the primary impact will yield many secondary impacts which will contribute to the damage and possible breakup of the structure. The problem was approached by choosing a relatively new computational technique that has virtues peculiar to space impacts. The method is called Smoothed Particle Hydrodynamics.

  15. AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2011-04-01

    AstroBEAR is a modular hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive-mesh-refinement (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries. AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation-of-state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell [2]. Current work is being done to develop a more advanced real gas equation of state.

  16. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.

  17. Simulations of implosions with a 3D, parallel, unstructured-grid, radiation-hydrodynamics code

    SciTech Connect

    Kaiser, T B; Milovich, J L; Prasad, M K; Rathkopf, J; Shestakov, A I

    1998-12-28

    An unstructured-grid, radiation-hydrodynamics code is used to simulate implosions. Although most of the problems are spherically symmetric, they are run on 3D, unstructured grids in order to test the code's ability to maintain spherical symmetry of the converging waves. Three problems, of increasing complexity, are presented. In the first, a cold, spherical, ideal gas bubble is imploded by an enclosing high pressure source. For the second, we add non-linear heat conduction and drive the implosion with twelve laser beams centered on the vertices of an icosahedron. In the third problem, a NIF capsule is driven with a Planckian radiation source.

  18. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.

  19. A 3+1 dimensional viscous hydrodynamic code for relativistic heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Karpenko, Iu.; Huovinen, P.; Bleicher, M.

    2014-11-01

    We describe the details of a 3+1 dimensional relativistic hydrodynamic code for the simulation of quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. The code solves the equations of relativistic viscous hydrodynamics in the Israel-Stewart framework. With the help of ideal-viscous splitting, we keep the ability to solve the equations of ideal hydrodynamics in the limit of zero viscosities using a Godunov-type algorithm. Milne coordinates are used to treat the predominant expansion in the longitudinal (beam) direction effectively. The results are successfully tested against known analytical relativistic inviscid and viscous solutions, as well as against an existing 2+1D relativistic viscous code. Catalogue identifier: AETZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 825 No. of bytes in distributed program, including test data, etc.: 92 750 Distribution format: tar.gz Programming language: C++. Computer: any with a C++ compiler and the CERN ROOT libraries. Operating system: tested on GNU/Linux Ubuntu 12.04 x64 (gcc 4.6.3), GNU/Linux Ubuntu 13.10 (gcc 4.8.2), Red Hat Linux 6 (gcc 4.4.7). RAM: scales with the number of cells in the hydrodynamic grid; 1900 Mbytes for a 3D 160×160×100 grid. Classification: 1.5, 4.3, 12. External routines: CERN ROOT (http://root.cern.ch), Gnuplot (http://www.gnuplot.info/) for plotting the results. Nature of problem: relativistic hydrodynamical description of the 3-dimensional quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. Solution method: finite volume Godunov-type method. Running time: scales with the number of hydrodynamic cells; typical running times on Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz, single thread mode, 160

  20. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    SciTech Connect

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine whether data-compression codes could be utilized to provide message compression in a channel with a bit error rate of up to 0.10. The data-compression capabilities of the codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average number of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, resident on an IBM PC and using a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
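
    The error-propagation effect that favors comma-free codes can be demonstrated by flipping one bit in a Huffman-coded stream and counting how many decoded characters change; the sketch below uses a small toy alphabet, not the study's 58-character set.

      # Hedged demo: error propagation after a single bit flip in a Huffman stream.
      import heapq
      from collections import Counter

      text = "the quick brown fox jumps over the lazy dog " * 20
      freq = Counter(text)

      heap = [(n, i, {ch: ''}) for i, (ch, n) in enumerate(freq.items())]
      heapq.heapify(heap)
      while len(heap) > 1:                   # build a Huffman code
          n1, _, c1 = heapq.heappop(heap)
          n2, _, c2 = heapq.heappop(heap)
          merged = {ch: '0' + code for ch, code in c1.items()}
          merged.update({ch: '1' + code for ch, code in c2.items()})
          heapq.heappush(heap, (n1 + n2, id(merged), merged))
      code = heap[0][2]
      decode = {v: k for k, v in code.items()}

      def decode_stream(bits):
          out, cur = [], ''
          for b in bits:
              cur += b
              if cur in decode:              # prefix-free: match ends a symbol
                  out.append(decode[cur])
                  cur = ''
          return ''.join(out)

      bits = ''.join(code[ch] for ch in text)
      flipped = bits[:200] + ('1' if bits[200] == '0' else '0') + bits[201:]
      good, bad = decode_stream(bits), decode_stream(flipped)
      errors = sum(a != b for a, b in zip(good, bad)) + abs(len(good) - len(bad))
      print("characters in error after one bit flip:", errors)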

  1. Design and Analysis of Fast Text Compression Based on Quasi-Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G; Vitter, Jeffrey Scott

    1994-01-01

    Describes a detailed algorithm for fast text compression. Related to the PPM (prediction by partial matching) method, it simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. Details of the use of quasi-arithmetic code tables are given, and their…

  2. Developing a weakly compressible smoothed particle hydrodynamics model for biological flows

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2014-11-01

    Smoothed Particle Hydrodynamics (SPH) is a meshless particle method originally developed for astrophysics applications in 1977. Over the years, limitations of the original formulations have been addressed by different groups to extend the domain of SPH application. In biologically relevant internal flows, two of the several challenges still facing SPH are 1) treatment of inlet, outlet, and no slip boundary conditions and 2) treatment of second derivatives present in the viscous terms. In this work, we develop a 2D weakly compressible SPH (WCSPH) for simulating viscous internal flows which incorporates some of the recent advancements made by groups in the above two areas. The method is validated against several analytical and experimental benchmark solutions for both steady and unsteady laminar flows. In particular, the 2013 U.S. Food and Drug Administration benchmark test case for medical devices - steady forward flow through a nozzle with a sudden contraction and conical diffuser - is simulated for different Reynolds numbers in the laminar region and results are validated against the published experimental and CFD datasets. Support from the National Science Foundation Graduate Research Fellowship Program (NSF GRFP) is gratefully acknowledged.

  3. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot be easily migrated to run on GPUs. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity, at low cost for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. We do not support the use of the code for military purposes.

  5. Simulation of a ceramic impact experiment using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-08-01

    We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has in simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPHINX. We describe a new brittle fracture model that we have implemented into SPHINX, and we discuss how the model differs from others. To illustrate the code's current capability, we simulate an experiment in which a tungsten rod strikes a target of heavily confined ceramic. Simulations in 3D at relatively coarse resolution yield poor results. However, 2D plane-strain approximations to the test produce crack patterns that are strikingly similar to the data, although the fracture model needs further refinement to match some of the finer details. We conclude with an outline of plans for continuing research and development.

  6. Investigating the Magnetorotational Instability with Dedalus, an Open-Source Hydrodynamics Code

    SciTech Connect

    Burns, Keaton J. (UC Berkeley; SLAC)

    2012-08-31

    The magnetorotational instability is a fluid instability that causes the onset of turbulence in discs with poloidal magnetic fields. It is believed to be an important mechanism in the physics of accretion discs, namely in its ability to transport angular momentum outward. A similar instability arising in systems with a helical magnetic field may be easier to produce in laboratory experiments using liquid sodium, but the applicability of this phenomenon to astrophysical discs is unclear. To explore and compare the properties of these standard and helical magnetorotational instabilities (MRI and HMRI, respectively), magnetohydrodynamic (MHD) capabilities were added to Dedalus, an open-source hydrodynamics simulator. Dedalus is a Python-based pseudospectral code that uses external libraries and parallelization with the goal of achieving speeds competitive with codes implemented in lower-level languages. This paper outlines the MHD equations as implemented in Dedalus, the steps taken to improve the performance of the code, and the status of MRI investigations using Dedalus.

  7. Test Compression for Robust Testable Path Delay Fault Testing Using Interleaving and Statistical Coding

    NASA Astrophysics Data System (ADS)

    Namba, Kazuteru; Ito, Hideo

    This paper proposes a method providing efficient test compression. The proposed method is for robust testable path delay fault testing with scan design facilitating two-pattern testing. In the proposed method, test data are interleaved before test compression using statistical coding. This paper also presents a test architecture for two-pattern testing using the proposed method. The proposed method is experimentally evaluated from several viewpoints, such as compression rate, test application time, and area overhead. For robust testable path delay fault testing on 11 out of 20 ISCAS89 benchmark circuits, the proposed method provides better compression rates than existing methods such as Huffman coding, run-length coding, Golomb coding, frequency-directed run-length (FDR) coding, and variable-length input Huffman coding (VIHC).

  8. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.

  9. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  10. Prediction of material strength and fracture of brittle materials using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Stellingwerf, R.F.

    1995-12-31

    The design of many devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics that are used in armor packages; glass that is used in windshields; and rock and concrete that are used in oil wells. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, they did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  11. A new multidimensional, energy-dependent two-moment transport code for neutrino-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Just, O.; Obergaulinger, M.; Janka, H.-T.

    2015-11-01

    We present the new code ALCAR developed to model multidimensional, multienergy-group neutrino transport in the context of supernovae and neutron-star mergers. The algorithm solves the evolution equations of the zeroth- and first-order angular moments of the specific intensity, supplemented by an algebraic relation for the second-moment tensor to close the system. The scheme takes into account frame-dependent effects of the order O(v/c) as well as the most important types of neutrino interactions. The transport scheme is significantly more efficient than a multidimensional solver of the Boltzmann equation, while it is more accurate and consistent than the flux-limited diffusion method. The finite-volume discretization of the essentially hyperbolic system of moment equations employs methods well-known from hydrodynamics. For the time integration of the potentially stiff moment equations we employ a scheme in which only the local source terms are treated implicitly, while the advection terms are kept explicit, thereby allowing for an efficient computational parallelization of the algorithm. We investigate various problem set-ups in one and two dimensions to verify the implementation and to test the quality of the algebraic closure scheme. In our most detailed test, we compare a fully dynamic, one-dimensional core-collapse simulation with two published calculations performed with well-known Boltzmann-type neutrino-hydrodynamics codes and we find very satisfactory agreement.
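
    The heart of such a two-moment scheme is the algebraic closure: the second moment is computed directly from the energy density and flux. The sketch below uses the Levermore (1984) closure as an example; it is a common choice for M1-type schemes but not necessarily the exact closure used in ALCAR.

      # Hedged sketch of an algebraic M1 closure for two-moment transport.
      import numpy as np

      def eddington_factor(f):
          """Levermore (1984) closure; f = |F|/(cE) is the flux factor in [0, 1]."""
          return (3.0 + 4.0 * f ** 2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f ** 2))

      def pressure_tensor(E, F, c=1.0):
          """Second moment from (E, F): interpolates isotropic <-> free streaming."""
          fnorm = np.linalg.norm(F)
          n = F / fnorm if fnorm > 0 else np.zeros(3)
          chi = eddington_factor(fnorm / (c * E))
          return E * (0.5 * (1 - chi) * np.eye(3)
                      + 0.5 * (3 * chi - 1) * np.outer(n, n))

      print(eddington_factor(0.0))   # 1/3: optically thick (diffusion) limit
      print(eddington_factor(1.0))   # 1:   free-streaming limit
      print(pressure_tensor(1.0, np.array([0.9, 0.0, 0.0])))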

  12. Comparison among five hydrodynamic codes with a diverging-converging nozzle experiment

    SciTech Connect

    L. E. Thode; M. C. Cline; B. G. DeVolder; M. S. Sahota; D. K. Zerkle

    1999-09-01

    A realistic open-cycle gas-core nuclear rocket simulation model must be capable of a self-consistent nozzle calculation in conjunction with coupled radiation and neutron transport in three spatial dimensions. As part of the development effort for such a model, five hydrodynamic codes were compared against a converging-diverging nozzle experiment. The codes used in the comparison are CHAD, FLUENT, KIVA2, RAMPANT, and VNAP2. Solution accuracy as a function of mesh size is important because, in the near term, a practical three-dimensional simulation model will require rather coarse zoning across the nozzle throat. In the study, four different grids were considered: (1) a coarse, radially uniform grid; (2) a coarse, radially nonuniform grid; (3) a fine, radially uniform grid; and (4) a fine, radially nonuniform grid. The study involves code verification, not prediction. In other words, the authors know the solution they want to match, so they can change methods and/or modify an algorithm to best match this class of problem. In this context, it was necessary to use the higher-order methods in both FLUENT and RAMPANT. In addition, KIVA2 required a modification that allows significantly more accurate solutions for a converging-diverging nozzle. From a predictive point of view, code accuracy with no tuning is an important result. The most accurate codes on a coarse grid, CHAD and VNAP2, did not require any tuning. The main comparison among the codes was the radial dependence of the Mach number across the nozzle throat. All five codes yielded very similar solutions with fine, radially uniform and radially nonuniform grids. However, the codes yielded significantly different solutions with coarse, radially uniform and radially nonuniform grids. For all the codes, radially nonuniform zoning across the throat significantly increased solution accuracy with a coarse mesh. None of the codes agrees in detail with the weak shock located downstream of the nozzle throat, but all the
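
    The quantity being compared, the Mach number near the throat, has a quasi-1D isentropic benchmark: the area-Mach relation, which can be inverted numerically on each branch. A hedged sketch using standard gas dynamics with gamma = 1.4:

      # Solve the isentropic area-Mach relation A/A* = g(M) by bisection.
      import numpy as np

      def area_ratio(M, gamma=1.4):
          t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
          return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

      def mach_from_area(ratio, supersonic, gamma=1.4):
          lo, hi = (1.0, 50.0) if supersonic else (1e-6, 1.0)
          for _ in range(80):                # bisection on a monotone branch
              mid = 0.5 * (lo + hi)
              if (area_ratio(mid, gamma) > ratio) == supersonic:
                  hi = mid
              else:
                  lo = mid
          return 0.5 * (lo + hi)

      # Mach number slightly downstream/upstream of the throat for A/A* = 1.2:
      print(mach_from_area(1.2, supersonic=True))    # ~1.53
      print(mach_from_area(1.2, supersonic=False))   # ~0.59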

  13. Evaluation of a Cray performance tool using a large hydrodynamics code

    SciTech Connect

    Lord, K.M.; Simmons, M.L.

    1992-06-01

    This paper discusses one such automatic tool, developed recently by Cray Research, Inc. for use on its parallel supercomputers. The tool is called ATEXPERT; when used in conjunction with the Cray Fortran compiling system, CF77, it produces a parallelized version of a code based on loop-level parallelism, plus information that enables the programmer to optimize the parallelized code and improve performance. The information obtained through the use of the tool is presented in an easy-to-read graphical format, making the digestion of such a large quantity of data relatively easy and thus improving programmer productivity. In this paper we address the issues that we found when we took a large Los Alamos hydrodynamics code, PUEBLO, that was highly vectorizable but not parallelized, and used ATEXPERT to parallelize it. We show that, through the advice of ATEXPERT, bottlenecks in the code can be found, leading to improved performance. We also show the dependence of performance on problem size, and finally, we contrast the speedup predicted by ATEXPERT with that measured on a dedicated eight-processor Y-MP.

  14. Semi-fixed-length motion vector coding for H.263-based low bit rate video compression.

    PubMed

    Côté, G; Gallant, M; Kossentini, F

    1999-01-01

    We present a semi-fixed-length motion vector coding method for H.263-based low bit rate video compression. The method exploits structural constraints within the motion field. The motion vectors are encoded using semi-fixed-length codes, yielding essentially the same levels of rate-distortion performance and subjective quality achieved by H.263's Huffman-based variable length codes in a noiseless environment. However, such codes provide substantially higher error resilience in a noisy environment. PMID:18267417

  15. MULTI2D - a computer code for two-dimensional radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.

    2009-06-01

    Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability. Program summary: Program title: MULTI2D Catalogue identifier: AECV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 151 098 No. of bytes in distributed program, including test data, etc.: 889 622 Distribution format: tar.gz Programming language: C Computer: PC (32 bits architecture) Operating system: Linux/Unix RAM: 2 Mbytes Word size: 32 bits Classification: 19.7 External routines: X-window standard library (libX11.so) and corresponding header files (X11/*.h) are

  16. Three-dimensional hydrodynamic Bondi-Hoyle accretion. 1: Code validation and stationary accretors

    NASA Technical Reports Server (NTRS)

    Ruffert, Maximilian

    1994-01-01

    We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. Totally absorbing stationary spheres of varying sizes (from 10.0 down to 0.02 Bondi radii) accrete matter from a homogeneous and slightly perturbed medium, which is taken to be an ideal gas (gamma = 5/3 or 1.2). To accommodate the long-range gravitational forces, the extent of the computational volume is typically a factor of 100 larger than the radius of the accretor. We compare the numerical mass accretion rates with the theoretical predictions of Bondi, to assess the validity of the code. The hydrodynamics is modeled by the piecewise parabolic method. No energy sources (nuclear burning) or sinks (radiation, conduction) are included. The resolution in the vicinity of the accretor is increased by multiply nesting several (6-8) grids around the stationary sphere, each finer grid being a factor of 2 smaller spatially than the next coarser grid. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid while at the same time evolving the gas on the coarser grids. The accretion rates derived numerically are in very good agreement (to about 10% over several orders of magnitude) with the values given by Bondi for a stationary accretor within a hydrodynamic medium. However, the equations have to be changed in order to include the finite size of the accretor (in some cases very large compared to the sonic point or even to the Bondi radius).
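
    For reference, the theoretical benchmark quoted above is Bondi's spherical accretion rate which, in standard notation (the formula is well known but not reproduced in the record itself), reads

        \dot{M}_{\rm B} = 4\pi\,\lambda(\gamma)\,\frac{(GM)^{2}\,\rho_{\infty}}{c_{s,\infty}^{3}},

    where \rho_{\infty} and c_{s,\infty} are the density and sound speed of the unperturbed medium, M is the mass of the accretor, and \lambda(\gamma) is an eigenvalue of order unity (\lambda = 1/4 for \gamma = 5/3).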

  17. Theoretical study of use of optical orthogonal codes for compressed video transmission in optical code division multiple access (OCDMA) system

    NASA Astrophysics Data System (ADS)

    Ghosh, Shila; Chatterji, B. N.

    2007-09-01

    A theoretical investigation evaluating the performance of optical code division multiple access (OCDMA) for compressed video transmission is presented. OCDMA has many advantages over a typical synchronous protocol such as time division multiple access (TDMA). Pulsed-laser transmission of multi-channel digital video can be done using various techniques depending on whether the multi-channel data are to be synchronous or asynchronous. A typical form of asynchronous digital operation is wavelength division multiplexing (WDM), in which the digital data of each video source are assigned a specific and separate wavelength. Sophisticated hardware, such as accurate wavelength control of all lasers and tunable narrow-band optical filters at the receivers, is required in this case. A major disadvantage of CDMA is the reduction in per-channel data rate (relative to the speeds available in the laser itself) that occurs upon insertion of the code addressing. Hence optical CDMA for the video transmission application is meaningful when individual channel video bit rates can be significantly reduced, which can be done by compression of the video data. In our work, the standard JPEG scheme is implemented for compression of video images, where a compression ratio of about 60% is obtained without noticeable image degradation. Compared to other existing techniques, the JPEG standard achieves a higher compression ratio with a high S/N ratio. We demonstrate the auto- and cross-correlation properties of the codes, and show the implementation of bipolar Walsh coding in an OCDMA system and its use in the transmission of images/video.

  18. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    NASA Astrophysics Data System (ADS)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when compared with JPEG-LS.
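
    The low-complexity coding of two-sided geometrically decaying residuals can be sketched with the closely related Golomb-Rice family (as used in JPEG-LS). This is a minimal illustration of the general approach, not the authors' analytic Huffman construction; the Rice parameter k is assumed given rather than estimated from context:

        def zigzag(e):
            # Map a signed residual to a non-negative index: 0,-1,1,-2,2 -> 0,1,2,3,4,
            # folding the two geometric tails into one decaying distribution.
            return 2 * e if e >= 0 else -2 * e - 1

        def rice_encode(n, k):
            # Golomb-Rice code with m = 2**k: unary quotient, then k fixed bits.
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

        residuals = [0, -1, 3, -2, 1]                      # invented prediction errors
        bits = "".join(rice_encode(zigzag(e), k=1) for e in residuals)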

  19. Research on compression and improvement of vertex chain code

    NASA Astrophysics Data System (ADS)

    Yu, Guofang; Zhang, Yujie

    2009-10-01

    Combined with Huffman encoding theory, the code 2, which has the highest emergence probability and continuation frequency, is represented by the single binary digit 0; the combinations of 1 and 3, which have the next-highest emergence probability and continuation frequency, are represented by the two binary digits 10, with a corresponding frequency code attached to these two kinds of code (the length of the frequency code can be assigned beforehand or adapted automatically); and the codes 1 and 3 with the lowest emergence probability and continuation frequency are represented by the binary numbers 110 and 111, respectively. Relative encoding efficiency and decoding efficiency are added to the current performance evaluation system for chain codes. The new chain code is compared with a current chain code through a test system programmed in VC++; the results show that the basic performance of the new chain code is significantly improved, and that the performance advantages grow with the size of the graphics.

  20. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of coded apertures must take saturation into account. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in image reconstruction of up to 10 dB for the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).

  1. Introducing Flow-er: a Hydrodynamics Code for Relativistic and Newtonian Flows

    NASA Astrophysics Data System (ADS)

    Motl, P. M.; Tohline, J. E.; Lehner, L.

    2005-12-01

    We present a new numerical code (Flow-er) for calculating astrophysical flows in 1, 2 or 3 dimensions. We have implemented equations appropriate for the treatment of Newtonian gravity as well as the general relativistic formalism to treat flows with either a static or dynamic metric. The heart of the code is the recent non-oscillatory central difference scheme by Kurganov and Tadmor (2000; hereafter KT). With this technique, we do not require a characteristic decomposition or the solution of Riemann problems that are required by most other high resolution, shock capturing techniques. Furthermore, the KT scheme naturally incorporates the Method of Lines, allowing considerable flexibility in the choice of time integrators. We have implemented several interpolation kernels that allow us to choose the spatial accuracy of an evolution. Through the Cactus framework or as stand-alone code, Flow-er serves as the driver for the hydrodynamical portion of a simulation, either with adaptive mesh refinement or on a unigrid. In addition to describing Flow-er, we present results from several test problems. We are pleased to acknowledge support for this work from the National Science Foundation through grants PHY-0326311 and AST-0407070.
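
    The Riemann-solver-free flux evaluation that distinguishes KT-type central schemes can be illustrated with a minimal 1D sketch for Burgers' equation, using minmod-limited reconstruction and forward-Euler time stepping. This is an invented toy example under stated assumptions (periodic grid, arbitrary CFL number), not the Flow-er implementation:

        import numpy as np

        def minmod(a, b):
            # Slope limiter: zero at extrema, smallest-magnitude slope elsewhere.
            return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def kt_rhs(u, dx):
            # Semi-discrete central flux for Burgers' equation f(u) = u**2/2 on a
            # periodic grid; only the local wave speed is needed, no Riemann solve.
            s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
            uL = u + 0.5 * s                                    # left state at i+1/2
            uR = np.roll(u - 0.5 * s, -1)                       # right state at i+1/2
            a = np.maximum(np.abs(uL), np.abs(uR))              # local max speed |f'(u)|
            F = 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * a * (uR - uL)
            return -(F - np.roll(F, 1)) / dx

        N = 400
        dx = 2 * np.pi / N
        u = np.sin(np.arange(N) * dx)          # sine wave steepening into a shock
        dt = 0.4 * dx
        for _ in range(500):
            u = u + dt * kt_rhs(u, dx)

    A Method-of-Lines right-hand side like kt_rhs is exactly what lets such codes swap in higher-order time integrators, as the abstract notes.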

  2. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  3. Hydrodynamic code calculations of airblast for an explosive test in a shallow underground storage magazine

    NASA Astrophysics Data System (ADS)

    Kennedy, Lynn W.; Schneider, Kenneth D.

    1990-07-01

    A large-scale test of the detonation of 20,000 kilograms of high explosive inside a shallow underground tunnel/chamber complex, simulating an ammunition storage magazine, was carried out in August 1988 at the Naval Weapons Center, China Lake, California. The test was jointly sponsored by the U.S. Department of Defense Explosives Safety Board; the Safety Services Organisation of the Ministry of Defence, United Kingdom; and the Norwegian Defence Construction Service. The overall objective of the test was to determine the hazardous effects (debris, airblast, and ground motion) produced in this configuration. Actual storage magazines have considerably more overburden and are expected to contain an accidental detonation. The test configuration, on the other hand, was expected to rupture, and to scatter a significant amount of rock, dirt and debris. Among the observations and measurements made in this test was a study of airblast propagation within the storage chamber, in the access tunnel, and outside, on the tunnel ramp, prior to overburden venting. The results of these observations are being used to evaluate and validate current quantity-distance standards for the underground storage of munitions near inhabited structures. As part of the prediction effort for this test, to assist with transducer ranging in the access tunnel and with post-test interpretation of the results, S-CUBED was asked to perform two-dimensional inviscid hydrodynamic code calculations of the explosive detonation and subsequent blastwave propagation in the interior chamber and access tunnel. This was accomplished using the S-CUBED Hydrodynamic Advanced Research Code (SHARC). In this paper, details of the calculation configuration are presented and compared to the actual as-built internal configuration of the tunnel/chamber complex. Results from the calculations, including contour plots and airblast waveforms, are shown; the latter are compared with experimental records.

  4. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    PubMed

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article, we present a comparative study of a new compression approach using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first associated vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root-mean-square difference and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method. PMID:27104132
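
    The transform stage can be sketched with the PyWavelets package (an assumed third-party dependency); the paper's own chain adds vector quantization, SPIHT and arithmetic coding, which are replaced here by simple hard thresholding for brevity, and the surrogate EMG burst is invented test data:

        import numpy as np
        import pywt

        def dwt_compress(x, wavelet="db4", level=4, keep=0.10):
            # Decompose, keep only the largest `keep` fraction of coefficients,
            # and reconstruct -- a stand-in for the quantization/coding stages.
            coeffs = pywt.wavedec(x, wavelet, level=level)
            flat = np.concatenate(coeffs)
            thresh = np.quantile(np.abs(flat), 1.0 - keep)
            coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
            return pywt.waverec(coeffs, wavelet)

        t = np.linspace(0, 1, 2048)
        emg = np.random.randn(2048) * np.exp(-5 * (t - 0.5) ** 2)  # surrogate burst
        rec = dwt_compress(emg)
        # Percentage root-mean-square difference, the paper's distortion metric.
        prd = 100 * np.linalg.norm(emg - rec[:emg.size]) / np.linalg.norm(emg)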

  5. Speech coding and compression using wavelets and lateral inhibitory networks

    NASA Astrophysics Data System (ADS)

    Ricart, Richard

    1990-12-01

    The purpose of this thesis is to introduce the concept of lateral inhibition as a generalized technique for compressing time/frequency representations of electromagnetic and acoustical signals, particularly speech. This requires at least a rudimentary treatment of the theory of frames (which generalizes most commonly known time/frequency distributions), the biology of hearing, and digital signal processing. As such, this material, along with the interrelationships of the disparate subjects, is presented in a tutorial style. This may leave the mathematician longing for more rigor, the neurophysiological psychologist longing for more substantive support of the hypotheses presented, and the engineer longing for a reprieve from the theoretical barrage. Despite the problems that arise when trying to appeal to too wide an audience, this thesis should be a cogent analysis of the compression of time/frequency distributions via lateral inhibitory networks.

  6. Pulse code modulation data compression for automated test equipment

    SciTech Connect

    Navickas, T.A.; Jones, S.G.

    1991-05-01

    Development of automated test equipment for an advanced telemetry system requires continuous monitoring of PCM data while exercising telemetry inputs. This requirement leads to a large amount of data that needs to be stored and later analyzed. For example, a data stream of 4 Mbits/s and a test time of thirty minutes would yield 900 Mbytes of raw data. With this raw data, information needs to be stored to correlate the raw data to the test stimulus. This leads to a total of 1.8 Gbytes of data to be stored and analyzed. There is no method to analyze this amount of data in a reasonable time. A data compression method is needed to reduce the amount of data collected to a reasonable amount. The solution to the problem was data reduction. Data reduction was accomplished by real-time limit checking, time stamping, and smart software. Limit checking was accomplished by an eight-state finite state machine and four compression algorithms. Time stamping was needed to correlate stimulus to the appropriate output for data reconstruction. The software was written in the C programming language with a DOS extender used to allow it to run in extended mode. A 94--98% compression in the amount of data gathered was accomplished using this method. 1 fig.
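
    The limit-checking idea reduces a dense sample stream to time-stamped events. The paper's eight-state machine and its four compression algorithms are not described in the record, so the following two-state version is an invented illustration of the principle only:

        def limit_check_reduce(samples, lo, hi):
            # Keep only time-stamped limit-band crossings instead of every sample:
            # a state change (in-range <-> violation) produces one stored event.
            events, in_violation = [], False
            for t, s in enumerate(samples):
                bad = not (lo <= s <= hi)
                if bad != in_violation:
                    events.append((t, s, "enter" if bad else "exit"))
                    in_violation = bad
            return events

        data = [0, 1, 5, 9, 3, 2, 11, 12, 4]
        print(limit_check_reduce(data, lo=0, hi=10))
        # [(6, 11, 'enter'), (8, 4, 'exit')] -- 9 samples reduced to 2 events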

  7. Joint source-channel coding: secured and progressive transmission of compressed medical images on the Internet.

    PubMed

    Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Coat, Véronique

    2008-06-01

    The joint source-channel coding system proposed in this paper has two aims: lossless compression with a progressive mode and the integrity of medical data, which takes into account the priorities of the image and the properties of a network with no guaranteed quality of service. In this context, the use of scalable coding, locally adapted resolution (LAR) and a discrete and exact Radon transform, known as the Mojette transform, meets this twofold requirement. In this paper, details of this joint coding implementation are provided as well as a performance evaluation with respect to the reference CALIC coding and to unequal error protection using Reed-Solomon codes. PMID:18289830

  8. Introducing Flow-er: a Hydrodynamics Code for Relativistic and Newtonian Flows

    NASA Astrophysics Data System (ADS)

    Motl, Patrick; Olabarrieta, Ignacio; Tohline, Joel

    2006-04-01

    We present a new numerical code (Flow-er) for calculating astrophysical flows in 1, 2 or 3 dimensions. We have implemented equations appropriate for the treatment of Newtonian gravity as well as the general relativistic formalism to treat flows with either a static or dynamic metric. The heart of the code is the recent non-oscillatory central difference scheme by Kurganov and Tadmor (2000; hereafter KT). With this technique, we do not require a characteristic decomposition or the solution of Riemann problems that are required by most other high resolution, shock capturing techniques. Furthermore, the KT scheme naturally incorporates the Method of Lines, allowing considerable flexibility in the choice of time integrators. We have implemented several interpolation kernels that allow us to choose the spatial accuracy of an evolution. Flow-er has been tested against an independent implementation of the KT scheme to solve the relativistic equations in 1D, which we also describe. Flow-er can serve as the driver for the hydrodynamical portion of a simulation utilizing adaptive mesh refinement or on a unigrid. In addition to describing Flow-er, we present results from several test problems.

  9. Priority-based error correction using turbo codes for compressed AIRS data

    NASA Astrophysics Data System (ADS)

    Gladkova, I.; Grossberg, M.; Grayver, E.; Olsen, D.; Nalli, N.; Wolf, W.; Zhou, L.; Goldberg, M.

    2006-08-01

    Errors due to wireless transmission can have an arbitrarily large impact on a compressed file. A single bit error appearing in the compressed file can propagate during a decompression procedure and destroy the entire granule. Such a loss is unacceptable since this data is critical for a range of applications, including weather prediction and emergency response planning. The impact of a bit error in the compressed granule is very sensitive to the error's location in the file. There is a natural hierarchy of compressed data in terms of impact on the final retrieval products. For the considered compression scheme, errors in some parts of the data yield no noticeable degradation in the final products. We formulate a priority scheme for the compressed data and present an error correction approach based on minimizing impact on the retrieval products. Forward error correction codes (e.g., turbo, LDPC) allow the tradeoff between error correction strength and file inflation (bandwidth expansion). We propose segmenting the compressed data based on its priority and applying different-strength FEC codes to different segments. In this paper we demonstrate that this approach can achieve negligible product degradation while maintaining an overall 3-to-1 compression ratio on the final file. We apply this to AIRS sounder data to demonstrate viability for the sounder on the next-generation GOES-R platform.
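
    The segmentation idea can be made concrete with a short sketch. The paper applies turbo or LDPC codes; purely as an illustration, and assuming the third-party Python package reedsolo, the following applies a stronger Reed-Solomon code to a hypothetical high-priority segment than to the rest (segment names and parity strengths are invented):

        from reedsolo import RSCodec   # pip install reedsolo

        # Unequal error protection: more parity bytes on the high-priority
        # segment of the compressed granule, fewer on the refinement data.
        HIGH_NSYM, LOW_NSYM = 32, 8

        def protect(header: bytes, payload: bytes) -> bytes:
            strong, weak = RSCodec(HIGH_NSYM), RSCodec(LOW_NSYM)
            return bytes(strong.encode(header)) + bytes(weak.encode(payload))

        packet = protect(b"critical-coeffs", b"refinement-bits" * 10)

    The overall bandwidth expansion is then a weighted average of the two code rates, which is how a fixed 3-to-1 final compression ratio can be budgeted.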

  10. FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations

    SciTech Connect

    Ding, Jianmin; Lyczkowski, R.W.; Burge, S.W.

    1993-02-01

    A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B & W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient as well as steady-state simulations. This Cartesian coordinates computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description of the quality control to assess the accuracy of the new code and the validation using some of the measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC) are given. It is envisioned that one day, FORCE2 with additional modules such as radiation heat transfer, combustion kinetics and multi-solids together with user-friendly pre- and post-processor software and tailored for massively parallel multiprocessor shared memory computational platforms will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.

  12. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
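
    The Karhunen-Loeve (principal component) transform step can be illustrated with a minimal numpy sketch. The 6-band test vectors are invented, and the block quantization of the coefficients, which the paper pairs with the transform, is omitted here:

        import numpy as np

        def klt_compress(X, k):
            # X: (n_samples, n_bands) multispectral vectors. Decorrelate with the
            # eigenvectors of the band covariance and keep the top-k components.
            mu = X.mean(axis=0)
            C = np.cov(X - mu, rowvar=False)
            w, V = np.linalg.eigh(C)           # eigenvalues ascending
            basis = V[:, ::-1][:, :k]          # top-k eigenvectors
            Y = (X - mu) @ basis               # decorrelated coefficients
            Xhat = Y @ basis.T + mu            # reconstruction from k components
            return Y, Xhat

        X = np.random.rand(1000, 6)            # surrogate 6-band pixel vectors
        Y, Xhat = klt_compress(X, k=3)
        mse = np.mean((X - Xhat) ** 2)         # distortion from dropped components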

  13. User manual for INVICE 0.1-beta: a computer code for inverse analysis of isentropic compression experiments.

    SciTech Connect

    Davis, Jean-Paul

    2005-03-01

    INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.

  14. Ultraspectral sounder data compression using error-detecting reversible variable-length coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Ahuja, Alok; Huang, Hung-Lung; Schmit, Timothy J.; Heymann, Roger W.

    2005-08-01

    Nonreversible variable-length codes (e.g. Huffman coding, Golomb-Rice coding, and arithmetic coding) have been used in source coding to achieve efficient compression. However, a single bit error during noisy transmission can cause many codewords to be misinterpreted by the decoder. In recent years, increasing attention has been given to the design of reversible variable-length codes (RVLCs) for better data transmission in error-prone environments. RVLCs allow instantaneous decoding in both directions, which affords better detection of bit errors due to synchronization losses over a noisy channel. RVLCs have been adopted in emerging video coding standards--H.263+ and MPEG-4--to enhance their error-resilience capabilities. Given the large volume of three-dimensional data that will be generated by future space-borne ultraspectral sounders (e.g. IASI, CrIS, and HES), the use of error-robust data compression techniques will be beneficial to satellite data transmission. In this paper, we investigate a reversible variable-length code for ultraspectral sounder data compression, and present its numerical experiments on error propagation for the ultraspectral sounder data. The results show that the RVLC performs significantly better error containment than JPEG2000 Part 2.
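
    The reversibility property can be sketched with a hypothetical palindromic codeword table (not the codes studied in the paper): palindromic codewords that form a prefix-free set are automatically suffix-free, so the same table decodes the bitstream in either direction, which is what enables the error detection described above.

        # Symmetric RVLC demo: palindromic, prefix-free codewords.
        CODE = {"a": "0", "b": "11", "c": "101", "d": "1001"}
        REV = {v: k for k, v in CODE.items()}

        def decode(bits, backward=False):
            # Greedy instantaneous decoding; palindromes let the reversed
            # stream reuse the same codeword table.
            if backward:
                bits = bits[::-1]
            out, cur = [], ""
            for b in bits:
                cur += b
                if cur in REV:
                    out.append(REV[cur])
                    cur = ""
            return out[::-1] if backward else out

        bits = "".join(CODE[s] for s in "abcd")   # "0111011001"
        assert decode(bits) == list("abcd")
        assert decode(bits, backward=True) == list("abcd")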

  15. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar variables. For the first issue, we discovered a feature that can simplify the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed, and the processing time of the grammar code can be significantly reduced. For the second issue, we propose the use of double-character symbols to increase the number of grammar variables. Under the condition that all the grammar variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. Using the proposed methods, we show that the grammar code can outperform three other schemes (Lempel-Ziv-Welch (LZW), arithmetic, and Huffman coding) in compression ratio, and has error tolerance capabilities similar to those of LZW coding under similar circumstances.

  16. Onset of hydrodynamic mix in high-velocity, highly compressed inertial confinement fusion implosions.

    PubMed

    Ma, T; Patel, P K; Izumi, N; Springer, P T; Key, M H; Atherton, L J; Benedetti, L R; Bradley, D K; Callahan, D A; Celliers, P M; Cerjan, C J; Clark, D S; Dewald, E L; Dixit, S N; Döppner, T; Edgell, D H; Epstein, R; Glenn, S; Grim, G; Haan, S W; Hammel, B A; Hicks, D; Hsing, W W; Jones, O S; Khan, S F; Kilkenny, J D; Kline, J L; Kyrala, G A; Landen, O L; Le Pape, S; MacGowan, B J; Mackinnon, A J; MacPhee, A G; Meezan, N B; Moody, J D; Pak, A; Parham, T; Park, H-S; Ralph, J E; Regan, S P; Remington, B A; Robey, H F; Ross, J S; Spears, B K; Smalyuk, V; Suter, L J; Tommasini, R; Town, R P; Weber, S V; Lindl, J D; Edwards, M J; Glenzer, S H; Moses, E I

    2013-08-23

    Deuterium-tritium inertial confinement fusion implosion experiments on the National Ignition Facility have demonstrated yields ranging from 0.8 to 7×10^14, and record fuel areal densities of 0.7 to 1.3 g/cm^2. These implosions use hohlraums irradiated with shaped laser pulses of 1.5-1.9 MJ energy. The laser peak power and duration at peak power were varied, as were the capsule ablator dopant concentrations and shell thicknesses. We quantify the level of hydrodynamic instability mix of the ablator into the hot spot from the measured elevated absolute x-ray emission of the hot spot. We observe that DT neutron yield and ion temperature decrease abruptly as the hot spot mix mass increases above several hundred ng. The comparison with radiation-hydrodynamic modeling indicates that low mode asymmetries and increased ablator surface perturbations may be responsible for the current performance. PMID:24010449

  17. Non-US data compression and coding research. FASAC Technical Assessment Report

    SciTech Connect

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  18. Research on spatial coding compressive spectral imaging and its applicability for rural survey

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Ji, Yiqun; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    Compressive spectral imaging combines the traditional spectral imaging method with the new concept of compressive sensing, and thus has advantages such as reduced data acquisition, snapshot imaging over a large field of view, and increased image signal-to-noise ratio; its preliminary effectiveness has been explored in early applications such as high-speed imaging and fluorescence imaging. In this paper, the application potential of the spatial coding compressive spectral imaging technique for rural survey is examined. The physical model for spatial coding compressive spectral imaging is built, its data flow is analyzed, and its data reconstruction problem is formulated. Existing sparse reconstruction methods are reviewed, and a module based on the two-step iterative shrinkage/thresholding algorithm is built to perform the imaging data reconstruction. A simulated imaging experiment based on AVIRIS visible-band data of a selected rural scene is carried out. The spatial identification and spectral feature extraction capability for different ground species is evaluated by visual inspection of both single-band images and spectral curves. Data fidelity evaluation parameters (RMSE and PSNR) are put forward to verify quantitatively the data fidelity of this compressive imaging method. The application potential of spatial coding compressive spectral imaging for rural survey, crop monitoring, vegetation inspection and further agricultural development needs is verified in this paper.
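
    The reconstruction step can be sketched with the basic one-step iterative shrinkage/thresholding algorithm (ISTA); the paper's module uses the two-step (TwIST) variant, and the sensing matrix and sparse test scene below are invented for illustration:

        import numpy as np

        def ista(A, y, lam=0.1, step=None, iters=200):
            # Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by alternating a
            # gradient step with soft thresholding (the shrinkage step).
            if step is None:
                step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const.
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = x - step * A.T @ (A @ x - y)           # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
            return x

        m, n = 60, 128                                     # compressive: m < n
        A = np.random.randn(m, n) / np.sqrt(m)             # surrogate sensing matrix
        x0 = np.zeros(n)
        x0[[5, 40, 90]] = [1.0, -0.7, 0.5]                 # sparse test scene
        x_hat = ista(A, A @ x0, lam=0.02)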

  19. Energy requirements for quantum data compression and 1-1 coding

    SciTech Connect

    Rallan, Luke; Vedral, Vlatko

    2003-10-01

    By looking at quantum data compression in the second quantization, we present a model for the efficient generation and use of variable length codes. In this picture, lossless data compression can be seen as the minimum energy required to faithfully represent or transmit classical information contained within a quantum state. In order to represent information, we create quanta in some predefined modes (i.e., frequencies) prepared in one of the two possible internal states (the information carrying degrees of freedom). Data compression is now seen as the selective annihilation of these quanta, the energy of which is effectively dissipated into the environment. As any increase in the energy of the environment is intricately linked to any information loss and is subject to Landauer's erasure principle, we use this principle to distinguish lossless and lossy schemes and to suggest bounds on the efficiency of our lossless compression protocol. In line with the work of Bostroem and Felbinger [Phys. Rev. A 65, 032313 (2002)], we also show that when using variable length codes the classical notions of prefix or uniquely decipherable codes are unnecessarily restrictive given the structure of quantum mechanics and that a 1-1 mapping is sufficient. In the absence of this restraint, we translate existing classical results on 1-1 coding to the quantum domain to derive a new upper bound on the compression of quantum information. Finally, we present a simple quantum circuit to implement our scheme.

  20. A Test Data Compression Scheme Based on Irrational Numbers Stored Coding

    PubMed Central

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    Test data volume has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for converting floating-point numbers to irrational numbers precisely is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL. PMID:25258744

  1. Embedded zeroblock coding algorithm based on KLT and wavelet transform for hyperspectral image compression

    NASA Astrophysics Data System (ADS)

    Hou, Ying

    2009-10-01

    In this paper, a hyperspectral image lossy coder using three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm based on Karhunen-Loève transform (KLT) and wavelet transform (WT) is proposed. This coding scheme adopts 1D KLT as spectral decorrelator and 2D WT as spatial decorrelator. Furthermore, the computational complexity and the coding performance of the low-complexity KLT are compared and evaluated. In comparison with several state-of-the-art coding algorithms, experimental results indicate that our coder can achieve better lossy compression performance.

  2. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

    X-ray binaries and AGNs are powered by accretion discs around compact objects, where the X-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the X-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. Although these fluctuations arise in the outer parts of the disc, they propagate inwards to give rise to X-ray variability, and hence provide a natural connection between the X-ray and UV variability. There are analytical expressions to qualitatively understand the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed an efficient numerical code incorporating all these effects, which considers gas-pressure-dominated solutions and stochastic fluctuations, with the inclusion of the boundary effect of the last stable orbit.

  3. Numerical Simulation of Supersonic Compression Corners and Hypersonic Inlet Flows Using the RPLUS2D Code

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code for hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and turbulent boundary layer. The computational results also compared well with the experimental results for a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.

  4. Research on Differential Coding Method for Satellite Remote Sensing Data Compression

    NASA Astrophysics Data System (ADS)

    Lin, Z. J.; Yao, N.; Deng, B.; Wang, C. Z.; Wang, J. H.

    2012-07-01

    Data compression, in the process of satellite Earth data transmission, is of great concern for improving the efficiency of data transmission. The information amounts inherent to remote sensing images provide a foundation for data compression in terms of information theory. In particular, the distinct degrees of uncertainty inherent to distinct land covers result in different information amounts. This paper first proposes a lossless differential encoding method to improve compression rates. A district forecast differential encoding method is then proposed to further improve the compression rates. Considering that stereo measurements in modern photogrammetry are basically accomplished by means of automatic stereo image matching, an edge protection operator is finally utilized to appropriately filter out high-frequency noise, which helps magnify the signals and further improve the compression rates. The three steps were applied to a Landsat TM multispectral image and a set of SPOT-5 panchromatic images of four typical land cover types (i.e., urban areas, farm lands, mountain areas and water bodies). Results revealed that the average code lengths obtained by the differential encoding method, compared with Huffman encoding, were closer to the information amounts inherent to remote sensing images, and the compression rates were improved to some extent. Furthermore, the compression rates of the four land cover images obtained by the district forecast differential encoding method were nearly doubled. As for the images with edge features preserved, the compression rates are on average four times those of the original images.
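
    The benefit of differential encoding is easy to demonstrate: deltas of a smooth scan line concentrate near zero, so their zero-order entropy, and hence the average code length of any entropy coder applied afterwards, drops. A minimal sketch with an invented scan line:

        import numpy as np

        def entropy_bits(symbols):
            # Empirical zero-order entropy in bits per symbol.
            _, counts = np.unique(symbols, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        row = np.array([100, 101, 102, 102, 103, 104, 104, 105,
                        106, 107, 107, 108, 109, 110, 110, 111])
        deltas = np.diff(row)                 # base value row[0] stored separately
        print(entropy_bits(row))              # high: many distinct grey levels
        print(entropy_bits(deltas))           # low: deltas are almost all 0 or 1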

  5. THC: a new high-order finite-difference high-resolution shock-capturing code for special-relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Radice, D.; Rezzolla, L.

    2012-11-01

    We present THC: a new high-order flux-vector-splitting code for Newtonian and special-relativistic hydrodynamics designed for direct numerical simulations of turbulent flows. Our code implements a variety of different reconstruction algorithms, such as the popular weighted essentially non-oscillatory (WENO) and monotonicity-preserving schemes, or the more specialised bandwidth-optimised WENO scheme that has been specifically designed for the study of compressible turbulence. We show the first systematic comparison of these schemes in Newtonian physics as well as for special-relativistic flows. In particular we present the results obtained in simulations of grid-aligned and oblique shock waves and of nonlinear, large-amplitude, smooth adiabatic waves. We also discuss the results obtained in classical benchmarks such as the double-Mach shock reflection test in Newtonian physics and the linear and nonlinear development of the relativistic Kelvin-Helmholtz instability in two and three dimensions. Finally, we study the turbulent flow induced by the Kelvin-Helmholtz instability and show that our code is able to obtain well-converged velocity spectra, from which we benchmark the effective resolution of the different schemes.
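
    For reference, a minimal reconstruction kernel of the kind the abstract mentions: the classical fifth-order WENO (Jiang-Shu) left-biased interface reconstruction on a periodic grid. This is the textbook variant, not THC's bandwidth-optimised scheme:

        import numpy as np

        def weno5_left(f):
            # Reconstruct the left-biased interface value f_{i+1/2} from cell
            # averages f[i-2..i+2]; weights fall back to the smoothest stencil
            # near discontinuities (periodic indexing via np.roll).
            fm2, fm1, f0 = np.roll(f, 2), np.roll(f, 1), f
            fp1, fp2 = np.roll(f, -1), np.roll(f, -2)
            p0 = (2*fm2 - 7*fm1 + 11*f0) / 6          # candidate stencils
            p1 = (-fm1 + 5*f0 + 2*fp1) / 6
            p2 = (2*f0 + 5*fp1 - fp2) / 6
            b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
            b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
            b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
            eps = 1e-6
            a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
            return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)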

  6. Implementation of a simple model for linear and nonlinear mixing at unstable fluid interfaces in hydrodynamics codes

    SciTech Connect

    Ramshaw, J D

    2000-10-01

    A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.

  7. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the use of the wavelet transform, leading to low/high frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal code length in terms of the average number of bits per sample. At the receiver end, under the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal reconstruction, where the different ECG waves are recovered correctly.
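
    The entropy coding stage can be sketched with a generic Huffman table builder (an illustration, not the authors' implementation), using Python's heapq:

        import heapq
        from collections import Counter

        def huffman_code(data):
            # Repeatedly merge the two lightest subtrees, prefixing '0'/'1'
            # to the codes of their leaves; the counter breaks heap ties.
            heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(Counter(data).items())]
            heapq.heapify(heap)
            nxt = len(heap)
            while len(heap) > 1:
                w0, _, t0 = heapq.heappop(heap)
                w1, _, t1 = heapq.heappop(heap)
                merged = {s: "0" + c for s, c in t0.items()}
                merged.update({s: "1" + c for s, c in t1.items()})
                heapq.heappush(heap, [w0 + w1, nxt, merged])
                nxt += 1
            return heap[0][2]

        table = huffman_code("aaaabbbccd")     # invented symbol stream
        bits = "".join(table[s] for s in "aaaabbbccd")   # 19 bits vs 20 for fixed 2-bit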

  8. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on the wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside the ROI. After mean removal of the original signal, multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are important coefficients and are kept. Otherwise, the energy loss of the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold of the coefficients outside the ROI is then determined according to the loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding has been applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained. PMID:17228703

  9. Random wavelet transforms, algebraic geometric coding, and their applications in signal compression and de-noising

    SciTech Connect

    Bieleck, T.; Song, L.M.; Yau, S.S.T.; Kwong, M.K.

    1995-07-01

    The concepts of random wavelet transforms and discrete random wavelet transforms are introduced. It is shown that these transforms can lead to simultaneous compression and de-noising of signals that have been corrupted with fractional noises. Potential applications of algebraic geometric coding theory to encode the ensuing data are also discussed.

  10. Ultraspectral sounder data compression using the non-exhaustive Tunstall coding

    NASA Astrophysics Data System (ADS)

    Wei, Shih-Chieh; Huang, Bormin

    2008-08-01

    Given its bulky volume, ultraspectral sounder data may still suffer a few bit errors after channel coding. It is therefore beneficial to incorporate some mechanism in the source coding for error containment. The Tunstall code is a variable-to-fixed length code which can reduce the error propagation encountered in fixed-to-variable length codes like Huffman and arithmetic codes. The original Tunstall code uses an exhaustive parse tree in which internal nodes extend every symbol in branching. This can result in the assignment of precious codewords to less probable parse strings. Based on an infinitely extended parse tree, a modified Tunstall code is proposed which grows an optimal non-exhaustive parse tree by assigning the complete codewords only to top-probability nodes in the infinite tree. Comparison is made among the original exhaustive Tunstall code, our modified non-exhaustive Tunstall code, the CCSDS Rice code, and JPEG2000 Part 2 in terms of compression ratio and percent error rate using the ultraspectral sounder data.
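
    A minimal sketch of the original exhaustive Tunstall construction that the paper modifies: the most probable parse string is repeatedly extended by every source symbol until all fixed-length codewords are used up. The two-symbol source below is invented for illustration:

        import heapq

        def tunstall(probs, nbits):
            # Build the exhaustive Tunstall dictionary: each expansion replaces
            # one leaf by len(probs) leaves, so expand while the resulting leaf
            # count still fits in 2**nbits fixed-length codewords.
            heap = [(-p, s) for s, p in probs.items()]    # max-heap via negation
            heapq.heapify(heap)
            while len(heap) + len(probs) - 1 <= 2 ** nbits:
                p, leaf = heapq.heappop(heap)             # most probable leaf
                for s, ps in probs.items():
                    heapq.heappush(heap, (p * ps, leaf + s))  # p < 0 keeps ordering
            leaves = sorted(leaf for _, leaf in heap)
            return {leaf: format(i, f"0{nbits}b") for i, leaf in enumerate(leaves)}

        book = tunstall({"a": 0.7, "b": 0.3}, nbits=3)    # 8 parse strings, 3 bits each

    Because every codeword has the same fixed length, a bit error corrupts one parse string but cannot desynchronize the stream, which is the error-containment property at issue.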

  11. Data compression in wireless sensors network using MDCT and embedded harmonic coding.

    PubMed

    Alsalaet, Jaafar K; Ali, Abduladhem A

    2015-05-01

    One of the major applications of wireless sensor networks (WSNs) is vibration measurement for the purposes of structural health monitoring and machinery fault diagnosis. WSNs have many advantages over wired networks, such as low cost and reduced setup time. However, the useful bandwidth is limited compared to wired networks, resulting in relatively low sampling rates. One solution to this problem is data compression which, in addition to enhancing the sampling rate, saves valuable power of the wireless nodes. In this work, a data compression scheme based on the Modified Discrete Cosine Transform (MDCT) followed by Embedded Harmonic Components Coding (EHCC) is proposed to compress vibration signals. EHCC is applied to exploit the harmonic redundancy present in most vibration signals, resulting in an improved compression ratio. This scheme is made suitable for the tiny hardware of wireless nodes, and it is proved to be fast and effective. The efficiency of the proposed scheme is investigated by conducting several experimental tests. PMID:25541332

  12. Radiological image compression using error-free irreversible two-dimensional direct-cosine-transform coding techniques.

    PubMed

    Huang, H K; Lo, S C; Ho, B K; Lou, S L

    1987-05-01

    Some error-free and irreversible two-dimensional direct-cosine-transform (2D-DCT) coding, image-compression techniques applied to radiological images are discussed in this paper. Run-length coding and Huffman coding are described, and examples are given for error-free image compression. In the case of irreversible 2D-DCT coding, the block-quantization technique and the full-frame bit-allocation (FFBA) technique are described. Error-free image compression can achieve a compression ratio from 2:1 to 3:1, whereas the irreversible 2D-DCT coding compression technique can, in general, achieve a much higher acceptable compression ratio. The currently available block-quantization hardware may lead to visible block artifacts at certain compression ratios, but FFBA may be employed with the same or higher compression ratios without generating such artifacts. An even higher compression ratio can be achieved if the image is compressed by using first FFBA and then Huffman coding. The disadvantages of FFBA are that it is sensitive to sharp edges and no hardware is available. This paper also describes the design of the FFBA technique. PMID:3598750

  13. Compression and Encryption of ECG Signal Using Wavelet and Chaotically Huffman Code in Telemedicine Application.

    PubMed

    Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz

    2016-03-01

    In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems. The important issue is being able to recover the original signal from the compressed signal. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used and there is no need for any computers to serve this purpose. After initial preprocessing such as removal of baseline noise, Gaussian noise, peak detection and determination of heart rate, the ECG signal is compressed. In the compression stage, after 3 steps of wavelet transform (db04), thresholding techniques are used. Then, Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. Then, the ECG signals are sent to a telemedicine center to acquire specialist diagnosis by the TCP/IP protocol. PMID:26779641

  14. Split field coding: low complexity error-resilient entropy coding for image compression

    NASA Astrophysics Data System (ADS)

    Meany, James J.; Martens, Christopher J.

    2008-08-01

    In this paper, we describe split field coding, an approach for low-complexity, error-resilient entropy coding which splits code words into two fields: a variable-length prefix and a fixed-length suffix. Once a prefix has been decoded correctly, the associated fixed-length suffix is error-resilient, with bit errors causing no loss of code word synchronization and only a limited amount of distortion in the decoded value. When the fixed-length suffixes are segregated into a separate block, this approach becomes suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes. Split field coding is demonstrated in the context of a wavelet-based image codec, with examples of various error resilience properties, and comparisons to the rate-distortion and computational performance of JPEG 2000.
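
    The prefix/suffix split can be illustrated with the order-0 exponential-Golomb code, which has exactly this structure: a variable-length prefix announcing the length of a fixed-length suffix. It is used here only as a familiar stand-in, not as the authors' codec:

        def exp_golomb(n):
            # Order-0 exp-Golomb code for n >= 0. The run of '0' bits (the
            # variable-length prefix) gives the suffix length; once it is
            # known, a bit error in the suffix distorts only this one value
            # and cannot desynchronize the codeword stream.
            m = n + 1
            prefix = "0" * (m.bit_length() - 1)   # variable-length field
            suffix = format(m, "b")               # fixed length once prefix known
            return prefix + suffix

        assert exp_golomb(0) == "1"
        assert exp_golomb(3) == "00100"

    Segregating all suffixes into their own block, as the paper does, then lets that block receive weaker (cheaper) error protection than the prefixes.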

  15. Compression performance of HEVC and its format range and screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Li, Bin; Xu, Jizheng; Sullivan, Gary J.

    2015-09-01

    This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.

  16. Correlation channel modeling for practical Slepian-Wolf distributed video compression system using irregular LDPC codes

    NASA Astrophysics Data System (ADS)

    Li, Li; Hu, Xiao; Zeng, Rui

    2007-11-01

    The development of practical distributed video coding schemes is based on the information-theoretic bounds established in the 1970s by Slepian and Wolf for distributed lossless coding, and by Wyner and Ziv for lossy coding with decoder side information. In distributed video compression applications, it is hard to accurately describe the non-stationary behavior of the virtual correlation channel between X and the side information Y, although it plays a very important role in overall system performance. In this paper, we implement a practical Slepian-Wolf asymmetric distributed video compression system using irregular LDPC codes. Moreover, by exploiting the dependencies on previously decoded bit planes of the video frame X and the side information Y, we present improved schemes that divide the bit planes into regions of different reliability. Our simulation results show that exploiting the dependencies between previously decoded bit planes yields better overall encoding rate performance as the BER approaches zero. We also show that, compared with the BSC model, the BC channel model is more suitable for the distributed video compression scenario because of the non-stationary properties of the virtual correlation channel, and that adaptively estimating channel model parameters from previously decoded adjacent bit planes provides more accurate initial belief messages from the channel at the LDPC decoder.

  17. Combining node-centered parallel radiation transport and higher-order multi-material cell-centered hydrodynamics methods in three-temperature radiation hydrodynamics code TRHD

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2016-06-01

    Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015) developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in the popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme, in principle, can handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI) for parallel computation. Presently, for multi-material HD, we have used a simple and robust closure model in which a common strain rate is assumed for all materials in a mixed cell. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition to this, the electron and radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies are required to be computed self-consistently. This has been achieved by using a node-centered symmetric-semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite volume scheme (NLFV) suitable for unstructured meshes. In this paper, we have described the details of the 2D, 3T, non-equilibrium, multi-material RHD code developed with special attention to the coupling of the various cell-centered and node-centered formulations, along with a suite of validation test problems to demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, in order to demonstrate the full capability of the code implementation, we have presented the simulation of laser driven shock propagation in a layered thin foil. The simulation results are found to be in good
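
    To illustrate why a semi-implicit treatment of the relaxation terms matters, here is a generic two-temperature relaxation step in Python (equal heat capacities assumed; this is a sketch, not the TRHD/SSI discretization itself): evaluating the exchange term at the new time level keeps the update stable even for time steps much longer than the relaxation time.

      # Semi-implicit electron-ion temperature relaxation step (generic sketch).
      # Solves: Te' = Te + dt/tau*(Ti' - Te');  Ti' = Ti + dt/tau*(Te' - Ti')
      # in closed form: the sum is conserved, the difference decays implicitly.

      def relax_step(Te, Ti, tau, dt):
          a = dt / tau
          mean = (Te + Ti) / 2.0                 # conserved for equal heat capacities
          diff = (Te - Ti) / (1.0 + 2.0 * a)     # implicit decay, stable for any dt
          return mean + diff / 2.0, mean - diff / 2.0

      Te, Ti = 1000.0, 100.0
      for _ in range(5):
          Te, Ti = relax_step(Te, Ti, tau=1.0, dt=10.0)   # dt >> tau stays stable
          print(round(Te, 2), round(Ti, 2))      # temperatures approach 550 monotonically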

  18. One-Dimensional Lagrangian Code for Plasma Hydrodynamic Analysis of a Fusion Pellet Driven by Ion Beams.

    Energy Science and Technology Software Center (ESTSC)

    1986-12-01

    Version 00. The MEDUSA-IB code performs implosion and thermonuclear burn calculations of an ion-beam-driven ICF target, based on one-dimensional plasma hydrodynamics and transport theory. It can calculate the following values in spherical geometry through the progress of implosion and fuel burnup of a multi-layered target: (1) hydrodynamic velocities, density, ion, electron and radiation temperatures, radiation energy density, ρR and burn rate of the target as a function of coordinates and time; (2) fusion gain as a function of time; (3) ionization degree; (4) temperature-dependent ion beam energy deposition; (5) radiation, α-particle and neutron spectra as a function of time.

  19. Compressed Reactive Turbulence and Supernovae Ia Recollapse using the FLASH code

    NASA Astrophysics Data System (ADS)

    Dursi, J.; Niemeyer, J.; Calder, A.; Fryxell, B.; Lamb, D.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F.; Tufo, H.; Zingale, M.

    1999-12-01

    The collapse of turbulent fluid, apart from being interesting for its own sake, is also of interest to the supernova problem; a failed ignition can cause a turbulent re-collapse, which might lead to a subsequent reignition under more favourable circumstances. We use the FLASH code, developed at the Center for Astrophysical Thermonuclear Flashes, to run small-scale DNS of the evolution of a compressible, combustible turbulent fluid under the effect of a forced radial homogeneous compression. We follow the evolution of density and temperature fluctuations over the compression history. This work is supported by the Department of Energy under Grant No. B341495 to the Center for Astrophysical Thermonuclear Flashes at the University of Chicago.

  1. Investigation of perception-oriented coding techniques for video compression based on large block structures

    NASA Astrophysics Data System (ADS)

    Kaprykowsky, Hagen; Doshkov, Dimitar; Hoffmann, Christoph; Ndjiki-Nya, Patrick; Wiegand, Thomas

    2011-09-01

    Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution video is the incorporation of larger block structures. In this work, we will address the question of how to incorporate perceptual aspects into new video coding schemes based on large block structures. This is rooted in the fact that especially high-frequency regions such as textures yield high coding costs when using classical prediction modes as well as encoder control based on the mean squared error. To overcome this problem, we will investigate the incorporation of novel intra predictors based on image completion methods. Furthermore, the integration of a perceptual-based encoder control using the well-known structural similarity index will be analyzed. A major aspect of this article is the evaluation of the coding results in a quantitative (i.e., statistical analysis of changes in mode decisions) as well as qualitative (i.e., coding efficiency) manner.

  2. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    NASA Astrophysics Data System (ADS)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of

  3. A secure approach for encrypting and compressing biometric information employing orthogonal code and steganography

    NASA Astrophysics Data System (ADS)

    Islam, Muhammad F.; Islam, Mohammed N.

    2012-04-01

    The objective of this paper is to develop a novel approach for encryption and compression of biometric information utilizing orthogonal coding and steganography techniques. Multiple biometric signatures are encrypted individually using orthogonal codes and then multiplexed together to form a single image, which is then embedded in a cover image using the proposed steganography technique. The proposed technique employs three least significant bits for this purpose and a secret key is developed to choose one from among these bits to be replaced by the corresponding bit of the biometric image. The proposed technique offers secure transmission of multiple biometric signatures in an identification document which will be protected from unauthorized steganalysis attempt.
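
    A minimal sketch of the key-selected LSB embedding idea described above, with hypothetical helper names: a keyed pseudo-random stream picks which of the three lowest bit planes of each cover pixel carries one bit of the (already encrypted) biometric data.

      import random

      def embed(cover, bits, key):
          rng = random.Random(key)               # keyed stream of bit-plane choices
          stego = []
          for pixel, b in zip(cover, bits):
              pos = rng.randrange(3)             # key selects bit plane 0, 1, or 2
              stego.append((pixel & ~(1 << pos)) | (b << pos))
          return stego

      def extract(stego, n_bits, key):
          rng = random.Random(key)               # same key -> same bit-plane choices
          return [(p >> rng.randrange(3)) & 1 for p in stego[:n_bits]]

      cover = [200, 13, 97, 144]                 # toy cover pixels
      bits = [1, 0, 1, 1]                        # toy (encrypted) biometric bits
      stego = embed(cover, bits, key="secret")
      print(extract(stego, len(bits), key="secret"))   # -> [1, 0, 1, 1]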

  4. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-01

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on the classical DRPE with holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of the optical information encryption and authentication system. PMID:25836845
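
    For orientation, a minimal numpy sketch of classical 4f double random phase encoding is given below; the paper's simplified, non-interferometric recording and the iterative phase retrieval with the QR code are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))                         # stand-in for a QR-coded input

      phi1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane phase key
      phi2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane phase key

      # Classical DRPE: random phase, Fourier transform, second random phase.
      encoded = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

      # Decryption with the correct keys inverts the process exactly.
      decoded = np.fft.ifft2(np.fft.fft2(encoded) * np.conj(phi2)) * np.conj(phi1)
      print(np.allclose(decoded.real, img))              # -> True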

  5. Application Of Hadamard, Haar, And Hadamard-Haar Transformation To Image Coding And Bandwidth Compression

    NASA Astrophysics Data System (ADS)

    Choras, Ryszard S.

    1983-03-01

    The paper presents numerical techniques of transform image coding for image bandwidth compression. Unitary transformations called the Hadamard, Haar, and Hadamard-Haar transformations are defined and developed. The paper describes the construction of the transformation matrices and presents algorithms for computing the transformations and their inverses. The considered transformations are applied to image processing, and their utility and effectiveness are compared with other discrete transforms on the basis of some standard performance criteria.
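
    As a concrete illustration (assuming the standard Sylvester construction, which the abstract does not spell out), the following Python snippet builds a Hadamard matrix and applies a separable 2-D Hadamard transform to an image block; Haar and hybrid Hadamard-Haar matrices can be constructed and applied the same way.

      import numpy as np

      def hadamard(n):
          # Sylvester construction; n must be a power of two.
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      H = hadamard(8)
      block = np.arange(64.0).reshape(8, 8)        # toy 8x8 image block
      coeffs = H @ block @ H.T / 8.0               # forward separable 2-D transform
      restored = H @ coeffs @ H.T / 8.0            # H H^T = 8 I, so this inverts it
      print(np.allclose(restored, block))          # -> True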

  6. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.

  7. Gaseous Laser Targets and Optical Diagnostics for Studying Compressible Turbulent Hydrodynamic Instabilities

    SciTech Connect

    Edwards, M J; Hansen, J; Miles, A R; Froula, D; Gregori, G; Glenzer, S; Edens, A; Dittmire, T

    2005-02-08

    The possibility of studying compressible turbulent flows using gas targets driven by high power lasers and diagnosed with optical techniques is investigated. The potential advantage over typical laser experiments that use solid targets and x-ray diagnostics is more detailed information over a larger range of spatial scales. An experimental system is described to study shock-jet interactions at high Mach number. This consists of a mini-chamber full of nitrogen at a pressure of ~1 atm. The mini-chamber is situated inside a much larger vacuum chamber. An intense laser pulse (~100 J in ~5 ns) is focused onto a thin, ~0.3 μm thick silicon nitride window at one end of the mini-chamber. The window acts both as a vacuum barrier and as a laser entrance hole. The "explosion" caused by the deposition of the laser energy just inside the window drives a strong blast wave out into the nitrogen atmosphere. The spherical shock expands and interacts with a jet of xenon introduced through the top of the mini-chamber. The Mach number of the interaction is controlled by the separation of the jet from the explosion. The resulting flow is visualized using an optical schlieren system with a pulsed laser source at a wavelength of 0.53 μm. The technical path leading up to the design of this experiment is presented, and future prospects are briefly considered. Lack of laser time in the final year of the project severely limited the experimental results obtained using the new apparatus.
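
    A back-of-envelope Sedov-Taylor estimate suggests the scale of such a laser-driven blast wave; the deposited energy fraction and the constant xi below are rough assumptions for illustration, not values from the experiment.

      # Sedov-Taylor point-explosion scaling: R(t) = xi * (E * t**2 / rho)**(1/5)
      rho = 1.17          # kg/m^3, nitrogen near 1 atm and room temperature
      E = 50.0            # J, assumed fraction of the ~100 J pulse driving the blast
      xi = 1.15           # dimensionless Sedov constant for gamma ~ 1.4

      for t_us in (0.5, 1.0, 2.0, 5.0):
          t = t_us * 1e-6
          R = xi * (E * t**2 / rho) ** 0.2
          print("t = %.1f us -> R = %.1f mm" % (t_us, R * 1e3))   # roughly cm-scale shocks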

  8. Recent Hydrodynamics Improvements to the RELAP5-3D Code

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2009-07-01

    The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.

  9. Thermodynamic analysis of five compressed-air energy-storage cycles. [Using CAESCAP computer code

    SciTech Connect

    Fort, J. A.

    1983-03-01

    One important aspect of the Compressed-Air Energy-Storage (CAES) Program is the evaluation of alternative CAES plant designs. The thermodynamic performance of the various configurations is particularly critical to the successful demonstration of CAES as an economically feasible energy-storage option. A computer code, the Compressed-Air Energy-Storage Cycle-Analysis Program (CAESCAP), was developed in 1982 at the Pacific Northwest Laboratory. This code was designed specifically to calculate overall thermodynamic performance of proposed CAES-system configurations. The results of applying this code to the analysis of five CAES plant designs are presented in this report. The designs analyzed were: conventional CAES; adiabatic CAES; hybrid CAES; pressurized fluidized-bed CAES; and direct coupled steam-CAES. Inputs to the code were based on published reports describing each plant cycle. For each cycle analyzed, CAESCAP calculated the thermodynamic station conditions and individual-component efficiencies, as well as overall cycle-performance-parameter values. These data were then used to diagram the availability and energy flow for each of the five cycles. The resulting diagrams graphically illustrate the overall thermodynamic performance inherent in each plant configuration, and enable a more accurate and complete understanding of each design.

  10. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
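
    The following Python sketch conveys the per-block code selection idea in a Golomb-Rice setting (a simplified stand-in for the concatenated codes of the Basic Compressor, which the abstract does not specify in detail): for each block of mapped residuals, the encoder evaluates the coded length under each candidate code and keeps the cheapest.

      def rice_length(block, k):
          # Unary quotient + terminator bit + k suffix bits per sample;
          # block values are assumed to be non-negative mapped residuals.
          return sum((v >> k) + 1 + k for v in block)

      def choose_code(block, candidates=(0, 1, 2)):
          # Pick the code parameter that minimizes the coded block length.
          return min(candidates, key=lambda k: rice_length(block, k))

      residuals = [0, 1, 0, 2, 1, 0, 3, 1, 0, 0, 1,
                   2, 0, 1, 0, 0, 2, 1, 0, 1, 0]        # one block of 21 samples
      k = choose_code(residuals)
      print(k, rice_length(residuals, k), "bits")       # -> 0 37 bits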

  11. Improvement Text Compression Performance Using Combination of Burrows Wheeler Transform, Move to Front, and Huffman Coding Methods

    NASA Astrophysics Data System (ADS)

    Aprilianto, Mohammada; Abdurohman, Maman

    2014-04-01

    Text is a medium that is often used to convey information in both wired and wireless networks. One limitation of wireless systems is the network bandwidth. In this study we implemented a text compression application with a lossless compression technique using a combination of the Burrows-Wheeler transform, move-to-front, and Huffman coding methods. With the addition of text compression, network resources are expected to be saved. The application reports the compression ratio. Testing shows that text compression with the Huffman coding method alone is efficient when the number of text characters is above 400 characters, while text compression with the Burrows-Wheeler transform, move-to-front, and Huffman coding methods is efficient when the number of text characters is above 531 characters. The combination of these methods is more efficient than Huffman coding alone when the number of text characters is above 979 characters. The more characters that are compressed, and the more patterns of the same symbol, the better the compression ratio.
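
    A compact, self-contained sketch of the three-stage pipeline follows (naive O(n^2 log n) BWT for clarity; production coders use suffix arrays). It shows how the stages fit together and how the coded size can be estimated from Huffman code lengths; the sample string is illustrative.

      import heapq
      from collections import Counter

      def bwt(s):
          s += "\x00"                              # unique sentinel, sorts first
          rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
          return "".join(r[-1] for r in rotations)

      def mtf(s):
          table = sorted(set(s))
          out = []
          for ch in s:
              i = table.index(ch)
              out.append(i)
              table.insert(0, table.pop(i))        # move the symbol to the front
          return out

      def huffman_lengths(symbols):
          # Standard heap-based Huffman; returns code length per symbol.
          heap = [[w, [sym, 0]] for sym, w in Counter(symbols).items()]
          heapq.heapify(heap)
          while len(heap) > 1:
              lo, hi = heapq.heappop(heap), heapq.heappop(heap)
              for pair in lo[1:] + hi[1:]:
                  pair[1] += 1                     # one level deeper in the tree
              heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
          return {sym: max(bits, 1) for sym, bits in heap[0][1:]}

      text = "banana bandana banana"
      ranks = mtf(bwt(text))                       # BWT groups symbols, MTF -> small ranks
      lengths = huffman_lengths(ranks)
      total = sum(lengths[r] for r in ranks)       # ignores code-table overhead
      print(total, "bits vs", 8 * len(text), "bits raw")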

  12. Cholla: 3D GPU-based hydrodynamics code for astrophysical simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2016-07-01

    Cholla (Computational Hydrodynamics On ParaLLel Architectures) models the Euler equations on a static mesh and evolves the fluid properties of thousands of cells simultaneously using GPUs. It can update over ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction, allowing computation of astrophysical simulations with physically interesting grid resolutions (>256^3) on a single device; calculations can be extended onto multiple devices with nearly ideal scaling beyond 64 GPUs.

  13. End-to-end quality measure for transmission of compressed imagery over a noisy coded channel

    NASA Technical Reports Server (NTRS)

    Korwar, V. N.; Lee, P. J.

    1981-01-01

    For the transmission of imagery at high data rates over large distances with limited power and system gain, it is usually necessary to compress the data before transmitting it over a noisy channel that uses channel coding to reduce the effect of noise-introduced errors. Both compression and channel noise introduce distortion into the imagery. In order to design a communication link that provides adequate quality of received images, it is necessary first to define a suitable distortion measure that accounts for both kinds of distortion, and then to perform various tradeoffs to arrive at system parameter values that provide a sufficiently low level of received image distortion. The overall mean square error is used as the distortion measure, and a description of how to perform these tradeoffs is included.

  14. Single Stock Dynamics on High-Frequency Data: From a Compressed Coding Perspective

    PubMed Central

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. This data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors. PMID:24586235

  15. Mutual information-based context template modeling for bitplane coding in remote sensing image compression

    NASA Astrophysics Data System (ADS)

    Zhang, Yongfei; Cao, Haiheng; Jiang, Hongxu; Li, Bo

    2016-04-01

    As remote sensing image applications are often characterized by limited bandwidth and high quality demands, higher coding performance for remote sensing images is desirable. The embedded block coding with optimal truncation (EBCOT) is the fundamental part of the JPEG2000 image compression standard. However, EBCOT only considers correlation within a sub-band and utilizes a context template of eight spatially neighboring coefficients in prediction. Existing optimization methods in the literature using the current context template provide little performance improvement. To address this problem, this paper presents a new mutual information (MI)-based context template selection and modeling method. By further considering the correlation across sub-bands, the potential prediction coefficients, including neighbors, far neighbors, the parent, and parent neighbors, are comprehensively examined and selected in such a manner that achieves a nice trade-off between the MI-based correlation criterion and the prediction complexity. Based on the selected context template, a high-order prediction model, which jointly considers the weight and the significance state of each coefficient, is proposed. Experimental results show that the proposed algorithm consistently outperforms the benchmark JPEG2000 standard and state-of-the-art algorithms in terms of coding efficiency at a competitive computational cost, which makes it desirable in real-time compression applications, especially for remote sensing images.
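
    The MI-based selection criterion can be illustrated with a small estimator: rank each candidate context coefficient by the mutual information between its (binarized) state and the bit being coded, estimated from a joint histogram over training data. The data below are synthetic and the estimator is a generic sketch, not the paper's model.

      import numpy as np

      def mutual_information(x, y):
          # MI between two binary variables from their joint histogram.
          joint = np.histogram2d(x, y, bins=2)[0]
          joint /= joint.sum()
          px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
          nz = joint > 0
          return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

      rng = np.random.default_rng(1)
      bit = rng.integers(0, 2, 5000)                            # bit being coded
      neighbour = (bit ^ (rng.random(5000) < 0.1)).astype(int)  # strongly correlated
      parent = rng.integers(0, 2, 5000)                         # independent candidate
      print(mutual_information(neighbour, bit))   # high -> keep in context template
      print(mutual_information(parent, bit))      # near 0 -> drop from template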

  16. A segmentation-based lossless image coding method for high-resolution medical image compression.

    PubMed

    Shen, L; Rangayyan, R M

    1997-06-01

    Lossless compression techniques are essential in the archival and communication of medical images. In this paper, a new segmentation-based lossless image coding (SLIC) method is proposed, which is based on a simple but efficient region growing procedure. The embedded region growing procedure produces an adaptive scanning pattern for the image with the help of a very-few-bits-needed discontinuity index map. Along with this scanning pattern, an error image data part with a very small dynamic range is generated. Both the error image data and the discontinuity index map data parts are then encoded by the Joint Bi-level Image Experts Group (JBIG) method. The SLIC method resulted in, on average, lossless compression to about 1.6 b/pixel from 8 b, and to about 2.9 b/pixel from 10 b, with a database of ten high-resolution digitized chest and breast images. In comparison with direct coding by JBIG, Joint Photographic Experts Group (JPEG), hierarchical interpolation (HINT), and two-dimensional Burg prediction plus Huffman error coding methods, the SLIC method performed better by 4% to 28% on the database used. PMID:9184892

  17. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    PubMed

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. This data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors. PMID:24586235

  18. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 7: implicit hydrodynamics. Computer code manual. [PWR; BWR]

    SciTech Connect

    McKay, M.W.

    1982-06-01

    STEALTH is a family of computer codes that solve the equations of motion for a general continuum. These codes can be used to calculate a variety of physical processes in which the dynamic behavior of a continuum is involved. The versions of STEALTH described in this volume were designed for the calculation of problems involving low-speed fluid flow. They employ an implicit finite difference technique to solve the one- and two-dimensional equations of motion, written for an arbitrary coordinate system, for both incompressible and compressible fluids. The solution technique involves an iterative solution of the implicit, Lagrangian finite difference equations. Convection terms that result from the use of an arbitrarily-moving coordinate system are calculated separately. This volume provides the theoretical background, the finite difference equations, and the input instructions for the one- and two-dimensional codes; a discussion of several sample problems; and a listing of the input decks required to run those problems.

  19. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  20. Belief Propagation for Error Correcting Codes and Lossy Compression Using Multilayer Perceptrons

    NASA Astrophysics Data System (ADS)

    Mimura, Kazushi; Cousseau, Florent; Okada, Masato

    2011-03-01

    The belief propagation (BP) based algorithm is investigated as a potential decoder for both error correcting codes and lossy compression, which are based on non-monotonic tree-like multilayer perceptron encoders. We discuss whether the BP can give practical algorithms in these schemes. The BP implementations in these kinds of fully connected networks unfortunately show strong limitations, even though the theoretical results seem somewhat promising. The BP-based algorithms do, however, reveal that the solution space may have a rich and complex structure.

  1. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low-power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing the data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photodiode electronically, without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW while providing 100 fps video. PMID:27137331

  2. Finite element modeling of magnetic compression using coupled electromagnetic-structural codes

    SciTech Connect

    Hainsworth, G.; Leonard, P.J.; Rodger, D.; Leyden, C.

    1996-05-01

    A link between the electromagnetic code, MEGA, and the structural code, DYNA3D has been developed. Although the primary use of this is for modelling of Railgun components, it has recently been applied to a small experimental Coilgun at Bath. The performance of Coilguns is very dependent on projectile material conductivity, and so high purity aluminium was investigated. However, due to its low strength, it is crushed significantly by magnetic compression in the gun. Although impractical as a real projectile material, this provides useful benchmark experimental data on high strain rate plastic deformation caused by magnetic forces. This setup is equivalent to a large scale version of the classic jumping ring experiment, where the ring jumps with an acceleration of 40 kG.

  3. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based bio-signal lossless data compressor. PMID:25237900
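
    A hedged sketch of the joint-coding decision idea (not the ALS reference implementation): code a channel independently, or as a difference against the reference channel, depending on the cross correlation of the residuals and the resulting coded-size estimate. The cost model below is a crude stand-in for the entropy coder.

      import numpy as np

      def code_cost(x):
          # Crude proxy for entropy-coded size: bits grow with residual spread.
          return len(x) * max(np.log2(2 * np.std(x) + 1), 1)

      def choose_mode(res_ref, res_b, threshold=0.5):
          rho = np.corrcoef(res_ref, res_b)[0, 1]
          if rho > threshold and code_cost(res_b - res_ref) < code_cost(res_b):
              return "joint"                     # code the difference vs the reference
          return "independent"

      rng = np.random.default_rng(0)
      shared = rng.normal(0, 10, 4000)                     # common physiological component
      res_a = shared + rng.normal(0, 1, 4000)
      res_b = shared + rng.normal(0, 1, 4000)              # highly correlated with A
      print(choose_mode(res_a, res_b))                     # -> joint
      print(choose_mode(res_a, rng.normal(0, 10, 4000)))   # -> independent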

  4. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    PubMed

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based bio-signal lossless data compressor. PMID:25237900

  5. Multispectral image compression for spectral and color reproduction based on lossy to lossless coding

    NASA Astrophysics Data System (ADS)

    Shinoda, Kazuma; Murakami, Yuri; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2010-01-01

    In this paper we propose a multispectral image compression method based on lossy to lossless coding, suitable for both spectral and color reproduction. The proposed method divides the multispectral image data into two components, RGB and residual. The RGB component is extracted from the multispectral image, for example, by using the XYZ color matching functions, a color conversion matrix, and a gamma curve. The original multispectral image is estimated from the RGB data in the encoder, and the difference between the original and the estimated multispectral images, referred to as the residual component in this paper, is calculated in the encoder. The RGB and residual components are then each encoded by JPEG2000, and progressive decoding is possible from the losslessly encoded code-stream. Experimental results show that, although the proposed method is slightly inferior to JPEG2000 with a multicomponent transform in the rate-distortion plot of the spectral domain at low bit rates, a decoded RGB image shows high quality at low bit rates with primary encoding of the RGB component. Its lossless compression ratio is close to that of JPEG2000 with the integer KLT.
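
    The decomposition can be mimicked with a toy linear model: estimate the spectra from RGB by least squares learned on synthetic training pixels, and keep the small estimation residual as the refinement component. The 31-band setup, the Gaussian basis, and the random RGB extraction matrix below are illustrative assumptions, not the paper's pipeline.

      import numpy as np

      rng = np.random.default_rng(0)
      # Smooth toy spectra: mixtures of three Gaussian basis functions plus noise.
      basis = np.array([np.exp(-0.5 * ((np.arange(31) - c) / 6.0) ** 2)
                        for c in (5, 15, 25)])             # (3, 31)
      spectra = rng.random((1000, 3)) @ basis + 0.01 * rng.random((1000, 31))

      A = rng.random((31, 3))
      rgb = spectra @ A                                    # stand-in RGB extraction

      W = np.linalg.lstsq(rgb, spectra, rcond=None)[0]     # 3 -> 31 estimation matrix
      residual = spectra - rgb @ W                         # second, refinement component
      print(residual.std(), spectra.std())                 # residual is far more compressible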

  6. A New Multi-dimensional General Relativistic Neutrino Hydrodynamic Code for Core-collapse Supernovae. I. Method and Code Tests in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald

    2010-07-01

    We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the "ray-by-ray plus" approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in

  7. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMIC CODE FOR CORE-COLLAPSE SUPERNOVAE. I. METHOD AND CODE TESTS IN SPHERICAL SYMMETRY

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald

    2010-07-15

    We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the 'ray-by-ray plus' approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in

  8. User's manual for DYNA2D: an explicit two-dimensional hydrodynamic finite-element code with interactive rezoning

    SciTech Connect

    Hallquist, J.O.

    1982-02-01

    This revised report provides an updated user's manual for DYNA2D, an explicit two-dimensional axisymmetric and plane strain finite element code for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 4-node solid elements, and the equations of motion are integrated by the central difference method. An interactive rezoner eliminates the need to terminate the calculation when the mesh becomes too distorted. Rather, the mesh can be rezoned and the calculation continued. The command structure for the rezoner is described and illustrated by an example.

  9. ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests.

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.

  10. Analysis of prediction algorithms for residual compression in a lossy to lossless scalable video coding system based on HEVC

    NASA Astrophysics Data System (ADS)

    Heindel, Andreas; Wige, Eugen; Kaup, André

    2014-09-01

    Lossless image and video compression is required in many professional applications. However, lossless coding results in a high data rate, which leads to a long wait for the user when the channel capacity is limited. To overcome this problem, scalable lossless coding is an elegant solution. It provides a fast accessible preview by a lossy compressed base layer, which can be refined to a lossless output when the enhancement layer is received. Therefore, this paper presents a lossy to lossless scalable coding system where the enhancement layer is coded by means of intra prediction and entropy coding. Several algorithms are evaluated for the prediction step in this paper. It turned out that Sample-based Weighted Prediction is a reasonable choice for usual consumer video sequences and the Median Edge Detection algorithm is better suited for medical content from computed tomography. For both types of sequences the efficiency may be further improved by the much more complex Edge-Directed Prediction algorithm. In the best case, in total only about 2.7% additional data rate has to be invested for scalable coding compared to single-layer JPEG-LS compression for usual consumer video sequences. For the case of the medical sequences scalable coding is even more efficient than JPEG-LS compression for certain values of QP.
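
    For reference, the Median Edge Detection predictor mentioned above is the MED (LOCO-I) predictor used in JPEG-LS, which switches between edge and planar prediction based on the left (a), top (b), and top-left (c) neighbours; a direct Python transcription:

      def med_predict(a, b, c):
          # a = left, b = top, c = top-left neighbour of the current sample.
          if c >= max(a, b):
              return min(a, b)       # edge detected above or to the left
          if c <= min(a, b):
              return max(a, b)
          return a + b - c           # smooth region: planar prediction

      print(med_predict(100, 101, 100))   # -> 101 (smooth gradient)
      print(med_predict(50, 200, 210))    # -> 50  (edge detected)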

  11. Design of indirectly driven, high-compression Inertial Confinement Fusion implosions with improved hydrodynamic stability using a 4-shock adiabat-shaped drive

    NASA Astrophysics Data System (ADS)

    Milovich, J. L.; Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R.

    2015-12-01

    Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm2, but with significantly lower total neutron yields (between 1.5 × 1014 and 5.5 × 1014) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the "high-foot" experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3-10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm2. Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.

  12. Design of indirectly driven, high-compression Inertial Confinement Fusion implosions with improved hydrodynamic stability using a 4-shock adiabat-shaped drive

    SciTech Connect

    Milovich, J. L.; Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R.

    2015-12-15

    Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm², but with significantly lower total neutron yields (between 1.5 × 10¹⁴ and 5.5 × 10¹⁴) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the “high-foot” experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3–10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm². Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.

  13. Effects of thermal fluctuations and fluid compressibility on hydrodynamic synchronization of microrotors at finite oscillatory Reynolds number: a multiparticle collision dynamics simulation study.

    PubMed

    Theers, Mario; Winkler, Roland G

    2014-08-28

    We investigate the emergent dynamical behavior of hydrodynamically coupled microrotors by means of multiparticle collision dynamics (MPC) simulations. The two rotors are confined in a plane and move along circles driven by active forces. Comparing simulations to theoretical results based on linearized hydrodynamics, we demonstrate that time-dependent hydrodynamic interactions lead to synchronization of the rotational motion. Thermal noise implies large fluctuations of the phase-angle difference between the rotors, but synchronization prevails and the ensemble-averaged time dependence of the phase-angle difference agrees well with analytical predictions. Moreover, we demonstrate that compressibility effects lead to longer synchronization times. In addition, the relevance of the inertia terms of the Navier-Stokes equation is discussed, specifically the linear unsteady acceleration term characterized by the oscillatory Reynolds number ReT. We illustrate the continuous breakdown of synchronization with the Reynolds number ReT, in analogy to the continuous breakdown of the scallop theorem with decreasing Reynolds number. PMID:25011003

  14. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factor) for transition correlation for swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.

  15. Piecewise spectrally band-pass for compressive coded aperture spectral imaging

    NASA Astrophysics Data System (ADS)

    Qian, Lu-Lu; Lü, Qun-Bo; Huang, Min; Xiang, Li-Bin

    2015-08-01

    Coded aperture snapshot spectral imaging (CASSI) has been discussed in recent years. It has the remarkable advantages of high optical throughput, snapshot imaging, etc. The entire spatial-spectral data-cube can be reconstructed with just a single two-dimensional (2D) compressive sensing measurement. On the other hand, for less spectrally sparse scenes, the insufficiency of sparse sampling and aliasing in spatial-spectral images reduce the accuracy of the reconstructed three-dimensional (3D) spectral cube. To solve this problem, this paper extends the improved CASSI. A band-pass filter array is mounted on the coded mask, and the first image plane is thereby divided into a number of continuous spectral sub-band areas. The entire 3D spectral cube can be captured through the relative movement between the object and the instrument. The principle analysis and imaging simulation are presented. Comparing the peak signal-to-noise ratio (PSNR) and the information entropy of the reconstructed images for different numbers of spectral sub-band areas, the reconstructed 3D spectral cube shows an observable improvement in reconstruction fidelity as the number of sub-bands increases and the number of spectral channels of each sub-band simultaneously decreases. Project supported by the National Natural Science Foundation for Distinguished Young Scholars of China (Grant No. 61225024) and the National High Technology Research and Development Program of China (Grant No. 2011AA7012022).

  16. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used to implement digital pulse compression (DPC) to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, applied either to a single-stage mismatched filter or to a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, hence the greater the demand on FPGA logic resources, which often becomes a design challenge for system-on-chip (SoC) implementations. This multiplier requirement can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the FPGA logic used for FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between runs, producing different clusterings of the weights; occasionally, a smaller number of multipliers and a shorter filter may even provide a better PSR.
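
    The clustering step can be sketched in a few lines of Python (a simple 1-D k-means on the tap weights; the LP filter design itself is not shown, and the taps below are random stand-ins for LP-designed SSF coefficients):

      import numpy as np

      def kmeans_1d(w, k, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          centroids = rng.choice(w, size=k, replace=False)
          for _ in range(iters):
              labels = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centroids[j] = w[labels == j].mean()   # update each centroid
          labels = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
          return centroids[labels]                           # quantised tap weights

      rng = np.random.default_rng(1)
      taps = rng.normal(0, 1, 64)                 # stand-in for LP-designed filter taps
      quantised = kmeans_1d(taps, k=8)            # only 8 distinct multiplier values
      print(len(np.unique(quantised)), "distinct multipliers instead of", len(taps))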

  17. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    SciTech Connect

    Chertkov, Michael; Chilappagari, Shashi K; Vasic, Bane

    2010-01-01

    We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error-patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. BasP fails when its output is different from the actual error-pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that BasP fails on the instanton, while its action on any modification of the CS-instanton that decreases a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error-vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which yields a shortest instanton (error vector) pattern of length 11.
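
    For concreteness, the BasP decoder itself is the linear program min ||x||_1 subject to Ax = y, written with x = u - v and u, v >= 0; the sketch below solves a small random instance with SciPy and is not the paper's instanton search. The problem sizes are illustrative.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      m, n, k = 40, 120, 4
      A = rng.normal(size=(m, n))
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse signal
      y = A @ x_true

      c = np.ones(2 * n)                           # sum(u) + sum(v) = ||x||_1
      A_eq = np.hstack([A, -A])                    # A (u - v) = y
      res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
      x_hat = res.x[:n] - res.x[n:]
      # Sufficiently sparse Gaussian instances are recovered exactly (whp).
      print(np.allclose(x_hat, x_true, atol=1e-6))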

  18. Giant impacts during planet formation: Parallel tree code simulations using smooth particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cohen, Randi L.

    There is both theoretical and observational evidence that giant planets collided with objects ≥ M_Earth during their evolution. These impacts may play a key role in giant planet formation. This paper describes impacts of a ~ Earth-mass object onto a suite of proto-giant-planets, as simulated using an SPH parallel tree code. We run 6 simulations, varying the impact angle and the evolutionary stage of the proto-Jupiter. We find that it is possible for an impactor to free some mass from the core of the proto-planet it impacts through direct collision, as well as to make physical contact with the core yet escape partially, or even completely, intact. None of the 6 cases we consider produced a solid disk or resulted in a net decrease in the core mass of the proto-planet (since the mass decrease due to disruption was outweighed by the increase due to the addition of the impactor's mass to the core). However, we suggest parameters which may have these effects, and thus decrease core mass and formation time in protoplanetary models and/or create satellite systems. We find that giant impacts can remove significant envelope mass from forming giant planets, leaving only 2 M_Earth of gas, similar to Uranus and Neptune. They can also create compositional inhomogeneities in planetary cores, which create differences in planetary thermal emission characteristics.

  19. Giant Impacts During Planet Formation: Parallel Tree Code Simulations Using Smooth Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Bodenheimer, P.; Asphaug, E.

    2000-12-01

    There is both theoretical and observational evidence that giant planets collided with objects with mass >= Mearth during their evolution. These impacts may help shorten planetary formation timescales by changing the opacity of the planetary atmosphere to allow quicker cooling. They may also redistribute heavy metals within giant planets, affect the core/envelope mass ratio, and help determine the ratio of emitted to absorbed energy within giant planets. Thus, the researchers propose to simulate the impact of a ~ Earth-mass object onto a proto-giant-planet with SPH. Results of the SPH collision models will be input into a steady-state planetary evolution code and the effect of impacts on formation timescales, core/envelope mass ratios, density profiles, and thermal emissions of giant planets will be quantified. The collision will be modelled using a modified version of an SPH routine which simulates the collision of two polytropes. The Saumon-Chabrier and Tillotson equations of state will replace the polytropic equation of state. The parallel tree algorithm of Olson & Packer will be used for the domain decomposition and neighbor search necessary to calculate pressure and self-gravity efficiently. This work is funded by the NASA Graduate Student Researchers Program.

  20. High-performance lossless and progressive image compression based on an improved integer lifting scheme and the Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is studied. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its time efficiency is improved by 162%; the decoder is about 12.3 times faster than SPIHT's, and its time efficiency is raised by about 148%. Instead of using the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is greater than 3. For source models with distributions similar to the Laplacian, it improves the efficiency of coding and realizes progressive transmission coding and decoding.
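
    Rice coding itself is compact enough to show inline. A minimal encoder/decoder for non-negative integers (the parameter k is fixed by the caller here; the paper's modifications and adaptive behavior are not reproduced):

        def rice_encode(values, k):
            """Rice code: unary-coded quotient, then k-bit binary remainder."""
            bits = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                bits.extend([1] * q + [0])                      # unary part
                bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))
            return bits

        def rice_decode(bits, k, count):
            values, i = [], 0
            for _ in range(count):
                q = 0
                while bits[i]:
                    q, i = q + 1, i + 1
                i += 1                                          # skip the 0 terminator
                r = 0
                for _ in range(k):
                    r, i = (r << 1) | bits[i], i + 1
                values.append((q << k) | r)
            return values

        data = [3, 0, 7, 12, 1]                                 # e.g. wavelet residuals
        assert rice_decode(rice_encode(data, 2), 2, len(data)) == data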

  1. Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates

    NASA Technical Reports Server (NTRS)

    Deane, Anil E.

    1996-01-01

    Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux Corrected Transport, and the code builds on the existing code of Zalesak and Spicer. The flow considered is a base shear flow perturbed by incoming flow. Several test cases corresponding to pressure-balanced magnetic structures with velocity shear flow and various inflows, including Alfven waves, are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry. Future versions of solwnd will consider a spherical geometry. Some discussion of this issue is presented.

  2. Contour-Based Image Compression for Fast Real-Time Coding

    NASA Astrophysics Data System (ADS)

    Vasilyev, Sergei

    A new method is proposed, based on simultaneously contouring the image content and converting the contours to a compact chained bit flow, thus providing efficient spatial image compression. It is computationally inexpensive and can be applied directly to compressing high-resolution bitonal imagery, allowing the ultimate speed performance to be approached. Combining the method with other compression schemes, for example Huffman-type or arithmetic encoding, provides better lossless compression than the current telecommunication compression standards. The application of the method to compressing color images for remote sensing and mapping, as well as a lossy implementation of the method, are discussed.

  3. Binary neutron-star mergers with Whisky and SACRA: First quantitative comparison of results from independent general-relativistic hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Baiotti, Luca; Shibata, Masaru; Yamamoto, Tetsuro

    2010-09-01

    We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the whisky code and the sacra code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular, in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.

  4. An optimal unequal error protection scheme with turbo product codes for wavelet compression of ultraspectral sounder data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Sriraja, Y.; Ahuja, Alok; Goldberg, Mitchell D.

    2006-08-01

    Most source coding techniques generate a bitstream in which different regions have unequal influence on data reconstruction. An uncorrected error in a more influential region can cause more error propagation in the reconstructed data. Given a limited bandwidth, unequal error protection (UEP) via channel coding, with different code rates for different regions of the bitstream, may yield much less error contamination than equal error protection (EEP). We propose an optimal UEP scheme that minimizes error contamination after channel and source decoding. We use JPEG2000 for source coding and a turbo product code (TPC) for channel coding as an example to demonstrate this technique with ultraspectral sounder data. Wavelet compression yields unequal significance in different wavelet resolutions. In the proposed UEP scheme, the statistics of erroneous pixels after TPC and JPEG2000 decoding are used to determine the optimal channel code rate for each wavelet resolution. The proposed UEP scheme significantly reduces the number of pixel errors when compared to its EEP counterpart. In practice, with a predefined set of implementation parameters (available channel codes, desired code rate, noise level, etc.), the optimal code rate allocation for UEP needs to be determined only once and can be done offline.

  5. Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding

    NASA Astrophysics Data System (ADS)

    Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan

    2011-01-01

    This paper proposes a low-complexity forward adaptive lossy compression algorithm that works on a frame-by-frame basis. In particular, the algorithm performs frame-by-frame analysis of the input speech signal, estimating and quantizing the gain within each frame to enable quantization by a forward adaptive piecewise linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm provides not only a higher average signal-to-quantization-noise ratio but also a reduction of the PCM bit rate by about 1 bit/sample. Moreover, the algorithm completely satisfies the G.712 standard, exceeding the curve defined by G.712 over the whole variance range. Accordingly, we can reasonably expect that our algorithm will find practical implementation in the high-quality coding of signals represented with fewer than 8 bits/sample which, like speech signals, follow a Laplacian distribution and have time-varying variances.

  6. Dynamic fission instabilities in rapidly rotating n = 3/2 polytropes - A comparison of results from finite-difference and smoothed particle hydrodynamics codes

    SciTech Connect

    Durisen, R.H.; Gingold, R.A.; Tohline, J.E.; Boss, A.P.

    1986-06-01

    The effectiveness of three different hydrodynamics models is evaluated for the analysis of the effects of fission instabilities in rapidly rotating, equilibrium flows. The instabilities arise in nonaxisymmetric Kelvin modes as the rotational energy in the flow increases, which may occur in the formation of close binary stars and planets when the fluid proto-object contracts quasi-statically. Two finite-difference, donor-cell methods and a smoothed particle hydrodynamics (SPH) code are examined, using a polytropic index of 3/2 and ratios of total rotational kinetic energy to gravitational energy of 0.33 and 0.38. The models show that dynamic bar instabilities with the 3/2 polytropic index do not yield detached binaries and multiple systems. Ejected mass and angular momentum form two trailing spiral arms that become a disk or ring around the central remnant. The SPH code yields the same results as the finite-difference codes, with less computational effort but without acceptable fluid constraints in low-density regions. Methods for improving both types of codes are discussed. 68 references.

  7. A multigroup diffusion solver using pseudo transient continuation for a radiation-hydrodynamic code with patch-based AMR

    SciTech Connect

    Shestakov, Aleksei I. Offner, Stella S.R.

    2008-01-10

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with Adaptive Mesh Refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({psi}tc). We analyze the magnitude of the {psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates
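
    The role of the Psi-tc parameter described above, namely an added 1/dt on the matrix diagonal that guarantees diagonal dominance while the pseudo-timestep is small and fades as it grows, can be illustrated on a single linear elliptic solve. A minimal numpy sketch (a 1-D diffusion operator stands in for one group's elliptic solve; the geometric step-growth schedule is an arbitrary assumption, not the paper's exact strategy):

        import numpy as np

        def ptc_solve(A, b, dt0=1e-2, ramp=2.0, tol=1e-10, max_steps=200):
            """Pseudo transient continuation for A x = b:
            solve (I/dt + A) x_new = x/dt + b, then grow dt geometrically.
            The 1/dt shift keeps the early linear systems diagonally dominant."""
            x, dt = np.zeros(len(b)), dt0
            for _ in range(max_steps):
                x_new = np.linalg.solve(A + np.eye(len(b)) / dt, b + x / dt)
                if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x_new)):
                    return x_new
                x, dt = x_new, dt * ramp
            return x

        n = 50                        # 1-D diffusion operator (tridiagonal, SPD)
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x = ptc_solve(A, np.ones(n))
        print(np.linalg.norm(A @ x - np.ones(n)))   # residual of the steady solve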

  8. A Multigroup diffusion Solver Using Pseudo Transient Continuation for a Radiaiton-Hydrodynamic Code with Patch-Based AMR

    SciTech Connect

    Shestakov, A I; Offner, S R

    2007-03-02

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({Psi}tc). We analyze the magnitude of the {Psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {Psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates

  9. A multigroup diffusion solver using pseudo transient continuation for a radiation-hydrodynamic code with patch-based AMR

    NASA Astrophysics Data System (ADS)

    Shestakov, Aleksei I.; Offner, Stella S. R.

    2008-01-01

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with Adaptive Mesh Refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate "level-solve" packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the "partial temperature" scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates the

  10. A Multigroup diffusion solver using pseudo transient continuation for a radiation-hydrodynamic code with patch-based AMR

    SciTech Connect

    Shestakov, A I; Offner, S R

    2006-09-21

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({Psi}tc). We analyze the magnitude of the {Psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {Psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates

  11. 3D Hydrodynamic Simulations with Yguazú-A Code to Model a Jet in a Galaxy Cluster

    NASA Astrophysics Data System (ADS)

    Haro-Corzo, S. A. R.; Velazquez, P.; Diaz, A.

    2009-05-01

    We present preliminary results for a galaxy's jet expanding into an intra-cluster medium (ICM). We attempt to model the jet-gas interaction and the evolution of an extragalactic collimated jet placed at the center of the computational grid, modeled as a cylinder ejecting gas in the z-axis direction with fixed velocity. It has a precession motion around the z-axis (period of 10^5 s) and an orbital motion in the XY-plane (period of 500 yr). This jet is embedded in the ICM, which is modeled as a surrounding wind in the XZ-plane. We carried out 3D hydrodynamical simulations using the Yguazú-A code. These simulations do not include radiative losses. In order to compare the numerical results with observations, we generated synthetic X-ray emission images. High-resolution X-ray observations of rich clusters of galaxies show diffuse emission with filamentary structure (sometimes called a cooling flow or X-ray filament). Radio observations show jet-like emission from the central region of the cluster. Joining these observations, in this work we explore the possibility that the jet-ambient-gas interaction leads to a filamentary morphology in the X-ray domain. We have found that the simulation considering orbital motion offers the possibility of explaining the diffuse emission observed in the X-ray domain. The circular orbital motion, in addition to the precession motion, contributes to dispersing the shocked gas, and the X-ray appearance of the 3D simulation reproduces some important details of the Abell 1795 X-ray emission (Rodriguez-Martinez et al. 2006, A&A, 448, 15): a bright bow shock (spot) at the north, where the jet interacts directly with the ICM and which is observed in the X-ray image. Meanwhile, on the south side there is no bow-shock X-ray emission, but the wake appears as an X-ray source. This wake is part of the diffuse shocked ambient gas region.

  12. Development of a Three-Dimensional PSE Code for Compressible Flows: Stability of Three-Dimensional Compressible Boundary Layers

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Jeyasingham, Samarasingham

    1999-01-01

    A program is developed to investigate the linear stability of three-dimensional compressible boundary-layer flows over bodies of revolution. The problem is formulated as a two-dimensional (2D) eigenvalue problem incorporating the meanflow variations in the normal and azimuthal directions. Normal-mode solutions are sought in the whole plane rather than along a line normal to the wall, as is done in the classical one-dimensional (1D) stability theory. The stability characteristics of a supersonic boundary layer over a sharp cone with a 5-degree half-angle at 2 degrees angle of attack are investigated. The 1D eigenvalue computations showed that the most amplified disturbances occur around x(sub 2) = 90 degrees, and the azimuthal mode numbers of the most amplified disturbances range between m = -30 and -40. The frequencies of the most amplified waves are smaller in the middle region, where the crossflow dominates the instability, than the most amplified frequencies near the windward and leeward planes. The 2D eigenvalue computations showed that, due to the variations in the azimuthal direction, the eigenmodes are clustered into isolated confined regions; for some eigenvalues, the eigenfunctions are clustered in two regions. Due to the nonparallel effect in the azimuthal direction, the most amplified disturbances are shifted to 120 degrees, compared to 90 degrees for the parallel theory. It is also observed that the nonparallel amplification rates are smaller than those obtained from the parallel theory.

  13. Finite element stress analysis of a compression mold. Final report. [Using SASL and WILSON codes

    SciTech Connect

    Watterson, C.E.

    1980-03-01

    Thermally induced stresses occurring in a compression mold during production molding were evaluated using finite element analysis. A complementary experimental stress analysis, including strain gages and thermocouple arrays, verified the finite element model under typical loading conditions.

  14. Compression and smart coding of offset and gain maps for intraoral digital x-ray sensors

    SciTech Connect

    Frosio, I.; Borghese, N. A.

    2009-02-15

    The response of indirect x-ray digital imaging sensors is often not homogeneous over the entire surface area. In this case, calibration is needed to build offset and gain maps, which are used to correct the sensor output. Sensors of the new generation are equipped with an on-board memory, which serves to store these maps. However, because of its limited size, the maps have to be compressed before being saved. This step is critical because of the extremely high compression rate required. The authors propose here a novel method to achieve such a high compression rate without degrading the quality of the sensor output. It is based on quad-tree decomposition, which performs an adaptive sampling of the offset and gain maps, matched with an RBF-based interpolation strategy. The method was tested on a typical intraoral radiographic sensor and compared with traditional compression techniques. Qualitative and quantitative results show that the method achieves a higher compression rate and produces images of superior quality. The method can also be adopted in different fields where a high compression rate is required.
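
    The adaptive-sampling half of such a scheme is straightforward to sketch. The recursive split below keeps subdividing a gain map until each block is nearly flat and stores one mean per leaf; it is an illustration only (the tolerance, synthetic map, and stopping size are assumptions, and the paper's RBF-based reconstruction step is omitted):

        import numpy as np

        def quadtree(img, x, y, w, h, tol, leaves):
            """Adaptively sample a map: split blocks until they are nearly flat."""
            block = img[y:y + h, x:x + w]
            if block.max() - block.min() <= tol or w <= 2 or h <= 2:
                leaves.append((x, y, w, h, float(block.mean())))
                return
            hw, hh = w // 2, h // 2
            for dx, dy, sw, sh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                                   (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
                quadtree(img, x + dx, y + dy, sw, sh, tol, leaves)

        gain = np.outer(np.linspace(0.9, 1.1, 128), np.linspace(0.95, 1.05, 128))
        gain[40:50, 60:70] += 0.2        # a local defect forces fine blocks there
        leaves = []
        quadtree(gain, 0, 0, 128, 128, tol=0.01, leaves=leaves)
        print(len(leaves), "leaf blocks instead of", gain.size, "pixels")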

  15. Radiation Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Mihalas, Dimitri

    Contents include: hydrodynamics; front fitting; artificial dissipation; the adaptive grid; the TITAN code; references.

  16. A New Multi-energy Neutrino Radiation-Hydrodynamics Code in Full General Relativity and Its Application to the Gravitational Collapse of Massive Stars

    NASA Astrophysics Data System (ADS)

    Kuroda, Takami; Takiwaki, Tomoya; Kotake, Kei

    2016-02-01

    We present a new multi-dimensional radiation-hydrodynamics code for massive stellar core-collapse in full general relativity (GR). Employing an M1 analytical closure scheme, we solve spectral neutrino transport of the radiation energy and momentum based on a truncated moment formalism. Regarding neutrino opacities, we take into account the baseline set of state-of-the-art simulations, in which inelastic neutrino-electron scattering, thermal neutrino production via pair annihilation, and nucleon-nucleon bremsstrahlung are included. While the Einstein field equations and the spatial advection terms in the radiation-hydrodynamics equations are evolved explicitly, the source terms due to neutrino-matter interactions and the energy shift in the radiation moment equations are integrated implicitly by an iteration method. To verify our code, we first perform a series of standard radiation tests with analytical solutions, including checks of the gravitational redshift and Doppler shift. Good agreement in these tests supports the reliability of the GR multi-energy neutrino transport scheme. We then conduct several test simulations of core collapse, bounce, and shock stall of a 15 M⊙ star in Cartesian coordinates and make a detailed comparison with published results. Our code performs quite well in reproducing the results of full Boltzmann neutrino transport, especially before bounce. In the postbounce phase our code also performs basically well; however, there are several differences that most likely come from the insufficient spatial resolution in our current 3D GR models. We discuss that, for clarifying the resolution dependence and extending the code comparison to the late postbounce phase, next-generation exaflops-class supercomputers are needed at the least.

  17. Approximate message-passing with spatially coupled structured operators, with applications to compressed sensing and sparse superposition codes

    NASA Astrophysics Data System (ADS)

    Barbier, Jean; Schülke, Christophe; Krzakala, Florent

    2015-05-01

    We study the behavior of approximate message-passing (AMP), a solver for linear sparse estimation problems such as compressed sensing, when the i.i.d. matrices, for which it has been specifically designed, are replaced by structured operators, such as Fourier and Hadamard ones. We show empirically that, after proper randomization, the structure of the operators does not significantly affect the performance of the solver. Furthermore, for some specially designed spatially coupled operators, this allows a computationally fast and memory-efficient reconstruction in compressed sensing up to the information-theoretical limit. We also show how this approach can be applied to sparse superposition codes, allowing the AMP decoder to perform at large rates for moderate block lengths.
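
    For orientation, the basic AMP iteration in its i.i.d.-matrix form (the setting the paper starts from, before introducing structured operators) is short. A minimal numpy sketch with soft thresholding; the threshold schedule and problem sizes are arbitrary assumptions:

        import numpy as np

        def soft(u, t):
            return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

        def amp(A, y, iters=30, alpha=1.5):
            """AMP for y = A x with sparse x: thresholded gradient step
            plus the Onsager correction term on the residual."""
            m, n = A.shape
            x, z = np.zeros(n), y.copy()
            for _ in range(iters):
                tau = alpha * np.sqrt(np.mean(z ** 2))      # threshold from residual
                x_new = soft(x + A.T @ z, tau)
                z = y - A @ x_new + z * (np.count_nonzero(x_new) / m)  # Onsager term
                x = x_new
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((250, 1000)) / np.sqrt(250)
        x0 = np.zeros(1000)
        x0[rng.choice(1000, 30, replace=False)] = rng.standard_normal(30)
        print(np.linalg.norm(amp(A, A @ x0) - x0))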

  18. KIVA-4: An unstructured ALE code for compressible gas flow with sprays

    NASA Astrophysics Data System (ADS)

    Torres, David J.; Trujillo, Mario F.

    2006-12-01

    The KIVA family of codes was developed to simulate the thermal and fluid processes taking place inside an internal combustion engine. In this latest version of this open-source code, KIVA-4, the numerics have been generalized to unstructured meshes. This change required modifications to the Lagrangian phase of the computations and the pressure solution, and fundamental changes in the fluxing schemes of the rezoning phase. This newest version of the code inherits all the droplet-phase capabilities and physical sub-models of previous versions. The integration of the gas-phase equations with moving solid boundaries continues to employ the successful arbitrary Lagrangian-Eulerian (ALE) methodology. Its new unstructured capability facilitates grid construction in complicated geometries and affords a higher degree of flexibility. The numerics of the code, emphasizing the new additions, are described. Various computational examples are presented, demonstrating the new capabilities of the code.

  19. Image and video compression/decompression based on human visual perception system and transform coding

    SciTech Connect

    Fu, Chi Yung; Petrich, L. I.; Lee, M.

    1997-02-01

    The quantity of information has been growing exponentially, and the form and mix of information have been shifting into the image and video areas. However, neither the storage media nor the available bandwidth can accommodate the vastly expanding requirements for image information. A vital, enabling technology here is compression/decompression. Our compression work is based on a combination of feature-based algorithms inspired by the human visual-perception system (HVS), and some transform-based algorithms (such as our enhanced discrete cosine transform and wavelet transforms), vector quantization, and neural networks. All our work was done on desktop workstations using the C++ programming language and commercially available software. During FY 1996, we explored and implemented enhanced feature-based algorithms, vector quantization, and neural-network-based compression technologies. For example, we improved the feature compression for our feature-based algorithms by a factor of two to ten, a substantial improvement. We also found some promising results when using neural networks and applying them to some video sequences. In addition, we investigated objective measures to characterize compression results, because traditional means such as the peak signal-to-noise ratio (PSNR) are not adequate to fully characterize the results, since such measures do not take into account the details of human visual perception. We have successfully used our one-year LDRD funding as seed money to explore new research ideas and concepts, and the results of this work have led us to obtain external funding from the DoD. At this point, we are seeking matching funds from DOE to match the DoD funding so that we can bring such technologies to fruition. 9 figs., 2 tabs.

  20. Hydrodynamic effects in the atmosphere of variable stars

    NASA Technical Reports Server (NTRS)

    Davis, C. G., Jr.; Bunker, S. S.

    1975-01-01

    Numerical models of variable stars are established, using a nonlinear radiative transfer coupled hydrodynamics code. The variable Eddington method of radiative transfer is used. Comparisons are for models of W Virginis, beta Doradus, and eta Aquilae. From these models it appears that shocks are formed in the atmospheres of classical Cepheids as well as W Virginis stars. In classical Cepheids, with periods from 7 to 10 days, the bumps occurring in the light and velocity curves appear as the result of a compression wave that reflects from the star's center. At the head of the outward going compression wave, shocks form in the atmosphere. Comparisons between the hydrodynamic motions in W Virginis and classical Cepheids are made. The strong shocks in W Virginis do not penetrate into the interior as do the compression waves formed in classical Cepheids. The shocks formed in W Virginis stars cause emission lines, while in classical Cepheids the shocks are weaker.

  1. Lossless compression scheme of superhigh-definition images by partially decodable Golomb-Rice code

    NASA Astrophysics Data System (ADS)

    Kato, Shigeo; Hasegawa, Madoka; Guo, Muling

    1998-12-01

    Multimedia communication systems using super-high-definition (SHD) images are widely desired in various communities, such as medical imaging, digital museums, digital libraries, and so on. There are, however, many requirements on SHD image communication systems because of the high pixel accuracy and high resolution of SHD images. We consider the mandatory functions that should be realized in SHD image application systems, summarized in three items: reversibility, scalability, and progressibility. This paper proposes an SHD image communication system based on reversibility, scalability, and progressibility. To realize reversibility and progressibility, a lossless wavelet transform coding method is introduced as the coding model. To realize scalability, a partially decodable entropy code is proposed. In particular, we focus on a partially decodable coding method for realizing the scalability function in this paper.

  2. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in the advanced composites used in the aerospace and aviation industries. However, the giant mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal, so signal-processing techniques are highly valuable in this kind of non-destructive testing. This paper presents a hybrid wavelet-filtering and phase-coded pulse-compression method to improve the SNR and output power of the received signal. The wavelet transform is utilized to filter insignificant components from the noisy ultrasonic signal, and the pulse-compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different families of wavelets (Daubechies, Symlet, and Coiflet) and decomposition levels of the discrete wavelet transform are analyzed, and different Barker codes (5-13 bits) are also analyzed to acquire a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
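
    The pulse-compression half of the scheme is easy to demonstrate: correlating a noisy record with the known Barker code concentrates the code's energy into one sharp peak. A minimal numpy sketch (the record length, code position, and noise level are arbitrary assumptions, and the wavelet pre-filtering stage is omitted):

        import numpy as np

        rng = np.random.default_rng(0)
        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

        # Noisy record with the 13-bit code buried at sample 700.
        record = rng.standard_normal(2000)
        record[700:713] += barker13

        # Pulse compression: cross-correlate the record with the known code.
        compressed = np.correlate(record, barker13, mode="same")

        peak = int(np.argmax(np.abs(compressed)))
        print(peak)                          # near 706, the center of the code
        print(np.abs(compressed[peak]) / compressed[:600].std())  # output SNR gain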

  3. The role of molecular motors in the mechanics of active gels and the effects of inertia, hydrodynamic interaction and compressibility in passive microrheology

    NASA Astrophysics Data System (ADS)

    Uribe, Andres Cordoba

    The mechanical properties of soft biological materials are essential to their physiological function and cannot easily be duplicated by synthetic materials. The study of the mechanical properties of biological materials has led to the development of new rheological characterization techniques. In the technique called passive microbead rheology, the positional autocorrelation function of a micron-sized bead embedded in a viscoelastic fluid is used to infer the dynamic modulus of the fluid. Single-particle microrheology is limited to fluids where the microstructure is much smaller than the size of the probe bead. To overcome this limitation, in two-bead microrheology the cross-correlated thermal motion of pairs of tracer particles is used to determine the dynamic modulus. Here we present a time-domain data-analysis methodology and generalized Brownian dynamics simulations to examine the effects of inertia, hydrodynamic interaction, compressibility, and non-conservative forces in passive microrheology. A type of biological material that has proven especially challenging to characterize is active gels. They are formed by semiflexible polymer filaments driven by motor proteins that convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work and motion. Active gels perform essential functions in living tissue. Here we introduce a single-chain mean-field model to describe the mechanical properties of active gels. We model the semiflexible filaments as bead-spring chains, and the molecular motors are accounted for by a mean-field approach. The level of description of the model includes the end-to-end length and attachment state of the filaments, and the motor-generated forces, as stochastic state variables which evolve according to a proposed differential Chapman-Kolmogorov equation. The model allows accounting for physics that are not available in models that have been postulated on coarser levels of description. Moreover it allows

  4. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
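
    To make the index-manipulation idea concrete, here is an illustrative toy scheme (not the patent's exact method): each quantization index is nudged to an adjacent value whose parity carries one bit of the auxiliary message, so no index moves by more than one unit.

        import numpy as np

        def embed(indices, bits):
            """Embed one bit per index by stepping to an adjacent value
            whose parity matches the bit (each index changes by at most 1)."""
            out = indices.copy()
            for i, bit in enumerate(bits):
                if out[i] % 2 != bit:
                    out[i] += 1 if out[i] <= 0 else -1
            return out

        def extract(indices, count):
            return [int(v % 2) for v in indices[:count]]

        idx = np.array([5, -2, 7, 0, 3, 8])      # quantized transform coefficients
        msg = [1, 0, 1, 1, 0, 0]
        stego = embed(idx, msg)
        assert extract(stego, len(msg)) == msg
        print(idx, stego)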

  5. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  6. Radiation hydrodynamics

    SciTech Connect

    Pomraning, G.C.

    1982-12-31

    This course was intended to provide the participant with an introduction to the theory of radiative transfer, and an understanding of the coupling of radiative processes to the equations describing compressible flow. At moderate temperatures (thousands of degrees), the role of the radiation is primarily one of transporting energy by radiative processes. At higher temperatures (millions of degrees), the energy and momentum densities of the radiation field may become comparable to or even dominate the corresponding fluid quantities. In this case, the radiation field significantly affects the dynamics of the fluid, and it is the description of this regime which is generally the charter of radiation hydrodynamics. The course provided a discussion of the relevant physics and a derivation of the corresponding equations, as well as an examination of several simplified models. Practical applications include astrophysics and nuclear weapons effects phenomena.

  7. Scaling and performance of a 3-D radiation hydrodynamics code on message-passing parallel computers: final report

    SciTech Connect

    Hayes, J C; Norman, M

    1999-10-28

    This report details an investigation into the efficacy of two approaches to solving the radiation diffusion equation within a radiation-hydrodynamic simulation. Because leading-edge scientific computing platforms have evolved from large single-node vector processors to parallel aggregates containing tens to thousands of individual CPUs, the ability of an algorithm to maintain high compute efficiency when distributed over a large array of nodes is critically important. The viability of an algorithm thus hinges upon the tripartite question of numerical accuracy, total time to solution, and parallel efficiency.

  8. Fast minimum-redundancy prefix coding for real-time space data compression

    NASA Astrophysics Data System (ADS)

    Huang, Bormin

    2007-09-01

    The minimum-redundancy prefix-free code problem is to determine an array l = {l_1, ..., l_n} of n integer codeword lengths, given an array f = {f_1, ..., f_n} of n symbol occurrence frequencies, such that the Kraft-McMillan inequality \sum_{i=1}^{n} 2^{-l_i} \le 1 holds and the total number of coded bits \sum_{i=1}^{n} f_i l_i is minimized. Previous minimum-redundancy prefix-free codes based on Huffman's greedy algorithm solve this problem in O(n) time if the input array f is sorted, but in O(n log n) time if f is unsorted. In this paper a fast algorithm is proposed to solve this problem in linear time when f is unsorted. It is suitable for real-time applications in satellite communication and consumer electronics. We also develop its VLSI architecture, which consists of four modules, namely, the frequency table builder, the codeword length table builder, the codeword table builder, and the input-to-codeword mapper.
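
    For reference, the classical O(n log n) route to the codeword lengths is Huffman's greedy merge; the paper's contribution is reaching O(n) without sorting, which the sketch below does not attempt. A minimal Python version that also checks the Kraft-McMillan inequality:

        import heapq
        from itertools import count

        def codeword_lengths(freqs):
            """Huffman codeword lengths l minimizing sum(f_i * l_i)."""
            tie = count()                       # tie-breaker so heap tuples compare
            heap = [(f, next(tie), [i]) for i, f in enumerate(freqs)]
            heapq.heapify(heap)
            lengths = [0] * len(freqs)
            while len(heap) > 1:
                f1, _, s1 = heapq.heappop(heap)
                f2, _, s2 = heapq.heappop(heap)
                for i in s1 + s2:               # symbols under a merge gain one bit
                    lengths[i] += 1
                heapq.heappush(heap, (f1 + f2, next(tie), s1 + s2))
            return lengths

        f = [45, 13, 12, 16, 9, 5]
        l = codeword_lengths(f)
        assert sum(2.0 ** -li for li in l) <= 1.0       # Kraft-McMillan inequality
        print(l, sum(fi * li for fi, li in zip(f, l)))  # lengths, total coded bits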

  9. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    NASA Technical Reports Server (NTRS)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.

  10. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
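
    The optimal bit-allocation idea discussed above has a compact high-rate approximation: each coefficient receives the average rate plus half the log-ratio of its variance to the geometric mean, with negative allocations clipped and the budget re-split. A minimal numpy sketch of that standard textbook rule (not necessarily the dissertation's exact algorithm; the variances and budget are arbitrary):

        import numpy as np

        def bit_allocation(variances, total_bits):
            """High-rate optimal allocation:
            b_i = B/n_active + 0.5 * log2(var_i / geometric_mean)."""
            var = np.asarray(variances, dtype=float)
            active = np.ones(var.size, dtype=bool)
            b = np.zeros(var.size)
            while True:
                g = np.exp(np.log(var[active]).mean())   # geometric mean
                b[active] = (total_bits / active.sum()
                             + 0.5 * np.log2(var[active] / g))
                if (b[active] >= 0).all():
                    return np.where(active, b, 0.0)
                active &= b >= 0                         # drop negative allocations
                b[:] = 0.0

        print(bit_allocation([100.0, 20.0, 4.0, 1.0, 0.25], total_bits=10))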

  11. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  12. Development of a Fast Breeder Reactor Fuel Bundle-Duct Interaction Analysis Code - BAMBOO: Analysis Model and Validation by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Tanaka, Kosuke

    2001-10-15

    To analyze wire-wrapped fast breeder reactor (FBR) fuel pin bundle deformation under bundle-duct interaction (BDI) conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. A three-dimensional beam element model is used in this code to calculate fuel pin bowing and cladding oval distortion, which are the dominant deformation mechanisms in a fuel pin bundle. In this work, the cladding oval distortion behavior, taking the wire pitch into account, was evaluated experimentally and introduced into the code analysis. The BAMBOO code was validated in this study by using an out-of-pile bundle compression testing apparatus and comparing the test results with the code results. It is concluded that BAMBOO reasonably predicts the pin-to-duct clearances in the compression tests when cladding oval distortion is treated as the suppression mechanism for BDI.

  13. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering-CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes-MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean-temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  14. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering-CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes-MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean-temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  15. SIMULATING THE COMMON ENVELOPE PHASE OF A RED GIANT USING SMOOTHED-PARTICLE HYDRODYNAMICS AND UNIFORM-GRID CODES

    SciTech Connect

    Passy, Jean-Claude; Mac Low, Mordecai-Mark; De Marco, Orsola; Fryer, Chris L.; Diehl, Steven; Rockefeller, Gabriel; Herwig, Falk; Oishi, Jeffrey S.; Bryan, Greg L.

    2012-01-01

    We use three-dimensional hydrodynamical simulations to study the rapid infall phase of the common envelope (CE) interaction of a red giant branch star of mass equal to 0.88 M{sub Sun} and a companion star of mass ranging from 0.9 down to 0.1 M{sub Sun }. We first compare the results obtained using two different numerical techniques with different resolutions, and find very good agreement overall. We then compare the outcomes of those simulations with observed systems thought to have gone through a CE. The simulations fail to reproduce those systems in the sense that most of the envelope of the donor remains bound at the end of the simulations and the final orbital separations between the donor's remnant and the companion, ranging from 26.8 down to 5.9 R{sub Sun }, are larger than the ones observed. We suggest that this discrepancy vouches for recombination playing an essential role in the ejection of the envelope and/or significant shrinkage of the orbit happening in the subsequent phase.

  16. Compressed X-ray phase-contrast imaging using a coded source

    NASA Astrophysics Data System (ADS)

    Sung, Yongjin; Xu, Ling; Nagarkar, Vivek; Gupta, Rajiv

    2014-12-01

    X-ray phase-contrast imaging (XPCI) holds great promise for medical X-ray imaging with high soft-tissue contrast. Obviating optical elements in the imaging chain, propagation-based XPCI (PB-XPCI) has definite advantages over other XPCI techniques in terms of cost, alignment and scalability. However, it imposes strict requirements on the spatial coherence of the source and the resolution of the detector. In this study, we demonstrate that using a coded X-ray source and sparsity-based reconstruction, we can significantly relax these requirements. Using numerical simulation, we assess the feasibility of our approach and study the effect of system parameters on the reconstructed image. The results are demonstrated with images obtained using a bench-top micro-focus XPCI system.

  17. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES Beta

    Laney, Daniel; Langer, Steven; Weber, Christopher; Lindstrom, Peter; Wegener, Al

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3-5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
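
    The paper's notion of a physics-based metric, as opposed to a signal-processing one, can be illustrated with a toy experiment: apply progressively harsher lossy compression to a field and watch conserved quantities rather than PSNR. A minimal numpy sketch (mantissa truncation stands in for a real compressor; the fields and bit levels are arbitrary assumptions):

        import numpy as np

        def truncate_mantissa(a, keep_bits):
            """Toy lossy 'compressor': zero the low (52 - keep_bits) mantissa
            bits of each float64 value."""
            drop = 52 - keep_bits
            mask = np.uint64(((1 << 64) - 1) ^ ((1 << drop) - 1))
            return (a.view(np.uint64) & mask).view(np.float64)

        rng = np.random.default_rng(0)
        rho = 1.0 + 0.1 * rng.standard_normal((64, 64, 64))   # toy density field
        v = rng.standard_normal((64, 64, 64))                 # toy velocity field

        for bits in (40, 20, 10):
            rho_c, v_c = truncate_mantissa(rho, bits), truncate_mantissa(v, bits)
            # Physics-motivated metrics: relative drift of total mass and of the
            # kinetic-energy integral (up to a constant factor).
            dm = abs(rho_c.sum() - rho.sum()) / rho.sum()
            de = abs((rho_c * v_c**2).sum() - (rho * v**2).sum()) / (rho * v**2).sum()
            print(bits, dm, de)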

  18. Recent Advances in the Modeling of the Transport of Two-Plasmon-Decay Electrons in the 1-D Hydrodynamic Code LILAC

    NASA Astrophysics Data System (ADS)

    Delettrez, J. A.; Myatt, J. F.; Yaakobi, B.

    2015-11-01

    The modeling of fast-electron transport in the 1-D hydrodynamic code LILAC was modified because of the addition of cross-beam energy transfer (CBET) to implosion simulations. Using the old fast-electron source model with CBET results in a shift of the peak of the hard x-ray (HXR) production from the end of the laser pulse, as observed in experiments, to earlier in the pulse. This is caused by a drop in the laser intensity at the quarter-critical surface from CBET interaction at lower densities. Data from simulations with the laser-plasma simulation environment (LPSE) code will be used to modify the source algorithm in LILAC. In addition, the transport model in LILAC has been modified to include deviations from the straight-line algorithm and non-specular reflection at the sheath, to take into account scattering from collisions and magnetic fields in the corona. Simulation results will be compared with HXR emissions from both room-temperature plastic and cryogenic target experiments. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  19. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. IV. The Neutrino Signal

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas

    2014-06-01

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄

  20. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. II. Relativistic Explosion Models of Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Marek, Andreas

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M ⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  1. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE. II. RELATIVISTIC EXPLOSION MODELS OF CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M ⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  2. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique very much used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and classical lossless algorithms give poor efficiencies in their compression. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results obtained show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm while preserving near-lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible within this algorithm (75%) can be achieved with good precision. PMID:10850759
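
    As a point of reference for the efficiencies quoted above, compression efficiency is simply (1 - compressed/original) x 100%. The sketch below computes it for one classical lossless baseline (zlib's DEFLATE coder) on a synthetic trajectory; BS-VLC itself is not reproduced here. The poor figure zlib achieves on float trajectories is exactly the weakness the abstract describes.

      import zlib
      import numpy as np

      rng = np.random.default_rng(1)
      steps = 0.01 * rng.standard_normal((1000, 3))
      traj = np.cumsum(steps, axis=0).astype(np.float32)   # smooth synthetic trajectory
      raw = traj.tobytes()
      packed = zlib.compress(raw, 9)
      print(f"efficiency = {100.0 * (1.0 - len(packed) / len(raw)):.1f}%")
      # Storing each 4-byte float in a single byte, as a byte-structured VLC
      # can approach, corresponds to the 75% maximum efficiency quoted above.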

  3. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
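
    A minimal version of the vector-quantization step is sketched below: each vector stacks the co-located pixels of all channels, a small codebook is trained with plain k-means (an assumption; the study's training procedure is not spelled out here), and the stored product is the index map plus the codebook.

      import numpy as np

      def train_codebook(vectors, k=16, iters=20, seed=0):
          """Plain k-means codebook training for vector quantization."""
          rng = np.random.default_rng(seed)
          book = vectors[rng.choice(len(vectors), k, replace=False)]
          for _ in range(iters):
              idx = np.argmin(((vectors[:, None] - book[None]) ** 2).sum(-1), axis=1)
              for j in range(k):
                  sel = vectors[idx == j]
                  if len(sel):
                      book[j] = sel.mean(axis=0)
          return book

      rng = np.random.default_rng(2)
      cube = rng.random((7, 64, 64))              # 7 channels, 64x64 scene stand-in
      vecs = cube.reshape(7, -1).T                # one 7-vector per pixel location
      book = train_codebook(vecs, k=16)
      idx = np.argmin(((vecs[:, None] - book[None]) ** 2).sum(-1), axis=1)
      recon = book[idx].T.reshape(cube.shape)
      rms = np.sqrt(((recon - cube) ** 2).mean())
      print("RMS error:", rms, "| rate: 4 bits per pixel-vector plus codebook")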

  4. Burst error performance of 3DWT-RVLC with low-density parity-check codes for ultraspectral sounder data compression

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Ahuja, Alok; Wang, Charles C.; Goldberg, Mitchell D.

    2006-08-01

    A previous study showed that 3D Wavelet Transform with Reversible Variable-Length Coding (3DWT-RVLC) has much better error resilience than JPEG2000 Part 2 against 1-bit random errors remaining after channel decoding. Errors in satellite channels may have burst characteristics. Low-density parity-check (LDPC) codes are known to have excellent error correction capability near the Shannon-limit performance. In this study, we investigate the burst error correction performance of LDPC codes via the new Digital Video Broadcasting - Second Generation (DVB-S2) standard for ultraspectral sounder data compressed by 3DWT-RVLC. We also study the error contamination after 3DWT-RVLC source decoding. Statistics show that 3DWT-RVLC produces significantly fewer erroneous pixels than JPEG2000 Part 2 for ultraspectral sounder data compression.

  5. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
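
    The core embedding idea (indices uncertain by one unit can be nudged to an adjacent value whose parity carries one auxiliary bit) fits in a few lines. This is an illustrative sketch of the principle, not the patented method's actual key-pair-table procedure.

      import numpy as np

      def embed(indices, bits):
          """Encode one bit per index as the index's parity."""
          out = indices.copy()
          for i, bit in enumerate(bits):
              if out[i] % 2 != bit:            # nudge by one unit to match parity
                  out[i] += 1 if out[i] % 2 == 0 else -1
          return out

      def extract(indices, n):
          return [int(v % 2) for v in indices[:n]]

      idx = np.array([12, 40, 7, 23, 18, 5])   # hypothetical quantizer indices
      msg = [1, 0, 1, 1, 0, 0]
      stego = embed(idx, msg)
      assert extract(stego, len(msg)) == msg
      print(idx, "->", stego)                  # each change is within one unit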

  6. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  7. A new multi-dimensional general relativistic neutrino hydrodynamics code for core-collapse supernovae. IV. The neutrino signal

    SciTech Connect

    Müller, Bernhard; Janka, Hans-Thomas E-mail: bjmuellr@mpa-garching.mpg.de

    2014-06-10

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M ⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M ⊙ as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
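
    The time-frequency diagnostic can be imitated with a short-time Fourier transform in place of the paper's wavelet analysis. The synthetic "count rate" below carries a modulation drifting from 80 Hz down toward the 40-50 Hz regime, and the spectrogram ridge tracks it; all numbers are illustrative.

      import numpy as np
      from scipy.signal import spectrogram

      fs = 1000.0                                   # 1 kHz sampling of a rate signal
      t = np.arange(0.0, 1.0, 1.0 / fs)
      f_mod = 80.0 - 40.0 * t                       # modulation drifts 80 -> 40 Hz
      phase = 2.0 * np.pi * np.cumsum(f_mod) / fs   # integrate instantaneous frequency
      rate = 1.0 + 0.3 * np.sin(phase)              # SASI-like modulated count rate
      f, tt, Sxx = spectrogram(rate - rate.mean(), fs=fs, nperseg=128, noverlap=96)
      print(f[np.argmax(Sxx, axis=0)])              # dominant frequency per time bin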

  8. Vector quantization with self-resynchronizing coding for lossless compression and rebroadcast of the NASA Geostationary Imaging Fourier Transform Spectrometer (GIFTS) data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Wei, Shih-Chieh; Huang, Hung-Lung; Smith, William L.; Bloom, Hal J.

    2008-08-01

    As part of NASA's New Millennium Program, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) is an advanced ultraspectral sounder with a 128x128 array of interferograms for the retrieval of such geophysical parameters as atmospheric temperature, moisture, and wind. With massive data volume that would be generated by future advanced satellite sensors such as GIFTS, chances are that even the state-of-the-art channel coding (e.g. Turbo codes, LDPC) with low BER might not correct all the errors. Due to the error-sensitive ill-posed nature of the retrieval problem, lossless compression with error resilience is desired for ultraspectral sounder data downlink and rebroadcast. Previously, we proposed the fast precomputed vector quantization (FPVQ) with arithmetic coding (AC) which can produce high compression gain for ground operation. In this paper we adopt FPVQ with the reversible variable-length coding (RVLC) to provide better resilience against satellite transmission errors remaining after channel decoding. The FPVQ-RVLC method is compared with the previous FPVQ-AC method for lossless compression of the GIFTS data. The experiment shows that the FPVQ-RVLC method is a significantly better tool for rebroadcast of massive ultraspectral sounder data.

  9. Progress in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.

    1998-07-01

    Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations where calculational elements are fuzzy particles which move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, thus large deformation calculations can be easily done with no connectivity complications. Interface positions are known and there are no problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural and in fact, much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, there are some problems inherent in the technique that have so far limited its usefulness. The most serious problem is the well known instability in tension leading to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced a half particle apart, leading to incorrect strain rates, accelerations and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper demonstrates solutions for these problems that the authors and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant invented by Lancaster and Salkauskas in 1981. SPH and MLS are closely related, with MLS being essentially SPH with corrected particle volumes. When formulated correctly, MLS is conservative, stable in both compression and tension, does not have the SPH boundary problems and is not sensitive to particle placement. The other approach to
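
    The uncorrected interpolation at the heart of these issues is easy to exhibit. The sketch below computes a 1D SPH density estimate with the standard cubic spline kernel; the edge deficit it prints is one of the boundary artifacts that an MLS-corrected formulation avoids (all parameters are illustrative).

      import numpy as np

      def w_cubic(q, h):
          """Standard 1D cubic spline kernel with support 2h."""
          sigma = 2.0 / (3.0 * h)
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                       np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
          return sigma * w

      def sph_density(x, m, h):
          q = np.abs(x[:, None] - x[None, :]) / h
          return (m[None, :] * w_cubic(q, h)).sum(axis=1)

      x = np.linspace(0.0, 1.0, 51)           # uniformly spaced particles
      m = np.full_like(x, 1.0 / 50.0)         # unit total mass -> density ~ 1
      rho = sph_density(x, m, h=1.5 * (x[1] - x[0]))
      print(rho[25], rho[0])                  # interior near 1; edge deficient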

  10. TRHD: Three-temperature radiation-hydrodynamics code with an implicit non-equilibrium radiation transport using a cell-centered monotonic finite volume scheme on unstructured-grids

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2015-05-01

    A three-temperature (3T), unstructured-mesh, non-equilibrium radiation hydrodynamics (RHD) code has been developed for the simulation of intense thermal radiation or high-power laser driven radiative shock hydrodynamics in two-dimensional (2D) axisymmetric geometries. The governing hydrodynamics equations are solved using a compatible unstructured Lagrangian method based on a control volume differencing (CVD) scheme. A second-order predictor-corrector (PC) integration scheme is used for the temporal discretization of the hydrodynamics equations. For the radiation energy transport, a frequency-averaged gray model is used in which the flux-limited diffusion (FLD) approximation is used to recover the free-streaming limit of radiation propagation in optically thin regions. The proposed RHD model allows the electrons and ions to have different temperatures. In addition, the electron and thermal radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies must be computed self-consistently. For this, the coupled flux-limited electron heat conduction and non-equilibrium radiation diffusion equations are solved simultaneously by using an implicit, axisymmetric, cell-centered, monotonic, nonlinear finite volume (NLFV) scheme. In this paper, we describe the details of the 2D, 3T, non-equilibrium RHD code developed along with a suite of validation test problems to demonstrate the accuracy and performance of the algorithms. We have also conducted a performance analysis with different linearity-preserving interpolation schemes that are used for the evaluation of the nodal values in the NLFV scheme. Finally, to demonstrate the full capability of the code implementation, we present the simulation of laser-driven thin aluminum (Al) foil acceleration. The simulation results are found to be in good agreement.
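
    A 1D, constant-coefficient analogue conveys the basic structure of an implicit diffusion step (the actual code's nonlinear, cell-centered finite volume scheme on unstructured meshes is far more general). Backward Euler allows time steps far beyond the explicit stability limit while the flux form conserves total energy, as the sketch checks.

      import numpy as np

      def implicit_diffusion_step(E, D, dx, dt):
          """Backward-Euler step of dE/dt = d/dx (D dE/dx), zero-flux walls."""
          n = len(E)
          r = D * dt / dx**2
          A = np.zeros((n, n))
          for i in range(n):
              left = r if i > 0 else 0.0
              right = r if i < n - 1 else 0.0
              A[i, i] = 1.0 + left + right
              if i > 0:
                  A[i, i - 1] = -left
              if i < n - 1:
                  A[i, i + 1] = -right
          return np.linalg.solve(A, E)

      E = np.zeros(100)
      E[45:55] = 1.0                          # hot band of radiation energy
      for _ in range(50):
          E = implicit_diffusion_step(E, D=1.0, dx=1.0, dt=10.0)  # dt >> explicit limit
      print(E.sum())                          # total energy conserved by the scheme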

  11. Radiation Hydrodynamics Test Problems with Linear Velocity Profiles

    SciTech Connect

    Hendon, Raymond C.; Ramsey, Scott D.

    2012-08-22

    As an extension of the works of Coggeshall and Ramsey, a class of analytic solutions to the radiation hydrodynamics equations is derived for code verification purposes. These solutions are valid under assumptions including diffusive radiation transport, a polytropic gas equation of state, constant conductivity, separable flow velocity proportional to the curvilinear radial coordinate, and divergence-free heat flux. In accordance with these assumptions, the derived solution class is mathematically invariant with respect to the presence of radiative heat conduction, and thus represents a solution to the compressible flow (Euler) equations with or without conduction terms included. With this solution class, a quantitative code verification study (using spatial convergence rates) is performed for the cell-centered, finite volume, Eulerian compressible flow code xRAGE developed at Los Alamos National Laboratory. Simulation results show near second order spatial convergence in all physical variables when using the hydrodynamics solver only, consistent with that solver's underlying order of accuracy. However, contrary to the mathematical properties of the solution class, when heat conduction algorithms are enabled the calculation does not converge to the analytic solution.
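
    For reference, the observed spatial convergence rate used in such verification studies comes from errors measured at two resolutions: p = ln(E_coarse/E_fine) / ln(h_coarse/h_fine). The error values below are hypothetical.

      import math

      def observed_order(err_coarse, err_fine, h_coarse, h_fine):
          """Observed convergence order from a grid-refinement pair."""
          return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

      # hypothetical L1 errors against the analytic solution at two resolutions
      print(observed_order(4.0e-3, 1.1e-3, 1.0 / 100, 1.0 / 200))  # ~1.86, near 2nd order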

  12. Verification of the FBR fuel bundle-duct interaction analysis code BAMBOO by the out-of-pile bundle compression test with large diameter pins

    NASA Astrophysics Data System (ADS)

    Uwaba, Tomoyuki; Ito, Masahiro; Nemoto, Junichi; Ichikawa, Shoichi; Katsuyama, Kozo

    2014-09-01

    The BAMBOO computer code was verified by results for the out-of-pile bundle compression test with large diameter pin bundle deformation under the bundle-duct interaction (BDI) condition. The pin diameters of the examined test bundles were 8.5 mm and 10.4 mm, which are targeted as preliminary fuel pin diameters for the upgraded core of the prototype fast breeder reactor (FBR) and for demonstration and commercial FBRs studied in the FaCT project. In the bundle compression test, bundle cross-sectional views were obtained from X-ray computer tomography (CT) images and local parameters of bundle deformation such as pin-to-duct and pin-to-pin clearances were measured by CT image analyses. In the verification, calculation results of bundle deformation obtained by the BAMBOO code analyses were compared with the experimental results from the CT image analyses. The comparison showed that the BAMBOO code reasonably predicts deformation of large diameter pin bundles under the BDI condition by assuming that pin bowing and cladding oval distortion are the major deformation mechanisms, the same as in the case of small diameter pin bundles. In addition, the BAMBOO analysis results confirmed that cladding oval distortion effectively suppresses BDI in large diameter pin bundles as well as in small diameter pin bundles.

  13. X-ray radiographic imaging of hydrodynamic phenomena in radiation driven materials -- shock propagation, material compression and shear flow. Revision 1

    SciTech Connect

    Hammel, B.A.; Kilkenny, J.D.; Munro, D.; Remington, B.A.; Kornblum, H.N.; Perry, T.S.; Phillion, D.W.; Wallace, R.J.

    1994-02-01

    One- and two-dimensional, time resolved x-ray radiographic imaging at high photon energy (5-7 keV) is used to study shock propagation, material motion and compression, and the effects of shear flow in solid density samples which are driven by x-ray ablation with the Nova laser. By backlighting the samples with x-rays and observing the increase in sample areal density due to shock compression, the authors directly measure the trajectory of strong shocks (~40 Mbar) in flight, in solid density plastic samples. Doping a section of the samples with high-Z material (Br) provides radiographic contrast, allowing the measurement of the shock induced particle motion. Instability growth due to shear flow at an interface is investigated by imbedding a metal wire in a cylindrical plastic sample and launching a shock in the axial direction. Time resolved radiographic measurements are made with either a slit-imager coupled to an x-ray streak camera or a pinhole camera coupled to a gated microchannel plate detector, providing ~10-μm spatial and ~100-ps temporal resolution.

  14. X-ray radiographic imaging of hydrodynamic phenomena in radiation-driven materials---Shock propagation, material compression, and shear flow

    SciTech Connect

    Hammel, B.A.; Kilkenny, J.D.; Munro, D.; Remington, B.A.; Kornblum, H.N.; Perry, T.S.; Phillion, D.W.; Wallace, R.J. )

    1994-05-01

    One- and two-dimensional, time-resolved x-ray radiographic imaging at high photon energy (5-7 keV) is used to study shock propagation, material motion and compression, and the effects of shear flow in solid density samples which are driven by x-ray ablation with the Nova laser. By backlighting the samples with x rays and observing the increase in sample areal density due to shock compression, the trajectories of strong shocks (~40 Mbars) in flight are directly measured in solid density plastic samples. Doping a section of the samples with high-Z material (Br) provides radiographic contrast, allowing a measurement of the shock-induced particle motion. Instability growth due to shear flow at an interface is investigated by imbedding a metal wire in a cylindrical plastic sample and launching a shock in the axial direction. Time-resolved radiographic measurements are made with either a slit-imager coupled to an x-ray streak camera or a pinhole camera coupled to a gated microchannel plate detector, providing ~10 μm spatial and ~100 ps temporal resolution.

  15. Skew resisting hydrodynamic seal

    DOEpatents

    Conroy, William T.; Dietle, Lannie L.; Gobeli, Jeffrey D.; Kalsi, Manmohan S.

    2001-01-01

    A novel hydrodynamically lubricated compression type rotary seal that is suitable for lubricant retention and environmental exclusion. Particularly, the seal geometry ensures constraint of a hydrodynamic seal in a manner preventing skew-induced wear and provides adequate room within the seal gland to accommodate thermal expansion. The seal accommodates large as-manufactured variations in the coefficient of thermal expansion of the sealing material, provides a relatively stiff integral spring effect to minimize pressure-induced shuttling of the seal within the gland, and also maintains interfacial contact pressure within the dynamic sealing interface in an optimum range for efficient hydrodynamic lubrication and environment exclusion. The seal geometry also provides for complete support about the circumference of the seal to receive environmental pressure, as compared to the interrupted character of seal support set forth in U.S. Pat. Nos. 5,873,576 and 6,036,192, and provides a hydrodynamic seal which is suitable for use with non-Newtonian lubricants.

  16. Development of a Fast Breeder Reactor Fuel Bundle Deformation Analysis Code - BAMBOO: Development of a Pin Dispersion Model and Verification by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Ito, Masahiro; Ukai, Shigeharu

    2004-02-15

    To analyze wire-wrapped fast breeder reactor fuel pin bundle deformation under bundle/duct interaction conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. This code uses three-dimensional beam elements to calculate fuel pin bowing and cladding oval distortion as the primary deformation mechanisms in a fuel pin bundle. Pin dispersion, which is the disarrangement of pins in a bundle that can occur during irradiation, was modeled in this code to evaluate its effect on bundle deformation. By applying the contact analysis method commonly used in the finite element method, this model considers the contact conditions at various axial positions as well as at the nodal points and can analyze the irregular arrangement of fuel pins with deviation of the wire configuration. The dispersion model was introduced in the BAMBOO code and verified using the results of the out-of-pile compression test of the bundle, where the dispersion was caused by the deviation of the wire position. The effect of the dispersion on bundle deformation was then evaluated based on the analysis results of the code.

  17. Compressible halftoning

    NASA Astrophysics Data System (ADS)

    Anderson, Peter G.; Liu, Changmeng

    2003-01-01

    We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with a compression factor of three or better. Our method involves using novel halftone mask structures which consist of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the masks as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with a highly skewed distribution, allowing Huffman compression coding techniques to be applied. This gives compression ratios in the range 3:1 to 10:1.
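
    The mask-as-sort-key trick can be demonstrated directly: threshold against a mask of non-repeating levels, then reorder the halftone bits by mask rank so ones and zeros separate into long, highly compressible runs. In this sketch zlib stands in for the Huffman coder, and a random permutation mask is a simplification of the paper's engineered dispersed-dot and clustered-dot masks.

      import zlib
      import numpy as np

      rng = np.random.default_rng(3)
      img = np.clip(rng.normal(0.55, 0.15, (64, 64)), 0.0, 1.0)        # gray image stand-in
      mask = rng.permutation(64 * 64).reshape(64, 64) / (64.0 * 64.0)  # unique thresholds
      halftone = (img > mask).astype(np.uint8)                         # printable dots

      order = np.argsort(mask, axis=None)        # receiver knows the mask: reversible
      keyed_bits = halftone.ravel()[order]       # 1s pile up at the low-threshold end
      plain = zlib.compress(np.packbits(halftone).tobytes(), 9)
      keyed = zlib.compress(np.packbits(keyed_bits).tobytes(), 9)
      print(len(plain), "->", len(keyed), "bytes after mask-key reordering")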

  18. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    NASA Technical Reports Server (NTRS)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
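
    The simple triangular predictor used as the baseline above is the three-neighbor plane predictor; a sketch, with synthetic terrain standing in for a real elevation model:

      import numpy as np

      def triangular_residuals(z):
          """Predict z[i,j] from the three causal neighbors; return corrections."""
          pred = np.zeros_like(z)
          pred[1:, 1:] = z[:-1, 1:] + z[1:, :-1] - z[:-1, :-1]  # plane through neighbors
          pred[0, 1:] = z[0, :-1]                                # causal edges
          pred[1:, 0] = z[:-1, 0]
          return z - pred

      x, y = np.meshgrid(np.arange(128), np.arange(128))
      dem = (20 * np.sin(x / 9.0) + 0.05 * y**1.5).astype(np.int32)  # synthetic terrain
      res = triangular_residuals(dem)
      print("value spread:", np.ptp(dem), "| residual spread:", np.ptp(res))
      # Huffman-coding 'res' instead of 'dem' exploits the peaked residual
      # distribution; the 8-point Lagrange-multiplier predictor sharpens it further.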

  19. Ship Hydrodynamics

    ERIC Educational Resources Information Center

    Lafrance, Pierre

    1978-01-01

    Explores in a non-mathematical treatment some of the hydrodynamical phenomena and forces that affect the operation of ships, especially at high speeds. Discusses the major components of ship resistance such as the different types of drags and ways to reduce them and how to apply those principles for the hovercraft. (GA)

  20. Radiation Hydrodynamics

    SciTech Connect

    Castor, J I

    2003-10-16

    The discipline of radiation hydrodynamics is the branch of hydrodynamics in which the moving fluid absorbs and emits electromagnetic radiation, and in so doing modifies its dynamical behavior. That is, the net gain or loss of energy by parcels of the fluid material through absorption or emission of radiation is sufficient to change the pressure of the material, and therefore change its motion; alternatively, the net momentum exchange between radiation and matter may alter the motion of the matter directly. Ignoring the radiation contributions to energy and momentum will give a wrong prediction of the hydrodynamic motion when the correct description is radiation hydrodynamics. Of course, there are circumstances when a large quantity of radiation is present, yet can be ignored without causing the model to be in error. This happens when radiation from an exterior source streams through the problem, but the latter is so transparent that the energy and momentum coupling is negligible. Everything we say about radiation hydrodynamics applies equally well to neutrinos and photons (apart from the Einstein relations, specific to bosons), but in almost every area of astrophysics neutrino hydrodynamics is ignored, simply because the systems are exceedingly transparent to neutrinos, even though the energy flux in neutrinos may be substantial. Another place where we can do ''radiation hydrodynamics'' without using any sophisticated theory is deep within stars or other bodies, where the material is so opaque to the radiation that the mean free path of photons is entirely negligible compared with the size of the system, the distance over which any fluid quantity varies, and so on. In this case we can suppose that the radiation is in equilibrium with the matter locally, and its energy, pressure and momentum can be lumped in with those of the rest of the fluid. That is, it is no more necessary to distinguish photons from atoms, nuclei and electrons, than it is to distinguish

  1. Isogeometric analysis of Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bazilevs, Y.; Akkerman, I.; Benson, D. J.; Scovazzi, G.; Shashkov, M. J.

    2013-06-01

    Isogeometric analysis of Lagrangian shock hydrodynamics is proposed. The Euler equations of compressible hydrodynamics in the weak form are discretized using Non-Uniform Rational B-Splines (NURBS) in space. The discretization has all the advantages of a higher-order method, with the additional benefits of exact symmetry preservation and better per-degree-of-freedom accuracy. An explicit, second-order accurate time integration procedure, which conserves total energy, is developed and employed to advance the equations in time. The performance of the method is examined on a set of standard 2D and 3D benchmark examples, where good quality of the computational results is attained.

  2. Bacterial Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lauga, Eric

    2016-01-01

    Bacteria predate plants and animals by billions of years. Today, they are the world's smallest cells, yet they represent the bulk of the world's biomass and the main reservoir of nutrients for higher organisms. Most bacteria can move on their own, and the majority of motile bacteria are able to swim in viscous fluids using slender helical appendages called flagella. Low-Reynolds number hydrodynamics is at the heart of the ability of flagella to generate propulsion at the micrometer scale. In fact, fluid dynamic forces impact many aspects of bacteriology, ranging from the ability of cells to reorient and search their surroundings to their interactions within mechanically and chemically complex environments. Using hydrodynamics as an organizing framework, I review the biomechanics of bacterial motility and look ahead to future challenges.

  3. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
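
    The HLL solver that the benchmark comparison turns on is compact enough to sketch for a 1D gamma-law gas. This is a generic textbook version (not GenASiS's own implementation); its two-wave estimate smears contact discontinuities, which is precisely where HLLC improves on it.

      import numpy as np

      GAMMA = 1.4

      def euler_flux(rho, u, p):
          E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
          return np.array([rho * u, rho * u**2 + p, (E + p) * u])

      def hll_flux(left, right):
          """left/right = (rho, u, p) primitive states at a cell face."""
          (rl, ul, pl), (rr, ur, pr) = left, right
          cl, cr = np.sqrt(GAMMA * pl / rl), np.sqrt(GAMMA * pr / rr)
          sl, sr = min(ul - cl, ur - cr), max(ul + cl, ur + cr)  # wave-speed bounds
          Fl, Fr = euler_flux(rl, ul, pl), euler_flux(rr, ur, pr)
          if sl >= 0.0:
              return Fl
          if sr <= 0.0:
              return Fr
          Ul = np.array([rl, rl * ul, pl / (GAMMA - 1) + 0.5 * rl * ul**2])
          Ur = np.array([rr, rr * ur, pr / (GAMMA - 1) + 0.5 * rr * ur**2])
          return (sr * Fl - sl * Fr + sl * sr * (Ur - Ul)) / (sr - sl)

      print(hll_flux((1.0, 0.0, 1.0), (0.125, 0.0, 0.1)))  # Sod-tube face flux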

  4. Lossy Text Compression Techniques

    NASA Astrophysics Data System (ADS)

    Palaniappan, Venka; Latifi, Shahram

    Most text documents contain a large amount of redundancy. Data compression can be used to minimize this redundancy and increase transmission efficiency or save storage space. Several text compression algorithms have been introduced for lossless text compression used in critical application areas. For non-critical applications, we could use lossy text compression to improve compression efficiency. In this paper, we propose three different source models for character-based lossy text compression: Dropped Vowels (DOV), Letter Mapping (LMP), and Replacement of Characters (ROC). The working principles and transformation methods associated with these methods are presented. Compression ratios obtained are included and compared. Comparisons of performance with those of the Huffman Coding and Arithmetic Coding algorithm are also made. Finally, some ideas for further improving the performance already obtained are proposed.
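
    Of the three source models, Dropped Vowels is the simplest to state precisely. A sketch follows, keeping each word's first letter (an assumption about the exact rule, which the abstract does not spell out):

      def drop_vowels(text):
          """Lossy DOV transform: delete non-leading vowels from each word."""
          out = []
          for word in text.split(" "):
              core = word[0] + "".join(c for c in word[1:] if c.lower() not in "aeiou") \
                     if word else word
              out.append(core)
          return " ".join(out)

      msg = "data compression can be used to minimize this redundancy"
      short = drop_vowels(msg)
      print(short, f"({1 - len(short) / len(msg):.0%} shorter)")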

  5. Hydrodynamic supercontinuum.

    PubMed

    Chabchoub, A; Hoffmann, N; Onorato, M; Genty, G; Dudley, J M; Akhmediev, N

    2013-08-01

    We report the experimental observation of multi-bound-soliton solutions of the nonlinear Schrödinger equation (NLS) in the context of hydrodynamic surface gravity waves. Higher-order N-soliton solutions with N=2, 3 are studied in detail and shown to be associated with self-focusing in the wave group dynamics and the generation of a steep localized carrier wave underneath the group envelope. We also show that for larger input soliton numbers, the wave group experiences irreversible spectral broadening, which we refer to as a hydrodynamic supercontinuum by analogy with optics. This process is shown to be associated with the fission of the initial multisoliton into individual fundamental solitons due to higher-order nonlinear perturbations to the NLS. Numerical simulations using an extended NLS model described by the modified nonlinear Schrödinger equation show excellent agreement with experiment and highlight the universal role that higher-order nonlinear perturbations to the NLS play in supercontinuum generation. PMID:23952405
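
    The standard numerical companion to such experiments is split-step Fourier integration of the NLS. The sketch below propagates the N=2 bound soliton q(0,x) = 2 sech(x) of i q_t + q_xx/2 + |q|^2 q = 0 and checks norm conservation; grid and step sizes are illustrative, and higher-order (modified-NLS) terms are omitted.

      import numpy as np

      n, L = 1024, 40.0
      x = np.linspace(-L / 2, L / 2, n, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
      q = 2.0 / np.cosh(x)                 # N=2 bound soliton initial condition
      dt, steps = 1e-3, 3000
      for _ in range(steps):
          q *= np.exp(1j * dt * np.abs(q) ** 2)                       # nonlinear part
          q = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(q))  # linear part
      print("norm:", (np.abs(q) ** 2).sum() * (L / n))                # ~8, conserved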

  6. Two-temperature hydrodynamics of laser-generated ultrashort shock waves in elasto-plastic solids

    NASA Astrophysics Data System (ADS)

    Ilnitsky, Denis K.; Khokhlov, Viktor A.; Inogamov, Nail A.; Zhakhovsky, Vasily V.; Petrov, Yurii V.; Khishchenko, Konstantin V.; Migdal, Kirill P.; Anisimov, Sergey I.

    2014-05-01

    Shock-wave generation by ultrashort laser pulses opens new doors for the study of hidden processes in materials that happen at atomic spatiotemporal scales. The poorly explored mechanism of shock generation starts from a short-lived two-temperature (2T) state of the solid in a thin surface layer where the laser energy is deposited. Such a 2T state represents highly non-equilibrium warm dense matter having cold ions and hot electrons with temperatures 1-2 orders of magnitude higher than the melting point. Here, for the first time, we present results obtained with our new hybrid hydrodynamics code, which combines a detailed description of 2T states with a model of elasticity and a wide-range equation of state of the solid. The new hydro-code is more accurate in the 2T stage than the molecular dynamics method because it includes electron-related phenomena: thermal conduction, electron-ion collisions and energy transfer, and electron pressure. At the same time, the new code significantly improves on our previous 2T hydrodynamics model because it is now capable of reproducing elastic compression waves, which may carry an imprint of supersonic melting as in MD simulations. With the help of the new code we have solved the difficult problem of thermally and dynamically coupling a molten layer to a uniaxially compressed elastic solid. This approach allows us to describe recent femtosecond laser experiments.
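
    The heart of any 2T stage is electron-ion temperature relaxation. A minimal sketch with constant heat capacities and coupling follows; all values are illustrative and far simpler than the wide-range material models described above.

      # Relax hot electrons against cold ions: Ce dTe/dt = -g (Te - Ti),
      # Ci dTi/dt = +g (Te - Ti).  Units and coefficients are arbitrary.
      Ce, Ci, g = 1.0, 3.0, 5.0      # heat capacities and coupling strength
      Te, Ti = 20.0, 1.0             # hot electrons, cold ions after the pulse
      dt = 1e-3
      for _ in range(2000):
          dE = g * (Te - Ti) * dt    # energy handed from electrons to ions
          Te -= dE / Ce
          Ti += dE / Ci
      print(Te, Ti)                  # both approach (Ce*20 + Ci*1)/(Ce+Ci) = 5.75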

  7. Hydrodynamic effects on coalescence.

    SciTech Connect

    Dimiduk, Thomas G.; Bourdon, Christopher Jay; Grillet, Anne Mary; Baer, Thomas A.; de Boer, Maarten Pieter; Loewenberg, Michael; Gorby, Allen D.; Brooks, Carlton, F.

    2006-10-01

    The goal of this project was to design, build and test novel diagnostics to probe the effect of hydrodynamic forces on coalescence dynamics. Our investigation focused on how a drop coalesces onto a flat surface which is analogous to two drops coalescing, but more amenable to precise experimental measurements. We designed and built a flow cell to create an axisymmetric compression flow which brings a drop onto a flat surface. A computer-controlled system manipulates the flow to steer the drop and maintain a symmetric flow. Particle image velocimetry was performed to confirm that the control system was delivering a well conditioned flow. To examine the dynamics of the coalescence, we implemented an interferometry capability to measure the drainage of the thin film between the drop and the surface during the coalescence process. A semi-automated analysis routine was developed which converts the dynamic interferogram series into drop shape evolution data.

  8. Athena3D: Flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hawley, John; Simon, Jake; Stone, James; Gardiner, Thomas; Teuben, Peter

    2015-05-01

    Written in FORTRAN, Athena3D, based on Athena (ascl:1010.014), is an implementation of a flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics. Features of the Athena3D code include compressible hydrodynamics and ideal MHD in one, two or three spatial dimensions in Cartesian coordinates; adiabatic and isothermal equations of state; 1st, 2nd or 3rd order reconstruction using the characteristic variables; and numerical fluxes computed using the Roe scheme. In addition, it offers the ability to add source terms to the equations and is parallelized based on MPI.

  9. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at the Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomenon pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented into the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air. Other fluids can be handled by the code provided that the

  10. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE. III. GRAVITATIONAL WAVE SIGNALS FROM SUPERNOVA EXPLOSION MODELS

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de

    2013-03-20

    We present a detailed theoretical analysis of the gravitational wave (GW) signal of the post-bounce evolution of core-collapse supernovae (SNe), employing for the first time relativistic, two-dimensional explosion models with multi-group, three-flavor neutrino transport based on the ray-by-ray-plus approximation. The waveforms reflect the accelerated mass motions associated with the characteristic evolutionary stages that were also identified in previous works: a quasi-periodic modulation by prompt post-shock convection is followed by a phase of relative quiescence before growing amplitudes signal violent hydrodynamical activity due to convection and the standing accretion shock instability during the accretion period of the stalled shock. Finally, a high-frequency, low-amplitude variation from proto-neutron star (PNS) convection below the neutrinosphere appears superimposed on the low-frequency trend associated with the aspherical expansion of the SN shock after the onset of the explosion. Relativistic effects in combination with detailed neutrino transport are shown to be essential for quantitative predictions of the GW frequency evolution and energy spectrum, because they determine the structure of the PNS surface layer and its characteristic g-mode frequency. Burst-like high-frequency activity phases, correlated with sudden luminosity increase and spectral hardening of electron (anti-)neutrino emission for some 10 ms, are discovered as new features after the onset of the explosion. They correspond to intermittent episodes of anisotropic accretion by the PNS in the case of fallback SNe. We find stronger signals for more massive progenitors with large accretion rates. The typical frequencies are higher for massive PNSs, though the time-integrated spectrum also strongly depends on the model dynamics.

  11. Compressive asymmetry evaluation for M-Band Radiation generated from the interaction of high energy laser and the hohlraum

    NASA Astrophysics Data System (ADS)

    Jiang, Shaoen; Huang, Yunbao; Li, Liling; Jing, Longfei; Lin, Zhiwei

    2015-11-01

    In indirect-drive inertial confinement fusion, an intense laser interacts with the high-Z materials of the hohlraum, and X-rays are generated to heat and drive the centrally located capsule. Most of the X-rays emitted from the hohlraum wall are soft x-rays, but a comparable fraction are high-energy X-rays (mainly from the M band of the wall material, >2 keV for Au), which may preheat the capsule, impose compressive asymmetry on it, and affect the final ignition result. Therefore, such preheat and compressive asymmetry need to be characterized and evaluated so that they can be restrained or controlled. In this paper, using one-dimensional multi-group radiation hydrodynamic codes and view-factor based radiation transport codes, we evaluate the compressive asymmetry on the centrally located capsule for various fractions of M-band X-rays. The results show that: 1) M-band X-rays may lead to significant compressive asymmetry even when the thermal flux is symmetric; 2) larger fractions of M-band X-rays tend to result in more compressive asymmetry; and 3) 15% of M-band X-rays may result in 50% compressive asymmetry. Based on the above analysis, such significant compressive asymmetry due to M-band radiation may decrease the compressibility of the fuel or degrade capsule performance. This motivates us to validate and measure the compressive asymmetry on the capsule in recent experiments.

  12. Flash Kα radiography of laser-driven solid sphere compression for fast ignition

    NASA Astrophysics Data System (ADS)

    Sawada, H.; Lee, S.; Shiroto, T.; Nagatomo, H.; Arikawa, Y.; Nishimura, H.; Ueda, T.; Shigemori, K.; Sunahara, A.; Ohnishi, N.; Beg, F. N.; Theobald, W.; Pérez, F.; Patel, P. K.; Fujioka, S.

    2016-06-01

    Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm2. The temporal evolution of the experimental and simulated areal densities with a 2-D radiation-hydrodynamics code is in good agreement.
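
    The radiographic inference itself is Beer-Lambert bookkeeping: a measured transmission T through the core at the backlighter energy gives rhoR = -ln(T)/kappa for a known mass attenuation coefficient kappa. The numbers below are hypothetical, chosen only to land near the quoted scale, not the experiment's calibrated values.

      import math

      def areal_density(transmission, kappa):
          """rhoR in g/cm^2 from T = exp(-kappa * rhoR); kappa in cm^2/g."""
          return -math.log(transmission) / kappa

      kappa_cd = 10.0        # hypothetical mass attenuation of CD at 4.51 keV, cm^2/g
      t_core = 0.42          # hypothetical core transmission near peak compression
      print(areal_density(t_core, kappa_cd) * 1e3, "mg/cm^2")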

  13. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
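
    In the same pedagogical spirit (though not pyro's own API), the smoother at the heart of a multigrid solver takes only a few lines: Jacobi relaxation for u'' = f with homogeneous Dirichlet walls.

      import numpy as np

      def jacobi(f, dx, iters):
          """Jacobi sweeps for u'' = f with u = 0 at both walls."""
          u = np.zeros_like(f)
          for _ in range(iters):
              u[1:-1] = 0.5 * (u[2:] + u[:-2] - dx**2 * f[1:-1])
          return u

      nx = 64
      x = np.linspace(0.0, 1.0, nx)
      f = -np.pi**2 * np.sin(np.pi * x)      # exact solution is u = sin(pi x)
      u = jacobi(f, x[1] - x[0], iters=5000)
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())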

  14. A hydrodynamic approach to cosmology - Methodology

    NASA Technical Reports Server (NTRS)

    Cen, Renyue

    1992-01-01

    The present study describes an accurate and efficient hydrodynamic code for evolving self-gravitating cosmological systems. The hydrodynamic code is a flux-based mesh code originally designed for engineering hydrodynamical applications. A variety of checks were performed which indicate that the resolution of the code is a few cells, providing accuracy for integral energy quantities in the present simulations of 1-3 percent over the whole runs. Six species (H I, H II, He I, He II, He III, and e) are tracked separately, and relevant ionization and recombination processes, as well as line and continuum heating and cooling, are computed. The background radiation field is simultaneously determined in the range 1 eV to 100 keV, allowing for absorption, emission, and cosmological effects. It is shown how the inevitable numerical inaccuracies can be estimated and to some extent overcome.

  15. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
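
    Double delta coding itself is compact: encode the difference of successive differences, which clusters near zero for smoothly varying scan lines. The sketch below omits the background-skipping refinement; the first two output values carry the start-up state.

      import numpy as np

      def double_delta(samples):
          d1 = np.diff(samples, prepend=0)   # first differences
          return np.diff(d1, prepend=0)      # differences of differences

      def undo_double_delta(dd):
          return np.cumsum(np.cumsum(dd))

      row = np.array([100, 102, 104, 107, 110, 113, 115, 117])
      dd = double_delta(row)
      assert np.array_equal(undo_double_delta(dd), row)
      print(dd)   # small after the two start-up values; cheap to entropy-code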

  16. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  17. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  18. Compression ratio effect on methane HCCI combustion

    SciTech Connect

    Aceves, S. M.; Pitz, W.; Smith, J. R.; Westbrook, C.

    1998-09-29

    We have used the HCT (Hydrodynamics, Chemistry and Transport) chemical kinetics code to simulate HCCI (homogeneous charge compression ignition) combustion of methane-air mixtures. HCT is applied to explore the ignition timing, burn duration, NOx production, gross indicated efficiency and gross IMEP of a supercharged engine (3 atm intake pressure) with 14:1, 16:1 and 18:1 compression ratios at 1200 rpm. HCT has been modified to incorporate the effect of heat transfer and to calculate the temperature that results from mixing the recycled exhaust with the fresh mixture. This study uses a single control volume reaction zone that varies as a function of crank angle. The ignition process is controlled by adjusting the intake equivalence ratio and the residual gas trapping (RGT). RGT is internal exhaust gas recirculation which recycles both thermal energy and combustion product species. Adjustment of equivalence ratio and RGT is accomplished by varying the timing of the exhaust valve closure in either 2-stroke or 4-stroke engines. Inlet manifold temperature is held constant at 300 K. Results show that, for each compression ratio, there is a range of operational conditions that show promise of achieving the control necessary to vary power output while keeping indicated efficiency above 50% and NOx levels below 100 ppm. HCT results are also compared with a set of recent experimental data for natural gas.

  19. Supernova hydrodynamics experiments using the Nova laser

    SciTech Connect

    Remington, B.A.; Glendinning, S.G.; Estabrook, K.; Wallace, R.J.; Rubenchik, A.; Kane, J.; Arnett, D.; Drake, R.P.; McCray, R.

    1997-04-01

    We are developing experiments using the Nova laser to investigate two areas of physics relevant to core-collapse supernovae (SN): (1) compressible nonlinear hydrodynamic mixing and (2) radiative shock hydrodynamics. In the former, we are examining the differences between the 2D and 3D evolution of the Rayleigh-Taylor instability, an issue critical to the observables emerging from SN in the first year after exploding. In the latter, we are investigating the evolution of a colliding plasma system relevant to the ejecta-stellar wind interactions of the early stages of SN remnant formation. The experiments and astrophysical implications are discussed.

  20. Hydrodynamics from Landau initial conditions

    SciTech Connect

    Sen, Abhisek; Gerhard, Jochen; Torrieri, Giorgio; Read jr, Kenneth F.; Wong, Cheuk-Yin

    2015-01-01

    We investigate ideal hydrodynamic evolution, with Landau initial conditions, both in a semi-analytical 1+1D approach and in a numerical code incorporating event-by-event variation with many events and transverse density inhomogeneities. The object of the calculation is to test how fast a Landau initial condition would transition to a commonly used boost-invariant expansion. We show that the transition to boost-invariant flow occurs too late for realistic setups, with corrections of O(20-30%) expected at freezeout for most scenarios. Moreover, the deviation from boost-invariance is correlated with both transverse flow and elliptic flow, with the more highly transversely flowing regions also showing the most violation of boost invariance. Therefore, if longitudinal flow is not fully developed at the early stages of heavy ion collisions, 2+1 dimensional hydrodynamics is inadequate to extract transport coefficients of the quark-gluon plasma. Based on [1, 2

  1. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve more similar results to the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.
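
    The refinement criterion at issue is Jeans-length resolution: refine any cell whose width dx covers too little of lambda_J = c_s sqrt(pi/(G rho)), a Truelove-style condition. The numbers below are illustrative, not the paper's setup.

      import numpy as np

      G = 6.674e-8                      # gravitational constant, cgs

      def needs_refinement(rho, c_s, dx, n_cells=8):
          """Refine if fewer than n_cells cells span the local Jeans length."""
          lam_j = c_s * np.sqrt(np.pi / (G * rho))
          return lam_j < n_cells * dx

      rho = 2.0e-21                     # g/cm^3, dense spiral-arm gas (illustrative)
      c_s = 6.0e4                       # cm/s, ~0.6 km/s effective sound speed
      dx = 3.0e18                       # cm, roughly a 1 pc cell
      print(needs_refinement(rho, c_s, dx))   # True -> add a refinement level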

  2. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal with an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
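
    As an illustration of how "a few succinct rules" can regenerate a complex structure, the sketch below decodes the attractor of a three-map iterated function system by the chaos game. The maps, point count, and grid size are illustrative choices, not the authors' scheme.

      import random

      # Three contractive affine maps (the Sierpinski IFS); each halves
      # distances, so iteration converges to a unique fractal attractor.
      MAPS = [
          lambda x, y: (0.5 * x,        0.5 * y),
          lambda x, y: (0.5 * x + 0.5,  0.5 * y),
          lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
      ]

      def render(n_points=50000, size=256):
          """Chaos-game decoding: randomly iterate the maps; the visited
          points sample the attractor that the three rules encode."""
          grid = [[0] * size for _ in range(size)]
          x, y = random.random(), random.random()
          for i in range(n_points):
              x, y = random.choice(MAPS)(x, y)
              if i > 20:   # discard the transient before convergence
                  grid[int(y * (size - 1))][int(x * (size - 1))] = 1
          return grid

      print(sum(map(sum, render())), "pixels lit by 3 affine rules")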

  3. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitational and rotational forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  4. Noiseless Coding Of Magnetometer Signals

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Lee, Jun-Ji

    1989-01-01

    Report discusses application of noiseless data-compression coding to digitized readings of spaceborne magnetometers for transmission back to Earth. The objective of such coding is to increase efficiency by decreasing the transmission rate without sacrificing the integrity of the data. Adaptive coding compresses data by factors ranging from 2 to 6.
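
    The report's adaptive coder itself is not reproduced here; the sketch below shows the core of a Rice (Golomb power-of-two) code applied to first-difference residuals, the kind of noiseless technique such coders are built on. The zig-zag mapping and the parameter k = 2 are illustrative.

      def zigzag(v):
          # Interleave signs so small-magnitude residuals get small codes:
          # 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4
          return (v << 1) if v >= 0 else ((-v << 1) - 1)

      def rice_encode(residuals, k=2):
          """Each value: unary quotient, a '0' terminator, k remainder bits."""
          out = []
          for v in residuals:
              u = zigzag(v)
              q, r = u >> k, u & ((1 << k) - 1)
              out.append("1" * q + "0" + (format(r, "0%db" % k) if k else ""))
          return "".join(out)

      # First sample stored verbatim; successive differences are Rice-coded.
      samples = [100, 101, 103, 102, 102, 104]
      diffs = [b - a for a, b in zip(samples, samples[1:])]
      bits = rice_encode(diffs)
      print(len(bits), "coded bits vs", 8 * len(diffs), "uncoded")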

  5. Hydrodynamics of micropipette aspiration.

    PubMed Central

    Drury, J L; Dembo, M

    1999-01-01

    The dynamics of human neutrophils during micropipette aspiration are frequently analyzed by approximating these cells as simple slippery droplets of viscous fluid. Here, we present computations that reveal the detailed predictions of the simplest and most idealized case of such a scheme; namely, the case where the fluid of the droplet is homogeneous and Newtonian, and the surface tension of the droplet is constant. We have investigated the behavior of this model as a function of surface tension, droplet radius, viscosity, aspiration pressure, and pipette radius. In addition, we have tabulated a dimensionless factor, M, which can be utilized to calculate the apparent viscosity of the slippery droplet. Computations were carried out using a low Reynolds number hydrodynamics transport code based on the finite-element method. Although idealized and simplistic, we find that the slippery droplet model predicts many observed features of neutrophil aspiration. However, there are certain features that are not observed in neutrophils. In particular, the model predicts dilation of the membrane past the point of being continuous, as well as a reentrant jet at high aspiration pressures. PMID:9876128

  6. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  7. Lossless compression of medical images using Hilbert scan

    NASA Astrophysics Data System (ADS)

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang

    2007-12-01

    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of intensities, the pixels of a medical image are decorrelated with differential pulse code modulation (DPCM); the error image is then rearranged using a Hilbert scan, and finally five coding schemes are applied: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that DPCM followed by a Hilbert scan and compression with the arithmetic coding scheme gives the best result, and also indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.
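
    A minimal sketch of the reordering step (my own construction; the entropy-coding back ends are omitted): map the scan index to Hilbert coordinates, then take first differences along the scan, so that 2-D locality is preserved in the 1-D stream.

      def d2xy(n, d):
          """Hilbert scan: map index d to (x, y) on an n x n grid, n a power of 2."""
          x = y = 0
          s, t = 1, d
          while s < n:
              rx = 1 & (t // 2)
              ry = 1 & (t ^ rx)
              if ry == 0:                      # rotate/reflect the quadrant
                  if rx == 1:
                      x, y = s - 1 - x, s - 1 - y
                  x, y = y, x
              x, y = x + s * rx, y + s * ry
              t //= 4
              s *= 2
          return x, y

      def hilbert_dpcm(image):
          """DPCM along the Hilbert scan: neighbours on the curve are
          neighbours in the image, so differences cluster near zero."""
          n = len(image)
          prev, out = 0, []
          for d in range(n * n):
              x, y = d2xy(n, d)
              out.append(image[y][x] - prev)
              prev = image[y][x]
          return out

      img = [[(x + y) * 8 for x in range(8)] for y in range(8)]
      print(max(abs(e) for e in hilbert_dpcm(img)[1:]))  # small residuals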

  8. Hydrodynamical evolution of coalescing binary neutron stars

    NASA Technical Reports Server (NTRS)

    Rasio, Frederic A.; Shapiro, Stuart L.

    1992-01-01

    The hydrodynamics of the final merging of two neutron stars and the corresponding gravitational wave emission is studied in detail. Various test calculations are presented, including the compressible Roche and Darwin problems and the head-on collision of two polytropes. A complete coalescence calculation is presented for the simplest case of two identical neutron stars, represented by Gamma = 2 polytropes, in a circular orbit, with their spins aligned and synchronized with the orbital rotation.

  9. ECG data compression by modeling.

    PubMed Central

    Madhukar, B.; Murthy, I. S.

    1992-01-01

    This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940

  10. Inertial-Fusion-Related Hydrodynamic Instabilities in a Spherical Gas Bubble Accelerated by a Planar Shock Wave

    SciTech Connect

    Niederhaus, John; Ranjan, Devesh; Anderson, Mark; Oakley, Jason; Bonazza, Riccardo; Greenough, Jeff

    2005-05-15

    Experiments studying the compression and unstable growth of a dense spherical bubble in a gaseous medium subjected to a strong planar shock wave (2.8 < M < 3.4) are performed in a vertical shock tube. The test gas is initially contained in a free-falling spherical soap-film bubble, and the shocked bubble is imaged using planar laser diagnostics. Concurrently, simulations are carried out using a compressible hydrodynamics code in r-z axisymmetric geometry. Experiments and computations indicate the formation of characteristic vortical structures in the post-shock flow, due to Richtmyer-Meshkov and Kelvin-Helmholtz instabilities, and smaller-scale vortices due to secondary effects. Inconsistencies between experimental and computational results are examined, and the usefulness of the current axisymmetric approach is evaluated.

  11. Design of Fiber Optic Sensors for Measuring Hydrodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lyons, Donald R.; Quiett, Carramah; Griffin, DeVon (Technical Monitor)

    2001-01-01

    The science of optical hydrodynamics involves relating the optical properties to the fluid dynamic properties of a hydrodynamic system. Fiber-optic sensors are being designed for measuring the hydrodynamic parameters of various systems. As a flowing fluid makes an encounter with a flat surface, it forms a boundary layer near this surface. The region between the boundary layer and the flat plate contains information about parameters such as viscosity, compressibility, pressure, density, and velocity. An analytical model has been developed for examining the hydrodynamic parameters near the surface of a fiber-optic sensor. An analysis of the conservation of momentum, the continuity equation, and the Navier-Stokes equation for compressible flow was used to develop expressions for the velocity and the density as functions of the distance along the flow and above the surface. When examining the flow near the surface, these expressions are used to estimate the sensitivity required to perform direct optical measurements and to derive the shear force for indirect optical measurements. The derivation of this result permits the incorporation of better design parameters for other fiber-based sensors. Future work includes analyzing the optical parametric designs of fiber-optic sensors, modeling sensors to utilize the parameters for hydrodynamics, and applying the analysis to different hydrodynamic flow mixtures. Finally, the fabrication of fiber-optic sensors for hydrodynamic flow applications of the type described in this presentation could enhance aerospace, submarine, and medical technology.
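
    For reference, the governing relations invoked are the compressible continuity and Navier-Stokes equations, here in standard form for constant shear viscosity \mu and bulk viscosity \mu_B (with \rho the density, \mathbf{u} the velocity, and p the pressure):

      \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0,
      \qquad
      \rho \left( \frac{\partial \mathbf{u}}{\partial t}
        + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
        = -\nabla p + \mu \nabla^{2}\mathbf{u}
        + \left( \mu_B + \tfrac{\mu}{3} \right) \nabla (\nabla \cdot \mathbf{u}).

    Near-wall expansions of these equations yield the velocity and density profiles from which the shear force on the sensor face is estimated.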

  12. Hydrodynamic simulations with the Godunov smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Murante, G.; Borgani, S.; Brunino, R.; Cha, S.-H.

    2011-10-01

    We present results based on an implementation of the Godunov smoothed particle hydrodynamics (GSPH), originally developed by Inutsuka, in the GADGET-3 hydrodynamic code. We first review the derivation of the GSPH discretization of the equations of momentum and energy conservation, starting from the convolution of these equations with the interpolating kernel. The two most important aspects of the numerical implementation of these equations are (a) the appearance of fluid velocity and pressure obtained from the solution of the Riemann problem between each pair of particles, and (b) the absence of an artificial viscosity term. We carry out three different controlled hydrodynamical three-dimensional tests, namely the Sod shock tube, the development of Kelvin-Helmholtz instabilities in a shear-flow test and the 'blob' test describing the evolution of a cold cloud moving against a hot wind. The results of our tests confirm and extend in a number of aspects those recently obtained by Cha, Inutsuka & Nayakshin: (i) GSPH provides a much improved description of contact discontinuities, with respect to smoothed particle hydrodynamics (SPH), thus avoiding the appearance of spurious pressure forces; (ii) GSPH is able to follow the development of gas-dynamical instabilities, such as the Kelvin-Helmholtz and the Rayleigh-Taylor ones; (iii) as a result, GSPH describes the development of curl structures in the shear-flow test and the dissolution of the cold cloud in the 'blob' test. Besides comparing the results of GSPH with those from standard SPH implementations, we also discuss in detail the effect on the performances of GSPH of changing different aspects of its implementation: choice of the number of neighbours, accuracy of the interpolation procedure to locate the interface between two fluid elements (particles) for the solution of the Riemann problem, order of the reconstruction for the assignment of variables at the interface, choice of the limiter to prevent oscillations of
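
    The pairwise Riemann solution is the heart of GSPH. As a minimal stand-in for the iterative exact solver the method uses, an HLL approximate solver for the 1D Euler equations with simple wave-speed bounds looks like this (the adiabatic index is an illustrative choice):

      import math

      GAMMA = 1.4  # illustrative ideal-gas index

      def euler_flux(rho, u, p):
          """Physical flux of the 1D Euler equations for state (rho, u, p)."""
          E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
          return (rho * u, rho * u * u + p, (E + p) * u)

      def hll_flux(left, right):
          """HLL approximate Riemann flux between a left and right state."""
          (rl, ul, pl), (rr, ur, pr) = left, right
          cl, cr = math.sqrt(GAMMA * pl / rl), math.sqrt(GAMMA * pr / rr)
          sl, sr = min(ul - cl, ur - cr), max(ul + cl, ur + cr)  # wave-speed bounds
          fl, fr = euler_flux(rl, ul, pl), euler_flux(rr, ur, pr)
          if sl >= 0.0:
              return fl
          if sr <= 0.0:
              return fr
          Ul = (rl, rl * ul, pl / (GAMMA - 1.0) + 0.5 * rl * ul * ul)  # conserved states
          Ur = (rr, rr * ur, pr / (GAMMA - 1.0) + 0.5 * rr * ur * ur)
          return tuple((sr * f_l - sl * f_r + sl * sr * (u_r - u_l)) / (sr - sl)
                       for f_l, f_r, u_l, u_r in zip(fl, fr, Ul, Ur))

      # Sod shock tube initial states, the first test carried out in the paper.
      print(hll_flux((1.0, 0.0, 1.0), (0.125, 0.0, 0.1)))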

  13. EEG data compression techniques.

    PubMed

    Antoniol, G; Tonella, P

    1997-02-01

    In this paper, electroencephalograph (EEG) and Holter EEG data compression techniques which allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits one to achieve significant reduction in the space required to store signals and in transmission time. The Huffman coding technique in conjunction with derivative computation reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. By exploiting this result a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, discrete cosine transform (DCT), and repetition count compression methods. Finally, it is shown that the adoption of a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEGs in a compressed format. PMID:9214790
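
    A compact sketch of the derivative-plus-Huffman stage (heap-based code construction; the collapsed-tree and real-time details of the paper are omitted):

      import heapq
      from collections import Counter

      def huffman_code(symbols):
          """Build a prefix code from symbol frequencies: repeatedly merge
          the two least frequent subtrees, prepending a bit at each merge."""
          heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(Counter(symbols).items())]
          heapq.heapify(heap)
          tick = len(heap)  # tie-breaker so dicts are never compared
          while len(heap) > 1:
              f1, _, c1 = heapq.heappop(heap)
              f2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + b for s, b in c1.items()}
              merged.update({s: "1" + b for s, b in c2.items()})
              heapq.heappush(heap, [f1 + f2, tick, merged])
              tick += 1
          return heap[0][2]

      # Derivative computation concentrates the histogram near zero,
      # which is what makes the Huffman stage effective.
      signal = [10, 11, 11, 12, 11, 11, 10, 10]
      diffs = [b - a for a, b in zip(signal, signal[1:])]
      code = huffman_code(diffs)
      print(code, sum(len(code[d]) for d in diffs), "bits for", len(diffs), "samples")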

  14. Shock Propagation and Instability Structures in Compressed Silica Aerogels

    SciTech Connect

    Howard, W M; Molitoris, J D; DeHaven, M R; Gash, A E; Satcher, J H

    2002-05-30

    We have performed a series of experiments examining shock propagation in low density aerogels. High-pressure ({approx}100 kbar) shock waves are produced by detonating high explosives. Radiography is used to obtain time-sequence imaging of the shocks as they enter and traverse the aerogel. We compress the aerogel by impinging shock waves on either one or both sides of an aerogel slab. The shock wave initially transmitted to the aerogel is very narrow and flat, but disperses and curves as it propagates. Optical images of the shock front reveal the initial formation of a hot dense region that cools and evolves into a well-defined microstructure. Structures observed in the shock front are examined in the framework of hydrodynamic instabilities generated as the shock traverses the low-density aerogel. The primary features of shock propagation are compared to simulations, which also include modeling the detonation of the high explosive, with a 2-D arbitrary Lagrangian-Eulerian hydrodynamics code. The code includes a detailed thermochemical equation of state and rate-law kinetics. We will present an analysis of the data from the time resolved imaging diagnostics and form a consistent picture of the shock transmission, propagation and instability structure.

  15. Supernova-relevant hydrodynamic instability experiment on the Nova laser

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Castor, J.; Rubenchik, A.; Berning, M.

    1996-02-12

    Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. On quite a separate front, the detrimental effect of hydrodynamic instabilities in inertial confinement fusion (ICF) has long been known. Tools from both areas are being tested on a common project. At Lawrence Livermore National Laboratory (LLNL), the Nova Laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using hydrodynamics codes at the Laboratory, and astrophysical codes successfully used to model the hydrodynamics of supernovae. A two-layer package composed of Cu and CH{sub 2} with a single mode sinusoidal 1D perturbation at the interface, shocked by indirect laser drive from the Cu side of the package, produced significant Rayleigh-Taylor (RT) growth in the nonlinear regime. The scale and gross structure of the growth were successfully modeled, by mapping an early-time simulation done with 1D HYADES, a radiation transport code, into 2D CALE, a LLNL hydrodynamics code. The HYADES result was also mapped in 2D into the supernova code PROMETHEUS, which was also able to reproduce the scale and gross structure of the growth.

  16. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice are used interchangeably.

  17. Image compression using constrained relaxation

    NASA Astrophysics Data System (ADS)

    He, Zhihai

    2007-01-01

    In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels: the pixels must satisfy a set of imaging constraints to form a natural image. Therefore, one of the major tasks in image representation and coding is to encode these imaging constraints efficiently. The proposed data representation and image compression method not only achieves more efficient data compression than state-of-the-art H.264 intra-frame coding, but also provides much more resilience to wireless transmission errors, with an internal error-correction capability.

  18. Applications of the computer codes FLUX2D and PHI3D for the electromagnetic analysis of compressed magnetic field generators and power flow channels

    SciTech Connect

    Hodgdon, M.L.; Oona, H.; Martinez, A.R.; Salon, S.; Wendling, P.; Krahenbuhl, L.; Nicolas, A.; Nicolas, L.

    1989-01-01

    We present herein the results of three electromagnetic field problems for compressed magnetic field generators and their associated power flow channels. The first problem is the computation of the transient magnetic field in a two-dimensional model of a helical generator during loading. The second problem is the three-dimensional eddy current patterns in a section of an armature beneath a bifurcation point of a helical winding. Our third problem is the calculation of the three-dimensional electrostatic fields in a region known as the post-hole convolute, in which a rod connects the inner and outer walls of a system of three concentric cylinders through a hole in the middle cylinder. While analytic solutions exist for many electromagnetic field problems in cases of special and ideal geometries, the solutions of these and similar problems for the proper analysis and design of compressed magnetic field generators and their related hardware require computer simulations. In earlier studies, computer models have been proposed, several based on research-oriented hydrocodes to which uncoupled or partially coupled Maxwell's equations solvers are added. Although the hydrocode models address the problem of moving, deformable conductors, they are not useful for electromagnetic analysis, nor can they be considered design tools. For our studies, we take advantage of the commercial electromagnetic computer-aided design software packages FLUX2D and PHI3D, which were developed for the motor manufacturing and utility industries. 4 refs., 6 figs.

  19. File Compression and Expansion of the Genetic Code by the use of the Yin/Yang Directions to find its Sphered Cube

    PubMed Central

    Castro-Chavez, Fernando

    2014-01-01

    Objective The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient “Book of Changes” or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. Methods The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64-hexagrams of the ancient I Ching, as if they were the 64-codons or words of the genetic code. Results In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids, to the directionality of their resulting blocks of codons grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from Humans versus Neanderthal, as well as to Negadi’s work on the importance of the number 384 within the genetic code. Conclusions Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid, displayed the two long (binary number one) and older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger and broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the genetic code and its plasticity was helpful to compare compatible sequences (human compatible to human versus neanderthal compatible to neanderthal), while further exploring the wondrous biodiversity of nature for

  20. Hydrodynamic effects in proteins

    NASA Astrophysics Data System (ADS)

    Szymczak, Piotr; Cieplak, Marek

    2011-01-01

    Experimental and numerical results pertaining to flow-induced effects in proteins are reviewed. Special emphasis is placed on shear-induced unfolding and on the role of solvent mediated hydrodynamic interactions in the conformational transitions in proteins.

  1. Hydrodynamic effects in proteins.

    PubMed

    Szymczak, Piotr; Cieplak, Marek

    2011-01-26

    Experimental and numerical results pertaining to flow-induced effects in proteins are reviewed. Special emphasis is placed on shear-induced unfolding and on the role of solvent mediated hydrodynamic interactions in the conformational transitions in proteins. PMID:21406855

  3. Absolutely lossless compression of medical images.

    PubMed

    Ashraf, Robina; Akbar, Muhammad

    2005-01-01

    Data in medical images is very large, and therefore compression is essential for storage and/or transmission of these images. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In the approach, an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as the lossy compressor, while Huffman coding is used for lossless compression. Image quality is evaluated by comparison with available standard compression techniques. PMID:17281110
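
    The two-stage scheme is easy to sketch. Below, uniform quantization and zlib stand in for the paper's neural-network vector quantizer and Huffman coder; they are illustrative substitutes, not the authors' components:

      import zlib
      import numpy as np

      def compress(image, step=16):
          """Lossy stage (coarse quantization, a stand-in for the NNVQ)
          plus a losslessly coded residual, so reconstruction is exact."""
          lossy = (image // step) * step + step // 2     # quantized approximation
          residual = image.astype(np.int16) - lossy      # small-amplitude error image
          return lossy, zlib.compress(residual.tobytes())

      def decompress(lossy, payload):
          residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
          return (lossy + residual.reshape(lossy.shape)).astype(np.uint8)

      rng = np.random.default_rng(0)
      img = rng.normal(128, 20, (64, 64)).clip(0, 255).astype(np.uint8)
      lossy, payload = compress(img)
      assert np.array_equal(decompress(lossy, payload), img)   # bit-exact
      print(len(payload), "residual bytes for", img.size, "pixels")

    Because the quantizer absorbs most of the signal energy, the residual has low entropy and the combined stream is smaller than coding the raw image losslessly.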

  4. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
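
    The filling step can be sketched directly. Here plain Jacobi relaxation with periodic wrapping stands in for the patent's multi-grid Laplace solver; it converges to the same kind of smooth fill, only far more slowly:

      import numpy as np

      def laplace_fill(values, known, n_iter=2000):
          """Relax Laplace's equation: every unknown pixel tends to the
          average of its four neighbours while known (edge) pixels stay
          fixed."""
          f = values.astype(float).copy()
          for _ in range(n_iter):
              avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                            np.roll(f, 1, 1) + np.roll(f, -1, 1))
              f = np.where(known, values, avg)
          return f

      # Two known intensities; the fill interpolates smoothly between them.
      vals = np.zeros((8, 8))
      known = np.zeros((8, 8), dtype=bool)
      vals[2, 2], known[2, 2] = 200.0, True
      vals[5, 5], known[5, 5] = 50.0, True
      print(round(laplace_fill(vals, known)[3, 3], 1))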

  5. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  6. RECENT RESULTS OF RADIATION HYDRODYNAMICS AND TURBULENCE EXPERIMENTS IN CYLINDRICAL GEOMETRY.

    SciTech Connect

    Magelssen G. R.; Scott, J. M.; Batha, S. H.; Holmes, R. L.; Lanier, N. E.; Tubbs, D. L.; Elliott, N. E.; Dunne, A. M.; Rothman, S.; Parker, K. W.; Youngs, D.

    2001-01-01

    Cylindrical implosion experiments at the University of Rochester laser facility, OMEGA, were performed to study radiation hydrodynamics and compressible turbulence in convergent geometry. Laser beams were used to directly drive a cylinder with either a gold (Au) or dichloropolystyrene (C6H8Cl2) marker layer placed between a solid CH ablator and a foam cushion. When the cylinder is imploded, the Richtmyer-Meshkov instability and convergence cause the marker layer to increase in thickness. Marker thickness measurements were made by x-ray backlighting along the cylinder axis. Experimental results of the effect of surface roughness will be presented. Computational results with an AMR code are in good agreement with the experimental results from targets with the roughest surface. Computational results suggest that marker layer 'end effects' and bowing increase the effective thickness of the marker layer at lower levels of roughness.

  7. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction through prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations. High pixel count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field
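
    The decompressive-inference step is an l1-regularized inversion. A minimal sketch, with a random Gaussian matrix standing in for the holographic forward operator:

      import numpy as np

      def ista(A, y, lam=0.02, n_iter=500):
          """Iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1,
          the sparsity-enforcing inversion behind decompressive inference."""
          L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = x - A.T @ (A @ x - y) / L          # gradient step on the data term
              x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
          return x

      rng = np.random.default_rng(1)
      A = rng.normal(size=(64, 256)) / 8.0           # 64 measurements, 256 unknowns
      x_true = np.zeros(256)
      x_true[rng.choice(256, 5, replace=False)] = 1.0
      x_hat = ista(A, A @ x_true)
      print("recovered support:", sorted(np.argsort(-np.abs(x_hat))[:5]))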

  8. Hydrodynamic simulations of recurrent novae

    NASA Astrophysics Data System (ADS)

    Starrfield, S.; Sparks, W. M.; Truran, J. W.; Sion, E. M.

    1984-12-01

    Simulations of the 1979 outburst of the recurrent nova U Scorpii using a Lagrangian, hydrodynamic computer code which incorporates accretion in the evolution to the outburst are discussed. Three evolutionary sequences were computed in an attempt to understand the very rapid outburst and short recurrence time of this most unusual nova. It is now possible to reproduce the CNO composition of the ejected material, the light curve, the amount of ejected material, and the kinetic energy of the ejecta. The best sequence studied involved accretion of solar-composition material onto a 1.38 solar-mass white dwarf at a rate of 1.6 x 10^-8 solar masses per year.

  9. Environmental Fluid Dynamics Code

    EPA Science Inventory

    The Environmental Fluid Dynamics Code (EFDC) is a state-of-the-art hydrodynamic model that can be used to simulate aquatic systems in one, two, and three dimensions. It has evolved over the past two decades to become one of the most widely used and technically defensible hydrodynamic models.

  10. Scaling Laws for Hydrodynamically Equivalent Implosions

    NASA Astrophysics Data System (ADS)

    Murakami, Masakatsu

    2001-10-01

    The EPOC (equivalent physics of confinement) scenario for the proof of principle of high gain inertial confinement fusion is presented, where the key concept "hydrodynamically equivalent implosions" plays a crucial role. Scaling laws for the target and confinement parameters are derived by applying Lie group analysis to the chain of partial differential equations (PDEs) of the hydrodynamic system. It turns out that the conventional scaling law based on the adiabatic approximation differs significantly from one that takes energy transport effects such as electron heat conduction into account. Confinement plasma parameters of the hot spot, such as the central temperature and the areal mass density at peak compression, are obtained with a self-similar solution for spherical implosions.

  11. Impact of hydrodynamics on oral biofilm strength.

    PubMed

    Paramonova, E; Kalmykowa, O J; van der Mei, H C; Busscher, H J; Sharma, P K

    2009-10-01

    Mechanical removal of oral biofilms is ubiquitously accepted as the best way to prevent caries and periodontal diseases. Removal effectiveness strongly depends on biofilm strength. To investigate the influence of hydrodynamics on oral biofilm strength, we grew single- and multi-species biofilms of Streptococcus oralis J22, Actinomyces naeslundii TV14-J1, and full dental plaque at shear rates ranging from 0.1 to 50 1/sec and measured their compressive strength. Subsequently, biofilm architecture was evaluated by confocal laser scanning microscopy. Multi-species biofilms were stronger than single-species biofilms, with strength values ranging from 6 to 51 Pa and from 5 to 17 Pa, respectively. In response to increased hydrodynamic shear, biofilm strength decreased, and architecture changed from uniform carpet-like to more "fluffy" with higher thickness. S. oralis biofilms grown under variable shear of 7 and 50 1/sec possessed properties intermediate of those measured at the respective single shears. PMID:19783800

  12. Mosaic image compression

    NASA Astrophysics Data System (ADS)

    Chaudhari, Kapil A.; Reeves, Stanley J.

    2005-02-01

    Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.

  13. Hybrid magneto-hydrodynamic simulation of a driven FRC

    SciTech Connect

    Rahman, H. U.; Wessel, F. J.; Binderbauer, M. W.; Qerushi, A.; Rostoker, N.; Conti, F.; Ney, P.

    2014-03-15

    We simulate a field-reversed configuration (FRC), produced by an “inductively driven” FRC experiment; comprised of a central-flux coil and exterior-limiter coil. To account for the plasma kinetic behavior, a standard 2-dimensional magneto-hydrodynamic code is modified to preserve the azimuthal, two-fluid behavior. Simulations are run for the FRC's full-time history, sufficient to include: acceleration, formation, current neutralization, compression, and decay. At start-up, a net ion current develops that modifies the applied-magnetic field forming closed-field lines and a region of null-magnetic field (i.e., a FRC). After closed-field lines form, ion-electron drag increases the electron current, canceling a portion of the ion current. The equilibrium is lost as the total current eventually dissipates. The time evolution and magnitudes of the computed current, ion-rotation velocity, and plasma temperature agree with the experiments, as do the rigid-rotor-like, radial-profiles for the density and axial-magnetic field [cf. Conti et al. Phys. Plasmas 21, 022511 (2014)].

  14. Resurgence in extended hydrodynamics

    NASA Astrophysics Data System (ADS)

    Aniceto, Inês; Spaliński, Michał

    2016-04-01

    It has recently been understood that the hydrodynamic series generated by the Müller-Israel-Stewart theory is divergent and that this large-order behavior is consistent with the theory of resurgence. Furthermore, it was observed that the physical origin of this is the presence of a purely damped nonhydrodynamic mode. It is very interesting to ask whether this picture persists in cases where the spectrum of nonhydrodynamic modes is richer. We take the first step in this direction by considering the simplest hydrodynamic theory which, instead of the purely damped mode, contains a pair of nonhydrodynamic modes of complex conjugate frequencies. This mimics the pattern of black brane quasinormal modes which appear on the gravity side of the AdS/CFT description of N = 4 supersymmetric Yang-Mills plasma. We find that the resulting hydrodynamic series is divergent in a way consistent with resurgence and precisely encodes information about the nonhydrodynamic modes of the theory.

  15. Scaling supernova hydrodynamics to the laboratory

    SciTech Connect

    Kane, J.O.

    1999-06-01

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al., Astrophys. J. 478, L75 (1997). The Nova laser is used to shock two-layer targets, producing Richtmyer-Meshkov (RM) and Rayleigh-Taylor (RT) instabilities at the interfaces between the layers, analogous to instabilities seen at the interfaces of SN 1987A. Because the hydrodynamics in the laser experiments at intermediate times (3-40 ns) and in SN 1987A at intermediate times (5 s-10{sup 4} s) are well described by the Euler equations, the hydrodynamics scale between the two regimes. The experiments are modeled using the hydrodynamics codes HYADES and CALE, and the supernova code PROMETHEUS, thus serving as a benchmark for PROMETHEUS. Results of the experiments and simulations are presented. Analysis of the spike and bubble velocities in the experiment using potential flow theory and a modified Ott thin shell theory is presented. A numerical study of 2D vs. 3D differences in instability growth at the O-He and He-H interfaces of SN 1987A, and the design for analogous laser experiments, are presented. We discuss further work to incorporate more features of the SN in the experiments, including spherical geometry, multiple layers and density gradients. Past and ongoing work in laboratory and laser astrophysics is reviewed, including experimental work on supernova remnants (SNRs). A numerical study of RM instability in SNRs is presented.
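
    The scaling between regimes rests on the invariance of the Euler equations under the transformation below (the standard Ryutov-type similarity, stated here for reference): choosing the free factors a, b and c maps nanoseconds of laser-target evolution onto hours of supernova evolution, provided both systems remain well described by the Euler equations.

      \mathbf{x} \to a\,\mathbf{x}, \quad
      \rho \to b\,\rho, \quad
      p \to c\,p, \quad
      t \to a\sqrt{b/c}\;t, \quad
      \mathbf{v} \to \sqrt{c/b}\;\mathbf{v}.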

  16. Indirect-drive pre-compression of CH coated cone-in-shell target with guiding wire for fast ignition

    NASA Astrophysics Data System (ADS)

    Zhou, Weimin; Gu, Yuqiu; Shan, Lianqiang; Zhang, Baohan

    2013-10-01

    Compared with the central ignition scheme of laser fusion, fast ignition separates compression from ignition, and thus can relax the requirements on implosion symmetry and driver energy. The Research Center of Laser Fusion has begun related experimental research on fast ignition at the SHENGUANG II laser facility. A small-scale cone-in-shell target with guiding wire for fast ignition was pre-compressed indirectly by the eight SHENGUANG II 260 J/2 ns/3ω laser beams, since beam smoothing was not yet available. To minimize mixing of the compressed fuel with the high-Z vapor produced by the M-line emission from the gold hohlraum, a 3 μm CH foil was coated on the full outer surface of the cone and guiding wire. The maximum density of the compressed cone-in-shell target 1.3 ns after the lasers' irradiation of the inside wall of the hohlraum is about 5.0 g/cm^3, and the implosion velocity is close to 1.9 x 10^7 cm/s, in good agreement with simulations using a two-dimensional radiation hydrodynamics code. Experimental and simulation results also demonstrated that the coated CH foil could minimize the mixing effectively. With an appropriate design, the target can remain robust until maximum compression, that is, through the time while the hot electrons produced by the ignition laser pulse deposit energy in the compressed fuel.

  17. Consistent Hydrodynamics for Phase Field Crystals.

    PubMed

    Heinonen, V; Achim, C V; Kosterlitz, J M; Ying, See-Chen; Lowengrub, J; Ala-Nissila, T

    2016-01-15

    We use the amplitude expansion in the phase field crystal framework to formulate an approach where the fields describing the microscopic structure of the material are coupled to a hydrodynamic velocity field. The model is shown to reduce to the well-known macroscopic theories in appropriate limits, including compressible Navier-Stokes and wave equations. Moreover, we show that the dynamics proposed allows for long wavelength phonon modes and demonstrate the theory numerically showing that the elastic excitations in the system are relaxed through phonon emission. PMID:26824543

  18. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
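
    The epsilon-pseudospectrum referred to reduces to a map of the smallest singular value of (zI - A) over the complex plane. A small sketch on a toy non-normal matrix (illustrative only, not the Couette or boundary-layer operators of the paper):

      import numpy as np

      def sigma_min_map(A, re, im):
          """Smallest singular value of (zI - A) on a grid of complex z;
          tiny values far from the eigenvalues flag a sensitive spectrum."""
          n = A.shape[0]
          out = np.empty((len(im), len(re)))
          for i, y in enumerate(im):
              for j, x in enumerate(re):
                  out[i, j] = np.linalg.svd((x + 1j * y) * np.eye(n) - A,
                                            compute_uv=False)[-1]
          return out

      # A non-normal test matrix: every eigenvalue is 0, yet the
      # 0.01-pseudospectrum covers essentially the whole grid.
      A = 5.0 * np.diag(np.ones(9), 1)
      grid = np.linspace(-2.0, 2.0, 41)
      print((sigma_min_map(A, grid, grid) < 1e-2).mean())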

  19. Astronomical context coder for image compression

    NASA Astrophysics Data System (ADS)

    Pata, Petr; Schindler, Jaromir

    2015-10-01

    Recent lossless still image compression formats are powerful tools for the compression of all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, for a compression algorithm to be successful, it has to take full advantage of the coded image's properties. Astronomical data form a special class of images with, among general image properties, some specific characteristics that are unique. If a new coder is able to exploit the knowledge of these special properties correctly, it should achieve superior performance on this specific class of images, at least in terms of compression ratio. In this work, a novel lossless astronomical image data compression method will be presented. The achievable compression ratio of this new coder will be compared to the theoretical lossless compression limit and also to recent compression standards from astronomy and general multimedia.

  20. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance; however, it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic-model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding. The proposed mechanism improves coding performance under various application conditions. PMID:26999741

  1. Synchronization via Hydrodynamic Interactions

    NASA Astrophysics Data System (ADS)

    Kendelbacher, Franziska; Stark, Holger

    2013-12-01

    An object moving in a viscous fluid creates a flow field that influences the motion of neighboring objects. We review examples from nature in the microscopic world where such hydrodynamic interactions synchronize beating or rotating filaments. Bacteria propel themselves using a bundle of rotating helical filaments called flagella which have to be synchronized in phase. Other micro-organisms are covered with a carpet of smaller filaments called cilia on their surfaces. They beat highly synchronized so that metachronal waves propagate along the cell surfaces. We explore both examples with the help of simple model systems and identify generic properties for observing synchronization by hydrodynamic interactions.

  2. Two algorithms for compressing noise like signals

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Akopian, David

    2005-05-01

    Compression is a technique used to encode data so that it needs less storage/memory space. Compression of random data is vital in cases where we need to preserve data that has low redundancy and whose power spectrum is close to noise. In the case of the noisy signals used in various data hiding schemes, the data have low redundancy and a low energy spectrum; upon compression with lossy algorithms, the low energy spectrum might get lost. Since LSB plane data has low redundancy, lossless compression algorithms like run-length, Huffman, and arithmetic coding are ineffective at providing a good compression ratio. These problems motivated the development of a new class of compression algorithms for noisy signals. In this paper, we introduce two new compression techniques that compress random, noise-like data with reference to a known pseudo-noise sequence generated using a key. In addition, we develop a representation model for digital media based on pseudo-noise signals. In simulations we compare our methods with existing compression techniques such as run-length coding, showing that run-length cannot compress random data while the proposed algorithms can. Furthermore, the proposed algorithms can be extended to all kinds of random data used in various applications.
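
    A sketch of the reference idea (my own minimal construction, not the authors' algorithms): a keyed LFSR generates the pseudo-noise reference, and the XOR difference, which is highly redundant whenever the data tracks the reference, is handed to a conventional coder:

      import zlib

      def pn_stream(seed, n):
          """Keyed pseudo-noise bytes from a 16-bit LFSR (taps 16,14,13,11);
          any keyed generator known to both sides would do."""
          state, out = seed & 0xFFFF, bytearray()
          for _ in range(n):
              byte = 0
              for _ in range(8):
                  bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                  state = (state >> 1) | (bit << 15)
                  byte = (byte << 1) | (state & 1)
              out.append(byte)
          return bytes(out)

      def compress_noise(data, key):
          """XOR against the keyed reference: wherever the data matches the
          reference the difference is zero-filled and highly redundant, so
          a conventional coder finally gets traction."""
          diff = bytes(a ^ b for a, b in zip(data, pn_stream(key, len(data))))
          return zlib.compress(diff)

      noisy = pn_stream(0xACE1, 4096)     # data identical to the reference
      print(len(compress_noise(noisy, 0xACE1)), "bytes from 4096")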

  3. Combined effects of laser and non-thermal electron beams on hydrodynamics and shock formation in the Shock Ignition scheme

    NASA Astrophysics Data System (ADS)

    Nicolai, Ph.; Feugeas, J. L.; Touati, M.; Breil, J.; Dubroca, B.; Nguyen-Buy, T.; Ribeyre, X.; Tikhonchuk, V.; Gus'kov, S.

    2014-10-01

    An issue to be addressed in Inertial Confinement Fusion (ICF) is the detailed description of the kinetic transport of relativistic or non-thermal electrons generated by laser within the time and space scales of the imploded target hydrodynamics. We have developed at CELIA the model M1, a fast and reduced kinetic model for relativistic electron transport. The latter has been implemented into the 2D radiation hydrodynamic code CHIC. In the framework of the Shock Ignition (SI) scheme, it has been shown in simplified conditions that the energy transferred by the non-thermal electrons from the corona to the compressed shell of an ICF target could be an important mechanism for the creation of ablation pressure. Nevertheless, in realistic configurations, taking the density profile and the electron energy spectrum into account, the target has to be carefully designed to avoid deleterious effects on compression efficiency. In addition, the electron energy deposition may modify the laser-driven shock formation and its propagation through the target. The non-thermal electron effects on the shock propagation will be analyzed in a realistic configuration.

  4. Coded aperture computed tomography

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Brady, David J.

    2009-08-01

    Diverse physical measurements can be modeled by X-ray transforms. While X-ray tomography is the canonical example, reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI) are examples of physically unrelated but mathematically equivalent sensor systems. Historically, most x-ray transform based systems sample continuous distributions and apply analytical inversion processes. On the other hand, RST and CASSI generate discrete multiplexed measurements implemented with coded apertures. This multiplexing of coded measurements allows for compression of measurements from a compressed sensing perspective. Compressed sensing (CS) rests on the insight that if the object has a sparse representation in some basis, then a certain number of random projections, typically much smaller than that prescribed by the Shannon sampling rate, captures enough information for a highly accurate reconstruction of the object. This paper investigates the role of coded apertures in x-ray transform measurement systems (XTMs) in terms of data efficiency and reconstruction fidelity from a CS perspective. To conduct this, we construct a unified analysis using RST and CASSI measurement models. Also, we propose a novel compressive x-ray tomography measurement scheme which also exploits coding and multiplexing, and hence shares the analysis of the other two XTMs. Using this analysis, we perform a qualitative study on how coded apertures can be exploited to implement physical random projections by "regularizing" the measurement systems. Numerical studies and simulation results demonstrate several examples of the impact of coding.

  5. Segmentation-based CT image compression

    NASA Astrophysics Data System (ADS)

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya

    2004-04-01

    The existing image compression standards like JPEG and JPEG 2000 compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, viz. medical image compression. If the spatial characteristics of the image are considered, a more efficient coding scheme can be obtained. For example, CT reconstructed images have a uniform background outside the field of view (FOV). Even the portion within the FOV can be divided into anatomically relevant and irrelevant parts. They have distinctly different statistics; hence, coding them separately results in more efficient compression. Segmentation is done based on thresholding, and shape information is stored using an 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the first-order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and speed of decoding, Huffman coding is chosen for entropy coding. Segment-based coding has an overhead of one table per segment, but the overhead is minimal. Lossless compression based on segmentation reduced the bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same prediction coder. The segmentation-based scheme also has the advantage of natural ROI-based progressive decoding. If the diagnostically irrelevant portions may be deleted, the bit budget can go down by as much as 40%. This concept can be extended to other modalities.

  6. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental

  7. Simple Waves in Ideal Radiation Hydrodynamics

    SciTech Connect

    Johnson, B M

    2008-09-03

    In the dynamic diffusion limit of radiation hydrodynamics, advection dominates diffusion; the latter primarily affects small scales and has negligible impact on the large scale flow. The radiation can thus be accurately regarded as an ideal fluid, i.e., radiative diffusion can be neglected along with other forms of dissipation. This viewpoint is applied here to an analysis of simple waves in an ideal radiating fluid. It is shown that much of the hydrodynamic analysis carries over by simply replacing the material sound speed, pressure and index with the values appropriate for a radiating fluid. A complete analysis is performed for a centered rarefaction wave, and expressions are provided for the Riemann invariants and characteristic curves of the one-dimensional system of equations. The analytical solution is checked for consistency against a finite difference numerical integration, and the validity of neglecting the diffusion operator is demonstrated. An interesting physical result is that for a material component with a large number of internal degrees of freedom and an internal energy greater than that of the radiation, the sound speed increases as the fluid is rarefied. These solutions are an excellent test for radiation hydrodynamic codes operating in the dynamic diffusion regime. The general approach may be useful in the development of Godunov numerical schemes for radiation hydrodynamics.
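
    For the ordinary ideal-gas limit, the simple-wave relations referenced here can be verified numerically in a few lines; the sketch below (adiabatic index and fan extent chosen arbitrarily) checks that the Riemann invariant is uniform across a left-facing centered rarefaction. By the paper's argument, the same algebra applies to a radiating fluid once the sound speed, pressure, and index take their radiation-modified values.

      import numpy as np

      gamma = 1.4                       # material index (ideal-gas illustration)
      cL, uL = 1.0, 0.0                 # state ahead of a left-facing fan
      xi = np.linspace(-cL, 0.5, 200)   # self-similar coordinate x/t inside the fan

      u = 2.0 / (gamma + 1.0) * (xi + cL)    # from xi = u - c on C- characteristics
      c = cL - 0.5 * (gamma - 1.0) * u       # sound speed through the fan

      # The J+ Riemann invariant, u + 2c/(gamma - 1), is constant across the fan.
      J_plus = u + 2.0 * c / (gamma - 1.0)
      assert np.allclose(J_plus, uL + 2.0 * cL / (gamma - 1.0))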

  8. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    SciTech Connect

    Lomov, I; Pember, R; Greenough, J; Liu, B

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured-grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. The algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
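
    The recursive advance-then-synchronize cycle can be summarized structurally as below; the class and method names are hypothetical stand-ins, not the Geodyn/Raptor interfaces.

      class Level:
          """One AMR level: a set of uniform patches plus an optional finer level."""
          def __init__(self, refine_ratio=2, finer=None):
              self.refine_ratio = refine_ratio
              self.finer = finer

          def step(self, t, dt):
              pass    # advance this level's patches by dt (Godunov update)

          def synchronize(self, finer):
              pass    # reflux: correct coarse fluxes at coarse/fine boundaries

      def advance(level, t, dt):
          # Berger-Oliger recursion: the coarse level takes one step, the finer
          # level takes refine_ratio substeps to reach the same time, and the two
          # are then synchronized to remove conservation errors.
          level.step(t, dt)
          if level.finer is not None:
              sub_dt = dt / level.refine_ratio
              for k in range(level.refine_ratio):
                  advance(level.finer, t + k * sub_dt, sub_dt)
              level.synchronize(level.finer)

      advance(Level(finer=Level()), t=0.0, dt=1.0e-3)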

  9. Analytical model for ramp compression

    NASA Astrophysics Data System (ADS)

    Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun

    2016-08-01

    An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic-region width, material properties, profile of the pressure pulse, and pressure pulse duration, can be reasonably allocated or chosen. To demonstrate and study this model, laser direct-drive ramp compression experiments and code simulations are performed, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can serve as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.

  10. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show an improvement in PSNR compared to compression via uniform downsampling. PMID:27367904
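
    In one dimension, the idea of information-driven nonuniform sampling can be sketched as follows; the gradient-based information measure, sizes, and names are our assumptions, and the actual method emulates group velocity dispersion optically rather than resampling digitally.

      import numpy as np

      def context_aware_downsample(signal, n_out):
          # Sample density follows local information content, estimated here by
          # the gradient magnitude: feature-rich regions are sampled densely.
          info = np.abs(np.gradient(signal)) + 1e-3
          cdf = np.cumsum(info)
          cdf /= cdf[-1]                                  # warp function
          idx = np.searchsorted(cdf, np.linspace(0, 1, n_out))
          idx = np.unique(idx.clip(0, signal.size - 1))
          return idx, signal[idx]

      t = np.linspace(0, 1, 4000)
      sig = np.tanh(50 * (t - 0.5))                       # information sits at the edge
      idx, samples = context_aware_downsample(sig, 100)
      recon = np.interp(np.arange(sig.size), idx, samples)
      psnr = 10 * np.log10(np.ptp(sig) ** 2 / np.mean((sig - recon) ** 2))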

  11. Quad Tree Structures for Image Compression Applications.

    ERIC Educational Resources Information Center

    Markas, Tassos; Reif, John

    1992-01-01

    Presents a class of distortion-controlled vector quantizers that are capable of compressing images so they comply with certain distortion requirements. Highlights include tree-structured vector quantizers; multiresolution vector quantization; error coding vector quantizer; error coding multiresolution algorithm; and Huffman coding of the quad-tree…
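
    A distortion-controlled quad-tree split of the kind surveyed here might look like the sketch below, with the tolerance, test image, and names chosen for illustration.

      import numpy as np

      def quadtree(block, y, x, size, tol, leaves):
          # Approximate each block by its mean; keep splitting into quadrants
          # until the block's MSE meets the distortion requirement.
          mse = ((block - block.mean()) ** 2).mean()
          if mse <= tol or size == 1:
              leaves.append((y, x, size, float(block.mean())))
              return
          h = size // 2
          for dy in (0, h):
              for dx in (0, h):
                  quadtree(block[dy:dy + h, dx:dx + h], y + dy, x + dx, h, tol, leaves)

      rng = np.random.default_rng(0)
      img = np.zeros((64, 64))
      img[16:48, 16:48] = 1.0                     # flat regions separated by edges
      img += rng.normal(0, 0.01, img.shape)
      leaves = []
      quadtree(img, 0, 0, 64, tol=1e-3, leaves=leaves)
      print(len(leaves), "leaves instead of", img.size, "pixels")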

  12. Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics

    SciTech Connect

    Wijesinghe, S; Hornung, R; Garcia, A; Hadjiconstantinou, N

    2004-04-15

    We present an adaptive mesh and algorithmic refinement (AMAR) scheme for modeling multi-scale hydrodynamics. The AMAR approach extends standard conservative adaptive mesh refinement (AMR) algorithms by providing a robust flux-based method for coupling an atomistic fluid representation to a continuum model. The atomistic model is applied locally in regions where the continuum description is invalid or inaccurate, such as near strong flow gradients and at fluid interfaces, or when the continuum grid is refined to the molecular scale. The need for such "hybrid" methods arises from the fact that flows modeled with continuum representations are often under-resolved or inaccurate, while generating solutions at molecular resolution globally is not feasible. In the implementation described herein, Direct Simulation Monte Carlo (DSMC) provides an atomistic description of the flow and the compressible two-fluid Euler equations serve as our continuum-scale model. The AMR methodology provides local grid refinement while the algorithm refinement feature allows the transition to DSMC where needed. The continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. Our AMAR application code is implemented in C++ and is built upon the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) framework developed at Lawrence Livermore National Laboratory. SAMRAI provides the parallel adaptive gridding algorithm and enables the coupling between the continuum and atomistic methods.

  13. Reversible intraframe compression of medical images.

    PubMed

    Roos, P; Viergever, M A; van Dijke, M A; Peters, J H

    1988-01-01

    The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably less for MR images whose noise level is substantially higher. PMID:18230486
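
    The flavor of hierarchical interpolative decorrelation can be conveyed by a 1-D sketch (the published HINT operates on 2-D images; the signal, recursion, and names here are assumptions).

      import numpy as np

      def entropy(a):
          _, counts = np.unique(a, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def hint_1d(x):
          # Keep every other sample, predict the midpoints by the rounded mean
          # of their neighbours, and recurse on the coarse samples; the output
          # is the final coarse level plus the residuals of every level.
          if x.size <= 2:
              return [x]
          coarse = x[::2]
          pred = (coarse[:-1] + coarse[1:]) // 2
          return hint_1d(coarse) + [x[1::2] - pred]

      rng = np.random.default_rng(0)
      x = rng.normal(0, 2, 257).cumsum().astype(np.int64)  # smooth, correlated signal
      decorrelated = np.concatenate(hint_1d(x))
      print(f"raw {entropy(x):.2f} b/sample -> HINT {entropy(decorrelated):.2f} b/sample")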

  14. Supernova Hydrodynamics on the Omega Laser

    SciTech Connect

    R. Paul Drake

    2004-01-16

    The fundamental motivation for our work is that supernovae are not well understood. Recent observations have clarified the depth of our ignorance by producing observed phenomena that current theory and computer simulations cannot reproduce. Such theories and simulations involve, however, a number of physical mechanisms that have never been studied in isolation. We perform experiments, in compressible hydrodynamics and radiation hydrodynamics, relevant to supernovae and supernova remnants. These experiments produce phenomena in the laboratory that are believed, based on simulations, to be important to astrophysics but that have not been directly observed in either the laboratory or an astrophysical system. During the period of this grant, we focused on the scaling of an astrophysically relevant, radiative-precursor shock, on preliminary studies of collapsing radiative shocks, and on the multimode behavior and the three-dimensional, deeply nonlinear evolution of the Rayleigh-Taylor (RT) instability at a decelerating, embedded interface. These experiments required strong compression and decompression, strong shocks (Mach ~10 or greater), flexible geometries, and very smooth laser beams, which means that the 60-beam Omega laser is the only facility capable of carrying out this program.

  15. Hydrodynamically Lubricated Rotary Shaft Having Twist Resistant Geometry

    DOEpatents

    Dietle, Lannie; Gobeli, Jeffrey D.

    1993-07-27

    A hydrodynamically lubricated, squeeze packing-type rotary shaft seal with a cross-sectional geometry suitable for pressurized lubricant retention is provided which, in the preferred embodiment, incorporates a protuberant static sealing interface that, compared to prior art, dramatically improves the exclusionary action of the dynamic sealing interface in low pressure and unpressurized applications by achieving symmetrical deformation of the seal at the static and dynamic sealing interfaces. In abrasive environments, the improved exclusionary action results in a dramatic reduction of seal and shaft wear, compared to prior art, and provides a significant increase in seal life. The invention also increases seal life by making higher levels of initial compression possible, compared to prior art, without compromising hydrodynamic lubrication; this added compression makes the seal more tolerant of compression set, abrasive wear, mechanical misalignment, dynamic runout, and manufacturing tolerances, and also makes hydrodynamic seals with smaller cross-sections more practical. In alternate embodiments, the benefits enumerated above are achieved by cooperative configurations of the seal and the gland which achieve symmetrical deformation of the seal at the static and dynamic sealing interfaces. The seal may also be configured such that predetermined radial compression deforms it to a desired operative configuration, even though symmetrical deformation is lacking.

  16. SPHGR: Smoothed-Particle Hydrodynamics Galaxy Reduction

    NASA Astrophysics Data System (ADS)

    Thompson, Robert

    2015-02-01

    SPHGR (Smoothed-Particle Hydrodynamics Galaxy Reduction) is a Python-based, open-source framework for analyzing smoothed-particle hydrodynamics simulations. In its basic form it can run a baryonic group finder to identify galaxies and a halo finder to identify dark matter halos; it can also assign those galaxies to their respective halos, calculate halo and galaxy global properties, and iterate through previous time steps to identify the most massive progenitors of each halo and galaxy. Data about each individual halo and galaxy are collated and easy to access. SPHGR supports a wide range of simulation types, including N-body, full cosmological volumes, and zoom-in runs. Support for multiple SPH code outputs is provided by pyGadgetReader (ascl:1411.001), mainly Gadget (ascl:0003.001) and TIPSY (ascl:1111.015).

  17. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data. PMID:16948299
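
    Enumerative coding of a fixed-weight binary block, the textbook ancestor of such combinatorial coders, can be sketched as follows; this illustrates the principle only and is not the C4 implementation.

      from math import comb

      def enum_encode(bits):
          # Lexicographic rank of the block among all C(n, k) blocks of the same
          # weight k; storing (k, rank) costs about log2(C(n, k)) bits plus log2(n+1).
          n, k = len(bits), sum(bits)
          rank, ones_left = 0, k
          for i, b in enumerate(bits):
              if b:
                  rank += comb(n - i - 1, ones_left)  # blocks with a 0 here come first
                  ones_left -= 1
          return n, k, rank

      def enum_decode(n, k, rank):
          bits, ones_left = [], k
          for i in range(n):
              c = comb(n - i - 1, ones_left)          # blocks with a 0 at position i
              if ones_left and rank >= c:
                  bits.append(1)
                  rank -= c
                  ones_left -= 1
              else:
                  bits.append(0)
          return bits

      block = [0, 1, 0, 0, 1, 0, 0, 1]
      assert enum_decode(*enum_encode(block)) == block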

  18. Hydrodynamic shock wave studies within a kinetic Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Bauer, Wolfgang; Colbry, Dirk; Howell, Jim; Pickett, Rodney; Staber, Alec; Strother, Terrance

    2014-06-01

    We introduce a massively parallelized, test-particle based kinetic Monte Carlo code that is capable of modeling the phase-space evolution of an arbitrarily sized system that is free to move in and out of the continuum limit. Our code combines advantages of the DSMC and the Point of Closest Approach techniques for solving the collision integral. With that, it achieves high spatial accuracy in simulations of large particle systems while maintaining computational feasibility. Using particle mean free paths which are small with respect to the characteristic length scale of the simulated system, we reproduce hydrodynamic behavior. To demonstrate that our code can retrieve continuum solutions, we perform a test suite of classic hydrodynamic shock problems consisting of the Sod, Noh, and Sedov tests. We find that the results of our simulations, which employ millions of test particles, match the analytic solutions well. In addition, we take advantage of the ability of kinetic codes to describe matter out of the continuum regime by applying large particle mean free paths. With that, we study and compare the evolution of shock waves in the hydrodynamic limit and in a regime which is not reachable by hydrodynamic codes.
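
    For reference, a continuum solution of the Sod problem of the kind such kinetic results are compared against can be generated by a few lines of a first-order finite-volume scheme; this diffusive Lax-Friedrichs sketch, with assumed grid and CFL parameters, is not the authors' code.

      import numpy as np

      def sod_lax_friedrichs(nx=400, t_end=0.2, gamma=1.4, cfl=0.4):
          # First-order Lax-Friedrichs solver for the 1-D Euler equations on the
          # classic Sod tube (density 1/0.125, pressure 1/0.1, gas at rest).
          x = np.linspace(0.0, 1.0, nx)
          rho = np.where(x < 0.5, 1.0, 0.125)
          p = np.where(x < 0.5, 1.0, 0.1)
          u = np.zeros(nx)
          U = np.stack([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])
          dx, t = x[1] - x[0], 0.0
          while t < t_end:
              rho, m, E = U
              u = m / rho
              p = (gamma - 1) * (E - 0.5 * rho * u**2)
              c = np.sqrt(gamma * p / rho)
              dt = min(cfl * dx / np.max(np.abs(u) + c), t_end - t)  # CFL limit
              F = np.stack([m, m * u + p, (E + p) * u])              # physical fluxes
              U[:, 1:-1] = (0.5 * (U[:, 2:] + U[:, :-2])
                            - dt / (2 * dx) * (F[:, 2:] - F[:, :-2]))
              t += dt
          return x, U

      x, (rho, m, E) = sod_lax_friedrichs()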

  19. Computational brittle fracture using smooth particle hydrodynamics

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-10-01

    We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has in simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPBM. We describe a new brittle fracture model that we have implemented into SPBM. To illustrate the code's current capability, we have simulated a number of experiments, three of which are discussed in this paper. The first experiment consists of a brittle steel sphere impacting a plate; the experimental sphere fragment patterns are compared to the calculations. The second is a steel flyer plate experiment in which the recovered steel target crack patterns are compared to the calculated crack patterns. We also briefly describe a simulation of a tungsten rod impacting a heavily confined alumina target, which has recently been reported on in detail.
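
    At the core of any SPH scheme, including strength- and fracture-enhanced variants like the one described, sits the kernel density summation; below is a minimal 3-D sketch with the standard cubic spline kernel and an assumed uniform-lattice test.

      import numpy as np

      def cubic_spline_W(r, h):
          # Standard M4 cubic spline SPH kernel in 3-D, support radius 2h.
          q = r / h
          sigma = 1.0 / (np.pi * h**3)
          return sigma * np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                                  np.where(q < 2, 0.25 * (2 - q)**3, 0.0))

      def sph_density(pos, mass, h):
          # rho_i = sum_j m_j W(|r_i - r_j|, h): the SPH density summation.
          d = pos[:, None, :] - pos[None, :, :]
          r = np.sqrt((d**2).sum(-1))
          return (mass[None, :] * cubic_spline_W(r, h)).sum(axis=1)

      # Uniform lattice of unit-density material: interior particles should
      # recover rho close to 1.
      n, dx = 8, 0.1
      g = np.arange(n) * dx
      pos = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
      mass = np.full(len(pos), dx**3)        # m = rho0 * dx**3 with rho0 = 1
      rho = sph_density(pos, mass, h=1.3 * dx)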

  20. Dynamic compression of solid HMX-based explosives under ramp wave loading

    NASA Astrophysics Data System (ADS)

    Wang, G. J.; Cai, J. T.; Zhang, H. P.; Zhao, F.; Tan, F. L.; Wu, G.

    2012-11-01

    By means of the new technique of magnetically driven quasi-isentropic compression based on the compact capacitor bank facility CQ-1.5 developed in-house, the dynamic compression of two HMX-based plastic bonded explosives (PBXs) is investigated under ramp wave loading. A pressure of 5-8 GPa over 600-800 ns is realized on the explosive samples by optimizing the loading electrodes and controlling the charging voltage of CQ-1.5, and the loading strain rates vary from 10^5 1/s to 10^6 1/s along the thickness of the explosive samples. In the experiments, the particle velocities of the interface between explosive samples of different thicknesses and LiF windows are measured with a Doppler pin system (DPS) displacement interferometer to determine the material response, and the experimental compression isentropes of the explosives are obtained using backward integration and Lagrangian analysis for quasi-isentropic compression experiments; these agree with the theoretical isentropes based on a Mie-Grüneisen equation of state (EOS) and with the results of Baer. For the simulations, the one-dimensional hydrodynamics code SSS is used to analyze the dynamic process, and the calculated interface particle velocities are consistent with the experimental ones. Finally, one of the explosive constituents, the binder fluororubber F2311, is also investigated using this technique, and some of its properties under ramp wave loading are obtained.

  1. Hydrodynamics of Turning Flocks.

    PubMed

    Yang, Xingbo; Marchetti, M Cristina

    2015-12-18

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well-polarized flocks. The continuum equations, controlled by only two dimensionless parameters, orientational inertia and alignment strength, are derived by coarse-graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields anisotropic spin waves that mediate the propagation of turning information throughout the flock. The coupling of the spin-current density to the local vorticity field through a nonlinear friction gives rise to a hydrodynamic mode with angular-dependent propagation speed at long wavelengths. This mode becomes unstable as a result of the growth of bend and splay deformations augmented by the spin wave, signaling the transition to complex spatiotemporal patterns of continuously turning and swirling flocks. PMID:26722945

  2. Hydrodynamics of Turning Flocks

    NASA Astrophysics Data System (ADS)

    Yang, Xingbo; Marchetti, M. Cristina

    2015-12-01

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well-polarized flocks. The continuum equations, controlled by only two dimensionless parameters, orientational inertia and alignment strength, are derived by coarse-graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields anisotropic spin waves that mediate the propagation of turning information throughout the flock. The coupling of the spin-current density to the local vorticity field through a nonlinear friction gives rise to a hydrodynamic mode with angular-dependent propagation speed at long wavelengths. This mode becomes unstable as a result of the growth of bend and splay deformations augmented by the spin wave, signaling the transition to complex spatiotemporal patterns of continuously turning and swirling flocks.

  3. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for real-time video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high-quality playback.

  4. Fluctuations in relativistic causal hydrodynamics

    NASA Astrophysics Data System (ADS)

    Kumar, Avdhesh; Bhatt, Jitesh R.; Mishra, Ananta P.

    2014-05-01

    The formalism for calculating hydrodynamic fluctuations by applying Onsager theory to the relativistic Navier-Stokes equation is already known. In this work, we calculate hydrodynamic fluctuations within the framework of the second-order hydrodynamics of Müller, Israel and Stewart and its generalization to the third order. We also calculate the fluctuations for several other causal hydrodynamical equations. We show that the form of the Onsager coefficients and of the correlation functions remains the same as obtained from the relativistic Navier-Stokes equation and does not depend on any specific model of hydrodynamics. Further, we numerically investigate the evolution of the correlation function using the one-dimensional boost-invariant (Bjorken) flow. We compare the correlation functions obtained using causal hydrodynamics with the correlation function for the relativistic Navier-Stokes equation. We find that the qualitative behavior of the correlation functions remains the same for all models of causal hydrodynamics.

  5. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
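
    One plausible reading of the activity-driven allocation, with the estimator, segment length, and bit budget all assumed rather than taken from the brief, is:

      import numpy as np

      def allocate_bits(line, seg_len=16, n_budget=256):
          # Activity estimator per segment: mean absolute first difference.
          # Busy segments receive more bits; quiet ones tolerate truncation.
          segs = line.astype(float).reshape(-1, seg_len)
          activity = np.abs(np.diff(segs, axis=1)).mean(axis=1) + 1e-6
          bits = np.maximum(1, np.round(activity / activity.sum() * n_budget))
          return bits.astype(int)

      rng = np.random.default_rng(0)
      line = np.concatenate([np.full(64, 100.0),                  # flat region
                             100 + 40 * rng.standard_normal(64)]) # busy region
      print(allocate_bits(line))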

  6. Hydrodynamics of insect spermatozoa

    NASA Astrophysics Data System (ADS)

    Pak, On Shun; Lauga, Eric

    2010-11-01

    Microorganism motility plays important roles in many biological processes including reproduction. Many microorganisms propel themselves by propagating traveling waves along their flagella. Depending on the species, propagation of planar waves (e.g. Ceratium) and helical waves (e.g. Trichomonas) were observed in eukaryotic flagellar motion, and hydrodynamic models for both were proposed in the past. However, the motility of insect spermatozoa remains largely unexplored. An interesting morphological feature of such cells, first observed in Tenebrio molitor and Bacillus rossius, is the double helical deformation pattern along the flagella, which is characterized by the presence of two superimposed helical flagellar waves (one with a large amplitude and low frequency, and the other with a small amplitude and high frequency). Here we present the first hydrodynamic investigation of the locomotion of insect spermatozoa. The swimming kinematics, trajectories and hydrodynamic efficiency of the swimmer are computed based on the prescribed double helical deformation pattern. We then compare our theoretical predictions with experimental measurements, and explore the dependence of the swimming performance on the geometric and dynamical parameters.

  7. Hydrodynamics of fossil fishes.

    PubMed

    Fletcher, Thomas; Altringham, John; Peakall, Jeffrey; Wignall, Paul; Dorrell, Robert

    2014-08-01

    From their earliest origins, fishes have developed a suite of adaptations for locomotion in water, which determine performance and ultimately fitness. Even without data from behaviour, soft tissue and extant relatives, it is possible to infer a wealth of palaeobiological and palaeoecological information. As in extant species, aspects of gross morphology such as streamlining, fin position and tail type are optimized even in the earliest fishes, indicating similar life strategies have been present throughout their evolutionary history. As hydrodynamical studies become more sophisticated, increasingly complex fluid movement can be modelled, including vortex formation and boundary layer control. Drag-reducing riblets ornamenting the scales of fast-moving sharks have been subjected to particularly intense research, but this has not been extended to extinct forms. Riblets are a convergent adaptation seen in many Palaeozoic fishes, and probably served a similar hydrodynamic purpose. Conversely, structures which appear to increase skin friction may act as turbulisors, reducing overall drag while serving a protective function. Here, we examine the diverse adaptations that contribute to drag reduction in modern fishes and review the few attempts to elucidate the hydrodynamics of extinct forms. PMID:24943377

  8. Hydrodynamics of fossil fishes

    PubMed Central

    Fletcher, Thomas; Altringham, John; Peakall, Jeffrey; Wignall, Paul; Dorrell, Robert

    2014-01-01

    From their earliest origins, fishes have developed a suite of adaptations for locomotion in water, which determine performance and ultimately fitness. Even without data from behaviour, soft tissue and extant relatives, it is possible to infer a wealth of palaeobiological and palaeoecological information. As in extant species, aspects of gross morphology such as streamlining, fin position and tail type are optimized even in the earliest fishes, indicating similar life strategies have been present throughout their evolutionary history. As hydrodynamical studies become more sophisticated, increasingly complex fluid movement can be modelled, including vortex formation and boundary layer control. Drag-reducing riblets ornamenting the scales of fast-moving sharks have been subjected to particularly intense research, but this has not been extended to extinct forms. Riblets are a convergent adaptation seen in many Palaeozoic fishes, and probably served a similar hydrodynamic purpose. Conversely, structures which appear to increase skin friction may act as turbulisors, reducing overall drag while serving a protective function. Here, we examine the diverse adaptations that contribute to drag reduction in modern fishes and review the few attempts to elucidate the hydrodynamics of extinct forms. PMID:24943377

  9. Combining Hydrodynamic and Evolution Calculations of Rotating Stars

    NASA Astrophysics Data System (ADS)

    Deupree, R. G.

    1996-12-01

    Rotation has two primary effects on stellar evolutionary models: the direct influence on the model structure produced by the rotational terms, and the indirect influence produced by rotational instabilities which redistribute angular momentum and composition inside the model. Using a two-dimensional, fully implicit finite-difference code, I can follow events on both evolutionary and hydrodynamic timescales, thus allowing the simulation of both effects. However, there are several issues concerning how to integrate the results from hydrodynamic runs into evolutionary runs that must be examined. The schemes I have devised for the integration of the hydrodynamic simulations into evolutionary calculations are outlined, and their positive and negative features summarized. The practical differences among the various schemes are small, and a successful marriage between hydrodynamic and evolution calculations is possible.

  10. An efficient medical image compression scheme.

    PubMed

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen

    2005-01-01

    In this paper, a fast lossless compression scheme is presented for medical images. The scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of lossless JPEG can be obtained. At the same time, the method is faster than lossless JPEG 2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression. PMID:17280962

  11. Compressing subbanded image data with Lempel-Ziv-based coders

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
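
    The pipeline can be mocked up in a few lines; here zlib's LZ77 stage stands in for the Lempel-Ziv coder and is also left to absorb the zero runs, while the block size, quantizer, and test image are our assumptions.

      import zlib
      import numpy as np

      def hadamard(n):
          H = np.array([[1]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      def compress_subbanded(img, q=16):
          # Blockwise 8x8 Walsh-Hadamard transform plus coarse quantization; the
          # zero runs this creates are then squeezed out by a Lempel-Ziv coder.
          H = hadamard(8)
          out = np.empty(img.shape, dtype=np.int16)
          for y in range(0, img.shape[0], 8):
              for x in range(0, img.shape[1], 8):
                  blk = img[y:y + 8, x:x + 8].astype(np.int32) - 128
                  out[y:y + 8, x:x + 8] = (H @ blk @ H) // (64 * q)
          return zlib.compress(out.tobytes(), 9)

      rng = np.random.default_rng(0)
      img = (128 + rng.normal(0, 4, (64, 64)).cumsum(axis=1)).clip(0, 255).astype(np.uint8)
      print(len(zlib.compress(img.tobytes(), 9)), "->", len(compress_subbanded(img)))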

  12. Zombie Vortex Instability. I. A Purely Hydrodynamic Instability to Resurrect the Dead Zones of Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Marcus, Philip S.; Pei, Suyang; Jiang, Chung-Hsiang; Barranco, Joseph A.; Hassanzadeh, Pedram; Lecoanet, Daniel

    2015-07-01

    There is considerable interest in hydrodynamic instabilities in dead zones of protoplanetary disks as a mechanism for driving angular momentum transport and as a source of particle-trapping vortices to mix chondrules and incubate planetesimal formation. We present simulations with a pseudo-spectral anelastic code and with the compressible code Athena, showing that stably stratified flows in a shearing, rotating box are violently unstable and produce space-filling, sustained turbulence dominated by large vortices with Rossby numbers of order ~0.2-0.3. This Zombie Vortex Instability (ZVI) is observed in both codes and is triggered by Kolmogorov turbulence with Mach numbers less than ~0.01. It is a common view that if a given constant density flow is stable, then stable vertical stratification should make the flow even more stable. Yet, we show that sufficient vertical stratification can be unstable to ZVI. ZVI is robust and requires no special tuning of boundary conditions, or initial radial entropy or vortensity gradients (though we have studied ZVI only in the limit of infinite cooling time). The resolution of this paradox is that stable stratification allows for a new avenue to instability: baroclinic critical layers. ZVI has not been seen in previous studies of flows in rotating, shearing boxes because those calculations frequently lacked vertical density stratification and/or sufficient numerical resolution. Although we do not expect appreciable angular momentum transport from ZVI in the small domains in this study, we hypothesize that ZVI in larger domains with compressible equations may lead to angular momentum transport via spiral density waves.

  13. [Compression material].

    PubMed

    Perceau, Géraldine; Faure, Christine

    2012-01-01

    The compression of a venous ulcer is carried out with the use of bandages, and for less exudative ulcers, with socks, stockings or tights. The system of bandages is complex. Different forms of extension and therefore different types of models exist. PMID:22489428

  14. RADONE: a computer code for simulating fast-transient, one-dimensional hydrodynamic conditions and two-layer radionuclide concentrations including the effect of bed-deposition in controlled rivers and tidal estuaries

    SciTech Connect

    Eraslan, A.H.; Abdel-Razek, M.M.

    1985-05-01

    RADONE is a computer code for predicting the transient, one-dimensional transport of radionuclides in receiving water bodies. The model formulation considers the one-dimensional (cross-sectionally averaged) conservation of mass and momentum equations and the two coupled, depth-averaged radionuclide transport equations for the water layer and the bottom sediment layer. The coupling conditions incorporate bottom deposition and resuspension effects. The computer code uses a discrete-element method that offers variable river cross-section spacing, accurate representation of cross-sectional geometry, and numerical accuracy. A sample application is provided for the problem of hypothetical accidental releases and actual routine releases of radionuclides to the Hudson River.

  15. The threshold for hydrodynamic behaviour in solids under extreme compression

    NASA Astrophysics Data System (ADS)

    Bourne, N. K.

    2014-09-01

    Shock waves are known to display structure within their fronts. At lower stress amplitudes, elastic waves precede an inelastic rise to the final pressure, whilst under more extreme loading there is a single inelastic shock to peak stress. These regimes are conventionally termed weak and strong shock behaviour, and the transition stress between the two is called the weak shock limit (WSL) here. Shock speeds in an amorphous glass and an FCC metal are shown to change discontinuously as pulses of increasing peak pressure exceed this limit. Further, this work correlates the stress at the WSL with the theoretical strength of ca. 40 solids and shows a different dependence for close-packed and open structures in metals, polymers, ceramics, and ionic solids.

  16. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
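
    A toy instance built on the (7,4) Hamming code (our choice for illustration) shows the mechanics: each 7-bit source block, viewed as an error pattern, is compressed to its 3-bit syndrome, and the decoder emits the coset leader, which is exact whenever the block contains at most one 1.

      import numpy as np

      # Parity-check matrix of the (7,4) Hamming code; column i is i in binary.
      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])

      def compress(block):
          # 7 source bits -> 3 syndrome bits.
          return H @ block % 2

      def decompress(s):
          # Coset leader: a nonzero syndrome names the single position whose
          # unit vector produces it, so sparse blocks are recovered exactly.
          block = np.zeros(7, dtype=int)
          pos = 4 * s[0] + 2 * s[1] + s[2]
          if pos:
              block[pos - 1] = 1
          return block

      rng = np.random.default_rng(0)
      src = (rng.random((1000, 7)) < 0.02).astype(int)   # sparse binary source
      rec = np.array([decompress(compress(b)) for b in src])
      print("distortion:", (rec != src).mean())          # small for a sparse source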

  17. Nonlinear hydrodynamics of cosmological sheets. 1: Numerical techniques and tests

    NASA Technical Reports Server (NTRS)

    Anninos, Wenbo Y.; Norman, Michael J.

    1994-01-01

    We present the numerical techniques and tests used to construct and validate a computer code designed to study the multidimensional nonlinear hydrodynamics of large-scale sheet structures in the universe, especially the fragmentation of such structures under various instabilities. This code is composed of two codes: the hydrodynamical code ZEUS-2D and a particle-mesh code. The ZEUS-2D code solves the hydrodynamical equations in two dimensions using explicit Eulerian finite-difference techniques, with modifications made to incorporate the expansion of the universe and gas cooling due to Compton scattering, bremsstrahlung, and hydrogen and helium cooling. The particle-mesh code solves the equation of motion for the collisionless dark matter. The code uses two-dimensional Cartesian coordinates with a nonuniform grid in one direction to provide high resolution for the sheet structures. A series of one-dimensional and two-dimensional linear perturbation tests are presented which are designed to test the hydro solver and the Poisson solver with and without the expansion of the universe. We also present a radiative shock wave test which is designed to ensure the code's capability to handle radiative cooling properly. Finally, a series of one-dimensional Zel'dovich pancake tests used to test the dark matter code and the hydro solver in the nonlinear regime are discussed and compared with the results of Bond et al. (1984) and Shapiro & Struck-Marcell (1985). Overall, the code is shown to produce accurate and stable results, providing a powerful tool to further our studies.

  18. Hydrodynamic Simulations of Contact Binaries

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan; Clayton, Geoffrey C.; Frank, Juhan; Marcello, Dominic; Motl, Patrick M.; Staff, Jan E.

    2015-01-01

    The motivation for our project is the peculiar case of the "red nova" V1309 Sco, which erupted in September 2008. The progenitor was, in fact, a contact binary system. We are developing a simulation of contact binaries, so that their formation, structural, and merger properties can be studied using hydrodynamics codes. The observed transient event was the disruption of the secondary star by the primary and their subsequent merger into one star; hence, to replicate this behavior, we need a core-envelope structure for both stars. We achieve this using a combination of the Self-Consistent Field (SCF) technique and composite polytropes, also known as bipolytropes. So far we have been able to generate close binaries with various mass ratios. Another consequence of using bipolytropes is that, according to theoretical calculations, the radius of a star should expand when the core mass fraction exceeds a critical value, resulting in interesting consequences in a binary system. We present some initial results of these simulations.

  19. Impact modeling with Smooth Particle Hydrodynamics

    SciTech Connect

    Stellingwerf, R.F.; Wingate, C.A.

    1993-07-01

    Smooth Particle Hydrodynamics (SPH) can be used to model hypervelocity impact phenomena via the addition of a strength of materials treatment. SPH is the only technique that can model such problems efficiently due to the combination of 3-dimensional geometry, large translations of material, large deformations, and large void fractions for most problems of interest. This makes SPH an ideal candidate for modeling of asteroid impact, spacecraft shield modeling, and planetary accretion. In this paper we describe the derivation of the strength equations in SPH, show several basic code tests, and present several impact test cases with experimental comparisons.

  20. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  1. Low torque hydrodynamic lip geometry for rotary seals

    SciTech Connect

    Dietle, Lannie L.; Schroeder, John E.

    2015-07-21

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  2. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
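
    The group-testing side of such a protocol can be illustrated with the simple COMP decoder below; the pool design, sizes, and sparsity are arbitrary choices, not the authors' protocol.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 1000, 120, 3                     # individuals, pools, true carriers
      carriers = np.zeros(n, dtype=bool)
      carriers[rng.choice(n, size=k, replace=False)] = True

      pools = rng.random((m, n)) < 0.05          # random pooling design
      positive = (pools & carriers).any(axis=1)  # a pool is positive iff it
                                                 # contains at least one carrier

      # COMP decoding: anyone who appears in a negative pool cannot be a
      # carrier; with enough pools only the true carriers remain candidates.
      cleared = pools[~positive].any(axis=0)
      candidates = ~cleared & pools.any(axis=0)
      print(np.flatnonzero(candidates), np.flatnonzero(carriers))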

  3. Nonlinear Generalized Hydrodynamic Wave Equations in Strongly Coupled Dusty Plasmas

    SciTech Connect

    Veeresha, B. M.; Sen, A.; Kaw, P. K.

    2008-09-07

    A set of nonlinear equations for the study of low frequency waves in a strongly coupled dusty plasma medium is derived using the phenomenological generalized hydrodynamic (GH) model and is used to study the modulational stability of dust acoustic waves to parallel perturbations. Dust compressibility contributions arising from strong Coulomb coupling effects are found to introduce significant modifications in the threshold and range of the instability domain.

  4. Scaling relations in two-dimensional relativistic hydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Westernacher-Schneider, John Ryan; Lehner, Luis; Oz, Yaron

    2015-12-01

    We derive exact scaling relations for two-dimensional relativistic hydrodynamic turbulence in the inertial range of scales. We consider both the energy cascade towards large scales and the enstrophy cascade towards small scales. We illustrate these relations by numerical simulations of turbulent weakly compressible flows. Intriguingly, the fluid-gravity correspondence implies that the gravitational field in black hole/black brane spacetimes with anti-de Sitter asymptotics should exhibit similar scaling relations.

  5. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  6. Hydrodynamic modes for granular gases.

    PubMed

    Dufty, James W; Brey, J Javier

    2003-09-01

    The eigenfunctions and eigenvalues of the linearized Boltzmann equation for inelastic hard spheres (d=3) or disks (d=2) corresponding to d+2 hydrodynamic modes are calculated in the long wavelength limit for a granular gas. The transport coefficients are identified and found to agree with those from the Chapman-Enskog solution. The dominance of hydrodynamic modes at long times and long wavelengths is studied via an exactly solvable kinetic model. A collisional continuum is bounded away from the hydrodynamic spectrum, assuring a hydrodynamic description at long times. The bound is closely related to the power law decay of the velocity distribution in the reference homogeneous cooling state. PMID:14524742

  7. Molecular Hydrodynamics from Memory Kernels.

    PubMed

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730

  8. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other methods require seeding or other mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  9. Load responsive hydrodynamic bearing

    DOEpatents

    Kalsi, Manmohan S.; Somogyi, Dezso; Dietle, Lannie L.

    2002-01-01

    A load responsive hydrodynamic bearing is provided in the form of a thrust bearing or journal bearing for supporting, guiding and lubricating a relatively rotatable member to minimize wear thereof responsive to relative rotation under severe load. In the space between spaced relatively rotatable members and in the presence of a liquid or grease lubricant, one or more continuous ring shaped integral generally circular bearing bodies each define at least one dynamic surface and a plurality of support regions. Each of the support regions defines a static surface which is oriented in generally opposed relation with the dynamic surface for contact with one of the relatively rotatable members. A plurality of flexing regions are defined by the generally circular body of the bearing and are integral with and located between adjacent support regions. Each of the flexing regions has a first beam-like element being connected by an integral flexible hinge with one of the support regions and a second beam-like element having an integral flexible hinge connection with an adjacent support region. At least one local weakening geometry of the flexing region is located intermediate the first and second beam-like elements. In response to application of load from one of the relatively rotatable elements to the bearing, the beam-like elements and the local weakening geometry become flexed, causing the dynamic surface to deform and establish a hydrodynamic geometry for wedging lubricant into the dynamic interface.

  10. Hydrodynamics of pronuclear migration

    NASA Astrophysics Data System (ADS)

    Nazockdast, Ehssan; Needleman, Daniel; Shelley, Michael

    2014-11-01

    Microtubule (MT) filaments play a key role in many processes involved in cell division, including spindle formation, chromosome segregation, and pronuclear positioning. We present a direct numerical technique to simulate MT dynamics in such processes. Our method includes hydrodynamically mediated interactions between MTs and other cytoskeletal objects, using singularity methods for Stokes flow. Long-ranged many-body hydrodynamic interactions are computed using a highly efficient and scalable fast multipole method, enabling the simulation of thousands of MTs. Our simulation method also takes into account the flexibility of MTs using Euler-Bernoulli beam theory as well as their dynamic instability. Using this technique, we simulate pronuclear migration in single-celled Caenorhabditis elegans embryos. Two different positioning mechanisms, based on the interactions of MTs with the motor proteins and the cell cortex, are explored: cytoplasmic pulling and cortical pushing. We find that although the pronuclear complex migrates towards the center of the cell in both models, the generated cytoplasmic flows are fundamentally different. This suggests that cytoplasmic flow visualization during pronuclear migration can be utilized to differentiate between the two mechanisms.

  11. Ramp Compression Experiments - a Sensitivity Study

    SciTech Connect

    Bastea, M; Reisman, D

    2007-02-26

    We present the first sensitivity study of the material isentropes extracted from ramp compression experiments. We perform hydrodynamic simulations of representative experimental geometries associated with ramp compression experiments and discuss the major factors determining the accuracy of the equation of state information extracted from such data. In conclusion, we analyzed, both qualitatively and quantitatively, the major experimental factors that determine the accuracy of equations of state extracted from ramp compression experiments. Since in actual experiments essentially all the effects discussed here will compound, factoring out individual signatures and magnitudes, as done in the present work, is especially important. This study should provide some guidance for the effective design and analysis of ramp compression experiments, as well as for further improvements of ramp generator performance.

  12. MAESTRO: An Adaptive Low Mach Number Hydrodynamics Algorithm for Stellar Flows

    NASA Astrophysics Data System (ADS)

    Nonaka, Andrew; Almgren, A. S.; Bell, J. B.; Malone, C. M.; Zingale, M.

    2010-01-01

    Many astrophysical phenomena are highly subsonic, requiring specialized numerical methods suitable for long-time integration. We present MAESTRO, a low Mach number stellar hydrodynamics code that can be used to simulate long-time, low-speed flows that would be prohibitively expensive to model using traditional compressible codes. MAESTRO is based on an equation set that we have derived using low Mach number asymptotics; this equation set does not explicitly track acoustic waves and thus allows a significant increase in the time step. MAESTRO is suitable for two- and three-dimensional local atmospheric flows as well as three-dimensional full-star flows, and uses adaptive mesh refinement (AMR) to locally refine grids in regions of interest. Our initial scientific applications include the convective phase of Type Ia supernovae and Type I X-ray bursts on neutron stars. The work at LBNL was supported by the SciDAC Program of the DOE Office of Advanced Scientific Computing Research under contract No. DE-AC02-05CH11231. The work at Stony Brook was supported by the DOE Office of Nuclear Physics, grant No. DE-FG02-06ER41448. We made use of Jaguar via a DOE INCITE allocation at the OLCF at ORNL and of Franklin at NERSC at LBNL.

  13. Hydrodynamic Studies of Turbulent AGN Tori

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Meisenheimer, K.; Klahr, H.; Camenzind, M.; Wolf, S.; Henning, Th.; Burkert, A.; Krause, M.

    2011-01-01

    Recently, the MID-infrared Interferometric instrument (MIDI) at the VLTI has shown that dust tori in the two nearby Seyfert galaxies NGC 1068 and the Circinus galaxy are geometrically thick and can be well described by a thin, warm central disk, surrounded by a colder and fluffy torus component. By carrying out hydrodynamical simulations with the help of the TRAMP code (Klahr et al. 1999), we follow the evolution of a young nuclear star cluster in terms of discrete mass loss and energy injection from stellar processes. This naturally leads to a filamentary large-scale torus component, where cold gas is able to flow radially inwards. The filaments join into a dense and very turbulent disk structure. In a post-processing step, we calculate spectral energy distributions and images with the 3D radiative transfer code MC3D (Wolf 2003) and compare them to observations. Turbulence in the dense disk component is investigated in a separate project.

  14. The moving mesh code SHADOWFAX

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; De Rijcke, S.

    2016-07-01

    We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  15. Shock compressed solids on the Nova laser

    SciTech Connect

    Colvin, J D; Gold, D M; Kalantar, D H; Mikaelian, K O; Remington, B A; Weber, S V; Wiley, G

    1999-08-03

    Experiments are being developed to shock compress metal foils in the solid state to study material strength under high compression. The x-ray drive has been characterized, and hydrodynamics experiments have been performed to study growth of the Rayleigh-Taylor (RT) instability in Al foils at a peak pressure of about 1.8 Mbar. Pre-imposed modulations with an initial wavelength of 10-50 µm and an amplitude of 0.5 µm show growth. Variation in the growth factors may be a result of shot-to-shot variation in preheating of the Al sample due to emission from the plasma in the hohlraum target.

  16. Shear and Compression Bioreactor for Cartilage Synthesis.

    PubMed

    Shahin, Kifah; Doran, Pauline M

    2015-01-01

    Mechanical forces, including hydrodynamic shear, hydrostatic pressure, compression, tension, and friction, can have stimulatory effects on cartilage synthesis in tissue engineering systems. Bioreactors capable of exerting forces on cells and tissue constructs within a controlled culture environment are needed to provide appropriate mechanical stimuli. In this chapter, we describe the construction, assembly, and operation of a mechanobioreactor providing simultaneous dynamic shear and compressive loading on developing cartilage tissues to mimic the rolling and squeezing action of articular joints. The device is suitable for studying the effects of mechanical treatment on stem cells and chondrocytes seeded into three-dimensional scaffolds. PMID:26445842

  17. Hydrodynamic Efficiency of Ablation Propulsion with Pulsed Ion Beam

    SciTech Connect

    Buttapeng, Chainarong; Yazawa, Masaru; Harada, Nobuhiro; Suematsu, Hisayuki; Jiang Weihua; Yatsui, Kiyoshi

    2006-05-02

    This paper presents the hydrodynamic efficiency of ablation plasma produced by a pulsed ion beam on the basis of the ion beam-target interaction. We used a one-dimensional compressible hydrodynamic fluid model to study the physics involved, namely the ablation acceleration behavior, and analyzed it as a rocket-like model in order to investigate its hydrodynamic variables for propulsion applications. These variables were estimated by the concept of ablation-driven implosion in terms of ablated mass fraction, implosion efficiency, and hydrodynamic energy conversion. Herein, an energy conversion efficiency of 17.5% was achieved. In addition, the results show a maximum energy efficiency of the ablation process (ablation efficiency) of 67%, i.e., the efficiency of conversion from pulsed ion beam energy to ablation plasma. The effects of the ion beam energy deposition depth on the hydrodynamic efficiency are briefly discussed. Further, a propulsive force with a high specific impulse of 4000 s, total impulse of 34 mN, and momentum-to-energy ratio in the range of µN/W was also analyzed.
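
    The quoted numbers can be sanity-checked with the ideal rocket relations. A minimal sketch (our interpretation, not the authors' analysis): reading the quoted 34 mN as a thrust, since mN is a unit of force, the effective exhaust velocity follows from the specific impulse, and the momentum-to-energy ratio of an ideal jet, 2/v_e, indeed lands in the µN/W range:

        g0 = 9.80665          # standard gravity, m/s^2
        Isp = 4000.0          # specific impulse from the abstract, s
        F = 34e-3             # quoted "34 mN", treated here as a thrust, N

        v_e = g0 * Isp        # effective exhaust velocity, ~3.9e4 m/s
        mdot = F / v_e        # implied ablated mass flow rate, kg/s
        p_over_E = 2.0 / v_e  # momentum per unit jet kinetic energy, N/W
        print(f"v_e = {v_e:.3g} m/s, mdot = {mdot:.3g} kg/s, "
              f"p/E = {p_over_E * 1e6:.1f} uN/W")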

  18. Effect of Second-Order Hydrodynamics on a Floating Offshore Wind Turbine

    SciTech Connect

    Roald, L.; Jonkman, J.; Robertson, A.

    2014-05-01

    The design of offshore floating wind turbines uses design codes that can simulate the entire coupled system behavior. At present, most codes include only first-order hydrodynamics, which induce forces and motions varying with the same frequency as the incident waves. Effects due to second- and higher-order hydrodynamics are often ignored in the offshore industry, because the induced forces are typically smaller than the first-order forces. In this report, the first- and second-order hydrodynamic analysis used in the offshore oil and gas industry is applied to two different wind turbine concepts: a spar and a tension leg platform.

  19. The equations of nearly incompressible fluids. I. Hydrodynamics, turbulence, and waves

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Matthaeus, W. H.

    1991-01-01

    A unified analysis delineating the conditions under which the equations of classical incompressible and compressible hydrodynamics are related in the absence of large-scale thermal, gravitational, and field gradients is presented. By means of singular expansion techniques, a method is developed to derive modified systems of fluid equations in which the effects of compressibility are admitted only weakly in terms of the incompressible hydrodynamic solutions (hence "nearly incompressible hydrodynamics"). Besides including molecular viscosity self-consistently, the role of thermal conduction in an ideal fluid is also considered. With the inclusion of heat conduction, it is found that two distinct routes to incompressibility are possible, distinguished according to the relative magnitudes of the temperature, density, and pressure fluctuations. This leads to two distinct models for thermally conducting, nearly incompressible hydrodynamics: heat-fluctuation-dominated hydrodynamics (HFDH) and heat-fluctuation-modified hydrodynamics (HFMH). For the HFDH case, the well-known classical passive scalar equation for temperature is derived as one of the nearly incompressible fluid equations, and temperature and density fluctuations are predicted to be anticorrelated. For HFMH fluids, a new thermal transport equation, in which compressible acoustic effects are present, is obtained together with a more complicated "correlation" between temperature, density, and pressure fluctuations. Although the equations of nearly incompressible hydrodynamics are envisaged principally as being applicable to homogeneous turbulence and wave propagation in low Mach number flow, it is anticipated that their applicability is likely to be far greater.

  20. Hydrodynamics of shear coaxial liquid rocket injectors

    NASA Astrophysics Data System (ADS)

    Tsohas, John

    Hydrodynamic instabilities within injector passages can couple to chamber acoustic modes and lead to unacceptable levels of combustion instabilities inside liquid rocket engines. The instability of vena-contracta regions and mixing between fuel and oxidizer can serve as a fundamental source of unsteadiness produced by the injector, even in the absence of upstream or downstream pressure perturbations. This natural or "unforced" response can provide valuable information regarding frequencies where the element could conceivably couple to chamber modes. In particular, during throttled conditions the changes in the injector response may lead to an alignment of the injector and chamber modes. For these reasons, the basic unforced response of the injector element is of particular interest when developing a new engine. The Loci/Chem code was used to perform single-element, 2-D unsteady CFD computations on the Hydrogen/Oxygen Multi-Element Experiment (HOMEE) injector which was hot-fire tested at Purdue University. The Loci/Chem code was used to evaluate the effects of O/F ratio, LOX post thickness, recess length and LOX tube length on the hydrodynamics of shear co-axial rocket injectors.

  1. Wavelet transform in electrocardiography--data compression.

    PubMed

    Provazník, I; Kozumplík, J

    1997-06-01

    An application of the wavelet transform to electrocardiography is described in the paper. The transform is used as the first stage of a lossy compression algorithm for efficient coding of rest ECG signals. The proposed technique is based on the decomposition of the ECG signal into a set of basis functions covering the time-frequency domain, so the non-stationary character of ECG data is taken into account. Some of the time-frequency signal components are removed because of their low influence on signal characteristics. The remaining components are efficiently coded by quantization, composition into a sequence of coefficients, and compression by a run-length coder and an entropic Huffman coder. The proposed wavelet-based compression algorithm can compress data to an average code length of about 1 bit/sample. The algorithm can also be implemented in a real-time processing system where the wavelet transform is computed by fast linear filters, described in the paper. PMID:9291025
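
    A minimal sketch of this pipeline (illustrative only; the paper uses fast linear filters and a Huffman coder, whereas the Haar wavelet, threshold, and quantization step below are our own toy choices):

        import numpy as np

        def haar_analysis(x):
            """One level of the orthonormal Haar wavelet transform."""
            a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation (low-pass)
            d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail (high-pass)
            return a, d

        def run_length(seq):
            """Collapse a sequence into (value, count) pairs; zero runs compress well."""
            out, prev, n = [], seq[0], 1
            for v in seq[1:]:
                if v == prev:
                    n += 1
                else:
                    out.append((prev, n))
                    prev, n = v, 1
            out.append((prev, n))
            return out

        t = np.linspace(0.0, 1.0, 512, endpoint=False)
        ecg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(512)  # toy "ECG"

        a, d = haar_analysis(ecg)
        d[np.abs(d) < 0.2] = 0.0            # drop low-influence components (lossy)
        q = np.round(d / 0.05).astype(int)  # uniform quantization, step 0.05
        pairs = run_length(q.tolist())      # Huffman coding of pairs would follow
        print(len(q), "detail coefficients ->", len(pairs), "run-length pairs")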

  2. Hydrodynamics, resurgence, and transasymptotics

    NASA Astrophysics Data System (ADS)

    Başar, Gökçe; Dunne, Gerald V.

    2015-12-01

    The second order hydrodynamical description of a homogeneous conformal plasma that undergoes a boost-invariant expansion is given by a single nonlinear ordinary differential equation, whose resurgent asymptotic properties we study, developing further the recent work of Heller and Spalinski [Phys. Rev. Lett. 115, 072501 (2015)]. Resurgence clearly identifies the nonhydrodynamic modes that are exponentially suppressed at late times, analogous to the quasinormal modes in gravitational language, organizing these modes in terms of a trans-series expansion. These modes are analogs of instantons in semiclassical expansions, where the damping rate plays the role of the instanton action. We show that this system displays the generic features of resurgence, with explicit quantitative relations between the fluctuations about different orders of these nonhydrodynamic modes. The imaginary part of the trans-series parameter is identified with the Stokes constant, and the real part with the freedom associated with initial conditions.
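
    Schematically, the trans-series structure referred to here takes the following form (illustrative notation, not copied from the paper; σ is the trans-series parameter and S the "instanton action" set by the damping rate of the non-hydrodynamic mode):

        f(w) \;\sim\; \sum_{n=0}^{\infty} \sigma^{n}\, e^{-n S w}\, \Phi_{n}(w),
        \qquad
        \Phi_{n}(w) \;=\; \sum_{k=0}^{\infty} a_{n,k}\, w^{-k}

    Here the n = 0 sector is the hydrodynamic gradient expansion, and resurgence ties the large-order growth of its coefficients to the low orders of the exponentially suppressed sectors.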

  3. Hydrodynamics of Peristaltic Propulsion

    NASA Astrophysics Data System (ADS)

    Athanassiadis, Athanasios; Hart, Douglas

    2014-11-01

    A curious class of animals called salps live in marine environments and self-propel by ejecting vortex rings much like jellyfish and squid. However, unlike other jetting creatures that siphon and eject water from one side of their body, salps produce vortex rings by pumping water through siphons on opposite ends of their hollow cylindrical bodies. In the simplest cases, it seems like some species of salp can successfully move by contracting just two siphons connected by an elastic body. When thought of as a chain of timed contractions, salp propulsion is reminiscent of peristaltic pumping applied to marine locomotion. Inspired by salps, we investigate the hydrodynamics of peristaltic propulsion, focusing on the scaling relationships that determine flow rate, thrust production, and energy usage in a model system. We discuss possible actuation methods for a model peristaltic vehicle, considering both the material and geometrical requirements for such a system.

  4. Hydrodynamics of Turning Flocks

    NASA Astrophysics Data System (ADS)

    Yang, Xingbo; Marchetti, M. Cristina

    2015-03-01

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well polarized flocks. The continuum equations are derived by coarse graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields spin waves that mediate the propagation of turning information throughout the flock. When the inertia is large, we find a novel instability that signals the transition to complex spatio-temporal patterns of continuously turning and swirling flocks. This work was supported by the NSF Awards DMR-1305184 and DGE-1068780 at Syracuse University and NSF Award PHY11-25915 and the Gordon and Betty Moore Foundation Grant No. 2919 at the KITP at the University of California, Santa Barbara.

  5. Synchronization and hydrodynamic interactions

    NASA Astrophysics Data System (ADS)

    Powers, Thomas; Qian, Bian; Breuer, Kenneth

    2008-03-01

    Cilia and flagella commonly beat in a coordinated manner. Examples include the flagella that Volvox colonies use to move, the cilia that sweep foreign particles up out of the human airway, and the nodal cilia that set up the flow that determines the left-right axis in developing vertebrate embryos. In this talk we present an experimental study of how hydrodynamic interactions can lead to coordination in a simple idealized system: two nearby paddles driven with fixed torques in a highly viscous fluid. The paddles attain a synchronized state in which they rotate together with a phase difference of 90 degrees. We discuss how synchronization depends on system parameters and present numerical calculations using the method of regularized stokeslets.

  6. Simulated performance results of the OMV video compression telemetry system

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Parker, Glenn; Thomas, Lee Ann

    1989-01-01

    The control system of NASA's Orbital Maneuvering Vehicle (OMV) will employ range/range-rate radar, a forward command link, and a compressed video return link. The video data are compressed by sampling every sixth frame; a rate of 5 frames/sec is adequate for OMV docking speeds. Further compression is obtained, albeit at the expense of spatial resolution, by averaging adjacent pixels. The remaining compression is achieved by differential pulse-code modulation and Huffman run-length encoding. A concatenated error-correction coding system is used to protect the compressed video data stream from channel errors.

  7. Prototype Mixed Finite Element Hydrodynamics Capability in ARES

    SciTech Connect

    Rieben, R N

    2008-07-10

    This document describes work on a prototype Mixed Finite Element Method (MFEM) hydrodynamics algorithm in the ARES code, and its application to a set of standard test problems. This work is motivated by the need for improvements to the algorithms used in the Lagrange hydrodynamics step to make them more robust. We begin by identifying the outstanding issues with traditional numerical hydrodynamics algorithms followed by a description of the proposed method and how it may address several of these longstanding issues. We give a theoretical overview of the proposed MFEM algorithm as well as a summary of the coding additions and modifications that were made to add this capability to the ARES code. We present results obtained with the new method on a set of canonical hydrodynamics test problems and demonstrate significant improvement in comparison to results obtained with traditional methods. We conclude with a summary of the issues still at hand and motivate the need for continued research to develop the proposed method into maturity.

  8. Data compression for the microgravity experiments

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Whyte, Wayne A., Jr.; Anderson, Karen S.; Shalkhauser, Mary JO; Summers, Anne M.

    1989-01-01

    Researchers present the environment and conditions under which data compression is to be performed for the microgravity experiments. Also presented are some coding techniques that would be useful in this environment. It should be emphasized that the researchers are currently at the beginning of this program and the toolkit mentioned is far from complete.

  9. Reaching the hydrodynamic regime in a Bose-Einstein condensate by suppression of avalanches

    SciTech Connect

    Stam, K. M. R. van der; Meppelink, R.; Vogels, J. M.; Straten, P. van der

    2007-03-15

    We report the realization of a Bose-Einstein condensate (BEC) in the hydrodynamic regime. The hydrodynamic regime is reached by evaporative cooling at a relatively low density, suppressing the effect of avalanches. With the suppression of avalanches a BEC containing more than 10⁸ atoms is produced. The collisional opacity can be tuned from the collisionless regime to a collisional opacity of more than 2 by compressing the trap after condensation. In the collisionally opaque regime a significant heating of the cloud at time scales shorter than half of the radial trap period is measured, which is a direct proof that the BEC is hydrodynamic.

  10. Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets

    NASA Astrophysics Data System (ADS)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron

    2000-10-01

    A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods will be presented together with the numerical results.

  11. Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron; Rodgers, Stephen L. (Technical Monitor)

    2000-01-01

    A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods will be presented together with the numerical results.

  12. Hydrodynamics of sediment threshold

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2016-07-01

    A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting over a closely packed bed formed by identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micro-mechanics. The sediment threshold is envisioned as the rolling mode, which is the plausible mode to initiate particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) is in excellent agreement with the experimental data for uniform sediments. However, most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with the experimental data for nonuniform sediments.

  13. A high-speed distortionless predictive image-compression scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Smyth, P.; Wang, H.

    1990-01-01

    A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.

  14. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and is thus suitable for the fluorescence readout mode. A two-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  15. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  16. Lossy Compression of Haptic Data by Using DCT

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiroyuki; Ohnishi, Kouhei

    In this paper, lossy compression of haptic data is presented and the results of its application to a motion copying system are described. Lossy data compression has been studied and practically applied in audio and image coding, but lossy compression of haptic data has not been studied extensively. Haptic data compression using the discrete cosine transform (DCT) and modified DCT (MDCT) for haptic data storage is described in this paper. In the lossy compression, the calculated DCT/MDCT coefficients are quantized by a quantization vector. The quantized coefficients are further compressed by lossless coding based on Huffman coding. The compressed haptic data are applied to the motion copying system, and the results are provided.
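
    A minimal sketch of DCT-based lossy coding of a sampled haptic signal (our toy parameters, not the paper's; SciPy's dct/idct stand in for the DCT/MDCT pair): transform, uniform quantization, and reconstruction, after which lossless Huffman coding of the quantized coefficients would follow:

        import numpy as np
        from scipy.fft import dct, idct

        x = np.cumsum(0.01 * np.random.randn(256))  # toy force/position trajectory
        X = dct(x, norm='ortho')                    # DCT coefficients
        step = 0.02                                 # quantization step (the lossy part)
        Xq = np.round(X / step)
        x_rec = idct(Xq * step, norm='ortho')       # reconstruction

        print("nonzero coefficients kept:", np.count_nonzero(Xq), "of", len(Xq))
        print("RMS reconstruction error :", np.sqrt(np.mean((x - x_rec) ** 2)))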

  17. Compression and venous ulcers.

    PubMed

    Stücker, M; Link, K; Reich-Schupke, S; Altmeyer, P; Doerler, M

    2013-03-01

    Compression therapy is considered to be the most important conservative treatment of venous leg ulcers. Until a few years ago, compression bandages were regarded as first-line therapy of venous leg ulcers. However, to date medical compression stockings are the first choice of treatment. With respect to compression therapy of venous leg ulcers the following statements are widely accepted: 1. Compression improves the healing of ulcers when compared with no compression; 2. Multicomponent compression systems are more effective than single-component compression systems; 3. High compression is more effective than lower compression; 4. Medical compression stockings are more effective than compression with short stretch bandages. Healed venous leg ulcers show a high relapse rate without ongoing treatment. The use of medical stockings significantly reduces the amount of recurrent ulcers. Furthermore, the relapse rate of venous leg ulcers can be significantly reduced by a combination of compression therapy and surgery of varicose veins compared with compression therapy alone. PMID:23482538

  18. Hydrodynamic Simulations of Giant Impacts

    NASA Astrophysics Data System (ADS)

    Reinhardt, Christian; Stadel, Joachim

    2013-07-01

    We studied the basic numerical aspects of giant impacts using Smoothed Particle Hydrodynamics (SPH), which has been used in most of the prior studies conducted in this area (e.g., Benz, Canup). Our main goal was to modify the massively parallel, multi-stepping code GASOLINE, widely used in cosmological simulations, so that it can properly simulate the behavior of condensed materials such as granite or iron using the Tillotson equation of state. GASOLINE has been used to simulate hundreds of millions of particles for ideal gas physics, so using several million particles in condensed-material simulations seems possible. In order to focus our attention on the numerical aspects of the problem we neglected the internal structure of the protoplanets and modelled them as homogeneous (isothermal) granite spheres. For the energy balance we considered only PdV work and shock heating of the material during the impact (cooling of the material was neglected). Starting at a low resolution of 2048 particles for the target and the impactor, we ran several simulations for different impact parameters and impact velocities and successfully reproduced the main features of the pioneering work of Benz from 1986. The impact sends a shock wave through both bodies, heating the target and disrupting the remaining impactor. As in prior simulations, material is ejected from the collision. How much, and whether it leaves the system or survives in an orbit for a longer time, depends on the initial conditions but also on resolution. Increasing the resolution (to 1.2x10⁶ particles) results in both a much clearer shock wave and deformation of the bodies during the impact, and a more compact and detailed "arm"-like structure in the ejected material. Currently we are investigating some numerical issues we encountered and are implementing differentiated models, moving one step closer to more realistic protoplanets in such giant impact simulations.
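
    The density estimate at the heart of an SPH code like GASOLINE can be sketched in a few lines (a toy O(N²) version with the standard M4 cubic-spline kernel; production codes use tree-based neighbor search and per-particle smoothing lengths):

        import numpy as np

        def w_cubic(r, h):
            """3D M4 cubic-spline kernel with support 2h (normalization 1/(pi h^3))."""
            q = r / h
            sigma = 1.0 / (np.pi * h ** 3)
            return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                           np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

        def density(pos, m, h):
            """rho_i = sum_j m_j W(|r_i - r_j|, h), evaluated by brute force."""
            r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            return (m[None, :] * w_cubic(r, h)).sum(axis=1)

        pos = np.random.rand(200, 3)   # toy particle cloud (a "granite sphere")
        m = np.full(200, 1.0 / 200)    # equal-mass particles
        print(density(pos, m, h=0.2)[:5])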

  19. Binary Pulse Compression Techniques for MST Radars

    NASA Technical Reports Server (NTRS)

    Woodman, R. F.; Sulzer, M. P.; Farley, D. T.

    1984-01-01

    In most mesosphere-stratosphere-troposphere (MST) applications pulsed radars are peak-power limited and have excess average power capability. Short pulses are required for good range resolution, but the problem of range ambiguity (signals received simultaneously from more than one altitude) sets a minimum limit on the interpulse period (IPP). Pulse compression is a technique which allows more of the transmitter average power capacity to be used without sacrificing range resolution. Binary phase coding methods for pulse compression are discussed. Many aspects of codes and decoding and their applications to MST experiments are addressed; this includes Barker codes and longer individual codes, followed by complementary codes and other code sets. Software decoding, hardware decoders, and coherent integrators are also discussed.
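
    The key property of these binary phase codes is easy to verify numerically. A short sketch (illustrative, not from the paper): the aperiodic autocorrelation of the length-13 Barker code has a peak of 13 and sidelobes of magnitude at most 1, while a complementary (Golay) pair has sidelobes that cancel exactly when the two autocorrelations are summed:

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
        acf = np.correlate(barker13, barker13, mode='full')
        print(acf.max(), np.abs(acf[:12]).max())  # peak 13, sidelobes <= 1

        a = np.array([1, 1])                      # simplest complementary pair
        b = np.array([1, -1])
        s = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
        print(s)                                  # [0 4 0]: sidelobes cancel exactly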

  20. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution. PMID:8172973
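
    The benefit of the DPCM pre-transform is easy to demonstrate (a sketch with zlib's LZ77+Huffman standing in for the LZW coder discussed in the article): row-wise differencing decorrelates a smooth image, so the same lossless coder usually compresses the differential image better than the raw one:

        import zlib
        import numpy as np

        x, y = np.meshgrid(np.arange(256), np.arange(256))
        img = (128 + 60 * np.sin(x / 25.0) * np.cos(y / 40.0)).astype(np.uint8)

        # DPCM: horizontal differences (first column kept), wrapped into one byte;
        # invertible via a modular cumulative sum along each row.
        diff = np.diff(img.astype(np.int16), axis=1,
                       prepend=np.zeros((256, 1), np.int16))
        dpcm = (diff % 256).astype(np.uint8)

        print("raw  :", len(zlib.compress(img.tobytes(), 9)), "bytes")
        print("dpcm :", len(zlib.compress(dpcm.tobytes(), 9)), "bytes (typically smaller)")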

  1. Information preserving image compression for archiving NMR images.

    PubMed

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y

    1991-01-01

    This paper presents a result on information-preserving compression of NMR images for archiving purposes. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, Lynch-Davisson coding with a block size of 64, applied to prediction error sequences in the Gray-code bit planes of each image, gave an average compression ratio of 2.3:1 for 14 test images. Predictive coding with a third-order linear predictor and Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, while the maximum compression ratio achieved was 3.8:1. This result is one step, albeit a small one, toward improved information-preserving image compression for medical applications. PMID:1913579

  2. Data compression of large document data bases.

    PubMed

    Heaps, H S

    1975-02-01

    Consideration is given to a document data base that is structured for information retrieval purposes by means of an inverted index and term dictionary. Vocabulary characteristics of various fields are described, and it is shown how the data base may be stored in a compressed form by use of restricted variable length codes that produce a compression not greatly in excess of the optimum that could be achieved through use of Huffman codes. The coding is word oriented. An alternative scheme of word fragment coding is described. It has the advantage that it allows the use of a small dictionary, but is less efficient with respect to compression of the data base. PMID:1127034

  3. Warm dense matter: another application for pulsed power hydrodynamics

    SciTech Connect

    Reinovsky, Robert Emil

    2009-01-01

    Pulsed Power Hydrodynamics (PPH) is an application of low-impedance pulsed power and high-magnetic-field technology to the study of advanced hydrodynamic problems, instabilities, turbulence, and material properties. PPH can potentially be applied to the study of the properties of warm dense matter (WDM) as well. Exploration of the properties of warm dense matter, such as equation of state, viscosity, and conductivity, is an emerging area of study focused on the behavior of matter at densities near solid density (from 10% of solid density to slightly above solid density) and modest temperatures (≈1-10 eV). Conditions characteristic of WDM are difficult to obtain, and even more difficult to diagnose. One approach to producing WDM uses laser or particle beam heating of very small quantities of matter on timescales short compared to the subsequent hydrodynamic expansion timescales (isochoric heating), and a vigorous community of researchers is applying these techniques. Pulsed power hydrodynamic techniques, such as large-convergence liner compression of a large volume of modest-density, low-temperature plasma to densities approaching solid density, or multiple shock compression and heating of normal-density material between a massive, high-density, energetic liner and a high-density central 'anvil', are possible ways to reach relevant conditions. Another avenue to WDM conditions is through the explosion and subsequent expansion of a conductor (wire) against a high-pressure (density) gas background (isobaric expansion). However, both techniques demand substantial energy, proper power conditioning and delivery, and an understanding of the hydrodynamic and instability processes that limit each technique. In this paper we examine the challenges to pulsed power technology and to pulsed power systems presented by the opportunity to explore this interesting region of parameter space.

  4. Compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus

    2014-07-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex optimization. The DOA estimation problem is formulated in the CS framework and it is shown that CS has superior performance compared to traditional DOA estimation methods especially under challenging scenarios such as coherent arrivals and single-snapshot data. An offset and resolution analysis is performed to indicate the limitations of CS. It is shown that the limitations are related to the beampattern, thus can be predicted. The high-resolution capabilities and the robustness of CS are demonstrated on experimental array data from ocean acoustic measurements for source tracking with single-snapshot data. PMID:24993212
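
    A toy single-snapshot DOA sketch in the CS spirit (our own minimal solver, not the paper's; in practice one would use a convex-optimization package): an l1-regularized least-squares estimate on an angular grid, solved by plain iterative soft thresholding (ISTA):

        import numpy as np

        M, d = 16, 0.5                             # sensors, spacing (wavelengths)
        grid = np.deg2rad(np.arange(-90.0, 91.0))  # candidate DOAs, 1 deg apart
        n = np.arange(M)[:, None]
        A = np.exp(-2j * np.pi * d * n * np.sin(grid)[None, :])  # steering matrix

        truth = np.deg2rad([-20.0, 23.0])          # two arrivals, one snapshot
        b = np.exp(-2j * np.pi * d * n * np.sin(truth)[None, :]).sum(axis=1)
        b += 0.05 * (np.random.randn(M) + 1j * np.random.randn(M))

        lam = 0.5
        mu = 1.0 / np.linalg.norm(A, 2) ** 2       # ISTA step size
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(500):
            g = x - mu * (A.conj().T @ (A @ x - b))  # gradient step on the LS term
            shrink = np.maximum(1.0 - mu * lam / np.maximum(np.abs(g), 1e-12), 0.0)
            x = g * shrink                           # complex soft thresholding
        peaks = np.rad2deg(grid[np.abs(x) > 0.5 * np.abs(x).max()])
        print("estimated DOAs (deg):", peaks)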

  5. Hydrodynamic Elastic Magneto Plastic

    Energy Science and Technology Software Center (ESTSC)

    1985-02-01

    The HEMP code solves the conservation equations of two-dimensional elastic-plastic flow, in plane x-y coordinates or in cylindrical symmetry around the x-axis. Provisions for calculation of fixed boundaries, free surfaces, pistons, and boundary slide planes have been included, along with other special conditions.

  6. Recent development of hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Hirano, Tetsufumi

    2014-09-01

    In this talk, I give an overview of recent developments in hydrodynamic modeling of high-energy nuclear collisions. First, I briefly discuss the current situation of hydrodynamic modeling by showing results from the integrated dynamical approach, in which Monte Carlo calculation of initial conditions, quark-gluon fluid dynamics, and hadronic cascading are combined. In particular, I focus on rescattering effects of strange hadrons on final observables. Next, I highlight three topics of recent development in hydrodynamic modeling: (1) medium response to jet propagation in di-jet asymmetric events, (2) causal hydrodynamic fluctuation and its application to Bjorken expansion, and (3) the chiral magnetic wave from anomalous hydrodynamic simulations. (1) Recent CMS data suggest the existence of QGP response to the propagation of jets. To investigate this phenomenon, we solve hydrodynamic equations with a source term representing the deposition of energy and momentum from jets. We find that a large number of low-momentum particles are emitted at large angles from the jet axis. This gives a novel interpretation of the CMS data. (2) It has been claimed that matter created even in p-p/p-A collisions may behave like a fluid. However, fluctuation effects would be important in such a small system. We formulate relativistic fluctuating hydrodynamics and apply it to Bjorken expansion. We find that the final multiplicity fluctuates around the mean value even if the initial condition is fixed. This effect is relatively important in peripheral A-A collisions and in p-p/p-A collisions. (3) Anomalous transport in the quark-gluon fluid is predicted when an extremely high magnetic field is applied. We investigate this possibility by solving anomalous hydrodynamic equations. We find that a difference in the elliptic flow parameter between positive and negative particles appears due to the chiral magnetic wave. Finally, I provide a personal perspective on hydrodynamic modeling of high-energy nuclear collisions.

  7. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  8. Constraining relativistic viscous hydrodynamical evolution

    SciTech Connect

    Martinez, Mauricio; Strickland, Michael

    2009-04-15

    We show that by requiring positivity of the longitudinal pressure it is possible to constrain the initial conditions one can use in second-order viscous hydrodynamical simulations of ultrarelativistic heavy-ion collisions. We demonstrate this explicitly for (0+1)-dimensional viscous hydrodynamics and discuss how the constraint extends to higher dimensions. Additionally, we present an analytic approximation to the solution of (0+1)-dimensional second-order viscous hydrodynamical evolution equations appropriate to describe the evolution of matter in an ultrarelativistic heavy-ion collision.
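
    Schematically, the constraint in the (0+1)-dimensional case can be rendered as follows (illustrative notation, not necessarily the paper's exact conventions; p is the equilibrium pressure and Φ the shear correction to the longitudinal pressure):

        \mathcal{P}_{L}(\tau) \;=\; p(\tau) - \Phi(\tau) \;\ge\; 0
        \qquad\Longrightarrow\qquad
        \Phi(\tau_{0}) \;\le\; p(\tau_{0})

    That is, only initial shear values not exceeding the initial pressure yield admissible evolutions.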

  9. Hydrodynamic body shape analysis and their impact on swimming performance.

    PubMed

    Li, Tian-Zeng; Zhan, Jie-Min

    2015-01-01

    This study presents the hydrodynamic characteristics of different adult male swimmers' body shapes using a computational fluid dynamics method. The simulation is carried out with the CFD Fluent code, solving the 3D incompressible Navier-Stokes equations with the RNG k-ε turbulence closure. The water free surface is captured by the volume of fluid (VOF) method. A set of full-body models, based on the anthropometrical characteristics of the most common male swimmers, is created with the Computer Aided Industrial Design (CAID) software Rhinoceros. The analysis of the CFD results reveals that a swimmer's body shape has a noticeable effect on hydrodynamic performance. This explains why a male swimmer with an inverted-triangle body shape has good hydrodynamic characteristics for competitive swimming. PMID:26898107

  10. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    Energy Science and Technology Software Center (ESTSC)

    2006-04-01

    Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.

  11. Spin hydrodynamic generation

    NASA Astrophysics Data System (ADS)

    Takahashi, R.; Matsuo, M.; Ono, M.; Harii, K.; Chudo, H.; Okayasu, S.; Ieda, J.; Takahashi, S.; Maekawa, S.; Saitoh, E.

    2016-01-01

    Magnetohydrodynamic generation is the conversion of fluid kinetic energy into electricity. Such conversion, which has been applied to various types of electric power generation, is driven by the Lorentz force acting on charged particles and thus a magnetic field is necessary. On the other hand, recent studies of spintronics have revealed the similarity between the function of a magnetic field and that of spin-orbit interactions in condensed matter. This suggests the existence of an undiscovered route to realize the conversion of fluid dynamics into electricity without using magnetic fields. Here we show electric voltage generation from fluid dynamics free from magnetic fields; we excited liquid-metal flows in a narrow channel and observed longitudinal voltage generation in the liquid. This voltage has nothing to do with electrification or thermoelectric effects, but turned out to follow a universal scaling rule based on a spin-mediated scenario. The result shows that the observed voltage is caused by spin-current generation from a fluid motion: spin hydrodynamic generation. The observed phenomenon allows us to make mechanical spin-current and electric generators, opening a door to fluid spintronics.

  12. Relativistic hydrodynamics on graphic cards

    NASA Astrophysics Data System (ADS)

    Gerhard, Jochen; Lindenstruth, Volker; Bleicher, Marcus

    2013-02-01

    We show how to accelerate relativistic hydrodynamics simulations using graphics cards (graphics processing units, GPUs). These improvements are of highest relevance, e.g., to the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g., the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators like GPUs as well as on multi-core processors. With the redesign of the algorithm the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.

  13. Reciprocal relations in dissipationless hydrodynamics

    SciTech Connect

    Melnikovsky, L. A.

    2014-12-15

    Hidden symmetry in dissipationless terms of arbitrary hydrodynamics equations is recognized. We demonstrate that all fluxes are generated by a single function and derive conventional Euler equations using the proposed formalism.

  14. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression, as the information in each pixel is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both the storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The proposed system implements a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to the 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using the inverse SWT. Finally, the compression ratio (CR) is evaluated to demonstrate the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing in the arithmetic coding stage, as it deals with multiple subslices. PMID:24848945

  15. Data Compression--A Comparison of Methods. Computer Science and Technology.

    ERIC Educational Resources Information Center

    Aronson, Jules

    This report delineates the theory and terminology of data compression. It surveys four data compression methods--null suppression, pattern substitution, statistical encoding, and telemetry compression--and relates them to a standard statistical coding problem, i.e., the noiseless coding problem. The well defined solution to that problem can serve…

  16. Boltzmann equation and hydrodynamic fluctuations.

    PubMed

    Colangeli, Matteo; Kröger, Martin; Ottinger, Hans Christian

    2009-11-01

    We apply the method of invariant manifolds to derive equations of generalized hydrodynamics from the linearized Boltzmann equation and determine exact transport coefficients obeying Green-Kubo formulas. Numerical calculations are performed in the special case of Maxwell molecules. Through comparison with experimental data and former approaches, we investigate the spectrum of density fluctuations and address the regime of finite Knudsen numbers and finite-frequency hydrodynamics. PMID:20364972

  17. Eightfold Classification of Hydrodynamic Dissipation.

    PubMed

    Haehl, Felix M; Loganayagam, R; Rangamani, Mukund

    2015-05-22

    We provide a complete characterization of hydrodynamic transport consistent with the second law of thermodynamics at arbitrary orders in the gradient expansion. A key ingredient in facilitating this analysis is the notion of adiabatic hydrodynamics, which enables isolation of the genuinely dissipative parts of transport. We demonstrate that most transport is adiabatic. Furthermore, in the dissipative part, only terms at the leading order in gradient expansion are constrained to be sign definite by the second law (as has been derived before). PMID:26047219

  18. Heuristic dynamic complexity coding

    NASA Astrophysics Data System (ADS)

    Škorupa, Jozef; Slowack, Jürgen; Mys, Stefaan; Lambert, Peter; Van de Walle, Rik

    2008-04-01

    Distributed video coding is a new video coding paradigm that shifts the computationally intensive motion estimation from the encoder to the decoder. This results in a lightweight encoder and a complex decoder, as opposed to the predictive video coding scheme (e.g., MPEG-X and H.26X) with a complex encoder and a lightweight decoder. Both schemes, however, lack the ability to adapt to varying complexity constraints imposed by the encoder and decoder, an essential ability for applications targeting a wide range of devices with different complexity constraints or applications with temporarily variable complexity constraints. Moreover, the effect of complexity adaptation on the overall compression performance is of great importance and has not yet been investigated. To address this need, we have developed a video coding system that can adapt itself to complexity constraints by dynamically sharing the motion estimation computations between both components. On this system we have studied the effect of the complexity distribution on the compression performance. This paper describes how motion estimation can be shared using heuristic dynamic complexity and how the distribution of complexity affects the overall compression performance of the system. The results show that the complexity can indeed be shared between encoder and decoder in an efficient way, at acceptable rate-distortion performance.

  19. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run-lengths, and takes the difference between this average
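
    Both code families are a few lines each. A sketch (illustrative, not the adaptive implementation described above; the Golomb parameter here is restricted to a power of two, i.e., a Rice code):

        def rice(n, k):
            """Golomb code with parameter m = 2**k: unary quotient, k-bit remainder."""
            q = n >> k
            bits = '1' * q + '0'                   # unary part, 0-terminated
            if k:
                bits += format(n & ((1 << k) - 1), '0{}b'.format(k))
            return bits

        def exp_golomb(n, k=0):
            """Order-k exponential-Golomb code for a nonnegative integer n."""
            m = (n >> k) + 1                       # order-0 codeword value
            bits = '0' * (m.bit_length() - 1) + format(m, 'b')
            if k:
                bits += format(n & ((1 << k) - 1), '0{}b'.format(k))
            return bits

        for n in range(6):                         # short codes for small values
            print(n, rice(n, 2), exp_golomb(n, 0))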

  20. Hemodynamics of a hydrodynamic injection

    PubMed Central

    Kanefuji, Tsutomu; Yokoo, Takeshi; Suda, Takeshi; Abe, Hiroyuki; Kamimura, Kenya; Liu, Dexi

    2014-01-01

    The hemodynamics during a hydrodynamic injection were evaluated using cone beam computed tomography (CBCT) and fluoroscopic imaging. The impacts of hydrodynamic (5 seconds) and slow (60 seconds) injections into the tail veins of mice were compared using 9% body weight of a phase-contrast medium. Hydrodynamically injected solution traveled to the heart and drew back to the hepatic veins (HV), which led to liver expansion and a trace amount of spillover into the portal vein (PV). The liver volumes peaked at 165.6 ± 13.3% and 165.5 ± 11.9% of the original liver volumes in the hydrodynamic and slow injections, respectively. Judging by the intensity of the CBCT images at the PV, HV, right atrium, liver parenchyma (LP), and the inferior vena cava (IVC) distal to the HV conjunction, the slow injection resulted in the higher intensity at PV than at LP. In contrast, a significantly higher intensity was observed in LP after hydrodynamic injection in comparison with that of PV, suggesting that the liver took up the iodine from the blood flow. These results suggest that the enlargement speed of the liver, rather than the expanded volume, primarily determines the efficiency of hydrodynamic delivery to the liver. PMID:26015971

  1. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138

  2. Detection of the Compressed Primary Stellar Wind in eta Carinae

    NASA Technical Reports Server (NTRS)

    Teodoro, Mairan Macedo; Madura, Thomas I.; Gull, Theodore R.; Corcoran, Michael F.; Hamaguchi, K.

    2014-01-01

    A series of three HST/STIS spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from eta Carinae. We identify these arcs with the shell-like structures, seen in the 3D hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.

  3. Slurry bubble column hydrodynamics

    NASA Astrophysics Data System (ADS)

    Rados, Novica

    Slurry bubble column reactors are presently used for a wide range of reactions in both the chemical and biochemical industries. The successful design and scale-up of slurry bubble column reactors require a complete understanding of multiphase fluid dynamics, i.e., phase mixing, heat, and mass transport characteristics. The primary objective of this thesis is to improve the presently limited understanding of gas-liquid-solid slurry bubble column hydrodynamics. The effect of superficial gas velocity (8 to 45 cm/s), pressure (0.1 to 1.0 MPa), and solids loading (20 and 35 wt.%) on the time-averaged solids velocity and turbulent parameter profiles has been studied using Computer Automated Radioactive Particle Tracking (CARPT). To accomplish this, the CARPT technique has been significantly improved for measurements in highly attenuating systems, such as a high-pressure, high-solids-loading stainless steel slurry bubble column. At a similar set of operational conditions, time-averaged gas and solids holdup profiles have been evaluated using the developed Computed Tomography (CT)/overall gas holdup procedure. This procedure is based on the combination of CT scans and overall gas holdup measurements, and assumes constant solids loading in the radial direction and axially invariant cross-sectionally averaged gas holdup. The obtained experimental holdup, velocity, and turbulent parameter data are correlated and compared with the existing CARPT/CT gas-liquid and gas-liquid-solid slurry data for low superficial gas velocities and atmospheric pressure. The obtained solids axial velocity radial profiles are compared with the predictions of the one-dimensional (1-D) liquid/slurry recirculation phenomenological model. The obtained solids loading axial profiles are compared with the predictions of the Sedimentation and Dispersion Model (SDM). The overall gas holdup values, gas holdup radial profiles, solids loading axial profiles, solids axial velocity radial profiles and solids

  4. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.

  5. A Hydrochemical Hybrid Code for Astrophysical Problems. I. Code Verification and Benchmarks for a Photon-dominated Region (PDR)

    NASA Astrophysics Data System (ADS)

    Motoyama, Kazutaka; Morata, Oscar; Shang, Hsien; Krasnopolsky, Ruben; Hasegawa, Tatsuhiko

    2015-07-01

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.
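
    The division of labor the abstract describes can be caricatured in one dimension (a sketch of the operator-splitting idea only, not KM2's actual scheme): a Godunov-type upwind finite-volume step advects a chemical species, then an implicit backward-Euler step applies a stiff destruction reaction such as photodissociation:

        import numpy as np

        nx, dx, v, k_pd = 200, 1.0, 1.0, 0.1        # cells, cell size, speed, rate
        dt = 0.5 * dx / v                           # CFL-limited hydro time step
        n = np.where(np.arange(nx) < 50, 1.0, 0.0)  # initial species column

        for _ in range(100):
            flux = v * n                                  # upwind interface flux (v > 0)
            n[1:] -= (dt / dx) * (flux[1:] - flux[:-1])   # finite-volume advection
            n /= 1.0 + dt * k_pd                          # implicit chemistry step
        print("peak abundance after advection + photodissociation:", n.max())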

  6. 3D Hydrodynamic Simulations of Relativistic Jets

    NASA Astrophysics Data System (ADS)

    Hughes, P. A.; Miller, M. A.; Duncan, G. C.; Swift, C. M.

    1998-12-01

    We present the results of validation runs and the first extragalactic jet simulations performed with a 3D relativistic numerical hydrodynamic code employing a solver of the RHLLE type and using adaptive mesh refinement (AMR; Duncan & Hughes, 1994, Ap. J., 436, L119). Test problems include the shock tube, blast wave and spherical shock reflection (implosion). Trials with the code show that as a consequence of AMR it is viable to perform exploratory runs on workstation class machines (with no more than 128Mb of memory) prior to production runs. In the former case we achieve a resolution not much less than that normally regarded as the minimum needed to capture the essential physics of a problem, which means that such runs can provide valuable guidance allowing the optimum use of supercomputer resources. We present initial results from a program to explore the 3D stability properties of flows previously studied using a 2D axisymmetric code, and our first attempt to explore the structure and morphology of a relativistic jet encountering an ambient density gradient that mimics an ambient inhomogeneity or cloud.

  7. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. The performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ is much faster; the algorithm therefore has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
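
    The abstract does not spell out the adaptation rule, so the following Python sketch shows one plausible one-pass scheme: emit the index of the nearest codeword, then nudge that codeword toward the current input. The codebook size and learning rate are illustrative choices, not the paper's.

      import numpy as np

      def lavq_encode(vectors, codebook_size=8, lr=0.125, seed=0):
          # One-pass adaptive VQ sketch: emit the nearest codeword's index,
          # then pull that codeword toward the current input vector.
          rng = np.random.default_rng(seed)
          codebook = rng.standard_normal((codebook_size, vectors.shape[1]))
          indices = []
          for v in vectors:
              i = int(np.argmin(((codebook - v) ** 2).sum(axis=1)))
              indices.append(i)
              codebook[i] += lr * (v - codebook[i])   # local adaptation step
          return indices, codebook

      data = np.random.default_rng(1).standard_normal((1000, 4))
      idx, cb = lavq_encode(data)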

  8. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirements in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  9. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  10. An analysis of smoothed particle hydrodynamics

    SciTech Connect

    Swegle, J.W.; Attaway, S.W.; Heinstein, M.W.; Mello, F.J.; Hicks, D.L.

    1994-03-01

    SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. In the present study, the SPH algorithm has been subjected to detailed testing and analysis to determine its applicability in the field of solid dynamics. An important result of the work is a rigorous von Neumann stability analysis which provides a simple criterion for the stability or instability of the method in terms of the stress state and the second derivative of the kernel function. Instability, which typically occurs only for solids in tension, results not from the numerical time integration algorithm, but because the SPH algorithm creates an effective stress with a negative modulus. The analysis provides insight into possible methods for removing the instability. Also, SPH has been coupled into the transient dynamics finite element code PRONTO, and a weighted residual derivation of the SPH equations has been obtained.

  11. Hydrodynamic growth and mix experiments at National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Smalyuk, V. A.; Caggiano, J.; Casey, D.; Cerjan, C.; Clark, D. S.; Edwards, J.; Grim, G.; Haan, S. W.; Hammel, B. A.; Hamza, A.; Hsing, W.; Hurricane, O.; Kilkenny, J.; Kline, J.; Knauer, J.; Landen, O.; McNaney, J.; Mintz, M.; Nikroo, A.; Parham, T.; Park, H.-S.; Pino, J.; Raman, K.; Remington, B. A.; Robey, H. F.; Rowley, D.; Tipton, R.; Weber, S.; Yeamans, C.

    2016-03-01

    Hydrodynamic growth and its effects on implosion performance and mix were studied at the National Ignition Facility (NIF). Spherical shells with pre-imposed 2D modulations were used to measure Rayleigh-Taylor (RT) instability growth in the acceleration phase of implosions using in-flight x-ray radiography. In addition, implosion performance and mix have been studied at peak compression using plastic shells filled with tritium gas and embedding a localized CD diagnostic layer at various locations in the ablator. Neutron yield and ion temperature of the DT fusion reactions were used as a measure of shell-gas mix, while neutron yield of the TT fusion reaction was used as a measure of implosion performance. The results indicated that low-mode hydrodynamic instabilities due to surface roughness were the primary culprits in yield degradation, with atomic ablator-gas mix playing a secondary role.

  12. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716

  13. Spectral image compression for data communications

    NASA Astrophysics Data System (ADS)

    Hauta-Kasari, Markku; Lehtonen, Juha; Parkkinen, Jussi P. S.; Jaeaeskelaeinen, Timo

    2000-12-01

    We report a technique for spectral image compression to be used in the field of data communications. The spectral domain of the images is represented by a low-dimensional component image set, which is used to obtain an efficient compression of the high-dimensional spectral data. The component images are compressed using a technique similar to the chrominance-channel subsampling of JPEG- and MPEG-type compression. The spectral compression is based on Principal Component Analysis (PCA) combined with the color image transmission coding technique of chromatic channel subsampling of the component images. The component images are subsampled using 4:2:2-, 4:2:0-, and 4:1:1-based compressions. In addition, we extended the tests to larger block sizes and larger numbers of component images than in the original JPEG and MPEG standards. In total, 50 natural spectral images were used as test material in our experiments. Several error measures of the compression are reported. The same compressions are performed using Independent Component Analysis and the results are compared with PCA. These methods give a good compression ratio while keeping the visual quality of the color images good. Quantitative comparisons between the original and reconstructed spectral images are presented.

  14. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    SciTech Connect

    Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization, and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.

  15. Modified JPEG Huffman coding.

    PubMed

    Lakhani, Gopal

    2003-01-01

    It is a well-observed characteristic that when a DCT block is traversed in zigzag order, the AC coefficients generally decrease in size and the runs of zero coefficients grow longer. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, all of which move the end-of-block marker up into the middle of the DCT block and use it to indicate the band boundaries. Experimental results are presented to compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. Averaged over the total image code size, one of our methods achieves a code reduction of 4%. Our methods can also be used for progressive image transmission, and hence experimental results are also given to compare them with two-, three-, and four-band implementations of the JPEG spectral selection method. PMID:18237897
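
    A small Python sketch of the mechanism being exploited: generate the zigzag traversal order and split the ordered coefficients into bands, each of which would get its own code table. The band boundaries below are illustrative, not those of the paper.

      def zigzag_order(n=8):
          # Indices of an n-by-n block in JPEG zigzag order: walk the
          # antidiagonals, alternating direction on odd/even diagonals.
          key = lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0])
          return sorted(((i, j) for i in range(n) for j in range(n)), key=key)

      def bands(coeffs, boundaries=(1, 6, 15, 28)):
          # Split zigzag-ordered coefficients into bands (boundaries are
          # illustrative); each band would be coded with its own table.
          order = zigzag_order()
          zz = [coeffs[i][j] for i, j in order]
          cuts = list(boundaries) + [len(zz)]
          out, start = [], 0
          for end in cuts:
              out.append(zz[start:end])
              start = end
          return out

      block = [[i * 8 + j for j in range(8)] for i in range(8)]
      print([len(b) for b in bands(block)])   # band sizes: 1, 5, 9, 13, 36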

  16. Compressed bitmap indices for efficient query processing

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2001-09-30

    Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiency of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression; during query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by trading some compression effectiveness for improved operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster and use only 50 percent more space. The new schemes use much less space (under 30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
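
    To make the flavor of such bitmap compression concrete, here is a minimal Python sketch of a word-aligned run-length scheme in the spirit of WAH-style codes (the paper's actual schemes differ in detail): runs of identical all-zero or all-one 31-bit words collapse into fill tokens, mixed words stay literal. The token layout and word width are illustrative.

      def wah_encode(bits, w=31):
          # Group the bitmap into w-bit words; runs of all-0 or all-1 words
          # become ("fill", bit, count), mixed words stay ("literal", word).
          words = [bits[i:i + w] for i in range(0, len(bits), w)]
          out = []
          for word in words:
              if len(word) == w and (all(word) or not any(word)):
                  bit = word[0]
                  if out and out[-1][0] == "fill" and out[-1][1] == bit:
                      out[-1] = ("fill", bit, out[-1][2] + 1)
                  else:
                      out.append(("fill", bit, 1))
              else:
                  out.append(("literal", tuple(word)))
          return out

      def wah_decode(code, w=31):
          bits = []
          for tok in code:
              if tok[0] == "fill":
                  bits.extend([tok[1]] * (w * tok[2]))
              else:
                  bits.extend(tok[1])
          return bits

      bitmap = [0] * 310 + [1, 0, 1] + [1] * 62
      assert wah_decode(wah_encode(bitmap)) == bitmap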

  17. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods-JPEG, JPEG2000, and HEVC. PMID:27214878

  18. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.

  19. Compressive Hyperspectral Imaging With Side Information

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Tsai, Tsung-Han; Zhu, Ruoyu; Llull, Patrick; Brady, David; Carin, Lawrence

    2015-09-01

    A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary in situ from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera, and the benefit of side information.
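
    The coding-and-superposition step described above takes only a few lines to state. The following Python sketch maps a toy datacube to a single 2-D measurement frame; the dimensions and per-wavelength random binary masks are illustrative (the real camera realizes its codes with a physical modulator).

      import numpy as np

      rng = np.random.default_rng(0)
      H, W, L = 64, 64, 24                    # datacube: height, width, wavelengths
      cube = rng.random((H, W, L))
      codes = rng.integers(0, 2, (H, W, L))   # binary per-wavelength coding masks

      # Each spectral slice is coded, then all slices are superposed into a
      # single frame: the 3-D cube collapses to one 2-D measurement image.
      measurement = (codes * cube).sum(axis=2)     # shape (H, W)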

  20. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.

  1. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  2. Numerical relativistic hydrodynamic simulations of neutron stars

    NASA Astrophysics Data System (ADS)

    Haywood, Joe R.

    Developments in numerical relativistic hydrodynamics over the past thirty years, along with the advent of high-speed computers, have made problems needing general relativity and relativistic hydrodynamics tractable. One such problem is the relativistic evolution of neutron stars, either in a head-on collision or in binary orbit. Also of current interest is the detection of gravitational radiation from binary neutron stars, black hole-neutron star binaries, binary black holes, etc. Such systems are expected to emit gravitational radiation with amplitude large enough to be detected on Earth by such groups as LIGO and VIRGO. Unfortunately, the expected signal strength is below the current noise level. However, signal processing techniques have been developed which should eventually find a signal, if a good theoretical template can be found. In the cases above it is not possible to obtain an analytic solution to the Einstein equations, and a numerical approximation is therefore necessary. In this thesis the Einstein equations are written using the formalism of Arnowitt, Deser and Misner, and a conformally flat metric is assumed. Numerical simulations of colliding neutron stars, having either a realistic or a Gamma = 2 polytropic equation of state (EOS), are presented which confirm the rise in central density seen by [51, 89] for the softer EOS. For the binary calculation, the results of Wilson et al. [89] are confirmed, which show that the neutron stars can collapse to black holes before colliding when the EOS is realistic. We also confirm the results of Miller [56] and others that there is essentially no compression (the central density does not increase) when the stiffer equation of state is used. Finally, a template for the gravitational radiation emitted from the binary is calculated, and we show that the frequency of the emitted gravitational waves changes more slowly for the [89] EOS, which may result in a stronger signal in the 50-100 Hz band of LIGO.

  3. Numerical simulations of glass impacts using smooth particle hydrodynamics

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1996-05-01

    As part of a program to develop advanced hydrocode design tools, we have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. We have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass. Since fractured glass properties, which are needed in the model, are not available, we did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data. © 1996 American Institute of Physics.

  4. Numerical simulations of glass impacts using smooth particle hydrodynamics

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1995-07-01

    As part of a program to develop advanced hydrocode design tools, we have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. We have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass. Since fractured glass properties, which are needed in the model, are not available, we did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  5. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  6. Modeling of Magma Dynamics Based on Two-Fluid Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Perepechko, Y. V.; Sorokin, K.

    2012-12-01

    Multi-velocity, multi-porosity models are often used as a hydrodynamic basis for describing the dynamics of fluid-magma systems. These models cover problems such as fast acoustic processes and the large-scale dynamics of magma systems with incompressible magma. The nonlinear dynamics of magma as a multiphase compressible medium has not been studied sufficiently. In this work we study a nonlinear, thermodynamically consistent two-liquid model of magma system dynamics based on the conservation-law method. The model assumes that local heat balance between the phases is established on short timescales; pressure balance between the phases is not assumed. The two-fluid magma model admits different rheological properties for the composing phases: a viscous liquid and a viscoelastic Maxwell medium. The dynamics of magma flows has been studied for two types of magma systems: magma channels and intraplate intermediate magma chambers. The numerical problem of the dynamics of such media is solved using the control-volume method (CVM), ensuring physical correctness of the solution. The solutions are successfully verified against benchmark one-velocity models. We present results of numerical modeling using CVM for a number of non-stationary problems of nonlinear liquid filtration through a granulated medium in magma channels, and for problems of two-liquid convection in intraplate magma chambers for various parameters. In the latter case the convection regimes vary depending on the dimensionless Rayleigh and Darcy numbers, and the parameter region where compressibility effects appear is located. The given model can be used as a hydrodynamic basis for modeling the evolution of magma and fluid-magma systems and for studying thermo-acoustic influence on hydrodynamic flows in such systems. This work was financially supported by the Russian Foundation for Basic Research, Grant #12-05-00625.

  7. Direct simulation of compressible reacting flows

    NASA Technical Reports Server (NTRS)

    Poinsot, Thierry J.

    1989-01-01

    A research program for direct numerical simulations of compressible reacting flows is described. Two main research subjects are proposed: the effect of pressure waves on turbulent combustion, and the use of direct simulation methods to validate flamelet models for turbulent combustion. The value of a compressible code for studying turbulent combustion is emphasized through examples of reacting shear layer and combustion instability studies. The choice of experimental data for comparison with direct simulation results is discussed. A tentative program is given, and the computational cases to be used are described, as well as the code validation runs.

  8. Embedded memory compression for video and graphics applications

    NASA Astrophysics Data System (ADS)

    Teng, Andy; Gokce, Dane; Aleksic, Mickey; Reznik, Yuriy A.

    2010-08-01

    We describe the design of a low-complexity lossless and near-lossless image compression system with random access, suitable for embedded memory compression applications. This system employs a block-based DPCM coder using variable-length encoding for the residual. As part of this design, we propose to use non-prefix (one-to-one) codes for coding of residuals, and show that they offer improvements in compression performance compared to conventional techniques, such as Golomb-Rice and Huffman codes.
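
    A minimal Python sketch of the block-based DPCM idea: the predictor restarts in every block (which is what preserves random access), and residuals are assigned Golomb-Rice code lengths as a conventional variable-length baseline. The paper's non-prefix (one-to-one) codes, which shave further bits, are not reproduced here; the predictor seed and parameter k are illustrative.

      def zigzag(r):
          # Map signed residuals to non-negative ints: 0,-1,1,-2,2 -> 0,1,2,3,4.
          return 2 * r if r >= 0 else -2 * r - 1

      def rice_length(r, k):
          # Bits used by a Golomb-Rice code: unary quotient + stop bit + k bits.
          return (zigzag(r) >> k) + 1 + k

      def block_dpcm_bits(block, k=2):
          # Left-neighbor DPCM; the predictor restarts at a fixed value in
          # every block, enabling random access into the compressed frame.
          bits, prev = 0, 128
          for x in block:
              bits += rice_length(int(x) - prev, k)
              prev = int(x)
          return bits

      print(block_dpcm_bits([128, 130, 131, 131, 129, 127, 126, 126]))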

  9. New equation of state models for hydrodynamic applications

    NASA Astrophysics Data System (ADS)

    Young, David A.; Barbee, Troy W.; Rogers, Forrest J.

    1998-07-01

    Two new theoretical methods for computing the equation of state of hot, dense matter are discussed. The ab initio phonon theory gives a first-principles calculation of lattice frequencies, which can be used to compare theory and experiment for isothermal and shock compression of solids. The ACTEX dense plasma theory has been improved to allow it to be compared directly with ultrahigh pressure shock data on low-Z materials. The comparisons with experiment are good, suggesting that these models will be useful in generating global EOS tables for hydrodynamic simulations.

  10. A study of eigenvalue sensitivity for hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1993-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudospectra are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette flow, trailing line vortex flow, and compressible Blasius boundary-layer flow. Parameter studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the nonnormality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
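
    The epsilon-pseudospectrum mentioned here has a direct computational form: it is the set of complex z where the smallest singular value of zI - A falls below epsilon. A brief numpy sketch over a grid, with a nonnormal Jordan-type matrix as an arbitrary example (not one of the paper's flow operators):

      import numpy as np

      def pseudospectrum(A, re, im):
          # sigma_min(zI - A) on a grid; small values mark the
          # epsilon-pseudospectrum {z : sigma_min(zI - A) <= epsilon}.
          n = A.shape[0]
          sig = np.empty((len(im), len(re)))
          for i, y in enumerate(im):
              for j, x in enumerate(re):
                  z = (x + 1j * y) * np.eye(n) - A
                  sig[i, j] = np.linalg.svd(z, compute_uv=False)[-1]
          return sig

      # A Jordan-like block is far more sensitive to perturbations than its
      # eigenvalues (all zero) would suggest: a hallmark of nonnormality.
      A = np.diag(np.ones(9), k=1)
      grid = np.linspace(-1.5, 1.5, 61)
      sig = pseudospectrum(A, grid, grid)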

  11. New equation of state model for hydrodynamic applications

    SciTech Connect

    Young, D.A.; Barbee, T.W. III; Rogers, F.J.

    1997-07-01

    Two new theoretical methods for computing the equation of state of hot, dense matter are discussed. The ab initio phonon theory gives a first-principles calculation of lattice frequencies, which can be used to compare theory and experiment for isothermal and shock compression of solids. The ACTEX dense plasma theory has been improved to allow it to be compared directly with ultrahigh pressure shock data on low-Z materials. The comparisons with experiment are good, suggesting that these models will be useful in generating global EOS tables for hydrodynamic simulations.

  12. Abnormal pressures as hydrodynamic phenomena

    USGS Publications Warehouse

    Neuzil, C.E.

    1995-01-01

    So-called abnormal pressures, subsurface fluid pressures significantly higher or lower than hydrostatic, have excited speculation about their origin since subsurface exploration first encountered them. Two distinct conceptual models for abnormal pressures have gained currency among earth scientists. The static model sees abnormal pressures generally as relict features preserved by a virtual absence of fluid flow over geologic time. The hydrodynamic model instead envisions abnormal pressures as phenomena in which flow usually plays an important role. This paper develops the theoretical framework for abnormal pressures as hydrodynamic phenomena, shows that it explains the manifold occurrences of abnormal pressures, and examines the implications of this approach. -from Author

  13. Hydrodynamic interactions in protein folding

    NASA Astrophysics Data System (ADS)

    Cieplak, Marek; Niewieczerzał, Szymon

    2009-03-01

    We incorporate hydrodynamic interactions (HIs) in a coarse-grained and structure-based model of proteins by employing the Rotne-Prager hydrodynamic tensor. We study several small proteins and demonstrate that HIs facilitate folding. We also study HIV-1 protease and show that HIs make the flap closing dynamics faster. The HIs are found to affect time correlation functions in the vicinity of the native state even though they have no impact on same time characteristics of the structure fluctuations around the native state.

  14. Hydrodynamic interactions in protein folding.

    PubMed

    Cieplak, Marek; Niewieczerzał, Szymon

    2009-03-28

    We incorporate hydrodynamic interactions (HIs) in a coarse-grained and structure-based model of proteins by employing the Rotne-Prager hydrodynamic tensor. We study several small proteins and demonstrate that HIs facilitate folding. We also study HIV-1 protease and show that HIs make the flap closing dynamics faster. The HIs are found to affect time correlation functions in the vicinity of the native state even though they have no impact on same time characteristics of the structure fluctuations around the native state. PMID:19334888

  15. PANEL CODE FOR PLANAR CASCADES

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1994-01-01

    The Panel Code for Planar Cascades was developed as an aid for the designer of turbomachinery blade rows. The effective design of turbomachinery blade rows relies on the use of computer codes to model the flow on blade-to-blade surfaces. Most of the currently used codes model the flow as inviscid, irrotational, and compressible with solutions being obtained by finite difference or finite element numerical techniques. While these codes can yield very accurate solutions, they usually require an experienced user to manipulate input data and control parameters. Also, they often limit a designer in the types of blade geometries, cascade configurations, and flow conditions that can be considered. The Panel Code for Planar Cascades accelerates the design process and gives the designer more freedom in developing blade shapes by offering a simple blade-to-blade flow code. Panel, or integral equation, solution techniques have been used for several years by external aerodynamicists who have developed and refined them into a primary design tool of the aircraft industry. The Panel Code for Planar Cascades adapts these same techniques to provide a versatile, stable, and efficient calculation scheme for internal flow. The code calculates the compressible, inviscid, irrotational flow through a planar cascade of arbitrary blade shapes. Since the panel solution technique is for incompressible flow, a compressibility correction is introduced to account for compressible flow effects. The analysis is limited to flow conditions in the subsonic and shock-free transonic range. Input to the code consists of inlet flow conditions, blade geometry data, and simple control parameters. Output includes flow parameters at selected control points. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 590K of 8 bit bytes. This program was developed in 1982.
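
    The abstract does not say which compressibility correction the program applies to its incompressible panel solution; as background, the classical Prandtl-Glauert rule is the simplest such correction. A hedged Python sketch (the cascade code's own correction may well differ):

      import math

      def prandtl_glauert(cp_incompressible, mach):
          # Classical subsonic compressibility correction applied to an
          # incompressible-flow pressure coefficient (illustrative only).
          if not 0.0 <= mach < 1.0:
              raise ValueError("correction valid only for subsonic Mach numbers")
          return cp_incompressible / math.sqrt(1.0 - mach * mach)

      print(prandtl_glauert(-0.5, 0.6))   # corrected Cp at Mach 0.6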

  16. Data compression. [reduction, storage, and transmission of data

    NASA Technical Reports Server (NTRS)

    Babkin, V. F.; Kryukov, A. B.; Shtarkov, Y. M.

    1974-01-01

    An approach to data compression is discussed in which the effect achieved by compression is evaluated by how closely the compressed representation approaches the minimum possible volume. An attempt is made to systematize the known results on data compression. The review contains: a description of methods of data compression based on statistical coding and information theory; the application of methods of interpolation and extrapolation; a specific compression method (related to the description of the histogram of a sample); some criteria of effectiveness and methods of service-information representation; and a discussion of models suggested for theoretical analysis.

  17. On-board image compression for the RAE lunar mission

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
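
    A minimal Python sketch of the two ingredients named above, scan-line skipping and run-length coding; the skip factor and run encoding are illustrative, and the flight system's adaptive run-length scheme and convolutional error protection are not modeled.

      def compress_frame(frame, keep_every=4):
          # Keep every keep_every-th scan line (line skipping), then
          # run-length code each kept line as (value, run) pairs.
          coded = []
          for line in frame[::keep_every]:
              runs, prev, n = [], line[0], 1
              for px in line[1:]:
                  if px == prev:
                      n += 1
                  else:
                      runs.append((prev, n))
                      prev, n = px, 1
              runs.append((prev, n))
              coded.append(runs)
          return coded

      # A mostly uniform 8x8 toy frame compresses to two short run lists.
      frame = [[0] * 8 for _ in range(8)]
      frame[4][3] = 1
      print(compress_frame(frame))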

  18. On the use of standards for microarray lossless image compression.

    PubMed

    Pinho, Armando J; Paiva, António R C; Neves, António J R

    2006-03-01

    The interest in methods that are able to efficiently compress microarray images is relatively new. This is not surprising, since the appearance and fast growth of the technology responsible for producing these images is also quite recent. In this paper, we present a set of compression results obtained with 49 publicly available images, using three image coding standards: lossless JPEG2000, JBIG, and JPEG-LS. We concluded that the compression technology behind JBIG seems to be the one that offers the best combination of compression efficiency and flexibility for microarray image compression. PMID:16532784

  19. Dependability Improvement for PPM Compressed Data by Using Compression Pattern Matching

    NASA Astrophysics Data System (ADS)

    Kitakami, Masato; Okura, Toshihiro

    Data compression is popularly applied to computer systems and communication systems in order to reduce storage size and communication time, respectively. Since large data sets are used frequently, string matching over such data takes a long time. If the data are compressed, the time gets much longer because decompression is necessary. Long string-matching times lengthen computer virus scans and thus seriously affect the security of the data. For this reason, CPM (Compression Pattern Matching) methods have been proposed for several compression schemes. This paper proposes a CPM method for PPM that achieves fast virus scanning and improves the dependability of the compressed data, where PPM is based on a Markov model, uses context information, and achieves a better compression ratio than the BW transform and Ziv-Lempel coding. The proposed method encodes the context information, which is generated in the compression process, and appends the encoded data at the beginning of the compressed data as a header. The proposed method uses only the header information. Computer simulations show that the resulting increase in compressed size is less than 5 percent if the order of the PPM is less than 5 and the source file is larger than 1M bytes, where the order is the maximum length of the context used in PPM compression. String matching time is independent of the source file size and is very short, less than 0.3 microseconds on the PC used for the simulation.

  20. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  1. Perceptually lossy compression of documents

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.; Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.

    1997-06-01

    The main cost of owning a facsimile machine consists of the telephone charges for the communications; thus short transmission times are a key feature of facsimile machines. Similarly, on a packet-routed service such as the Internet, a low number of packets is essential to avoid operator wait times. Concomitantly, user expectations have increased considerably. In facsimile, the switch from binary to full color increases the data size by a factor of 24. On the Internet, the switch from plain-text American Standard Code for Information Interchange (ASCII) encoded files to files marked up in the Hypertext Markup Language (HTML) with ample embedded graphics has increased the size of transactions by several orders of magnitude. A common compression method for raster files in these applications is the Joint Photographic Experts Group (JPEG) method, because efficient implementations are readily available. In this method the implementors design the discrete quantization tables (DQT) and the Huffman tables (HT) to maximize the compression factor while keeping the introduced artifacts at the threshold of perceptual detectability. Unfortunately, the achieved compression rates are unsatisfactory for applications such as color facsimile and World Wide Web (W3) browsing. We present a design methodology for image-independent DQTs that, while producing perceptually lossy data, does not impair the reading performance of users. Combined with a text-sharpening algorithm that compensates for scanning device limitations, the methodology presented in this paper allows us to achieve compression ratios near 1:100.
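
    Since the DQT is the lever the authors tune, it may help to see how a quantization table is conventionally scaled. The Python sketch below uses the base luminance table from Annex K of the JPEG specification and the widely used IJG quality-scaling formula; the paper's image-independent tables are designed differently, so this is background, not their method.

      # Base luminance quantization table from the JPEG specification (Annex K).
      BASE_Q = [
          [16, 11, 10, 16, 24, 40, 51, 61],
          [12, 12, 14, 19, 26, 58, 60, 55],
          [14, 13, 16, 24, 40, 57, 69, 56],
          [14, 17, 22, 29, 51, 87, 80, 62],
          [18, 22, 37, 56, 68, 109, 103, 77],
          [24, 35, 55, 64, 81, 104, 113, 92],
          [49, 64, 78, 87, 103, 121, 120, 101],
          [72, 92, 95, 98, 112, 100, 103, 99],
      ]

      def scale_dqt(quality):
          # Widely used IJG quality scaling of a base quantization table.
          q = max(1, min(100, quality))
          scale = 5000 // q if q < 50 else 200 - 2 * q
          return [[max(1, min(255, (v * scale + 50) // 100)) for v in row]
                  for row in BASE_Q]

      print(scale_dqt(75)[0])   # first row of the table at quality 75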

  2. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  3. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  4. Quasispherical fuel compression and fast ignition in a heavy-ion-driven X-target with one-sided illumination

    NASA Astrophysics Data System (ADS)

    Henestroza, Enrique; Logan, B. Grant; Perkins, L. John

    2011-03-01

    The HYDRA radiation-hydrodynamics code [M. M. Marinak et al., Phys. Plasmas 8, 2275 (2001)] is used to explore one-sided axial target illumination with annular and solid-profile uranium ion beams at 60 GeV to compress and ignite deuterium-tritium fuel filling the volume of metal cases with cross sections in the shape of an "X" (X-target). Quasi-three-dimensional, spherical compression of the fuel toward the X-vertex on axis is obtained by controlling the geometry of the case, the timing, power, and radii of three annuli of ion beams for compression, and the hydroeffects of those beams heating the case as well as the fuel. Scaling projections suggest that this target may be capable of assembling large fuel masses resulting in high fusion yields at modest drive energies. Initial two-dimensional calculations have achieved fuel compression ratios of up to 150X solid density, with an areal density ρR of about 1 g/cm2. At these currently modest fuel densities, fast ignition pulses of 3 MJ, 60 GeV, 50 ps, and radius of 300 μm are injected through a hole in the X-case on axis to further heat the fuel to propagating burn conditions. The resulting burn waves are observed to propagate throughout the tamped fuel mass, with fusion yields of about 300 MJ. Tamping is found to be important, but radiation drive to be unimportant, to the fuel compression. Rayleigh-Taylor instability mix is found to have a minor impact on ignition and subsequent fuel burn-up.

  5. Inertial confinement fusion implosions with imposed magnetic field compression using the OMEGA Laser

    SciTech Connect

    Hohenberger, M.; Chang, P.-Y.; Fiksel, G.; Knauer, J. P.; Marshall, F. J.; Betti, R.; Meyerhofer, D. D.; and others

    2012-05-15

    Experiments applying laser-driven magnetic-flux compression to inertial confinement fusion (ICF) targets to enhance implosion performance are described. Spherical plastic (CH) targets filled with 10 atm of deuterium gas were imploded by the OMEGA Laser (cf. Phys. Plasmas 18, 056703 and Phys. Plasmas 18, 056309). Before being imploded, the targets were immersed in an 80-kG magnetic seed field. Upon laser irradiation, the high implosion velocities and ionization of the target fill trapped the magnetic field inside the capsule, and it was amplified to tens of megagauss through flux compression. At such strong magnetic fields, the hot spot inside the spherical target was strongly magnetized, reducing the heat losses through electron confinement. The experimentally observed ion temperature was enhanced by 15%, and the neutron yield was increased by 30%, compared to nonmagnetized implosions [P. Y. Chang et al., Phys. Rev. Lett. 107, 035006 (2011)]. This represents the first experimental verification of performance enhancement resulting from embedding a strong magnetic field into an ICF capsule. Experimental data for the fuel-assembly performance and magnetic field are compared to numerical results from combining the 1-D hydrodynamics code LILAC with a 2-D magnetohydrodynamics postprocessor.

  6. Hydrodynamic slip in silicon nanochannels

    NASA Astrophysics Data System (ADS)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2016-03-01

    Equilibrium and nonequilibrium molecular dynamics simulations were performed to better understand the hydrodynamic behavior of water flowing through silicon nanochannels. The water-silicon interaction potential was calibrated by means of size-independent molecular dynamics simulations of silicon wettability. The wettability of silicon was found to be dependent on the strength of the water-silicon interaction and the structure of the underlying surface. As a result, the anisotropy was found to be an important factor in the wettability of these types of crystalline solids. Using this premise as a fundamental starting point, the hydrodynamic slip in nanoconfined water was characterized using both equilibrium and nonequilibrium calculations of the slip length under low shear rate operating conditions. As was the case for the wettability analysis, the hydrodynamic slip was found to be dependent on the wetted solid surface atomic structure. Additionally, the interfacial water liquid structure was the most significant parameter to describe the hydrodynamic boundary condition. The calibration of the water-silicon interaction potential performed by matching the experimental contact angle of silicon led to the verification of the no-slip condition, experimentally reported for silicon nanochannels at low shear rates.

  7. Meat Products, Hydrodynamic Pressure Processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The hydrodynamic pressure process (HDP) has been shown to be very effective at improving meat tenderness in a variety of meat cuts. When compared to conventional aging for tenderization, HDP was more effective. The HDP process may offer the meat industry a new alternative for tenderizing meat in add...

  8. Google Earth as a tool in 2-D hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Chien, Nguyen Quang; Keat Tan, Soon

    2011-01-01

    A method for coupling virtual globes with geophysical hydrodynamic models is presented. Virtual globes such as Google™ Earth can be used as a visualization tool to help users create and enter input data. The authors discuss techniques for representing linear and areal geographical objects with KML (Keyhole Markup Language) files generated using computer codes (scripts). Although virtual globes offer very limited tools for data input, some data of categorical or vector type can be entered by users, and then transformed into inputs for the hydrodynamic program by using appropriate scripts. An application with the AnuGA hydrodynamic model was used as an illustration of the method. Firstly, users draw polygons on the Google Earth screen. These features are then saved in a KML file which is read using a script file written in the Lua programming language. After the hydrodynamic simulation has been performed, another script file is used to convert the resulting output text file to a KML file for visualization, where the depths of inundation are represented by the color of discrete point icons. The visualization of a wind speed vector field was also included as a supplementary example.
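
    The paper's scripts are written in Lua; as an illustration in Python (the language used for the other sketches here), the function below emits a minimal KML polygon Placemark of the kind a user could draw or inspect in Google Earth. The file name and coordinates are hypothetical.

      def polygon_kml(name, lonlat_pairs):
          # Minimal KML Placemark with a polygon outline; KML coordinates
          # are "lon,lat,alt" triples separated by spaces.
          coords = " ".join("%f,%f,0" % (lon, lat) for lon, lat in lonlat_pairs)
          return (
              '<?xml version="1.0" encoding="UTF-8"?>\n'
              '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
              '<Placemark><name>%s</name><Polygon><outerBoundaryIs><LinearRing>\n'
              "<coordinates>%s</coordinates>\n"
              "</LinearRing></outerBoundaryIs></Polygon></Placemark>\n"
              "</Document></kml>\n" % (name, coords)
          )

      # A small hypothetical domain boundary; the ring must close on itself.
      ring = [(103.85, 1.29), (103.86, 1.29), (103.86, 1.30), (103.85, 1.29)]
      with open("domain.kml", "w") as f:
          f.write(polygon_kml("model boundary", ring))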

  9. Compressed sensing based video multicast

    NASA Astrophysics Data System (ADS)

    Schenkel, Markus B.; Luo, Chong; Frossard, Pascal; Wu, Feng

    2010-07-01

    We propose a new scheme for wireless video multicast based on compressed sensing. It has the property of graceful degradation and, unlike systems adhering to traditional separate coding, it does not suffer from a cliff effect. Compressed sensing is applied to generate measurements of equal importance from a video such that a receiver with a better channel will naturally have more information at hand to reconstruct the content without penalizing others. We experimentally compare different random matrices at the encoder side in terms of their performance for video transmission. We further investigate how properties of natural images can be exploited to improve the reconstruction performance by transmitting a small amount of side information. And we propose a way of exploiting inter-frame correlation by extending only the decoder. Finally we compare our results with a different scheme targeting the same problem with simulations and find competitive results for some channel configurations.
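
    The graceful-degradation property comes from every measurement being equally important. A minimal Python sketch, assuming a Gaussian measurement matrix and a plain ISTA decoder (the paper's reconstruction machinery is more elaborate), shows recovery error typically falling as a receiver gathers more measurements.

      import numpy as np

      def ista(A, y, lam=0.05, iters=200):
          # Iterative soft-thresholding: a simple sparse decoder for y = A x.
          L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              g = x + A.T @ (y - A @ x) / L
              x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
          return x

      rng = np.random.default_rng(0)
      n, k = 256, 8
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      # Measurements are interchangeable: a receiver that collects more rows
      # simply reconstructs better, with no cliff effect.
      for m in (64, 96, 128):
          A = rng.standard_normal((m, n)) / np.sqrt(m)
          x_hat = ista(A, A @ x_true)
          print(m, np.linalg.norm(x_hat - x_true))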

  10. Learning in compressed space.

    PubMed

    Fabisch, Alexander; Kassahun, Yohannes; Wöhrle, Hendrik; Kirchner, Frank

    2013-06-01

    We examine two methods which are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we will use orthogonal functions and especially random projections for compression and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly. PMID:23501172
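
    The equivalence between compressing a layer's weights and compressing its input is a one-line identity: if the full weight vector is w = Phi @ alpha for a random projection Phi, then w . x = alpha . (Phi^T x). A small numpy check, with sizes chosen arbitrarily:

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_compressed = 100, 10
      x = rng.standard_normal(n_in)
      Phi = rng.standard_normal((n_in, n_compressed))   # random projection
      alpha = rng.standard_normal(n_compressed)          # compressed weights

      w = Phi @ alpha                                    # full weight vector
      # The pre-activation can be computed from the full weights, or from the
      # compressed weights applied to the projected input: identical results.
      assert np.allclose(w @ x, alpha @ (Phi.T @ x))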

  11. Hydrodynamic analysis of time series

    NASA Astrophysics Data System (ADS)

    Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.

    2003-04-01

    It was proved that balance equations for systems with corpuscular structure can be derived if a kinematic description by piece-wise analytic functions is available [1]. For example, the hydrodynamic equations for one-dimensional systems of inelastic particles, derived in [2], were used to prove the inconsistency of the Fourier law of heat with the microscopic structure of the system. The hydrodynamic description is also possible for single-particle systems. In this case, averages of physical quantities associated with the particle over a space-time window, generalizing the usual "moving averages" which are performed over time intervals only, were shown to be almost-everywhere continuous space-time functions. Moreover, they obey balance partial differential equations (a continuity equation for the 'concentration', a Navier-Stokes equation, and so on) [3]. Time series can be interpreted as trajectories in the space of the recorded parameter. Their hydrodynamic interpretation is expected to enable deterministic predictions when closure relations can be obtained for the balance equations. For the time being, a first result is the estimation of the probability density for the occurrence of a given parameter value by the normalized concentration field from the hydrodynamic description. The method is illustrated by hydrodynamic analysis of three types of time series: white noise, stock prices from financial markets, and groundwater levels recorded at the Krauthausen experimental field of Forschungszentrum Jülich (Germany). [1] C. Vamoş, A. Georgescu, N. Suciu, I. Turcu, Physica A 227, 81-92, 1996. [2] C. Vamoş, N. Suciu, A. Georgescu, Phys. Rev. E 55, 5, 6277-6280, 1997. [3] C. Vamoş, N. Suciu, W. Blaj, Physica A 287, 461-467, 2000.
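
    As an illustration of estimating a probability density from a trajectory's occupation of parameter space, here is a minimal Python sketch using a sliding-window histogram; the window length, bin count, and toy random-walk series are arbitrary choices, not the paper's space-time averaging formalism or data.

      import numpy as np

      def occupation_density(series, bins=50, window=200):
          # Histogram of values inside a sliding time window, normalized to a
          # probability density: a crude analogue of the normalized
          # concentration field described above.
          edges = np.linspace(series.min(), series.max(), bins + 1)
          densities = []
          for t in range(0, len(series) - window, window):
              h, _ = np.histogram(series[t:t + window], bins=edges, density=True)
              densities.append(h)
          return edges, np.array(densities)

      levels = np.cumsum(np.random.default_rng(0).standard_normal(5000))
      edges, dens = occupation_density(levels)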

  12. Hydrodynamics of diatom chains and semiflexible fibres.

    PubMed

    Nguyen, Hoa; Fauci, Lisa

    2014-07-01

    Diatoms are non-motile, unicellular phytoplankton that have the ability to form colonies in the form of chains. Depending upon the species of diatoms and the linking structures that hold the cells together, these chains can be quite stiff or very flexible. Recently, the bending rigidities of some species of diatom chains have been quantified. In an effort to understand the role of flexibility in nutrient uptake and aggregate formation, we begin by developing a three-dimensional model of the coupled elastic-hydrodynamic system of a diatom chain moving in an incompressible fluid. We find that simple beam theory does a good job of describing diatom chain deformation in a parabolic flow when its ends are tethered, but does not tell the whole story of chain deformations when they are subjected to compressive stresses in shear. While motivated by the fluid dynamics of diatom chains, our computational model of semiflexible fibres illustrates features that apply widely to other systems. The use of an adaptive immersed boundary framework allows us to capture complicated buckling and recovery dynamics of long, semiflexible fibres in shear. PMID:24789565

  13. Hydrodynamic water impact. [Apollo spacecraft waterlanding

    NASA Technical Reports Server (NTRS)

    Kettleborough, C. F.

    1972-01-01

    The hydrodynamic impact of a falling body upon a viscous incompressible fluid was investigated by numerically solving the equations of motion. Initially the mathematical model simulated the axisymmetric impact of a rigid right circular cylinder upon the initially quiescent free surface of a fluid. A compressible air layer exists between the falling cylinder and the liquid free surface. The mathematical model was developed by applying the Navier-Stokes equations to the compressible air layer and the incompressible fluid. Assuming the flow to be one-dimensional within the air layer, the average velocity, pressure, and density distributions were calculated. The liquid free surface was allowed to deform as the air pressure acting on it increased. For the liquid, the normalized equations were expressed in two-dimensional cylindrical coordinates. The governing equations for the air layer and the liquid were expressed in finite-difference form and solved numerically. For the liquid, a modified version of the Marker-and-Cell method was used. The mathematical model has been reexamined and a new approach has recently been initiated. Essentially this consists of examining the impact of an inclined plate onto a quiescent water surface, with the equations now formulated in Cartesian coordinates.

  14. Hydrodynamic Simulations of Gaseous Argon Shock Experiments

    NASA Astrophysics Data System (ADS)

    Garcia, Daniel; Dattelbaum, Dana; Goodwin, Peter; Morris, John; Sheffield, Stephen; Burkett, Michael

    2015-06-01

    The lack of published Argon gas shock data motivated an evaluation of the Argon Equation of State (EOS) in gas phase initial density regimes never before reached. In particular, these regimes include initial pressures in the range of 200-500 psi (0.025 - 0.056 g/cc) and initial shock velocities around 0.2 cm/μs. The objective of the numerical evaluation was to develop a physical understanding of the EOS behavior of shocked and subsequently multiply re-shocked Argon gas initially pressurized to 200-500 psi through Pagosa numerical hydrodynamic simulations utilizing the SESAME equation of state. Pagosa is a Los Alamos National Laboratory 2-D and 3-D Eulerian hydrocode capable of modeling high velocity compressible flow with multiple materials. The approach involved the use of gas gun experiments to evaluate the shock and multiple re-shock behavior of pressurized Argon gas to validate Pagosa simulations and the SESAME EOS. Additionally, the diagnostic capability within the experiments allowed for the EOS to be fully constrained with measured shock velocity, particle velocity and temperature. The simulations demonstrate excellent agreement with the experiments in the shock velocity/particle velocity space, but note unanticipated differences in the ionization front temperatures.

  15. Hydrodynamics of diatom chains and semiflexible fibres

    PubMed Central

    Nguyen, Hoa; Fauci, Lisa

    2014-01-01

    Diatoms are non-motile, unicellular phytoplankton that have the ability to form colonies in the form of chains. Depending upon the species of diatoms and the linking structures that hold the cells together, these chains can be quite stiff or very flexible. Recently, the bending rigidities of some species of diatom chains have been quantified. In an effort to understand the role of flexibility in nutrient uptake and aggregate formation, we begin by developing a three-dimensional model of the coupled elastic–hydrodynamic system of a diatom chain moving in an incompressible fluid. We find that simple beam theory does a good job of describing diatom chain deformation in a parabolic flow when its ends are tethered, but does not tell the whole story of chain deformations when they are subjected to compressive stresses in shear. While motivated by the fluid dynamics of diatom chains, our computational model of semiflexible fibres illustrates features that apply widely to other systems. The use of an adaptive immersed boundary framework allows us to capture complicated buckling and recovery dynamics of long, semiflexible fibres in shear. PMID:24789565
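
    For the tethered-ends case above, classical beam theory supplies a closed-form baseline to compare against. A minimal sketch, assuming a simply supported Euler-Bernoulli beam under a uniform load with hypothetical parameters (the paper's hydrodynamic load is parabolic and its actual solver is an adaptive immersed boundary method):

        import numpy as np

        def beam_deflection_uniform(q, L, EI, x):
            """Simply supported Euler-Bernoulli beam under uniform load q:
            w(x) = q*x*(L**3 - 2*L*x**2 + x**3) / (24*EI)."""
            return q * x * (L**3 - 2*L*x**2 + x**3) / (24.0 * EI)

        x = np.linspace(0.0, 1.0, 101)
        w = beam_deflection_uniform(q=1.0, L=1.0, EI=5.0, x=x)
        print(w.max())   # midspan deflection, 5*q*L**4/(384*EI) ~ 0.0026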

  16. Low complexity efficient raw SAR data compression

    NASA Astrophysics Data System (ADS)

    Rane, Shantanu; Boufounos, Petros; Vetro, Anthony; Okada, Yu

    2011-06-01

    We present a low-complexity method for compression of raw Synthetic Aperture Radar (SAR) data. Raw SAR data is typically acquired using a satellite or airborne platform without sufficient computational capabilities to process the data and generate a SAR image on-board. Hence, the raw data needs to be compressed and transmitted to the ground station, where SAR image formation can be carried out. To perform low-complexity compression, our method uses 1-dimensional transforms, followed by quantization and entropy coding. In contrast to previous approaches, which send uncompressed or Huffman-coded bits, we achieve more efficient entropy coding using an arithmetic coder that responds to a continuously updated probability distribution. We present experimental results on compression of raw Ku-SAR data. In these experiments we evaluate the effect of the length of the transform on compression performance and demonstrate the advantages of the proposed framework over a state-of-the-art low complexity scheme called Block Adaptive Quantization (BAQ).
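
    The gain over Huffman-coded bits comes from the adaptive model: each symbol costs -log2(p) bits under a probability estimate that is updated as symbols arrive. A minimal sketch of the pipeline with a hypothetical transform length of 64 and an illustrative quantizer step (the paper's actual transform and quantizer settings are not reproduced here):

        import math
        import numpy as np
        from collections import Counter
        from scipy.fft import dct

        def adaptive_code_length(symbols, alphabet_size):
            """Ideal bit cost of an adaptive arithmetic coder (Laplace-smoothed counts)."""
            counts, total, bits = Counter(), 0, 0.0
            for s in symbols:
                p = (counts[s] + 1) / (total + alphabet_size)   # model updates as we code
                bits -= math.log2(p)
                counts[s] += 1
                total += 1
            return bits

        x = np.random.randn(4096)                              # toy raw samples
        coeffs = dct(x.reshape(-1, 64), axis=1, norm='ortho')  # 1-D transform, length 64
        q = np.clip(np.round(coeffs / 0.25), -128, 127).astype(int) + 128
        print(adaptive_code_length(q.ravel(), 256) / x.size, "bits/sample")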

  17. An image compression technique for use on token ring networks

    NASA Technical Reports Server (NTRS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-01-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.
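
    ADPCM details are not given in the abstract, so the sketch below shows its simplest relative, a 1-bit delta modulator whose step size adapts from the transmitted bits alone, keeping encoder and decoder in lockstep; all constants are illustrative:

        def dm_encode(x, step=0.05):
            """1-bit delta modulation with bit-driven step adaptation."""
            pred, prev, codes = 0.0, None, []
            for s in x:
                c = 1 if s >= pred else 0
                codes.append(c)
                pred += step if c else -step
                step = max(step * (1.3 if c == prev else 0.7), 1e-4)  # repeated bit => grow step
                prev = c
            return codes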

  18. Calculating cold curves for Equation of State using different types of Density Functional Theory codes

    NASA Astrophysics Data System (ADS)

    Mattsson, Ann E.; Cochrane, Kyle R.; Carpenter, John H.; Desjarlais, Michael P.

    2008-03-01

    With fast computers and improved radiation-hydrodynamics simulation techniques, increasingly complex high energy-density physics systems are investigated by modeling and simulation efforts, putting unprecedented strain on the underlying Equation of State (EOS) modeling. EOS models that have been adequate in the past can fail in unexpected ways. With the aim of improving the EOS, models are often fitted to calculated data in parts of the parameter space where little or no experimental data is available. One example is the compression part of the cold curve. We show that care needs to be taken in using Density Functional Theory (DFT) codes. While being perfectly adequate for calculations in many parts of the parameter space, approximations inherent to pseudo-potential codes can limit their applicability for large compressions. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  19. Compressing bitmap indexes for faster search operations

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
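
    The word alignment is the crux: runs are stored in units of w-1 bits so every operation touches whole machine words. A sketch of the encoder under one plausible word layout (MSB flags a fill word, the next bit gives the fill value, and the rest counts run length in groups); the published WAH layout may differ in detail:

        def wah_encode(bits, w=32):
            """Word-Aligned Hybrid sketch. Literal word: MSB 0 plus w-1 raw bits.
            Fill word: MSB 1, fill value, run length counted in (w-1)-bit groups."""
            g = w - 1
            words, run_val, run_len = [], 0, 0

            def flush_run():
                nonlocal run_len
                if run_len:
                    words.append((1 << (w - 1)) | (run_val << (w - 2)) | run_len)
                    run_len = 0

            for i in range(0, len(bits), g):
                grp = bits[i:i + g]
                if len(grp) == g and all(b == grp[0] for b in grp):
                    if run_len and grp[0] != run_val:
                        flush_run()
                    run_val, run_len = grp[0], run_len + 1
                else:                              # mixed (or final short) group -> literal
                    flush_run()
                    lit = 0
                    for b in grp:
                        lit = (lit << 1) | b
                    words.append(lit)
            flush_run()
            return words

        print(wah_encode([0] * 62 + [1, 0] * 10))  # one fill word, then a literal

    Logical AND and OR can then walk two such word lists directly, which is where the reported speedup over byte- and bit-granularity schemes comes from.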

  20. An Efficient Variable-Length Data-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Kiely, Aaron B.

    1996-01-01

    Adaptive variable-length coding scheme for compression of stream of independent and identically distributed source data involves either Huffman code or alternating run-length Huffman (ARH) code, depending on characteristics of data. Enables efficient compression of output of lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for same purpose. In addition, scheme suitable for parallel implementation on hardware with modular structure, provides for rapid adaptation to changing data source, compatible with block orientation to alleviate memory requirements, ensures efficiency over wide range of entropy, and easily combined with such other communication schemes as those for containment of errors and for packetization.
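
    The ARH variant is not fully specified in the abstract; the sketch below shows only the plain-Huffman half of such a scheme, built with a heap in the textbook way:

        import heapq

        def huffman_code(freqs):
            """Huffman construction: repeatedly merge the two lightest subtrees,
            prefixing '0'/'1' to the codewords of their symbols."""
            heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(sorted(freqs.items()))]
            heapq.heapify(heap)
            tiebreak = len(heap)
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)
                f2, _, c2 = heapq.heappop(heap)
                merged = {s: "0" + code for s, code in c1.items()}
                merged.update({s: "1" + code for s, code in c2.items()})
                tiebreak += 1
                heapq.heappush(heap, (f1 + f2, tiebreak, merged))
            return heap[0][2]

        print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 5}))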

  1. Incipient transition phenomena in compressible flows over a flat plate

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.

    1986-01-01

    The full three-dimensional time-dependent compressible Navier-Stokes equations are solved by a Fourier-Chebyshev method to study the stability of compressible flows over a flat plate. After the code is validated in the linear regime, it is applied to study the existence of the secondary instability mechanism in the supersonic regime.

  2. Volume transport and generalized hydrodynamic equations for monatomic fluids.

    PubMed

    Eu, Byung Chan

    2008-10-01

    In this paper, the effects of volume transport on the generalized hydrodynamic equations for a pure simple fluid are examined from the standpoint of statistical mechanics and, in particular, kinetic theory of fluids. First, we derive the generalized hydrodynamic equations, namely, the constitutive equations for the stress tensor and heat flux for a single-component monatomic fluid, from the generalized Boltzmann equation in the presence of volume transport. Then their linear steady-state solutions are derived and examined with regard to the effects of volume transport on them. The generalized hydrodynamic equations and linear constitutive relations obtained for nonconserved variables make it possible to assess Brenner's proposition [Physica A 349, 11 (2005); Physica A 349, 60 (2005)] for volume transport and attendant mass and volume velocities as well as the effects of volume transport on the Newtonian law of viscosity, compression/dilatation (bulk viscosity) phenomena, and Fourier's law of heat conduction. On the basis of the study made, it is concluded that the notion of volume transport is sufficiently significant to retain in irreversible thermodynamics of fluids and fluid mechanics. PMID:19045107

  3. Hydrodynamic fluctuation-induced forces in confined fluids.

    PubMed

    Monahan, Christopher; Naji, Ali; Horgan, Ronald; Lu, Bing-Sui; Podgornik, Rudolf

    2016-01-14

    We study thermal, fluctuation-induced hydrodynamic interaction forces in a classical, compressible, viscous fluid confined between two rigid, planar walls with no-slip boundary conditions. We calculate hydrodynamic fluctuations using the linearized, stochastic Navier-Stokes formalism of Landau and Lifshitz. The mean fluctuation-induced force acting on the fluid boundaries vanishes in this system, so we evaluate the two-point, time-dependent force correlations. The equal-time correlation function of the forces acting on a single wall gives the force variance, which we show to be finite and independent of the plate separation at large inter-plate distances. The equal-time, cross-plate force correlation, on the other hand, decays with the inverse inter-plate distance and is independent of the fluid viscosity at large distances; it turns out to be negative over the whole range of plate separations, indicating that the two bounding plates are subjected to counter-phase correlations. We show that the time-dependent force correlations exhibit damped temporal oscillations for small plate separations and a more irregular oscillatory behavior at large separations. The long-range hydrodynamic correlations reported here represent a "secondary Casimir effect", because the mean fluctuation-induced force, which represents the primary Casimir effect, is absent. PMID:26477742

  4. Hydrodynamics of coalescing binary neutron stars: Ellipsoidal treatment

    NASA Technical Reports Server (NTRS)

    Lai, Dong; Shapiro, Stuart L.

    1995-01-01

    We employ an approximate treatment of dissipative hydrodynamics in three dimensions to study the coalescence of binary neutron stars driven by the emission of gravitational waves. The stars are modeled as compressible ellipsoids obeying a polytropic equation of state; all internal fluid velocities are assumed to be linear functions of the coordinates. The hydrodynamics equations then reduce to a set of coupled ordinary differential equations for the evolution of the principal axes of the ellipsoids, the internal velocity parameters, and the binary orbital parameters. Gravitational radiation reaction and viscous dissipation are both incorporated. We set up exact initial binary equilibrium configurations and follow the transition from the quasi-static, secular decay of the orbit at large separation to the rapid dynamical evolution of the configurations just prior to contact. A hydrodynamical instability resulting from tidal interactions significantly accelerates the coalescence at small separation, leading to appreciable radial infall velocity and tidal lag angles near contact. This behavior is reflected in the gravitational waveforms and may be observable by gravitational wave detectors under construction. In cases where the neutron stars have spins which are not aligned with the orbital angular momentum, the spin-induced quadrupole moment can lead to precession of the orbital plane and therefore modulation of the gravitational wave amplitude even at large orbital radius. However, the amplitude of the modulation is small for typical neutron star binaries with spins much smaller than the orbital angular momentum.

  5. Magnetothermal instability in laser plasmas including hydrodynamic effects

    SciTech Connect

    Bissell, J. J.; Kingham, R. J.; Ridgers, C. P.

    2012-05-15

    The impact of both density gradients and hydrodynamics on the evolution of the field compressing magnetothermal instability is considered [J. J. Bissell et al., Phys. Rev. Lett. 105, 175001 (2010)]. Hydrodynamic motion is found to have a limited effect on overall growth-rates; however, density gradients are shown to introduce an additional source term corresponding to a generalised description of the field generating thermal instability [D. Tidman and R. Shanny, Phys. Fluids 17, 1207 (1974)]. The field compressing and field generating source terms are contrasted, and the former is found to represent either the primary or sole instability mechanism for a range of conditions, especially those with Hall parameter χ > 10^-1. The generalised theory is compared to numerical simulation in the context of a recent nano-second gas-jet experiment [D. H. Froula et al., Phys. Rev. Lett. 98, 135001 (2007)] and shown to be in good agreement: exhibiting peak growth-rates and wavelengths of order 10 ns^-1 and 50 μm, respectively. The instability's relevance to other experimental conditions, including those in inertial confinement fusion (I.C.F.) hohlraums, is also discussed.

  6. Transform coding for space applications

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit rate driven with the goal of getting the best quality possible with a given bandwidth. Science applications are quality driven with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple, integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications are different from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground) rather than vice versa. Energy compaction with the new transforms is compared with that of the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
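
    The appeal of the Walsh-Hadamard transform for a simple space-borne encoder is that it needs only additions and subtractions. A minimal unnormalized sketch with a toy energy-compaction check (signal and length are illustrative):

        import numpy as np

        def wht(x):
            """Fast Walsh-Hadamard transform; length must be a power of two."""
            x = np.asarray(x, dtype=float).copy()
            n, h = x.size, 1
            while h < n:
                for i in range(0, n, 2 * h):
                    a, b = x[i:i+h].copy(), x[i+h:i+2*h].copy()
                    x[i:i+h], x[i+h:i+2*h] = a + b, a - b
                h *= 2
            return x

        sig = np.linspace(0.0, 1.0, 64) ** 2          # smooth toy signal
        energy = np.sort(wht(sig) ** 2)[::-1]
        print(energy[:8].sum() / energy.sum())        # most energy in a few coefficients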

  7. A strategy for reducing stagnation phase hydrodynamic instability growth in inertial confinement fusion implosions

    NASA Astrophysics Data System (ADS)

    Clark, D. S.; Robey, H. F.; Smalyuk, V. A.

    2015-05-01

    Encouraging progress is being made in demonstrating control of ablation front hydrodynamic instability growth in inertial confinement fusion implosion experiments on the National Ignition Facility [E. I. Moses, R. N. Boyd, B. A. Remington, C. J. Keane, and R. Al-Ayat, Phys. Plasmas 16, 041006 (2009)]. Even once ablation front instabilities are controlled, however, instability during the stagnation phase of the implosion can still quench ignition. A scheme is proposed to reduce the growth of stagnation phase instabilities through the reverse of the "adiabat shaping" mechanism proposed to control ablation front growth. Two-dimensional radiation hydrodynamics simulations confirm that improved stagnation phase stability should be possible without compromising fuel compression.

  8. Isogeometric analysis of Lagrangian hydrodynamics: Axisymmetric formulation in the rz-cylindrical coordinates

    NASA Astrophysics Data System (ADS)

    Bazilevs, Y.; Long, C. C.; Akkerman, I.; Benson, D. J.; Shashkov, M. J.

    2014-04-01

    A recent Isogeometric Analysis (IGA) formulation of Lagrangian shock hydrodynamics [4] is extended to the 3D axisymmetric case. The Euler equations of compressible hydrodynamics are formulated using the rz-cylindrical coordinates, and are discretized in the weak form using NURBS-based IGA. Artificial shock viscosity and internal energy projection are added to stabilize the formulation. The resulting discretization exhibits good accuracy and robustness properties. It also gives exact symmetry preservation on the appropriately constructed meshes. Several benchmark examples are computed to examine the performance of the proposed formulation.

  9. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  10. Sonar feature-based bandwidth compression

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1992-07-01

    A sonar bandwidth compression (BWC) technique which, unlike conventional methods, adaptively varies the coding resolution in the compression process based on a priori information is described. This novel approach yields a robust compression system whose performance exceeds the conventional methods by factors of 2-to-1 and 1.5-to-1 for display-formatted and time series sonar data, respectively. The data is first analyzed by a feature extraction routine to determine those pixels of the image that collectively comprise intelligence-bearing signal features. The data is then split into a foreground image which contains the extracted source characteristic and a larger background image which is the remainder. Since the background image is highly textured, it suffices to code only the local statistics rather than the actual pixels themselves. This results in a substantial reduction of the bit rate required to code the background image. The feature-based compression algorithm developed for sonar imagery data is also extended to the sonar time series data via a novel approach involving an initial one-dimensional DCT transformation of the time series data before the actual compression process. The unique advantage of this approach is that the coding is done in an alternative two-dimensional image domain where, unlike the original time domain, it is possible to observe, differentiate, and prioritize essential features of data in the compression process. The feature-based BWC developed for sonar data is potentially very useful for applications involving highly textured imagery. Two such applications are synthetic aperture radar and ultrasound medical imaging.

  11. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  12. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms were introduced and employed for this purpose including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and Burrow-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.
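
    The row-and-column elimination half of the algorithm can be sketched directly: all-background rows and columns of a binary textual image are dropped and recorded as bit-masks (the universal fixed-to-variable codebook half is not reproduced here):

        import numpy as np

        def row_column_eliminate(img):
            """Drop blank rows/columns; the two masks suffice to restore them."""
            img = np.asarray(img, dtype=bool)
            row_mask = img.any(axis=1)                # True where a row contains ink
            col_mask = img.any(axis=0)
            return img[np.ix_(row_mask, col_mask)], row_mask, col_mask

        def restore(core, row_mask, col_mask):
            out = np.zeros((row_mask.size, col_mask.size), dtype=bool)
            out[np.ix_(row_mask, col_mask)] = core
            return out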

  13. 3D MHD Simulations of Spheromak Compression

    NASA Astrophysics Data System (ADS)

    Stuber, James E.; Woodruff, Simon; O'Bryan, John; Romero-Talamas, Carlos A.; Darpa Spheromak Team

    2015-11-01

    The adiabatic compression of compact tori could lead to a compact and hence low cost fusion energy system. The critical scientific issues in spheromak compression relate both to confinement properties and to the stability of the configuration undergoing compression. We present results from the NIMROD code modified with the addition of magnetic field coils that allow us to examine the role of rotation on the stability and confinement of the spheromak (extending prior work for the FRC). We present results from a scan in initial rotation, from 0 to 100 km/s. We show that strong rotational shear (10 km/s over 1 cm) occurs. We compare the simulation results with analytic scaling relations for adiabatic compression. Work performed under DARPA grant N66001-14-1-4044.

  14. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  15. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background: Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results: We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions: Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time
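
    The data structure doing the work here is the Bloom filter: set membership with no false negatives and a tunable false-positive rate. A minimal sketch (BARCODE's cascade of filters and its reference-guided decoding are not reproduced):

        import hashlib

        class BloomFilter:
            """k hash positions in an m-bit array; queries never miss inserted items."""
            def __init__(self, m, k):
                self.m, self.k = m, k
                self.bits = bytearray((m + 7) // 8)
            def _positions(self, item):
                for i in range(self.k):
                    h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.m
            def add(self, item):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)
            def __contains__(self, item):
                return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

        bf = BloomFilter(m=1 << 20, k=4)
        bf.add("ACGTACGTACGT")
        print("ACGTACGTACGT" in bf, "TTTTTTTTTTTT" in bf)   # True, almost surely False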

  16. γ^2 Velorum: combining interferometric observations with hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Lamberts, A.; Millour, F.

    2015-12-01

    Colliding stellar winds in massive binary systems have been studied through their radio and strong X-ray emission for decades. More recently, spectro-interferometric observations in the near infrared have become available for certain binaries, but identifying the different contributions to the emission remains a challenge. Multidimensional hydrodynamic simulations reveal a complex double shocked structure and can guide the analysis of observational data. In this work, we analyse the wind collision region in the WR+O binary, γ^2 Velorum. We combine multi-epoch AMBER observations with mock data obtained with hydrodynamic simulations with the RAMSES code. We assess the contributions of the wind collision region in order to constrain the wind structure of both stars.

  17. New Equation of State Models for Hydrodynamic Applications

    NASA Astrophysics Data System (ADS)

    Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.

    1997-07-01

    Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.

  18. Development and Implementation of Radiation-Hydrodynamics Verification Test Problems

    SciTech Connect

    Marcath, Matthew J.; Wang, Matthew Y.; Ramsey, Scott D.

    2012-08-22

    Analytic solutions to the radiation-hydrodynamic equations are useful for verifying any large-scale numerical simulation software that solves the same set of equations. The one-dimensional, spherically symmetric Coggeshall No. 9 and No. 11 analytic solutions, cell-averaged over a uniform grid, have been developed to analyze the corresponding solutions from the Los Alamos National Laboratory Eulerian Applications Project radiation-hydrodynamics code xRAGE. These Coggeshall solutions have been shown to be independent of heat conduction, providing a unique opportunity for comparison with xRAGE solutions with and without the heat conduction module. Solution convergence was analyzed based on radial step size. Since no shocks are involved in either problem and the solutions are smooth, second-order convergence was expected for both cases. The global L1 errors were used to estimate the convergence rates with and without the heat conduction module implemented.
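
    The convergence analysis described above reduces to one line: given global L1 errors at two step sizes related by refinement ratio r, the observed order is p = log(E_coarse/E_fine) / log(r). A sketch with hypothetical error values:

        import math

        def observed_order(e_coarse, e_fine, r=2.0):
            """Observed convergence order from errors at two resolutions."""
            return math.log(e_coarse / e_fine) / math.log(r)

        print(observed_order(4.0e-4, 1.0e-4))   # -> 2.0, i.e. second order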

  19. Hydrodynamic Simulations of Close and Contact Binary Systems using Bipolytropes

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan

    2016-01-01

    I will present the results of hydrodynamic simulations of close and contact bipolytropic binary systems. This project is motivated by the peculiar case of the red nova, V1309 Sco, which is indeed a merger of a contact binary. Both the stars are believed to have evolved off the main sequence by the time of the merger and possess a small helium core. In order to represent the binary accurately, I need a core-envelope structure for both the stars. I have achieved this using bipolytropes or composite polytropes. For the simulations, I use an explicit 3D Eulerian hydrodynamics code in cylindrical coordinates. I will discuss the evolution and merger scenarios of systems with different mass ratios and core mass fractions as well as the effects due to the treatment of the adiabatic exponent.

  20. Hydrodynamic Simulations of Shell Convection in Stellar Cores

    NASA Astrophysics Data System (ADS)

    Mocák, Miroslav; Müller, Ewald; Siess, Lionel

    Shell convection driven by nuclear burning in a stellar core is a common hydrodynamic event in the evolution of many types of stars. We encounter and simulate this convection (1) in the helium core of a low-mass red giant during core helium flash leading to a dredge-down of protons across an entropy barrier, (2) in a carbon-oxygen core of an intermediate-mass star during core carbon flash, and (3) in the oxygen and carbon burning shell above the silicon-sulfur rich core of a massive star prior to supernova explosion. Our results, which were obtained with the hydrodynamics code HERAKLES, suggest that both entropy gradients and entropy barriers are less important for stellar structure than commonly assumed. Our simulations further reveal a new dynamic mixing process operating below the base of shell convection zones.

  1. Compressible turbulent mixing: Effects of compressibility

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2016-04-01

    We studied by numerical simulations the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The driving forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^-5/3 scaling and was negligibly influenced by compressibility. The growth of the Mach number showed (1) a first reduction and second enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) a first stronger and second weaker intermittency of scalar relative to that of velocity; and (4) an increase in the intermittency parameter which measures the intermittency of scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidal-forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressive-forced flow, the field exhibited regions dominated by the large-scale motions of rarefaction and compression.

  2. An Empirical Evaluation of Coding Methods for Multi-Symbol Alphabets.

    ERIC Educational Resources Information Center

    Moffat, Alistair; And Others

    1994-01-01

    Evaluates the performance of different methods of data compression coding in several situations. Huffman's code, arithmetic coding, fixed codes, fast approximations to arithmetic coding, and splay coding are discussed in terms of their speed, memory requirements, and proximity to optimal performance. Recommendations for the best methods of…

  3. Hydrodynamic interactions between rotating helices.

    PubMed

    Kim, MunJu; Powers, Thomas R

    2004-06-01

    Escherichia coli bacteria use rotating helical flagella to swim. At this scale, viscous effects dominate inertia, and there are significant hydrodynamic interactions between nearby helices. These interactions cause the flagella to bundle during the "runs" of bacterial chemotaxis. Here we use slender-body theory to solve for the flow fields generated by rigid helices rotated by stationary motors. We determine how the hydrodynamic forces and torques depend on phase and phase difference, show that rigid helices driven at constant torque do not synchronize, and solve for the flows. We also use symmetry arguments based on kinematic reversibility to show that for two rigid helices rotating with zero phase difference, there is no time-averaged attractive or repulsive force between the helices. PMID:15244620

  4. Hydrodynamic damage to animal cells.

    PubMed

    Chisti, Y

    2001-01-01

    Animal cells are affected by hydrodynamic forces that occur in culture vessel, transfer piping, and recovery operations such as microfiltration. Depending on the type, intensity, and duration of the force, and the specifics of the cell, the force may induce various kinds of responses in the subject cells. Both biochemical and physiological responses are observed, including apoptosis and purely mechanical destruction of the cell. This review examines the kinds of hydrodynamic forces encountered in bioprocessing equipment and the impact of those forces on cells. Methods are given for quantifying the magnitude of the specific forces, and the response thresholds are noted for the common types of cells cultured in free suspension, supported on microcarriers, and anchored to stationary surfaces. PMID:11451047

  5. Brain vascular and hydrodynamic physiology

    PubMed Central

    Tasker, Robert C.

    2013-01-01

    Protecting the brain in vulnerable infants undergoing surgery is a central aspect of perioperative care. Understanding the link between blood flow, oxygen delivery and oxygen consumption leads to a more informed approach to bedside care. In some cases, we need to consider how high can we let the partial pressure of carbon dioxide go before we have concerns about risk of increased cerebral blood volume and change in intracranial hydrodynamics? Alternatively, in almost all such cases, we have to address the question of how low can we let the blood pressure drop before we should be concerned about brain perfusion? This review provides a basic understanding of brain bioenergetics, hemodynamics, hydrodynamics, autoregulation and vascular homeostasis to changes in blood gases that is fundamental to our thinking about bedside care and monitoring. PMID:24331089

  6. Generic Conditions for Hydrodynamic Synchronization

    NASA Astrophysics Data System (ADS)

    Uchida, Nariya; Golestanian, Ramin

    2011-02-01

    Synchronization of actively oscillating organelles such as cilia and flagella facilitates self-propulsion of cells and pumping fluid in low Reynolds number environments. To understand the key mechanism behind synchronization induced by hydrodynamic interaction, we study a model of rigid-body rotors making fixed trajectories of arbitrary shape under driving forces that are arbitrary functions of the phase. For a wide class of geometries, we obtain the necessary and sufficient conditions for synchronization of a pair of rotors. We also find a novel synchronized pattern with an oscillating phase shift. Our results shed light on the role of hydrodynamic interactions in biological systems, and could help in developing efficient mixing and transport strategies in microfluidic devices.

  7. Microscopic derivation of discrete hydrodynamics.

    PubMed

    Español, Pep; Anero, Jesús G; Zúñiga, Ignacio

    2009-12-28

    By using the standard theory of coarse graining based on Zwanzig's projection operator, we derive the dynamic equations for discrete hydrodynamic variables. These hydrodynamic variables are defined in terms of the Delaunay triangulation. The resulting microscopically derived equations can be understood, a posteriori, as a discretization on an arbitrary irregular grid of the Navier-Stokes equations. The microscopic derivation provides a set of discrete equations that exactly conserves mass, momentum, and energy and the dissipative part of the dynamics produces strict entropy increase. In addition, the microscopic derivation provides a practical implementation of thermal fluctuations in a way that the fluctuation-dissipation theorem is satisfied exactly. This paper points toward a close connection between coarse-graining procedures from microscopic dynamics and discretization schemes for partial differential equations. PMID:20059064

  8. MAFCO: A Compression Tool for MAF Files

    PubMed Central

    Matos, Luís M. O.; Neves, António J. R.; Pratas, Diogo; Pinho, Armando J.

    2015-01-01

    In the last decade, the cost of genomic sequencing has been decreasing so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be efficiently stored, because storage cost is not decreasing as fast as the cost of sequencing. In order to overcome this problem, the most popular general-purpose compression tool, gzip, is usually used. However, such general-purpose tools were not specifically designed to compress this kind of data, and often fall short when the intention is to reduce the data size as much as possible. There are several compression algorithms available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, containing alignments between entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from 34% to 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source-code and binaries for several operating systems are freely available for non-commercial use at: http://bioinformatics.ua.pt/software/mafco. PMID:25816229
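
    Since the reported gains are measured against gzip, any re-evaluation starts from a gzip baseline. A small sketch computing that baseline on a toy MAF-like block; the gain definition 1 - size(tool)/size(gzip) is an assumption here, and the alignment fields are illustrative:

        import gzip

        maf = (b"a score=23262.0\n"
               b"s hg18.chr7    27578828 38 + 158545518 AAAGGGAATGTTAACCAAATGAATTGTCTCT\n"
               b"s panTro1.chr6 28741140 38 + 161576975 AAAGGGAATGTTAACCAAATGAATTGTCTCT\n\n")
        baseline = len(gzip.compress(maf * 1000, compresslevel=9))
        tool_size = 0.6 * baseline          # stand-in for a dedicated coder's output
        print("gain vs gzip:", 1.0 - tool_size / baseline)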

  9. An efficient compression scheme for bitmap indices

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrate on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speedup these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but, the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
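
    The claim that work scales with the number of hits follows from how a range query uses the index: one bitmap per attribute value, OR-ed over the query range. A minimal equality-encoded sketch, with the bitmaps left uncompressed for clarity:

        import numpy as np

        def build_bitmap_index(values, cardinality):
            """One bitmap per distinct value (equality encoding)."""
            return [np.array([v == c for v in values]) for c in range(cardinality)]

        def range_query(index, lo, hi):
            """Rows with lo <= value <= hi: OR the bitmaps across the range."""
            out = np.zeros_like(index[0])
            for b in index[lo:hi + 1]:
                out |= b
            return out

        vals = [3, 1, 4, 1, 5, 2, 6]
        idx = build_bitmap_index(vals, cardinality=7)
        print(np.nonzero(range_query(idx, 2, 5))[0])   # rows 0, 2, 4, 5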

  10. Hydrodynamic modelling of small upland lakes under strong wind forcing

    NASA Astrophysics Data System (ADS)

    Morales, L.; French, J.; Burningham, H.

    2012-04-01

    Small lakes (area < 1 km²) represent 46.3% of the total lake surface globally and constitute an important source of water supply. Lakes also provide an important sedimentary archive of environmental and climate changes and ecosystem function. Hydrodynamic controls on the transport and distribution of lake sediments, and also seasonal variations in thermal structure due to solar radiation, precipitation, evaporation and mixing and the complex vertical and horizontal circulation patterns induced by the action of wind are not very well understood. The work presented here analyses hydrodynamic motions present in small upland lakes due to circulation and internal scale waves, and their linkages with the distribution of bottom sediment accumulation in the lake. For this purpose, a 3D hydrodynamic model is calibrated and implemented for Llyn Conwy, a small oligotrophic upland lake in North Wales, UK. The model, based around the FVCOM open source community model code, resolves the Navier-Stokes equations using a 3D unstructured mesh and a finite volume scheme. The model is forced by meteorological boundary conditions. Improvements made to the FVCOM code include a new graphical user interface to pre- and post-process the model input and results, respectively, and a JONSWAP wave model to include the effects of wind-wave induced bottom stresses on lake sediment dynamics. Modelled internal scale waves are validated against summer temperature measurements acquired from a thermistor chain deployed at the deepest part of the lake. Seiche motions were validated using data recorded by high-frequency level sensors around the lake margins, and the velocity field and the circulation patterns were validated using the data recorded by an ADCP and GPS drifters. The model is shown to reproduce the lake hydrodynamics and reveals well-developed seiches at different frequencies superimposed on wind-driven circulation patterns that appear to control the distribution of bottom sediments in this small
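
    The wave module presumably evaluates the standard JONSWAP spectrum, which is compact enough to state directly. A sketch with the usual shape constants and a hypothetical peak frequency for a small fetch-limited lake:

        import numpy as np

        def jonswap(f, fp, alpha=0.0081, gamma_pk=3.3, g=9.81):
            """JONSWAP spectrum S(f) = alpha g^2 (2 pi)^-4 f^-5
               exp(-1.25 (fp/f)^4) gamma^r."""
            sigma = np.where(f <= fp, 0.07, 0.09)
            r = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
            return (alpha * g**2 * (2.0 * np.pi) ** -4 * f ** -5.0
                    * np.exp(-1.25 * (fp / f) ** 4) * gamma_pk ** r)

        f = np.linspace(0.05, 1.0, 200)     # Hz
        S = jonswap(f, fp=0.3)              # peak frequency is a guess for a small lake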

  11. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC) or code excited linear prediction (CELP) are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
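
    The redundancy-removal idea behind DPCM and linear predictive coding can be quantified as a prediction gain: the ratio of signal variance to residual variance after a short linear predictor. A sketch on a synthetic tone (order and signal are illustrative):

        import numpy as np

        def prediction_gain_db(x, order=2):
            """Fit a linear predictor by least squares; return the gain in dB."""
            X = np.vstack([x[order - 1 - k: len(x) - 1 - k] for k in range(order)]).T
            y = x[order:]
            a, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ a
            return 10.0 * np.log10(np.var(y) / np.var(resid))

        t = np.arange(8000) / 8000.0
        speechlike = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(t.size)
        print(prediction_gain_db(speechlike), "dB")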

  12. Computation of Thermally Perfect Compressible Flow Properties

    NASA Technical Reports Server (NTRS)

    Witte, David W.; Tatum, Kenneth E.; Williams, S. Blake

    1996-01-01

    A set of compressible flow relations for a thermally perfect, calorically imperfect gas is derived for a value of c(sub p) (specific heat at constant pressure) expressed as a polynomial function of temperature and developed into a computer program, referred to as the Thermally Perfect Gas (TPG) code. The code is available free from the NASA Langley Software Server at URL http://www.larc.nasa.gov/LSS. The code produces tables of compressible flow properties similar to those found in NACA Report 1135. Unlike the NACA Report 1135 tables, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, giving the TPG code a considerably larger range of temperature application. Accuracy of the TPG code in the calorically perfect and in the thermally perfect, calorically imperfect temperature regimes is verified by comparisons with the methods of NACA Report 1135. The advantages of the TPG code compared to the thermally perfect, calorically imperfect method of NACA Report 1135 are its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture of gases, ease-of-use, and tabulated results.
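
    The essential move in the thermally perfect treatment is letting cp vary with temperature, after which gamma(T) = cp/(cp - R) and the speed of sound follow pointwise. A sketch with hypothetical polynomial coefficients (the TPG code's actual curve fits are not reproduced):

        import numpy as np

        R = 287.05                      # J/(kg K), air, for illustration

        def cp_poly(T, coeffs):
            """cp(T) as a polynomial; coeffs = [a0, a1, ...] in ascending powers."""
            return np.polyval(coeffs[::-1], T)

        def sound_speed(T, coeffs):
            cp = cp_poly(T, coeffs)
            gamma = cp / (cp - R)       # thermally perfect: a^2 = gamma*R*T still holds
            return np.sqrt(gamma * R * T)

        coeffs = [1002.0, 0.05]         # hypothetical linear cp fit
        for T in (300.0, 1500.0):
            print(T, sound_speed(T, coeffs))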

  13. A hydrodynamics-reaction kinetics coupled model for evaluating bioreactors derived from CFD simulation.

    PubMed

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi

    2010-12-01

    Investigating how a bioreactor functions is a necessary precursor for successful reactor design and operation. Traditional methods used to investigate flow-field cannot meet this challenge accurately and economically. A hydrodynamics model can address the flow field, but on its own it is often insufficient to understand a bioreactor in depth. In this paper, a coupled hydrodynamics-reaction kinetics model was formulated from computational fluid dynamics (CFD) code to simulate a gas-liquid-solid three-phase biotreatment system for the first time. The hydrodynamics model is used to formulate prediction of the flow field and the reaction kinetics model then portrays the reaction conversion process. The coupled model is verified and used to simulate the behavior of an expanded granular sludge bed (EGSB) reactor for biohydrogen production. The flow patterns were visualized and analyzed. The coupled model also demonstrates a qualitative relationship between hydrodynamics and biohydrogen production. The advantages and limitations of applying this coupled model are discussed. PMID:20727741
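
    One common way to realize such a coupling is operator splitting: the CFD step transports species, then a local kinetics update runs in every cell. The reaction half might look like the Monod sketch below; the paper's actual biohydrogen kinetics are not reproduced, and all constants are illustrative:

        import numpy as np

        def monod_step(S, X, dt, mu_max=0.3, Ks=0.5, Y=0.4):
            """Per-cell reaction update: Monod growth on substrate S, biomass X."""
            mu = mu_max * S / (Ks + S)
            X_new = X + dt * mu * X                        # biomass growth
            S_new = np.maximum(S - dt * mu * X / Y, 0.0)   # substrate consumption
            return S_new, X_new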

  14. A Microfluidic-based Hydrodynamic Trap for Single Particles

    PubMed Central

    Johnson-Chavarria, Eric M.; Tanyeri, Melikhan; Schroeder, Charles M.

    2011-01-01

    The ability to confine and manipulate single particles in free solution is a key enabling technology for fundamental and applied science. Methods for particle trapping based on optical, magnetic, electrokinetic, and acoustic techniques have led to major advancements in physics and biology ranging from the molecular to cellular level. In this article, we introduce a new microfluidic-based technique for particle trapping and manipulation based solely on hydrodynamic fluid flow. Using this method, we demonstrate trapping of micro- and nano-scale particles in aqueous solutions for long time scales. The hydrodynamic trap consists of an integrated microfluidic device with a cross-slot channel geometry where two opposing laminar streams converge, thereby generating a planar extensional flow with a fluid stagnation point (zero-velocity point). In this device, particles are confined at the trap center by active control of the flow field to maintain particle position at the fluid stagnation point. In this manner, particles are effectively trapped in free solution using a feedback control algorithm implemented with a custom-built LabVIEW code. The control algorithm consists of image acquisition for a particle in the microfluidic device, followed by particle tracking, determination of particle centroid position, and active adjustment of fluid flow by regulating the pressure applied to an on-chip pneumatic valve using a pressure regulator. In this way, the on-chip dynamic metering valve functions to regulate the relative flow rates in the outlet channels, thereby enabling fine-scale control of stagnation point position and particle trapping. The microfluidic-based hydrodynamic trap exhibits several advantages as a method for particle trapping. Hydrodynamic trapping is possible for any arbitrary particle without specific requirements on the physical or chemical properties of the trapped object. In addition, hydrodynamic trapping enables confinement of a "single" target object in
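
    The control idea is simple even though the hardware is not: along the outflow axis the stagnation point is unstable, so the controller keeps displacing it past the particle, letting the local flow push the particle back. A toy proportional-control sketch with hypothetical units and gains:

        import numpy as np

        rng = np.random.default_rng(0)

        def trap_step(x, dt=0.01, strain=1.0, gain=5.0, noise=0.05):
            """Shift the stagnation point sp beyond the particle; the extensional
            flow u = strain*(x - sp) then points back toward the origin."""
            sp = (1.0 + gain) * x
            u = strain * (x - sp)              # = -gain * strain * x
            return x + u * dt + noise * rng.standard_normal()

        x = 5.0                                # start off-center
        for _ in range(500):
            x = trap_step(x)
        print(abs(x) < 1.0)                    # held near the stagnation point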

  15. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement on the quality of the reconstructed image. The NMSE's of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
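
    The NMSE figure of merit used throughout is easy to state precisely. The sketch below normalizes the difference-image energy by the original-image energy, one common convention (the dissertation's exact normalization may differ):

        import numpy as np

        def nmse(original, reconstructed):
            """Normalized mean-square error of the difference image."""
            o = original.astype(float)
            d = o - reconstructed.astype(float)
            return (d ** 2).sum() / (o ** 2).sum()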

  16. Fast-ignition transport studies: Realistic electron source, integrated particle-in-cell and hydrodynamic modeling, imposed magnetic fields

    SciTech Connect

    Strozzi, D. J.; Tabak, M.; Larson, D. J.; Divol, L.; Kemp, A. J.; Bellei, C.; Marinak, M. M.; Key, M. H.

    2012-07-15

    Transport modeling of idealized, cone-guided fast ignition targets indicates the severe challenge posed by fast-electron source divergence. The hybrid particle-in-cell (PIC) code Zuma is run in tandem with the radiation-hydrodynamics code Hydra to model fast-electron propagation, fuel heating, and thermonuclear burn. The fast electron source is based on a 3D explicit-PIC laser-plasma simulation with the PSC code. This shows a quasi two-temperature energy spectrum and a divergent angle spectrum (average velocity-space polar angle of 52°). Transport simulations with the PIC-based divergence do not ignite for >1 MJ of fast-electron energy, for a modest (70 μm) standoff distance from fast-electron injection to the dense fuel. However, artificially collimating the source gives an ignition energy of 132 kJ. To mitigate the divergence, we consider imposed axial magnetic fields. Uniform fields of ~50 MG are sufficient to recover the artificially collimated ignition energy. Experiments at the Omega laser facility have generated fields of this magnitude by imploding a capsule in seed fields of 50-100 kG. Such imploded fields will likely be more compressed in the transport region than in the laser absorption region. When fast electrons encounter increasing field strength, magnetic mirroring can reflect a substantial fraction of them and reduce coupling to the fuel. A hollow magnetic pipe, which peaks at a finite radius, is presented as one field configuration which circumvents mirroring.
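
    The mirroring argument has a standard adiabatic-invariant estimate: electrons with sin^2(theta) > B_low/B_high at the low-field point are reflected before reaching the high-field region. For an isotropic forward hemisphere this gives the one-liner below; the actual source modeled above is divergent rather than isotropic, so the number is only indicative:

        import math

        def reflected_fraction(b_low, b_high):
            """Loss-cone estimate of the mirrored fraction of an isotropic
            forward-going population: cos(theta_c) = sqrt(1 - b_low/b_high)."""
            return math.sqrt(max(0.0, 1.0 - b_low / b_high))

        print(reflected_fraction(50.0, 200.0))   # ~0.87 for a fourfold field rise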

  17. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
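
    A Rice coder of the kind analyzed here writes a nonnegative integer as a unary quotient plus a k-bit remainder, and the adaptive wrapper simply selects the k that minimizes a block's total length. A minimal sketch:

        def rice_encode(n, k):
            """Rice code: q ones, a terminating zero, then a k-bit remainder."""
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

        block = [0, 3, 5, 2, 9, 1, 0, 4]
        best_k = min(range(6), key=lambda k: sum(len(rice_encode(n, k)) for n in block))
        print(best_k, [rice_encode(n, best_k) for n in block])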

  18. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  19. Snapshot colored compressive spectral imager.

    PubMed

    Correa, Claudia V; Arguello, Henry; Arce, Gonzalo R

    2015-10-01

    Traditional spectral imaging approaches require sensing all the voxels of a scene. Colored mosaic FPA detector-based architectures can acquire sets of the scene's spectral components, but the number of spectral planes depends directly on the number of available filters used on the FPA, which leads to reduced spatiospectral resolutions. Instead of sensing all the voxels of the scene, compressive spectral imaging (CSI) captures coded and dispersed projections of the spatiospectral source. This approach mitigates the resolution issues by exploiting optical phenomena in lenses and other elements, which, in turn, compromise the portability of the devices. This paper presents a compact snapshot colored compressive spectral imager (SCCSI) that exploits the benefits of the colored mosaic FPA detectors and the compression capabilities of CSI sensing techniques. The proposed optical architecture has no moving parts and can capture the spatiospectral information of a scene in a single snapshot by using a dispersive element and a color-patterned detector. The optical and the mathematical models of SCCSI are presented along with a testbed implementation of the system. Simulations and real experiments show the accuracy of SCCSI and compare the reconstructions with those of similar CSI optical architectures, such as the CASSI and SSCSI systems, resulting in improvements of up to 6 dB and 1 dB of PSNR, respectively. PMID:26479928
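
    The coded, dispersed projection can be written as a small forward model: each spectral band is masked, sheared by one detector column per band, and summed on the detector. A generic CSI sketch (SCCSI orders its color coding and dispersion differently; the cube and mask here are random stand-ins):

        import numpy as np

        def coded_projection(cube, mask):
            """Mask each band, shift it l columns (dispersion), sum on the FPA."""
            rows, cols, bands = cube.shape
            det = np.zeros((rows, cols + bands - 1))
            for l in range(bands):
                det[:, l:l + cols] += cube[:, :, l] * mask[:, :, l]
            return det

        cube = np.random.rand(64, 64, 8)            # toy spatiospectral scene
        mask = np.random.rand(64, 64, 8) > 0.5      # hypothetical binary color code
        y = coded_projection(cube, mask)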

  20. DualSPHysics: Open-source parallel CFD solver based on Smoothed Particle Hydrodynamics (SPH)

    NASA Astrophysics Data System (ADS)

    Crespo, A. J. C.; Domínguez, J. M.; Rogers, B. D.; Gómez-Gesteira, M.; Longshaw, S.; Canelas, R.; Vacondio, R.; Barreiro, A.; García-Feal, O.

    2015-02-01

    DualSPHysics is a hardware accelerated Smoothed Particle Hydrodynamics code developed to solve free-surface flow problems. DualSPHysics is an open-source code developed and released under the terms of the GNU General Public License (GPLv3). Along with the source code, complete documentation that makes compilation and execution of the source files straightforward is also distributed. The code has been shown to be efficient and reliable. The parallel computing power of Graphics Processing Units (GPUs) is used to accelerate DualSPHysics by up to two orders of magnitude compared to the performance of the serial version.
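
    At the core of any SPH solver is kernel interpolation, for instance density as a kernel-weighted sum over neighbours. A minimal NumPy sketch using the standard 3D cubic-spline kernel (DualSPHysics itself is C++/CUDA; this O(N^2) version is only illustrative):

        import numpy as np

        def cubic_spline_w(r, h):
            """Standard 3D cubic-spline SPH kernel with support radius 2h."""
            q = r / h
            sigma = 1.0 / (np.pi * h ** 3)
            return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                         np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

        def sph_density(pos, masses, h):
            """rho_i = sum_j m_j W(|r_i - r_j|, h), self term included."""
            diff = pos[:, None, :] - pos[None, :, :]
            r = np.sqrt((diff ** 2).sum(axis=-1))
            return (masses[None, :] * cubic_spline_w(r, h)).sum(axis=1)

        pos = np.random.rand(200, 3)
        rho = sph_density(pos, np.full(200, 1.0 / 200), h=0.2)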