Science.gov

Sample records for compressible hydrodynamics codes

  1. Pencil: Finite-difference Code for Compressible Hydrodynamic Flows

    NASA Astrophysics Data System (ADS)

    Brandenburg, Axel; Dobler, Wolfgang

    2010-10-01

The Pencil code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields. It is highly modular and can easily be adapted to different types of problems. The code runs efficiently under MPI on massively parallel shared- or distributed-memory computers, e.g. large Beowulf clusters. The Pencil code is primarily designed to deal with weakly compressible turbulent flows. To achieve good parallelization, explicit (as opposed to compact) finite differences are used. Typical scientific targets include driven MHD turbulence in a periodic box, convection in a slab with non-periodic upper and lower boundaries, a convective star embedded in a fully non-periodic box, accretion disc turbulence in the shearing-sheet approximation, self-gravity, non-local radiation transfer, and dust particle evolution with feedback on the gas. A range of artificial viscosity and diffusion schemes can be invoked to deal with supersonic flows. For direct simulations, regular viscosity and diffusion are used. The code is written in well-commented Fortran90.
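As a hedged illustration of the explicit high-order stencils the abstract describes (this sketch is not taken from the Pencil code; the function name and test grid are ours), a sixth-order centered first derivative on a periodic grid can be written as:

```python
import numpy as np

def deriv6(f, dx):
    """Sixth-order explicit centered first derivative, periodic boundaries."""
    return (45.0 * (np.roll(f, -1) - np.roll(f, 1))
            - 9.0 * (np.roll(f, -2) - np.roll(f, 2))
            + (np.roll(f, -3) - np.roll(f, 3))) / (60.0 * dx)

# Check the stencil on sin(x), whose exact derivative is cos(x)
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(deriv6(np.sin(x), dx) - np.cos(x)))
```

Unlike compact schemes, each point needs only its six nearest neighbors, which is what keeps the halo exchange between MPI ranks small.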

  2. HYDRODYNAMIC COMPRESSIVE FORGING.

    DTIC Science & Technology

HYDRODYNAMICS), (*FORGING, COMPRESSIVE PROPERTIES, LUBRICANTS, PERFORMANCE(ENGINEERING), DIES, TENSILE PROPERTIES, MOLYBDENUM ALLOYS, STRAIN...MECHANICS), BERYLLIUM ALLOYS, NICKEL ALLOYS, CASTING ALLOYS, PRESSURE, FAILURE(MECHANICS).

  3. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  4. pyro: A teaching code for computational astrophysical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, M.

    2014-10-01

We describe pyro: a simple, freely available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and is intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
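In the finite-volume spirit pyro teaches, a minimal first-order upwind advection update can be sketched as follows (our own toy sketch, not pyro's actual API; the Gaussian profile and CFL number are arbitrary choices):

```python
import numpy as np

def advect(u, c, dx, dt, nsteps):
    """First-order upwind advection of u at constant speed c > 0, periodic."""
    for _ in range(nsteps):
        flux = c * u                           # upwind interface flux for c > 0
        u = u - dt / dx * (flux - np.roll(flux, 1))
    return u

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-200.0 * (x - 0.3) ** 2)           # initial Gaussian pulse
c, cfl = 1.0, 0.8
dt = cfl * dx / c
u = advect(u0.copy(), c, dx, dt, nsteps=int(0.5 / dt))
```

Because the update is written in flux-difference form, the total amount of the advected quantity is conserved to roundoff, a property students can verify directly.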

  5. TORUS: Radiation transport and hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Harries, Tim

    2014-04-01

TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard GNU makefile. The code is parallelized using both MPI and OpenMP, and can use these parallel sections either separately or in a hybrid mode.

  6. Using Pulsed Power for Hydrodynamic Code Validation

    DTIC Science & Technology

    2001-06-01

…bank at the Air Force Research Laboratory (AFRL). A cylindrical aluminum liner that is magnetically imploded onto a central target by self-induced… James Degnan, George Kiuttu, Air Force Research Laboratory, Albuquerque, NM 87117. Abstract: As part of ongoing hydrodynamic code…

  7. An implicit Smooth Particle Hydrodynamic code

    SciTech Connect

    Knapp, Charles E.

    2000-05-01

An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas, it has been demonstrated that the implicit code can do the problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
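The Newton-Krylov idea the abstract names (an outer Newton iteration whose linear solves are done by a Krylov method, with the Jacobian applied only through finite differences) can be illustrated on a toy implicit step. This is our own example, not the SPHINX formulation; the equation, grid, and tolerances are arbitrary choices:

```python
import numpy as np
from scipy.optimize import newton_krylov

# One backward-Euler step of the nonlinear diffusion equation u_t = u * u_xx
# with u = 0 held at both ends, solved Jacobian-free with Newton-Krylov.
n = 50
dx = 1.0 / (n - 1)
dt = 1e-3
u_old = np.sin(np.pi * np.linspace(0.0, 1.0, n))

def residual(u):
    # Discrete residual of the implicit step: u - u_old - dt * u * u_xx = 0
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u - u_old - dt * u * lap

u_new = newton_krylov(residual, u_old, f_tol=1e-8)
```

The appeal for an SPH code is the same as here: the Jacobian never has to be formed explicitly, only its action on a vector via residual evaluations.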

  8. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.

    2011-10-01

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
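The implicit half of such a split scheme can be sketched as a single backward Euler diffusion step (a toy 1D stand-in with homogeneous Dirichlet boundaries, not CASTRO's actual flux-limited discretization; the profile and coefficients are ours):

```python
import numpy as np

def backward_euler_diffusion(E, D, dx, dt):
    """Solve (I - r*L) E_new = E_old for one implicit diffusion step,
    where L is the standard second-difference Laplacian and r = D*dt/dx^2."""
    n = len(E)
    r = D * dt / dx**2
    A = (1.0 + 2.0 * r) * np.eye(n)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
    return np.linalg.solve(A, E)

x = np.linspace(0.0, 1.0, 64)
E_old = np.exp(-((x - 0.5) / 0.1) ** 2)   # a radiation-energy-like pulse
E_new = backward_euler_diffusion(E_old, D=1.0, dx=x[1] - x[0], dt=1e-4)
```

Because the implicit matrix is an M-matrix, the step is unconditionally stable and preserves positivity, which is exactly why the stiff parabolic part is treated implicitly while the hyperbolic part keeps an explicit Godunov update.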

  9. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega_b = 1, and sigma_8 = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L^3 where L = 64 h^-1 Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N^3 = (32^3, 64^3, 128^3, and 256^3) cells, the SPH codes at N^3 = 32^3 and 64^3 particles. Results were then rebinned to a 16^3 grid, with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as <T> and <rho^2>^(1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho^2) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic…

  10. Axially symmetric pseudo-Newtonian hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Kim, Jinho; Kim, Hee Il; Choptuik, Matthew William; Lee, Hyung Mok

    2012-08-01

We develop a numerical hydrodynamics code using a pseudo-Newtonian formulation that uses the weak-field approximation for the geometry, and a generalized source term for the Poisson equation that takes into account relativistic effects. The code was designed to treat moderately relativistic systems such as rapidly rotating neutron stars. The hydrodynamic equations are solved using a finite volume method with high-resolution shock-capturing techniques. We implement several different slope limiters for second-order reconstruction schemes and also investigate higher order reconstructions such as the piecewise parabolic method, essentially non-oscillatory method (ENO) and weighted ENO. We use the method of lines to convert the mixed spatial-time partial differential equations into ordinary differential equations (ODEs) that depend only on time. These ODEs are solved using second- and third-order Runge-Kutta methods. The Poisson equation for the gravitational potential is solved with a multigrid method, and to simplify the boundary condition, we use compactified coordinates which map spatial infinity to a finite computational coordinate using a tangent function. In order to confirm the validity of our code, we carry out four different tests including one- and two-dimensional shock tube tests, stationary star tests of both non-rotating and rotating models, and radial oscillation mode tests for spherical stars. In the shock tube tests, the code shows good agreement with analytic solutions which include shocks, rarefaction waves and contact discontinuities. The code is found to be stable and accurate: for example, when solving a stationary stellar model the fractional changes in the maximum density, total mass, and total angular momentum per dynamical time are found to be 3 × 10^-6, 5 × 10^-7 and 2 × 10^-6, respectively. We also find that the frequencies of the radial modes obtained by the numerical simulation of the steady-state star agree very well with those obtained by…
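One of the second-order slope limiters mentioned above, minmod, can be sketched for a periodic grid as follows (illustrative only; the function names and layout are ours, not this code's):

```python
import numpy as np

def minmod(a, b):
    """Return the smaller-magnitude argument when signs agree, else zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(q):
    """Limited piecewise-linear reconstruction on a periodic grid.
    Returns the states at the right and left faces of each cell."""
    dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    return q + 0.5 * dq, q - 0.5 * dq

rng = np.random.default_rng(0)
q = rng.random(32)
qL, qR = reconstruct(q)
```

The limiter zeroes the slope at local extrema, so the reconstructed interface states never overshoot the surrounding cell averages; that is what keeps the second-order scheme oscillation-free near shocks.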

  11. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
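The analysis/synthesis idea behind subband coding can be illustrated with the simplest filter pair, the Haar filters (a toy one-level sketch assuming an even-length signal; the paper's actual filter banks differ). The coarse band is what a progressive scheme sends first, and the detail band is the refinement delivered on request:

```python
import numpy as np

def analyze(x):
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # coarse approximation band
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (refinement) band
    return lo, hi

def synthesize(lo, hi):
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

x = np.sin(np.linspace(0.0, 6.0, 128))        # a stand-in waveform
lo, hi = analyze(x)
```

With both bands present, synthesis reconstructs the signal exactly; dropping or coarsely quantizing the detail band gives the low-rate first pass a seismologist would inspect before requesting refinement.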

  12. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. III. MULTIGROUP RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.; Dolence, J.

    2013-01-15

We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.

  13. Code Compression Schemes for Embedded Processors

    NASA Astrophysics Data System (ADS)

    Horti, Deepa; Jamge, S. B.

    2010-11-01

Code density is a major requirement in embedded system design, since it not only reduces the need for the scarce resource of memory but also implicitly improves further important design parameters such as power consumption and performance. In this paper we introduce a novel and efficient approach that belongs to both statistical and dictionary-based compression schemes.

  14. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. I. HYDRODYNAMICS AND SELF-GRAVITY

    SciTech Connect

    Almgren, A. S.; Beckner, V. E.; Bell, J. B.; Day, M. S.; Lijewski, M. J.; Nonaka, A.; Howell, L. H.; Singer, M.; Joggerst, C. C.; Zingale, M.

    2010-06-01

    We present a new code, CASTRO, that solves the multicomponent compressible hydrodynamic equations for astrophysical flows including self-gravity, nuclear reactions, and radiation. CASTRO uses an Eulerian grid and incorporates adaptive mesh refinement (AMR). Our approach to AMR uses a nested hierarchy of logically rectangular grids with simultaneous refinement in both space and time. The radiation component of CASTRO will be described in detail in the next paper, Part II, of this series.

  15. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they consume more storage space and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This paper analyzes compression based on the discrete cosine transform (DCT). First, the principle of the DCT is presented; because the technique is so widely used, it is a natural basis for image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm for image compression, and further algorithms can be expected to produce compressed images of high quality; image compression technology will be widely used in networks and communications in the future.
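The DCT compression pipeline described above can be sketched in a few lines (here in Python with SciPy rather than the paper's Matlab; the 8x8 block, the keep-16 rule, and the hard threshold are our assumptions, and the quantization/Huffman stages are omitted):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    """2D DCT-II via separable 1D transforms (orthonormal)."""
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(b):
    """2D inverse DCT (orthonormal)."""
    return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

rng = np.random.default_rng(0)
block = rng.random((8, 8))                 # stand-in for one image block
coeffs = dct2(block)
# Keep only the 16 largest-magnitude coefficients out of 64 (a 4:1 ratio)
thresh = np.sort(np.abs(coeffs).ravel())[-16]
approx = idct2(np.where(np.abs(coeffs) >= thresh, coeffs, 0.0))
```

In a real codec the retained coefficients would then be quantized and entropy-coded (e.g. with Huffman coding, as the paper discusses) rather than hard-thresholded.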

  16. TESS: A RELATIVISTIC HYDRODYNAMICS CODE ON A MOVING VORONOI MESH

    SciTech Connect

    Duffell, Paul C.; MacFadyen, Andrew I. E-mail: macfadyen@nyu.edu

    2011-12-01

We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multidimensional systems of conservation laws in finite-volume form. The mesh-generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical, and cylindrical coordinates and is modular so that auxiliary physics solvers are readily integrated into the TESS framework and so that it can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.
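The geometric primitive behind such a moving-mesh scheme is simple to demonstrate: tessellate a set of mesh-generating points, move the points, and re-tessellate (SciPy stands in for TESS's own mesh machinery here; the point count and toy drift velocity are our choices):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
points = rng.random((64, 2))          # mesh-generating points in the unit square
vor = Voronoi(points)                 # one Voronoi cell per generating point

# Advect the generating points with some velocity field, then re-tessellate;
# zero velocity would reproduce a fixed (Eulerian) mesh.
velocity = 0.01 * (0.5 - points)      # toy drift toward the domain center
vor_next = Voronoi(points + velocity)
```

In a finite-volume code the fluxes are then exchanged across the faces of these cells, with face velocities taken from the motion of the generating points.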

  17. Image compression with embedded multiwavelet coding

    NASA Astrophysics Data System (ADS)

    Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay

    1996-03-01

    An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.

  18. Pulse compression using binary phase codes

    NASA Technical Reports Server (NTRS)

    Farley, D. T.

    1983-01-01

In most MST applications pulsed radars are peak power limited and have excess average power capacity. Short pulses are required for good range resolution, but the problem of range ambiguity (signals received simultaneously from more than one altitude) sets a minimum limit on the interpulse period (IPP). Pulse compression is a technique which allows more of the transmitter average power capacity to be used without sacrificing range resolution. As the name implies, a pulse of power P and duration T is in a certain sense converted into one of power nP and duration T/n. In the frequency domain, compression involves manipulating the phases of the different frequency components of the pulse. One way to compress a pulse is via phase coding, especially binary phase coding, a technique which is particularly amenable to digital processing. This method, which is used extensively in radar probing of the atmosphere and ionosphere, is discussed. Barker codes, complementary and quasi-complementary code sets, and cyclic codes are addressed.
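The property that makes Barker codes attractive for binary phase coding is easy to verify numerically: the aperiodic autocorrelation of the length-13 Barker code has a mainlobe of 13 and no sidelobe larger than 1 in magnitude. A short check (our own, using the standard published sequence):

```python
import numpy as np

# Length-13 Barker code: + + + + + - - + + - + - +
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Aperiodic autocorrelation, i.e. the matched-filter (compressed) output
acf = np.correlate(barker13, barker13, mode='full')
mainlobe = acf.max()                                         # peak at zero lag
peak_sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()  # worst sidelobe
```

The matched filter thus compresses a 13-baud phase-coded pulse into a single range cell with a 13:1 power gain, at the cost of low-level range sidelobes.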

  19. Compression of polyphase codes with Doppler shift

    NASA Astrophysics Data System (ADS)

    Wirth, W. D.

It is shown that pulse compression with sufficient Doppler tolerance may be achieved with polyphase codes derived from linear frequency modulation (LFM) and nonlinear frequency modulation (NLFM). Low sidelobes in range and Doppler are required, especially for the radar search function. These may be achieved by an LFM-derived phase code together with Hamming weighting, or by applying a PNL polyphase code derived from NLFM. For a discrete and known Doppler frequency, a sidelobe reduction is possible with an expanded and mismatched reference vector; the compression is then achieved without a loss in resolution. The expanded reference can be set up to give zero sidelobes only in an interval around the signal peak, or to perform a least-squares minimization over all range elements. This version may be useful for target tracking.

  20. DISH CODE A deeply simplified hydrodynamic code for applications to warm dense matter

    SciTech Connect

    More, Richard

    2007-08-22

DISH is a one-dimensional (planar) Lagrangian hydrodynamic code intended for application to experiments on warm dense matter. The code is a simplified version of the DPC code written in the Data and Planning Center of the National Institute for Fusion Science in Toki, Japan. DPC was originally intended as a testbed for exploring equation of state and opacity models, but turned out to have a variety of applications. The DISH code is a "deeply simplified hydrodynamic" code, deliberately made as simple as possible. It is intended to be easy to understand, easy to use and easy to change.

  1. Multi-shot compressed coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua

    2013-09-01

The classical methods of compressed coded aperture (CCA) still require an optical sensor with high resolution, even though the sampling rate has already broken the Nyquist limit. A novel architecture of multi-shot compressed coded aperture imaging (MCCAI) using a low-resolution optical sensor is proposed, based mainly on the 4-f imaging system combined with two spatial light modulators (SLMs) to achieve the compressive imaging goal. The first SLM, employed for random convolution, is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM, working as a selecting filter, is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a set of observations can easily be obtained by a low-resolution optical sensor, and these observations are combined mathematically and used to reconstruct the high-resolution image. That is to say, MCCAI aims at realizing super-resolution imaging with multiple random samplings using a low-resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super-resolution reconstruction model to remove artifacts, and the alternating direction method of multipliers (ADM) is utilized to solve for the optimal result efficiently. The results show that the MCCAI architecture is suitable for super-resolution computational imaging with a much lower resolution optical sensor than traditional CCA imaging methods require, by capturing multiple frame images.

  2. KEPLER: General purpose 1D multizone hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Weaver, T. A.; Zimmerman, G. B.; Woosley, S. E.

    2017-02-01

    KEPLER is a general purpose stellar evolution/explosion code that incorporates implicit hydrodynamics and a detailed treatment of nuclear burning processes. It has been used to study the complete evolution of massive and supermassive stars, all major classes of supernovae, hydrostatic and explosive nucleosynthesis, and x- and gamma-ray bursts on neutron stars and white dwarfs.

  3. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  4. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or…

  5. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    NASA Astrophysics Data System (ADS)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  6. A new hydrodynamics code for Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Leung, S.-C.; Chu, M.-C.; Lin, L.-M.

    2015-12-01

A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme, and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamic history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines, achieving an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.

  7. RAMSES: A new N-body and hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain

    2010-11-01

A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 1997), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to accurately compute the thermal history of the fluid component. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock-wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low-density LCDM universe are finally reported, with 256^3 particles and 4.1 × 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, the gas pressure power spectrum, and individual halo temperature profiles, shows that the numerical results converge down to the actual resolution limit of the code and are well reproduced by recent analytical predictions in the framework of the halo model.

  8. Adding kinetics and hydrodynamics to the CHEETAH thermochemical code

    SciTech Connect

Fried, L. E.; Howard, W. M.; Souers, P. C.

    1997-01-15

    In FY96 we released CHEETAH 1.40, which made extensive improvements on the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes, and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.

  9. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    SciTech Connect

    Doebling, Scott William

    2016-10-22

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  10. External-Compression Supersonic Inlet Design Code

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2011-01-01

    A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression, supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.
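The abstract does not spell out SUPIN's low-fidelity methods, but external-compression inlet analysis rests on the standard oblique- and normal-shock relations, which can be sketched directly (illustrative only, not SUPIN code; γ = 1.4 assumed):

```python
import math

GAMMA = 1.4  # calorically perfect air (assumed)

def normal_shock_recovery(m1):
    """Total-pressure ratio p02/p01 across a normal shock at Mach m1 > 1."""
    a = ((GAMMA + 1) * m1**2 / ((GAMMA - 1) * m1**2 + 2)) ** (GAMMA / (GAMMA - 1))
    b = ((GAMMA + 1) / (2 * GAMMA * m1**2 - (GAMMA - 1))) ** (1 / (GAMMA - 1))
    return a * b

def oblique_shock_angle(m1, theta_deg, steps=100000):
    """Weak-solution shock angle (deg) from the theta-beta-M relation,
    found by scanning beta upward from the Mach angle."""
    tan_theta = math.tan(math.radians(theta_deg))
    beta0 = math.asin(1.0 / m1)          # Mach angle: zero deflection
    d = (math.pi / 2 - beta0) / steps
    for k in range(steps + 1):
        b = beta0 + k * d
        mn2 = (m1 * math.sin(b)) ** 2    # normal Mach number squared
        rhs = 2.0 * (mn2 - 1.0) / (
            math.tan(b) * (m1**2 * (GAMMA + math.cos(2 * b)) + 2.0))
        if rhs >= tan_theta:
            return math.degrees(b)
    raise ValueError("deflection exceeds maximum: detached shock")
```

For Mach 2, the normal-shock recovery evaluates to about 0.721 and a 10-degree wedge gives a weak shock angle near 39.3 degrees, matching standard compressible-flow tables.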

  11. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  12. CHOLLA: A NEW MASSIVELY PARALLEL HYDRODYNAMICS CODE FOR ASTROPHYSICAL SIMULATION

    SciTech Connect

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-15

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  13. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I. (Princeton, Inst. Advanced Study)

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
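The third-order TVD (SSP) Runge-Kutta scheme of Shu & Osher used above for time integration is compact enough to state directly; a minimal sketch (the function names here are illustrative):

```python
def tvd_rk3_step(u, dt, rhs):
    """One step of the third-order TVD (SSP) Runge-Kutta scheme of Shu & Osher.

    Each stage is a convex combination of forward-Euler steps, which is what
    lets the scheme inherit the TVD property of the spatial discretization."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))
```

For a smooth test problem such as u' = -u the scheme reproduces the exact decay to third order in the step size.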

  14. A new class of polyphase pulse compression codes

    NASA Astrophysics Data System (ADS)

    Deng, Hai; Lin, Maoyong

    The study presents the synthesis method for a new class of polyphase pulse compression codes, the NLFM code, and investigates its properties. The NLFM code, which is derived by sampling and quantizing a nonlinear FM waveform, features low range sidelobes and insensitivity to Doppler shifts. Simulation results show that the major properties of the NLFM polyphase code are superior to those of the Frank code.
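The abstract benchmarks the NLFM code against the Frank code, which has a closed-form definition; the NLFM waveform itself is not specified in the abstract, so the sketch below builds a Frank code and measures the peak autocorrelation sidelobe figure of merit instead:

```python
import numpy as np

def frank_code(m):
    """Length m*m Frank polyphase code: phase(i, j) = 2*pi*i*j/m."""
    i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    return np.exp(2j * np.pi * i * j / m).ravel()

def peak_sidelobe_db(code):
    """Peak aperiodic autocorrelation sidelobe relative to the mainlobe, in dB."""
    n = len(code)
    acf = np.abs(np.correlate(code, code, mode="full"))
    mainlobe = acf[n - 1]                 # zero-lag peak equals n
    sidelobes = np.delete(acf, n - 1)
    return 20.0 * np.log10(sidelobes.max() / mainlobe)
```

A polyphase code derived from sampling an NLFM phase function could be dropped into `peak_sidelobe_db` unchanged for the comparison the paper describes.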

  15. On Using Goldbach G0 Codes and Even-Rodeh Codes for Text Compression

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-03-01

    This research aims to study the efficiency of two variants of variable-length codes (i.e., Goldbach G0 codes and Even-Rodeh codes) in compressing texts. The parameters being examined are the ratio of compression, the space savings, and the bit rate. As a benchmark, all of the original (uncompressed) texts are assumed to be encoded in the American Standard Code for Information Interchange (ASCII). Several texts, including those derived from some corpora (the Artificial corpus, the Calgary corpus, the Canterbury corpus, the Large corpus, and the Miscellaneous corpus), are tested in the experiment. The overall result shows that the Even-Rodeh codes are consistently more efficient at compressing texts than the unoptimized Goldbach G0 codes.
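The three parameters examined (compression ratio, space savings, and bit rate) have conventional definitions against an 8-bit ASCII baseline; a sketch using those conventional formulas (the paper's exact definitions may differ slightly):

```python
def compression_metrics(original_chars, compressed_bits):
    """Standard text-compression benchmark figures against an 8-bit ASCII baseline."""
    original_bits = 8 * original_chars                 # ASCII: 8 bits per character
    ratio = original_bits / compressed_bits            # ratio of compression
    savings = 1.0 - compressed_bits / original_bits    # fraction of space saved
    bitrate = compressed_bits / original_chars         # bits per input character
    return ratio, savings, bitrate
```

A 100-character text compressed to 400 bits, for example, has ratio 2.0, space savings 0.5, and a bit rate of 4 bits per character.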

  16. Hydrodynamic simulations of gaseous Argon shock compression experiments

    NASA Astrophysics Data System (ADS)

    Garcia, Daniel B.; Dattelbaum, Dana M.; Goodwin, Peter M.; Sheffield, Stephen A.; Morris, John S.; Gustavsen, Richard L.; Burkett, Michael W.

    2017-01-01

    The lack of published Ar gas shock data motivated an evaluation of the Ar Equation of State (EOS) in gas-phase initial density regimes. In particular, these regimes include initial pressures in the range of 13.8-34.5 bar (0.025-0.056 g/cm^3) and initial shock velocities around 0.2 cm/μs. The objective of the numerical evaluation was to develop a physical understanding of the EOS behavior of shocked and subsequently multiply re-shocked Ar gas through Pagosa numerical simulations utilizing the SESAME equation of state. Pagosa is a Los Alamos National Laboratory 2-D and 3-D Eulerian continuum dynamics code capable of modeling high-velocity compressible flow with multiple materials. The approach involved the use of gas gun experiments to evaluate the shock and multiple re-shock behavior of pressurized Ar gas to validate Pagosa simulations and the SESAME EOS. Additionally, the diagnostic capability within the experiments allowed the EOS to be fully constrained with measured shock velocity, particle velocity, and temperature. The simulations demonstrate excellent agreement with the experiments in shock velocity/particle velocity space, and reasonable comparisons for the ionization temperatures.

  17. MR image compression using a wavelet transform coding algorithm.

    PubMed

    Angelidis, P A

    1994-01-01

    We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.

  18. A 2-dimensional MHD code & survey of the ``buckling'' phenomenon in cylindrical magnetic flux compression experiments

    NASA Astrophysics Data System (ADS)

    Xiao, Bo; Wang, Ganghua; Gu, Zhuowei; Computational Physics Team

    2015-11-01

    We have developed a 2-dimensional magnetohydrodynamics Lagrangian code. The code handles two kinds of magnetic configuration: an (x-y) plane with z-direction magnetic field Bz, and an (r-z) plane with θ-direction magnetic field Bθ. The solution of the MHD equations is split into a pure dynamical step (i.e., ideal MHD) and a diffusion step. In the diffusion step, the Joule heat is calculated with a numerical scheme based on a specific form of the Joule heat production equation, ∂e_J/∂t = ∇·((η/μ0) B × (∇ × B)) − ∂/∂t(B²/2μ0), where the term ∂/∂t(B²/2μ0) is the magnetic field energy variation caused solely by diffusion. This scheme ensures the equality of the total Joule heat produced and the total electromagnetic energy lost in the system. Material elastoplasticity is considered in the code. An external circuit is coupled to the magnetohydrodynamics, and a detonation module is also added to enhance the code's ability to simulate magnetically driven compression experiments. As a first application, the code was utilized to simulate a cylindrical magnetic flux compression experiment. The origin of the ``buckling'' phenomenon observed in the experiment is explored.
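The energy-conserving bookkeeping described above can be illustrated in one dimension: build the divergence term and the magnetic-energy change from the same face fluxes, so that the summed Joule heat exactly balances the electromagnetic energy lost. A 1-D sketch (the paper's 2-D Lagrangian scheme is not reproduced; grid and parameters are illustrative):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def diffuse_with_joule(b, eta, dt, dx):
    """One explicit resistive-diffusion step for a 1-D field profile B(x).

    Per-cell Joule heat follows the identity
        d(eJ)/dt = div((eta/mu0) B x curl B) - d/dt(B^2 / 2 mu0),
    with both terms built from the same face fluxes, so the total Joule heat
    exactly equals the magnetic energy lost (zero flux at the boundaries)."""
    n = len(b)
    g = np.zeros(n + 1)                    # face flux of B (diffusion)
    f = np.zeros(n + 1)                    # face flux of magnetic energy
    db = np.diff(b) / dx
    g[1:-1] = (eta / MU0) * db
    f[1:-1] = (eta / MU0**2) * 0.5 * (b[1:] + b[:-1]) * db
    b_new = b + dt * np.diff(g) / dx
    de_mag = (b_new**2 - b**2) / (2.0 * MU0)   # magnetic energy change per cell
    joule = dt * np.diff(f) / dx - de_mag      # per-cell Joule heat
    return b_new, joule
```

Summing `joule` over the grid telescopes the divergence term away (the boundary fluxes vanish), leaving exactly the total magnetic energy lost, which is the equality the scheme enforces.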

  19. Rank minimization code aperture design for spectrally selective compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2013-03-01

    A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.

  20. Hydrodynamic Liner Experiments Using the Ranchero Flux Compression Generator System

    SciTech Connect

    Goforth, J.H.; Atchison, W.L.; Fowler, C.M.; Lopez, E.A.; Oona, H.; Tasker, D.G.; King, J.C.; Herrera, D.H.; Torres, D.T.; Sena, F.C.; McGuire, J.A.; Reinovsky, R.E.; Stokes, J.L.; Tabaka, L.J.; Garcia, O.F.; Faehl, R.J.; Lindemuth, I.R.; Keinigs, R.K.; Broste, B.

    1998-10-18

    The authors have developed a system for driving hydrodynamic liners at currents approaching 30 MA. Their 43 cm module will deliver currents of interest, and when fully developed, the 1.4 m module will allow similar currents with more total system inductance. With these systems they can perform interesting physics experiments and support the Atlas development effort.

  1. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
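The syndrome-source-coding idea can be made concrete with the (7,4) Hamming code: a 7-bit source block, treated as an error pattern, compresses to its 3-bit syndrome, and the decoder returns the coset leader (minimum-weight pattern) for that syndrome. This toy sketch is lossless only when each block contains at most one 1; sparser or denser sources need the paper's distortion analysis:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary of i+1
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

def compress(block7):
    """Treat a 7-bit source block as an error pattern; its 3-bit syndrome
    is the compressed data."""
    return H.dot(block7) % 2

def decompress(syndrome):
    """Recover the coset leader for the syndrome: for the Hamming code this is
    the all-zero block or a single 1 at the position the syndrome spells out."""
    pos = syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2]
    block = np.zeros(7, dtype=int)
    if pos:
        block[pos - 1] = 1
    return block
```

Every block of weight at most one round-trips exactly, so a source emitting at most one 1 per 7 bits compresses losslessly from 7 bits to 3.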

  2. Compressive imaging using fast transform coding

    NASA Astrophysics Data System (ADS)

    Thompson, Andrew; Calderbank, Robert

    2016-10-01

    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.

  3. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  4. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    DOE PAGES

    Doebling, Scott William

    2016-10-22

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  5. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, Joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
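The coupled circuit/liner time-stepping described above can be sketched with a thin-shell model: an RLC discharge drives an azimuthal field B = μ0 I/(2πr), whose magnetic pressure pushes the liner inward while the liner motion feeds back on the circuit inductance. All parameters below are hypothetical stand-ins, not IDL hardware values:

```python
import numpy as np

MU0 = 4e-7 * np.pi

# Hypothetical parameters (illustrative only, not the IDL hardware)
C, V0 = 500e-6, 20e3         # bank capacitance [F], charge voltage [V]
L0, RES = 20e-9, 1e-3        # external inductance [H], circuit resistance [ohm]
M, H, R0 = 0.05, 0.1, 0.05   # liner mass [kg], length [m], initial radius [m]

def run(dt=1e-8, steps=1000):
    """Explicit-Euler coupling of an RLC discharge to a thin cylindrical liner.

    The liner contributes inductance (mu0*H/2pi)*ln(R0/r); the azimuthal field
    B = mu0*I/(2*pi*r) exerts inward magnetic pressure B^2/(2*mu0)."""
    q, i, r, v = C * V0, 0.0, R0, 0.0
    for _ in range(steps):
        ltot = L0 + MU0 * H / (2 * np.pi) * np.log(R0 / r)
        dldt = -MU0 * H / (2 * np.pi) * v / r            # motional inductance change
        di = (q / C - RES * i - dldt * i) / ltot * dt    # d(L*I)/dt + R*I = q/C
        b = MU0 * i / (2 * np.pi * r)                    # azimuthal field at liner
        force = -(b**2 / (2 * MU0)) * 2 * np.pi * r * H  # inward magnetic push
        v += force / M * dt
        q -= i * dt
        i += di
        r = max(r + v * dt, 1e-4)                        # floor keeps 1/r finite
    return i, r, v
```

A production code would replace the explicit Euler update with a stabler integrator and, as the abstract describes, correct the 1D field with table-lookup factors derived from 2D/3D solves.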

  6. Efficient image compression scheme based on differential coding

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Wang, Guoyou; Liu, Ying

    2007-11-01

    Embedded zerotree wavelet (EZW) coding and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J. M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, and then several improvements to the SPIHT algorithm, motivated by experiments, are introduced. 1) We propose a differential method to reduce the redundancy among the coefficients in the wavelet domain during coding. 2) Based on the characteristic distribution of the coefficients in each subband, we adjust the sorting pass and optimize the differential coding to reduce redundant coding within each subband. 3) Image coding results at a given threshold show that differential coding raises the compression rate and greatly improves the quality of the reconstructed image: at 0.5 bpp (bits per pixel), the PSNR (Peak Signal to Noise Ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.

  7. THEHYCO-3DT: Thermal hydrodynamic code for the 3 dimensional transient calculation of advanced LMFBR core

    SciTech Connect

    Vitruk, S.G.; Korsun, A.S.; Ushakov, P.A.

    1995-09-01

    The multilevel mathematical model of neutron and thermal-hydrodynamic processes in a passive-safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal-hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new, effective discretization technique for the energy, momentum, and mass conservation equations is applied in hexagonal-z geometry. The adequacy and applicability of the model are presented. The results of the calculations show that the model and the computer code could be used in the conceptual design of advanced reactors.

  8. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

    A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  9. Closed-form quality measures for compressed medical images: compression noise statistics of transform coding

    NASA Astrophysics Data System (ADS)

    Li, Dunling; Loew, Murray H.

    2004-05-01

    This paper provides a theoretical foundation for the closed-form expression of model observers on compressed images. In medical applications, model observers, especially the channelized Hotelling observer, have been successfully used to predict human observer performance and to evaluate image quality for detection tasks in various backgrounds. To use model observers, however, requires knowledge of noise statistics. This paper first identifies quantization noise as the sole distortion source in transform coding, one of the most commonly used methods for image compression. Then, it represents transform coding as a 1-D block-based matrix expression, it further derives first and second moments, and the probability density function (pdf) of the compression noise at pixel, block and image levels. The compression noise statistics depend on the transform matrix and the quantization matrix in the transform coding algorithm. Compression noise is jointly normally distributed when the dimension of the transform (the block size) is typical and the contents of image sets vary randomly. Moreover, this paper uses JPEG as a test example to verify the derived statistics. The test simulation results show that the closed-form expression of JPEG quantization and compression noise statistics correctly predicts the estimated ones from actual images.

  10. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and huffman coding

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Permala, R.

    2017-01-01

    The LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera, and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either a lossy or a lossless compression method. This paper aims to analyze the Differential Pulse Code Modulation (DPCM) method and the Huffman coding that are used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses, the current LAPAN-IPB lossless compression algorithm has moderate performance. Several aspects of the current configuration can be improved: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research shows that at least two neighboring pixels should be used in the DPCM calculation to increase compression performance. Meanwhile, varying the Huffman tables with a sub-image approach could also increase performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
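The paper's key recommendation, predicting each pixel from at least two neighbors before entropy coding, can be sketched generically (this is not the LAPAN-IPB flight configuration; the predictor and the code-length bookkeeping below are illustrative):

```python
import heapq
from collections import Counter
import numpy as np

def dpcm_residuals(img):
    """Prediction residuals using two neighbours (mean of left and upper pixels)."""
    h, w = img.shape
    res = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            left = int(img[y, x - 1]) if x else 0
            up = int(img[y - 1, x]) if y else 0
            pred = (left + up) // 2 if (x and y) else (left if x else up)
            res[y, x] = int(img[y, x]) - pred
    return res

def huffman_lengths(symbols):
    """Huffman code length per symbol (enough to count the compressed bits)."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol stream
        return {next(iter(freq)): 1}
    heap = [(f, n, {s: 0}) for n, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)                          # tie-breaker so dicts never compare
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, uid, merged))
        uid += 1
    return heap[0][2]
```

For any image with smooth structure the residual alphabet is small and skewed, so the Huffman-coded residual stream needs far fewer bits than the raw 8-bit pixels.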

  11. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
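The decorrelation-quantization-entropy-coding pipeline described above can be illustrated with the simplest subband pair, a one-level Haar split; the article's actual filters and its adaptive arithmetic coder are more elaborate:

```python
import numpy as np

def haar_analysis(x):
    """Split a signal into low- and high-frequency subbands (one Haar level)."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def haar_synthesis(lo, hi):
    """Perfectly reconstruct the signal from its two subbands."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

def quantize(band, step):
    """Uniform scalar quantization: the lossy stage that buys compression."""
    return np.round(band / step) * step
```

Without quantization the analysis/synthesis pair is exactly invertible; the quantization step controls the distortion-for-rate trade-off that the article tunes per block.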

  12. Achieving H.264-like compression efficiency with distributed video coding

    NASA Astrophysics Data System (ADS)

    Milani, Simone; Wang, Jiajun; Ramchandran, Kannan

    2007-01-01

    Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity encoding. However, to date, these low-complexity DSC-based video encoders have been unable to compress as efficiently as motion-compensated predictive coding based video codecs, such as H.264/AVC, due to insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity motion estimation. This motivates us to deviate from the popular approach of approaching the Wyner-Ziv bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder is an important first step towards building a robust DSC-based video coding framework.

  13. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  14. Numerical simulations of hydrodynamic instabilities: Perturbation codes PANSY, PERLE, and 2D code CHIC applied to a realistic LIL target

    NASA Astrophysics Data System (ADS)

    Hallo, L.; Olazabal-Loumé, M.; Maire, P. H.; Breil, J.; Morse, R.-L.; Schurtz, G.

    2006-06-01

    This paper deals with simulations of ablation front instabilities in the context of direct-drive ICF. A simplified DT target, representative of a realistic target on LIL, is considered. We describe two numerical approaches: the linear perturbation method using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method using our two-dimensional hydrodynamic code Chic. Numerical solutions are shown to converge, in good agreement with analytical models.

  15. RICH: Numerical simulation of compressible hydrodynamics on a moving Voronoi mesh

    NASA Astrophysics Data System (ADS)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re'em

    2014-10-01

    RICH (Racah Institute Computational Hydrodynamics) is a 2D hydrodynamic code based on Godunov's method. The code, largely based on AREPO, acts on an unstructured moving mesh. It differs from AREPO in the interpolation and time advancement scheme as well as in a novel parallelization scheme based on Voronoi tessellation. Though not universally true, in many cases a moving mesh gives better results than a static mesh; however, where matter moves one way and a sound wave travels the other way such that the wave is stationary relative to a static grid, a static mesh gives better results than a moving mesh. RICH is designed in an object-oriented, user-friendly way that facilitates incorporation of new algorithms and physical processes.

  16. TPCI: the PLUTO-CLOUDY Interface . A versatile coupled photoionization hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Salz, M.; Banerjee, R.; Mignone, A.; Schneider, P. C.; Czesla, S.; Schmitt, J. H. M. M.

    2015-04-01

    We present an interface between the (magneto-) hydrodynamics code PLUTO and the plasma simulation and spectral synthesis code CLOUDY. By combining these codes, we constructed a new photoionization hydrodynamics solver: the PLUTO-CLOUDY Interface (TPCI), which is well suited to simulate photoevaporative flows under strong irradiation. The code includes the electromagnetic spectrum from X-rays to the radio range and solves the photoionization and chemical network of the 30 lightest elements. TPCI follows an iterative numerical scheme: first, the equilibrium state of the medium is solved for a given radiation field by CLOUDY, resulting in a net radiative heating or cooling. In the second step, the latter influences the (magneto-) hydrodynamic evolution calculated by PLUTO. Here, we validated the one-dimensional version of the code on the basis of four test problems: photoevaporation of a cool hydrogen cloud, cooling of coronal plasma, formation of a Strömgren sphere, and the evaporating atmosphere of a hot Jupiter. This combination of an equilibrium photoionization solver with a general MHD code provides an advanced simulation tool applicable to a variety of astrophysical problems. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/576/A21

  17. The multigrid method for semi-implicit hydrodynamics codes

    SciTech Connect

    Brandt, A.; Dendy, J.E. Jr.; Ruppel, H.

    1980-03-01

    The multigrid method is applied to the pressure iteration in both Eulerian and Lagrangian codes, and computational examples of its efficiency are presented. In addition a general technique for speeding up the calculation of very low Mach number flows is presented. The latter feature is independent of the multigrid algorithm.
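The pressure iteration described above can be illustrated with a minimal one-dimensional multigrid V-cycle. This is a hypothetical Python sketch of the general method (Gauss-Seidel smoothing, full-weighting restriction, linear prolongation), not the authors' Eulerian/Lagrangian implementation:

```python
# Minimal 1-D multigrid V-cycle for a Poisson-type pressure equation u'' = f
# with u = 0 at both ends. Hypothetical sketch of the general algorithm only.

def smooth(u, f, h, sweeps=3):
    # Gauss-Seidel relaxation on the interior points.
    n = len(u) - 1
    for _ in range(sweeps):
        for i in range(1, n):
            u[i] = 0.5 * (u[i-1] + u[i+1] - h*h*f[i])

def residual(u, f, h):
    n = len(u) - 1
    r = [0.0] * (n + 1)
    for i in range(1, n):
        r[i] = f[i] - (u[i-1] - 2*u[i] + u[i+1]) / (h*h)
    return r

def restrict(r):
    # Full weighting: fine grid (n+1 points) -> coarse grid (n//2+1 points).
    nc = (len(r) - 1) // 2
    return [0.0] + [0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
                    for i in range(1, nc)] + [0.0]

def prolong(e):
    # Linear interpolation coarse -> fine.
    nf = 2 * (len(e) - 1)
    out = [0.0] * (nf + 1)
    for i in range(len(e)):
        out[2*i] = e[i]
    for i in range(1, nf, 2):
        out[i] = 0.5 * (out[i-1] + out[i+1])
    return out

def v_cycle(u, f, h):
    if len(u) <= 3:                    # coarsest grid: relax to convergence
        smooth(u, f, h, sweeps=50)
        return u
    smooth(u, f, h)                    # pre-smoothing
    r = restrict(residual(u, f, h))
    e = v_cycle([0.0] * len(r), r, 2*h)
    e = prolong(e)
    for i in range(len(u)):
        u[i] += e[i]                   # coarse-grid correction
    smooth(u, f, h)                    # post-smoothing
    return u
```

Each V-cycle reduces the residual by roughly an order of magnitude independent of grid size, which is the property that makes multigrid attractive for pressure iterations.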

  19. Gaseous laser targets and optical diagnostics for studying compressible hydrodynamic instabilities

    SciTech Connect

    Edwards, J M; Robey, H; Mackinnon, A

    2001-06-29

    This project explores the combination of optical diagnostics and gaseous targets to obtain information about compressible turbulent flows that cannot be derived from traditional laser experiments, for the purposes of verification and validation (V&V) of hydrodynamics models and understanding of scaling. First-year objectives: develop and characterize a blast wave-gas jet test bed; perform single-pulse shadowgraphy of blast wave interaction with a turbulent gas jet as a function of blast wave Mach number; explore double-pulse shadowgraphy and image correlation for extracting velocity spectra in the shock-turbulent flow interaction; and explore the use/adaptation of advanced diagnostics.

  20. A compressible high-order unstructured spectral difference code for stratified convection in rotating spherical shells

    NASA Astrophysics Data System (ADS)

    Wang, Junfeng; Liang, Chunlei; Miesch, Mark S.

    2015-06-01

    We present a novel and powerful Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating thermal convection and related fluid dynamics in the interiors of stars and planets. The computational geometries are treated as rotating spherical shells filled with stratified gas. The hydrodynamic equations are discretized by a robust and efficient high-order Spectral Difference Method (SDM) on unstructured meshes. The computational stencil of the spectral difference method is compact and advantageous for parallel processing. CHORUS demonstrates excellent parallel performance for all test cases reported in this paper, scaling up to 12 000 cores on the Yellowstone High-Performance Computing cluster at NCAR. The code is verified by defining two benchmark cases for global convection in Jupiter and the Sun. CHORUS results are compared with results from the ASH code and good agreement is found. The CHORUS code creates new opportunities for simulating such varied phenomena as multi-scale solar convection, core convection, and convection in rapidly-rotating, oblate stars.

  1. A new relativistic hydrodynamics code for high-energy heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Okamoto, Kazuhisa; Akamatsu, Yukinao; Nonaka, Chiho

    2016-10-01

    We construct a new Godunov-type relativistic hydrodynamics code in Milne coordinates, using a Riemann solver based on the two-shock approximation, which is stable in the presence of large shock waves. We check the correctness of the numerical algorithm by comparing numerical calculations and analytical solutions in various problems, such as shock tubes, expansion of matter into the vacuum, the Landau-Khalatnikov solution, and propagation of fluctuations around Bjorken flow and Gubser flow. We investigate the energy and momentum conservation property of our code in a test problem of longitudinal hydrodynamic expansion with an initial condition for high-energy heavy-ion collisions. We also discuss numerical viscosity in the test problems of expansion of matter into the vacuum and conservation properties. Furthermore, we discuss how the numerical stability is affected by the source terms of relativistic numerical hydrodynamics in Milne coordinates.
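A Godunov-type update of this kind can be sketched with a simple approximate Riemann solver. The toy below uses a Cartesian grid and an HLL flux rather than the paper's two-shock solver in Milne coordinates; it is a hypothetical illustration of the general scheme, run on the classic Sod shock tube:

```python
# First-order Godunov finite-volume scheme for the 1-D Euler equations with
# an HLL approximate Riemann solver. Hypothetical sketch, not the paper's code.

GAMMA = 1.4

def prim(U):
    # Conserved [rho, rho*u, E] -> primitive (rho, u, p).
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return rho, u, p

def flux(U):
    rho, u, p = prim(U)
    return [rho * u, rho * u * u + p, u * (U[2] + p)]

def hll_flux(UL, UR):
    rhoL, uL, pL = prim(UL)
    rhoR, uR, pR = prim(UR)
    cL = (GAMMA * pL / rhoL) ** 0.5
    cR = (GAMMA * pR / rhoR) ** 0.5
    sL = min(uL - cL, uR - cR)   # fastest left-going wave speed estimate
    sR = max(uL + cL, uR + cR)   # fastest right-going wave speed estimate
    FL, FR = flux(UL), flux(UR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return [(sR * FL[k] - sL * FR[k] + sL * sR * (UR[k] - UL[k])) / (sR - sL)
            for k in range(3)]

def godunov_step(U, dx, dt):
    n = len(U)
    F = [hll_flux(U[i], U[i + 1]) for i in range(n - 1)]
    Unew = [row[:] for row in U]
    for i in range(1, n - 1):
        for k in range(3):
            Unew[i][k] = U[i][k] - dt / dx * (F[i][k] - F[i - 1][k])
    return Unew

def sod(n=200, t_end=0.2):
    # Sod shock tube: high-pressure gas on the left, low on the right.
    dx = 1.0 / n
    U = []
    for i in range(n):
        x = (i + 0.5) * dx
        rho, p = (1.0, 1.0) if x < 0.5 else (0.125, 0.1)
        U.append([rho, 0.0, p / (GAMMA - 1.0)])
    t = 0.0
    while t < t_end - 1e-12:
        smax = max(abs(u) + (GAMMA * p / r) ** 0.5 for r, u, p in map(prim, U))
        dt = min(0.4 * dx / smax, t_end - t)   # CFL-limited time step
        U = godunov_step(U, dx, dt)
        t += dt
    return U
```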

  2. Terminal Ballistic Application of Hydrodynamic Computer Code Calculations.

    DTIC Science & Technology

    1977-04-01

    For this test, the length-to-diameter ratio was two, and therefore edge effects are important. The results of HEMP code calculations are also plotted ... The distributions of Allison and Vitali are also plotted in Figure 13. Good agreement exists between the experimental and calculated collapse ...

  3. SMITE - A Second Order Eulerian Code for Hydrodynamic and Elastic-Plastic Problems

    DTIC Science & Technology

    1975-08-01

    SMITE, a second-order Eulerian code for hydrodynamic and elastic-plastic problems. Prepared by Mathematical Applications Group, Inc., Elmsford, New York, for the Ballistic Research Laboratories, August 1975.

  4. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
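Substep (3), the implicit solution of the stiff diffusion terms, typically reduces to solving linear systems that are tridiagonal in 1-D. The following is a hypothetical backward-Euler sketch with a Thomas solve, illustrating the implicit stage only (the CRASH implementation is multi-dimensional and flux-limited):

```python
# Backward-Euler step for the model diffusion equation dE/dt = d/dx(D dE/dx)
# with zero-flux boundaries, solved with the Thomas (tridiagonal) algorithm.
# Hypothetical 1-D sketch of the implicit substep, not the CRASH solver.

def thomas(a, b, c, d):
    # Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for x.
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_diffusion_step(E, D, dx, dt):
    # Assemble (I + dt*A) E_new = E_old with reflecting (zero-flux) ends.
    n = len(E)
    r = dt * D / (dx * dx)
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    a[0] = 0.0
    c[-1] = 0.0
    b[0] = 1.0 + r
    b[-1] = 1.0 + r
    return thomas(a, b, c, E[:])
```

With zero-flux boundaries the update conserves the total energy to round-off and remains stable for arbitrarily large time steps, which is why the stiff diffusion is treated implicitly while the hydrodynamics stays explicit.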

  5. Hydrodynamic Instability, Integrated Code, Laboratory Astrophysics, and Astrophysics

    NASA Astrophysics Data System (ADS)

    Takabe, Hideaki

    2016-10-01

    This is an article for the memorial lecture of the Edward Teller Medal, presented at the IFSA03 conference held on September 12th, 2003, at Monterey, CA. The author focuses on his main contributions to fusion science and its extension to astrophysics in the field of theory and computation by picking up five topics. The first one is the anomalous resistivity to hot electrons penetrating the over-dense region through the ion wave turbulence driven by the return current compensating the current flow by the hot electrons. It is concluded that almost the same value of potential as the average kinetic energy of the hot electrons is realized to prevent the penetration of the hot electrons. The second is the ablative stabilization of the Rayleigh-Taylor instability at the ablation front and its dispersion relation, the so-called Takabe formula. This formula gave a principal guideline for stable target design. The author has developed an integrated code ILESTA (1D & 2D) for analyses and design of laser-produced plasma including implosion dynamics. It is also applied to design high-gain targets. The third is the development of the integrated code ILESTA. The fourth is on Laboratory Astrophysics with intense lasers. This consists of two parts: one is a review of its historical background and the other is on how we relate laser plasma to wide-ranging astrophysics and the purposes for promoting such research. In relation to one purpose, I give a comment on anomalous transport of relativistic electrons in the Fast Ignition laser fusion scheme. Finally, I briefly summarize recent activity in relation to application of the author's experience to the development of an integrated code for studying extreme phenomena in astrophysics.

  7. Modified-Gravity-GADGET: a new code for cosmological hydrodynamical simulations of modified gravity models

    NASA Astrophysics Data System (ADS)

    Puchwein, Ewald; Baldi, Marco; Springel, Volker

    2013-11-01

    We present a new massively parallel code for N-body and cosmological hydrodynamical simulations of modified gravity models. The code employs a multigrid-accelerated Newton-Gauss-Seidel relaxation solver on an adaptive mesh to efficiently solve for perturbations in the scalar degree of freedom of the modified gravity model. As this new algorithm is implemented as a module for the P-GADGET3 code, it can at the same time follow the baryonic physics included in P-GADGET3, such as hydrodynamics, radiative cooling and star formation. We demonstrate that the code works reliably by applying it to simple test problems that can be solved analytically, as well as by comparing cosmological simulations to results from the literature. Using the new code, we perform the first non-radiative and radiative cosmological hydrodynamical simulations of an f(R)-gravity model. We also discuss the impact of active galactic nucleus feedback on the matter power spectrum, as well as degeneracies between the influence of baryonic processes and modifications of gravity.

  8. High strain Lagrangian hydrodynamics: A three dimensional SPH code for dynamic material response

    NASA Astrophysics Data System (ADS)

    Allahdadi, Firooz A.; Carney, Theodore C.; Hipp, Jim R.; Libersky, Larry D.; Petschek, Albert G.

    1993-03-01

    MAGI, a three-dimensional shock and material response code which is based on Smoothed Particle Hydrodynamics is described. Calculations are presented and compared with experimental results. The SPH method is unique in that it employs no spatial mesh. The absence of a grid leads to some nice features such as the ability to handle large distortions in a pure Lagrangian frame and a natural treatment of voids. Both of these features are important in the tracking of debris clouds produced by hypervelocity impact, a difficult problem for which Smoothed Particle Hydrodynamics seems ideally suited. It is believed this is the first application of SPH to the dynamics of elastic-plastic solids.
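The grid-free density estimate at the heart of SPH can be sketched in a few lines. A hypothetical 1-D example with the standard cubic spline kernel (the production code is 3-D with strength and fracture models on top):

```python
# SPH summation density in 1-D with the standard cubic spline (M4) kernel.
# Hypothetical sketch of the kernel-summation idea, not the MAGI code.
import math

def w_cubic(r, h):
    # Cubic spline kernel, 1-D normalization 2/(3h); support radius 2h.
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5*q*q + 0.75*q*q*q)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    # Mesh-free density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h).
    return [sum(m[j] * w_cubic(abs(xi - x[j]), h) for j in range(len(x)))
            for xi in x]
```

Particles near a free surface see fewer neighbours, so the summation density drops there; this built-in deficit is one way SPH represents voids without any special treatment.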

  9. Coded strobing photography: compressive sensing of high speed periodic videos.

    PubMed

    Veeraraghavan, Ashok; Reddy, Dikpal; Raskar, Ramesh

    2011-04-01

    We show that, via temporal modulation, one can observe and capture a high-speed periodic video well beyond the abilities of a low-frame-rate camera. By strobing the exposure with unique sequences within the integration time of each frame, we take coded projections of dynamic events. From a sequence of such frames, we reconstruct a high-speed video of the high-frequency periodic process. Strobing is used in entertainment, medical imaging, and industrial inspection to generate lower beat frequencies. But this is limited to scenes with a detectable single dominant frequency and requires high-intensity lighting. In this paper, we address the problem of sub-Nyquist sampling of periodic signals and show designs to capture and reconstruct such signals. The key result is that for such signals, the Nyquist rate constraint can be imposed on the strobe rate rather than the sensor rate. The technique is based on intentional aliasing of the frequency components of the periodic signal while the reconstruction algorithm exploits recent advances in sparse representations and compressive sensing. We exploit the sparsity of periodic signals in the Fourier domain to develop reconstruction algorithms that are inspired by compressive sensing.
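The deliberate-aliasing idea can be demonstrated with plain uniform sub-Nyquist sampling. The sketch below is a hypothetical simplification: the actual system strobes with pseudorandom codes and uses sparse recovery, whereas here a single tone in a known coarse band (the assumed prior, standing in for Fourier-domain sparsity) is unfolded from its alias:

```python
# Sub-Nyquist sampling of a periodic tone: a 9 Hz signal sampled at 4 Hz
# aliases to 1 Hz; knowing the coarse band [2*fs, 2.5*fs] lets us unfold it.
# Hypothetical toy, not the coded-strobing reconstruction algorithm.
import math, cmath

f0, fs, n = 9.0, 4.0, 64
samples = [math.cos(2 * math.pi * f0 * k / fs) for k in range(n)]

def dft_peak_bin(x):
    # Index of the largest-magnitude DFT bin in (0, n/2].
    n = len(x)
    def mag(b):
        return abs(sum(x[k] * cmath.exp(-2j * math.pi * b * k / n)
                       for k in range(n)))
    return max(range(1, n // 2 + 1), key=mag)

f_alias = dft_peak_bin(samples) * fs / n   # apparent (aliased) frequency
f_recovered = 2.0 * fs + f_alias           # unfold, assuming f0 in [2fs, 2.5fs]
```

The key point mirrored from the paper: the sampling rate only has to satisfy a Nyquist-like constraint relative to the signal's sparse spectral content, not its absolute bandwidth.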

  10. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
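A Golomb-Rice coder of the kind combined with the single-slope ADC can be sketched in software (a hypothetical illustration; the chip realizes it in mixed-signal hardware at the column level, and signed residuals would first be mapped to non-negative integers):

```python
# Golomb-Rice coding of non-negative prediction residuals: the quotient
# v >> k is sent in unary, the remainder in k binary bits. Small residuals
# therefore cost few bits. Hypothetical software sketch only.

def rice_encode(values, k):
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                         # unary part, 0-terminated
        bits += [(r >> (k - 1 - i)) & 1 for i in range(k)]  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                         # read unary quotient
            q += 1
            pos += 1
        pos += 1                                      # skip terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]
            pos += 1
        out.append((q << k) | r)
    return out
```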

  11. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.

  12. Simulating hypervelocity impact effects on structures using the smoothed particle hydrodynamics code MAGI

    NASA Technical Reports Server (NTRS)

    Libersky, Larry; Allahdadi, Firooz A.; Carney, Theodore C.

    1992-01-01

    Analysis of interaction occurring between space debris and orbiting structures is of great interest to the planning and survivability of space assets. Computer simulation of the impact events using hydrodynamic codes can provide some understanding of the processes, but the problems involved with this fundamental approach are formidable. First, any realistic simulation is necessarily three-dimensional, e.g., the impact and breakup of a satellite. Second, the thicknesses of important components such as satellite skins or bumper shields are small with respect to the dimensions of the structure as a whole, presenting severe zoning problems for codes. Third, the debris cloud produced by the primary impact will yield many secondary impacts which will contribute to the damage and possible breakup of the structure. The problem was approached by choosing a relatively new computational technique that has virtues peculiar to space impacts. The method is called Smoothed Particle Hydrodynamics.

  13. High-fidelity numerical simulations of compressible turbulence and mixing generated by hydrodynamic instabilities

    NASA Astrophysics Data System (ADS)

    Movahed, Pooya

    High-speed flows are prone to hydrodynamic interfacial instabilities that evolve to turbulence, thereby intensely mixing different fluids and dissipating energy. The lack of knowledge of these phenomena has impeded progress in a variety of disciplines. In science, a full understanding of mixing between heavy and light elements after the collapse of a supernova and between adjacent layers of different density in geophysical (atmospheric and oceanic) flows remains lacking. In engineering, the inability to achieve ignition in inertial fusion and efficient combustion constitute further examples of this lack of basic understanding of turbulent mixing. In this work, my goal is to develop accurate and efficient numerical schemes and employ them to study compressible turbulence and mixing generated by interactions between shocked (Richtmyer-Meshkov) and accelerated (Rayleigh-Taylor) interfaces, which play important roles in high-energy-density physics environments. To accomplish my goal, a hybrid high-order central/discontinuity-capturing finite difference scheme is first presented. The underlying principle is that, to accurately and efficiently represent both broadband motions and discontinuities, non-dissipative methods are used where the solution is smooth, while the more expensive and dissipative capturing schemes are applied near discontinuous regions. Thus, an accurate numerical sensor is developed to discriminate between smooth regions, shocks and material discontinuities, which all require a different treatment. The interface capturing approach is extended to central differences, such that smooth distributions of varying specific heats ratio can be simulated without generating spurious pressure oscillations. I verified and validated this approach against a stringent suite of problems including shocks, interfaces, turbulence and two-dimensional single-mode Richtmyer-Meshkov instability simulations. 
The three-dimensional code is shown to scale well up to 4000 cores.

  14. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.

  15. A 3+1 dimensional viscous hydrodynamic code for relativistic heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Karpenko, Iu.; Huovinen, P.; Bleicher, M.

    2014-11-01

    We describe the details of a 3+1 dimensional relativistic hydrodynamic code for the simulations of quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. The code solves the equations of relativistic viscous hydrodynamics in the Israel-Stewart framework. With the help of ideal-viscous splitting, we keep the ability to solve the equations of ideal hydrodynamics in the limit of zero viscosities using a Godunov-type algorithm. Milne coordinates are used to treat the predominant expansion in the longitudinal (beam) direction effectively. The results are successfully tested against known analytical relativistic inviscid and viscous solutions, as well as against an existing 2+1D relativistic viscous code.
    Catalogue identifier: AETZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETZ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 13 825
    No. of bytes in distributed program, including test data, etc.: 92 750
    Distribution format: tar.gz
    Programming language: C++
    Computer: any with a C++ compiler and the CERN ROOT libraries
    Operating system: tested on GNU/Linux Ubuntu 12.04 x64 (gcc 4.6.3), GNU/Linux Ubuntu 13.10 (gcc 4.8.2), Red Hat Linux 6 (gcc 4.4.7)
    RAM: scales with the number of cells in the hydrodynamic grid; 1900 Mbytes for a 3D 160×160×100 grid
    Classification: 1.5, 4.3, 12
    External routines: CERN ROOT (http://root.cern.ch), Gnuplot (http://www.gnuplot.info/) for plotting the results
    Nature of problem: relativistic hydrodynamical description of the 3-dimensional quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions
    Solution method: finite volume Godunov-type method
    Running time: scales with the number of hydrodynamic cells; typical running times on Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz, single thread mode, 160

  16. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maɪlʌv]. We do not support the use of the code for military purposes.
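The Barnes-Hut treatment of self-gravity can be sketched with a minimal 2-D quadtree. This is a hypothetical CPU-side toy using monopole moments only, not the CUDA implementation:

```python
# Minimal 2-D Barnes-Hut sketch: a quadtree of mass monopoles, with distant
# cells approximated by their centre of mass (opening angle theta, G = 1).
# Hypothetical illustration, not the miluphCUDA tree code.
import math

class Node:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half   # square cell: centre, half-size
        self.mass = self.mx = self.my = 0.0
        self.children = None
        self.body = None
        self.count = 0

    def _child(self, x, y):
        return self.children[(2 if x > self.cx else 0) + (1 if y > self.cy else 0)]

    def insert(self, x, y, m):
        self.count += 1
        self.mass += m
        self.mx += m * x
        self.my += m * y
        if self.count == 1:
            self.body = (x, y, m)          # leaf holding a single body
            return
        if self.children is None:
            h = self.half / 2
            self.children = [Node(self.cx + dx * h, self.cy + dy * h, h)
                             for dx in (-1, 1) for dy in (-1, 1)]
        if self.body is not None:          # push the stored body down one level
            bx, by, bm = self.body
            self.body = None
            self._child(bx, by).insert(bx, by, bm)
        self._child(x, y).insert(x, y, m)

    def accel(self, x, y, theta=0.5):
        # Acceleration at (x, y) from all bodies in this cell.
        if self.count == 0:
            return 0.0, 0.0
        dx = self.mx / self.mass - x
        dy = self.my / self.mass - y
        d = math.hypot(dx, dy)
        # Far enough away (cell size < theta * distance): use the monopole.
        if self.count == 1 or self.children is None or 2 * self.half < theta * d:
            if d == 0.0:
                return 0.0, 0.0            # skip self-interaction
            f = self.mass / d**3
            return f * dx, f * dy
        ax = ay = 0.0
        for c in self.children:
            fx, fy = c.accel(x, y, theta)
            ax += fx
            ay += fy
        return ax, ay

def lcg_points(n, seed=12345):
    # Deterministic pseudo-random points in the unit square (for the demo).
    pts, s = [], seed
    for _ in range(n):
        s = (1103515245 * s + 12345) % 2**31
        x = s / 2**31
        s = (1103515245 * s + 12345) % 2**31
        pts.append((x, s / 2**31))
    return pts
```

The opening-angle test turns the O(N^2) direct sum into O(N log N), at the cost of a small, controllable force error.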

  17. Simulation of a ceramic impact experiment using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-08-01

    We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPHINX. We describe a new brittle fracture model that we have implemented into SPHINX, and we discuss how the model differs from others. To illustrate the code's current capability, we simulate an experiment in which a tungsten rod strikes a target of heavily confined ceramic. Simulations in 3D at relatively coarse resolution yield poor results. However, 2D plane-strain approximations to the test produce crack patterns that are strikingly similar to the data, although the fracture model needs further refinement to match some of the finer details. We conclude with an outline of plans for continuing research and development.

  18. Modelling of Be Disks in Binary Systems Using the Hydrodynamic Code PLUTO

    NASA Astrophysics Data System (ADS)

    Cyr, I. H.; Panoglou, D.; Jones, C. E.; Carciofi, A. C.

    2016-11-01

    The study of the gas structure and dynamics of Be star disks is critical to our understanding of the Be star phenomenon. The central star is the major force driving the evolution of these disks; however, other external forces may also affect the formation of the disk, for example, the gravitational torque produced in a close binary system. We are interested in understanding the gravitational effects of a low-mass binary companion on the formation and growth of a disk in a close binary system. To study these effects, we used the grid-based hydrodynamic code PLUTO. Because this code has not been used to study such systems before, we compared our simulations against codes used in previous work on binary systems. We were able to simulate the formation of a disk in both an isolated and binary system. Our current results suggest that PLUTO is in fact a well suited tool to study the dynamics of Be disks.

  19. Investigating the Magnetorotational Instability with Dedalus, an Open-Source Hydrodynamics Code

    SciTech Connect

    Burns, Keaton J. (UC Berkeley; SLAC)

    2012-08-31

    The magnetorotational instability is a fluid instability that causes the onset of turbulence in discs with poloidal magnetic fields. It is believed to be an important mechanism in the physics of accretion discs, namely in its ability to transport angular momentum outward. A similar instability arising in systems with a helical magnetic field may be easier to produce in laboratory experiments using liquid sodium, but the applicability of this phenomenon to astrophysical discs is unclear. To explore and compare the properties of these standard and helical magnetorotational instabilities (MRI and HMRI, respectively), magnetohydrodynamic (MHD) capabilities were added to Dedalus, an open-source hydrodynamics simulator. Dedalus is a Python-based pseudospectral code that uses external libraries and parallelization with the goal of achieving speeds competitive with codes implemented in lower-level languages. This paper will outline the MHD equations as implemented in Dedalus, the steps taken to improve the performance of the code, and the status of MRI investigations using Dedalus.

  20. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.

  1. High Strain Lagrangian Hydrodynamics. A Three-Dimensional SPH Code for Dynamic Material Response

    NASA Astrophysics Data System (ADS)

    Libersky, Larry D.; Petschek, Albert G.; Carney, Theodore C.; Hipp, Jim R.; Allahdadi, Firooz A.

    1993-11-01

    MAGI, a three-dimensional shock and material response code which is based on smoothed particle hydrodynamics (SPH) is described. Calculations are presented and compared with experimental results. The SPH method is unique in that it employs no spatial mesh. The absence of a grid leads to some nice features such as the ability to handle large distortions in a pure Lagrangian frame and a natural treatment of voids. Both of these features are important in the tracking of debris clouds produced by hypervelocity impact—a difficult problem for which SPH seems ideally suited. We believe this is the first application of SPH to the dynamics of elastic-plastic solids.

  2. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
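The convolutional/Viterbi inner stage of such a concatenated channel can be sketched for a small rate-1/2, constraint-length-3 code (generators 7, 5 octal); the Reed-Solomon outer code and interleaver are omitted in this hypothetical example:

```python
# Rate-1/2, K=3 convolutional encoder (generators 7, 5 octal) and a
# hard-decision Viterbi decoder. Hypothetical toy for the inner code of a
# concatenated Reed-Solomon/Viterbi channel; the outer RS stage is omitted.

G = (0b111, 0b101)   # generator polynomials 1+D+D^2 and 1+D^2

def conv_encode(bits):
    state = 0
    out = []
    for b in bits + [0, 0]:                     # flush with two tail bits
        reg = (b << 2) | state                  # (current, prev, prev-prev)
        out += [bin(reg & g).count('1') & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received):
    n_states = 4
    INF = float('inf')
    metric = [0.0] + [INF] * 3                  # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expect = [bin(reg & g).count('1') & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(expect, r))
                ns = reg >> 1
                if m < new_metric[ns]:          # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:-2]                     # drop the tail bits
```

With free distance 5, this inner code corrects any two channel bit errors in a constraint span, and in the concatenated system its residual (bursty) errors are what the outer Reed-Solomon code and deinterleaver clean up.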

  3. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  4. A new multidimensional, energy-dependent two-moment transport code for neutrino-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Just, O.; Obergaulinger, M.; Janka, H.-T.

    2015-11-01

    We present the new code ALCAR developed to model multidimensional, multienergy-group neutrino transport in the context of supernovae and neutron-star mergers. The algorithm solves the evolution equations of the zeroth- and first-order angular moments of the specific intensity, supplemented by an algebraic relation for the second-moment tensor to close the system. The scheme takes into account frame-dependent effects of the order O(v/c) as well as the most important types of neutrino interactions. The transport scheme is significantly more efficient than a multidimensional solver of the Boltzmann equation, while it is more accurate and consistent than the flux-limited diffusion method. The finite-volume discretization of the essentially hyperbolic system of moment equations employs methods well-known from hydrodynamics. For the time integration of the potentially stiff moment equations we employ a scheme in which only the local source terms are treated implicitly, while the advection terms are kept explicit, thereby allowing for an efficient computational parallelization of the algorithm. We investigate various problem set-ups in one and two dimensions to verify the implementation and to test the quality of the algebraic closure scheme. In our most detailed test, we compare a fully dynamic, one-dimensional core-collapse simulation with two published calculations performed with well-known Boltzmann-type neutrino-hydrodynamics codes and we find very satisfactory agreement.
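The time-integration strategy described above, implicit local source terms combined with explicit advection, can be sketched on a toy problem. The model equation u_t + c u_x = -λ(u - u_eq) below is an assumption made purely for illustration; it is not the moment system solved by ALCAR:

```python
import numpy as np

# IMEX sketch: explicit first-order upwind advection plus an implicit
# (backward Euler) update of the stiff local relaxation source term.
# Because the source is local, the "implicit" solve is pointwise algebra,
# no large matrix inversion is needed -- the property the paper exploits.

def step(u, c, lam, u_eq, dx, dt):
    u_adv = u - c * dt / dx * (u - np.roll(u, 1))     # explicit upwind (c > 0)
    # solve v = u_adv + dt*lam*(u_eq - v) for v, pointwise:
    return (u_adv + dt * lam * u_eq) / (1.0 + dt * lam)

dx, dt, c, lam, u_eq = 0.01, 0.005, 1.0, 1.0e4, 0.5
u = np.zeros(100)                      # start far from equilibrium
for _ in range(50):
    u = step(u, c, lam, u_eq, dx, dt)

# Despite lam*dt = 50 >> 1, the implicit source keeps the update stable
# and drives u toward its equilibrium value.
assert np.all(np.isfinite(u)) and abs(u.mean() - u_eq) < 1e-2
```

An explicit treatment of the same source term would require dt of order 1/λ; the implicit pointwise solve removes that restriction while leaving the advection step cheaply parallelizable.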

  5. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must take saturation into account. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) of up to 10 dB.
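The saturation mechanism the paper targets can be illustrated numerically. The sketch below is a hypothetical toy model (random scene, random grayscale aperture, arbitrary saturation level), not the UAGCA design itself: it only shows that lowering the aperture transmittance scales down the compressive measurements, so fewer of them hit the sensor's saturation level.

```python
import numpy as np

# Toy compressive spectral measurement: y = (T * A) @ x, where A is a
# grayscale coded aperture, x a scene, and T a global transmittance.
# All quantities below are made up for illustration.

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=256)            # toy scene
aperture = rng.uniform(0.0, 1.0, size=(64, 256))   # grayscale aperture entries
saturation_level = 40.0                            # sensor full-well, arbitrary units

def saturated_count(transmittance):
    y = (transmittance * aperture) @ scene         # compressive measurements
    return int(np.sum(y >= saturation_level))

high = saturated_count(1.0)   # fully open grayscale aperture
low = saturated_count(0.5)    # reduced average transmittance
assert low <= high            # lower transmittance -> fewer saturated measurements
```

The paper's contribution is to adapt the aperture entries between snapshots rather than scale them globally, but the trade-off being managed, transmittance versus saturation, is the one shown here.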

  6. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  7. A channel differential EZW coding scheme for EEG data compression.

    PubMed

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

    In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.

  8. A coded aperture compressive imaging array and its visual detection and tracking algorithms for surveillance systems.

    PubMed

    Chen, Jing; Wang, Yongtian; Wu, Hanxiao

    2012-10-29

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance systems. A parallel coded aperture compressive imaging system is proposed to reduce the needed high resolution coded mask requirements and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion targets detection and tracking algorithms directly using the compressive sampling images are developed. A mixture of Gaussian distribution is applied in the compressive image space to model the background image and for foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noises templates is sparsely represented. An l(1) optimization algorithm is used to solve the sparse coefficient of templates. Experimental results demonstrate that low dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase mask, motion detection algorithms using a random binary phase mask can yield better detection results. However using random Gaussian and Toeplitz phase mask can achieve high resolution reconstructed image. Our tracking algorithm can achieve a real time speed that is up to 10 times faster than that of the l(1) tracker without any optimization.

  9. Application of P4 Polyphase codes pulse compression method to air-coupled ultrasonic testing systems.

    PubMed

    Li, Honggang; Zhou, Zhenggan

    2017-03-03

Air-coupled ultrasonic testing systems are usually restricted by low signal-to-noise ratios (SNR). The use of pulse compression techniques based on P4 Polyphase codes can improve the ultrasound SNR. This type of code yields a higher peak-to-sidelobe (PSL) ratio and lower noise in the compressed signal. This paper proposes the use of P4 Polyphase sequences to code ultrasound in an NDT system based on an air-coupled piezoelectric transducer. Furthermore, the principle of selecting the parameters of the P4 Polyphase sequence to obtain the optimal pulse compression effect is also studied. Successful results are presented on a molded composite material. A hybrid signal processing method achieves an improvement in SNR of up to 12.11 dB and in time-domain resolution of about 35% when compared with the conventional pulse compression technique.
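A minimal sketch of P4 polyphase pulse compression: generate the length-N P4 phase code and compress it with its matched filter (correlation). This is a generic illustration of the coding technique, with an arbitrary N, not the paper's air-coupled measurement chain:

```python
import numpy as np

# P4 polyphase code: phase of the n-th subpulse (0-based) is
# pi*n^2/N - pi*n. Matched filtering concentrates the pulse energy into
# a narrow main lobe, which is the SNR/resolution gain exploited above.

N = 16
n = np.arange(N)
phase = np.pi * n**2 / N - np.pi * n          # P4 phase sequence
code = np.exp(1j * phase)                     # unit-amplitude subpulses

# Matched filter = cross-correlation of the code with itself.
compressed = np.abs(np.correlate(code, code, mode="full"))

peak = compressed.max()
sidelobes = np.delete(compressed, compressed.argmax())
assert np.isclose(peak, N)                    # coherent compression gain of N
assert sidelobes.max() < peak                 # energy concentrated in the main lobe
print(f"peak-to-max-sidelobe ratio: {20*np.log10(peak/sidelobes.max()):.1f} dB")
```

Longer codes (larger N) raise the compression gain and, as the abstract notes, the choice of sequence parameters controls the sidelobe behavior.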

  10. Property study of integer wavelet transform lossless compression coding based on lifting scheme

    NASA Astrophysics Data System (ADS)

    Xie, Cheng Jun; Yan, Su; Xiang, Yang

    2006-01-01

In this paper, integer wavelet transform algorithms combining SPIHT and arithmetic coding for lossless image compression, and their improvements, are studied. The experimental results show that, once the number of vanishing moments of the low-pass filter is fixed, the improvement in compression is not evident, provided that the integer wavelet transform is invertible; the energy-compaction property increases monotonically with transform scale. For the same wavelet basis, the number of vanishing moments of the low-pass filter is more important than that of the high-pass filter in improving image compression. Lossless compression coding based on the lifting-scheme integer wavelet transform does not depend on the entropy of the image: the compression effectiveness depends on the energy-compaction property of the image transform.
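The key property of lifting-based integer wavelets, integer-to-integer and exactly invertible, hence lossless, can be shown with the simplest case, the integer Haar (S-) transform. This is an illustration of the lifting mechanism, not the specific wavelet bases studied in the paper:

```python
# One lifting level of the integer Haar transform:
#   predict: d[i] = x[2i+1] - x[2i]
#   update:  s[i] = x[2i] + floor(d[i]/2)
# Both steps use only integer arithmetic (>> 1 is a floor division, also
# for negative d), so the inverse reproduces the input exactly.

def forward(x):
    d = [x[2*i + 1] - x[2*i] for i in range(len(x) // 2)]    # predict
    s = [x[2*i] + (d[i] >> 1) for i in range(len(x) // 2)]   # update
    return s, d

def inverse(s, d):
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)          # undo update
        x += [even, even + di]         # undo predict
    return x

samples = [12, 14, 200, 190, 3, 3, 90, 100]
s, d = forward(samples)
assert inverse(s, d) == samples        # perfectly lossless round trip
```

The detail coefficients d are small wherever the signal is smooth, which is the energy-compaction property the abstract identifies as driving compression performance.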

  11. GLS coding based security solution to JPEG with the structure of aggregated compression and encryption

    NASA Astrophysics Data System (ADS)

    Zhang, Yushu; Xiao, Di; Liu, Hong; Nan, Hai

    2014-05-01

There exists a close relation among chaos, coding and cryptography. All three can be combined into aggregated chaos-based coding and cryptography (ATC) to compress and encrypt data simultaneously. Image data in particular exhibit high redundancy and are widely transmitted, so research on ATC for images is well worthwhile and helpful to real applications.

  12. AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    O’Connor, Evan

    2015-08-15

    We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.

  13. An Open-source Neutrino Radiation Hydrodynamics Code for Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    O'Connor, Evan

    2015-08-01

    We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino-matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.

  14. Pulse code modulation data compression for automated test equipment

    SciTech Connect

    Navickas, T.A.; Jones, S.G.

    1991-05-01

Development of automated test equipment for an advanced telemetry system requires continuous monitoring of PCM data while exercising telemetry inputs. This requirement leads to a large amount of data that needs to be stored and later analyzed. For example, a data stream of 4 Mbits/s and a test time of thirty minutes would yield 900 Mbytes of raw data. With this raw data, information needs to be stored to correlate the raw data to the test stimulus, leading to a total of 1.8 Gbytes of data to be stored and analyzed. There is no method to analyze this amount of data in a reasonable time, so a data compression method is needed to reduce the amount of data collected to a reasonable amount. The solution to the problem was data reduction, accomplished by real-time limit checking, time stamping, and smart software. Limit checking was accomplished by an eight-state finite state machine and four compression algorithms. Time stamping was needed to correlate stimulus to the appropriate output for data reconstruction. The software was written in the C programming language with a DOS extender used to allow it to run in extended mode. A 94-98% compression in the amount of data gathered was accomplished using this method.
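The limit-checking idea above, keep a sample only when it moves outside a tolerance band around the last stored value, and time-stamp it so the stream can be reconstructed, can be sketched as follows. The state machine and the four algorithms of the real system are not reproduced; this is one hypothetical variant:

```python
# Limit-checking data reduction: store (time stamp, value) pairs only when
# the value departs from the last stored value by more than `limit`;
# reconstruction holds the last stored value between stamps.

def limit_check_compress(samples, limit):
    stored = [(0, samples[0])]                 # always keep the first sample
    for t, v in enumerate(samples[1:], start=1):
        if abs(v - stored[-1][1]) > limit:
            stored.append((t, v))              # (time stamp, value)
    return stored

def reconstruct(stored, length):
    out, value, idx = [], stored[0][1], 0
    for t in range(length):
        if idx + 1 < len(stored) and stored[idx + 1][0] == t:
            idx += 1
            value = stored[idx][1]
        out.append(value)
    return out

signal = [10, 10, 10, 11, 25, 25, 26, 10, 10, 10]
stored = limit_check_compress(signal, limit=2)
approx = reconstruct(stored, len(signal))
assert len(stored) < len(signal)                        # data actually reduced
assert all(abs(a - b) <= 2 for a, b in zip(approx, signal))
```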

  15. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) using a variable-length code based on run-length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root-mean-square difference as low as 1.08%.
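The two coding steps named above, a run-length-encoded binary significance map plus direct storage of the significant coefficients, can be sketched on a made-up coefficient vector (real DWT output and the paper's exact bitstream format are not reproduced):

```python
# Thresholding + significance map + run-length encoding of the map.
# Significant coefficients are kept verbatim; insignificant ones are
# dropped and restored as zeros.

def encode(coeffs, threshold):
    sig_map = [1 if abs(c) >= threshold else 0 for c in coeffs]
    significant = [c for c in coeffs if abs(c) >= threshold]
    runs, prev, count = [], sig_map[0], 0       # RLE as (bit, run length)
    for bit in sig_map:
        if bit == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = bit, 1
    runs.append((prev, count))
    return runs, significant

def decode(runs, significant, fill=0.0):
    sig_map = [bit for bit, n in runs for _ in range(n)]
    it = iter(significant)
    return [next(it) if bit else fill for bit in sig_map]

coeffs = [0.9, 0.02, 0.01, -0.7, 0.0, 0.0, 0.6, 0.03]
runs, significant = encode(coeffs, threshold=0.5)
restored = decode(runs, significant)
assert restored == [0.9, 0.0, 0.0, -0.7, 0.0, 0.0, 0.6, 0.0]
```

Only the significant coefficients are stored exactly; the loss is confined to the coefficients below the energy-packing threshold.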

  16. Image compression with embedded wavelet coding via vector quantization

    NASA Astrophysics Data System (ADS)

    Katsavounidis, Ioannis; Kuo, C.-C. Jay

    1995-09-01

In this research, we improve Shapiro's EZW algorithm by performing vector quantization (VQ) of the wavelet transform coefficients. The proposed VQ scheme uses different vector dimensions for different wavelet subbands and also different codebook sizes, so that more bits are assigned to those subbands that have more energy. Another feature is that the vector codebooks used are tree-structured to maintain the embedding property. Finally, the energy of these vectors is used as a prediction parameter between different scales to improve the performance. We investigate the performance of the proposed method together with the 7/9-tap biorthogonal wavelet basis, and look into ways to incorporate lossless compression techniques.

  17. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
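Of the three encoders compared above, the Hadamard transform is the simplest to sketch. The block below is a minimal illustration of transform coding, transform a block, keep only the large coefficients, invert, with an invented smooth image block and retention rule:

```python
import numpy as np

# 2-D Hadamard transform coding of an image block. The Sylvester
# construction builds H(2n) from H(n) by Kronecker product; H is
# orthogonal up to the scale factor N.

def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    return H

N = 8
H = hadamard(N)
assert np.allclose(H @ H.T, N * np.eye(N))         # orthogonality up to scale

block = np.outer(np.arange(N), np.ones(N)) + 2.0   # smooth toy image block
coeffs = H @ block @ H.T / N                       # 2-D Hadamard transform

# Energy compaction: discard small coefficients, then reconstruct.
kept = np.where(np.abs(coeffs) >= 1.0, coeffs, 0.0)
approx = H.T @ kept @ H / N
assert np.count_nonzero(kept) < N * N              # far fewer coefficients kept
assert np.max(np.abs(approx - block)) < 1.0        # reconstruction stays close
```

For this smooth block the transform packs the energy into a handful of coefficients, which is exactly what block quantization then exploits by assigning bits according to coefficient variance.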

  18. Non-US data compression and coding research. FASAC Technical Assessment Report

    SciTech Connect

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years, outside or inside the United States, there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  19. CAESCAP: A computer code for compressed-air energy-storage-plant cycle analysis

    NASA Astrophysics Data System (ADS)

    Fort, J. A.

    1982-10-01

The analysis code, CAESCAP, was developed as an aid in comparing and evaluating proposed compressed air energy storage (CAES) cycles. Input consists of component parameters and working fluid conditions at points along a cycle. The code calculates thermodynamic properties at each point and then calculates overall cycle performance. Working fluid capabilities include steam, air, nitrogen, and parahydrogen. The CAESCAP code was used to analyze a variety of CAES cycles. The combination of straightforward input and flexible design makes the code easy and inexpensive to use.

  20. Hydrodynamic Mixing of Ablator Material into the Compressed Fuel and Hot Spot of Direct-Drive DT Cryogenic Implosions

    NASA Astrophysics Data System (ADS)

    Regan, S. P.; Goncharov, V. N.; Epstein, R.; Betti, R.; Bonino, M. J.; Cao, D.; Collins, T. J. B.; Campbell, E. M.; Forrest, C. J.; Glebov, V. Yu.; Harding, D. R.; Marozas, J. A.; Marshall, F. J.; McKenty, P. W.; Sangster, T. C.; Stoeckl, C.; Luo, R. W.; Schoff, M. E.; Farrell, M.

    2016-10-01

Hydrodynamic mixing of ablator material into the compressed fuel and hot spot of direct-drive DT cryogenic implosions is diagnosed using time-integrated, spatially resolved x-ray spectroscopy. The laser drive ablates most of the 8-μm-thick CH ablator, which is doped with trace amounts of Ge (0.5 at. %) and surrounds the cryogenic DT layer. A small fraction of the ablator material is mixed into the compressed shell and the hot spot by the ablation-front Rayleigh-Taylor hydrodynamic instability seeded by laser imprint, the target mounting stalk, and surface debris. The amount of mix mass inferred from spectroscopic analysis of the Ge K-shell emission will be presented. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  1. One-Dimensional Lagrangian Code for Plasma Hydrodynamic Analysis of a Fusion Pellet Driven by Ion Beams.

    SciTech Connect

    1986-12-01

Version 00 The MEDUSA-IB code performs implosion and thermonuclear burn calculations of an ion beam driven ICF target, based on one-dimensional plasma hydrodynamics and transport theory. It can calculate the following values in spherical geometry through the progress of implosion and fuel burnup of a multi-layered target: (1) hydrodynamic velocities, density, ion, electron and radiation temperatures, radiation energy density, ρRs and burn rate of the target as a function of coordinates and time; (2) fusion gain as a function of time; (3) ionization degree; (4) temperature-dependent ion beam energy deposition; (5) radiation, α-particle and neutron spectra as a function of time.

  2. Barker code pulse compression with a large Doppler tolerance

    NASA Astrophysics Data System (ADS)

    Jiang, Xuefeng; Zhu, Zhaoda

    1991-03-01

    This paper discusses the application of least square approximate inverse filtering techniques to radar range sidelobe suppression. The method is illustrated by application to the design of a compensated noncoherent sidelobe suppression filter (SSF). The compensated noncoherent SSF of the 13-element Barker code has been found. The -40 kHz to 40 kHz Doppler tolerance of the filter is obtained under the conditions that the subpulse duration is equal to 0.7 microsec and the peak sidelobe level is less than -30 dB. Theoretical computations and experimental results indicate that the SSF implemented has much wider Doppler tolerance than the Rihaczek-Golden (1971) SSF.
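The baseline the sidelobe suppression filter improves on is the matched-filter response of the 13-element Barker code itself, whose autocorrelation has a peak of 13 and sidelobes of magnitude 1, i.e., a peak sidelobe level of about -22.3 dB before any suppression filtering:

```python
import numpy as np

# Autocorrelation of the 13-element Barker code: peak equal to the code
# length, all range sidelobes of magnitude 1 -- the defining Barker
# property. The SSF in the paper is designed to push these sidelobes
# below -30 dB across a range of Doppler shifts.

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")

assert acf.max() == 13
sidelobes = np.delete(acf, acf.argmax())
assert np.abs(sidelobes).max() == 1
print(f"PSL = {20*np.log10(np.abs(sidelobes).max()/acf.max()):.1f} dB")
```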

  3. Numerical Simulation of Supersonic Compression Corners and Hypersonic Inlet Flows Using the RPLUS2D Code

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code for hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and the turbulent boundary layer. The computational results also compared well with the experimental results for a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.

  4. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

X-ray binaries and AGNs are powered by accretion discs around compact objects, where the x-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the x-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. Although these fluctuations arise in the outer parts of the disc, they propagate inwards to give rise to x-ray variability, and hence provide a natural connection between the x-ray and UV variability. There are analytical expressions to qualitatively understand the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code incorporating all these effects, which considers gas-pressure-dominated solutions and stochastic fluctuations, including the boundary effect of the last stable orbit.

  5. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission via a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, which separates low- and high-frequency components; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients to produce a lower-variance signal. This signal is then coded with Huffman encoding, yielding an optimal code length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter, and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are around 1:8 and 7%, respectively. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal reconstruction, where the different ECG waves are recovered correctly.
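The final Huffman stage of the pipeline can be sketched on its own (the wavelet and LPC steps are omitted, and the symbol stream below is a made-up stand-in for quantized coefficients):

```python
import heapq
from collections import Counter

# Huffman coding: repeatedly merge the two least-frequent subtrees,
# prefixing '0'/'1' to the codes in each. Frequent symbols end up with
# short codewords, minimizing the average bits per sample.

def huffman_code(symbols):
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tick = len(heap)                       # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (f0 + f1, tick, merged))
        tick += 1
    return heap[0][2]

def encode(symbols, code):
    return "".join(code[s] for s in symbols)

def decode(bits, code):
    inverse, out, buf = {v: k for k, v in code.items()}, [], ""
    for b in bits:
        buf += b
        if buf in inverse:                 # prefix-free: first match is correct
            out.append(inverse[buf])
            buf = ""
    return out

data = list("aaaabbbccd")                  # stand-in for quantized coefficients
code = huffman_code(data)
bits = encode(data, code)
assert decode(bits, code) == data          # lossless round trip
assert len(bits) < 8 * len(data)           # shorter than fixed 8-bit symbols
```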

  6. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

This paper presents an ECG compression algorithm based on the wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside of the ROI. After mean removal of the original signal, multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are important coefficients and are kept. Otherwise, the energy loss in the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside of the ROI is then determined according to the loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients that are larger than the threshold outside of the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained.

  7. Group-complementary code sets for implementing pulse compression with desirable range resolution properties

    NASA Astrophysics Data System (ADS)

    Weathers, G.; Holliday, E. M.

    This paper describes the structure and properties of a waveform design technique intended to provide desirable range resolution properties in radar sensor systems. The waveform design, called group-complementary coding, consists of groups of binary sequences which can be used for bi-phase coding of a radar carrier pulsed waveform. When pulse compression processing is extended to include the composite of a number of pulses through coherent integration, then group-complementary coding provides the often desirable property of complete range sidelobe cancellation (for zero Doppler shift).
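The complementary property behind the sidelobe cancellation described above can be illustrated with a classic length-4 Golay complementary pair (an illustration of the principle, not the paper's group-complementary sets): each code's autocorrelation has sidelobes, but summed over the pair, as coherent integration over pulses effectively does, the sidelobes cancel exactly.

```python
import numpy as np

# Golay complementary pair: the autocorrelations of the two codes have
# sidelobes of opposite sign, so their sum has zero sidelobes everywhere
# and a peak equal to twice the code length (at zero Doppler shift).

a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])

acf_a = np.correlate(a, a, mode="full")
acf_b = np.correlate(b, b, mode="full")
total = acf_a + acf_b

center = len(total) // 2
assert total[center] == 2 * len(a)                  # peaks add: 8
assert np.all(np.delete(total, center) == 0)        # all range sidelobes cancel
```

Group-complementary sets generalize this pairwise cancellation to groups of more than two sequences, retaining the zero-sidelobe sum property over the group.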

  8. Implementation of a simple model for linear and nonlinear mixing at unstable fluid interfaces in hydrodynamics codes

    SciTech Connect

    Ramshaw, J D

    2000-10-01

A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.

  9. Novel lossless FMRI image compression based on motion compensation and customized entropy coding.

    PubMed

    Sanchez, Victor; Nasiopoulos, Panos; Abugharbieh, Rafeef

    2009-07-01

    We recently proposed a method for lossless compression of 4-D medical images based on the advanced video coding standard (H.264/AVC). In this paper, we present two major contributions that enhance our previous work for compression of functional MRI (fMRI) data: 1) a new multiframe motion compensation process that employs 4-D search, variable-size block matching, and bidirectional prediction; and 2) a new context-based adaptive binary arithmetic coder designed for lossless compression of the residual and motion vector data. We validate our method on real fMRI sequences of various resolutions and compare the performance to two state-of-the-art methods: 4D-JPEG2000 and H.264/AVC. Quantitative results demonstrate that our proposed technique significantly outperforms current state of the art with an average compression ratio improvement of 13%.

  10. Data compression in wireless sensors network using MDCT and embedded harmonic coding.

    PubMed

    Alsalaet, Jaafar K; Ali, Abduladhem A

    2015-05-01

    One of the major applications of wireless sensor networks (WSNs) is vibration measurement for the purpose of structural health monitoring and machinery fault diagnosis. WSNs have many advantages over wired networks, such as low cost and reduced setup time. However, the useful bandwidth is limited compared to wired networks, resulting in relatively low sampling rates. One solution to this problem is data compression, which, in addition to enhancing the effective sampling rate, saves valuable power of the wireless nodes. In this work, a data compression scheme based on the Modified Discrete Cosine Transform (MDCT) followed by Embedded Harmonic Components Coding (EHCC) is proposed to compress vibration signals. The EHCC is applied to exploit the harmonic redundancy present in most vibration signals, resulting in an improved compression ratio. The scheme is made suitable for the tiny hardware of wireless nodes and is shown to be fast and effective. The efficiency of the proposed scheme is investigated by conducting several experimental tests.
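
    The MDCT front end can be sketched in a few lines. The sine window and 50%-overlap synthesis below illustrate the transform's time-domain aliasing cancellation; the frame length and normalization are common conventions, not necessarily the paper's.

```python
import numpy as np

def mdct(x):
    """Forward MDCT: 2N (windowed) samples -> N coefficients."""
    N = len(x) // 2
    n, k = np.arange(2 * N), np.arange(N)
    C = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return C @ x

def imdct(X):
    """Inverse MDCT: N coefficients -> 2N time-aliased samples."""
    N = len(X)
    n, k = np.arange(2 * N), np.arange(N)
    C = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return (2.0 / N) * (C.T @ X)

# 50%-overlap analysis/synthesis with a sine window (Princen-Bradley condition)
N = 8
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
x = np.random.default_rng(0).standard_normal(3 * N)
y = np.zeros(3 * N)
y[0:2 * N] += w * imdct(mdct(w * x[0:2 * N]))
y[N:3 * N] += w * imdct(mdct(w * x[N:3 * N]))
# samples covered by both frames (the middle N) are reconstructed exactly
err = np.max(np.abs(y[N:2 * N] - x[N:2 * N]))
```

    The aliasing introduced by each frame cancels under overlap-add, which is what makes the lapped transform attractive on memory-constrained node hardware.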

  11. Compression performance of HEVC and its format range and screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Li, Bin; Xu, Jizheng; Sullivan, Gary J.

    2015-09-01

    This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.
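
    The bit rate savings quoted here are Bjøntegaard-delta style numbers. A minimal sketch of the BD-rate computation, assuming the usual cubic fit of log-rate against PSNR integrated over the overlapping quality range:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate: average bit rate difference (%) of a test
    codec vs. an anchor at equal PSNR, from cubic fits of log10(rate)
    vs. PSNR integrated over the overlapping PSNR range."""
    p_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(np.min(psnr_anchor), np.min(psnr_test))
    hi = min(np.max(psnr_anchor), np.max(psnr_test))
    ia, it = np.polyint(p_a), np.polyint(p_t)
    avg = (np.polyval(it, hi) - np.polyval(it, lo)
           - np.polyval(ia, hi) + np.polyval(ia, lo)) / (hi - lo)
    return (10.0 ** avg - 1.0) * 100.0

# sanity check: halving the rate at every quality point must give -50%
psnr = np.array([32.0, 35.0, 38.0, 41.0])
rate = np.array([1000.0, 2000.0, 4000.0, 8000.0])
saving = bd_rate(rate, psnr, rate / 2.0, psnr)
```

    Negative values mean the test codec needs less rate than the anchor at the same PSNR, which is the sense in which the percentages above are reported.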

  12. Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.

    PubMed

    Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre

    2008-12-01

    Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.

  13. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure to solve the general relativistic hydrodynamics (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. In order to do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.
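
    The flux-source coupling via Strang splitting mentioned above has a simple generic form, sketched here on a scalar toy problem. The sub-operators are stand-ins: for commuting linear operators the splitting is exact, and for noncommuting ones it is second-order accurate in time.

```python
import numpy as np

def strang_step(u, dt, flux_step, source_step):
    """One Strang-split step: half source, full flux, half source."""
    u = source_step(u, 0.5 * dt)
    u = flux_step(u, dt)
    return source_step(u, 0.5 * dt)

# toy check on du/dt = a*u + b*u with exactly integrable sub-operators
a, b = -1.0, 0.4
flux = lambda u, dt: u * np.exp(a * dt)      # stands in for the flux update
source = lambda u, dt: u * np.exp(b * dt)    # stands in for the source update

u, dt, T = 1.0, 0.01, 1.0
for _ in range(int(T / dt)):
    u = strang_step(u, dt, flux, source)
exact = np.exp((a + b) * T)
```

    The symmetric source-flux-source arrangement is what preserves second-order accuracy when the two sub-operators do not commute, which is the situation in the GRH system.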

  14. APSARA: A multi-dimensional unsplit fourth-order explicit Eulerian hydrodynamics code for arbitrary curvilinear grids

    NASA Astrophysics Data System (ADS)

    Wongwathanarat, A.; Grimm-Strele, H.; Müller, E.

    2016-10-01

    We present a new fourth-order, finite-volume hydrodynamics code named Apsara. The code employs a high-order, finite-volume method for mapped coordinates with extensions for nonlinear hyperbolic conservation laws. Apsara can handle arbitrary structured curvilinear meshes in three spatial dimensions. The code has successfully passed several hydrodynamic test problems, including the advection of a Gaussian density profile and a nonlinear vortex and the propagation of linear acoustic waves. For these test problems, Apsara produces fourth-order accurate results in the case of smooth grid mappings. The order of accuracy is reduced to first order when using a nonsmooth circular grid mapping. When applying the high-order method to simulations of low-Mach number flows, for example, the Gresho vortex and the Taylor-Green vortex, we discover that Apsara delivers superior results to codes based on the dimensionally split, piecewise parabolic method (PPM) widely used in astrophysics. Hence, Apsara is a suitable tool for simulating highly subsonic flows in astrophysics. In the first astrophysical application, we perform implicit large eddy simulations (ILES) of anisotropic turbulence in the context of core collapse supernovae (CCSN) and obtain results similar to those previously reported.

  15. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    PubMed

    Kim, Dong-Sun; Kwon, Jin-San

    2014-09-18

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor.
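
    The flavor of a correlation-based joint-coding decision can be sketched as follows. The threshold and the reference-selection rule here are hypothetical simplifications, not the paper's hardware decision logic.

```python
import numpy as np

def choose_joint_coding(residuals, threshold=0.5):
    """Pick a reference channel and the set of channels to code jointly
    (as differences against the reference), based on the normalized
    cross-correlation of per-channel residuals. Threshold is hypothetical."""
    R = np.corrcoef(residuals)
    ref = int(np.argmax(np.abs(R).sum(axis=1)))   # most correlated on average
    joint = [c for c in range(len(residuals)) if c != ref
             and abs(R[ref, c]) >= threshold]
    return ref, joint

rng = np.random.default_rng(1)
base = rng.standard_normal(1000)
residuals = np.stack([base + 0.1 * rng.standard_normal(1000),
                      base + 0.1 * rng.standard_normal(1000),
                      rng.standard_normal(1000)])   # channel 2: uncorrelated
ref, joint = choose_joint_coding(residuals)
```

    Channels whose residuals are strongly correlated with the reference are worth coding as differences; the uncorrelated channel is left to independent entropy coding.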

  16. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    NASA Astrophysics Data System (ADS)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of

  17. Performance evaluation of the intra compression in the video coding standards

    NASA Astrophysics Data System (ADS)

    Abramowski, Andrzej

    2015-09-01

    The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using BD-PSNR and BD-RATE metrics with H.265/HEVC results as an anchor. Tests are performed on a set of video sequences, composed of sequences gathered by the Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by Ultra Video Group. According to the results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between efficiency and required encoding time.

  18. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    PubMed

    Boulgouris, N V; Tzovaras, D; Strintzis, M G

    2001-01-01

    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
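
    The lifting construction referred to above can be made concrete with the integer 5/3 wavelet, whose predict and update steps are reversible and hence lossless. This is the standard JPEG 2000 lossless filter shown as a one-level 1-D sketch, not the paper's optimized quincunx predictors.

```python
import numpy as np

def lift_forward(x):
    """One level of the reversible integer 5/3 lifting wavelet: predict odd
    samples from their even neighbours, then update the even samples."""
    s = x[0::2].astype(np.int64)
    d = x[1::2].astype(np.int64)
    d -= (s + np.append(s[1:], s[-1])) >> 1          # predict (floor division)
    s += (np.append(d[:1], d[:-1]) + d + 2) >> 2     # update
    return s, d

def lift_inverse(s, d):
    """Exact inverse: undo the update, undo the predict, interleave."""
    s = s - ((np.append(d[:1], d[:-1]) + d + 2) >> 2)
    d = d + ((s + np.append(s[1:], s[-1])) >> 1)
    x = np.empty(len(s) + len(d), dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.random.default_rng(2).integers(0, 256, 64)
rec = lift_inverse(*lift_forward(x))
```

    Because every lifting step adds an integer function of the other polyphase channel, each step is exactly invertible regardless of the rounding, which is what makes the scheme suitable for lossless coding.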

  19. Interferential multi-spectral image compression based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang

    2008-08-01

    Based on analyses of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences; the relative shift between two images is detected by a block-matching algorithm at the encoder. Our algorithm estimates the rate of each bitplane with the estimated side-information frame. It then adopts an ROI coding algorithm, in which the rate-distortion lifting procedure is carried out in the rate-allocation stage. Using our algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in the paper can obtain up to 3 dB of gain compared with JPEG2000, and significantly reduces complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.

  20. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with QR code. We compare this technique to the other two methods proposed in literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique and information authentication based on DRPE and phase retrieval algorithm. Simulation results show that QR codes are effective on improving the security and data sparsity of optical information encryption and authentication system.
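
    The classical 4f DRPE stage that the paper simplifies can be sketched with two FFTs. This is a textbook illustration only; the paper's scheme avoids the interferometric setup and stores a sparse phase distribution, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((32, 32))                       # stand-in for the QR-coded input

m1 = np.exp(2j * np.pi * rng.random((32, 32)))   # input-plane random phase key
m2 = np.exp(2j * np.pi * rng.random((32, 32)))   # Fourier-plane random phase key

# classical 4f DRPE: mask, Fourier transform, mask, inverse transform
cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

# decryption with the conjugate Fourier-plane key recovers the amplitude
decoded = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)))
err = np.max(np.abs(decoded - img))
```

    The ciphertext is a stationary-white-noise-like complex field; only a holder of the Fourier-plane key can undo the second phase mask and recover the input amplitude.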

  1. Combining node-centered parallel radiation transport and higher-order multi-material cell-centered hydrodynamics methods in three-temperature radiation hydrodynamics code TRHD

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2016-06-01

    Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015), developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in the popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme, in principle, can handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI) for parallel computation. Presently, for multi-material HD, we have used a simple and robust closure model in which common strain rates for all materials in a mixed cell are assumed. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition, the electron and radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies are required to be computed self-consistently. This has been achieved by using a node-centered symmetric-semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite volume scheme (NLFV) suitable for unstructured meshes. In this paper, we describe the details of the 2D, 3T, non-equilibrium, multi-material RHD code, with special attention to the coupling of the various cell-centered and node-centered formulations, along with a suite of validation test problems to demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, in order to demonstrate the full capability of the code implementation, we present the simulation of laser driven shock propagation in a layered thin foil. The simulation results are found to be in good
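
    The symmetric-semi-implicit electron-ion relaxation can be illustrated on a zero-dimensional toy. Equal heat capacities and a constant coupling frequency are simplifying assumptions; TRHD's node-centered SSI update is more general.

```python
def ssi_relax(Te, Ti, nu, dt):
    """One semi-implicit step of electron-ion temperature relaxation,
        dTe/dt = nu*(Ti - Te),   dTi/dt = nu*(Te - Ti),
    with the exchange term treated implicitly: solve the 2x2 system
        (1+a)*Te' - a*Ti' = Te,   -a*Te' + (1+a)*Ti' = Ti,   a = nu*dt.
    The update conserves Te+Ti and is stable for arbitrarily large nu*dt."""
    a = nu * dt
    det = 1.0 + 2.0 * a
    Te_new = ((1.0 + a) * Te + a * Ti) / det
    Ti_new = (a * Te + (1.0 + a) * Ti) / det
    return Te_new, Ti_new

# a stiff relaxation step: temperatures equilibrate toward the common mean
Te_new, Ti_new = ssi_relax(1000.0, 100.0, nu=1e6, dt=1.0)
```

    For very stiff coupling the implicit treatment drives both temperatures to their energy-conserving mean in a single step, where an explicit update would require prohibitively small time steps.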

  2. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency of communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill the need for a comprehensive tutorial that makes much of this subject accessible to readers whose disciplines lie outside communication theory.

  3. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
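
    The local-random-measurement front end can be sketched as follows. The 3x3 binary kernel and stride-2 polyphase down-sampling are illustrative parameter choices, not the paper's exact configuration.

```python
import numpy as np

def local_random_measurements(img, k=2, seed=0):
    """Filter `img` with a 3x3 random binary kernel (normalized to unit sum),
    then polyphase down-sample by k. Each retained pixel is a local random
    measurement of its neighborhood."""
    rng = np.random.default_rng(seed)
    kern = rng.integers(0, 2, (3, 3)).astype(float)
    if kern.sum() == 0:                  # guard against the all-zero draw
        kern[1, 1] = 1.0
    kern /= kern.sum()
    H, W = img.shape
    pad = np.pad(img, 1, mode='edge')
    out = np.zeros((H, W))
    for dy in range(3):
        for dx in range(3):
            out += kern[dy, dx] * pad[dy:dy + H, dx:dx + W]
    return out[::k, ::k]

meas = local_random_measurements(np.ones((8, 8)))
```

    Normalizing the kernel keeps the measurements in the dynamic range of an ordinary image, which is what lets a standard codec compress them directly.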

  4. Thermodynamic analysis of five compressed-air energy-storage cycles. [Using CAESCAP computer code

    SciTech Connect

    Fort, J. A.

    1983-03-01

    One important aspect of the Compressed-Air Energy-Storage (CAES) Program is the evaluation of alternative CAES plant designs. The thermodynamic performance of the various configurations is particularly critical to the successful demonstration of CAES as an economically feasible energy-storage option. A computer code, the Compressed-Air Energy-Storage Cycle-Analysis Program (CAESCAP), was developed in 1982 at the Pacific Northwest Laboratory. This code was designed specifically to calculate overall thermodynamic performance of proposed CAES-system configurations. The results of applying this code to the analysis of five CAES plant designs are presented in this report. The designs analyzed were: conventional CAES; adiabatic CAES; hybrid CAES; pressurized fluidized-bed CAES; and direct coupled steam-CAES. Inputs to the code were based on published reports describing each plant cycle. For each cycle analyzed, CAESCAP calculated the thermodynamic station conditions and individual-component efficiencies, as well as overall cycle-performance-parameter values. These data were then used to diagram the availability and energy flow for each of the five cycles. The resulting diagrams graphically illustrate the overall thermodynamic performance inherent in each plant configuration, and enable a more accurate and complete understanding of each design.
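
    The per-component station calculations such a cycle-analysis code performs can be illustrated with the specific work of an adiabatic compressor. The gas properties and efficiency below are illustrative values, not CAESCAP's component models.

```python
def compressor_work(T1, pr, gamma=1.4, cp=1005.0, eta=0.85):
    """Specific work (J/kg) to compress an ideal gas by pressure ratio pr in
    an adiabatic compressor with isentropic efficiency eta."""
    T2s = T1 * pr ** ((gamma - 1.0) / gamma)   # isentropic exit temperature
    return cp * (T2s - T1) / eta

# e.g. ambient air (300 K) compressed 10:1 costs roughly 330 kJ/kg
w = compressor_work(300.0, 10.0)
```

    Chaining such component relations through the cycle, and comparing the resulting availability (exergy) flows, is what yields the overall performance diagrams described above.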

  5. On multigrid solution of the implicit equations of hydrodynamics. Experiments for the compressible Euler equations in general coordinates

    NASA Astrophysics Data System (ADS)

    Kifonidis, K.; Müller, E.

    2012-08-01

    Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations are presented to evaluate the convergence behavior and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes). Beyond a
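
    The coarse-grid correction at the heart of any such multigrid solver can be sketched on 1-D Poisson. Weighted-Jacobi smoothing and a direct coarse solve stand in here for the paper's multistage-implicit smoothers and full-coarsening hierarchy.

```python
import numpy as np

def jacobi(u, f, h, iters, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(iters):
        u[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    return r

def coarse_solve(f, h):
    """Direct tridiagonal solve of -u'' = f (stands in for the coarsest level)."""
    m = len(f) - 2
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    u = np.zeros_like(f)
    u[1:-1] = np.linalg.solve(A, f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: smooth, restrict the residual, solve the coarse
    error equation, prolong the correction, smooth again."""
    u = jacobi(u, f, h, 3)
    r = residual(u, f, h)
    rc = np.zeros(len(u) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])  # full weighting
    ec = coarse_solve(rc, 2.0 * h)
    fine = np.arange(len(u), dtype=float)
    e = np.interp(fine, fine[::2], ec)                         # linear prolongation
    return jacobi(u + e, f, h, 3)

n, h = 64, 1.0 / 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)     # -u'' = f with exact solution sin(pi*x)
u = np.zeros(n + 1)
r0 = np.max(np.abs(residual(u, f, h)))
for _ in range(10):
    u = two_grid(u, f, h)
r10 = np.max(np.abs(residual(u, f, h)))
```

    The smoother removes the oscillatory error components and the coarse-grid solve removes the smooth ones, giving a convergence rate independent of the mesh size, which is the property the paper seeks for implicit hydrodynamics.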

  6. Analysis of Doppler Effect on the Pulse Compression of Different Codes Emitted by an Ultrasonic LPS

    PubMed Central

    Paredes, José A.; Aguilera, Teodoro; Álvarez, Fernando J.; Lozano, Jesús; Morera, Jorge

    2011-01-01

    This work analyses the effect of the receiver movement on the detection by pulse compression of different families of codes characterizing the emissions of an Ultrasonic Local Positioning System. Three families of codes have been compared: Kasami, Complementary Sets of Sequences and Loosely Synchronous, considering in all cases three different lengths close to 64, 256 and 1,024 bits. This comparison is first carried out by using a system model in order to obtain a set of results that are then experimentally validated with the help of an electric slider that provides radial speeds up to 2 m/s. The performance of the codes under analysis has been characterized by means of the auto-correlation and cross-correlation bounds. The results derived from this study should be of interest to anyone performing matched filtering of ultrasonic signals with a moving emitter/receiver. PMID:22346670
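
    The degradation mechanism studied here is easy to reproduce: time-scale a BPSK-modulated code and watch the matched-filter peak drop. A random binary code stands in for the Kasami/CSS/LS families, and the carrier and sampling parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
code = rng.integers(0, 2, 64) * 2 - 1        # random +/-1 code (not Kasami)
fs, fc, spb = 400_000, 40_000, 10            # sample rate, carrier, samples/bit
t = np.arange(len(code) * spb) / fs
tx = np.repeat(code, spb) * np.sin(2 * np.pi * fc * t)

def doppler(sig, v, c=343.0):
    """Model radial receiver motion as a time-scale change of (1 + v/c)."""
    idx = np.arange(len(sig)) * (1.0 + v / c)
    return np.interp(idx, np.arange(len(sig)), sig)

def peak_corr(rx, ref):
    """Matched-filter peak, normalized so the static case gives 1.0."""
    return np.max(np.abs(np.correlate(rx, ref, mode='full'))) / np.dot(ref, ref)

p_static = peak_corr(tx, tx)                 # = 1.0 by construction
p_moving = peak_corr(doppler(tx, 2.0), tx)   # 2 m/s radial speed degrades it
```

    The longer the code, the larger the accumulated carrier-phase slip across the burst, which is why the longest sequences in the study are the most Doppler-sensitive.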

  7. Recent Hydrodynamics Improvements to the RELAP5-3D Code

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard.R. Schultz

    2009-07-01

    The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer model, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.

  8. Comparison of Particle Flow Code and Smoothed Particle Hydrodynamics Modelling of Landslide Run outs

    NASA Astrophysics Data System (ADS)

    Preh, A.; Poisel, R.; Hungr, O.

    2009-04-01

    In most continuum mechanics methods modelling the run out of landslides, the moving mass is divided into a number of elements, the velocities of which can be established by numerical integration of Newton's second law (Lagrangian solution). The methods are based on fluid mechanics, modelling the movements of an equivalent fluid. In 2004, McDougall and Hungr presented a three-dimensional numerical model for rapid landslides, e.g. debris flows and rock avalanches, called DAN3D. The method is based on the previous work of Hungr (1995) and uses an integrated two-dimensional Lagrangian solution and the meshless Smoothed Particle Hydrodynamics (SPH) principle to maintain continuity. DAN3D has an open rheological kernel, allowing the use of frictional (with constant pore-pressure ratio) and Voellmy rheologies, and offers the possibility to change the material rheology along the path. Discontinuum (granular) mechanics methods model the run out mass as an assembly of particles moving down a surface. Each particle is followed exactly as it moves and interacts with the surface and with its neighbours. Every particle is checked for contacts with every other particle in every time step, using a special cell logic for contact detection in order to reduce the computational effort. The Discrete Element code PFC3D was adapted to make discontinuum mechanics models of run outs possible. The Punta Thurwieser rock avalanche and the Frank Slide were modelled by DAN as well as by PFC3D. The simulations showed that the parameters necessary to obtain results coinciding with observations in nature are completely different. The maximum velocity distributions due to DAN3D reveal that areas of different maximum flow velocity lie next to each other in the Punta Thurwieser run out, whereas the maximum flow velocity is almost constant over the width of the run out for the Frank Slide. Some 30 percent of total kinetic energy is rotational kinetic energy in

  9. Gaseous Laser Targets and Optical Diagnostics for Studying Compressible Turbulent Hydrodynamic Instabilities

    SciTech Connect

    Edwards, M J; Hansen, J; Miles, A R; Froula, D; Gregori, G; Glenzer, S; Edens, A; Dittmire, T

    2005-02-08

    The possibility of studying compressible turbulent flows using gas targets driven by high power lasers and diagnosed with optical techniques is investigated. The potential advantage over typical laser experiments that use solid targets and x-ray diagnostics is more detailed information over a larger range of spatial scales. An experimental system is described to study shock-jet interactions at high Mach number. This consists of a mini-chamber full of nitrogen at a pressure of ~1 atm. The mini-chamber is situated inside a much larger vacuum chamber. An intense laser pulse (~100 J in ~5 ns) is focused onto a thin, ~0.3 μm thick silicon nitride window at one end of the mini-chamber. The window acts both as a vacuum barrier and as the laser entrance hole. The "explosion" caused by the deposition of the laser energy just inside the window drives a strong blast wave out into the nitrogen atmosphere. The spherical shock expands and interacts with a jet of xenon introduced through the top of the mini-chamber. The Mach number of the interaction is controlled by the separation of the jet from the explosion. The resulting flow is visualized with an optical schlieren system using a pulsed laser source at a wavelength of 0.53 μm. The technical path leading up to the design of this experiment is presented, and future prospects are briefly considered. Lack of laser time in the final year of the project severely limited the experimental results obtained with the new apparatus.
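
    The scale of such a laser-driven blast wave can be estimated from Sedov-Taylor scaling. The similarity constant and the gas density used below are approximate, illustrative numbers.

```python
def sedov_radius(E, rho, t, xi=1.15):
    """Sedov-Taylor point-explosion radius R = xi*(E*t**2/rho)**(1/5);
    xi ~ 1.15 for gamma = 1.4 is an approximate similarity constant."""
    return xi * (E * t * t / rho) ** 0.2

# ~100 J deposited in nitrogen at ~1 atm (rho ~ 1.17 kg/m^3), 5 us later:
R = sedov_radius(100.0, 1.17, 5e-6)   # of order a few centimetres
```

    The weak one-fifth-power dependence on energy and time is why the shock-jet separation, rather than the laser energy, is the practical knob for setting the interaction Mach number.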

  10. Single Stock Dynamics on High-Frequency Data: From a Compressed Coding Perspective

    PubMed Central

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock's dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. This data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors. PMID:24586235

  12. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) techniques, named the QR-CGI-OE scheme. N random phase screens generated by Alice serve as a secret key shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
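
    A toy version of the computational-ghost-imaging step (without the QR coding, encryption, or compressive sensing of the paper) can illustrate how bucket values alone recover an image: each random pattern yields one scalar bucket measurement, and the image emerges from the bucket-pattern correlation. The object, pattern count, and estimator below are illustrative choices, not the QR-CGI-OE system:

```python
# Computational ghost imaging sketch: the object T is never imaged directly.
# Each random pattern I_k gives one bucket value b_k = sum_i I_k[i] * T[i],
# and pixel i is recovered as the covariance cov(b, I_i), which equals
# T[i] * var(I) for independent pattern pixels. All values are illustrative.
import random

random.seed(1)
N, K = 16, 6000                      # pixels, number of random patterns
T = [1.0 if i in (5, 6, 9) else 0.0 for i in range(N)]    # hypothetical object

patterns = [[random.random() for _ in range(N)] for _ in range(K)]
buckets = [sum(p[i] * T[i] for i in range(N)) for p in patterns]

b_mean = sum(buckets) / K
p_mean = [sum(patterns[k][i] for k in range(K)) / K for i in range(N)]
recon = [sum(buckets[k] * patterns[k][i] for k in range(K)) / K - b_mean * p_mean[i]
         for i in range(N)]
print([round(v, 3) for v in recon])  # the transmitting pixels stand out
```

    In the actual scheme the patterns are the shared key, so only the bucket sequence needs to travel from Alice to Bob.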

  13. GENESIS: A High-Resolution Code for Three-dimensional Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Aloy, M. A.; Ibáñez, J. M.; Martí, J. M.; Müller, E.

    1999-05-01

    The main features of a three-dimensional, high-resolution special relativistic hydro code based on relativistic Riemann solvers are described. The capabilities and performance of the code are discussed. In particular, we present the results of extensive test calculations that demonstrate that the code can accurately and efficiently handle strong shocks in three spatial dimensions. Results of the performance of the code on single and multiprocessor machines are given. Simulations (in double precision) with ≤7 × 10^6 computational cells require less than 1 Gbyte of RAM and ~7 × 10^-5 CPU s per zone and time step (on an SGI/Cray Origin 2000 with an R10000 processor). Currently, a version of the numerical code suited for massively parallel computers with distributed-memory architecture (such as the Cray T3E) is under development.
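
    As a quick sanity check on the quoted figures, the per-step cost implied by ~7 × 10^-5 CPU s per zone and time step on 7 × 10^6 cells is about eight CPU-minutes:

```python
# Back-of-envelope check of the quoted GENESIS cost figures:
# ~7e-5 CPU s per zone and time step, on up to 7e6 cells.
cells = 7e6
cost_per_zone_step = 7e-5            # CPU seconds per zone per time step
cpu_s_per_step = cells * cost_per_zone_step
print(f"{cpu_s_per_step:.0f} CPU s per step (~{cpu_s_per_step / 60:.1f} CPU-min)")
```
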

  14. Sub-Nyquist sampling and detection in Costas coded pulse compression radars

    NASA Astrophysics Data System (ADS)

    Hanif, Adnan; Mansoor, Atif Bin; Imran, Ali Shariq

    2016-12-01

    Modern pulse compression radar involves digital signal processing of high-bandwidth pulses modulated with different coding schemes. One of the limiting factors in a radar design that must achieve the desired target range and resolution is the need for high-rate analog-to-digital (A/D) conversion fulfilling the Nyquist sampling criterion. The high sampling rates necessitate huge storage capacity, more power consumption, and extra processing requirements. We introduce a new approach that samples a wideband radar waveform modulated with a Costas sequence at a sub-Nyquist rate, based upon the concept of compressive sensing (CS). Sub-Nyquist measurements of the Costas sequence waveform are performed in an analog-to-information (A/I) converter based upon random demodulation, replacing the traditional A/D converter. This novel work presents an order-8 Costas coded waveform with sub-Nyquist sampling and its reconstruction. The reconstructed waveform is compared with the conventionally sampled signal and exhibits high-quality signal recovery from the sub-Nyquist sampled signal. Furthermore, the performance of CS-based detection after reconstruction is evaluated in terms of receiver operating characteristic (ROC) curves and compared with the conventional Nyquist-rate matched filtering scheme.
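
    A Costas sequence is a permutation whose displacement vectors between all element pairs are distinct, which is what gives the waveform its thumbtack ambiguity function. The sketch below checks that property and generates a sequence via the Welch construction (shown for order 10 with p = 11, g = 2; the paper's order-8 waveform is not produced by this particular construction):

```python
def is_costas(seq):
    """True if all displacement vectors (j - i, seq[j] - seq[i]) are distinct."""
    n = len(seq)
    seen = set()
    for i in range(n):
        for j in range(i + 1, n):
            v = (j - i, seq[j] - seq[i])
            if v in seen:
                return False
            seen.add(v)
    return True

def welch_costas(p, g):
    """Welch construction: a_i = g**i mod p for i = 1..p-1 (p prime, g a primitive root)."""
    return [pow(g, i, p) for i in range(1, p)]

seq = welch_costas(11, 2)
print(seq, is_costas(seq))
```

    In the radar waveform each symbol of the sequence selects the frequency of one subpulse.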

  15. A New Multi-dimensional General Relativistic Neutrino Hydrodynamic Code for Core-collapse Supernovae. I. Method and Code Tests in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald

    2010-07-01

    We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the "ray-by-ray plus" approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three-metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in

  17. ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests.

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    A detailed description is given of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer. Attention is given to the hydrodynamic (HD) algorithms, which form the foundation for the more complex MHD and radiation-HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is also given. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.

  18. Modelling of the magnetic field effects in hydrodynamic codes using a second order tensorial diffusion scheme

    NASA Astrophysics Data System (ADS)

    Breil, J.; Maire, P.-H.; Nicolaï, P.; Schurtz, G.

    2008-05-01

    In laser-produced plasmas, large self-generated magnetic fields have been measured. The classical formulas of Braginskii predict that magnetic fields induce a reduction in the magnitude of the heat flux and a rotation of it through the Righi-Leduc effect. In this paper, a second-order tensorial diffusion method used to solve the Righi-Leduc effect correctly in multidimensional codes is presented.

  19. Scalability of the CTH Hydrodynamics Code on the Sun HPC 10000 Architecture

    DTIC Science & Technology

    2000-02-01

    The scalability of the message-passing CTH code on the Sun HPC 10000, a symmetric multiprocessor architecture, is presented and compared to the ideal linear multiprocessor performance. The computed results are also compared to experimental data for the purpose of validating the shock physics application on the Sun HPC 10000 system.

  20. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite-difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factors) for transition correlation on swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.

  1. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    SciTech Connect

    Chertkov, Michael; Chilappagari, Shashi K; Vasic, Bane

    2010-01-01

    We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (CS-ISA) generating a sparse vector, called a CS-instanton, such that BasP fails on the instanton, while its action on any modification of the CS-instanton decreasing a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, for which it outputs the shortest instanton (error vector) pattern, of length 11.
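
    The BasP step itself reduces to a standard linear program: minimize ||x||_1 subject to Ax = b, by splitting x = u - v with u, v ≥ 0 and minimizing the sum of the parts. A small sketch of that reduction (the matrix size and sparsity here are illustrative, and this is not the CS-ISA search itself):

```python
# Basis Pursuit as an LP: min ||x||_1 s.t. A x = b, via the standard split
# x = u - v, u, v >= 0, minimizing sum(u) + sum(v). Problem sizes below are
# illustrative, not those from the paper.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    m, n = A.shape
    c = np.ones(2 * n)               # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])        # constraint: A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
x0 = np.zeros(8)
x0[2] = 1.5                          # sparse "error vector" to recover
b = A @ x0
x = basis_pursuit(A, b)
print(np.round(x, 3))
```

    An instanton in the paper's sense is a sparse x0 for which the LP returns a different, feasible vector of no larger l1 norm.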

  2. The Rice coding algorithm achieves high-performance lossless and progressive image compression based on the improving of integer lifting scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF(2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than that of Huffman, Zip, lossless JPEG and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7% and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%; the decoder is about 12.3 times faster, raising its time efficiency by about 148%. This algorithm, instead of requiring the largest number of wavelet transform levels, has high coding efficiency when the number of levels is greater than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
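
    For reference, the underlying Rice (Golomb-Rice) code is simple to state: a parameter k splits each nonnegative value into a unary-coded quotient and a k-bit remainder, which is near-optimal for Laplacian-like sources. A minimal encoder/decoder pair, independent of the paper's lifting-scheme modifications:

```python
def rice_encode(values, k):
    """Rice code: quotient v >> k in unary (q ones then a zero), then k remainder bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                       # unary quotient, 0-terminated
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:                         # count the unary ones
            q += 1
            i += 1
        i += 1                                      # skip the terminating zero
        r = 0
        for _ in range(k):                          # read k remainder bits
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out

vals = [0, 1, 7, 18]
print(rice_encode(vals, 2))
```

    Small k favors small values; in practice k is chosen per block from the local magnitude statistics.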

  3. Design of indirectly driven, high-compression Inertial Confinement Fusion implosions with improved hydrodynamic stability using a 4-shock adiabat-shaped drive

    NASA Astrophysics Data System (ADS)

    Milovich, J. L.; Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R.

    2015-12-01

    Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm2, but with significantly lower total neutron yields (between 1.5 × 1014 and 5.5 × 1014) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the "high-foot" experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3-10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm2. Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.

  5. Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates

    NASA Technical Reports Server (NTRS)

    Deane, Anil E.

    1996-01-01

    Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on flux-corrected transport, and the code builds on the existing code of Zalesak and Spicer. The flow considered is a shear flow perturbed by incoming flow. Several test cases corresponding to pressure-balanced magnetic structures with velocity shear flow and various inflows, including Alfven waves, are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry; future versions will consider a spherical geometry. Some discussion of this issue is presented.

  6. A Multigroup Diffusion Solver Using Pseudo Transient Continuation for a Radiation-Hydrodynamic Code with Patch-Based AMR

    SciTech Connect

    Shestakov, A I; Offner, S R

    2007-03-02

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time, creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group and intra-cell group coupling. For robustness, we introduce pseudo-transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface, and the data is derived from the coarse-level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate the utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates
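
    The pseudo-transient continuation idea is easiest to see on a scalar problem: each step solves (1/Δτ + F'(u)) δu = -F(u), and Δτ is enlarged as the residual drops (switched evolution relaxation), so the damped iteration relaxes toward Newton's method. A scalar sketch, not the paper's coupled multigroup solve:

```python
# Pseudo-transient continuation (Psi-tc) sketch on a scalar equation F(u) = 0.
# Each step solves (1/dtau + F'(u)) du = -F(u); dtau is grown by switched
# evolution relaxation (SER), so the iteration approaches Newton's method as
# the residual falls. Illustrative only.

def psi_tc(F, dF, u0, dtau=0.1, tol=1e-12, max_iter=200):
    u = u0
    r_old = abs(F(u))
    for _ in range(max_iter):
        du = -F(u) / (1.0 / dtau + dF(u))   # damped Newton step
        u += du
        r = abs(F(u))
        if r < tol:
            break
        dtau *= max(r_old / r, 1.0)         # SER: enlarge dtau as residual drops
        r_old = r
    return u

root = psi_tc(lambda u: u**3 - 2.0, lambda u: 3 * u**2, u0=1.0)
print(root)
```

    The 1/Δτ term plays the role the paper analyzes for the linear system: it keeps the operator positive and diagonally dominant when Δτ is small.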

  8. An excellent reduction in sidelobe level for P4 code by using of a new pulse compression scheme

    NASA Astrophysics Data System (ADS)

    Alighale, S.; Zakeri, B.

    2014-10-01

    The P4 polyphase code is well known in pulse compression. For a P4 code of length 1000, the peak sidelobe level (PSL) and integrated sidelobe level (ISL) are -36 dB and -16 dB, respectively. To increase performance, different techniques exist to reduce the sidelobes of the P4 code. This paper presents a novel sidelobe reduction technique that reduces the PSL and ISL to -127 dB and -104 dB, respectively. Other sidelobe reduction techniques, such as the Woo filter, are also investigated and compared with the proposed technique. Simulations and results show that the proposed technique produces better PSL and ISL than the other techniques.
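
    The quoted PSL and ISL figures come from the aperiodic autocorrelation of the phase code. A sketch that builds a P4 code (phase φ_n = πn(n-N)/N) and evaluates both metrics; the length-100 demo is illustrative, not the paper's length-1000 case:

```python
# P4 polyphase code and its matched-filter sidelobe metrics.
# PSL = 20*log10(max sidelobe / peak); ISL = 10*log10(total sidelobe
# energy / peak energy), counting both positive and negative lags.
import cmath
import math

def p4_code(N):
    """P4 polyphase code: phase(n) = pi * n * (n - N) / N, n = 0..N-1."""
    return [cmath.exp(1j * math.pi * n * (n - N) / N) for n in range(N)]

def psl_isl_db(code):
    N = len(code)
    # aperiodic autocorrelation magnitudes at lags 0..N-1
    ac = [abs(sum(code[i] * code[i + k].conjugate() for i in range(N - k)))
          for k in range(N)]
    peak = ac[0]
    psl = 20 * math.log10(max(ac[1:]) / peak)
    isl = 10 * math.log10(2 * sum(a * a for a in ac[1:]) / (peak * peak))
    return psl, isl

psl, isl = psl_isl_db(p4_code(100))
print(f"PSL = {psl:.1f} dB, ISL = {isl:.1f} dB")
```

    Sidelobe-reduction filters of the kind compared in the paper post-process this autocorrelation output rather than change the code itself.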

  9. Development of a Three-Dimensional PSE Code for Compressible Flows: Stability of Three-Dimensional Compressible Boundary Layers

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Jeyasingham, Samarasingham

    1999-01-01

    A program is developed to investigate the linear stability of three-dimensional compressible boundary-layer flows over bodies of revolution. The problem is formulated as a two-dimensional (2D) eigenvalue problem incorporating the meanflow variations in the normal and azimuthal directions. Normal-mode solutions are sought in the whole plane rather than along a line normal to the wall as is done in classical one-dimensional (1D) stability theory. The stability characteristics of a supersonic boundary layer over a sharp cone with 5° half-angle at 2 degrees angle of attack are investigated. The 1D eigenvalue computations showed that the most amplified disturbances occur around x2 = 90 degrees and that the azimuthal mode numbers of the most amplified disturbances range between m = -30 and -40. The frequencies of the most amplified waves are smaller in the middle region, where the crossflow dominates the instability, than the most amplified frequencies near the windward and leeward planes. The 2D eigenvalue computations showed that, due to the variations in the azimuthal direction, the eigenmodes are clustered into isolated confined regions; for some eigenvalues, the eigenfunctions are clustered in two regions. Due to the nonparallel effect in the azimuthal direction, the most amplified disturbances are shifted to 120 degrees, compared to 90 degrees for the parallel theory. It is also observed that the nonparallel amplification rates are smaller than those obtained from the parallel theory.

  10. Joint compression/watermarking scheme using majority-parity guidance and halftoning-based block truncation coding.

    PubMed

    Guo, Jing-Ming; Liu, Yun-Fu

    2010-08-01

    In this paper, a watermarking scheme, called majority-parity-guided error-diffused block truncation coding (MPG-EDBTC), is proposed to achieve high image quality and embedding capacity. EDBTC exploits error diffusion to effectively reduce the blocking effect and false contours which are inherent in traditional BTC. In addition, the coding efficiency is significantly improved by replacing high- and low-mean evaluation with extreme-value substitution. The proposed MPG-EDBTC embeds a watermark simultaneously during compression by evaluating the parity value in a predefined parity-check region (PCR). As documented in the experimental results, the proposed scheme provides good robustness, image quality, and processing efficiency. Finally, the proposed MPG-EDBTC is extended to embed multiple watermarks and achieves excellent image quality, robustness, and capacity. Nowadays, most multimedia is compressed before it is stored, so it is more appropriate to embed information such as watermarks during compression. The proposed method effectively solves the inherent problems of traditional BTC and provides excellent performance in watermark embedding.
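
    Basic BTC reduces each block to a one-bit-per-pixel bitmap plus two reconstruction levels; using the block extremes as those levels mirrors the extreme-value substitution mentioned above. A minimal sketch (error diffusion and the MPG parity-based watermark embedding are omitted):

```python
# Minimal BTC-style block coder in the spirit described above: each block
# becomes a bitmap (pixel >= block mean) plus two levels. Taking the block
# extremes as the levels mirrors the extreme-value substitution from the
# abstract; this is not the full EDBTC/MPG-EDBTC scheme.

def btc_encode_block(block):
    mean = sum(block) / len(block)
    bitmap = [1 if p >= mean else 0 for p in block]
    return bitmap, min(block), max(block)

def btc_decode_block(bitmap, lo, hi):
    return [hi if b else lo for b in bitmap]

block = [10, 12, 200, 11, 210, 13, 205, 12]
bm, lo, hi = btc_encode_block(block)
print(btc_decode_block(bm, lo, hi))
```

    The watermark in MPG-EDBTC would be carried by constraining the parity of such bitmaps within each parity-check region.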

  11. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    PubMed

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most commercial medical image viewers do not provide scalability in image compression and/or region-of-interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio service (GPRS)/universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in a prototype heterogeneous system setup are also discussed.

  12. ECG signal compression by multi-iteration EZW coding for different wavelets and thresholds.

    PubMed

    Tohumoglu, Gülay; Sezgin, K Erbil

    2007-02-01

    The modified embedded zero-tree wavelet (MEZW) compression algorithm for one-dimensional signals was originally derived for image compression based on Shapiro's EZW algorithm. It is shown that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit-rate control and generates a bit stream progressive in quality or rate. The EZW and MEZW algorithms apply chosen threshold values or expressions in order to determine which transformed coefficients are significant. Thus, two different threshold definitions, namely percentage and dyadic thresholds, are used, and they are applied to different wavelet types in the biorthogonal and orthogonal classes. In detail, the MEZW and EZW results are quantitatively compared in terms of the compression ratio (CR) and percentage root-mean-square difference (PRD). Experiments are carried out on selected records from the MIT-BIH arrhythmia database and on an original ECG signal. It is observed that the MEZW algorithm shows a clear advantage over the traditional EZW in the CR achieved for a given PRD, and it gives better results for biorthogonal wavelets than for orthogonal wavelets.
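
    The two figures of merit used above are easy to state: CR is the ratio of original to compressed size, and PRD is the energy-normalized root-mean-square reconstruction error, in percent. A small sketch with made-up numbers:

```python
# CR and PRD as used in ECG compression papers. The sample signals and bit
# counts below are made up for illustration.
import math

def compression_ratio(orig_bits, comp_bits):
    return orig_bits / comp_bits

def prd(x, y):
    """Percentage RMS difference between original x and reconstruction y."""
    num = sum((a - b) ** 2 for a, b in zip(x, y))
    den = sum(a * a for a in x)
    return 100.0 * math.sqrt(num / den)

x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.1, 2.9, 4.0]
print(compression_ratio(8 * 1024, 1024), round(prd(x, y), 3))
```

    Note that this common PRD definition is sensitive to any DC offset in x, which is why some studies subtract the baseline first.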

  13. Wideband audio compression using subband coding and entropy-constrained scalar quantization

    NASA Astrophysics Data System (ADS)

    Trinkaus, Trevor R.

    1995-04-01

    Source coding of wideband audio signals for storage applications and/or transmission over band limited channels is currently a research topic receiving considerable attention. A goal common to all systems designed for wideband audio coding is to achieve an efficient reduction in code rate, while maintaining imperceptible differences between the original and coded audio signals. In this thesis, an effective source coding scheme aimed at reducing the code rate to the entropy of the quantized audio source, while providing good subjective audio quality, is discussed. This scheme employs the technique of subband coding, where a 32-band single sideband modulated filter bank is used to perform subband analysis and synthesis operations. Encoding and decoding of the subbands is accomplished using entropy constrained uniform scalar quantization and subsequent arithmetic coding. A computationally efficient subband rate allocation procedure is used which relies on analytic models to describe the rate distortion characteristics of the subband quantizers. Signal quality is maintained by incorporating masking properties of the human ear into this rate allocation procedure. Results of simulations performed on compact disc quality audio segments are provided.
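The core idea of coding each subband near its entropy can be sketched in a few lines: quantize the subband uniformly, then estimate the first-order entropy of the indices as the achievable rate of the subsequent arithmetic coder. A hedged illustration (the step size and samples are made up, and a real coder would also apply the entropy-constrained design and masking model described above):

```python
import math
from collections import Counter

def uniform_quantize(samples, step):
    """Map each sample to the nearest multiple of `step`, returning integer indices."""
    return [round(s / step) for s in samples]

def entropy_bits_per_symbol(indices):
    """First-order entropy of the index stream: a lower bound on the arithmetic-coder rate."""
    counts = Counter(indices)
    n = len(indices)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

subband = [0.1, 0.9, 1.1, -0.4, 0.5, 1.0, -0.6, 0.2]
indices = uniform_quantize(subband, 0.5)
rate = entropy_bits_per_symbol(indices)  # approximate bits/sample this subband costs
```

The rate-allocation procedure then trades `step` against `rate` across all 32 subbands, weighted by the masking model.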

  14. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    Air-coupled ultrasonic testing (ACUT) has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal, so signal-processing techniques are highly valuable in this kind of non-destructive testing. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is used to filter insignificant components from the noisy ultrasonic signal, and pulse compression is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different wavelet families (Daubechies, Symlet and Coiflet) and decomposition levels of the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are compared to obtain a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrate that the proposed method is efficient in improving the SNR and signal strength, and that it is a promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials with ACUT.
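Pulse compression with a Barker code works because the code's aperiodic autocorrelation has a tall main lobe and unit-magnitude sidelobes. A small sketch of the cross-correlation stage (a noise-free toy signal; a real ACUT chain would correlate the noisy, wavelet-filtered echo):

```python
import math

BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def correlate(signal, code):
    """Sliding inner product of `code` against `signal` (matched filtering)."""
    n, m = len(signal), len(code)
    return [sum(signal[lag + i] * code[i] for i in range(m))
            for lag in range(n - m + 1)]

# toy received signal: the transmitted code padded with silence
received = [0] * 5 + BARKER_13 + [0] * 5
acf = correlate(received, BARKER_13)
peak = max(acf)
sidelobe = max(abs(v) for v in acf if v != peak)
msr_db = 20 * math.log10(peak / sidelobe)  # main-to-side-lobe ratio in dB
```

For Barker-13 the main lobe is 13 and every sidelobe has magnitude at most 1, which is why the paper favors the longer codes in the 5-13 bit range.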

  15. Smoothed Particle Hydrodynamic Simulator

    SciTech Connect

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  16. Scaling and performance of a 3-D radiation hydrodynamics code on message-passing parallel computers: final report

    SciTech Connect

    Hayes, J C; Norman, M

    1999-10-28

    This report details an investigation into the efficacy of two approaches to solving the radiation diffusion equation within a radiation hydrodynamic simulation. Because leading-edge scientific computing platforms have evolved from large single-node vector processors to parallel aggregates containing tens to thousands of individual CPU's, the ability of an algorithm to maintain high compute efficiency when distributed over a large array of nodes is critically important. The viability of an algorithm thus hinges upon the tripartite question of numerical accuracy, total time to solution, and parallel efficiency.

  17. Hydrodynamic effects in the atmosphere of variable stars

    NASA Technical Reports Server (NTRS)

    Davis, C. G., Jr.; Bunker, S. S.

    1975-01-01

    Numerical models of variable stars are established, using a nonlinear radiative transfer coupled hydrodynamics code. The variable Eddington method of radiative transfer is used. Comparisons are for models of W Virginis, beta Doradus, and eta Aquilae. From these models it appears that shocks are formed in the atmospheres of classical Cepheids as well as W Virginis stars. In classical Cepheids, with periods from 7 to 10 days, the bumps occurring in the light and velocity curves appear as the result of a compression wave that reflects from the star's center. At the head of the outward going compression wave, shocks form in the atmosphere. Comparisons between the hydrodynamic motions in W Virginis and classical Cepheids are made. The strong shocks in W Virginis do not penetrate into the interior as do the compression waves formed in classical Cepheids. The shocks formed in W Virginis stars cause emission lines, while in classical Cepheids the shocks are weaker.

  18. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
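A standard way to realize such a bit allocation for transform coefficients is marginal analysis under the high-rate model D_i = sigma_i^2 * 2^(-2 b_i): repeatedly give the next bit to the coefficient whose distortion it reduces most. This is a generic sketch of that idea, not the dissertation's specific algorithm; the variances and bit budget are illustrative:

```python
def greedy_bit_allocation(variances, total_bits):
    """Marginal-analysis bit allocation under the high-rate model
    D_i = var_i * 2**(-2 * b_i): each bit goes where it cuts distortion most."""
    bits = [0] * len(variances)

    def distortion(i):
        return variances[i] * 2.0 ** (-2 * bits[i])

    for _ in range(total_bits):
        # one extra bit reduces a coefficient's distortion by a factor of 4,
        # so the gain is 3/4 of its current distortion
        i = max(range(len(variances)), key=lambda k: 0.75 * distortion(k))
        bits[i] += 1
    return bits

alloc = greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], 8)
```

High-variance (difficult) coefficients receive most of the budget, mirroring how the thesis's threshold criterion spends small blocks on difficult regions.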

  19. New Binary Complementary Codes Compressing a Pulse to a Width of Several Sub-pulses

    DTIC Science & Technology

    2005-04-14

    Department of Computer and Information Engineering, Nippon Institute of Technology, 4-1 Gakuendai, Miyashiro, Saitama-ken, 345-8501 Japan. [Only fragmentary report metadata survives in this record, including a partial citation to Trans. IEICE of Japan (in Japanese), vol. J85-B, no. 8, pp. 1434-1444, Aug. 2002.]

  20. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    NASA Technical Reports Server (NTRS)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
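The payoff of feeding subbands to a dictionary coder is easy to demonstrate with the standard-library zlib (DEFLATE, an LZ77 derivative rather than the patent's specific coder): redundant quantized subband data compresses dramatically. The mock subband below is illustrative:

```python
import zlib

# mock quantized subband: low-rate subbands are dominated by runs and repeated motifs
subband = bytes([0, 0, 1, 0, 0, 2, 0, 0, 1] * 200)
packed = zlib.compress(subband, level=9)
ratio = len(subband) / len(packed)  # compression ratio achieved by the LZ stage
```

The statistical coder is lossless, so the subband decoder sees exactly the indices the encoder produced.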

  1. Development of a Fast Breeder Reactor Fuel Bundle-Duct Interaction Analysis Code - BAMBOO: Analysis Model and Validation by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Tanaka, Kosuke

    2001-10-15

    To analyze wire-wrapped fast breeder reactor (FBR) fuel pin bundle deformation under bundle-duct interaction (BDI) conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. A three-dimensional beam element model is used in this code to calculate fuel pin bowing and cladding oval distortion, which are the dominant deformation mechanisms in a fuel pin bundle. In this work, the cladding oval distortion characteristic, taking the wire pitch into account, was evaluated experimentally and introduced into the code analysis. The BAMBOO code was then validated by using an out-of-pile bundle compression testing apparatus and comparing the test results with the code results. It is concluded that BAMBOO reasonably predicts the pin-to-duct clearances in the compression tests by treating the cladding oval distortion as the suppression mechanism for BDI.

  2. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  3. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
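The patent's central trick, nudging quantization indices that are uncertain by one unit so their values carry hidden bits, can be illustrated with a parity-based variant (an illustrative simplification, not the exact claimed method; the index values and payload are made up):

```python
def embed_bits(indices, bits):
    """Hide one bit per index by forcing its parity, moving each index by at most 1."""
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:
            idx += 1 if idx >= 0 else -1  # stay within one unit of the original value
        out.append(idx)
    return out

def extract_bits(indices, count):
    """The substantially reverse process: read the parities back."""
    return [idx % 2 for idx in indices[:count]]

quantized = [4, 7, -3, 0, 9, 2]   # indices from the lossy coder
payload = [1, 0, 1, 1, 0, 0]      # auxiliary bits to embed
stego = embed_bits(quantized, payload)
recovered = extract_bits(stego, len(payload))
```

Because each index moves by at most one quantization step, the perturbation stays inside the coder's inherent one-unit uncertainty.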

  4. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  5. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch starts; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have formulated an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean-temperature measurements and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  6. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch starts; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have formulated an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean-temperature measurements and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.
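The CDG scheme above hinges on recovering a sparse signal from far fewer measurements than unknowns. A minimal sketch of one standard recovery routine, orthogonal matching pursuit, with a random Gaussian measurement matrix (the dimensions, seed and sparsity are illustrative; the paper's own recovery method may differ):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily recover a sparse x with y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)  # 50 measurements of a length-100 signal
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]            # 3-sparse "network data"
y = A @ x_true
x_hat = omp(A, y, 3)
```

The adaptive feedback in the paper amounts to growing the number of rows of `A` until a termination rule judges the recovery stable.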

  7. SIMULATING THE COMMON ENVELOPE PHASE OF A RED GIANT USING SMOOTHED-PARTICLE HYDRODYNAMICS AND UNIFORM-GRID CODES

    SciTech Connect

    Passy, Jean-Claude; Mac Low, Mordecai-Mark; De Marco, Orsola; Fryer, Chris L.; Diehl, Steven; Rockefeller, Gabriel; Herwig, Falk; Oishi, Jeffrey S.; Bryan, Greg L.

    2012-01-01

    We use three-dimensional hydrodynamical simulations to study the rapid infall phase of the common envelope (CE) interaction of a red giant branch star of mass equal to 0.88 M_⊙ and a companion star of mass ranging from 0.9 down to 0.1 M_⊙. We first compare the results obtained using two different numerical techniques with different resolutions, and find very good agreement overall. We then compare the outcomes of those simulations with observed systems thought to have gone through a CE. The simulations fail to reproduce those systems in the sense that most of the envelope of the donor remains bound at the end of the simulations and the final orbital separations between the donor's remnant and the companion, ranging from 26.8 down to 5.9 R_⊙, are larger than the ones observed. We suggest that this discrepancy vouches for recombination playing an essential role in the ejection of the envelope and/or significant shrinkage of the orbit happening in the subsequent phase.

  8. Recent Advances in the Modeling of the Transport of Two-Plasmon-Decay Electrons in the 1-D Hydrodynamic Code LILAC

    NASA Astrophysics Data System (ADS)

    Delettrez, J. A.; Myatt, J. F.; Yaakobi, B.

    2015-11-01

    The modeling of fast-electron transport in the 1-D hydrodynamic code LILAC was modified because of the addition of cross-beam energy transfer (CBET) to implosion simulations. With the old fast-electron source model, CBET results in a shift of the peak of the hard x-ray (HXR) production from the end of the laser pulse, as observed in experiments, to earlier in the pulse. This is caused by a drop in the laser intensity at the quarter-critical surface from CBET interaction at lower densities. Data from simulations with the laser-plasma simulation environment (LPSE) code will be used to modify the source algorithm in LILAC. In addition, the transport model in LILAC has been modified to include deviations from the straight-line algorithm and non-specular reflection at the sheath to take into account scattering from collisions and magnetic fields in the corona. Simulation results will be compared with HXR emissions from both room-temperature plastic and cryogenic target experiments. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  9. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
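The paper's physics-first evaluation can be mimicked in miniature: lossily compress a state array (here by coarse uniform quantization, a stand-in for the paper's actual compressors) and check a physically meaningful aggregate, such as total energy, rather than a pointwise signal metric. All numbers below are made up:

```python
def quantize(values, step):
    """Crude lossy compression: snap every value to a uniform grid."""
    return [round(v / step) * step for v in values]

def relative_change(before, after):
    return abs(after - before) / abs(before)

# mock per-cell energies from one simulation time-step
energies = [1.013, 2.47, 0.982, 3.301, 1.558, 2.744]
compressed = quantize(energies, 0.05)
drift = relative_change(sum(energies), sum(compressed))  # physics-based metric
```

A compressor passes this kind of test when the drift in the conserved quantity stays below the tolerance the simulation's own discretization already introduces.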

  10. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. IV. The Neutrino Signal

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas

    2014-06-01

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M ⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos, and even their crossing during the accretion phase for stars with M ≳ 10 M ⊙, as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport.

  11. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques permit capturing a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions, and exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis; for this purpose, an optimization problem that seeks to minimize a joint l2-l1 norm is solved to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, so only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that minimizes the l2-norm, penalized by the l1-norm to force the solution to be sparse, and by the nuclear norm to force the solution to be low rank. Theoretical analysis, along with a set of simulations over different data sets, shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in peak signal-to-noise ratio (PSNR).
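The two penalties in the paper's objective have well-known proximal operators: element-wise soft thresholding for the l1-norm and singular value thresholding (SVT) for the nuclear norm. A sketch of both on a toy low-rank matrix (the sizes, seed and thresholds are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(M, tau):
    """Proximal operator of the l1 norm: element-wise shrinkage toward zero."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Proximal operator of the nuclear norm: shrink the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
L = np.outer(rng.standard_normal(20), rng.standard_normal(10))  # rank-1 "spectral" part
noisy = L + 0.01 * rng.standard_normal((20, 10))
denoised = svt(noisy, 0.5)   # wipes out the small noise singular values
rank = int(np.linalg.matrix_rank(denoised, tol=1e-6))
```

An alternating proximal-gradient scheme would interleave these two operators with a data-fidelity gradient step on the l2 term.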

  12. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. II. Relativistic Explosion Models of Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Marek, Andreas

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M ⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  13. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE. II. RELATIVISTIC EXPLOSION MODELS OF CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M_⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  14. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm was further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC compatible computer. Modules were developed for the tasks of image compression and image analysis, along with supporting software to perform image processing for visual display and interpretation of the compressed/classified images.
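The multispectral vector quantization step can be sketched compactly: each cross-channel pixel vector is replaced by the index of its nearest codeword, and fidelity is reported as RMS error, as in the study. The toy codebook and data below are made up:

```python
import math

def nearest(vec, codebook):
    """Index of the codeword closest in squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((v - c) ** 2 for v, c in zip(vec, codebook[i])))

def vq_encode(vectors, codebook):
    return [nearest(v, codebook) for v in vectors]

def vq_decode(indices, codebook):
    return [codebook[i] for i in indices]

def rms_error(a, b):
    se = sum((x - y) ** 2 for va, vb in zip(a, b) for x, y in zip(va, vb))
    return math.sqrt(se / sum(len(v) for v in a))

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]            # 2-channel codewords
pixels = [(0.1, 0.2), (0.9, 1.1), (0.1, 0.8), (1.0, 0.9)]  # cross-channel pixel vectors
indices = vq_encode(pixels, codebook)
err = rms_error(pixels, vq_decode(indices, codebook))
```

The index stream is then what the lossless difference-mapped shift-coding stage compresses further.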

  15. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data; efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes, and significantly better compression results show that "DNABIT Compress" outperforms the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
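The baseline any DNA compressor must beat is the trivial 2 bits/base packing of the four-letter alphabet; the reported 1.58 bits/base undercuts it by exploiting repeats. A sketch of the baseline packing (not the DNABIT scheme itself; the sequence is illustrative):

```python
BASE_CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
INV_CODE = {v: k for k, v in BASE_CODE.items()}

def pack(seq):
    """Pack a DNA string into bytes at 2 bits per base."""
    value = 0
    for base in seq:
        value = (value << 2) | BASE_CODE[base]
    nbytes = (2 * len(seq) + 7) // 8
    return value.to_bytes(nbytes, 'big')

def unpack(data, nbases):
    value = int.from_bytes(data, 'big')
    return ''.join(INV_CODE[(value >> shift) & 0b11]
                   for shift in range(2 * (nbases - 1), -1, -2))

seq = "ACGTACGTGG"
packed = pack(seq)
bits_per_base = 8 * len(packed) / len(seq)  # slightly above 2 due to byte padding
```

DNABIT Compress improves on this floor by replacing exact and reverse repeats with short unique bit codes instead of spelling them out base by base.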

  16. How to Build a Time Machine: Interfacing Hydrodynamics, Ionization Calculations and X-ray Spectral Codes for Supernova Remnants

    NASA Astrophysics Data System (ADS)

    Badenes, Carlos

    2006-02-01

    Thanks to Chandra and XMM-Newton, spatially resolved spectroscopy of SNRs in the X-ray band has become a reality. Several impressive data sets for ejecta-dominated SNRs can now be found in the archives, the Cas A VLP just being one (albeit probably the most spectacular) example. However, it is often hard to establish quantitative, unambiguous connections between the X-ray observations of SNRs and the dramatic events involved in a core collapse or thermonuclear SN explosion. The reason for this is that the very high quality of the data sets generated by Chandra and XMM for the likes of Cas A, SNR 292.0+1.8, Tycho, and SN 1006 has surpassed our ability to analyze them. The core of the problem is in the transient nature of the plasmas in SNRs, which results in an intimate relationship between the structure of the ejecta and AM, the SNR dynamics arising from their interaction, and the ensuing X-ray emission. Thus, the ONLY way to understand the X-ray observations of ejecta-dominated SNRs at all levels, from the spatially integrated spectra to the subarcsecond scales that can be resolved by Chandra, is to couple hydrodynamic simulations to nonequilibrium ionization (NEI) calculations and X-ray spectral codes. I will review the basic ingredients that enter this kind of calculation, and the prospects for using them to understand the X-ray emission from the shocked ejecta in young SNRs. This understanding (when it is possible) can turn SNRs into veritable time machines, revealing the secrets of the titanic explosions that generated them hundreds of years ago.

  17. A new multi-dimensional general relativistic neutrino hydrodynamics code for core-collapse supernovae. IV. The neutrino signal

    SciTech Connect

    Müller, Bernhard; Janka, Hans-Thomas E-mail: bjmuellr@mpa-garching.mpg.de

    2014-06-10

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M☉, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M☉, as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission at the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.

  18. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
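
    The index-manipulation idea above can be illustrated with a toy sketch: because each quantized index carries an uncertainty of one unit, the parity of an index can be nudged to encode one auxiliary bit without exceeding the lossy codec's own error. The function names and the parity scheme below are hypothetical illustrations, not the patented method itself.

```python
# Hypothetical sketch: embed auxiliary bits by forcing the parity of
# quantization indices, exploiting the one-unit uncertainty described
# in the abstract. Illustrative only, not the patented algorithm.

def embed_bits(indices, bits):
    """Force the parity of each index to match the auxiliary bit."""
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:
            idx += 1 if idx >= 0 else -1  # nudge by one unit
        out.append(idx)
    return out

def extract_bits(indices, n):
    """Recover the embedded bits from index parities."""
    return [idx % 2 for idx in indices[:n]]

quantized = [12, -7, 3, 0, 25]       # hypothetical codec indices
payload = [1, 0, 1, 1, 0]            # auxiliary data to hide
stego = embed_bits(quantized, payload)
```

Each embedded index differs from the original by at most one unit, so the distortion stays within the quantizer's own tolerance while the payload remains exactly recoverable.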

  19. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  20. The HULL Hydrodynamics Computer Code

    DTIC Science & Technology

    1976-09-01

    Mark A. Fry, Capt, USAF; Richard E. Durrett, Major, USAF; Gary P. Ganong, Major, USAF; Daniel A. Matuska, Major, USAF; Mitchell D. Stucker, Capt, USAF. Legible fragments of the scanned reference list include: Ganong, G.P., and Roberts, W.A., The Effect of the Nuclear Environment on Crater Ejecta Trajectories for Surface Bursts, AFWL-TR-68-125, Air Force...; Ganong, G.P., et al., private communication; AFWL-TR-69-19; Needham, C.E. [remainder of the scanned list is illegible].

  1. TRHD: Three-temperature radiation-hydrodynamics code with an implicit non-equilibrium radiation transport using a cell-centered monotonic finite volume scheme on unstructured-grids

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2015-05-01

    A three-temperature (3T), unstructured-mesh, non-equilibrium radiation hydrodynamics (RHD) code has been developed for the simulation of intense thermal radiation or high-power laser driven radiative shock hydrodynamics in two-dimensional (2D) axisymmetric geometries. The governing hydrodynamics equations are solved using a compatible unstructured Lagrangian method based on a control volume differencing (CVD) scheme. A second-order predictor-corrector (PC) integration scheme is used for the temporal discretization of the hydrodynamics equations. For the radiation energy transport, a frequency-averaged gray model is used, in which the flux-limited diffusion (FLD) approximation recovers the free-streaming limit of radiation propagation in optically thin regions. The RHD model allows the electrons and ions to have different temperatures. In addition, the electron and thermal radiation temperatures are assumed to be in non-equilibrium, so the thermal relaxation between electrons and ions and the coupling between the radiation and matter energies must be computed self-consistently. For this, the coupled flux-limited electron heat conduction and non-equilibrium radiation diffusion equations are solved simultaneously using an implicit, axisymmetric, cell-centered, monotonic, nonlinear finite volume (NLFV) scheme. In this paper, we describe the details of the 2D, 3T, non-equilibrium RHD code along with a suite of validation test problems that demonstrate the accuracy and performance of the algorithms. We also conduct a performance analysis with the different linearity-preserving interpolation schemes used for the evaluation of nodal values in the NLFV scheme. Finally, to demonstrate the full capability of the code, we present a simulation of laser-driven thin aluminum (Al) foil acceleration. The simulation results are found to be in good agreement

  2. Lossless data compression studies for NOAA hyperspectral environmental suite using 3D integer wavelet transforms with 3D embedded zerotree coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.

    2003-09-01

    Hyperspectral sounder data is a particular class of data that requires high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Compression of these data sets therefore needs to be lossless or near lossless. The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, robust data compression techniques will benefit data transfer and archiving. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via lifting schemes. The wavelet coefficients are then processed with the 3D embedded zerotree wavelet (EZW) algorithm followed by context-based arithmetic coding. We extend the 3D EZW scheme to accept 3D satellite data of any size, whose dimensions need not be divisible by 2^N, where N is the number of levels of wavelet decomposition performed. The compression ratios of various wavelet transforms are presented along with a comparison with the JPEG2000 codec.
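
    The lifting idea can be sketched in one dimension with the simple integer Haar (S) transform, a minimal stand-in for the 3D transforms studied in the paper. Because the lifting steps use only integer arithmetic, the transform is exactly invertible, which is what makes lossless coding possible.

```python
# Minimal 1-D integer wavelet transform via lifting (integer Haar /
# S-transform). Illustrative sketch only; the HES study applies 3-D
# transforms with several decomposition levels.

def forward(x):
    """One level of the integer Haar (S) transform on an even-length list."""
    s, d = [], []
    for a, b in zip(x[::2], x[1::2]):
        diff = b - a                # detail coefficient
        s.append(a + (diff >> 1))   # approximation (integer average)
        d.append(diff)
    return s, d

def inverse(s, d):
    """Exact inverse of forward(): recovers the original integers."""
    x = []
    for avg, diff in zip(s, d):
        a = avg - (diff >> 1)
        x.extend((a, a + diff))
    return x

data = [10, 12, 9, 7, 300, 305, -4, 0]
s, d = forward(data)
assert inverse(s, d) == data  # perfect (lossless) reconstruction
```

The detail coefficients of smooth data cluster near zero, which is the skewed distribution that the subsequent zerotree and entropy-coding stages exploit.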

  3. Progress in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.

    1998-07-01

    Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations in which the calculational elements are fuzzy particles that move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, so large deformation calculations can be done easily with no connectivity complications. Interface positions are known, and there are none of the problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural, and in fact much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, some problems inherent in the technique have so far limited its usefulness. The most serious is the well-known instability in tension, which leads to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced a half particle apart, leading to incorrect strain rates, accelerations and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper demonstrates solutions for these problems that the authors and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant invented by Lancaster and Salkauskas in 1981. SPH and MLS are closely related, MLS being essentially SPH with corrected particle volumes. When formulated correctly, MLS is conservative, stable in both compression and tension, does not have the SPH boundary problems and is not sensitive to particle placement. The other approach to
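
    The basic SPH summation interpolant described above can be sketched in one dimension: a field is estimated as a kernel-weighted sum over neighbouring particles. A Gaussian kernel is assumed here for brevity; production SPH codes usually use compactly supported spline kernels.

```python
import math

# Sketch of the SPH summation interpolant for density,
#   rho_i = sum_j m_j * W(|x_i - x_j|, h),
# using a 1-D normalized Gaussian kernel (an illustrative assumption).

def gaussian_kernel(r, h):
    """1-D Gaussian kernel, normalized so its integral over r is 1."""
    return math.exp(-(r / h) ** 2) / (h * math.sqrt(math.pi))

def sph_density(positions, masses, h):
    """Density at each particle via the SPH summation interpolant."""
    return [sum(m * gaussian_kernel(abs(xi - xj), h)
                for xj, m in zip(positions, masses))
            for xi in positions]

# 50 uniformly spaced particles of mass 0.1 at spacing 0.1: the interior
# density should be close to m/dx = 1.0, with a deficit near the ends --
# the boundary problem mentioned in the abstract.
xs = [i * 0.1 for i in range(50)]
rho = sph_density(xs, [0.1] * 50, h=0.2)
```

Note how the edge particles underestimate the density because half their kernel support is empty; this is exactly the boundary deficiency that MLS-style corrected-volume interpolants are designed to remove.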

  4. Verification of the FBR fuel bundle-duct interaction analysis code BAMBOO by the out-of-pile bundle compression test with large diameter pins

    NASA Astrophysics Data System (ADS)

    Uwaba, Tomoyuki; Ito, Masahiro; Nemoto, Junichi; Ichikawa, Shoichi; Katsuyama, Kozo

    2014-09-01

    The BAMBOO computer code was verified against results of the out-of-pile bundle compression test, which simulated large diameter pin bundle deformation under the bundle-duct interaction (BDI) condition. The pin diameters of the examined test bundles were 8.5 mm and 10.4 mm, which are targeted as preliminary fuel pin diameters for the upgraded core of the prototype fast breeder reactor (FBR) and for the demonstration and commercial FBRs studied in the FaCT project. In the bundle compression test, bundle cross-sectional views were obtained from X-ray computer tomography (CT) images, and local parameters of bundle deformation such as pin-to-duct and pin-to-pin clearances were measured by CT image analyses. In the verification, calculated bundle deformations obtained with the BAMBOO code were compared with the experimental results from the CT image analyses. The comparison showed that the BAMBOO code reasonably predicts deformation of large diameter pin bundles under the BDI condition by assuming that pin bowing and cladding oval distortion are the major deformation mechanisms, the same as in the case of small diameter pin bundles. In addition, the BAMBOO analysis results confirmed that cladding oval distortion effectively suppresses BDI in large diameter pin bundles as well as in small diameter pin bundles.

  5. Development of a Fast Breeder Reactor Fuel Bundle Deformation Analysis Code - BAMBOO: Development of a Pin Dispersion Model and Verification by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Ito, Masahiro; Ukai, Shigeharu

    2004-02-15

    To analyze wire-wrapped fast breeder reactor fuel pin bundle deformation under bundle/duct interaction conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. This code uses three-dimensional beam elements to calculate fuel pin bowing and cladding oval distortion as the primary deformation mechanisms in a fuel pin bundle. Pin dispersion, the disarrangement of pins in a bundle that would occur during irradiation, was modeled in this code to evaluate its effect on bundle deformation. By applying the contact analysis method commonly used in the finite element method, this model considers the contact conditions at various axial positions as well as at the nodal points, and can analyze the irregular arrangement of fuel pins due to deviation of the wire configuration. The dispersion model was introduced into the BAMBOO code and verified using the results of the out-of-pile compression test of the bundle, where the dispersion was caused by the deviation of the wire position. The effect of the dispersion on the bundle deformation was then evaluated based on the analysis results of the code.

  6. Black Widow Pulsar radiation hydrodynamics simulation using Castro: Methodology

    NASA Astrophysics Data System (ADS)

    Barrios Sazo, Maria; Zingale, Michael; Zhang, Weiqun

    2017-01-01

    A black widow pulsar (BWP) is a millisecond pulsar in a tight binary system with a low mass star. The fast rotating pulsar emits intense radiation, which injects energy into and ablates the companion star. The ablation is observed as pulsar eclipses caused by an object larger than the companion star's Roche lobe, a phenomenon attributed to a cloud surrounding the evaporating star. We will present the methodology for modeling the interaction between the radiation coming from the pulsar and the companion star using the radiation hydrodynamics code Castro. Castro is an adaptive mesh refinement (AMR) code that solves the compressible hydrodynamic equations for astrophysical flows with simultaneous refinement in space and time. The code also includes self-gravity, nuclear reactions and radiation. We are employing the gray-radiation solver, which uses a mixed-frame formulation of radiation hydrodynamics under the flux-limited diffusion approximation. In our setup, we model the companion star with the radiation field imposed as a boundary condition on one side of the domain. In addition to a model setup in 2D axisymmetry, we also have a 3D setup, which is more physical given that the companion faces the pulsar on one side. We discuss the progress of our calculations, first results, and future work. The work at Stony Brook was supported by DOE/Office of Nuclear Physics grant DE-FG02-87ER40317.

  7. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    NASA Technical Reports Server (NTRS)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques reduce the number of bits required to encode a set of symbols by deriving code words from the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
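
    The two-stage idea above can be sketched as follows: replace elevations by prediction corrections (a trivial previous-neighbour predictor stands in here for the paper's 8-point Lagrange-optimized predictor), then Huffman-code the corrections, whose peaked distribution yields short code words.

```python
import heapq
from collections import Counter

# Sketch of predict-then-Huffman-code for elevation data. The
# previous-neighbour predictor is an illustrative stand-in, not the
# paper's 8-point predictor.

def huffman_lengths(freqs):
    """Return the Huffman code length for each symbol in a frequency table."""
    heap = [(f, [sym]) for sym, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {sym: 0 for sym in freqs}
    while len(heap) > 1:
        f1, syms1 = heapq.heappop(heap)
        f2, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:      # merging deepens every member symbol
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, syms1 + syms2))
    return lengths

elevations = [100, 101, 103, 102, 102, 104, 107, 106, 105, 105]
corrections = [elevations[0]] + [b - a for a, b in
                                 zip(elevations, elevations[1:])]
freqs = Counter(corrections)
lengths = huffman_lengths(freqs)
bits = sum(lengths[c] for c in corrections)  # total coded size in bits
```

The corrections cluster around a few small values, so most symbols get short codes; the code lengths also satisfy the Kraft inequality, as any prefix code must.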

  8. Skew resisting hydrodynamic seal

    DOEpatents

    Conroy, William T.; Dietle, Lannie L.; Gobeli, Jeffrey D.; Kalsi, Manmohan S.

    2001-01-01

    A novel hydrodynamically lubricated compression type rotary seal that is suitable for lubricant retention and environmental exclusion. Particularly, the seal geometry ensures constraint of a hydrodynamic seal in a manner preventing skew-induced wear and provides adequate room within the seal gland to accommodate thermal expansion. The seal accommodates large as-manufactured variations in the coefficient of thermal expansion of the sealing material, provides a relatively stiff integral spring effect to minimize pressure-induced shuttling of the seal within the gland, and also maintains interfacial contact pressure within the dynamic sealing interface in an optimum range for efficient hydrodynamic lubrication and environment exclusion. The seal geometry also provides complete support about the circumference of the seal to receive environmental pressure, as compared to the interrupted character of seal support set forth in U.S. Pat. Nos. 5,873,576 and 6,036,192, and provides a hydrodynamic seal which is suitable for use with non-Newtonian lubricants.

  9. Numerical Simulation of Carbon Simple Cubic by Dynamic Compression

    NASA Astrophysics Data System (ADS)

    Kato, Kaori; Aoki, Takayuki; Sekine, Toshimori

    2001-02-01

    An impact scheme of a slab target and flyer with a layered structure is proposed to achieve low-entropy dynamic compression of diamond. The thermodynamic state of diamond during compression is examined using a one-dimensional Lagrangian hydrodynamic code and the tabulated equation-of-state library SESAME. The use of a material with a small shock impedance at the impact interfaces markedly decreases the strength of the primary shock wave. It is found that a gradient of shock impedance across the thickness of the flyer launches multiple small shock waves into the diamond and is effective for low-entropy compression. The thermodynamic conditions required for carbon simple cubic are achieved by low-entropy dynamic compression.

  10. Hydrodynamic Hunters.

    PubMed

    Jashnsaz, Hossein; Al Juboori, Mohammed; Weistuch, Corey; Miller, Nicholas; Nguyen, Tyler; Meyerhoff, Viktoria; McCoy, Bryan; Perkins, Stephanie; Wallgren, Ross; Ray, Bruce D; Tsekouras, Konstantinos; Anderson, Gregory G; Pressé, Steve

    2017-03-28

    The Gram-negative Bdellovibrio bacteriovorus (BV) is a model bacterial predator that hunts other bacteria and may serve as a living antibiotic. More than 50 years after its discovery, it is still suggested that BV probably collides with its prey at random, and it remains unclear to what degree, if any, BV uses chemical cues to target its prey. The targeted search problem by the predator for its prey in three dimensions is difficult: it requires the predator to detect prey sensitively and to forecast its mobile prey's future position on the basis of previously detected signal. Here we find that rather than chemically detecting prey, hydrodynamics forces BV into regions high in prey density, thereby improving its odds of a chance collision with prey and ultimately reducing BV's search space. We do so by showing that BV's dynamics are strongly influenced by self-generated hydrodynamic flow fields forcing BV onto surfaces and, for large enough defects on surfaces, forcing BV into orbital motion around these defects. Key experimental controls and calculations recapitulate the hydrodynamic origin of these behaviors. While BV's prey (Escherichia coli) are too small to trap BV in hydrodynamic orbit, the prey are also susceptible to their own hydrodynamic fields, substantially confining them to surfaces and defects where mobile predator and prey density is now dramatically enhanced. Colocalization, driven by hydrodynamics, ultimately reduces BV's search space for prey from three dimensions to two (on surfaces) and even down to a single dimension (around defects). We conclude that BV's search for individual prey remains random, as suggested in the literature, but confined, by generic hydrodynamic forces, to reduced dimensionality.

  11. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-03-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each test case was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the Pencil code. MUSIC is able both to reproduce behaviour from established and widely used codes and to produce results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  12. Measurement of Compression Factor and Error Sensitivity Factor of Five Selected Two-Dimensional Facsimile Coding Techniques

    DTIC Science & Technology

    1979-09-01

    [Partly illegible OCR fragment of French text from the scanned report: it mentions a veritable "banque de données" (data bank) distributed over national and regional processing facilities that must be kept up to date; a very dense test document with 1.5 mm lettering and its photographic restitution; and a compression ratio involving a dispersive delay line (ligne à retard) with group propagation time T.]

  13. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  14. Ship Hydrodynamics

    ERIC Educational Resources Information Center

    Lafrance, Pierre

    1978-01-01

    Explores, in a non-mathematical treatment, some of the hydrodynamic phenomena and forces that affect the operation of ships, especially at high speeds. Discusses the major components of ship resistance, such as the different types of drag and ways to reduce them, and how to apply those principles to the hovercraft. (GA)

  15. Image data compression investigation

    NASA Technical Reports Server (NTRS)

    Myrie, Carlos

    1989-01-01

    The continuous growth of NASA communications systems has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression using two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.
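
    The DPCM technique named above can be sketched in a few lines: transmit the quantized difference between each sample and a prediction (here, the previous decoded sample), with the encoder tracking the decoder's reconstruction so quantization errors do not accumulate. A minimal illustration, not NASA's codec:

```python
# Minimal DPCM sketch: code the quantized prediction error against the
# previous *decoded* sample, so encoder and decoder stay in lockstep.

def dpcm_encode(samples, step):
    pred, codes = 0, []
    for s in samples:
        q = round((s - pred) / step)   # quantized prediction error
        codes.append(q)
        pred += q * step               # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step):
    pred, out = 0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

signal = [0, 3, 7, 12, 14, 13, 11, 8]
codes = dpcm_encode(signal, step=2)
recon = dpcm_decode(codes, step=2)
```

Because the encoder predicts from its own reconstruction, each reconstructed sample stays within half a quantizer step of the input, and the small difference codes are cheaper to transmit than the raw PCM samples.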

  16. Radiation Hydrodynamics

    SciTech Connect

    Castor, J I

    2003-10-16

    The discipline of radiation hydrodynamics is the branch of hydrodynamics in which the moving fluid absorbs and emits electromagnetic radiation, and in so doing modifies its dynamical behavior. That is, the net gain or loss of energy by parcels of the fluid material through absorption or emission of radiation are sufficient to change the pressure of the material, and therefore change its motion; alternatively, the net momentum exchange between radiation and matter may alter the motion of the matter directly. Ignoring the radiation contributions to energy and momentum will give a wrong prediction of the hydrodynamic motion when the correct description is radiation hydrodynamics. Of course, there are circumstances when a large quantity of radiation is present, yet can be ignored without causing the model to be in error. This happens when radiation from an exterior source streams through the problem, but the latter is so transparent that the energy and momentum coupling is negligible. Everything we say about radiation hydrodynamics applies equally well to neutrinos and photons (apart from the Einstein relations, specific to bosons), but in almost every area of astrophysics neutrino hydrodynamics is ignored, simply because the systems are exceedingly transparent to neutrinos, even though the energy flux in neutrinos may be substantial. Another place where we can do ''radiation hydrodynamics'' without using any sophisticated theory is deep within stars or other bodies, where the material is so opaque to the radiation that the mean free path of photons is entirely negligible compared with the size of the system, the distance over which any fluid quantity varies, and so on. In this case we can suppose that the radiation is in equilibrium with the matter locally, and its energy, pressure and momentum can be lumped in with those of the rest of the fluid. That is, it is no more necessary to distinguish photons from atoms, nuclei and electrons, than it is to distinguish

  17. Progressive transmission and compression images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.
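
    The ordering idea can be sketched as follows, under the simplifying assumption that sending the highest-variance Laplacian subbands first removes the most distortion per bit. The scale estimate and the ranking rule below are illustrative, not the paper's exact rate-distortion analysis.

```python
# Hedged sketch: model each subband's coefficients as Laplacian,
# estimate the scale b from the mean absolute value (its maximum-
# likelihood estimator), and transmit subbands in order of decreasing
# variance (2*b**2). Illustrative, not the paper's exact rule.

def laplacian_scale(coeffs):
    """MLE of the Laplacian scale parameter b from samples."""
    return sum(abs(c) for c in coeffs) / len(coeffs)

def transmission_order(subbands):
    """Indices of subbands, highest estimated variance first."""
    scales = [laplacian_scale(s) for s in subbands]
    return sorted(range(len(subbands)), key=lambda i: -scales[i])

subbands = [[0.1, -0.2, 0.05],   # near-zero detail band
            [5.0, -3.0, 4.0],    # energetic low-frequency band
            [1.0, -1.0, 0.5]]
order = transmission_order(subbands)
```

Transmitting the energetic band first means a decoder that truncates the stream early still reconstructs the perceptually dominant content, which is the essence of progressive transmission.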

  18. Predictive Encoding in Text Compression.

    ERIC Educational Resources Information Center

    Raita, Timo; Teuhola, Jukka

    1989-01-01

    Presents three text compression methods of increasing power and evaluates each based on the trade-off between compression gain and processing time. The advantages of using hash coding for speed and of applying optimal arithmetic coding to successor information for compression gain are discussed. (26 references) (Author/CLB)

  19. Bacterial Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lauga, Eric

    2016-01-01

    Bacteria predate plants and animals by billions of years. Today, they are the world's smallest cells, yet they represent the bulk of the world's biomass and the main reservoir of nutrients for higher organisms. Most bacteria can move on their own, and the majority of motile bacteria are able to swim in viscous fluids using slender helical appendages called flagella. Low-Reynolds number hydrodynamics is at the heart of the ability of flagella to generate propulsion at the micrometer scale. In fact, fluid dynamic forces impact many aspects of bacteriology, ranging from the ability of cells to reorient and search their surroundings to their interactions within mechanically and chemically complex environments. Using hydrodynamics as an organizing framework, I review the biomechanics of bacterial motility and look ahead to future challenges.

  20. Hydrodynamic models of a Cepheid atmosphere

    NASA Technical Reports Server (NTRS)

    Karp, A. H.

    1975-01-01

    Instead of computing a large number of coarsely zoned hydrodynamic models covering the entire atmospheric instability strip, the author computed a single model as accurately as computer limitations allow. The implicit hydrodynamic code of Kutter and Sparks was modified to include radiative transfer effects in optically thin zones.

  1. Quantum hydrodynamics

    NASA Astrophysics Data System (ADS)

    Tsubota, Makoto; Kobayashi, Michikazu; Takeuchi, Hiromitsu

    2013-01-01

    Quantum hydrodynamics in superfluid helium and atomic Bose-Einstein condensates (BECs) has recently been one of the most important topics in low temperature physics. In these systems, a macroscopic wave function (order parameter) appears because of Bose-Einstein condensation, which creates quantized vortices. Turbulence consisting of quantized vortices is called quantum turbulence (QT). The study of quantized vortices and QT has intensified for two reasons. The first is that recent studies of QT are considerably advanced over older studies, which were chiefly limited to thermal counterflow in 4He, a phenomenon with no analog in classical turbulence, whereas new studies of QT focus on a comparison between QT and classical turbulence. The second reason is the realization of atomic BECs in 1995, for which modern optical techniques enable the direct control and visualization of the condensate and can even change the interaction; such direct control is impossible in other quantum condensates like superfluid helium and superconductors. Our group has made many important theoretical and numerical contributions to the field of quantum hydrodynamics of both superfluid helium and atomic BECs. In this article, we review some of the important topics in detail. The topics of quantum hydrodynamics are diverse, so we have not attempted to cover all of them in this article. We also ensure that the scope of this article does not overlap with our recent review article (arXiv:1004.5458), "Quantized vortices in superfluid helium and atomic Bose-Einstein condensates", and other review articles.

  2. Hydrodynamic supercontinuum.

    PubMed

    Chabchoub, A; Hoffmann, N; Onorato, M; Genty, G; Dudley, J M; Akhmediev, N

    2013-08-02

    We report the experimental observation of multi-bound-soliton solutions of the nonlinear Schrödinger equation (NLS) in the context of hydrodynamic surface gravity waves. Higher-order N-soliton solutions with N=2, 3 are studied in detail and shown to be associated with self-focusing in the wave group dynamics and the generation of a steep localized carrier wave underneath the group envelope. We also show that for larger input soliton numbers, the wave group experiences irreversible spectral broadening, which we refer to as a hydrodynamic supercontinuum by analogy with optics. This process is shown to be associated with the fission of the initial multisoliton into individual fundamental solitons due to higher-order nonlinear perturbations to the NLS. Numerical simulations using an extended NLS model described by the modified nonlinear Schrödinger equation show excellent agreement with experiment and highlight the universal role that higher-order nonlinear perturbations to the NLS play in supercontinuum generation.
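    The soliton dynamics described above are governed by the focusing NLS. As a minimal illustration (not the authors' extended MNLS model), a standard split-step Fourier integrator for the normalized equation i ψ_t + ½ψ_xx + |ψ|²ψ = 0 can be sketched as follows; all names and parameter values are illustrative.

```python
import numpy as np

def split_step_nls(psi0, dx, dt, steps):
    """Propagate the focusing NLS  i psi_t + 0.5 psi_xx + |psi|^2 psi = 0
    with the symmetric (Strang) split-step Fourier method."""
    n = psi0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers
    half_linear = np.exp(-0.5j * k**2 * dt / 2)    # half-step of dispersion
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = np.fft.ifft(half_linear * np.fft.fft(psi))
        psi *= np.exp(1j * np.abs(psi)**2 * dt)    # full nonlinear step
        psi = np.fft.ifft(half_linear * np.fft.fft(psi))
    return psi

# The fundamental soliton psi = sech(x) is stationary up to a phase,
# so its modulus should be preserved by the propagation.
x = np.linspace(-20, 20, 512, endpoint=False)
dx = x[1] - x[0]
psi = 1 / np.cosh(x)
out = split_step_nls(psi, dx, dt=0.01, steps=500)
```

Both sub-steps are unitary, so the discrete L2 norm (wave action) is conserved to round-off, which is a convenient sanity check on any such integrator.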

  3. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
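    Convolutional coding fundamentals can be illustrated with a minimal rate-1/2 encoder. The constraint-length-3 generator pair (7, 5) in octal is the common textbook choice, not necessarily a code studied in this report.

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3,
    generators (7, 5) octal. Emits two output bits per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111      # shift the new bit in
        for poly in g:
            # output bit = parity of the register taps the generator selects
            out.append(bin(state & poly).count("1") & 1)
    return out
```

For the input 1011 this encoder produces the classic sequence 11 10 00 01, which a Viterbi decoder would invert even in the presence of channel errors.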

  4. Hydrodynamic effects on coalescence.

    SciTech Connect

    Dimiduk, Thomas G.; Bourdon, Christopher Jay; Grillet, Anne Mary; Baer, Thomas A.; de Boer, Maarten Pieter; Loewenberg, Michael; Gorby, Allen D.; Brooks, Carlton F.

    2006-10-01

    The goal of this project was to design, build and test novel diagnostics to probe the effect of hydrodynamic forces on coalescence dynamics. Our investigation focused on how a drop coalesces onto a flat surface which is analogous to two drops coalescing, but more amenable to precise experimental measurements. We designed and built a flow cell to create an axisymmetric compression flow which brings a drop onto a flat surface. A computer-controlled system manipulates the flow to steer the drop and maintain a symmetric flow. Particle image velocimetry was performed to confirm that the control system was delivering a well conditioned flow. To examine the dynamics of the coalescence, we implemented an interferometry capability to measure the drainage of the thin film between the drop and the surface during the coalescence process. A semi-automated analysis routine was developed which converts the dynamic interferogram series into drop shape evolution data.

  5. Flash Kα radiography of laser-driven solid sphere compression for fast ignition

    NASA Astrophysics Data System (ADS)

    Sawada, H.; Lee, S.; Shiroto, T.; Nagatomo, H.; Arikawa, Y.; Nishimura, H.; Ueda, T.; Shigemori, K.; Sunahara, A.; Ohnishi, N.; Beg, F. N.; Theobald, W.; Pérez, F.; Patel, P. K.; Fujioka, S.

    2016-06-01

    Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm2. The temporal evolution of the experimental areal density is in good agreement with that simulated by a 2-D radiation-hydrodynamics code.

  6. Effect of non-local electron conduction in compression of solid ball target for fast ignition

    NASA Astrophysics Data System (ADS)

    Nagatomo, Hideo; Asahina, Takashi; Nicolai, Philippe; Sunahara, Atsushi; Johzaki, Tomoyuki

    2016-10-01

    In the first phase of the fast ignition scheme, the fuel target is compressed by the implosion laser; only a high fuel density is required at this stage, because the temperature increment needed to ignite the fuel is supplied by the heating lasers. The ideal compression method for a solid target is isentropic compression with a tailored pulse shape. However, it requires high laser intensities (>10^15 W/cm2), which generate hot electrons. For numerical simulation of these conditions, a non-local electron transport model is necessary. Recently, we have installed the SNB model in a 2-D radiation hydrodynamic simulation code. In this presentation, the effect of hot electrons on isentropic compression and the optimum compression method are discussed, which may also be significant for the shock ignition scheme. The effect of an external magnetic field on the hot electrons will also be considered. This study was supported by JSPS KAKENHI Grant No. 26400532.

  7. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
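    Double delta coding, as named above, amounts to transmitting second differences of a scan line: adjacent pixels are correlated, so the second difference is concentrated near zero and is cheap to entropy-code. A minimal round-trip sketch (function names illustrative, not the paper's implementation):

```python
import numpy as np

def double_delta_encode(row):
    """Encode a scan line by its second differences ('double delta')."""
    row = np.asarray(row, dtype=np.int64)
    d1 = np.diff(row, prepend=0)    # first difference
    d2 = np.diff(d1, prepend=0)     # second difference
    return d2

def double_delta_decode(d2):
    """Invert by two cumulative sums."""
    d1 = np.cumsum(d2)
    return np.cumsum(d1)
```

In a full coder the small second differences would then feed a variable-length code; the background-skipping idea mentioned above corresponds to run-length coding of zero runs.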

  8. An explicit-implicit solution of the hydrodynamic and radiation equations

    NASA Astrophysics Data System (ADS)

    Sahota, Manjit S.

    A solution of the coupled radiation-hydrodynamic equations on a median mesh is presented for a transient, three-dimensional, compressible, multimaterial, free-Lagrangian code. The code uses fixed-mass particles surrounded by median Lagrangian cells. These cells are free to change connectivity, which ensures accuracy in the differencing of equations and allows the code to handle extreme distortions. All calculations are done on a median Lagrangian mesh that is constructed from the Delaunay tetrahedral mesh using the Voronoi connection algorithm. Because each tetrahedron volume is shared equally by the four mass points (computational cells) located at the tetrahedron vertices, calculations are done at a tetrahedron level for enhanced computational efficiency, and the rate-of-change data are subsequently accumulated at mass points from these tetrahedral contributions. The hydrodynamic part of the calculations is done using an explicit time-advancement technique, and the radiation calculations are done using a hybrid explicit-implicit time-advancement scheme in the equilibrium-diffusion limit. An explicit solution of the radiation-diffusion equation is obtained for cells that meet the current time-step criterion imposed by the hydrodynamic solution, and a fully implicit point-relaxation solution is obtained elsewhere without defining an inversion matrix. The approach has a distinct advantage over the conventional matrix-inversion approaches, because defining such a matrix for an unstructured grid is both cumbersome and computationally intensive. The new algorithm runs >20 times faster than a matrix-solver approach using the conjugate-gradient technique, and is easily parallelizable on the Cray family of supercomputers. With the new algorithm, the radiation-diffusion part of the calculation runs about twice as fast as the hydrodynamic part of the calculation. The code conserves mass, momentum, and energy exactly, except in some pathological situations.
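    The matrix-free point-relaxation idea above can be illustrated on a 1-D model diffusion equation. This sketch applies Gauss-Seidel sweeps to a backward-Euler step without ever assembling an inversion matrix; it is only an assumption-laden structured-grid analog of the unstructured-mesh scheme described in the abstract.

```python
import numpy as np

def implicit_diffusion_step(u, D, dx, dt, sweeps=200):
    """One backward-Euler step of u_t = D u_xx on a periodic 1-D grid,
    solved matrix-free by Gauss-Seidel point relaxation: each cell is
    repeatedly updated from its current neighbor values."""
    r = D * dt / dx**2
    u_old = u.copy()
    u_new = u.copy()
    n = u.size
    for _ in range(sweeps):
        for i in range(n):
            left, right = u_new[i - 1], u_new[(i + 1) % n]
            u_new[i] = (u_old[i] + r * (left + right)) / (1 + 2 * r)
    return u_new
```

Because the implicit operator is diagonally dominant, the sweeps converge unconditionally; at convergence the periodic step conserves the total of u exactly, mirroring the conservation property claimed for the full code.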

  9. Chromatin hydrodynamics.

    PubMed

    Bruinsma, Robijn; Grosberg, Alexander Y; Rabin, Yitzhak; Zidovska, Alexandra

    2014-05-06

    Following recent observations of large scale correlated motion of chromatin inside the nuclei of live differentiated cells, we present a hydrodynamic theory, the two-fluid model, in which the content of a nucleus is described as a chromatin solution with the nucleoplasm playing the role of the solvent and the chromatin fiber that of a solute. This system is subject to both passive thermal fluctuations and active scalar and vector events that are associated with free energy consumption, such as ATP hydrolysis. Scalar events drive the longitudinal viscoelastic modes (where the chromatin fiber moves relative to the solvent) while vector events generate the transverse modes (where the chromatin fiber moves together with the solvent). Using linear response methods, we derive explicit expressions for the response functions that connect the chromatin density and velocity correlation functions to the corresponding correlation functions of the active sources and the complex viscoelastic moduli of the chromatin solution. We then derive general expressions for the flow spectral density of the chromatin velocity field. We use the theory to analyze experimental results recently obtained by one of the present authors and her co-workers. We find that the time dependence of the experimental data for both native and ATP-depleted chromatin can be well-fitted using a simple model, the Maxwell fluid, for the complex modulus, although there is some discrepancy in terms of the wavevector dependence. Thermal fluctuations of ATP-depleted cells are predominantly longitudinal. ATP-active cells exhibit intense transverse long wavelength velocity fluctuations driven by force dipoles. Fluctuations with wavenumbers larger than a few inverse microns are dominated by concentration fluctuations with the same spectrum as thermal fluctuations but with increased intensity.
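    The Maxwell-fluid complex modulus used in the fit above has a one-line closed form, G*(ω) = G₀·iωτ/(1 + iωτ): viscous (G* ≈ iωG₀τ) at low frequency, elastic (G* → G₀) at high frequency. The parameter names in this sketch are illustrative, not the fitted values of the paper.

```python
def maxwell_modulus(omega, g0, tau):
    """Complex viscoelastic modulus of a Maxwell fluid,
    G*(w) = g0 * i*w*tau / (1 + i*w*tau).
    g0: plateau modulus, tau: relaxation time."""
    iwt = 1j * omega * tau
    return g0 * iwt / (1 + iwt)
```

At the crossover frequency ω = 1/τ the storage modulus equals half the plateau value, a standard diagnostic for locating τ in measured spectra.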

  10. Compressive holographic video

    NASA Astrophysics Data System (ADS)

    Wang, Zihao; Spinoulas, Leonidas; He, Kuan; Tian, Lei; Cossairt, Oliver; Katsaggelos, Aggelos K.; Chen, Huaijin

    2017-01-01

    Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.

  11. Compressive holographic video.

    PubMed

    Wang, Zihao; Spinoulas, Leonidas; He, Kuan; Tian, Lei; Cossairt, Oliver; Katsaggelos, Aggelos K; Chen, Huaijin

    2017-01-09

    Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.

  12. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  13. Predictions for the drive capabilities of the RancheroS Flux Compression Generator into various load inductances using the Eulerian AMR Code Roxane

    SciTech Connect

    Watt, Robert Gregory

    2016-06-06

    The Ranchero Magnetic Flux Compression Generator (FCG) has been used to create current pulses in the 10-100 MA range for driving both “static” low inductance (0.5 nH) loads [1] for generator demonstration purposes and high inductance (10-20 nH) imploding liner loads [2] for ultimate use in physics experiments at very high energy density. Simulations of the standard Ranchero generator have recently shown that it had a design issue that could lead to flux trapping in the generator, and a non-robust predictability in its use in high energy density experiments. A re-examination of the design concept for the standard Ranchero generator, prompted by the possible appearance of an aneurism at the output glide plane, has led to a new generation of Ranchero generators designated the RancheroS (for swooped). This generator has removed the problematic output glide plane and replaced it with a region of constantly increasing diameter in the output end of the FCG cavity in which the armature is driven outward under the influence of an additional HE load not present in the original Ranchero. The resultant RancheroS generator, to be tested in LA43S-L13, probably in early FY17, has a significantly increased initial inductance and may be able to drive a somewhat higher load inductance than the standard Ranchero. This report will use the Eulerian AMR code Roxane to study the ability of the new design to drive static loads, with a goal of providing a database corresponding to the load inductances for which the generator might be used and the anticipated peak currents such loads might produce in physics experiments. Such a database, combined with a simple analytic model of an ideal generator, where d(LI)/dt = 0, and supplemented by earlier estimates of losses in actual use of the standard Ranchero, scaled to estimate the increase in losses due to the longer current carrying perimeter in the RancheroS, can then be used to bound the expectations for the current drive one may

  14. Compression ratio effect on methane HCCI combustion

    SciTech Connect

    Aceves, S. M.; Pitz, W.; Smith, J. R.; Westbrook, C.

    1998-09-29

    We have used the HCT (Hydrodynamics, Chemistry and Transport) chemical kinetics code to simulate HCCI (homogeneous charge compression ignition) combustion of methane-air mixtures. HCT is applied to explore the ignition timing, burn duration, NOx production, gross indicated efficiency and gross IMEP of a supercharged engine (3 atm intake pressure) with 14:1, 16:1 and 18:1 compression ratios at 1200 rpm. HCT has been modified to incorporate the effect of heat transfer and to calculate the temperature that results from mixing the recycled exhaust with the fresh mixture. This study uses a single control volume reaction zone that varies as a function of crank angle. The ignition process is controlled by adjusting the intake equivalence ratio and the residual gas trapping (RGT). RGT is internal exhaust gas recirculation which recycles both thermal energy and combustion product species. Adjustment of equivalence ratio and RGT is accomplished by varying the timing of the exhaust valve closure in either 2-stroke or 4-stroke engines. Inlet manifold temperature is held constant at 300 K. Results show that, for each compression ratio, there is a range of operational conditions that show promise of achieving the control necessary to vary power output while keeping indicated efficiency above 50% and NOx levels below 100 ppm. HCT results are also compared with a set of recent experimental data for natural gas.
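    The sensitivity of HCCI ignition timing to compression ratio follows from the adiabatic temperature rise during compression: a higher ratio delivers a hotter charge at top dead center, advancing autoignition. A rough ideal-gas estimate (the gamma value is an assumption for a methane-air charge, not a number from the paper):

```python
def tdc_temperature(t_intake, cr, gamma=1.35):
    """Ideal-gas adiabatic compression estimate of the top-dead-center
    temperature: T_tdc = T_intake * CR**(gamma - 1).
    gamma ~ 1.35 is a rough value for a lean methane-air charge."""
    return t_intake * cr ** (gamma - 1)
```

With the 300 K inlet temperature quoted above, this simple estimate already spans roughly 70 K between the 14:1 and 18:1 ratios, which is large compared with the narrow temperature window that controls HCCI ignition.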

  15. Progressive Transmission and Compression of Images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.
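    The Laplacian model for subband statistics mentioned above yields a closed-form rate proxy. This sketch estimates the scale parameter by the mean absolute value (the maximum-likelihood estimator for a zero-mean Laplacian) and reports the differential entropy, 1 + ln(2b), in bits; names are illustrative, not the paper's ordering algorithm.

```python
import numpy as np

def laplacian_rate(subband):
    """Estimate the Laplacian scale b of a subband from its mean
    absolute value and return the differential entropy (1 + ln(2b))
    in bits, a proxy for the bit cost of coding the band."""
    b = np.mean(np.abs(subband))
    return (1 + np.log(2 * b)) / np.log(2)
```

Ranking subbands by such a rate (or rate-distortion) measure is one way to decide transmission order in a progressive scheme: bands promising the largest distortion reduction per bit go first.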

  16. Three-dimensional hydrodynamic experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Blue, B. E.; Robey, H. F.; Glendinning, S. G.; Bono, M. J.; Burkhart, S. C.; Celeste, J. R.; Coker, R. F.; Costa, R. L.; Dixit, S. N.; Foster, J. M.; Hansen, J. F.; Haynam, C. A.; Hermann, M. R.; Holder, J. P.; Hsing, W. W.; Kalantar, D. H.; Lanier, N. E.; Latray, D. A.; Louis, H.; MacGowan, B. J.; Maggelssen, G. R.; Marshall, C. D.; Moses, E. I.; Nikitin, A. J.; O'Brien, D. W.; Perry, T. S.; Poole, M. W.; Rekow, V. V.; Rosen, P. A.; Schneider, M. B.; Stry, P. E.; Van Wonterghem, B. M.; Wallace, R.; Weber, S. V.; Wilde, B. H.; Woods, D. T.; Young, B. K.

    2005-05-01

    The production of supersonic jets of material via the interaction of a strong shock wave with a spatially localized density perturbation is a common feature of inertial confinement fusion and astrophysics. The behavior of two-dimensional (2D) supersonic jets has previously been investigated in detail [J. M. Foster, B. H. Wilde, P. A. Rosen, T. S. Perry, M. Fell, M. J. Edwards, B. F. Lasinski, R. E. Turner, and M. L. Gittings, Phys. Plasmas 9, 2251 (2002)]. In three dimensions (3D), however, there are new aspects to the behavior of supersonic jets in compressible media. In this paper, the commissioning activities on the National Ignition Facility (NIF) [J. A. Paisner, J. D. Boyes, S. A. Kumpan, W. H. Lowdermilk, and M. Sorem, Laser Focus World 30, 75 (1994)] to enable hydrodynamic experiments will be presented as well as the results from the first series of hydrodynamic experiments. In these experiments, two of the first four beams of NIF are used to drive a 40 Mbar shock wave into millimeter scale aluminum targets backed by 100 mg/cc carbon aerogel foam. The remaining beams are delayed in time and are used to provide a point-projection x-ray backlighter source for diagnosing the three-dimensional structure of the jet evolution resulting from a variety of 2D and 3D features. Comparisons between data and simulations using several codes will be presented.

  17. Three-Dimensional Hydrodynamics Experiments on the National Ignition Facility

    SciTech Connect

    Blue, B E; Weber, S V; Glendinning, S; Lanier, N; Woods, D; Bono, M; Dixit, S; Haynam, C; Holder, J; Kalantar, D; MacGowan, B; Moses, E; Nikitin, A; Rekow, V; Wallace, R; Van Wonterghem, B; Rosen, P; Foster, J; Stry, P; Wilde, B; Hsing, W; Robey, H

    2004-11-12

    The production of supersonic jets of material via the interaction of a strong shock wave with a spatially localized density perturbation is a common feature of inertial confinement fusion and astrophysics. The behavior of two-dimensional (2D) supersonic jets has previously been investigated in detail [J. M. Foster et al., Phys. Plasmas 9, 2251 (2002)]. In three dimensions (3D), however, there are new aspects to the behavior of supersonic jets in compressible media. In this paper, the commissioning activities on the National Ignition Facility (NIF) [J. A. Paisner et al., Laser Focus World 30, 75 (1994)] to enable hydrodynamic experiments will be presented as well as the results from the first series of hydrodynamic experiments. In these experiments, two of the first four beams of NIF are used to drive a 40 Mbar shock wave into millimeter scale aluminum targets backed by 100 mg/cc carbon aerogel foam. The remaining beams are delayed in time and are used to provide a point-projection x-ray backlighter source for diagnosing the three-dimensional structure of the jet evolution resulting from a variety of 2D and 3D features. Comparisons between data and simulations using several codes will be presented.

  18. Three-Dimensional Hydrodynamic Experiments on the National Ignition Facility

    SciTech Connect

    Blue, B E; Robey, H F; Glendinning, S G; Bono, M J; Dixit, S N; Foster, J M; Haynam, C A; Holder, J P; Hsing, W W; Kalantar, D H; Lanier, N E; MacGowan, B J; Moses, E I; Nikitin, A J; Perry, T S; Rekow, V V; Rosen, P A; Stry, P E; Van Wonterghem, B M; Wallace, R; Weber, S V; Wilde, B H; Woods, D T

    2005-02-09

    The production of supersonic jets of material via the interaction of a strong shock wave with a spatially localized density perturbation is a common feature of inertial confinement fusion and astrophysics. The behavior of two-dimensional (2D) supersonic jets has previously been investigated in detail [J. M. Foster et al., Phys. Plasmas 9, 2251 (2002)]. In three dimensions (3D), however, there are new aspects to the behavior of supersonic jets in compressible media. In this paper, the commissioning activities on the National Ignition Facility (NIF) [J. A. Paisner et al., Laser Focus World 30, 75 (1994)] to enable hydrodynamic experiments will be presented as well as the results from the first series of hydrodynamic experiments. In these experiments, two of the first four beams of NIF are used to drive a 40 Mbar shock wave into millimeter scale aluminum targets backed by 100 mg/cc carbon aerogel foam. The remaining beams are delayed in time and are used to provide a point-projection x-ray backlighter source for diagnosing the three-dimensional structure of the jet evolution resulting from a variety of 2D and 3D features. Comparisons between data and simulations using several codes will be presented.

  19. Data Compression.

    ERIC Educational Resources Information Center

    Bookstein, Abraham; Storer, James A.

    1992-01-01

    Introduces this issue, which contains papers from the 1991 Data Compression Conference, and defines data compression. The two primary functions of data compression are described, i.e., storage and communications; types of data using compression technology are discussed; compression methods are explained; and current areas of research are…

  20. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. The routines are universal because they apply to virtually any "real-world" data source.
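    Rice coding is the canonical universal noiseless code of this family: each nonnegative integer is split into a unary-coded quotient and k binary remainder bits, and the parameter k is adapted to the data. A minimal encoder/decoder pair (illustrative, not the FORTRAN flight subroutines themselves):

```python
def rice_encode(values, k):
    """Rice (Golomb power-of-two) code for nonnegative integers:
    unary quotient v >> k, then k remainder bits, MSB first."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                       # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, k, count):
    """Invert rice_encode for `count` values."""
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                            # read unary part
            q += 1
            pos += 1
        pos += 1                                         # skip 0 terminator
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]
            pos += 1
        out.append((q << k) | r)
    return out
```

Signed prediction residuals are usually folded into nonnegative integers first (the zigzag map 0, -1, 1, -2, ... → 0, 1, 2, 3, ...) so that small magnitudes get short codewords.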

  1. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    SciTech Connect

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  2. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  3. Argon X-ray line imaging - A compression diagnostic for inertial confinement fusion targets

    SciTech Connect

    Koppel, L.N.

    1980-01-01

    The paper describes argon X-ray line imaging, which measures the compressed fuel volume directly by forming one-dimensional images of X-rays from argon gas seeded into the D-T fuel. The photon energies of the X-rays are recorded on the film of a diffraction-crystal spectrograph. Neutron activation, which detects activated nuclei produced by the interaction of 14-MeV neutrons with selected materials of the target, makes it possible to calculate the final compressed fuel density using a hydrodynamics simulation code together with the total number of activated nuclei and the neutron yield. Argon X-ray line imaging appears to be a valid fuel-compression diagnostic for final fuel densities in the range of 10 to 50 times liquid D-T density.

  4. Hydrodynamic Simulations of Planetary Rings

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Stewart, G. R.; Esposito, L. W.

    2013-10-01

    Simulations of rings have traditionally been done using N-body methods, granting insight into the interactions of individual ring particles on varying scales. However, due to the scale of a typical ring system and the sheer number of particles involved, a global N-body simulation is too computationally expensive, unless particle collisions are replaced by stochastic forces (Bromley & Kenyon, 2013). Rings are extraordinarily flat systems and therefore are well-suited to existing geophysical shallow-water hydrodynamics models with well-established non-linear advection methods. By adopting a general relationship between pressure and surface density such as a polytropic equation of state, we can modify the shallow-water formulation to treat a thin, compressible, self-gravitating, shearing fluid. Previous hydrodynamic simulations of planetary rings have been restricted to axisymmetric flows and therefore have not treated the response to nonaxisymmetric perturbations by moons (Schmidt & Tscharnuter 1999, Latter & Ogilvie 2010). We seek to expand on existing hydrodynamic methods and, by comparing our work with complementary N-body simulations and Cassini observations, confirm the veracity of our results at small scales before eventually moving to a global domain size. We will use non-Newtonian, dynamically variable viscosity to model the viscous transport caused by unresolved self-gravity wakes. Self-gravity will be added to model the dynamics of large-scale structures, such as density waves and edge waves. Support from NASA Outer Planets and Planetary Geology and Geophysics programs is gratefully acknowledged.
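    A compressible shallow-water step of the kind described, with the ½gh² term playing the role of a polytropic (γ = 2) pressure, can be sketched with a first-order Lax-Friedrichs update. This is only an illustrative conservative scheme, not the authors' advection method, and it omits self-gravity, shear, and viscosity.

```python
import numpy as np

def lax_friedrichs_step(h, hu, dx, dt, g=1.0):
    """One Lax-Friedrichs step of the 1-D shallow-water equations
    h_t + (hu)_x = 0,  (hu)_t + (hu^2 + g h^2 / 2)_x = 0
    on a periodic grid; the g*h^2/2 term is the polytropic pressure."""
    f_h = hu
    f_hu = hu**2 / h + 0.5 * g * h**2

    def lf(u, f):
        # neighbor average plus centered flux difference
        up, um = np.roll(u, -1), np.roll(u, 1)
        fp, fm = np.roll(f, -1), np.roll(f, 1)
        return 0.5 * (up + um) - dt / (2 * dx) * (fp - fm)

    return lf(h, f_h), lf(hu, f_hu)
```

Being in conservation form with periodic boundaries, the step preserves the total surface density to round-off, a property any ring-dynamics scheme must share.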

  5. Supernova hydrodynamics experiments using the Nova laser

    SciTech Connect

    Remington, B.A.; Glendinning, S.G.; Estabrook, K.; Wallace, R.J.; Rubenchik, A.; Kane, J.; Arnett, D.; Drake, R.P.; McCray, R.

    1997-04-01

    We are developing experiments using the Nova laser to investigate two areas of physics relevant to core-collapse supernovae (SN): (1) compressible nonlinear hydrodynamic mixing and (2) radiative shock hydrodynamics. In the former, we are examining the differences between the 2D and 3D evolution of the Rayleigh-Taylor instability, an issue critical to the observables emerging from SN in the first year after exploding. In the latter, we are investigating the evolution of a colliding plasma system relevant to the ejecta-stellar wind interactions of the early stages of SN remnant formation. The experiments and astrophysical implications are discussed.

  6. Hydrodynamics from Landau initial conditions

    SciTech Connect

    Sen, Abhisek; Gerhard, Jochen; Torrieri, Giorgio; Read Jr., Kenneth F.; Wong, Cheuk-Yin

    2015-01-01

    We investigate ideal hydrodynamic evolution, with Landau initial conditions, both in a semi-analytical 1+1D approach and in a numerical code incorporating event-by-event variation with many events and transverse density inhomogeneities. The object of the calculation is to test how fast a Landau initial condition would transition to a commonly used boost-invariant expansion. We show that the transition to boost-invariant flow occurs too late for realistic setups, with corrections of O(20-30%) expected at freezeout for most scenarios. Moreover, the deviation from boost-invariance is correlated with both transverse flow and elliptic flow, with the more highly transversely flowing regions also showing the most violation of boost invariance. Therefore, if longitudinal flow is not fully developed at the early stages of heavy ion collisions, 2+1 dimensional hydrodynamics is inadequate to extract transport coefficients of the quark-gluon plasma. Based on [1, 2]
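    The boost-invariant limit the calculation is compared against is Bjorken flow, for which an ideal conformal fluid cools as T(τ) = T₀ (τ₀/τ)^(1/3). A one-line sketch with illustrative parameter values (not numbers from the paper):

```python
def bjorken_temperature(tau, t0=0.5, temp0=0.35):
    """Ideal boost-invariant (Bjorken) cooling law for a conformal
    fluid: T = T0 * (tau0 / tau)**(1/3).
    tau, t0 in fm/c and temp0 in GeV are illustrative choices."""
    return temp0 * (t0 / tau) ** (1.0 / 3.0)
```

The slow one-third power is why deviations from boost invariance at early times, as found above, can persist to freezeout rather than being washed out.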

  7. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs, and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve results more similar to those of the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles from all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests that differences also arise that are not intrinsic to the particular method but are rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.

  8. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
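The "few succinct rules" are affine maps from larger domain blocks onto smaller range blocks; iterating them from any starting image converges to an approximation of the original. A minimal sketch, assuming a toy square grayscale image as a list of lists (the block names and search strategy here are illustrative, not Barnsley and Sloan's production scheme, which also searches rotated and flipped domains):

```python
# Toy partitioned-IFS fractal coder: match each 2x2 "range" block to a
# downsampled 4x4 "domain" block via v -> s*v + o, with |s| < 1 clipped so
# that decoding (iterating the maps) is a contraction and converges.

def blocks(img, size):
    n = len(img)
    for r in range(0, n, size):
        for c in range(0, n, size):
            yield r, c

def get_block(img, r, c, size):
    return [img[r + i][c + j] for i in range(size) for j in range(size)]

def downsample(img, r, c):
    # Average 2x2 cells of a 4x4 domain block down to 2x2.
    return [sum(img[r + 2*i + di][c + 2*j + dj] for di in (0, 1) for dj in (0, 1)) / 4.0
            for i in range(2) for j in range(2)]

def fit(dom, rng):
    # Least-squares contrast s and brightness o; s clipped for contractivity.
    n = len(rng)
    md, mr = sum(dom) / n, sum(rng) / n
    var = sum((d - md) ** 2 for d in dom)
    s = sum((d - md) * (g - mr) for d, g in zip(dom, rng)) / var if var else 0.0
    s = max(-0.9, min(0.9, s))
    return s, mr - s * md

def encode(img):
    code = []
    for r, c in blocks(img, 2):                    # each range block
        rng = get_block(img, r, c, 2)
        best = None
        for dr, dc in blocks(img, 4):              # brute-force domain search
            dom = downsample(img, dr, dc)
            s, o = fit(dom, rng)
            err = sum((s * d + o - g) ** 2 for d, g in zip(dom, rng))
            if best is None or err < best[0]:
                best = (err, dr, dc, s, o)
        code.append((r, c) + best[1:])
    return code

def decode(code, n, iters=12):
    img = [[128.0] * n for _ in range(n)]          # arbitrary start image
    for _ in range(iters):
        new = [[0.0] * n for _ in range(n)]
        for r, c, dr, dc, s, o in code:
            dom = downsample(img, dr, dc)
            for k, v in enumerate(dom):
                new[r + k // 2][c + k % 2] = s * v + o
        img = new
    return img
```

The stored code is only a (range position, domain position, s, o) tuple per block, which is where the compression comes from; the "small errors in codes lead to small errors in image data" stability quoted above is exactly the contraction property.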

  9. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

Speech coding techniques are equally applicable to any voice signal, whether or not it carries intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice are used interchangeably.

  10. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  11. Hydrodynamical comparison test of solar models

    NASA Astrophysics Data System (ADS)

    Bach, K.; Kim, Y.-C.

    2012-12-01

We present three-dimensional radiation-hydrodynamical (RHD) simulations of solar surface convection based on the three most recent solar mixtures: Grevesse & Sauval (1998); Asplund, Grevesse & Sauval (2005); and Asplund, Grevesse, Sauval & Scott (2009). The outer convection zone of the Sun is an extremely turbulent region composed of partly ionized compressible gases at high temperature. The super-adiabatic layer (SAL) is the transition region where the transport of energy changes drastically from convection to radiation. In order to describe the physical processes accurately, a realistic treatment of radiation must be considered as well as convection. However, the newly updated solar mixtures, themselves established from radiation-hydrodynamics, do not properly reproduce the internal structure estimated by helioseismology. In order to address this fundamental problem, solar models are constructed consistently from each mixture and used as initial configurations for the radiation-hydrodynamical simulations. From our simulations, we find that the turbulent flows in each model are statistically similar in the SAL.

  12. Astrophysical smooth particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Rosswog, Stephan

    2009-04-01

    The paper presents a detailed review of the smooth particle hydrodynamics (SPH) method with particular focus on its astrophysical applications. We start by introducing the basic ideas and concepts and thereby outline all ingredients that are necessary for a practical implementation of the method in a working SPH code. Much of SPH's success relies on its excellent conservation properties and therefore the numerical conservation of physical invariants receives much attention throughout this review. The self-consistent derivation of the SPH equations from the Lagrangian of an ideal fluid is the common theme of the remainder of the text. We derive a modern, Newtonian SPH formulation from the Lagrangian of an ideal fluid. It accounts for changes of the local resolution lengths which result in corrective, so-called "grad-h-terms". We extend this strategy to special relativity for which we derive the corresponding grad-h equation set. The variational approach is further applied to a general-relativistic fluid evolving in a fixed, curved background space-time. Particular care is taken to explicitly derive all relevant equations in a coherent way.
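The basic ingredient the review builds on is the kernel-weighted density estimate, rho_i = sum_j m_j W(|x_i - x_j|, h). A minimal 1D sketch using the standard cubic-spline (M4) kernel (the O(N^2) neighbour loop and uniform test line are illustrative simplifications; real codes use trees or cell lists, and the grad-h terms discussed in the review are omitted):

```python
# SPH density summation in 1D with the M4 cubic-spline kernel.

def cubic_spline_1d(r, h):
    # M4 cubic spline, normalized so its 1D integral is 1 (sigma = 2/(3h)).
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    # rho_i = sum_j m * W(x_i - x_j, h); brute-force neighbour loop.
    return [sum(m * cubic_spline_1d(xi - xj, h) for xj in x) for xi in x]

# Uniformly spaced particles, spacing dx = 1 and mass m = 1: the interior
# density estimate should come out close to m/dx = 1, with a deficit at edges.
positions = [float(i) for i in range(40)]
rho = sph_density(positions, m=1.0, h=1.2)
```

The edge deficit (rho drops for the outermost particles because half their kernel support is empty) is the 1D analogue of the surface problems SPH codes must handle at free boundaries.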

  13. Scaling supernova hydrodynamics to the laboratory

    SciTech Connect

    Kane, J O; Remington, B A; Arnett, D; Fryxell, B A; Drake, R P

    1998-11-10

Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, they are attempting to rigorously scale the physics of the supernova to the laboratory. The scaling of hydrodynamics on microscopic laser scales to hydrodynamics on SN-size scales is presented and requirements are established. Initial results were reported in [1]. Next, the appropriate conditions are generated on the Nova laser: a 10-15 Mbar shock is launched at the interface of a two-layer planar target, triggering perturbation growth due to the Richtmyer-Meshkov instability and, as the interface decelerates, to the Rayleigh-Taylor instability. This scales the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10^3 s. The experiment is modeled using the hydrodynamics codes HYADES and CALE, and the supernova code PROMETHEUS. Results of the experiments and simulations are presented. An analysis of the spike and bubble velocities using potential flow theory and Ott thin-shell theory is presented, as well as a study of the 2D vs. 3D difference in growth at the He-H interface of SN 1987A.
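The scaling being invoked is, at bottom, the invariance of the ideal (Euler) hydrodynamic equations under a rescaling of length, density, and pressure; stated here in textbook form as a sketch rather than taken from the abstract:

```latex
% Euler-equation similarity: rescale
x \to a\,x, \qquad \rho \to b\,\rho, \qquad p \to c\,p
% and the equations are unchanged provided
t \to a\sqrt{b/c}\; t, \qquad v \to \sqrt{c/b}\; v
```

Two systems then evolve identically when their dimensionless initial conditions match (e.g. the same value of $v\sqrt{\rho/p}$), provided dissipative scales remain negligible in both, which is why a microns-and-nanoseconds laser target can stand in for an SN interface at $10^3$ s.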

  14. Measurement of Compression Factor and Error Sensitivity Factor of Facsimile Coding Techniques Submitted to the CCITT By Great Britain and the Federal Republic of Germany

    DTIC Science & Technology

    1979-10-01

…all the information, a veritable "data bank" distributed over national and regional processing facilities, which must remain supplied… letter 1.5 mm high – photo rendition no. 9… This holds all the more as TΔf is large; the delay line is given by… being equal to 1/Δf, the compression ratio is TΔf. Fig. 3 – a filter followed by a dispersive delay line with a time…

  15. Inertial-Fusion-Related Hydrodynamic Instabilities in a Spherical Gas Bubble Accelerated by a Planar Shock Wave

    SciTech Connect

    Niederhaus, John; Ranjan, Devesh; Anderson, Mark; Oakley, Jason; Bonazza, Riccardo; Greenough, Jeff

    2005-05-15

Experiments studying the compression and unstable growth of a dense spherical bubble in a gaseous medium subjected to a strong planar shock wave (2.8 < M < 3.4) are performed in a vertical shock tube. The test gas is initially contained in a free-falling spherical soap-film bubble, and the shocked bubble is imaged using planar laser diagnostics. Concurrently, simulations are carried out using a compressible hydrodynamics code in r-z axisymmetric geometry. Experiments and computations indicate the formation of characteristic vortical structures in the post-shock flow, due to Richtmyer-Meshkov and Kelvin-Helmholtz instabilities, and smaller-scale vortices due to secondary effects. Inconsistencies between experimental and computational results are examined, and the usefulness of the current axisymmetric approach is evaluated.

  16. File Compression and Expansion of the Genetic Code by the use of the Yin/Yang Directions to find its Sphered Cube

    PubMed Central

    Castro-Chavez, Fernando

    2014-01-01

Objective The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient “Book of Changes” or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. Methods The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64 hexagrams of the ancient I Ching, as if they were the 64 codons or words of the genetic code. Results In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids to the directionality of their resulting blocks of codons, grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from human versus Neanderthal, as well as to Negadi’s work on the importance of the number 384 within the genetic code. Conclusions Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid displayed the two long (binary number one) and older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger and broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the genetic code and its plasticity was helpful to compare compatible sequences (human compatible to human versus Neanderthal compatible to Neanderthal), while further exploring the wondrous biodiversity of nature for

  17. Shock Propagation and Instability Structures in Compressed Silica Aerogels

    SciTech Connect

    Howard, W M; Molitoris, J D; DeHaven, M R; Gash, A E; Satcher, J H

    2002-05-30

We have performed a series of experiments examining shock propagation in low-density aerogels. High-pressure (~100 kbar) shock waves are produced by detonating high explosives. Radiography is used to obtain time-sequence imaging of the shocks as they enter and traverse the aerogel. We compress the aerogel by impinging shock waves on either one or both sides of an aerogel slab. The shock wave initially transmitted to the aerogel is very narrow and flat, but disperses and curves as it propagates. Optical images of the shock front reveal the initial formation of a hot dense region that cools and evolves into a well-defined microstructure. Structures observed in the shock front are examined in the framework of hydrodynamic instabilities generated as the shock traverses the low-density aerogel. The primary features of shock propagation are compared to simulations, which also include modeling the detonation of the high explosive, with a 2-D Arbitrary Lagrange-Eulerian hydrodynamics code. The code includes a detailed thermochemical equation of state and rate-law kinetics. We present an analysis of the data from the time-resolved imaging diagnostics and form a consistent picture of the shock transmission, propagation, and instability structure.

  18. Design of Fiber Optic Sensors for Measuring Hydrodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lyons, Donald R.; Quiett, Carramah; Griffin, DeVon (Technical Monitor)

    2001-01-01

The science of optical hydrodynamics involves relating the optical properties of a hydrodynamic system to its fluid-dynamic properties. Fiber-optic sensors are being designed for measuring the hydrodynamic parameters of various systems. As a flowing fluid encounters a flat surface, it forms a boundary layer near that surface. The region between the boundary layer and the flat plate contains information about parameters such as viscosity, compressibility, pressure, density, and velocity. An analytical model has been developed for examining the hydrodynamic parameters near the surface of a fiber-optic sensor. An analysis of the conservation of momentum, the continuity equation, and the Navier-Stokes equations for compressible flow was used to develop expressions for the velocity and the density as functions of the distance along the flow and above the surface. When examining the flow near the surface, these expressions are used to estimate the sensitivity required to perform direct optical measurements and to derive the shear force for indirect optical measurements. This result permits the incorporation of better design parameters into other fiber-based sensors. Future work includes analyzing the optical parametric designs of fiber-optic sensors, modeling sensors to exploit these parameters for hydrodynamics, and applying the model to different regimes of hydrodynamic flow. Finally, the fabrication of fiber-optic sensors for hydrodynamic flow applications of the type described in this presentation could enhance aerospace, submarine, and medical technology.

  19. Supernova-relevant hydrodynamic instability experiment on the Nova laser

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Castor, J.; Rubenchik, A.; Berning, M.

    1996-02-12

    Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. On quite a separate front, the detrimental effect of hydrodynamic instabilities in inertial confinement fusion (ICF) has long been known. Tools from both areas are being tested on a common project. At Lawrence Livermore National Laboratory (LLNL), the Nova Laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using hydrodynamics codes at the Laboratory, and astrophysical codes successfully used to model the hydrodynamics of supernovae. A two-layer package composed of Cu and CH{sub 2} with a single mode sinusoidal 1D perturbation at the interface, shocked by indirect laser drive from the Cu side of the package, produced significant Rayleigh-Taylor (RT) growth in the nonlinear regime. The scale and gross structure of the growth was successfully modeled, by mapping an early-time simulation done with 1D HYADES, a radiation transport code, into 2D CALE, a LLNL hydrodynamics code. The HYADES result was also mapped in 2D into the supernova code PROMETHEUS, which was also able to reproduce the scale and gross structure of the growth.

  20. Supernova-relevant hydrodynamic instability experiment on the Nova laser

    NASA Astrophysics Data System (ADS)

    Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Castor, J.; Rubenchik, A.

    1996-02-01

    Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. On quite a separate front, the detrimental effect of hydrodynamic instabilities in Inertial Confinement Fusion (ICF) has long been known. Tools from both areas are being tested on a common project. At Lawrence Livermore National Laboratory (LLNL), the Nova Laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using hydrodynamics codes at the Laboratory, and astrophysical codes successfully used to model the hydrodynamics of supernovae. A two-layer package composed of Cu and CH2 with a single mode sinusoidal 1D perturbation at the interface, shocked by indirect laser drive from the Cu side of the package, produced significant Rayleigh-Taylor (RT) growth in the nonlinear regime. The scale and gross structure of the growth was successfully modeled, by mapping an early-time simulation done with 1D HYADES, a radiation transport code, into 2D CALE, a LLNL hydrodynamics code. The HYADES result was also mapped in 2D into the supernova code PROMETHEUS, which was also able to reproduce the scale and gross structure of the growth.

  1. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance; however, it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic-model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding. The proposed mechanism thus improves coding performance under various application conditions. PMID:26999741

  2. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.

    PubMed

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance; however, it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic-model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding. The proposed mechanism thus improves coding performance under various application conditions.

  3. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may go beyond the limit of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms.
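As a point of reference for what specialized DNA compressors must beat, a naive baseline simply packs each of the four bases into 2 bits. This sketch is a generic illustration, not the SeqCompress algorithm itself (which layers a statistical model and arithmetic coding on top to get below 2 bits per base):

```python
# 2-bit-per-base packing of an A/C/G/T string, with lossless unpacking.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    out = bytearray()
    buf, nbits = 0, 0
    for ch in seq:
        buf = (buf << 2) | CODE[ch]      # append 2 bits per base
        nbits += 2
        if nbits == 8:
            out.append(buf)
            buf, nbits = 0, 0
    if nbits:
        out.append(buf << (8 - nbits))   # left-align the final partial byte
    return bytes(out), len(seq)          # length needed to undo the padding

def unpack(data, n):
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):       # read bases high bits first
            if len(seq) == n:
                break
            seq.append(BASE[(byte >> shift) & 3])
    return "".join(seq)
```

Any statistical-model-plus-arithmetic-coding scheme earns its keep only to the extent it beats this trivial 4:1 ratio over 8-bit ASCII.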

  4. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  5. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
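The edge-fill-difference decomposition described in these two patent records can be sketched on a toy grayscale grid. This is an illustrative simplification, not the patented implementation: it uses plain Jacobi relaxation where the patent specifies a faster multi-grid Laplace solver, and it skips the separate compression of the edge file and difference array:

```python
# Edge-based decomposition: keep edge pixels, fill the rest by solving
# Laplace's equation, store edges plus the residual "difference array".
# Reconstruction repeats the same deterministic fill and adds the residual.

def laplace_fill(edges, n, iters=500):
    # edges: {(row, col): value}. Non-edge pixels relax toward the average of
    # their in-bounds neighbours (Jacobi iteration on Laplace's equation).
    img = [[edges.get((r, c), 128.0) for c in range(n)] for r in range(n)]
    for _ in range(iters):
        new = [row[:] for row in img]
        for r in range(n):
            for c in range(n):
                if (r, c) in edges:
                    continue
                nb = [img[r2][c2]
                      for r2, c2 in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                      if 0 <= r2 < n and 0 <= c2 < n]
                new[r][c] = sum(nb) / len(nb)
        img = new
    return img

def split(image, edges):
    # Encoder side: filled edge array, then difference array.
    n = len(image)
    filled = laplace_fill(edges, n)
    diff = [[image[r][c] - filled[r][c] for c in range(n)] for r in range(n)]
    return filled, diff

def rebuild(edges, diff):
    # Decoder side: redo the fill from the edge file, add the difference.
    n = len(diff)
    filled = laplace_fill(edges, n)
    return [[filled[r][c] + diff[r][c] for c in range(n)] for r in range(n)]
```

Because the fill is smooth, the difference array is small and compresses well; and because the decoder reruns the identical fill, the scheme is lossless up to rounding.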

  6. Circumstellar Hydrodynamics and Spectral Radiation in ALGOLS

    NASA Astrophysics Data System (ADS)

    Terrell, Dirk Curtis

    1994-01-01

Algols are the remnants of binary systems that have undergone large scale mass transfer. This dissertation presents the results of the coupling of a hydrodynamical model and a radiative model of the flow of gas from the inner Lagrangian point. The hydrodynamical model is a fully Lagrangian, three-dimensional scheme with a novel treatment of viscosity and an implementation of the smoothed particle hydrodynamics method to compute pressure gradients. Viscosity is implemented by allowing particles within a specified interaction length to share momentum. The hydrodynamical model includes a provision for computing the self-gravity of the disk material, although it is not used in the present application to Algols. Hydrogen line profiles and equivalent widths computed with a code by Drake and Ulrich are compared with observations of both short and long period Algols. More sophisticated radiative transfer computations are done with the escape probability code of Ko and Kallman, which includes the spectral lines of thirteen elements. The locations and velocities of the gas particles, and the viscous heating from the hydro program, are supplied to the radiative transfer program, which computes the equilibrium temperature of the gas and generates its emission spectrum. Intrinsic line profiles are assumed to be delta functions and are properly Doppler shifted and summed for gas particles that are not eclipsed by either star. Polarization curves are computed by combining the hydro program with the Wilson-Liou polarization program. Although the results are preliminary, they indicate that polarization observations hold great promise for studying circumstellar matter.

  7. Optimization of radar pulse compression processing

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kim, Woonkyung M.; Lee, Myung-Su

    1997-06-01

We propose an optimal radar pulse compression technique and evaluate its performance in the presence of Doppler shift. Traditional pulse compression increases the signal strength by transmitting a Barker-coded long pulse; the received signal is then processed by an appropriate correlation operation. This Barker-code pulse compression enhances detection sensitivity while maintaining the range resolution of a single chip of the Barker-coded long pulse. Unfortunately, the technique suffers from range sidelobes, which can mask weak targets in the vicinity of larger targets. Our proposed optimal algorithm completely eliminates the sidelobes at the cost of additional processing.
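The correlation processing and the sidelobe problem described above can be seen directly from the Barker-13 autocorrelation: matched filtering compresses the 13-chip pulse into a mainlobe of height 13, but leaves sidelobes of magnitude 1. A minimal sketch (generic matched-filter demonstration, not the paper's optimal sidelobe-free algorithm):

```python
# Matched-filter (correlation) processing of a Barker-13 coded pulse.
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def xcorr(a, b):
    # Full linear cross-correlation: c[lag] = sum_i a[i] * b[i - lag].
    n, m = len(a), len(b)
    return [sum(a[i] * b[i - lag] for i in range(max(0, lag), min(n, m + lag)))
            for lag in range(-(m - 1), n)]

# Autocorrelation = matched-filter response to a single clean echo:
# a mainlobe of 13 at zero lag, with sidelobes of magnitude at most 1.
response = xcorr(BARKER13, BARKER13)
```

The 13:1 mainlobe-to-sidelobe ratio (~22 dB) is the best any binary code of this length achieves; those residual ±1 sidelobes are exactly what the proposed optimal processing eliminates.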

  8. Large scale water entry simulation with smoothed particle hydrodynamics on single- and multi-GPU systems

    NASA Astrophysics Data System (ADS)

    Ji, Zhe; Xu, Fei; Takahashi, Akiyuki; Sun, Yu

    2016-12-01

In this paper, a Weakly Compressible Smoothed Particle Hydrodynamics (WCSPH) framework is presented utilizing the parallel architecture of single- and multi-GPU (Graphics Processing Unit) platforms. The program is developed for water entry simulations, where an efficient potential-based contact force is introduced to tackle the interaction between fluid and solid particles. The single-GPU SPH scheme is implemented with a series of optimizations to achieve high performance. To go beyond the memory limitation of a single GPU, the scheme is further extended to multi-GPU platforms based on an improved 3D domain decomposition and inter-node data communication strategy. A typical benchmark test of wedge entry is investigated at varied dimensions and scales to validate the accuracy and efficiency of the program. The results of the 2D and 3D benchmark tests show close agreement with experiment and better accuracy than other numerical models. The performance of the single-GPU code is assessed by comparison with serial and parallel CPU codes. The improvement of the domain decomposition strategy is verified, and a study of the scalability and efficiency of the multi-GPU code is carried out by simulating tests of varied scales on different numbers of GPUs. Lastly, the single- and multi-GPU codes are compared with existing state-of-the-art SPH parallel frameworks for a comprehensive assessment.

  9. Wavelet and wavelet packet compression of electrocardiograms.

    PubMed

    Hilton, M L

    1997-05-01

Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
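The core idea, transform the signal so most coefficients are small, then spend bits only on the large ones, can be sketched with the simplest wavelet. This uses a multi-level Haar transform and plain thresholding as a hedged stand-in; the paper itself evaluates eight other wavelets and codes the coefficients with EZW, both omitted here (power-of-two signal length assumed):

```python
# Multi-level Haar wavelet transform of a 1D signal (length a power of two),
# with lossy compression by zeroing small-magnitude coefficients.

def haar(x):
    # Recursively split into pairwise averages (coarse) and differences (detail).
    if len(x) == 1:
        return list(x)
    avg = [(x[2*i] + x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    det = [(x[2*i] - x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    return haar(avg) + det          # [coarse levels ... , finest details]

def ihaar(c):
    # Invert: rebuild the coarse signal, then re-interleave with details.
    if len(c) == 1:
        return list(c)
    half = len(c) // 2
    avg, det = ihaar(c[:half]), c[half:]
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def threshold(coeffs, t):
    # Zero small coefficients; a real coder (e.g. EZW) then entropy-codes these.
    return [v if abs(v) >= t else 0.0 for v in coeffs]
```

On smooth stretches of an ECG the detail coefficients are near zero, so aggressive thresholding changes the reconstruction very little, which is why 8:1 compression can preserve the clinically useful features.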

  10. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing: digital holography consists of computational image estimation from data captured by an electronic focal-plane array, while compressive sensing enables accurate reconstruction from prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data, and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector; in particular, single-shot holographic tomography has a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse-object imaging. In diffuse-object imaging, sparsity priors are not valid in the coherent image basis due to speckle, so incoherent image estimation is designed to retain sparsity in the incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector; scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors, and a hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field

  11. Scaling Laws for Hydrodynamically Equivalent Implosions

    NASA Astrophysics Data System (ADS)

    Murakami, Masakatsu

    2001-10-01

    The EPOC (equivalent physics of confinement) scenario for the proof of principle of high-gain inertial confinement fusion is presented, in which the key concept of "hydrodynamically equivalent implosions" plays a crucial role. Scaling laws for the target and confinement parameters are derived by applying Lie group analysis to the PDE (partial differential equation) chain of the hydrodynamic system. It turns out that the conventional scaling law based on the adiabatic approximation differs significantly from one that takes energy-transport effects such as electron heat conduction into account. Confinement plasma parameters of the hot spot, such as the central temperature and the areal mass density at peak compression, are obtained with a self-similar solution for spherical implosions.

  12. Supernova hydrodynamics experiments using the Nova laser*

    NASA Astrophysics Data System (ADS)

    Remington, B. A.; Glendinning, S. G.; Estabrook, K. G.; London, R. A.; Wallace, R. J.; Kane, J.; Arnett, D.; Drake, R. P.; Liang, E.; McCray, R.; Rubenchik, A.

    1997-04-01

    We are developing experiments using the Nova laser [1,2] to investigate two areas of physics relevant to core-collapse supernovae (SN): (1) compressible nonlinear hydrodynamic mixing and (2) radiative shock hydrodynamics. In the former, we are examining the differences between the 2D and 3D evolution of the Rayleigh-Taylor instability, an issue critical to the observables emerging from SN in the first year after the explosion. In the latter, we are investigating the evolution of a colliding plasma system relevant to the ejecta-stellar wind interactions of the early stages of SN remnant formation. The experiments and astrophysical implications will be discussed. *Work performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract number W-7405-ENG-48. [1] J. Kane et al., in press, Astrophys. J. Lett. (March-April, 1997). [2] B.A. Remington et al., in press, Phys. Plasmas (May, 1997).

  13. Supernova hydrodynamics experiments on the Nova laser

    NASA Astrophysics Data System (ADS)

    Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Rubenchik, A.; Drake, R. P.; Fryxell, B. A.; Muller, E.

    1997-12-01

    The critical roles of hydrodynamic instabilities in SN 1987A and in ICF are well known; 2D-3D differences are important in both areas. In a continuing project at Lawrence Livermore National Laboratory (LLNL), the Nova Laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using LLNL hydro codes, and astrophysics codes used to model supernovae. Initial investigations with two-layer planar packages having 2D sinusoidal interface perturbations are described in Ap.J. 478, L75 (1997). Early-time simulations done with the LLNL 1D radiation transport code HYADES are mapped into the 2D LLNL code CALE and into the multi-D supernova code PROMETHEUS. Work is underway on experiments comparing interface instability growth produced by 2D sinusoidal versus 3D cross-hatch and axisymmetric cylindrical perturbations. Results of the simulations will be presented and compared with experiment. Implications for interpreting supernova observations and for supernova modelling will be discussed. * Work performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract number W-7405-ENG-48.

  14. Scaling supernova hydrodynamics to the laboratory

    SciTech Connect

    Kane, J. O.

    1999-06-01

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al., Astrophys. J. 478, L75 (1997). The Nova laser is used to shock two-layer targets, producing Richtmyer-Meshkov (RM) and Rayleigh-Taylor (RT) instabilities at the interfaces between the layers, analogous to instabilities seen at the interfaces of SN 1987A. Because the hydrodynamics in the laser experiments at intermediate times (3-40 ns) and in SN 1987A at intermediate times (5 s-10⁴ s) are well described by the Euler equations, the hydrodynamics scale between the two regimes. The experiments are modeled using the hydrodynamics codes HYADES and CALE, and the supernova code PROMETHEUS, thus serving as a benchmark for PROMETHEUS. Results of the experiments and simulations are presented. Analysis of the spike and bubble velocities in the experiment using potential flow theory and a modified Ott thin-shell theory is presented. A numerical study of 2D vs. 3D differences in instability growth at the O-He and He-H interfaces of SN 1987A, and the design for analogous laser experiments, are presented. We discuss further work to incorporate more features of the SN in the experiments, including spherical geometry, multiple layers, and density gradients. Past and ongoing work in laboratory and laser astrophysics is reviewed, including experimental work on supernova remnants (SNRs). A numerical study of RM instability in SNRs is presented.
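    The claim that Euler-governed flows scale between the laser and SN regimes can be made concrete with the standard similarity result t ~ L·sqrt(ρ/p) (Ryutov-style Euler scaling): two flows with the same geometry and dimensionless profiles evolve identically when compared at the same scaled time. The numbers below are illustrative stand-ins, not the experiment's measured parameters.

```python
import math

def euler_time(L_cm, rho_gcc, p_dyncm2):
    """Characteristic hydrodynamic time t ~ L * sqrt(rho / p). Two flows
    governed by the Euler equations with the same geometry and dimensionless
    profiles evolve identically when compared at the same t / euler_time."""
    return L_cm * math.sqrt(rho_gcc / p_dyncm2)

# Illustrative stand-in parameters (not the paper's measured values):
t_lab = euler_time(L_cm=1e-2, rho_gcc=1.0, p_dyncm2=1e13)   # laser-target scales
t_sn = euler_time(L_cm=1e12, rho_gcc=1e-2, p_dyncm2=1e13)   # SN-envelope scales

# A structure seen 20 ns into the laser shot maps to this SN time (seconds):
t_mapped = 20e-9 * t_sn / t_lab
print(t_mapped)
```

    With these stand-in scales, tens of nanoseconds in the laboratory map to roughly 10⁵ s in the supernova, the same order as the intermediate times quoted in the abstract.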

  15. Scaling supernova hydrodynamics to the laboratory

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Bazan, G.; Drake, R.P.; Fryxell, B.A.; Teyssier, R.

    1999-05-01

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al. [Astrophys. J. 478, L75 (1997)] and B. A. Remington et al. [Phys. Plasmas 4, 1994 (1997)]. The Nova laser is used to generate a 10-15 Mbar shock at the interface of a two-layer planar target, which triggers perturbation growth due to the Richtmyer-Meshkov instability, and to the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10³ s. The scaling of hydrodynamics on microscopic laser scales to the SN-size scales is presented. The experiment is modeled using the hydrodynamics codes HYADES [J. T. Larson and S. M. Lane, J. Quant. Spect. Rad. Trans. 51, 179 (1994)] and CALE [R. T. Barton, Numerical Astrophysics (Jones and Bartlett, Boston, 1985), pp. 482-497], and the supernova code PROMETHEUS [P. R. Woodward and P. Collela, J. Comp. Phys. 54, 115 (1984)]. Results of the experiments and simulations are presented. Analysis of the spike-and-bubble velocities using potential flow theory and Ott thin-shell theory is presented, as well as a study of 2D versus 3D differences in perturbation growth at the He-H interface of SN 1987A.

  16. Scaling supernova hydrodynamics to the laboratory

    NASA Astrophysics Data System (ADS)

    Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Bazan, G.; Drake, R. P.; Fryxell, B. A.; Teyssier, R.; Moore, K.

    1999-05-01

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al. [Astrophys. J. 478, L75 (1997) and B. A. Remington et al., Phys. Plasmas 4, 1994 (1997)]. The Nova laser is used to generate a 10-15 Mbar shock at the interface of a two-layer planar target, which triggers perturbation growth due to the Richtmyer-Meshkov instability, and to the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10³ s. The scaling of hydrodynamics on microscopic laser scales to the SN-size scales is presented. The experiment is modeled using the hydrodynamics codes HYADES [J. T. Larson and S. M. Lane, J. Quant. Spect. Rad. Trans. 51, 179 (1994)] and CALE [R. T. Barton, Numerical Astrophysics (Jones and Bartlett, Boston, 1985), pp. 482-497], and the supernova code PROMETHEUS [P. R. Woodward and P. Collela, J. Comp. Phys. 54, 115 (1984)]. Results of the experiments and simulations are presented. Analysis of the spike-and-bubble velocities using potential flow theory and Ott thin-shell theory is presented, as well as a study of 2D versus 3D differences in perturbation growth at the He-H interface of SN 1987A.

  17. Combined effects of laser and non-thermal electron beams on hydrodynamics and shock formation in the Shock Ignition scheme

    NASA Astrophysics Data System (ADS)

    Nicolai, Ph.; Feugeas, J. L.; Touati, M.; Breil, J.; Dubroca, B.; Nguyen-Buy, T.; Ribeyre, X.; Tikhonchuk, V.; Gus'kov, S.

    2014-10-01

    An issue to be addressed in Inertial Confinement Fusion (ICF) is the detailed description of the kinetic transport of relativistic or non-thermal electrons generated by laser within the time and space scales of the imploded target hydrodynamics. We have developed at CELIA the model M1, a fast and reduced kinetic model for relativistic electron transport. The latter has been implemented into the 2D radiation hydrodynamic code CHIC. In the framework of the Shock Ignition (SI) scheme, it has been shown in simplified conditions that the energy transferred by the non-thermal electrons from the corona to the compressed shell of an ICF target could be an important mechanism for the creation of ablation pressure. Nevertheless, in realistic configurations, taking the density profile and the electron energy spectrum into account, the target has to be carefully designed to avoid deleterious effects on compression efficiency. In addition, the electron energy deposition may modify the laser-driven shock formation and its propagation through the target. The non-thermal electron effects on the shock propagation will be analyzed in a realistic configuration.

  18. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
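    The epsilon-pseudoeigenvalue criterion mentioned here can be sketched directly: z is an ε-pseudoeigenvalue of A when the smallest singular value of (zI − A) is at most ε. The two toy matrices below (a non-normal operator and a normal one with the same spectrum) are illustrative stand-ins, not the flow operators of the paper.

```python
import numpy as np

def smin(A, z):
    """sigma_min(z*I - A): z is an eps-pseudoeigenvalue of A whenever
    this value is <= eps, the standard pseudospectrum criterion."""
    return np.linalg.svd(z * np.eye(A.shape[0]) - A, compute_uv=False)[-1]

n = 30
# Toy stand-ins: a non-normal operator and a normal one, both with
# every eigenvalue at -1 (think of a discretized stability operator).
A_nonnormal = -np.eye(n) + 5.0 * np.diag(np.ones(n - 1), 1)
A_normal = -np.eye(n)

z = 0.5  # a point at distance 1.5 from the spectrum
print(smin(A_nonnormal, z), smin(A_normal, z))
# Non-normal case: many orders of magnitude smaller, so a tiny
# perturbation can move an eigenvalue all the way out to z.
```

    For the normal matrix, sensitivity equals the distance to the spectrum; for the non-normal one, points far from any eigenvalue still lie inside the ε-pseudospectrum for tiny ε, which is exactly the sensitivity phenomenon the abstract describes.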

  19. Resurgence in extended hydrodynamics

    NASA Astrophysics Data System (ADS)

    Aniceto, Inês; Spaliński, Michał

    2016-04-01

    It has recently been understood that the hydrodynamic series generated by the Müller-Israel-Stewart theory is divergent and that this large-order behavior is consistent with the theory of resurgence. Furthermore, it was observed that the physical origin of this is the presence of a purely damped nonhydrodynamic mode. It is very interesting to ask whether this picture persists in cases where the spectrum of nonhydrodynamic modes is richer. We take the first step in this direction by considering the simplest hydrodynamic theory which, instead of the purely damped mode, contains a pair of nonhydrodynamic modes of complex conjugate frequencies. This mimics the pattern of black brane quasinormal modes which appear on the gravity side of the AdS/CFT description of N =4 supersymmetric Yang-Mills plasma. We find that the resulting hydrodynamic series is divergent in a way consistent with resurgence and precisely encodes information about the nonhydrodynamic modes of the theory.

  20. Observation of Compressible Plasma Mix in Cylindrically Convergent Implosions

    NASA Astrophysics Data System (ADS)

    Barnes, Cris W.; Batha, Steven H.; Lanier, Nicholas E.; Magelssen, Glenn R.; Tubbs, David L.; Dunne, A. M.; Rothman, Steven R.; Youngs, David L.

    2000-10-01

    An understanding of hydrodynamic mix in convergent geometry will be of key importance in the development of a robust ignition/burn capability on NIF, LMJ and future pulsed power machines. We have made use of the OMEGA laser facility at the University of Rochester to investigate directly the mix evolution in a convergent geometry, compressible plasma regime. The experiments comprise a plastic cylindrical shell imploded by direct laser irradiation. The cylindrical shell surrounds a lower density plastic foam which provides sufficient back pressure to allow the implosion to stagnate at a sufficiently high radius to permit quantitative radiographic diagnosis of the interface evolution near turnaround. The susceptibility to mix of the shell-foam interface is varied by choosing different density material for the inner shell surface (thus varying the Atwood number). This allows the study of shock-induced Richtmyer-Meshkov growth during the coasting phase, and Rayleigh-Taylor growth during the stagnation phase. The experimental results will be described along with calculational predictions using various radiation hydrodynamics codes and turbulent mix models.

  1. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also recently been applied to electron tomography [6], and to reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental

  2. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
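    A minimal sketch of the scheme using the Hamming(7,4) parity-check matrix: a sparse 7-bit source block is treated as an error pattern and compressed to its 3-bit syndrome, and decoding returns the coset leader. Recovery is exact here only for blocks of weight at most one; the choice of code is illustrative, not from the paper.

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column j is the binary
# representation of j+1, so a single-bit pattern's syndrome spells out
# the position of its 1.
H = np.array([[int(b) for b in f"{c:03b}"] for c in range(1, 8)]).T  # shape (3, 7)

def compress(block7):
    """Syndrome source coding: treat the 7-bit source block as an error
    pattern; its 3-bit syndrome H x (mod 2) is the compressed data."""
    return H @ block7 % 2

def decompress(syndrome3):
    """Return the minimum-weight (coset leader) source pattern, exact for
    blocks of weight <= 1, i.e. for a sufficiently sparse binary source."""
    pos = int("".join(str(int(b)) for b in syndrome3), 2)
    x = np.zeros(7, dtype=int)
    if pos:
        x[pos - 1] = 1
    return x

block = np.array([0, 0, 0, 0, 1, 0, 0])
print(compress(block), decompress(compress(block)))  # 3 bits in, block back out
```

    Seven source digits become three compressed digits, matching the abstract's picture of the syndrome as the compressed data for a low-entropy (sparse) source.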

  3. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  4. Simple Waves in Ideal Radiation Hydrodynamics

    SciTech Connect

    Johnson, B M

    2008-09-03

    In the dynamic diffusion limit of radiation hydrodynamics, advection dominates diffusion; the latter primarily affects small scales and has negligible impact on the large scale flow. The radiation can thus be accurately regarded as an ideal fluid, i.e., radiative diffusion can be neglected along with other forms of dissipation. This viewpoint is applied here to an analysis of simple waves in an ideal radiating fluid. It is shown that much of the hydrodynamic analysis carries over by simply replacing the material sound speed, pressure and index with the values appropriate for a radiating fluid. A complete analysis is performed for a centered rarefaction wave, and expressions are provided for the Riemann invariants and characteristic curves of the one-dimensional system of equations. The analytical solution is checked for consistency against a finite difference numerical integration, and the validity of neglecting the diffusion operator is demonstrated. An interesting physical result is that for a material component with a large number of internal degrees of freedom and an internal energy greater than that of the radiation, the sound speed increases as the fluid is rarefied. These solutions are an excellent test for radiation hydrodynamic codes operating in the dynamic diffusion regime. The general approach may be useful in the development of Godunov numerical schemes for radiation hydrodynamics.
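    The Riemann-invariant structure the abstract refers to can be sketched for an ordinary 1-D ideal gas; per the abstract, the radiating-fluid case follows by swapping in the radiative sound speed, pressure, and index. The value γ = 5/3 below is an illustrative choice.

```python
import math

def riemann_invariants(u, rho, p, gamma=5.0 / 3.0):
    """J± = u ± 2c/(gamma - 1), constant along the characteristics
    dx/dt = u ± c for smooth 1-D ideal flow, with c = sqrt(gamma*p/rho)."""
    c = math.sqrt(gamma * p / rho)
    return u + 2 * c / (gamma - 1), u - 2 * c / (gamma - 1)

# Centered rarefaction into gas at rest: J+ is uniform across the fan, so
# fixing the undisturbed state ties u to c everywhere inside it.
jp0, jm0 = riemann_invariants(u=0.0, rho=1.0, p=1.0)
# Inside the fan u = jp0 - 2*c/(gamma - 1): rarefied gas (smaller c) moves faster.
print(jp0, jm0)
```

    Holding J+ fixed across a centered rarefaction is exactly the construction used for the analytical solution the abstract proposes as a code test.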

  5. Hydrodynamic simulations of the core helium flash

    NASA Astrophysics Data System (ADS)

    Mocák, Miroslav; Müller, Ewald; Weiss, Achim; Kifonidis, Konstantinos

    2008-10-01

    We describe and discuss hydrodynamic simulations of the core helium flash, using an initial model of a 1.25 M⊙ star with a metallicity of 0.02 near the peak of the flash. Past research concerned with the dynamics of the core helium flash is inconclusive. Its results range from a confirmation of the standard picture, in which the star remains in hydrostatic equilibrium during the flash (Deupree 1996), to a disruption or significant mass loss of the star (Edwards 1969; Cole & Deupree 1980). However, the most recent multidimensional hydrodynamic study (Dearborn et al. 2006) suggests a quiescent behavior of the core helium flash and seems to rule out an explosive scenario. Here we present partial results of a new comprehensive study of the core helium flash, which seem to confirm this qualitative behavior and give better insight into the operation of the convection zone powered by helium burning during the flash. The hydrodynamic evolution is followed on a computational grid in spherical coordinates using our new version of the multi-dimensional hydrodynamic code HERAKLES, which is based on a direct Eulerian implementation of the piecewise parabolic method.

  6. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
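    The combinatorial coding idea can be illustrated with the classic enumerative construction: an n-bit string with k ones is mapped to its rank in the lexicographic order of all C(n, k) such strings, so the payload costs about log2 C(n, k) bits. This is a sketch of the general idea, not C4's actual bitstream layout.

```python
from math import comb

def enum_encode(bits):
    """Map an n-bit string with k ones to (k, rank): rank is the string's
    index among all C(n, k) strings of that weight in lexicographic order."""
    n, rank, k_rem = len(bits), 0, sum(bits)
    for i, b in enumerate(bits):
        if b:
            rank += comb(n - i - 1, k_rem)  # strings with a 0 in this slot come first
            k_rem -= 1
    return sum(bits), rank

def enum_decode(n, k, rank):
    """Invert enum_encode by walking the positions and comparing the rank
    against the count of strings that place a 0 at each position."""
    bits, k_rem = [], k
    for i in range(n):
        c = comb(n - i - 1, k_rem) if k_rem else 0
        if k_rem and rank >= c:
            rank -= c
            bits.append(1)
            k_rem -= 1
        else:
            bits.append(0)
    return bits

k, r = enum_encode([0, 1, 0, 0, 1, 0, 0])
print(k, r, enum_decode(7, k, r))
```

    The rank fits in ceil(log2 C(n, k)) bits, which is optimal for a memoryless source once the weight is known; this is the sense in which combinatorial coding matches arithmetic coding in efficiency.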

  7. Compressing subbanded image data with Lempel-Ziv-based coders

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
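    The pipeline described (transform, quantize, then LZ-style coding) can be sketched in a few lines; here zlib stands in for the LZ coder and absorbs the long zero runs that an explicit run-length stage would capture. The line data, quantization step, and sizes are invented for illustration.

```python
import zlib

import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform (unnormalized, length a power of 2);
    applying it twice returns len(x) times the input."""
    x = np.asarray(x, dtype=np.int64).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

rng = np.random.default_rng(1)
line = (128 + 10 * rng.standard_normal(256)).astype(np.int64)  # stand-in image line
coeffs = wht(line)
q = (coeffs // 64) * 64        # coarse quantization -> long runs of equal values
compressed = zlib.compress(q.tobytes())
print(len(compressed), len(line.tobytes()))  # far fewer bytes than the raw line
```

    Quantization trades reconstruction quality for compression exactly as in the abstract; skipping it (keeping `coeffs` intact) keeps the scheme lossless at a lower ratio.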

  8. Modeling multiphase flow using fluctuating hydrodynamics.

    PubMed

    Chaudhri, Anuj; Bell, John B; Garcia, Alejandro L; Donev, Aleksandar

    2014-09-01

    Fluctuating hydrodynamics provides a model for fluids at mesoscopic scales where thermal fluctuations can have a significant impact on the behavior of the system. Here we investigate a model for fluctuating hydrodynamics of a single-component, multiphase flow in the neighborhood of the critical point. The system is modeled using a compressible flow formulation with a van der Waals equation of state, incorporating a Korteweg stress term to treat interfacial tension. We present a numerical algorithm for modeling this system based on an extension of algorithms developed for fluctuating hydrodynamics for ideal fluids. The scheme is validated by comparison of measured structure factors and capillary wave spectra with equilibrium theory. We also present several nonequilibrium examples to illustrate the capability of the algorithm to model multiphase fluid phenomena in a neighborhood of the critical point. These examples include a study of the impact of fluctuations on the spinodal decomposition following a rapid quench, as well as the piston effect in a cavity with supercooled walls. The conclusion in both cases is that thermal fluctuations affect the size and growth of the domains in off-critical quenches.
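    The van der Waals equation of state at the heart of this model is simple to write down; the sketch below uses reduced units chosen so the critical point sits at ρ_c = T_c = p_c = 1 (a conventional normalization, not necessarily the paper's).

```python
def vdw_pressure(rho, T, a=3.0, b=1.0 / 3.0, R=8.0 / 3.0):
    """Van der Waals equation of state in density form,
    p = rho*R*T/(1 - b*rho) - a*rho**2; with these reduced-unit
    constants the critical point sits at rho_c = T_c = p_c = 1."""
    return rho * R * T / (1.0 - b * rho) - a * rho ** 2

print(vdw_pressure(1.0, 1.0))  # 1.0 at the critical point
# Below T_c, p decreases with rho near rho_c: the mechanically unstable
# (spinodal) region that drives decomposition after a rapid quench.
print(vdw_pressure(1.2, 0.9) - vdw_pressure(1.0, 0.9))
```

    The negative slope of p(ρ) below the critical temperature is what makes the quench examples in the abstract phase-separate.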

  9. Particle Mesh Hydrodynamics for Astrophysics Simulations

    NASA Astrophysics Data System (ADS)

    Chatelain, Philippe; Cottet, Georges-Henri; Koumoutsakos, Petros

    We present a particle method for the simulation of three dimensional compressible hydrodynamics based on a hybrid Particle-Mesh discretization of the governing equations. The method is rooted on the regularization of particle locations as in remeshed Smoothed Particle Hydrodynamics (rSPH). The rSPH method was recently introduced to remedy problems associated with the distortion of computational elements in SPH, by periodically re-initializing the particle positions and by using high order interpolation kernels. In the PMH formulation, the particles solely handle the convective part of the compressible Euler equations. The particle quantities are then interpolated onto a mesh, where the pressure terms are computed. PMH, like SPH, is free of the convection CFL condition while at the same time it is more efficient as derivatives are computed on a mesh rather than particle-particle interactions. PMH does not detract from the adaptive character of SPH and allows for control of its accuracy. We present simulations of a benchmark astrophysics problem demonstrating the capabilities of this approach.
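    The particle-to-mesh interpolation step ("particle quantities are then interpolated onto a mesh") can be sketched in 1-D with cloud-in-cell deposition; the choice of CIC weights is an assumption for illustration, since the paper's interpolation kernels are higher order.

```python
import numpy as np

def deposit_cic(pos, mass, n_cells, box=1.0):
    """Cloud-in-cell deposition on a periodic 1-D mesh: each particle
    shares its mass linearly between the two nearest cell centers.
    This is the particle-to-mesh half of a particle-mesh step."""
    dx = box / n_cells
    grid = np.zeros(n_cells)
    for x, m in zip(pos, mass):
        s = x / dx - 0.5               # position in cell-center coordinates
        i = int(np.floor(s))
        w = s - i                      # linear weight toward the right node
        grid[i % n_cells] += m * (1.0 - w)
        grid[(i + 1) % n_cells] += m * w
    return grid / dx                   # convert mass per cell to density

rho = deposit_cic(pos=[0.5], mass=[1.0], n_cells=8)
print(rho)  # the unit mass is split between the two cells flanking x = 0.5
```

    Deposition conserves mass exactly; the mesh then carries the pressure computation, which is what lets PMH replace particle-particle interactions with cheaper grid derivatives.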

  10. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  11. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
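    A hypothetical sketch of the activity-guided allocation described above: segments with larger sample-to-sample variation receive more of the per-line bit budget, and quiet segments are the first candidates for truncation. The segment length, budget, and clamps are invented for illustration, not the actual coder's parameters.

```python
def allocate_bits(line, seg_len=8, bits_per_line=64, min_bits=2, max_bits=8):
    """Hypothetical sketch: estimate each segment's activity as the sum of
    absolute sample-to-sample differences, then hand out the per-line bit
    budget in proportion, clamped so quiet segments are truncated first."""
    segs = [line[i:i + seg_len] for i in range(0, len(line), seg_len)]
    act = [sum(abs(s[j + 1] - s[j]) for j in range(len(s) - 1)) + 1 for s in segs]
    total = sum(act)
    return [max(min_bits, min(max_bits, round(bits_per_line * a / total)))
            for a in act]

line = [10] * 32 + [0, 255] * 16   # a flat half followed by a busy half
print(allocate_bits(line))  # [2, 2, 2, 2, 8, 8, 8, 8]
```

    The flat segments are coded with the minimum word length while the busy segments take the maximum, mirroring the abstract's per-segment bit assignment under a fixed line budget N.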

  12. Hydrodynamically Lubricated Rotary Shaft Having Twist Resistant Geometry

    DOEpatents

    Dietle, Lannie; Gobeli, Jeffrey D.

    1993-07-27

    A hydrodynamically lubricated squeeze-packing-type rotary shaft seal with a cross-sectional geometry suitable for pressurized lubricant retention is provided which, in the preferred embodiment, incorporates a protuberant static sealing interface that, compared to the prior art, dramatically improves the exclusionary action of the dynamic sealing interface in low-pressure and unpressurized applications by achieving symmetrical deformation of the seal at the static and dynamic sealing interfaces. In abrasive environments, the improved exclusionary action results in a dramatic reduction of seal and shaft wear, compared to the prior art, and provides a significant increase in seal life. The invention also increases seal life by making higher levels of initial compression possible, compared to the prior art, without compromising hydrodynamic lubrication; this added compression makes the seal more tolerant of compression set, abrasive wear, mechanical misalignment, dynamic runout, and manufacturing tolerances, and also makes hydrodynamic seals with smaller cross-sections more practical. In alternate embodiments, the benefits enumerated above are achieved by cooperative configurations of the seal and the gland which achieve symmetrical deformation of the seal at the static and dynamic sealing interfaces. The seal may also be configured such that predetermined radial compression deforms it to a desired operative configuration, even though symmetrical deformation is lacking.

  13. Computational brittle fracture using smooth particle hydrodynamics

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-10-01

    We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has in simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPBM. We describe a new brittle fracture model that we have implemented into SPBM. To illustrate the code's current capability, we have simulated a number of experiments; we discuss three of these simulations in this paper. The first experiment consists of a brittle steel sphere impacting a plate; the experimental sphere fragment patterns are compared to the calculations. The second experiment is a steel flyer plate in which the recovered steel target crack patterns are compared to the calculated crack patterns. We also briefly describe a simulation of a tungsten rod impacting a heavily confined alumina target, which has recently been reported on in detail.
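    The core of any SPH code, including fracture-oriented ones, is the kernel density sum; the generic 1-D sketch below uses the standard cubic-spline kernel and is not specific to SPBM's (unpublished) kernels or fracture model.

```python
import numpy as np

def w_cubic(r, h):
    """Standard cubic-spline SPH kernel in 1-D (normalization 2/(3h), support 2h)."""
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return 2.0 / (3.0 * h) * w

def sph_density(x, m, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h): density follows directly
    from the particle distribution, with no mesh required."""
    r = np.abs(x[:, None] - x[None, :])
    return (m[None, :] * w_cubic(r, h)).sum(axis=1)

x = np.linspace(0.0, 1.0, 51)      # regular particle lattice, spacing 0.02
m = np.full(51, 1.0 / 50.0)        # unit total mass on the unit interval
rho = sph_density(x, m, h=0.04)    # smoothing length of two spacings
print(rho[25])  # ~1.0 in the interior (edges dip without ghost particles)
```

    Because particles carry the material state, fragments and free surfaces emerge naturally as the particle set separates, which is the advantage for fracture simulation the abstract alludes to.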

  14. SPHGR: Smoothed-Particle Hydrodynamics Galaxy Reduction

    NASA Astrophysics Data System (ADS)

    Thompson, Robert

    2015-02-01

    SPHGR (Smoothed-Particle Hydrodynamics Galaxy Reduction) is a python based open-source framework for analyzing smoothed-particle hydrodynamic simulations. Its basic form can run a baryonic group finder to identify galaxies and a halo finder to identify dark matter halos; it can also assign said galaxies to their respective halos, calculate halo & galaxy global properties, and iterate through previous time steps to identify the most-massive progenitors of each halo and galaxy. Data about each individual halo and galaxy is collated and easy to access. SPHGR supports a wide range of simulations types including N-body, full cosmological volumes, and zoom-in runs. Support for multiple SPH code outputs is provided by pyGadgetReader (ascl:1411.001), mainly Gadget (ascl:0003.001) and TIPSY (ascl:1111.015).

  15. Zombie Vortex Instability. I. A Purely Hydrodynamic Instability to Resurrect the Dead Zones of Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Marcus, Philip S.; Pei, Suyang; Jiang, Chung-Hsiang; Barranco, Joseph A.; Hassanzadeh, Pedram; Lecoanet, Daniel

    2015-07-01

    There is considerable interest in hydrodynamic instabilities in dead zones of protoplanetary disks as a mechanism for driving angular momentum transport and as a source of particle-trapping vortices to mix chondrules and incubate planetesimal formation. We present simulations with a pseudo-spectral anelastic code and with the compressible code Athena, showing that stably stratified flows in a shearing, rotating box are violently unstable and produce space-filling, sustained turbulence dominated by large vortices with Rossby numbers of order ˜0.2-0.3. This Zombie Vortex Instability (ZVI) is observed in both codes and is triggered by Kolmogorov turbulence with Mach numbers less than ˜0.01. It is a common view that if a given constant density flow is stable, then stable vertical stratification should make the flow even more stable. Yet, we show that sufficient vertical stratification can be unstable to ZVI. ZVI is robust and requires no special tuning of boundary conditions, or initial radial entropy or vortensity gradients (though we have studied ZVI only in the limit of infinite cooling time). The resolution of this paradox is that stable stratification allows for a new avenue to instability: baroclinic critical layers. ZVI has not been seen in previous studies of flows in rotating, shearing boxes because those calculations frequently lacked vertical density stratification and/or sufficient numerical resolution. Although we do not expect appreciable angular momentum transport from ZVI in the small domains in this study, we hypothesize that ZVI in larger domains with compressible equations may lead to angular transport via spiral density waves.

  16. Filtering, Coding, and Compression with Malvar Wavelets

    DTIC Science & Technology

    1993-12-01

    The vocal tract is made up of the lips, mouth, and tongue. These cannot change nearly as quickly as the vocal cords can; therefore the vocal tract...fluctuates slowly in the frequency domain and has a spike in the low quefrency region. These spikes are called formant peaks and have a number of uses in...the formant corresponding to the pitch (2). The cepstrum is used to find the formants of the pitch so that this information can be removed from the
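The cepstral idea in the excerpt can be sketched directly: the real cepstrum is the inverse transform of the log-magnitude spectrum, so a periodic (pitched) signal produces a peak at the quefrency equal to its pitch period, away from the low-quefrency vocal-tract region. This is a generic toy demonstration (naive O(N²) DFT, made-up impulse train and filter), not the report's Malvar-wavelet method.

```python
# Real-cepstrum pitch sketch: cepstrum = IDFT(log |DFT(x)|).
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def real_cepstrum(x):
    n = len(x)
    log_mag = [math.log(abs(v) + 1e-12) for v in dft(x)]
    # inverse DFT of the (real, symmetric) log-magnitude spectrum
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# toy "voiced" signal: an impulse train (pitch period 32 samples) smoothed by
# a short FIR filter standing in for the vocal tract
n, period = 256, 32
pulses = [1.0 if t % period == 0 else 0.0 for t in range(n)]
signal = [0.5 * pulses[t] + 0.3 * pulses[t - 1] + 0.2 * pulses[t - 2]
          for t in range(n)]
cep = real_cepstrum(signal)
# skip the low-quefrency (vocal tract) region and pick the pitch peak
q_pitch = max(range(8, n // 2), key=lambda t: cep[t])
```

The pitch peak lands at a multiple of the 32-sample period, which is exactly the separation of source and filter the excerpt describes.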

  17. The moving mesh code SHADOWFAX

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; De Rijcke, S.

    2016-07-01

    We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  18. Combining Hydrodynamic and Evolution Calculations of Rotating Stars

    NASA Astrophysics Data System (ADS)

    Deupree, R. G.

    1996-12-01

    Rotation has two primary effects on stellar evolutionary models: the direct influence on the model structure produced by the rotational terms, and the indirect influence produced by rotational instabilities which redistribute angular momentum and composition inside the model. Using a two dimensional, fully implicit finite difference code, I can follow events on both evolutionary and hydrodynamic timescales, thus allowing the simulation of both effects. However, there are several issues concerning how to integrate the results from hydrodynamic runs into evolutionary runs that must be examined. The schemes I have devised for the integration of the hydrodynamic simulations into evolutionary calculations are outlined, and the positive and negative features summarized. The practical differences among the various schemes are small, and a successful marriage between hydrodynamic and evolution calculations is possible.

  19. Saliency-aware video compression.

    PubMed

    Hadizadeh, Hadi; Bajić, Ivan V

    2014-01-01

    In region-of-interest (ROI)-based video coding, ROI parts of the frame are encoded with higher quality than non-ROI parts. At low bit rates, such encoding may produce attention-grabbing coding artifacts, which may draw the viewer's attention away from the ROI, thereby degrading visual quality. In this paper, we present a saliency-aware video compression method for ROI-based video coding. The proposed method aims at reducing salient coding artifacts in non-ROI parts of the frame in order to keep the user's attention on the ROI. Further, the method allows saliency to increase in high-quality parts of the frame and to decrease in non-ROI parts. Experimental results indicate that the proposed method improves the visual quality of encoded video relative to conventional rate-distortion-optimized video coding, as well as two state-of-the-art perceptual video coding methods.
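The core ROI mechanism can be sketched in a few lines: quantize blocks finely where saliency is high and coarsely elsewhere, so distortion concentrates away from the ROI. The saliency map, block sizes, and step sizes below are illustrative stand-ins, not the paper's actual method.

```python
# ROI-coding sketch: saliency-driven choice of quantizer step per block.

def quantize_block(block, step):
    """Uniform scalar quantization and reconstruction."""
    return [round(v / step) * step for v in block]

def block_error(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# one "frame" of four 4-sample blocks with a per-block saliency in [0, 1]
blocks = [[10, 52, 33, 91], [12, 47, 88, 5], [60, 61, 62, 63], [7, 99, 14, 70]]
saliency = [0.9, 0.8, 0.1, 0.2]          # first two blocks are the ROI

coded = []
for block, s in zip(blocks, saliency):
    step = 4 if s > 0.5 else 16          # finer quantizer inside the ROI
    coded.append(quantize_block(block, step))

roi_err = sum(block_error(b, c)
              for b, c, s in zip(blocks, coded, saliency) if s > 0.5)
bg_err = sum(block_error(b, c)
             for b, c, s in zip(blocks, coded, saliency) if s <= 0.5)
```

The ROI blocks end up with much smaller reconstruction error than the background blocks, at the cost of spending more bits there.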

  20. Extended x-ray absorption fine structure measurements of quasi-isentropically compressed vanadium targets on the OMEGA laser

    SciTech Connect

    Yaakobi, B.; Boehly, T. R.; Sangster, T. C.; Meyerhofer, D. D.; Remington, B. A.; Allen, P. G.; Pollaine, S. M.; Lorenzana, H. E.; Lorenz, K. T.; Hawreliak, J. A.

    2008-06-15

    The use of in situ extended x-ray absorption fine structure (EXAFS) for characterizing nanosecond laser-shocked vanadium, titanium, and iron has recently been demonstrated. These measurements are extended to laser-driven, quasi-isentropic compression experiments (ICE). The radiation source (backlighter) for EXAFS in all of these experiments is obtained by imploding a spherical target on the OMEGA laser [T. R. Boehly et al., Rev. Sci. Instrum. 66, 508 (1995)]. Isentropic compression (where the entropy is kept constant) makes it possible to reach high compressions at relatively low temperatures. The absorption spectra are used to determine the temperature and compression in a vanadium sample quasi-isentropically compressed to pressures of up to ~0.75 Mbar. The ability to measure the temperature and compression directly is unique to EXAFS. The drive pressure is calibrated by substituting aluminum for the vanadium and interferometrically measuring the velocity of the back target surface by the velocity interferometer system for any reflector (VISAR). The experimental results obtained by EXAFS and VISAR agree with each other and with the simulations of a hydrodynamic code. The role of a shield to protect the sample from impact heating is studied. It is shown that the shield produces an initial weak shock that is followed by a quasi-isentropic compression at a relatively low temperature. The role of radiation heating from the imploding target as well as from the laser-absorption region is studied. The results show that in laser-driven ICE, as compared with laser-driven shocks, comparable compressions can be achieved at lower temperatures. The EXAFS results show important details not seen in the VISAR results.

  1. Hydrodynamics of Turning Flocks.

    PubMed

    Yang, Xingbo; Marchetti, M Cristina

    2015-12-18

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well-polarized flocks. The continuum equations controlled by only two dimensionless parameters, orientational inertia and alignment strength, are derived by coarse-graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields anisotropic spin waves that mediate the propagation of turning information throughout the flock. The coupling of the spin-current density to the local vorticity field through a nonlinear friction gives rise to a hydrodynamic mode with angle-dependent propagation speed at long wavelengths. This mode becomes unstable as a result of the growth of bend and splay deformations augmented by the spin wave, signaling the transition to complex spatiotemporal patterns of continuously turning and swirling flocks.

  2. Hydrodynamics of Turning Flocks

    NASA Astrophysics Data System (ADS)

    Yang, Xingbo; Marchetti, M. Cristina

    2015-12-01

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well-polarized flocks. The continuum equations controlled by only two dimensionless parameters, orientational inertia and alignment strength, are derived by coarse-graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields anisotropic spin waves that mediate the propagation of turning information throughout the flock. The coupling of the spin-current density to the local vorticity field through a nonlinear friction gives rise to a hydrodynamic mode with angle-dependent propagation speed at long wavelengths. This mode becomes unstable as a result of the growth of bend and splay deformations augmented by the spin wave, signaling the transition to complex spatiotemporal patterns of continuously turning and swirling flocks.

  3. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions, is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL, and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other approaches require seeding or similar mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  4. The hydrodynamics of galaxy formation on Kiloparsec scales

    NASA Technical Reports Server (NTRS)

    Norman, Michael L.; Anninos, Wenbo Yuan; Centrella, Joan

    1993-01-01

    Two-dimensional numerical simulations of Zeldovich pancake fragmentation in a dark matter dominated universe were carried out to study the hydrodynamical and gravitational effects on the formation of structures such as protogalaxies. Preliminary results were given in Yuan, Centrella, and Norman (1991). Here we report a more exhaustive study to determine the sensitivity of protogalaxies to input parameters. The numerical code we used for the simulations combines the hydrodynamical code ZEUS-2D (Stone and Norman, 1992), modified to include the expansion of the universe and radiative cooling of the gas, with a particle-mesh code which follows the motion of dark matter particles. The resulting hybrid code is able to handle highly nonuniform grids, which we utilized to obtain high resolution (much less than 1 kpc) in the dense region of the pancake.
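The particle-mesh half of such a hybrid code rests on a mass-assignment step: each collisionless particle deposits its mass onto the grid before the Poisson solve. A minimal cloud-in-cell (CIC) deposit in 1D, with toy values (not the paper's actual grid or particles), looks like:

```python
# Cloud-in-cell (CIC) deposit: each particle shares its mass between the two
# nearest cell centres of a periodic 1D grid, weighted linearly by distance.
import math

def cic_deposit(positions, masses, n_cells, box):
    """Deposit particle masses onto a periodic, cell-centred 1D density grid."""
    dx = box / n_cells
    rho = [0.0] * n_cells
    for x, m in zip(positions, masses):
        s = x / dx - 0.5                 # particle position in cell units
        i = math.floor(s)
        frac = s - i                     # weight for the right-hand cell
        rho[i % n_cells] += m * (1.0 - frac) / dx
        rho[(i + 1) % n_cells] += m * frac / dx
    return rho

box, n_cells = 1.0, 8
positions = [0.10, 0.33, 0.50, 0.90]    # toy "dark matter" particles
masses = [1.0, 2.0, 1.0, 0.5]
rho = cic_deposit(positions, masses, n_cells, box)
total_mass = sum(rho) * (box / n_cells)  # integrating rho recovers the mass
```

CIC conserves mass exactly, which is why integrating the deposited density returns the summed particle mass.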

  5. Hydrodynamics of fossil fishes

    PubMed Central

    Fletcher, Thomas; Altringham, John; Peakall, Jeffrey; Wignall, Paul; Dorrell, Robert

    2014-01-01

    From their earliest origins, fishes have developed a suite of adaptations for locomotion in water, which determine performance and ultimately fitness. Even without data from behaviour, soft tissue and extant relatives, it is possible to infer a wealth of palaeobiological and palaeoecological information. As in extant species, aspects of gross morphology such as streamlining, fin position and tail type are optimized even in the earliest fishes, indicating similar life strategies have been present throughout their evolutionary history. As hydrodynamical studies become more sophisticated, increasingly complex fluid movement can be modelled, including vortex formation and boundary layer control. Drag-reducing riblets ornamenting the scales of fast-moving sharks have been subjected to particularly intense research, but this has not been extended to extinct forms. Riblets are a convergent adaptation seen in many Palaeozoic fishes, and probably served a similar hydrodynamic purpose. Conversely, structures which appear to increase skin friction may act as turbulisors, reducing overall drag while serving a protective function. Here, we examine the diverse adaptations that contribute to drag reduction in modern fishes and review the few attempts to elucidate the hydrodynamics of extinct forms. PMID:24943377

  6. Hydrodynamics of insect spermatozoa

    NASA Astrophysics Data System (ADS)

    Pak, On Shun; Lauga, Eric

    2010-11-01

    Microorganism motility plays important roles in many biological processes including reproduction. Many microorganisms propel themselves by propagating traveling waves along their flagella. Depending on the species, propagation of planar waves (e.g. Ceratium) and helical waves (e.g. Trichomonas) were observed in eukaryotic flagellar motion, and hydrodynamic models for both were proposed in the past. However, the motility of insect spermatozoa remains largely unexplored. An interesting morphological feature of such cells, first observed in Tenebrio molitor and Bacillus rossius, is the double helical deformation pattern along the flagella, which is characterized by the presence of two superimposed helical flagellar waves (one with a large amplitude and low frequency, and the other with a small amplitude and high frequency). Here we present the first hydrodynamic investigation of the locomotion of insect spermatozoa. The swimming kinematics, trajectories and hydrodynamic efficiency of the swimmer are computed based on the prescribed double helical deformation pattern. We then compare our theoretical predictions with experimental measurements, and explore the dependence of the swimming performance on the geometric and dynamical parameters.

  7. Hydrodynamics of fossil fishes.

    PubMed

    Fletcher, Thomas; Altringham, John; Peakall, Jeffrey; Wignall, Paul; Dorrell, Robert

    2014-08-07

    From their earliest origins, fishes have developed a suite of adaptations for locomotion in water, which determine performance and ultimately fitness. Even without data from behaviour, soft tissue and extant relatives, it is possible to infer a wealth of palaeobiological and palaeoecological information. As in extant species, aspects of gross morphology such as streamlining, fin position and tail type are optimized even in the earliest fishes, indicating similar life strategies have been present throughout their evolutionary history. As hydrodynamical studies become more sophisticated, increasingly complex fluid movement can be modelled, including vortex formation and boundary layer control. Drag-reducing riblets ornamenting the scales of fast-moving sharks have been subjected to particularly intense research, but this has not been extended to extinct forms. Riblets are a convergent adaptation seen in many Palaeozoic fishes, and probably served a similar hydrodynamic purpose. Conversely, structures which appear to increase skin friction may act as turbulisors, reducing overall drag while serving a protective function. Here, we examine the diverse adaptations that contribute to drag reduction in modern fishes and review the few attempts to elucidate the hydrodynamics of extinct forms.

  8. Adaptive Encoding for Numerical Data Compression.

    ERIC Educational Resources Information Center

    Yokoo, Hidetoshi

    1994-01-01

    Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective for any word length. Its application to the lossless compression of gray-scale images…
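The adaptive-model idea behind such schemes is that encoder and decoder maintain identical, evolving symbol statistics, so no frequency table is transmitted. A minimal sketch: code each symbol by its current frequency rank (frequent symbols get low ranks, which any back-end entropy coder can store cheaply). The AVL-tree structure the paper uses to keep this ranking fast is deliberately not reproduced here; this toy uses a plain sort.

```python
# Adaptive rank coding sketch: both sides update counts in lockstep, so the
# code adapts to the data with no statistics sent in advance.

def ranking(counts):
    # stable order: by descending count, ties broken by symbol value
    return sorted(counts, key=lambda s: (-counts[s], s))

def encode(data, alphabet):
    counts = {s: 0 for s in alphabet}
    out = []
    for s in data:
        out.append(ranking(counts).index(s))  # emit current rank of symbol
        counts[s] += 1                        # then update the model
    return out

def decode(ranks, alphabet):
    counts = {s: 0 for s in alphabet}
    out = []
    for r in ranks:
        s = ranking(counts)[r]                # same model, same ranking
        out.append(s)
        counts[s] += 1
    return out

alphabet = list(range(4))
data = [3, 3, 3, 0, 3, 3, 1, 3]
ranks = encode(data, alphabet)
restored = decode(ranks, alphabet)
```

After a few occurrences, the frequent symbol 3 is coded as rank 0; the scheme is lossless because decode mirrors encode's model updates exactly.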

  9. Compression stockings

    MedlinePlus

    ... knee bend. Compression Stockings Can Be Hard to Put on If it's hard for you to put on the stockings, try these tips: Apply lotion ... your legs, but let it dry before you put on the stockings. Use a little baby powder ...

  10. Nonlinear hydrodynamics of cosmological sheets. 1: Numerical techniques and tests

    NASA Technical Reports Server (NTRS)

    Anninos, Wenbo Y.; Norman, Michael J.

    1994-01-01

    We present the numerical techniques and tests used to construct and validate a computer code designed to study the multidimensional nonlinear hydrodynamics of large-scale sheet structures in the universe, especially the fragmentation of such structures under various instabilities. This code is composed of two codes, the hydrodynamical code ZEUS-2D and a particle-mesh code. The ZEUS-2D code solves the hydrodynamical equations in two dimensions using explicit Eulerian finite-difference techniques, with modifications made to incorporate the expansion of the universe and the gas cooling due to Compton scattering, bremsstrahlung, and hydrogen and helium cooling. The particle-mesh code solves the equation of motion for the collisionless dark matter. The code uses two-dimensional Cartesian coordinates with a nonuniform grid in one direction to provide high resolution for the sheet structures. A series of one-dimensional and two-dimensional linear perturbation tests are presented which are designed to test the hydro solver and the Poisson solver with and without the expansion of the universe. We also present a radiative shock wave test which is designed to ensure the code's capability to handle radiative cooling properly. Finally, a series of one-dimensional Zel'dovich pancake tests, used to test the dark matter code and the hydro solver in the nonlinear regime, are discussed and compared with the results of Bond et al. (1984) and Shapiro & Struck-Marcell (1985). Overall, the code is shown to produce accurate and stable results, providing a powerful tool for our further studies.

  11. Nonlinear hydrodynamics of cosmological sheets. 1: Numerical techniques and tests

    NASA Astrophysics Data System (ADS)

    Anninos, Wenbo Y.; Norman, Michael J.

    1994-07-01

    We present the numerical techniques and tests used to construct and validate a computer code designed to study the multidimensional nonlinear hydrodynamics of large-scale sheet structures in the universe, especially the fragmentation of such structures under various instabilities. This code is composed of two codes, the hydrodynamical code ZEUS-2D and a particle-mesh code. The ZEUS-2D code solves the hydrodynamical equations in two dimensions using explicit Eulerian finite-difference techniques, with modifications made to incorporate the expansion of the universe and the gas cooling due to Compton scattering, bremsstrahlung, and hydrogen and helium cooling. The particle-mesh code solves the equation of motion for the collisionless dark matter. The code uses two-dimensional Cartesian coordinates with a nonuniform grid in one direction to provide high resolution for the sheet structures. A series of one-dimensional and two-dimensional linear perturbation tests are presented which are designed to test the hydro solver and the Poisson solver with and without the expansion of the universe. We also present a radiative shock wave test which is designed to ensure the code's capability to handle radiative cooling properly. Finally, a series of one-dimensional Zel'dovich pancake tests, used to test the dark matter code and the hydro solver in the nonlinear regime, are discussed and compared with the results of Bond et al. (1984) and Shapiro & Struck-Marcell (1985). Overall, the code is shown to produce accurate and stable results, providing a powerful tool for our further studies.

  12. Constructing stable 3D hydrodynamical models of giant stars

    NASA Astrophysics Data System (ADS)

    Ohlmann, Sebastian T.; Röpke, Friedrich K.; Pakmor, Rüdiger; Springel, Volker

    2017-02-01

    Hydrodynamical simulations of stellar interactions require stable models of stars as initial conditions. Such initial models, however, are difficult to construct for giant stars because of the wide range in spatial scales of the hydrostatic equilibrium and in dynamical timescales between the core and the envelope of the giant. They are needed for, e.g., modeling the common envelope phase where a giant envelope encompasses both the giant core and a companion star. Here, we present a new method of approximating and reconstructing giant profiles from a stellar evolution code to produce stable models for multi-dimensional hydrodynamical simulations. We determine typical stellar stratification profiles with the one-dimensional stellar evolution code mesa. After an appropriate mapping, hydrodynamical simulations are conducted using the moving-mesh code arepo. The giant profiles are approximated by replacing the core of the giant with a point mass and by constructing a suitable continuation of the profile to the center. Different reconstruction methods are tested that can specifically control the convective behaviour of the model. After mapping to a grid, a relaxation procedure that includes damping of spurious velocities yields stable models in three-dimensional hydrodynamical simulations. Initially convectively stable configurations lead to stable hydrodynamical models while for stratifications that are convectively unstable in the stellar evolution code, simulations recover the convective behaviour of the initial model and show large convective plumes with Mach numbers up to 0.8. Examples are shown for a 2 M⊙ red giant and a 0.67 M⊙ asymptotic giant branch star. A detailed analysis shows that the improved method reliably provides stable models of giant envelopes that can be used as initial conditions for subsequent hydrodynamical simulations of stellar interactions involving giant stars.
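The relaxation step described above, in which spurious velocities from the mapping are damped away while the model settles toward equilibrium, can be caricatured with a single damped degree of freedom: a friction term -v/tau removes kinetic energy while a restoring force pulls the state back to hydrostatic balance. The oscillator below is a stand-in for one fluid element; all parameters are illustrative and unrelated to the actual AREPO setup.

```python
# Relaxation-by-damping sketch: dv/dt = -omega^2 x - v / tau,
# integrated with semi-implicit Euler. "x" is the displacement from
# equilibrium, "v" the spurious velocity to be damped.

def relax(x0, v0, omega, tau, dt, steps):
    x, v = x0, v0
    for _ in range(steps):
        v += (-omega**2 * x - v / tau) * dt  # restoring force + friction
        x += v * dt
    return x, v

# start displaced from equilibrium with a spurious velocity
omega = 2.0
x, v = relax(x0=1.0, v0=0.5, omega=omega, tau=0.5, dt=1e-3, steps=20000)
energy = 0.5 * v**2 + 0.5 * (omega * x)**2   # kinetic + potential
```

After many damping times the residual energy is negligible, which is the criterion such relaxation procedures use before declaring the mapped model "stable".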

  13. Preprocessing of compressed digital video

    NASA Astrophysics Data System (ADS)

    Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.

    2000-12-01

    Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends both on the content of an image and on the target bit rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.
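The effect of pre-filtering the DFD can be seen with a one-line low-pass filter: noise that would cost bits is attenuated while the genuine frame change mostly survives. The 3-tap kernel and toy signal are illustrative; in the paper the filter strength is tied to the rate control, which is not modeled here.

```python
# Sketch: low-pass filtering a toy displaced frame difference (DFD).
import random

random.seed(0)
# toy DFD: a small true change (a block edit) plus zero-mean sensor noise
dfd = [(4.0 if 20 <= i < 30 else 0.0) + random.gauss(0.0, 1.0)
       for i in range(64)]

def lowpass(x):
    """3-tap binomial filter [1/4, 1/2, 1/4] with edge clamping."""
    n = len(x)
    return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
            for i in range(n)]

filtered = lowpass(dfd)
energy = lambda x: sum(v * v for v in x)
```

The filtered DFD has lower energy, i.e. fewer bits are needed to code it, at the cost of slightly softening the true change.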

  14. Comparison Of Data Compression Schemes For Medical Images

    NASA Astrophysics Data System (ADS)

    Noh, Ki H.; Jenkins, Janice M.

    1986-06-01

    Medical images acquired and stored digitally continue to pose a major problem in the area of picture archiving and transmission. The need for accurate reproduction of such images, which constitute patient medical records, and the medico-legal problems of possible loss of information has led us to examine the suitability of data compression schemes for several different medical image modalities. We have examined both reversible coding and irreversible coding as methods of image formatting and reproduction. In reversible coding we have tested run-length coding and arithmetic coding on image bit planes. In irreversible coding, we have studied transform coding, linear predictive coding, and block truncation coding and their effects on image quality versus compression ratio in several image modalities. In transform coding, we have applied the discrete Fourier transform, the discrete cosine transform, the discrete sine transform, and the Walsh-Hadamard transform to images in which a subset of the transformed coefficients were retained and quantized. In linear predictive coding, we used a fixed level quantizer. In the case of block truncation coding, the first and second moments were retained. Results of all types of irreversible coding for data compression were unsatisfactory in terms of reproduction of the original image. Run-length coding was useful on several bit planes of an image but not on others. Arithmetic coding was found to be completely reversible and resulted in up to a 2 to 1 compression ratio.
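The bit-plane run-length result above (useful on some planes, not others) is easy to reproduce in miniature: split 8-bit pixels into planes and run-length code each. On a smooth scanline, the high-order plane collapses to one run while the low-order plane has many; the scanline and plane choice below are toy values.

```python
# Bit-plane run-length coding sketch with an exact reversibility check.

def to_plane(pixels, bit):
    """Extract one bit plane from a list of 8-bit pixel values."""
    return [(p >> bit) & 1 for p in pixels]

def rle_encode(bits):
    runs, current, length = [], bits[0], 0
    for b in bits:
        if b == current:
            length += 1
        else:
            runs.append((current, length))
            current, length = b, 1
    runs.append((current, length))
    return runs

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

# a smooth 1D "scanline": a gentle ramp of 8-bit pixel values
pixels = [min(255, i // 2) for i in range(64)]
plane7 = to_plane(pixels, 7)   # most significant bit: constant here, one run
plane0 = to_plane(pixels, 0)   # least significant bit: many short runs
```

Run-length coding is reversible by construction, so it is safe for the medico-legal constraint the abstract raises; it simply fails to compress the noisy low-order planes.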

  15. Postexplosion hydrodynamics of supernovae in red supergiants

    NASA Technical Reports Server (NTRS)

    Herant, Marc; Woosley, S. E.

    1994-01-01

    Shock propagation, mixing, and clumping are studied in the explosion of red supergiants as Type II supernovae using a two-dimensional smooth particle hydrodynamic (SPH) code. We show that extensive Rayleigh-Taylor instabilities develop in the ejecta in the wake of the reverse shock wave. In all cases, the shell structure of the progenitor is obliterated to leave a clumpy, well-mixed supernova remnant. However, the occurrence of mass loss during the lifetime of the progenitor can significantly reduce the amount of mixing. These results are independent of the Type II supernova explosion mechanism.

  16. Impact modeling with Smooth Particle Hydrodynamics

    SciTech Connect

    Stellingwerf, R.F.; Wingate, C.A.

    1993-07-01

    Smooth Particle Hydrodynamics (SPH) can be used to model hypervelocity impact phenomena via the addition of a strength of materials treatment. SPH is the only technique that can model such problems efficiently due to the combination of 3-dimensional geometry, large translations of material, large deformations, and large void fractions for most problems of interest. This makes SPH an ideal candidate for modeling of asteroid impact, spacecraft shield modeling, and planetary accretion. In this paper we describe the derivation of the strength equations in SPH, show several basic code tests, and present several impact test cases with experimental comparisons.
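The core SPH operation underlying all of these applications is the kernel density sum, rho_i = sum_j m_j W(|x_i - x_j|, h). A minimal 1D sanity check with a cubic spline kernel: evenly spaced particles of mass m and spacing dx should recover rho ≈ m/dx. This is a generic SPH sketch, not the strength-of-materials code of the abstract; all values are toy choices.

```python
# SPH density estimate in 1D with the standard cubic spline (M4) kernel.

def w_cubic_1d(r, h):
    """1D cubic spline kernel, support 2h, normalization 2/(3h)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x_eval, positions, mass, h):
    """rho(x_eval) = sum_j m_j W(x_eval - x_j, h)."""
    return sum(mass * w_cubic_1d(x_eval - xj, h) for xj in positions)

dx, m, h = 0.1, 0.1, 0.13            # spacing, particle mass, smoothing length
positions = [i * dx for i in range(-20, 21)]
rho = sph_density(0.0, positions, m, h)   # evaluate at centre, away from edges
```

For this uniform arrangement the estimate comes out within a fraction of a percent of the exact line density m/dx = 1, which is the standard first test of an SPH implementation.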

  17. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  18. Low torque hydrodynamic lip geometry for rotary seals

    SciTech Connect

    Dietle, Lannie L.; Schroeder, John E.

    2015-07-21

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  19. Polar Codes

    DTIC Science & Technology

    2014-12-01

    density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes. ...the most common. Many civilian systems use low density parity check (LDPC) FEC codes, and the Navy is planning to use LDPC for some future systems...other forward error correction methods: a turbo code, a low density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes
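The defining operation of a polar code is the transform x = u·F^{⊗n} over GF(2), with F = [[1,0],[1,1]], computed by a butterfly recursion. A convenient correctness check: F² = I (mod 2), so applying the transform twice recovers the input. The sketch below shows only the encoder; a complete polar code also needs reliability-based frozen-bit selection and a successive-cancellation decoder, neither of which is reproduced here.

```python
# Polar transform sketch: recursive butterfly over GF(2).
import random

def polar_encode(u):
    """Polar transform of a bit list whose length is a power of two."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    upper = [a ^ b for a, b in zip(u[:half], u[half:])]  # u1 XOR u2
    return polar_encode(upper) + polar_encode(u[half:])

random.seed(1)
u = [random.randint(0, 1) for _ in range(16)]
x = polar_encode(u)
```

Because the transform is linear over GF(2) and self-inverse, `polar_encode(x)` returns the original `u`, which makes round-trip testing of the butterfly trivial.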

  20. Nonlinear Generalized Hydrodynamic Wave Equations in Strongly Coupled Dusty Plasmas

    SciTech Connect

    Veeresha, B. M.; Sen, A.; Kaw, P. K.

    2008-09-07

    A set of nonlinear equations for the study of low frequency waves in a strongly coupled dusty plasma medium is derived using the phenomenological generalized hydrodynamic (GH) model and is used to study the modulational stability of dust acoustic waves to parallel perturbations. Dust compressibility contributions arising from strong Coulomb coupling effects are found to introduce significant modifications in the threshold and range of the instability domain.

  1. MUFASA: galaxy formation simulations with meshless hydrodynamics

    NASA Astrophysics Data System (ADS)

    Davé, Romeel; Thompson, Robert; Hopkins, Philip F.

    2016-11-01

    We present the MUFASA suite of cosmological hydrodynamic simulations, which employs the GIZMO meshless finite mass (MFM) code including H2-based star formation, nine-element chemical evolution, two-phase kinetic outflows following scalings from the Feedback in Realistic Environments zoom simulations, and evolving halo mass-based quenching. Our fiducial (50 h-1 Mpc)3 volume is evolved to z = 0 with a quarter billion elements. The predicted galaxy stellar mass functions (GSMFs) reproduce observations from z = 4 → 0 to ≲ 1.2σ in cosmic variance, providing an unprecedented match to this key diagnostic. The cosmic star formation history and stellar mass growth show general agreement with data, with a strong archaeological downsizing trend such that dwarf galaxies form the majority of their stars after z ˜ 1. We run 25 and 12.5 h-1 Mpc volumes to z = 2 with identical feedback prescriptions, the latter resolving all hydrogen-cooling haloes, and the three runs display fair resolution convergence. The specific star formation rates broadly agree with data at z = 0, but are underpredicted at z ˜ 2 by a factor of 3, re-emphasizing a longstanding puzzle in galaxy evolution models. We compare runs using MFM and two flavours of smoothed particle hydrodynamics, and show that the GSMF is sensitive to hydrodynamics methodology at the ˜×2 level, which is sub-dominant to choices for parametrizing feedback.

  2. MAESTRO: An Adaptive Low Mach Number Hydrodynamics Algorithm for Stellar Flows

    NASA Astrophysics Data System (ADS)

    Nonaka, Andrew; Almgren, A. S.; Bell, J. B.; Malone, C. M.; Zingale, M.

    2010-01-01

    Many astrophysical phenomena are highly subsonic, requiring specialized numerical methods suitable for long-time integration. We present MAESTRO, a low Mach number stellar hydrodynamics code that can be used to simulate long-time, low-speed flows that would be prohibitively expensive to model using traditional compressible codes. MAESTRO is based on an equation set that we have derived using low Mach number asymptotics; this equation set does not explicitly track acoustic waves and thus allows a significant increase in the time step. MAESTRO is suitable for two- and three-dimensional local atmospheric flows as well as three-dimensional full-star flows, and uses adaptive mesh refinement (AMR) to locally refine grids in regions of interest. Our initial scientific applications include the convective phase of Type Ia supernovae and Type I X-ray bursts on neutron stars. The work at LBNL was supported by the SciDAC Program of the DOE Office of Advanced Scientific Computing Research under contract No. DE-AC02-05CH11231. The work at Stony Brook was supported by the DOE Office of Nuclear Physics, grant No. DE-FG02-06ER41448. We made use of Jaguar via a DOE INCITE allocation at the OLCF at ORNL, and of Franklin at NERSC at LBNL.
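The time-step advantage claimed above follows from the CFL condition: an explicit compressible code is limited by dx/(|u| + c), while the low Mach equations drop acoustic waves and are limited by dx/|u|, a gain of roughly 1/M. The numbers below are illustrative, not taken from MAESTRO runs.

```python
# Back-of-envelope CFL comparison for a Mach 0.01 convective flow.
dx = 1.0e5     # cm, grid spacing
u = 3.0e5      # cm/s, convective (fluid) speed
c = 3.0e7      # cm/s, sound speed  ->  Mach number M = u / c = 0.01

dt_compressible = dx / (u + c)       # explicit compressible CFL limit
dt_low_mach = dx / u                 # acoustic waves filtered out
speedup = dt_low_mach / dt_compressible   # = 1 + c/u, roughly 1/M
```

At M = 0.01 the allowed step grows by about two orders of magnitude, which is exactly why long-time subsonic integrations become feasible.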

  3. Hydrodynamics of Ship Propellers

    NASA Astrophysics Data System (ADS)

    Breslin, John P.; Andersen, Poul

    1996-11-01

This book deals with flows over propellers operating behind ships, and the hydrodynamic forces and moments that the propeller generates on the shaft and on the ship hull. The first part of the book is devoted to fundamentals of the flow about hydrofoil sections and wings, and to propellers in uniform flow, with guidance for design and pragmatic analysis of performance. The second part covers the development of unsteady forces arising from operation in nonuniform hull wakes. A final chapter discusses the optimization of efficiency of compound propulsors. Researchers in ocean technology and naval architecture will find this book appealing.

  4. How to fake hydrodynamic signals

    NASA Astrophysics Data System (ADS)

    Romatschke, Paul

    2016-12-01

Flow signatures in experimental data from relativistic ion collisions are usually interpreted as a fingerprint of the presence of a hydrodynamic phase during the evolution of these systems. I review some theoretical ideas to 'fake' this hydrodynamic behavior in p+A and A+A collisions. I find that transverse flow and femtoscopic measurements can easily be forged through non-hydrodynamic evolution, while large elliptic flow requires some non-vanishing interactions in the hot phase.

  5. Hydrodynamic synchronization of flagellar oscillators

    NASA Astrophysics Data System (ADS)

    Friedrich, Benjamin

    2016-11-01

    In this review, we highlight the physics of synchronization in collections of beating cilia and flagella. We survey the nonlinear dynamics of synchronization in collections of noisy oscillators. This framework is applied to flagellar synchronization by hydrodynamic interactions. The time-reversibility of hydrodynamics at low Reynolds numbers requires swimming strokes that break time-reversal symmetry to facilitate hydrodynamic synchronization. We discuss different physical mechanisms for flagellar synchronization, which break this symmetry in different ways.
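The framework of coupled noisy phase oscillators described above can be sketched with a minimal Adler-type model. This is a standard caricature of hydrodynamically coupled flagella, not the review's own model; the frequency, coupling, and noise values are assumptions:

```python
# Minimal sketch (not from the review): two noisy phase oscillators with weak
# sine coupling. The phase difference d = phi1 - phi2 obeys an Adler-type
# equation d' = -2*eps*sin(d) + noise, and locks near d = 0 when the noise
# is weak compared to the coupling.
import math
import random

random.seed(1)
dt, steps = 1e-3, 50_000
omega = 2 * math.pi * 30.0   # intrinsic beat frequency, ~30 Hz (assumed)
eps = 5.0                    # coupling strength, 1/s (assumed)
noise = 0.5                  # phase noise amplitude (assumed)

phi1, phi2 = 0.0, 2.0        # start well out of phase
for _ in range(steps):
    xi1 = random.gauss(0.0, 1.0) * math.sqrt(dt) * noise
    xi2 = random.gauss(0.0, 1.0) * math.sqrt(dt) * noise
    dphi = phi1 - phi2
    phi1 += (omega - eps * math.sin(dphi)) * dt + xi1
    phi2 += (omega + eps * math.sin(dphi)) * dt + xi2

# wrap the final phase difference into (-pi, pi]
d = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
print(abs(d))  # small: the pair has phase-locked despite the noise
```

Note that if the coupling term is made symmetric under time reversal, locking disappears, mirroring the symmetry-breaking requirement discussed in the abstract.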

  6. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  7. Fast compression implementation for hyperspectral sensor

    NASA Astrophysics Data System (ADS)

    Hihara, Hiroki; Yoshida, Jun; Ishida, Juro; Takada, Jun; Senda, Yuzo; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Ohgi, Nagamitsu

    2010-11-01

Fast, small-footprint lossless image compressors aimed at hyperspectral sensors for earth observation satellites have been developed. Since more than one hundred channels are required for hyperspectral sensors on optical observation satellites, a fast compression algorithm with a small-footprint implementation is essential for reducing encoder size and weight, and hence for realizing a light-weight, small sensor system. The compression method should have low complexity in order to reduce the size, weight, power consumption, and fabrication cost of the sensor signal processing unit. Good coding efficiency and compression speed enlarge the capacity of the signal compression channels, allowing many sensor signal channels to be multiplexed onboard into a reduced number of compression channels. The employed method is based on FELICS, a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we applied two-dimensional interpolation prediction and adaptive Golomb-Rice coding, which enables a small footprint. The method supports progressive decompression using resolution scaling, while still delivering superior performance as measured by speed and complexity. The small-footprint circuitry is embedded into the hyperspectral sensor data formatter; as a result, the lossless compression function has been added without additional size and weight.
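The Golomb-Rice coding of prediction residuals mentioned above can be sketched as follows. This is the standard (non-adaptive) Rice code with parameter k plus the usual zigzag mapping of signed residuals, not the paper's exact adaptive scheme:

```python
# Sketch of Rice coding (Golomb with divisor M = 2^k) for prediction
# residuals. The zigzag mapping and fixed k are standard simplifications;
# the paper adapts k per context, which is not reproduced here.

def zigzag(x):
    """Map signed residual to non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (x << 1) if x >= 0 else ((-x << 1) - 1)

def rice_encode(n, k):
    """Unary-coded quotient, '0' terminator, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return ("1" * q + "0" + format(r, f"0{k}b")) if k else ("1" * q + "0")

def rice_decode(bits, k):
    q = 0
    i = 0
    while bits[i] == "1":
        q += 1
        i += 1
    i += 1  # skip the terminating '0'
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r

residuals = [0, -1, 3, -4, 2]
k = 2
code = "".join(rice_encode(zigzag(x), k) for x in residuals)
print(len(code))  # 18 bits, versus 40 bits at a fixed 8 bits per residual
```

Small residuals get short codewords, which is why predictive decorrelation followed by Rice coding works well on smooth imagery.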

  8. Shock compression of nitrobenzene

    NASA Astrophysics Data System (ADS)

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi

    1999-06-01

The Hugoniot (4 - 30 GPa) and the isotherm (1 - 7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitro aromatic compounds, which are widely used as energetic materials, but nitrobenzene has been considered not to explode, in spite of the fact that its calculated heat of detonation is similar to that of TNT, about 1 kcal/g. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments, with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of the detonation products calculated by the KHT code, so nitrobenzene is expected to detonate in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by an X-ray diffraction technique. Comparing the Hugoniot and the isotherm shows that nitrobenzene is in the liquid phase under the shock conditions studied. From the expected phase diagram, shocked nitrobenzene appears to remain a metastable liquid in the solid-phase region of that diagram.

  9. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    SciTech Connect

    Johnson, Jeffrey

    2006-04-01

Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.
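The Sod shock tube listed among the test problems is the canonical 1D hydrodynamics benchmark. A minimal sketch of it (not Spasmos itself) with a first-order Lax-Friedrichs scheme; the grid size, CFL number, and end time are assumptions:

```python
# Minimal Sod shock tube sketch: first-order Lax-Friedrichs for the 1D Euler
# equations with a gamma-law gas. Not Spasmos code; resolution, CFL factor,
# and end time are chosen for illustration only.
import numpy as np

gamma = 1.4
N = 400
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Conserved variables U = (rho, rho*u, E); standard Sod left/right states.
rho = np.where(x < 0.5, 1.0, 0.125)
u = np.zeros(N)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.stack([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return np.stack([mom, mom * u + p, (E + p) * u])

t, t_end = 0.0, 0.2
while t < t_end:
    rho_n, mom, E = U
    u_vel = mom / rho_n
    p_n = (gamma - 1) * (E - 0.5 * rho_n * u_vel**2)
    c = np.sqrt(gamma * p_n / rho_n)
    dt = min(0.4 * dx / np.max(np.abs(u_vel) + c), t_end - t)
    F = flux(U)
    # Lax-Friedrichs update on interior points; copy (outflow) boundaries.
    Unew = U.copy()
    Unew[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
    U = Unew
    t += dt

rho_final = U[0]  # smeared but recognizable shock, contact, and rarefaction
```

Lax-Friedrichs is very diffusive; it serves here only to show the structure of a conservative update on this test problem.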

  10. A two-phase code for protoplanetary disks

    NASA Astrophysics Data System (ADS)

    Inaba, S.; Barge, P.; Daniel, E.; Guillard, H.

    2005-02-01

A high accuracy 2D hydrodynamical code has been developed to simulate the flow of gas and solid particles in protoplanetary disks. Gas is considered as a compressible fluid while solid particles, fully coupled to the gas by aerodynamical forces, are treated as a pressure-free diluted second phase. The solid particles lose energy and angular momentum, which are transferred to the gas. As a result, particles migrate inward toward the star and gas moves outward. High accuracy is necessary to account for the coupling. Boundary conditions must account for the inward/outward motions of the two phases. The code has been tested in one- and two-dimensional situations. The numerical results were compared with analytical solutions in three different cases: i) the disk is composed of a single gas component; ii) solid particles migrate in a steady flow of gas; iii) gas and solid particles evolve simultaneously. The code easily reproduces known analytical solutions and is a powerful tool to study planetary formation at the decoupling stage. For example, the evolution of an over-density in the radial distribution of solids is found to differ significantly from the case where no back reaction of the particles onto the gas is assumed. Inside the bump, solid particles have a drift velocity approximately 16 times smaller than outside, which significantly increases the residence time of the particles in the nebula. This opens some interesting perspectives to solve the timescale problem for the formation of planetesimals.

  11. Effect of Second-Order Hydrodynamics on a Floating Offshore Wind Turbine

    SciTech Connect

    Roald, L.; Jonkman, J.; Robertson, A.

    2014-05-01

The design of offshore floating wind turbines uses design codes that can simulate the entire coupled system behavior. At present, most codes include only first-order hydrodynamics, which induces forces and motions varying with the same frequency as the incident waves. Effects due to second- and higher-order hydrodynamics are often ignored in the offshore industry, because the induced forces are typically smaller than the first-order forces. In this report, the first- and second-order hydrodynamic analysis used in the offshore oil and gas industry is applied to two different wind turbine concepts--a spar and a tension leg platform.

  12. Image compression requirements and standards in PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1995-05-01

Cost-effective telemedicine and storage create a need for medical image compression. Compression saves communication bandwidth and reduces the size of the stored images. After clinicians become acquainted with the quality of images produced by some of the newer algorithms, they accept the idea of lossy compression. The older algorithms, JPEG and MPEG in particular, are generally not adequate for high quality compression of medical images. The requirements for medical image compression center on diagnostic quality after restoration of the images: compression artifacts should not interfere with viewing the images for diagnosis. New requirements arise from the fact that the images will likely be viewed on a computer workstation, where they may be manipulated in ways that would bring out artifacts. A medical imaging compression standard must be applicable across a large variety of image types, from CT and MR to CR and ultrasound; having one or a very few compression algorithms that are effective across a broad range of image types is desirable. Related series of images, as for CT, MR, or cardiology, require inter-image as well as intra-image processing for effective compression. Two preferred decompositions of medical images are lapped orthogonal transforms and wavelet transforms, which decompose the images in frequency in two different ways: the lapped orthogonal transform groups the data according to the area where the data originated, while the wavelet transform groups the data by the frequency band of the image. The compression realized depends on the similarity of close transform coefficients. Huffman coding or the coding of the RICE algorithm is a beginning for the encoding. To be really effective, the coding must have an extension for areas where there is little information, the low-entropy extension: in these areas there is less than one bit per pixel, and multiple pixels must be coded together.

  13. Hydrodynamics of Bacterial Cooperation

    NASA Astrophysics Data System (ADS)

    Petroff, A.; Libchaber, A.

    2012-12-01

Over the course of the last several decades, the study of microbial communities has identified countless examples of cooperation between microorganisms. Generally—as in the case of quorum sensing—cooperation is coordinated by a chemical signal that diffuses through the community. Less well understood is a second class of cooperation that is mediated through physical interactions between individuals. To better understand how bacteria use hydrodynamics to manipulate their environment and coordinate their actions, we study the sulfur-oxidizing bacterium Thiovulum majus. These bacteria live in the diffusive boundary layer just above the muddy bottoms of ponds. As buried organic material decays, sulfide diffuses out of the mud. Oxygen from the pond diffuses into the boundary layer from above. These bacteria form communities—called veils—which are able to transport nutrients through the boundary layer faster than diffusion, thereby increasing their metabolic rate. In these communities, bacteria attach to surfaces and swim in place. As millions of bacteria beat their flagella, the community induces a macroscopic fluid flow, which mixes the boundary layer. Here we present experimental observations and mathematical models that elucidate the hydrodynamics linking the behavior of an individual bacterium to the collective dynamics of the community. We begin by characterizing the flow of water around an individual bacterium swimming in place. We then discuss the flow of water and nutrients around a small number of individuals. Finally, we present observations and models detailing the macroscopic dynamics of a Thiovulum veil.

  14. Load responsive hydrodynamic bearing

    DOEpatents

    Kalsi, Manmohan S.; Somogyi, Dezso; Dietle, Lannie L.

    2002-01-01

A load responsive hydrodynamic bearing is provided in the form of a thrust bearing or journal bearing for supporting, guiding and lubricating a relatively rotatable member to minimize wear thereof responsive to relative rotation under severe load. In the space between spaced relatively rotatable members and in the presence of a liquid or grease lubricant, one or more continuous ring shaped integral generally circular bearing bodies each define at least one dynamic surface and a plurality of support regions. Each of the support regions defines a static surface which is oriented in generally opposed relation with the dynamic surface for contact with one of the relatively rotatable members. A plurality of flexing regions are defined by the generally circular body of the bearing and are integral with and located between adjacent support regions. Each of the flexing regions has a first beam-like element being connected by an integral flexible hinge with one of the support regions and a second beam-like element having an integral flexible hinge connection with an adjacent support region. At least one local weakening geometry of the flexing region is located intermediate the first and second beam-like elements. In response to application of load from one of the relatively rotatable elements to the bearing, the beam-like elements and the local weakening geometry become flexed, causing the dynamic surface to deform and establish a hydrodynamic geometry for wedging lubricant into the dynamic interface.

  15. Pilot-Wave Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bush, John W. M.

    2015-01-01

Yves Couder, Emmanuel Fort, and coworkers recently discovered that a millimetric droplet sustained on the surface of a vibrating fluid bath may self-propel through a resonant interaction with its own wave field. This article reviews experimental evidence indicating that the walking droplets exhibit certain features previously thought to be exclusive to the microscopic, quantum realm. It then reviews theoretical descriptions of this hydrodynamic pilot-wave system that yield insight into the origins of its quantum-like behavior. Quantization arises from the dynamic constraint imposed on the droplet by its pilot-wave field, and multimodal statistics appear to be a feature of chaotic pilot-wave dynamics. I attempt to assess the potential and limitations of this hydrodynamic system as a quantum analog. The fluid system is compared to quantum pilot-wave theories, shown to be markedly different from Bohmian mechanics and more closely related to de Broglie's original conception of quantum dynamics, his double-solution theory, and its relatively recent extensions through research in stochastic electrodynamics.

  16. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.

  17. Distributed sensor data compression algorithm

    NASA Astrophysics Data System (ADS)

    Ambrose, Barry; Lin, Freddie

    2006-04-01

    Theoretically it is possible for two sensors to reliably send data at rates smaller than the sum of the necessary data rates for sending the data independently, essentially taking advantage of the correlation of sensor readings to reduce the data rate. In 2001, Caltech researchers Michelle Effros and Qian Zhao developed new techniques for data compression code design for correlated sensor data, which were published in a paper at the 2001 Data Compression Conference (DCC 2001). These techniques take advantage of correlations between two or more closely positioned sensors in a distributed sensor network. Given two signals, X and Y, the X signal is sent using standard data compression. The goal is to design a partition tree for the Y signal. The Y signal is sent using a code based on the partition tree. At the receiving end, if ambiguity arises when using the partition tree to decode the Y signal, the X signal is used to resolve the ambiguity. We have extended this work to increase the efficiency of the code search algorithms. Our results have shown that development of a highly integrated sensor network protocol that takes advantage of a correlation in sensor readings can result in 20-30% sensor data transport cost savings. In contrast, the best possible compression using state-of-the-art compression techniques that did not take into account the correlation of the incoming data signals achieved only 9-10% compression at most. This work was sponsored by MDA, but has very widespread applicability to ad hoc sensor networks, hyperspectral imaging sensors and vehicle health monitoring sensors for space applications.
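The idea of sending Y with fewer bits and resolving ambiguity with X can be illustrated by a Slepian-Wolf-style coset trick. This toy is not the Effros-Zhao partition-tree design; the modulus and correlation bound are assumptions for the example:

```python
# Toy distributed-coding illustration: sensor Y transmits only its residue
# mod 8 (3 bits instead of 8 for a byte). The decoder, which already has the
# correlated reading X and knows |X - Y| is small, picks the value with that
# residue closest to X. Not the Effros-Zhao algorithm; m = 8 is assumed.

def encode_y(y, m=8):
    return y % m  # coset index: log2(m) bits on the wire

def decode_y(coset, x, m=8):
    """Return the value congruent to coset (mod m) that is closest to x."""
    base = x - (x % m) + coset
    candidates = (base - m, base, base + m)
    return min(candidates, key=lambda v: abs(v - x))

x, y = 130, 133          # correlated readings; |x - y| < m/2 assumed
recovered = decode_y(encode_y(y), x)
print(recovered)  # 133: Y recovered exactly from 3 bits plus side information
```

Decoding is exact whenever |x - y| < m/2, so the achievable rate saving is set by how tightly the sensor readings are correlated.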

  18. Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron; Rodgers, Stephen L. (Technical Monitor)

    2000-01-01

A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics (SPH) method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods will be presented together with the numerical results.

  19. Hydrodynamic Efficiency of Ablation Propulsion with Pulsed Ion Beam

    SciTech Connect

    Buttapeng, Chainarong; Yazawa, Masaru; Harada, Nobuhiro; Suematsu, Hisayuki; Jiang Weihua; Yatsui, Kiyoshi

    2006-05-02

This paper presents the hydrodynamic efficiency of ablation plasma produced by a pulsed ion beam, on the basis of the ion beam-target interaction. We used a one-dimensional compressible hydrodynamic fluid model to study the physics involved, namely the ablation acceleration behavior, and analyzed it as a rocket-like model in order to investigate its hydrodynamic variables for propulsion applications. These variables were estimated using the concept of ablation-driven implosion in terms of the ablated mass fraction, implosion efficiency, and hydrodynamic energy conversion. Herein, an energy conversion efficiency of 17.5% was achieved. In addition, the results show a maximum efficiency of the ablation process (ablation efficiency) of 67%, i.e. the efficiency of converting pulsed ion beam energy into ablation plasma. The effect of the ion beam energy deposition depth on the hydrodynamic efficiency is briefly discussed. Further, an evaluation of the propulsive force, with a high specific impulse of 4000 s, a total impulse of 34 mN, and a momentum-to-energy ratio in the range of μN/W, is also presented.

  20. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.

  1. GAMER: GPU-accelerated Adaptive MEsh Refinement code

    NASA Astrophysics Data System (ADS)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2016-12-01

GAMER (GPU-accelerated Adaptive MEsh Refinement) serves as a general-purpose adaptive mesh refinement + GPU framework and solves hydrodynamics with self-gravity. The code provides a variety of GPU-accelerated hydrodynamic and Poisson solvers, and supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balance. Although the code is designed for simulating galaxy formation, it can easily be modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.
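The Hilbert space-filling curve used for load balance can be sketched with the standard 2D distance-to-coordinate mapping. This is the textbook algorithm, not GAMER's implementation:

```python
# Sketch of the standard 2D Hilbert curve mapping (not GAMER's code):
# converting a curve distance d into grid coordinates (x, y). Ordering AMR
# patches by d and splitting the ordered list across MPI ranks keeps
# spatially nearby patches on the same rank, which is the point of using a
# space-filling curve for load balance.

def hilbert_d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid (n a power of 2)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # rotate/reflect the quadrant as needed
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Successive curve indices visit every cell once, stepping only to neighbours:
path = [hilbert_d2xy(8, d) for d in range(64)]
```

Because consecutive indices are always grid neighbours, contiguous chunks of the curve correspond to compact spatial regions, minimizing inter-rank communication.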

  2. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and is thus suitable for the fluorescence readout mode. A two-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  3. Fast, efficient lossless data compression

    NASA Technical Reports Server (NTRS)

    Ross, Douglas

    1991-01-01

    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  4. MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS

    SciTech Connect

    Roth, Nathaniel; Kasen, Daniel

    2015-03-15

    We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and a Eulerian Godunov solver. The gas–radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.

  5. Prototype Mixed Finite Element Hydrodynamics Capability in ARES

    SciTech Connect

    Rieben, R N

    2008-07-10

This document describes work on a prototype Mixed Finite Element Method (MFEM) hydrodynamics algorithm in the ARES code, and its application to a set of standard test problems. This work is motivated by the need for improvements to the algorithms used in the Lagrange hydrodynamics step to make them more robust. We begin by identifying the outstanding issues with traditional numerical hydrodynamics algorithms, followed by a description of the proposed method and how it may address several of these longstanding issues. We give a theoretical overview of the proposed MFEM algorithm as well as a summary of the coding additions and modifications that were made to add this capability to the ARES code. We present results obtained with the new method on a set of canonical hydrodynamics test problems and demonstrate significant improvement in comparison to results obtained with traditional methods. We conclude with a summary of the issues still at hand and motivate the need for continued research to bring the proposed method to maturity.

  6. Reaching the hydrodynamic regime in a Bose-Einstein condensate by suppression of avalanches

    SciTech Connect

    Stam, K. M. R. van der; Meppelink, R.; Vogels, J. M.; Straten, P. van der

    2007-03-15

We report the realization of a Bose-Einstein condensate (BEC) in the hydrodynamic regime. The hydrodynamic regime is reached by evaporative cooling at a relatively low density, suppressing the effect of avalanches. With the suppression of avalanches, a BEC containing more than 10^8 atoms is produced. The collisional opacity can be tuned from the collisionless regime to a collisional opacity of more than 2 by compressing the trap after condensation. In the collisionally opaque regime, significant heating of the cloud on time scales shorter than half the radial trap period is measured, which is direct proof that the BEC is hydrodynamic.

  7. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794
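A useful baseline for DNA sequence compression is simple 2-bit packing, the trivial bound that text-oriented tools like gzip often fail to reach on ASCII sequence data. This sketch is that baseline only, not coil's edit-tree coding:

```python
# Baseline sketch (not coil's edit-tree coding): pack an ACGT string at
# 2 bits per base, 4 bases per byte. Real FASTA data also needs handling for
# N and other ambiguity codes, which is omitted here.

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack an ACGT string into bytes, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        b = 0
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        out.append(b << 2 * (4 - len(chunk)))  # left-align a short tail
    return bytes(out)

def unpack(data, length):
    """Invert pack(); length trims the padding in the final byte."""
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(b >> shift) & 3])
    return "".join(seq[:length])

seq = "GATTACAGATTACA"
packed = pack(seq)
print(len(packed))  # 4 bytes for 14 bases, versus 14 bytes of ASCII
```

Tools like coil gain beyond this bound by exploiting redundancy *between* sequences in the database, which per-sequence packing cannot see.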

  8. General formulation of transverse hydrodynamics

    SciTech Connect

    Ryblewski, Radoslaw; Florkowski, Wojciech

    2008-06-15

    General formulation of hydrodynamics describing transversally thermalized matter created at the early stages of ultrarelativistic heavy-ion collisions is presented. Similarities and differences with the standard three-dimensionally thermalized relativistic hydrodynamics are discussed. The role of the conservation laws as well as the thermodynamic consistency of two-dimensional thermodynamic variables characterizing transversally thermalized matter is emphasized.

  9. Low Mach number fluctuating hydrodynamics for electrolytes

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-01

    We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  10. Hydrocyclone separation hydrodynamics

    SciTech Connect

    Ivanov, A.A.; Ruzanov, S.R.; Lunyushkina, I.A.

    1987-10-20

    The lack of an adequate hydrodynamic model for a hydrocyclone has so far been the main obstacle to devising a general method for designing such apparatus. The authors present a method of calculating the liquid flow in the working zone. The results have been used to calculate the separating power in application to dilute suspensions. The Navier-Stokes equations and the equation of continuity are used in examining the behavior, together with assumptions based on experiment: the conditions for stationary axisymmetric flow, constant turbulent viscosity, and a constant radial profile for the tangential flow speed at all heights. The boundary conditions are those for liquid slip at the side walls and absence of vortex drainage at the axis. The results enable one to choose the dimensions for particular separations.

  11. Synchronization and hydrodynamic interactions

    NASA Astrophysics Data System (ADS)

    Powers, Thomas; Qian, Bian; Breuer, Kenneth

    2008-03-01

    Cilia and flagella commonly beat in a coordinated manner. Examples include the flagella that Volvox colonies use to move, the cilia that sweep foreign particles up out of the human airway, and the nodal cilia that set up the flow that determines the left-right axis in developing vertebrate embryos. In this talk we present an experimental study of how hydrodynamic interactions can lead to coordination in a simple idealized system: two nearby paddles driven with fixed torques in a highly viscous fluid. The paddles attain a synchronized state in which they rotate together with a phase difference of 90 degrees. We discuss how synchronization depends on system parameters and present numerical calculations using the method of regularized Stokeslets.

  12. Hydrodynamics, resurgence, and transasymptotics

    NASA Astrophysics Data System (ADS)

    Başar, Gökçe; Dunne, Gerald V.

    2015-12-01

    The second order hydrodynamical description of a homogeneous conformal plasma that undergoes a boost-invariant expansion is given by a single nonlinear ordinary differential equation, whose resurgent asymptotic properties we study, developing further the recent work of Heller and Spalinski [Phys. Rev. Lett. 115, 072501 (2015)]. Resurgence clearly identifies the nonhydrodynamic modes that are exponentially suppressed at late times, analogous to the quasinormal modes in gravitational language, organizing these modes in terms of a trans-series expansion. These modes are analogs of instantons in semiclassical expansions, where the damping rate plays the role of the instanton action. We show that this system displays the generic features of resurgence, with explicit quantitative relations between the fluctuations about different orders of these nonhydrodynamic modes. The imaginary part of the trans-series parameter is identified with the Stokes constant, and the real part with the freedom associated with initial conditions.

  13. Hydrodynamics of Turning Flocks

    NASA Astrophysics Data System (ADS)

    Yang, Xingbo; Marchetti, M. Cristina

    2015-03-01

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well polarized flocks. The continuum equations are derived by coarse graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields spin waves that mediate the propagation of turning information throughout the flock. When the inertia is large, we find a novel instability that signals the transition to complex spatio-temporal patterns of continuously turning and swirling flocks. This work was supported by the NSF Awards DMR-1305184 and DGE-1068780 at Syracuse University and NSF Award PHY11-25915 and the Gordon and Betty Moore Foundation Grant No. 2919 at the KITP at the University of California, Santa Barbara.

  14. Hydrodynamics of Peristaltic Propulsion

    NASA Astrophysics Data System (ADS)

    Athanassiadis, Athanasios; Hart, Douglas

    2014-11-01

    A curious class of animals called salps live in marine environments and self-propel by ejecting vortex rings much like jellyfish and squid. However, unlike other jetting creatures that siphon and eject water from one side of their body, salps produce vortex rings by pumping water through siphons on opposite ends of their hollow cylindrical bodies. In the simplest cases, it seems like some species of salp can successfully move by contracting just two siphons connected by an elastic body. When thought of as a chain of timed contractions, salp propulsion is reminiscent of peristaltic pumping applied to marine locomotion. Inspired by salps, we investigate the hydrodynamics of peristaltic propulsion, focusing on the scaling relationships that determine flow rate, thrust production, and energy usage in a model system. We discuss possible actuation methods for a model peristaltic vehicle, considering both the material and geometrical requirements for such a system.

  15. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
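A minimal sketch of the idea described above, with names and dimensions of my own choosing: a collection of correlated 1-D kernels is compressed into a few eigen-kernels via an SVD, the data is convolved once per eigen-kernel, and each requested convolution is then "decompressed" as a linear combination. Because convolution is linear in the kernel, the reconstruction is exact up to the truncation rank.

```python
import numpy as np

rng = np.random.default_rng(0)

# A large collection of correlated 1-D kernels (rows). In practice these could
# be per-detector beams; here they are random combinations of 5 base shapes,
# so the collection compresses perfectly at rank 5.
n_kernels, klen, ndata = 200, 33, 1024
base_shapes = rng.normal(size=(5, klen))
kernels = rng.normal(size=(n_kernels, 5)) @ base_shapes

data = rng.normal(size=ndata)

# Step 1: linear compression of the kernel set into an optimal eigenbasis.
u, s, vt = np.linalg.svd(kernels, full_matrices=False)
n_keep = 5
coeffs = u[:, :n_keep] * s[:n_keep]   # (n_kernels, n_keep) expansion coefficients
eigen_kernels = vt[:n_keep]           # (n_keep, klen)

# Step 2: convolve the data once per eigen-kernel
# (n_keep convolutions instead of n_kernels).
eigen_conv = np.array([np.convolve(data, ek, mode="same") for ek in eigen_kernels])

# Step 3: decompress any kernel's convolution as a linear combination
# of the precomputed eigen-convolutions.
approx = coeffs @ eigen_conv

direct = np.array([np.convolve(data, k, mode="same") for k in kernels])
print("max reconstruction error:", np.abs(approx - direct).max())
```

With 200 kernels of effective rank 5, step 2 does 40x fewer convolutions than the direct computation; real kernel collections are only approximately low-rank, so `n_keep` trades speed against accuracy as the abstract describes.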

  16. Hydrodynamics of sediment threshold

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2016-07-01

    A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting over a closely packed bed formed by the identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micro-mechanics. The sediment threshold is envisioned as the rolling mode, which is the plausible mode to initiate a particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) has an excellent agreement with the experimental data of uniform sediments. However, most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to the hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with the experimental data of nonuniform sediments.

  17. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    SciTech Connect

    Motoyama, Kazutaka; Morata, Oscar; Hasegawa, Tatsuhiko; Shang, Hsien; Krasnopolsky, Ruben

    2015-07-20

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.

  18. Lossless wavelet compression on medical image

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    An increasing amount of medical imagery is created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods, both for lossy (irreversible) and lossless (reversible) image compression, are proposed in the literature. Recent advances in lossy compression techniques include different methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit the perfect reconstruction of the original image, but the achievable compression ratios are only of the order 2:1, up to 4:1. In our paper, we use a lifting scheme to generate truly lossless non-linear integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. 
    Therefore, a compression scheme generating an embedded code can
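The integer-to-integer lifting idea mentioned in the abstract can be illustrated with its simplest instance, the Haar (S-transform) lifting pair. This sketch is mine, not the authors' transform: the floor in the update step is undone exactly on reconstruction, which is what makes the scheme truly lossless.

```python
def haar_lift_forward(x):
    """Integer-to-integer Haar transform via lifting (the S-transform).
    x must have even length and integer entries."""
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]           # predict step
    approx = [e + (d >> 1) for e, d in zip(evens, detail)]  # update step (floor)
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Exact inverse: undo the update step, then the predict step."""
    evens = [a - (d >> 1) for a, d in zip(approx, detail)]
    odds = [e + d for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out += [e, o]
    return out

x = [5, -3, 2, 2, 7, 0, 1, 9]
approx, detail = haar_lift_forward(x)
print(approx, detail, haar_lift_inverse(approx, detail) == x)
```

Multi-level transforms simply recurse on the `approx` band; the same predict/update pattern with longer filters yields the integer 5/3 wavelet used in lossless JPEG 2000.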

  19. Warm dense matter: another application for pulsed power hydrodynamics

    SciTech Connect

    Reinovsky, Robert Emil

    2009-01-01

    Pulsed Power Hydrodynamics (PPH) is an application of low-impedance pulsed power and high-magnetic-field technology to the study of advanced hydrodynamic problems, instabilities, turbulence, and material properties. PPH can potentially be applied to the study of the properties of warm dense matter (WDM) as well. Exploration of the properties of warm dense matter, such as the equation of state, viscosity, and conductivity, is an emerging area of study focused on the behavior of matter at densities near solid density (from 10% of solid density to slightly above solid density) and modest temperatures (~1-10 eV). Conditions characteristic of WDM are difficult to obtain, and even more difficult to diagnose. One approach to producing WDM uses laser or particle beam heating of very small quantities of matter on timescales short compared to the subsequent hydrodynamic expansion timescales (isochoric heating), and a vigorous community of researchers is applying these techniques. Pulsed power hydrodynamic techniques, such as large-convergence liner compression of a large volume of modest-density, low-temperature plasma to densities approaching solid density, or multiple shock compression and heating of normal-density material between a massive, high-density, energetic liner and a high-density central 'anvil', are possible ways to reach relevant conditions. Another avenue to WDM conditions is through the explosion and subsequent expansion of a conductor (wire) against a high-pressure (density) gas background (isobaric expansion). However, both techniques demand substantial energy, proper power conditioning and delivery, and an understanding of the hydrodynamic and instability processes that limit each technique. In this paper we examine the challenges to pulsed power technology and to pulsed power systems presented by the opportunity to explore this interesting region of parameter space.

  20. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    SciTech Connect

    Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
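The rate-of-convergence estimation mentioned above follows a standard recipe not detailed in the report: compare errors against the analytical solution on successively refined meshes. A generic sketch (the function names and the synthetic second-order data are mine):

```python
import math

def observed_order(errors, h):
    """Observed order of accuracy p from errors on successively refined meshes.

    For each consecutive mesh pair,
        p = log(e_coarse / e_fine) / log(h_coarse / h_fine).
    """
    return [math.log(errors[i] / errors[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(errors) - 1)]

# Synthetic example: a second-order scheme has e ~ C * h^2,
# so the observed order should come out as 2 on every pair.
h = [0.1, 0.05, 0.025]
errors = [4.0e-3 * hi**2 for hi in h]
print(observed_order(errors, h))
```

Agreement between the observed order and the scheme's formal order on the verification problems is the quantitative evidence of correct implementation that the report refers to.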

  1. Hydrodynamic Elastic Magneto Plastic

    SciTech Connect

    Wilkins, M. L.; Levatin, J. A.

    1985-02-01

    The HEMP code solves the conservation equations of two-dimensional elastic-plastic flow, in plane x-y coordinates or in cylindrical symmetry around the x-axis. Provisions for calculation of fixed boundaries, free surfaces, pistons, and boundary slide planes have been included, along with other special conditions.

  2. Hydrodynamic body shape analysis and their impact on swimming performance.

    PubMed

    Li, Tian-Zeng; Zhan, Jie-Min

    2015-01-01

    This study presents the hydrodynamic characteristics of different adult male swimmers' body shapes using a computational fluid dynamics method. The simulation strategy is carried out with the CFD code Fluent, solving the 3D incompressible Navier-Stokes equations with the RNG k-ε turbulence closure. The water free surface is captured by the volume of fluid (VOF) method. A set of full-body models, based on the anthropometrical characteristics of the most common male swimmers, is created in the Computer Aided Industrial Design (CAID) software Rhinoceros. The analysis of the CFD results revealed that a swimmer's body shape has a noticeable effect on hydrodynamic performance. This explains why a male swimmer with an inverted-triangle body shape has good hydrodynamic characteristics for competitive swimming.

  3. Hydrodynamical noise and Gubser flow

    NASA Astrophysics Data System (ADS)

    Yan, Li; Grönqvist, Hanna

    2016-03-01

    Hydrodynamical noise is introduced on top of Gubser's analytical solution to viscous hydrodynamics. With respect to the ultra-central collision events of Pb-Pb, p-Pb and p-p at the LHC energies, we solve the evolution of noisy fluid systems and calculate the radial flow velocity correlations. We show that the absolute amplitude of the hydrodynamical noise is determined by the multiplicity of the collision event. The evolution of azimuthal anisotropies, which is related to the generation of harmonic flow, receives finite enhancements from hydrodynamical noise. Although it is strongest in the p-p systems, the effect of hydrodynamical noise on flow harmonics is found to be negligible, especially in the ultra-central Pb-Pb collisions. For the short-range correlations, hydrodynamical noise contributes to the formation of a near-side peak on top of the correlation structure originating from initial state fluctuations. The shape of the peak is affected by the strength of hydrodynamical noise, whose height and width grow from the Pb-Pb system to the p-Pb and p-p systems.

  4. Recent development of hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Hirano, Tetsufumi

    2014-09-01

    In this talk, I give an overview of recent developments in hydrodynamic modeling of high-energy nuclear collisions. First, I briefly discuss the current situation of hydrodynamic modeling by showing results from the integrated dynamical approach, in which Monte Carlo calculation of initial conditions, quark-gluon fluid dynamics, and hadronic cascading are combined. In particular, I focus on rescattering effects of strange hadrons on final observables. Next I highlight three topics in recent development in hydrodynamic modeling: (1) medium response to jet propagation in di-jet asymmetric events, (2) causal hydrodynamic fluctuation and its application to Bjorken expansion, and (3) chiral magnetic wave from anomalous hydrodynamic simulations. (1) Recent CMS data suggest the existence of a QGP response to the propagation of jets. To investigate this phenomenon, we solve hydrodynamic equations with a source term which describes the deposition of energy and momentum from jets. We find that a large number of low-momentum particles are emitted at large angles from the jet axis. This gives a novel interpretation of the CMS data. (2) It has been claimed that the matter created even in p-p/p-A collisions may behave like a fluid. However, fluctuation effects would be important in such a small system. We formulate relativistic fluctuating hydrodynamics and apply it to Bjorken expansion. We find that the final multiplicity fluctuates around the mean value even if the initial condition is fixed. This effect is relatively important in peripheral A-A collisions and p-p/p-A collisions. (3) Anomalous transport of the quark-gluon fluid is predicted when an extremely high magnetic field is applied. We investigate this possibility by solving anomalous hydrodynamic equations. We find that a difference in the elliptic flow parameter between positive and negative particles appears due to the chiral magnetic wave. Finally, I provide a personal perspective on hydrodynamic modeling of high-energy nuclear collisions.

  5. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
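The two code families named above are easy to construct. The sketch below builds Golomb and order-0 exponential-Golomb codewords and applies them to the run-length parsing described in the abstract; the fixed parameter `m=4` and the value mapping `v - 1` are illustrative choices, not the adaptive parameter selection the method actually uses.

```python
def golomb_encode(n, m):
    """Golomb codeword (bit string) for nonnegative integer n, parameter m >= 1:
    unary quotient, then truncated-binary remainder."""
    q, r = divmod(n, m)
    unary = "1" * q + "0"
    k = (m - 1).bit_length()          # ceil(log2 m); 0 when m == 1
    if k == 0:                        # m == 1: pure unary code
        return unary
    c = (1 << k) - m                  # truncated-binary threshold
    if r < c:
        return unary + format(r, "b").zfill(k - 1)
    return unary + format(r + c, "b").zfill(k)

def exp_golomb_encode(n):
    """Order-0 exponential-Golomb codeword for nonnegative integer n:
    (length-1) zeros, then n+1 in binary."""
    b = format(n + 1, "b")
    return "0" * (len(b) - 1) + b

def encode_stream(values, m=4):
    """Run-length coding in the spirit of the abstract: each zero run's length
    gets an exponential-Golomb code, each terminating (positive) value a
    Golomb code, mapped to a nonnegative integer via v - 1."""
    bits, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            bits.append(exp_golomb_encode(run))   # length of preceding zero run
            bits.append(golomb_encode(v - 1, m))  # nonzero value
            run = 0
    return "".join(bits)

print(encode_stream([0, 0, 3, 0, 1]))
```

Both families are prefix-free, so the concatenated bit stream is uniquely decodable once the decoder knows the code parameters, which is why the adaptive parameter choice can be mirrored on the decoder side from previously decoded values.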

  6. Data compression techniques and applications

    NASA Astrophysics Data System (ADS)

    Benelli, G.; Cappellini, V.; Lotti, F.

    1980-02-01

    The paper reviews several data compression methods for signal and image digital processing and transmission, including both established and more recent techniques. Attention is also given to methods of prediction-interpolation, differential pulse code modulation, delta modulation and transformations. The processing of two dimensional data is also considered, and the results of the application of these techniques to space telemetry and biomedical digital signal processing and telemetry systems are presented.
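Of the techniques listed above, differential pulse code modulation (DPCM) is the simplest to show concretely. The sketch below (my own toy version, first-order prediction with a uniform quantizer) illustrates the key design point: the predictor uses the previously *reconstructed* sample, so encoder and decoder stay in sync even when quantization is lossy.

```python
def dpcm_encode(samples, step=1):
    """First-order DPCM: quantize the difference between each sample and the
    prediction (the previously reconstructed sample)."""
    residuals, recon = [], 0
    for s in samples:
        q = round((s - recon) / step)   # quantized prediction error
        residuals.append(q)
        recon += q * step               # decoder-side reconstruction
    return residuals

def dpcm_decode(residuals, step=1):
    out, recon = [], 0
    for q in residuals:
        recon += q * step
        out.append(recon)
    return out

signal = [3, 5, 4, 4, 7]
codes = dpcm_encode(signal)
print(codes, dpcm_decode(codes))   # step=1 on integer input is lossless
```

The residuals cluster near zero for correlated signals such as telemetry or biomedical traces, which is what makes them cheaper to entropy-code than the raw samples.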

  7. Special Relativistic Hydrodynamics with Gravitation

    NASA Astrophysics Data System (ADS)

    Hwang, Jai-chan; Noh, Hyerim

    2016-12-01

    Special relativistic hydrodynamics with weak gravity has hitherto been unknown in the literature. Whether such an asymmetric combination is possible has been unclear. Here, the hydrodynamic equations with Poisson-type gravity, considering fully relativistic velocity and pressure under the weak gravity and the action-at-a-distance limit, are consistently derived from Einstein’s theory of general relativity. An analysis is made in the maximal slicing, where the Poisson’s equation becomes much simpler than in our previous study in the zero-shear gauge. Also presented are the hydrodynamic equations in the first post-Newtonian approximation, now under the general hypersurface condition. Our formulation includes the anisotropic stress.

  8. Constraining relativistic viscous hydrodynamical evolution

    SciTech Connect

    Martinez, Mauricio; Strickland, Michael

    2009-04-15

    We show that by requiring positivity of the longitudinal pressure it is possible to constrain the initial conditions one can use in second-order viscous hydrodynamical simulations of ultrarelativistic heavy-ion collisions. We demonstrate this explicitly for (0+1)-dimensional viscous hydrodynamics and discuss how the constraint extends to higher dimensions. Additionally, we present an analytic approximation to the solution of (0+1)-dimensional second-order viscous hydrodynamical evolution equations appropriate to describe the evolution of matter in an ultrarelativistic heavy-ion collision.
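The paper above treats the second-order viscous case; as a minimal baseline sketch (mine, not the authors' analytic approximation), the ideal (0+1)-dimensional Bjorken equation dε/dτ = -(ε + p)/τ with the conformal equation of state p = ε/3 can be integrated numerically and checked against the exact scaling ε ∝ τ^(-4/3):

```python
def rhs(tau, eps):
    """d(eps)/d(tau) for ideal (0+1)-dimensional Bjorken flow, p = eps/3."""
    return -(eps + eps / 3.0) / tau

def evolve(eps0, tau0, tau1, n=10000):
    """Fourth-order Runge-Kutta integration of the energy density."""
    dt = (tau1 - tau0) / n
    tau, eps = tau0, eps0
    for _ in range(n):
        k1 = rhs(tau, eps)
        k2 = rhs(tau + dt / 2, eps + dt / 2 * k1)
        k3 = rhs(tau + dt / 2, eps + dt / 2 * k2)
        k4 = rhs(tau + dt, eps + dt * k3)
        eps += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        tau += dt
    return eps

# Exact ideal solution: eps(tau) = eps0 * (tau0 / tau)**(4/3)
eps_num = evolve(1.0, 0.5, 5.0)
eps_exact = (0.5 / 5.0) ** (4.0 / 3.0)
print(eps_num, eps_exact)
```

The viscous second-order equations add the shear-pressure evolution on top of this, and it is the interplay of those extra terms with the initial conditions that the positivity constraint of the paper restricts.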

  9. Embedded foveation image coding.

    PubMed

    Wang, Z; Bovik, A C

    2001-01-01

    The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.

  10. Extreme hydrodynamic load calculations for fixed steel structures

    SciTech Connect

    Jong, P.R. de; Vugts, J.; Gudmestad, O.T.

    1996-12-31

    This paper discusses the expected differences between the planned ISO code for design of offshore structures and the present Standard Norwegian Practice (SNP), concerning the extreme hydrodynamic design load calculation for fixed steel space frame structures. Since the ISO code is expected to be similar to the API RP2A LRFD code, the provisions of API RP2A LRFD are used to represent the ISO standard. It should be noted that the new ISO code may include NewWave theory, in addition to the wave theories recommended by the API. Design loads and associated failure probabilities resulting from the application of the code provisions are compared for a typical North Sea structure, the Europipe riser platform 16/11-E.
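The abstract compares code provisions without giving formulas; for a flavor of the underlying member-load calculation, both API RP2A and Norwegian practice base wave loads on slender members on the Morison equation (drag plus inertia terms). The coefficients and kinematics below are illustrative assumptions of mine, not values from either code.

```python
import math

RHO = 1025.0   # seawater density, kg/m^3

def morison_force_per_length(u, a, diameter, cd=1.05, cm=1.2):
    """Hydrodynamic force per unit length (N/m) on a slender vertical cylinder,
    from the Morison equation: F' = 0.5*rho*Cd*D*u|u| + rho*Cm*(pi*D^2/4)*a,
    with u and a the horizontal water-particle velocity and acceleration."""
    drag = 0.5 * RHO * cd * diameter * u * abs(u)
    inertia = RHO * cm * math.pi * diameter**2 / 4.0 * a
    return drag + inertia

# Illustrative numbers: 1.2 m member, u = 2.5 m/s, a = 1.5 m/s^2
print(morison_force_per_length(2.5, 1.5, 1.2))
```

The design-load differences discussed in the paper come largely from how each code prescribes the wave theory supplying u and a, the drag/inertia coefficients, and the load and resistance factors, rather than from the Morison form itself.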

  11. A secure and efficient entropy coding based on arithmetic coding

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Zhang, Jiashu

    2009-12-01

    A novel secure arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed to build a pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, and data compression with entropy optimality is achieved simultaneously. This modification of the arithmetic coding methodology, which also provides security, is easy to incorporate into international image and video standards as the last entropy coding stage without changing the existing framework. Theoretical analysis and numerical simulations, on both static and adaptive models, show that the proposed encryption algorithm achieves high security without loss of compression efficiency or additional computational burden with respect to a standard AC.
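For readers unfamiliar with the baseline being modified: standard arithmetic coding narrows an interval [low, high) by each symbol's probability sub-interval. The toy float-based coder below (mine, with no encryption layer and suitable only for short messages within double precision) shows the interval mechanics that the proposed scheme keys to the NDF-PRNG:

```python
from collections import Counter

def build_model(message):
    """Static model: map each symbol to its cumulative probability interval."""
    counts = Counter(message)
    total = len(message)
    intervals, lo = {}, 0.0
    for sym in sorted(counts):
        p = counts[sym] / total
        intervals[sym] = (lo, lo + p)
        lo += p
    return intervals

def ac_encode(message, intervals):
    """Narrow [low, high) by each symbol's sub-interval; emit a number inside."""
    low, high = 0.0, 1.0
    for sym in message:
        a, b = intervals[sym]
        low, high = low + (high - low) * a, low + (high - low) * b
    return (low + high) / 2.0

def ac_decode(value, intervals, length):
    out, low, high = [], 0.0, 1.0
    for _ in range(length):
        t = (value - low) / (high - low)   # position within current interval
        for sym, (a, b) in intervals.items():
            if a <= t < b:
                out.append(sym)
                low, high = low + (high - low) * a, low + (high - low) * b
                break
    return "".join(out)

msg = "ABRACADABRA"
model = build_model(msg)
code = ac_encode(msg, model)
print(code, ac_decode(code, model, len(msg)))
```

Production coders use renormalized integer arithmetic instead of floats; the scheme in the paper keeps this machinery and additionally permutes the sub-interval ordering per iteration under control of the plaintext-keyed NDF.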

  12. Lotic Water Hydrodynamic Model

    SciTech Connect

    Judi, David Ryan; Tasseff, Byron Alexander

    2015-01-23

    Water-related natural disasters, for example, floods and droughts, are among the most frequent and costly natural hazards, both socially and economically. Many of these floods are a result of excess rainfall collecting in streams and rivers, and subsequently overtopping banks and flowing overland into urban environments. Floods can cause physical damage to critical infrastructure and present health risks through the spread of waterborne diseases. Los Alamos National Laboratory (LANL) has developed Lotic, a state-of-the-art surface water hydrodynamic model, to simulate propagation of flood waves originating from a variety of events. Lotic is a two-dimensional (2D) flood model that has been used primarily for simulations in which overland water flows are characterized by movement in two dimensions, such as flood waves expected from rainfall-runoff events, storm surge, and tsunamis. In 2013, LANL developers enhanced Lotic through several development efforts. These developments included enhancements to the 2D simulation engine, including numerical formulation, computational efficiency developments, and visualization. Stakeholders can use simulation results to estimate infrastructure damage and cascading consequences within other sets of infrastructure, as well as to inform the development of flood mitigation strategies.

  13. Two-dimensional radiation-hydrodynamic calculations for a nominal 1-Mt nuclear explosion near the ground

    SciTech Connect

    Horak, H.G.; Jones, E.M.; Sandford, M.T. II; Whitaker, R.W.; Anderson, R.C.; Kodis, J.W.

    1982-03-01

    The two-dimensional radiation-hydrodynamic code SN-YAQUI was used to calculate the evolution of a hypothetical nuclear fireball of 1-Mt yield at a burst altitude of 500 m. The ground-reflected shock wave interacts strongly with the fireball and induces the early formation of a rapidly rotating ring-shaped vortex. The hydrodynamic and radiation phenomena are discussed.

  14. Hydrodynamic model for picosecond propagation of laser-created nanoplasmas

    NASA Astrophysics Data System (ADS)

    Saxena, Vikrant; Jurek, Zoltan; Ziaja, Beata; Santra, Robin

    2015-06-01

    The interaction of a free-electron-laser pulse with a moderate- or large-sized cluster is known to create a quasi-neutral nanoplasma, which then expands on hydrodynamic timescales, i.e., > 1 ps. To better understand ion and electron data from experiments on laser-irradiated clusters, one needs to simulate cluster dynamics on such long timescales, for which the molecular dynamics approach becomes inefficient. We therefore propose a two-step molecular dynamics-hydrodynamics scheme. In the first step we use a molecular dynamics code to follow the dynamics of an irradiated cluster until all the photo-excitation and corresponding relaxation processes are finished and a nanoplasma, consisting of ground-state ions and thermalized electrons, is formed. In the second step we perform long-timescale propagation of this nanoplasma with a computationally efficient hydrodynamic approach. In the present paper we examine the feasibility of a hydrodynamic two-fluid approach to follow the expansion of a spherically symmetric nanoplasma, without accounting for the impact ionization and three-body recombination processes at this stage. We compare our results with the corresponding molecular dynamics simulations. We show that all relevant information about the nanoplasma propagation can be extracted from hydrodynamic simulations at a significantly lower computational cost than with a molecular dynamics approach. Finally, we comment on the accuracy and limitations of our present model and discuss possible future developments of the two-step strategy.

  15. Supernova-relevant hydrodynamic instability experiments on the Nova Laser

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Wallace, R.; Mangan, R.; Rubenchik, A.; Fryxell, B.A.

    1997-04-18

    Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. The target consists of a two-layer planar package composed of 85 microns of Cu backed by 500 microns of CH2, with a single-mode sinusoidal perturbation at the interface of wavelength 200 microns and amplitude 20 microns. The Nova laser is used to generate a 10-15 Mbar (10-15x10{sup 12} dynes/cm2) shock at the interface, which triggers perturbation growth due to the Richtmyer-Meshkov instability, followed by the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few x10{sup 3} s. The experiment is modeled using the hydrodynamic codes HYADES and CALE, and the supernova code PROMETHEUS. We are designing experiments to test the differences in the growth of 2D vs 3D single-mode perturbations; such differences may help explain the high observed velocities of radioactive core material in SN1987A. Results of the experiments and simulations are presented.

  16. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to some 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. A lossless compression scheme usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only the decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation under a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense theoretically. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
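    The transform-quantize-entropy-code chain described above can be illustrated in miniature. The sketch below is a toy example, not the WSQ algorithm itself: it substitutes a one-level 1-D Haar transform for the 9/7 filter bank and the 64-subband decomposition, and stops at the quantized integers that an entropy coder would consume.

```python
# Toy WSQ-style pipeline: wavelet transform, uniform scalar quantization,
# reconstruction. The Haar filters and step sizes are illustrative only.

def haar_forward(signal):
    """One-level Haar transform: pairwise averages (low band) and differences (high band)."""
    lo = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    hi = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return lo, hi

def haar_inverse(lo, hi):
    out = []
    for a, d in zip(lo, hi):
        out.extend([a + d, a - d])
    return out

def quantize(coeffs, step):
    """Uniform scalar quantization: map each coefficient to an integer bin index."""
    return [round(c / step) for c in coeffs]

def dequantize(bins, step):
    return [b * step for b in bins]

pixels = [100, 104, 98, 96, 120, 124, 118, 116]
lo, hi = haar_forward(pixels)
# Detail coefficients are small, so a coarse step there costs little fidelity;
# the integer bins are what an entropy coder (Huffman in WSQ) would compress.
q_lo, q_hi = quantize(lo, 2), quantize(hi, 8)
rec = haar_inverse(dequantize(q_lo, 2), dequantize(q_hi, 8))
```

    With these step sizes every reconstructed pixel lands within 2 grey levels of the original, the kind of lossy-but-controlled behaviour a 20:1 target relies on.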

  17. Multigrid semi-implicit hydrodynamics revisited

    SciTech Connect

    Dendy, J.E.

    1983-01-01

    The multigrid method has for several years been very successful for simple equations like Laplace's equation on a rectangle. For more complicated situations, however, success has been more elusive. Indeed, there are only a few applications in which the multigrid method is now being successfully used in complicated production codes. The one with which we are most familiar is the application by Alcouffe to TTDAMG, in which, for a set of test problems, TTDAMG ran seven to twenty times less expensively (on a CRAY-1 computer) than its best competitor. This impressive performance, in a field where a factor of two improvement is considered significant, encourages one to attempt the application of the multigrid method in other complicated situations. The application discussed in this paper was actually attempted several years ago. In that paper the multigrid method was applied to the pressure iteration in three Eulerian and Lagrangian codes. The application to the Eulerian codes, both incompressible and compressible, was successful, but the application to the Lagrangian code was less so. The reason given for this lack of success was that the differencing for the pressure equation in the Lagrangian code, SALE, was bad. In this paper, we re-examine the application of multigrid to the pressure equation in SALE with the goal of succeeding this time without cheating.

  18. Reciprocal relations in dissipationless hydrodynamics

    SciTech Connect

    Melnikovsky, L. A.

    2014-12-15

    Hidden symmetry in dissipationless terms of arbitrary hydrodynamics equations is recognized. We demonstrate that all fluxes are generated by a single function and derive conventional Euler equations using the proposed formalism.

  19. Detection of the Compressed Primary Stellar Wind in eta Carinae

    NASA Technical Reports Server (NTRS)

    Teodoro, Mairan Macedo; Madura, Thomas I.; Gull, Theodore R.; Corcoran, Michael F.; Hamaguchi, K.

    2014-01-01

    A series of three HST/STIS spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from eta Carinae. We identify these arcs with the shell-like structures, seen in the 3D hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.

  20. Respiratory sounds compression.

    PubMed

    Yadollahi, Azadeh; Moussavi, Zahra

    2008-04-01

    Recently, with the advances in digital signal processing, compression of biomedical signals has received great attention for telemedicine applications. In this paper, an adaptive transform-coding-based method for compression of respiratory and swallowing sounds is proposed. Using special characteristics of respiratory sounds, the recorded signals are divided into stationary and nonstationary portions, and two different bit allocation methods (BAMs) are designed for each portion. The method was applied to the data of 12 subjects and its performance in terms of overall signal-to-noise ratio (SNR) values was calculated at different bit rates. The performance of different quantizers was also considered and the sensitivity of the quantizers to initial conditions was alleviated. In addition, the fuzzy clustering method was examined for classifying the signal into different numbers of clusters and investigating the performance of the adaptive BAM as the number of classes increases. Furthermore, the effects of assigning different numbers of bits for encoding stationary and nonstationary portions of the signal were studied. The adaptive BAM with a variable number of bits was found to improve the SNR values of the fixed BAM by 5 dB. Lastly, the possibility of removing the training part for finding the parameters of adaptive BAMs for each individual was investigated. The results indicate that it is possible to use a predefined set of BAMs for all subjects and remove the training part completely. Moreover, the method is fast enough to be implemented for real-time applications.
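    A bit allocation method of this kind typically splits a fixed bit budget across transform bands according to their variances. The sketch below uses the classical high-rate log-variance rule with a greedy fix-up; it is a generic illustration of a BAM, not the specific BAMs designed in the paper.

```python
import math

def allocate_bits(variances, total_bits):
    """High-rate bit allocation: each band receives the average rate plus half
    the log2 ratio of its variance to the geometric-mean variance. Negative or
    fractional allocations are clipped/rounded, then a greedy pass restores
    the exact budget."""
    n = len(variances)
    geo_mean = math.exp(sum(math.log(v) for v in variances) / n)
    raw = [total_bits / n + 0.5 * math.log2(v / geo_mean) for v in variances]
    bits = [max(0, round(b)) for b in raw]
    while sum(bits) > total_bits:   # over budget: trim the largest band
        bits[bits.index(max(bits))] -= 1
    while sum(bits) < total_bits:   # budget left over: feed the largest band
        bits[bits.index(max(bits))] += 1
    return bits
```

    High-variance (nonstationary) bands draw bits away from quiet ones, which is why designing separate allocations for stationary and nonstationary portions pays off.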

  1. Predictive depth coding of wavelet transformed images

    NASA Astrophysics Data System (ADS)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction-based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid decomposition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization, and then coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers the spatial neighbors of the coefficient to be predicted and the corresponding coefficients on the coarser scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested on a standard set of images and the results are compared with SFQ, SPIHT, EZW and context-based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.
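    The quantity being predicted, the number of significant bits of a quantized coefficient, is cheap to compute, and only the prediction error needs to be entropy-coded. The snippet below is a hedged sketch: the three-neighbour averaging context is hypothetical, standing in for the adaptively found linear prediction context of the paper.

```python
def bit_depth(q):
    """Number of significant bits in |q|; 0 for a zero coefficient."""
    return abs(q).bit_length()

def predict_depth(left, above, parent):
    """Hypothetical context: average the depths of two spatial neighbours and
    the corresponding parent coefficient on the coarser scale."""
    return round((bit_depth(left) + bit_depth(above) + bit_depth(parent)) / 3)

# Only the (usually small) prediction error goes to the arithmetic coder,
# together with the sign and the remaining bits of non-zero coefficients.
coeff, left, above, parent = -13, 9, 6, 22
error = bit_depth(coeff) - predict_depth(left, above, parent)
```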

  2. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  3. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
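    The one-pass, source-agnostic behaviour described above can be sketched as follows. This is an illustrative nearest-codeword scheme in the spirit of LAVQ, not the published algorithm: the squared-distance threshold and the grow-on-miss rule are assumptions.

```python
def lavq_encode(vectors, threshold):
    """One-pass adaptive VQ sketch: emit the index of the nearest codeword when
    it lies within `threshold` (squared distance); otherwise add the input
    vector as a new codeword. No prior source statistics are required."""
    codebook, indices = [], []
    for v in vectors:
        best, best_d = None, None
        for i, c in enumerate(codebook):
            d = sum((a - b) ** 2 for a, b in zip(v, c))
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= threshold:
            indices.append(best)            # close enough: reuse existing codeword
        else:
            codebook.append(list(v))        # novel feature: preserve it exactly
            indices.append(len(codebook) - 1)
    return indices, codebook
```

    Because a vector that matches nothing is stored verbatim, fine detail survives, which mirrors the feature-preservation property the abstract highlights.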

  4. Numerical modelling of spallation in 2D hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Maw, J. R.; Giles, A. R.

    1996-05-01

    A model for spallation based on the void growth model of Johnson has been implemented in 2D Lagrangian and Eulerian hydrocodes. The model has been extended to treat complete separation of material when voids coalesce and to describe the effects of elevated temperatures and melting. The capabilities of the model are illustrated by comparison with data from explosively generated spall experiments. Particular emphasis is placed on the prediction of multiple spall effects in weak, low melting point, materials such as lead. The correlation between the model predictions and observations on the strain rate dependence of spall strength is discussed.

  5. A modified Henyey method for computing radiative transfer hydrodynamics

    NASA Technical Reports Server (NTRS)

    Karp, A. H.

    1975-01-01

    The implicit hydrodynamic code of Kutter and Sparks (1972), which is limited to optically thick regions and employs the diffusion approximation for radiative transfer, is modified to include radiative transfer effects in the optically thin regions of a model star. A modified Henyey method is used to include the solution of the radiative transfer equation in this implicit code, and the convergence properties of this method are proven. A comparison is made between two hydrodynamic models of a classical Cepheid with a 12-day period, one of which was computed with the diffusion approximation and the other with the modified Henyey method. It is found that the two models produce nearly identical light and velocity curves, but differ in the fact that the former never has temperature inversions in the atmosphere while the latter does when sufficiently strong shocks are present.

  6. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  7. Two-layer and Adaptive Entropy Coding Algorithms for H.264-based Lossless Image Coding

    DTIC Science & Technology

    2008-04-01

    adaptive binary arithmetic coding (CABAC) [7], and context-based adaptive variable length coding (CAVLC) [3], should be adaptively adopted for advancing...Sep. 2006. [7] H. Schwarz, D. Marpe and T. Wiegand, Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard, IEEE

  8. Influence of the equation of state on the compression and heating of hydrogen

    NASA Astrophysics Data System (ADS)

    Tahir, N. A.; Juranek, H.; Shutov, A.; Redmer, R.; Piriz, A. R.; Temporal, M.; Varentsov, D.; Udrea, S.; Hoffmann, D. H.; Deutsch, C.; Lomonosov, I.; Fortov, V. E.

    2003-05-01

    This paper presents two-dimensional hydrodynamic simulations of the implosion of a multilayered cylindrical target driven by an intense heavy ion beam with an annular focal spot. The target consists of a hollow lead cylinder filled with hydrogen at one tenth of solid density at room temperature. The beam is assumed to be made of 2.7-GeV/u uranium ions, and six different cases for the beam intensity (total number of particles in the beam, N) are considered. In each of these six cases the particles are delivered in single bunches 20 ns long. The simulations have been carried out using the two-dimensional hydrodynamic computer code BIG-2. A multiple-shock-reflection scheme is employed in these calculations, which leads to very high densities of the compressed hydrogen while the temperature remains relatively low. In this study we have used two different equation-of-state models for hydrogen: the SESAME data, and a model that includes molecular dissociation, based on a fluid variational theory in the neutral-fluid region and replaced by a Padé approximation in the fully ionized plasma region. Our calculations show that the latter model predicts higher densities and higher pressures but lower temperatures than the SESAME model. The differences in the results are more pronounced for lower driving energies (lower beam intensities).

  9. Slurry bubble column hydrodynamics

    NASA Astrophysics Data System (ADS)

    Rados, Novica

    Slurry bubble column reactors are presently used for a wide range of reactions in both chemical and biochemical industry. The successful design and scale up of slurry bubble column reactors require a complete understanding of multiphase fluid dynamics, i.e. phase mixing, heat and mass transport characteristics. The primary objective of this thesis is to improve presently limited understanding of the gas-liquid-solid slurry bubble column hydrodynamics. The effect of superficial gas velocity (8 to 45 cm/s), pressure (0.1 to 1.0 MPa) and solids loading (20 and 35 wt.%) on the time-averaged solids velocity and turbulent parameter profiles has been studied using Computer Automated Radioactive Particle Tracking (CARPT). To accomplish this, CARPT technique has been significantly improved for the measurements in highly attenuating systems, such as high pressure, high solids loading stainless steel slurry bubble column. At a similar set of operational conditions time-averaged gas and solids holdup profiles have been evaluated using the developed Computed Tomography (CT)/Overall gas holdup procedure. This procedure is based on the combination of the CT scans and the overall gas holdup measurements. The procedure assumes constant solids loading in the radial direction and axially invariant cross-sectionally averaged gas holdup. The obtained experimental holdup, velocity and turbulent parameters data are correlated and compared with the existing low superficial gas velocities and atmospheric pressure CARPT/CT gas-liquid and gas-liquid-solid slurry data. The obtained solids axial velocity radial profiles are compared with the predictions of the one dimensional (1-D) liquid/slurry recirculation phenomenological model. The obtained solids loading axial profiles are compared with the predictions of the Sedimentation and Dispersion Model (SDM). 
The overall gas holdup values, gas holdup radial profiles, solids loading axial profiles, solids axial velocity radial profiles and solids

  10. Improved lossless intra coding for next generation video coding

    NASA Astrophysics Data System (ADS)

    Vanam, Rahul; He, Yuwen; Ye, Yan

    2016-09-01

    Recently, there have been efforts by the ITU-T VCEG and ISO/IEC MPEG to further improve the compression performance of the High Efficiency Video Coding (HEVC) standard for developing a potential next generation video coding standard. The exploratory codec software of this potential standard includes new coding tools for inter and intra coding. In this paper, we present a new intra prediction mode for lossless intra coding. Our new intra mode derives a prediction filter for each input pixel using its neighboring reconstructed pixels, and applies this filter to the nearest neighboring reconstructed pixels to generate a prediction pixel. The proposed intra mode is demonstrated to improve the performance of the exploratory software for lossless intra coding, yielding a maximum and average bitrate savings of 4.4% and 2.11%, respectively.

  11. An analysis of smoothed particle hydrodynamics

    SciTech Connect

    Swegle, J.W.; Attaway, S.W.; Heinstein, M.W.; Mello, F.J.; Hicks, D.L.

    1994-03-01

    SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. In the present study, the SPH algorithm has been subjected to detailed testing and analysis to determine its applicability in the field of solid dynamics. An important result of the work is a rigorous von Neumann stability analysis which provides a simple criterion for the stability or instability of the method in terms of the stress state and the second derivative of the kernel function. Instability, which typically occurs only for solids in tension, results not from the numerical time integration algorithm, but because the SPH algorithm creates an effective stress with a negative modulus. The analysis provides insight into possible methods for removing the instability. Also, SPH has been coupled into the transient dynamics finite element code PRONTO, and a weighted residual derivation of the SPH equations has been obtained.
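    The stability criterion, growth when the product of the stress and the kernel's second derivative is positive, can be checked numerically. The sketch below assumes the standard 1-D cubic spline (M4) kernel; the sign convention (positive stress = tension) follows the abstract.

```python
def cubic_spline_w2(q, h=1.0):
    """Second derivative (w.r.t. r, with q = r/h) of the 1-D cubic spline kernel."""
    s = 2.0 / (3.0 * h)                 # 1-D normalization constant
    if q < 1.0:
        return s * (-3.0 + 4.5 * q) / h ** 2
    if q < 2.0:
        return s * 1.5 * (2.0 - q) / h ** 2
    return 0.0                          # outside the kernel support

def swegle_unstable(stress, q):
    """Instability sketch: growth occurs when stress * W'' > 0, e.g. tension
    at neighbour spacings where the kernel curves upward (q > 2/3)."""
    return stress * cubic_spline_w2(q) > 0
```

    At the typical spacing q = 1 the kernel's second derivative is positive, so tension is flagged unstable while compression is not, matching the observation that the instability typically occurs only for solids in tension.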

  12. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.

  13. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.
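    Progressive bit-plane coding, the second stage of the adopted algorithm, can be sketched compactly. This toy scan emits magnitude bits from the most significant plane down, which is what lets a user cut the stream at any point to trade data volume against fidelity; sign handling and the actual CCSDS plane ordering are omitted.

```python
def bit_planes(coeffs, nbits):
    """Emit the magnitude bits of each coefficient, most significant plane
    first, so truncating the list of planes still yields a coarser but
    valid reconstruction."""
    return [[(abs(c) >> b) & 1 for c in coeffs] for b in range(nbits - 1, -1, -1)]
```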

  14. Compressed bitmap indices for efficient query processing

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2001-09-30

    Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiency of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression; during query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by trading some compression effectiveness for improved operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster while using only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
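    The byte-aligned style of bitmap compression the comparison refers to can be sketched as follows. The token format here is illustrative, not the actual Byte-aligned Bitmap Code: runs of all-zero or all-one bytes collapse into (fill, value, count) tokens and mixed bytes are kept as literals, so the logical operations used in query processing can handle whole runs at once.

```python
def compress_bitmap(bits):
    """Byte-aligned run-length sketch: fill tokens for uniform bytes, literal
    tokens for mixed bytes. Alignment to byte boundaries keeps decoding fast."""
    assert len(bits) % 8 == 0
    tokens = []
    for i in range(0, len(bits), 8):
        byte = bits[i:i + 8]
        if byte == [0] * 8 or byte == [1] * 8:
            fill = byte[0]
            if tokens and tokens[-1][0] == 'fill' and tokens[-1][1] == fill:
                tokens[-1] = ('fill', fill, tokens[-1][2] + 1)   # extend the run
            else:
                tokens.append(('fill', fill, 1))
        else:
            tokens.append(('lit', tuple(byte)))
    return tokens
```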

  15. Adaptable recursive binary entropy coding technique

    NASA Astrophysics Data System (ADS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  16. Load-Induced Hydrodynamic Lubrication of Porous Films.

    PubMed

    Khosla, Tushar; Cremaldi, Joseph; Erickson, Jeffrey S; Pesika, Noshir S

    2015-08-19

    We present an exploratory study of the tribological properties and mechanisms of porous polymer surfaces under applied loads in aqueous media. We show how it is possible to change the lubrication regime from boundary lubrication to hydrodynamic lubrication even at relatively low shearing velocities by the addition of vertical pores to a compliant polymer. It is hypothesized that the compressed, pressurized liquid in the pores produces a repulsive hydrodynamic force as it extrudes from the pores. The presence of the fluid between two shearing surfaces results in low coefficients of friction (μ ≈ 0.31). The coefficient of friction is reduced further by using a boundary lubricant. The tribological properties are studied for a range of applied loads and shear velocities to demonstrate the potential applications of such materials in total joint replacement devices.

  17. Direct simulation of compressible reacting flows

    NASA Technical Reports Server (NTRS)

    Poinsot, Thierry J.

    1989-01-01

    A research program for direct numerical simulation of compressible reacting flows is described. Two main research subjects are proposed: the effect of pressure waves on turbulent combustion, and the use of direct simulation methods to validate flamelet models for turbulent combustion. The value of a compressible code for studying turbulent combustion is emphasized through examples of reacting shear layer and combustion instability studies. The choice of experimental data to compare with direct simulation results is discussed. A tentative program is given, and the computation cases to be used are described, as well as the code validation runs.

  18. Numerical simulations of glass impacts using smooth particle hydrodynamics

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1995-07-01

    As part of a program to develop advanced hydrocode design tools, we have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. We have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass. Since fractured glass properties, which are needed in the model, are not available, we did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  19. One-dimensional hydrodynamic simulation of high energy density experiments

    NASA Astrophysics Data System (ADS)

    Grinenko, A.

    2009-07-01

    A new one-dimensional hydrodynamic code for the simulation of experiments involving the creation of high energy density in matter by means of laser or heavy ion beam irradiation is described. The code uses a well-tested second-order Lagrangian scheme in combination with the flux-limited van Leer convection algorithm for re-mapping to an arbitrary grid. Simple test cases with self-similar solutions are examined. Finally, the heating of solid targets by lasers and ion beams is investigated as examples.

  20. The hydrodynamics of colloidal gelation.

    PubMed

    Varga, Zsigmond; Wang, Gang; Swan, James

    2015-12-14

    Colloidal gels are formed during arrested phase separation. Sub-micron, mutually attractive particles aggregate to form a system spanning network with high interfacial area, far from equilibrium. Models for microstructural evolution during colloidal gelation have often struggled to match experimental results with long standing questions regarding the role of hydrodynamic interactions. In nearly all models, these interactions are neglected entirely. In the present work, we report simulations of gelation with and without hydrodynamic interactions between the suspended particles executed in HOOMD-blue. The disparities between these simulations are striking and mirror the experimental-theoretical mismatch in the literature. The hydrodynamic simulations agree with experimental observations, however. We explore a simple model of the competing transport processes in gelation that anticipates these disparities, and conclude that hydrodynamic forces are essential. Near the gel boundary, there exists a competition between compaction of individual aggregates which suppresses gelation and coagulation of aggregates which enhances it. The time scale for compaction is mildly slowed by hydrodynamic interactions, while the time scale for coagulation is greatly accelerated. This enhancement to coagulation leads to a shift in the gel boundary to lower strengths of attraction and lower particle concentrations when compared to models that neglect hydrodynamic interactions. Away from the gel boundary, differences in the nearest neighbor distribution and fractal dimension persist within gels produced by both simulation methods. This result necessitates a fundamental rethinking of how dynamic, discrete element models for gelation kinetics are developed as well as how collective hydrodynamic interactions influence the arrest of attractive colloidal dispersions.

  1. Transform coding for space applications

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit-rate driven, with the goal of getting the best quality possible with a given bandwidth. Science applications, on the other hand, are quality driven, with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect-quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications also differ from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground), rather than vice versa. Energy compaction with the new transforms is compared with that of the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
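    The Walsh-Hadamard transform mentioned above is the archetypal integer transform: it needs only additions and subtractions, so a simple space-borne encoder can apply it exactly. A minimal in-place butterfly implementation:

```python
def wht(block):
    """Unnormalized Walsh-Hadamard transform via butterflies. Integer in,
    integer out; applying it twice returns the input scaled by len(block)."""
    n = len(block)
    assert n & (n - 1) == 0, "length must be a power of two"
    out = list(block)
    span = 1
    while span < n:
        for start in range(0, n, span * 2):
            for i in range(start, start + span):
                a, b = out[i], out[i + span]
                out[i], out[i + span] = a + b, a - b   # butterfly: sum and difference
        span *= 2
    return out
```

    For smooth data most of the energy lands in the first few coefficients, the energy-compaction property the paper compares across the WHT, DCT, and ICT.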

  2. Quantum rate-distortion coding

    NASA Astrophysics Data System (ADS)

    Barnum, Howard

    2000-10-01

    I introduce rate-distortion theory for the coding of quantum information, and derive a lower bound, involving the coherent information, on the rate at which qubits must be used to store or compress an entangled quantum source with a given maximum level of distortion per source emission.

  3. Ethical coding.

    PubMed

    Resnik, Barry I

    2009-01-01

    It is ethical, legal, and proper for a dermatologist to maximize income through proper coding of patient encounters and procedures. The overzealous physician can misinterpret reimbursement requirements or receive bad advice from other physicians and cross the line from aggressive coding to coding fraud. Several of the more common problem areas are discussed.

  4. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  5. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  6. On-board image compression for the RAE lunar mission

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
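    The combination of scan line skipping and run-length coding described above can be sketched as follows (a toy model with hypothetical parameters, not the RAE-2 flight implementation):

```python
def compress_frame(lines, skip=2):
    """Toy model of the two-stage scheme: keep every `skip`-th scan
    line, then run-length encode each kept line as (value, run) pairs.
    The flight system additionally convolutionally encoded the result
    for error protection, which is omitted here."""
    def rle(row):
        out = []
        run, val = 1, row[0]
        for px in row[1:]:
            if px == val:
                run += 1
            else:
                out.append((val, run))
                val, run = px, 1
        out.append((val, run))
        return out
    return [rle(line) for line in lines[::skip]]

frame = [[0]*6 + [1]*2, [0]*8, [1]*8, [0]*4 + [1]*4]
print(compress_frame(frame))  # [[(0, 6), (1, 2)], [(1, 8)]]
```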

  7. Group-invariant solutions of hydrodynamics and radiation hydrodynamics

    SciTech Connect

    Coggeshall, S.V.

    1993-08-01

    Using the property of invariance under Lie groups of transformations, the equations of hydrodynamics are transformed from partial differential equations to ordinary differential equations, for which special analytic solutions can be found. These particular solutions can be used for (1) numerical benchmarks, (2) the basis for analytic models, and (3) insight into more general solutions. Additionally, group transformations can be used to construct new solutions from existing ones. A space-time projective group is used to generate complicated solutions from simpler solutions. Discussion of these procedures is presented along with examples of analytic of 1,2 and 3-D hydrodynamics.

  8. The unreasonable effectiveness of hydrodynamics in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Noronha-Hostler, Jacquelyn; Noronha, Jorge; Gyulassy, Miklos

    2016-12-01

    Event-by-event hydrodynamic simulations of AA and pA collisions involve initial energy densities with large spatial gradients. This is associated with the presence of large Knudsen numbers (Kn ≈ 1) at early times, which may lead one to question the validity of the hydrodynamic approach in these rapidly evolving, largely inhomogeneous systems. A new procedure to smooth out the initial energy densities is employed to show that the initial spatial eccentricities, εn, are remarkably robust with respect to variations in the underlying scale of initial energy density spatial gradients, λ. For √{sNN} = 2.76 TeV LHC initial conditions generated by the MCKLN code, εn (across centralities) remains nearly constant if the fluctuation scale varies by an order of magnitude, i.e., when λ varies from 0.1 to 1 fm. Given that the local Knudsen number Kn ≈ 1 / λ, the robustness of the initial eccentricities with respect to changes in the fluctuation scale suggests that the vn's cannot be used to distinguish between events with large Kn from events where Kn is in the hydrodynamic regime. We use the 2+1 Lagrangian hydrodynamic code v-USPhydro to show that this is indeed the case: anisotropic flow coefficients computed within event-by-event viscous hydrodynamics are only sensitive to long wavelength scales of order 1 /ΛQCD ≈ 1 fm and are incredibly robust with respect to variations in the initial local Knudsen number. This robustness can be used to justify the somewhat unreasonable effectiveness of the nearly perfect fluid paradigm in heavy ion collisions.

  9. Dependability Improvement for PPM Compressed Data by Using Compression Pattern Matching

    NASA Astrophysics Data System (ADS)

    Kitakami, Masato; Okura, Toshihiro

    Data compression is widely applied in computer systems and communication systems to reduce storage size and communication time, respectively. Because large data sets are used frequently, string matching over them takes a long time, and if the data are compressed it takes much longer because decompression is necessary first. Long string-matching times lengthen computer virus scans and thus seriously affect data security. For this reason, CPM (Compression Pattern Matching) methods have been proposed for several compression methods. This paper proposes a CPM method for PPM that achieves fast virus scanning and improves the dependability of the compressed data; PPM is based on a Markov model, uses context information, and achieves a better compression ratio than the Burrows-Wheeler transform or Ziv-Lempel coding. The proposed method encodes the context information generated during the compression process and appends the encoded data to the beginning of the compressed data as a header; string matching then uses only this header. Computer simulations show that the compression-ratio overhead is less than 5 percent when the PPM order (the maximum context length used in compression) is less than 5 and the source file is larger than 1 MB. String-matching time is independent of the source file size and is very short: less than 0.3 microseconds on the PC used for the simulation.
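    The context information that PPM builds (and that the proposed header encodes) can be illustrated with a minimal order-1 frequency model; real PPM blends several context orders with escape symbols, which this sketch omits:

```python
from collections import defaultdict, Counter

def build_order1_model(text):
    """Order-1 context model of the kind PPM maintains internally:
    for each 1-character context, count the symbols that follow it.
    (Illustrative only; PPM also handles escapes and longer contexts.)"""
    model = defaultdict(Counter)
    for prev, cur in zip(text, text[1:]):
        model[prev][cur] += 1
    return model

model = build_order1_model("abracadabra")
# After 'a' the source emitted: 'b' twice, 'c' once, 'd' once.
print(model['a'].most_common())
```

    A compressor uses these counts to assign short codes to likely successors; the paper's idea is that the same context statistics, stored as a header, also support pattern matching without full decompression.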

  10. Spectrally Adaptable Compressive Sensing Imaging System

    DTIC Science & Technology

    2014-05-01

    2D coded projections. The underlying spectral 3D data cube is then recovered using compressed sensing (CS) reconstruction algorithms. The architecture, introduced in [?], is a remarkable imaging architecture that allows capturing the spectral imaging information of a 3D cube with just a single 2D measurement of the coded and spectrally dispersed source field.

  11. Numerical Hydrodynamics in Special Relativity.

    PubMed

    Martí, José Maria; Müller, Ewald

    2003-01-01

    This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results of a set of demanding test bench simulations obtained with different numerical SRHD methods are compared. Three applications (astrophysical jets, gamma-ray bursts and heavy ion collisions) of relativistic flows are discussed. An evaluation of various SRHD methods is presented, and future developments in SRHD are analyzed involving extension to general relativistic hydrodynamics and relativistic magneto-hydrodynamics. The review further provides FORTRAN programs to compute the exact solution of a 1D relativistic Riemann problem with zero and nonzero tangential velocities, and to simulate 1D relativistic flows in Cartesian Eulerian coordinates using the exact SRHD Riemann solver and PPM reconstruction.

  12. COBRA-NC: a thermal hydraulics code for transient analysis of nuclear reactor components. Volume 2. COBRA-NC numerical solution methods

    SciTech Connect

    Thurgood, M.J.; George, T.L.; Wheeler, C.L.

    1986-04-01

    The COBRA-NC computer program has been developed to predict the thermal-hydraulic response of nuclear reactor components to thermal-hydraulic transients. The code solves the multicomponent, compressible three-dimensional, two-fluid, three-field equations for two-phase flow. The three fields are the vapor field, the continuous liquid field, and the liquid drop field. The code has been used to model flow and heat transfer within the reactor core, the reactor vessel, the steam generators, and in the nuclear containment. This volume describes the finite-volume equations and the numerical solution methods used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of the numerical methods used to obtain a solution to the hydrodynamic equations.

  13. Hydrodynamic behavior of fractal aggregates

    NASA Astrophysics Data System (ADS)

    Wiltzius, Pierre

    1987-02-01

    Measurements of the radius of gyration RG and the hydrodynamic radius RH of colloidal silica aggregates are reported. These aggregates have fractal geometry and RH is proportional to RG for 500 Å ≤ RH ≤ 7000 Å, with a ratio RH/RG = 0.72 ± 0.02. The results are compared with predictions for macromolecules of various shapes. The proportionality of the two radii can be understood with use of the pair correlation function of fractal objects and hydrodynamic interactions on the Oseen level. The value of the ratio remains to be explained.

  14. Abnormal pressures as hydrodynamic phenomena

    USGS Publications Warehouse

    Neuzil, C.E.

    1995-01-01

    So-called abnormal pressures, subsurface fluid pressures significantly higher or lower than hydrostatic, have excited speculation about their origin since subsurface exploration first encountered them. Two distinct conceptual models for abnormal pressures have gained currency among earth scientists. The static model sees abnormal pressures generally as relict features preserved by a virtual absence of fluid flow over geologic time. The hydrodynamic model instead envisions abnormal pressures as phenomena in which flow usually plays an important role. This paper develops the theoretical framework for abnormal pressures as hydrodynamic phenomena, shows that it explains the manifold occurrences of abnormal pressures, and examines the implications of this approach.

  15. Wavelet-based Image Compression using Subband Threshold

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2002-11-01

    Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum-weight subband in each level, which contributes least to image reconstruction, undergoes a threshold process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to observe the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.
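    A minimal sketch of the subband-weight idea, assuming a one-level Haar transform and taking "weight" to be a subband's mean absolute coefficient (an assumption; the paper's exact weight formula is not reproduced here):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform returning LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2          # rows: low-pass
    d = (img[0::2] - img[1::2]) / 2          # rows: high-pass
    return {'LL': (a[:, 0::2] + a[:, 1::2]) / 2,
            'LH': (a[:, 0::2] - a[:, 1::2]) / 2,
            'HL': (d[:, 0::2] + d[:, 1::2]) / 2,
            'HH': (d[:, 0::2] - d[:, 1::2]) / 2}

def threshold_weakest(subbands, thresh):
    """Zero out small coefficients in the minimum-weight detail subband,
    where 'weight' is taken as the mean |coefficient| (hypothetical)."""
    details = {k: v for k, v in subbands.items() if k != 'LL'}
    weakest = min(details, key=lambda k: np.abs(details[k]).mean())
    band = subbands[weakest]
    band[np.abs(band) < thresh] = 0.0
    return weakest

bands = haar2d(np.random.default_rng(0).normal(size=(8, 8)))
print(threshold_weakest(bands, 0.1))
```

    The thresholded coefficients then feed a zerotree coder, where the extra zeros shorten the significance maps.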

  16. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
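    The reason Gray code improves bit-plane correlation is that numerically close values share almost all of their Gray-coded bit planes; a quick illustration:

```python
def gray(b):
    """Binary-reflected Gray code: adjacent integers differ in exactly
    one bit, so a pixel and its slightly different side-information
    value agree in almost every bit plane."""
    return b ^ (b >> 1)

# Pixel value 127 vs side information 128: the natural binary codes
# differ in all 8 bit planes, the Gray codes in just one.
diff_bin = bin(127 ^ 128).count('1')
diff_gray = bin(gray(127) ^ gray(128)).count('1')
print(diff_bin, diff_gray)  # 8 1
```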

  17. An image compression technique for use on token ring networks

    NASA Technical Reports Server (NTRS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-01-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.
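    The differential pulse code modulation idea underlying the coder can be sketched in its simplest, non-adaptive form (illustrative only; the paper's ADPCM coder adapts its quantizer and exploits the synchronous traffic as a side channel):

```python
def dpcm_encode(samples, step=4):
    """Minimal DPCM sketch: transmit the quantized difference between
    each sample and the decoder's prediction, which here is simply the
    previously reconstructed sample."""
    recon, codes = 0, []
    for s in samples:
        q = round((s - recon) / step)   # quantized prediction error
        codes.append(q)
        recon += q * step               # track decoder-side state
    return codes

def dpcm_decode(codes, step=4):
    recon, out = 0, []
    for q in codes:
        recon += q * step
        out.append(recon)
    return out

sig = [0, 8, 12, 16, 12]
print(dpcm_decode(dpcm_encode(sig)))  # reconstruction close to input
```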

  18. Compressing bitmap indexes for faster search operations

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
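    The word-aligned run-length idea behind WAH can be sketched with small 8-bit words (real WAH uses 32-bit machine words, i.e. 31-bit groups, and packs fills and literals into the words themselves; this sketch keeps them as tuples for clarity):

```python
def wah_encode(bits, w=8):
    """Sketch of word-aligned hybrid (WAH) encoding. Each output entry
    is either a literal group of w-1 bits or a fill recording a run of
    identical all-0 or all-1 groups; fills and literals stay aligned to
    the word size, which is what keeps AND/OR on compressed data fast."""
    g = w - 1                                    # bits per group
    groups = [bits[i:i + g] for i in range(0, len(bits), g)]
    out = []
    for grp in groups:
        if grp == '0' * g or grp == '1' * g:     # candidate fill group
            if out and out[-1][0] == 'fill' and out[-1][1] == grp[0]:
                out[-1] = ('fill', grp[0], out[-1][2] + 1)
            else:
                out.append(('fill', grp[0], 1))
        else:
            out.append(('lit', grp))
    return out

print(wah_encode('0000000' * 3 + '1010101'))
# [('fill', '0', 3), ('lit', '1010101')]
```

    Because runs are counted in whole groups, a logical AND can skip over an entire zero-fill in one step instead of scanning it bit by bit.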

  19. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  20. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  1. Incipient transition phenomena in compressible flows over a flat plate

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.

    1986-01-01

    The full three-dimensional time-dependent compressible Navier-Stokes equations are solved by a Fourier-Chebyshev method to study the stability of compressible flows over a flat plate. After the code is validated in the linear regime, it is applied to study the existence of the secondary instability mechanism in the supersonic regime.

  2. An Analog Processor for Image Compression

    NASA Technical Reports Server (NTRS)

    Tawel, R.

    1992-01-01

    This paper describes a novel analog Vector Array Processor (VAP) that was designed for use in real-time and ultra-low-power image compression applications. This custom CMOS processor is based architecturally on the Vector Quantization (VQ) algorithm in image coding, and the hardware implementation fully exploits the inherent parallelism built into the VQ algorithm.
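    The codebook search that the VAP parallelizes is the core VQ operation; a minimal numpy sketch (the codebook and image blocks below are made up for illustration):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Vector quantization: replace each input vector by the index of
    its nearest codeword. The distance computations across the codebook
    are independent, which is the parallelism a VQ chip exploits."""
    # squared distance |v - c|^2 for every (vector, codeword) pair
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
blocks = np.array([[0.1, 0.2], [0.9, 0.8], [0.1, 0.9]])
print(vq_encode(blocks, codebook))  # [0 1 2]
```

    The decoder is trivial (a table lookup), which is why VQ suits asymmetric settings with a cheap encoder and indexes as the transmitted code.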

  3. Improved Techniques for Video Compression and Communication

    ERIC Educational Resources Information Center

    Chen, Haoming

    2016-01-01

    Video compression and communication has been an important field over the past decades and critical for many applications, e.g., video on demand, video-conferencing, and remote education. In many applications, providing low-delay and error-resilient video transmission and increasing the coding efficiency are two major challenges. Low-delay and…

  4. Hydrodynamically mediated macrophyte silica dynamics.

    PubMed

    Schoelynck, J; Bal, K; Puijalon, S; Meire, P; Struyf, E

    2012-11-01

    In most aquatic ecosystems, hydrodynamic conditions are a key abiotic factor determining species distributions and abundance of aquatic plants. Resisting stress and keeping an upright position often relies on investment in tissue reinforcement, which is costly to produce. Silica could provide a more economical alternative. Two laboratory experiments were conducted to measure the response of two submerged species, Egeria densa Planch. and Limnophila heterophylla (Roxb.) Benth., to dissolved silicic acid availability and exposure to hydrodynamic stress. The results were verified with a third species in a field study (Nuphar lutea (L.) Smith). Biogenic silica (BSi) concentration in both stems and leaves increases with increasing dissolved silica availability but also with the presence of hydrodynamic stress. We suggest that the inclusion of extra silica enables the plant to alternatively invest its energy in the production of lignin and cellulose. Although we found no significant effects of hydrodynamic stress on cellulose or lignin concentrations either in the laboratory or in the field, BSi was negatively correlated with cellulose concentration and positively correlated with lignin concentration in samples collected in the field study. This implies that the plant might perform with equal energy efficiency in both standing and running water environments. This could provide submerged species with a tool to respond to abiotic factors, to adapt to new ecological conditions and hence potentially colonise new environments.

  5. Hydrodynamic slip in silicon nanochannels

    NASA Astrophysics Data System (ADS)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2016-03-01

    Equilibrium and nonequilibrium molecular dynamics simulations were performed to better understand the hydrodynamic behavior of water flowing through silicon nanochannels. The water-silicon interaction potential was calibrated by means of size-independent molecular dynamics simulations of silicon wettability. The wettability of silicon was found to be dependent on the strength of the water-silicon interaction and the structure of the underlying surface. As a result, the anisotropy was found to be an important factor in the wettability of these types of crystalline solids. Using this premise as a fundamental starting point, the hydrodynamic slip in nanoconfined water was characterized using both equilibrium and nonequilibrium calculations of the slip length under low shear rate operating conditions. As was the case for the wettability analysis, the hydrodynamic slip was found to be dependent on the wetted solid surface atomic structure. Additionally, the interfacial water liquid structure was the most significant parameter to describe the hydrodynamic boundary condition. The calibration of the water-silicon interaction potential performed by matching the experimental contact angle of silicon led to the verification of the no-slip condition, experimentally reported for silicon nanochannels at low shear rates.

  6. An efficient radiative cooling approximation for use in hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Lombardi, James C.; McInally, William G.; Faber, Joshua A.

    2015-02-01

    To make relevant predictions about observable emission, hydrodynamical simulation codes must employ schemes that account for radiative losses, but the large dimensionality of accurate radiative transfer schemes is often prohibitive. Stamatellos and collaborators introduced a scheme for smoothed particle hydrodynamics (SPH) simulations based on the notion of polytropic pseudo-clouds that uses only local quantities to estimate cooling rates. The computational approach is extremely efficient and works well in cases close to spherical symmetry, such as in star formation problems. Unfortunately, the method, which takes the local gravitational potential as an input, can be inaccurate when applied to non-spherical configurations, limiting its usefulness when studying discs or stellar collisions, among other situations of interest. Here, we introduce the 'pressure scale height method', which incorporates the fluid pressure scale height into the determination of column densities and cooling rates, and show that it produces more accurate results across a wide range of physical scenarios while retaining the computational efficiency of the original method. The tested models include spherical polytropes as well as discs with specified density and temperature profiles. We focus on applying our techniques within an SPH code, although our method can be implemented within any particle-based Lagrangian or grid-based Eulerian hydrodynamic scheme. Our new method may be applied in a broad range of situations, including within the realm of stellar interactions, collisions, and mergers.
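    A heavily hedged sketch of the scale-height idea: assuming the column density that sets the optical depth is approximated as Sigma ~ rho * H with the standard pressure scale height H = P/(rho g) (the paper's exact prefactors and 3-D generalization are not reproduced here):

```python
def column_density(rho, pressure, g):
    """Pressure-scale-height estimate of the local column density
    (a sketch of the idea only, assuming Sigma ~ rho * H with
    H = P / (rho * g); all prefactors are omitted)."""
    H = pressure / (rho * g)      # local pressure scale height
    return rho * H                # column density estimate: P / g

# Example with made-up cgs values for a disc annulus:
print(column_density(rho=1e-9, pressure=1e2, g=1e3))  # 0.1 g/cm^2
```

    Using pressure rather than the gravitational potential is what lets the estimate adapt to flattened, non-spherical configurations such as discs.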

  7. Dynamic code block size for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  8. Hydrodynamic Simulations of Gaseous Argon Shock Experiments

    NASA Astrophysics Data System (ADS)

    Garcia, Daniel; Dattelbaum, Dana; Goodwin, Peter; Morris, John; Sheffield, Stephen; Burkett, Michael

    2015-06-01

    The lack of published Argon gas shock data motivated an evaluation of the Argon Equation of State (EOS) in gas-phase initial density regimes never before reached. In particular, these regimes include initial pressures in the range of 200-500 psi (0.025-0.056 g/cc) and initial shock velocities around 0.2 cm/μs. The objective of the numerical evaluation was to develop a physical understanding of the EOS behavior of shocked and subsequently multiply re-shocked Argon gas initially pressurized to 200-500 psi, through Pagosa numerical hydrodynamic simulations utilizing the SESAME equation of state. Pagosa is a Los Alamos National Laboratory 2-D and 3-D Eulerian hydrocode capable of modeling high-velocity compressible flow with multiple materials. The approach involved the use of gas gun experiments to evaluate the shock and multiple re-shock behavior of pressurized Argon gas to validate Pagosa simulations and the SESAME EOS. Additionally, the diagnostic capability within the experiments allowed the EOS to be fully constrained with measured shock velocity, particle velocity, and temperature. The simulations demonstrate excellent agreement with the experiments in shock velocity/particle velocity space, but reveal unanticipated differences in the ionization-front temperatures.
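    Measured shock and particle velocities constrain the shocked state through the Rankine-Hugoniot jump conditions; a minimal consistency check with made-up values near the quoted regime (not Pagosa, and not the paper's data):

```python
def hugoniot_pressure(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot momentum jump condition: P = P0 + rho0*Us*up.
    With rho0 in g/cc and velocities in cm/us, P comes out in Mbar."""
    return p0 + rho0 * us * up

# Hypothetical Argon state near the paper's regime: rho0 ~ 0.05 g/cc,
# Us ~ 0.2 cm/us, and an assumed up ~ 0.15 cm/us.
print(hugoniot_pressure(0.05, 0.2, 0.15))  # pressure in Mbar
```

    Measuring temperature as well over-determines the state, which is what allows the EOS table itself to be tested rather than merely fit.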

  9. Hydrodynamics of diatom chains and semiflexible fibres.

    PubMed

    Nguyen, Hoa; Fauci, Lisa

    2014-07-06

    Diatoms are non-motile, unicellular phytoplankton that have the ability to form colonies in the form of chains. Depending upon the species of diatoms and the linking structures that hold the cells together, these chains can be quite stiff or very flexible. Recently, the bending rigidities of some species of diatom chains have been quantified. In an effort to understand the role of flexibility in nutrient uptake and aggregate formation, we begin by developing a three-dimensional model of the coupled elastic-hydrodynamic system of a diatom chain moving in an incompressible fluid. We find that simple beam theory does a good job of describing diatom chain deformation in a parabolic flow when its ends are tethered, but does not tell the whole story of chain deformations when they are subjected to compressive stresses in shear. While motivated by the fluid dynamics of diatom chains, our computational model of semiflexible fibres illustrates features that apply widely to other systems. The use of an adaptive immersed boundary framework allows us to capture complicated buckling and recovery dynamics of long, semiflexible fibres in shear.

  10. Compressed Sensing Meets Wave Chaology

    NASA Astrophysics Data System (ADS)

    Pinto, Innocenzo M.; Addesso, Paolo; Principe, Maria

    2015-03-01

    The Wigner distribution is an important tool in the study of high-frequency wave-packet dynamics in ray-chaotic enclosures. Smoothing the Wigner distribution helps improving its readability, by suppressing nonlinear artifacts, but spoils its resolution. Adding a sparsity constraint to smoothing, in the spirit of the compressed coding paradigm, restores resolution while still avoiding artifacts. The result is particularly valuable in the perspective of complexity gauging via Renyi-Wehrl entropy measures. Representative numerical experiments are presented to substantiate such clues.

  11. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. a row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.
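    One plausible reading of the row-and-column elimination step is to record bitmasks of entirely blank rows and columns and keep only the rest (an illustrative sketch under that assumption, not the paper's exact scheme):

```python
import numpy as np

def eliminate_blank(img):
    """Row-and-column elimination sketch: record which rows/columns of
    a binary textual image are entirely blank, then keep only the
    non-blank ones. The two bitmasks plus the reduced image suffice to
    reconstruct the original exactly."""
    rows = img.any(axis=1)                 # True where the row has ink
    cols = img.any(axis=0)
    return rows, cols, img[rows][:, cols]

img = np.array([[0, 0, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 0],
                [0, 1, 0, 0]], dtype=bool)
rows, cols, core = eliminate_blank(img)
print(core.astype(int))   # the 2x2 core of non-blank intersections
```

    Textual images contain long blank gutters and inter-line gaps, so this step alone removes a large fraction of the pixels before codebook coding.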

  12. A formulation of consistent particle hydrodynamics in strong form

    NASA Astrophysics Data System (ADS)

    Yamamoto, Satoko; Makino, Junichiro

    2017-03-01

    In fluid dynamical simulations in astrophysics, large deformations are common and surface tracking is sometimes necessary. The smoothed particle hydrodynamics (SPH) method has been used in many such simulations. Recently, however, it has been shown that SPH cannot handle contact discontinuities or free surfaces accurately. There are several reasons for this problem. The first is that SPH requires the density to be continuous and differentiable. The second is that SPH does not have consistency, and thus its accuracy is zeroth order in space. In addition, accurate boundary conditions cannot be expressed with SPH. In this paper, we propose a novel, high-order scheme for particle-based hydrodynamics of compressible fluid. Our method is based on a kernel-weighted high-order fitting polynomial for intensive variables. With this approach, we can construct a scheme which solves all three of the problems described above. For shock capturing, we use a tensor form of von Neumann-Richtmyer artificial viscosity. We have applied our method to many test problems and obtained excellent results. Our method is not conservative, since particles do not have mass or energy, but only their densities. However, because of the Lagrangian nature of our scheme, the violation of the conservation laws turned out to be small. We name this method Consistent Particle Hydrodynamics in Strong Form (CPHSF).
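    The kernel-weighted high-order fitting polynomial at the heart of such consistent particle schemes can be illustrated in 1-D with a Gaussian kernel (an illustrative sketch; the paper works in 3-D with its own kernel and fitting choices):

```python
import numpy as np

def weighted_poly_fit(xi, fi, x0, h, order=2):
    """Kernel-weighted polynomial fit in 1-D: fit a degree-`order`
    polynomial to particle values fi at positions xi, with neighbours
    weighted by a Gaussian kernel of width h centred on the evaluation
    point x0. Returns the fitted value at x0."""
    w = np.exp(-((xi - x0) / h) ** 2)                  # kernel weights
    A = np.vander(xi - x0, order + 1, increasing=True) # 1, dx, dx^2, ...
    coef, *_ = np.linalg.lstsq(A * w[:, None], fi * w, rcond=None)
    return coef[0]                                     # value at x0

xi = np.linspace(0, 1, 11)
print(weighted_poly_fit(xi, xi ** 2, 0.5, 0.2))        # ~0.25
```

    Because the fit reproduces polynomials up to the chosen order exactly, the interpolation is consistent (first- or higher-order accurate), unlike a plain SPH kernel sum.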

  13. Object-based wavelet compression using coefficient selection

    NASA Astrophysics Data System (ADS)

    Zhao, Lifeng; Kassim, Ashraf A.

    1998-12-01

    In this paper, we present a novel approach to code image regions of arbitrary shapes. The proposed algorithm combines a coefficient selection scheme with traditional wavelet compression for coding arbitrary regions and uses a shape adaptive embedded zerotree wavelet coding (SA-EZW) to quantize the selected coefficients. Since the shape information is implicitly encoded by the SA-EZW, our decoder can reconstruct the arbitrary region without separate shape coding. This makes the algorithm simple to implement and avoids the problem of contour coding. Our algorithm also provides a sufficient framework to address content-based scalability and improved coding efficiency as described by MPEG-4.

  14. Squish: Near-Optimal Compression for Archival of Relational Datasets

    PubMed Central

    Gao, Yihan; Parameswaran, Aditya

    2017-01-01

    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028

  15. A strategy for reducing stagnation phase hydrodynamic instability growth in inertial confinement fusion implosions

    NASA Astrophysics Data System (ADS)

    Clark, D. S.; Robey, H. F.; Smalyuk, V. A.

    2015-05-01

    Encouraging progress is being made in demonstrating control of ablation front hydrodynamic instability growth in inertial confinement fusion implosion experiments on the National Ignition Facility [E. I. Moses, R. N. Boyd, B. A. Remington, C. J. Keane, and R. Al-Ayat, Phys. Plasmas 16, 041006 (2009)]. Even once ablation front stabilities are controlled, however, instability during the stagnation phase of the implosion can still quench ignition. A scheme is proposed to reduce the growth of stagnation phase instabilities through the reverse of the "adiabat shaping" mechanism proposed to control ablation front growth. Two-dimensional radiation hydrodynamics simulations confirm that improved stagnation phase stability should be possible without compromising fuel compression.

  16. Neutrino scattering from hydrodynamic modes in hot and dense neutron matter

    NASA Astrophysics Data System (ADS)

    Shen, Gang; Reddy, Sanjay

    2014-03-01

    We calculate the scattering rate of low-energy neutrinos in the hot and dense neutron matter encountered in neutron stars and supernovae in the hydrodynamic regime. We find that the Brillouin peak, associated with the sound mode, and the Rayleigh peak, associated with the thermal diffusion mode, dominate the dynamic structure factor. Although the total scattering cross section is constrained by the compressibility sum rule, the differential cross section calculated using the hydrodynamic response function differs from results obtained in approximate treatments often used in astrophysics, such as the random phase approximation. We identify these differences and discuss their implications for neutrino transport in supernovae.

  17. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique, complementary to magnetic compression, to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations, which represents a useful tool in the evaluation of compression schemes for FEL sources.

  18. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Strozzi, D. J.; Bailey, D. S.; Michel, P.; Divol, L.; Sepke, S. M.; Kerbel, G. D.; Thomas, C. A.; Ralph, J. E.; Moody, J. D.; Schneider, M. B.

    2017-01-01

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI—specifically stimulated Raman scatter and crossed-beam energy transfer (CBET)—mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus, modifies laser propagation. This model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  19. Development and Implementation of Radiation-Hydrodynamics Verification Test Problems

    SciTech Connect

    Marcath, Matthew J.; Wang, Matthew Y.; Ramsey, Scott D.

    2012-08-22

    Analytic solutions to the radiation-hydrodynamic equations are useful for verifying any large-scale numerical simulation software that solves the same set of equations. The one-dimensional, spherically symmetric Coggeshall No. 9 and No. 11 analytic solutions, cell-averaged over a uniform grid, have been developed to analyze the corresponding solutions from the Los Alamos National Laboratory Eulerian Applications Project radiation-hydrodynamics code xRAGE. These Coggeshall solutions have been shown to be independent of heat conduction, providing a unique opportunity for comparison with xRAGE solutions with and without the heat conduction module. Solution convergence was analyzed based on radial step size. Since no shocks are involved in either problem and the solutions are smooth, second-order convergence was expected for both cases. The global L1 errors were used to estimate the convergence rates with and without the heat conduction module implemented.
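    Estimating a convergence rate from global errors at two step sizes uses the standard two-grid formula p = log(E1/E2)/log(h1/h2). A minimal sketch (the error values below are illustrative, not data from the paper):

    ```python
    import math

    def observed_order(h_coarse, e_coarse, h_fine, e_fine):
        """Estimate convergence order p from global errors at two step
        sizes, assuming E(h) ~ C * h^p."""
        return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

    # Hypothetical errors from a second-order scheme:
    # halving the radial step quarters the error.
    p = observed_order(0.02, 4.0e-4, 0.01, 1.0e-4)
    assert abs(p - 2.0) < 1e-12
    ```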

  20. Non-linear hydrodynamical simulations of delta Scuti star pulsations

    NASA Astrophysics Data System (ADS)

    Templeton, M. R.; Guzik, J. A.; McNamara, B. J.

    1998-12-01

    We present the initial results of non-linear hydrodynamic simulations of the pulsation modes of delta Scuti stars. These models use the Ostlie and Cox (1993) Lagrangian hydrodynamic code, adapted to use the most recent OPAL (1996) opacities, the Stellingwerf (1974) periodic relaxation method of obtaining stable limit-cycle pulsations, and time-dependent convection. Initial tests of first- and second-overtone pulsation models are consistent with the models of Bono et al. (1997), showing asymmetric light curves for first-overtone rather than fundamental pulsations. Future modeling work will test several stellar models with varying masses, ages, metal and helium abundances, and envelope abundance gradients. Ultimately, we hope to determine the role that abundances and, more specifically, helium abundance gradients in delta Scuti envelopes play in light curve shape. This work will be applied to a test sample of known radially pulsating delta Scuti field stars and the newly discovered delta Scuti/SX Phoenicis variables in the Galactic Bulge.

  1. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics

    DOE PAGES

    Strozzi, D. J.; Bailey, D. S.; Michel, P.; ...

    2017-01-12

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated in this work via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI—specifically stimulated Raman scatter and crossed-beam energy transfer (CBET)—mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus modifies laser propagation. In conclusion, this model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  2. Time Evolution of the Anisotropies of the Hydrodynamically Expanding Sqgp

    NASA Astrophysics Data System (ADS)

    Bagoly, A.; Csanád, M.

    In high energy heavy ion collisions at RHIC and the LHC, a strongly interacting quark gluon plasma (sQGP) is created. This medium undergoes a hydrodynamic evolution before it freezes out to form hadronic matter. The initial state of the sQGP is determined by the initial distribution of the participating nucleons and their interactions. Due to the finite number of nucleons, the initial distribution fluctuates on an event-by-event basis. The transverse-plane anisotropy of the initial state can be translated into a series of anisotropy coefficients or eccentricities: second-, third-, fourth-order anisotropy, etc. These anisotropies then evolve in time and result in measurable momentum-space anisotropies, to be measured with respect to their respective symmetry planes. In this paper we investigate the time evolution of the anisotropies. With a numerical hydrodynamic code, we analyze how the speed of sound and viscosity influence this evolution.
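    One common definition of the eccentricities mentioned above is eps_n = |<r^n e^{i n phi}>| / <r^n>, averaged over participant positions about their center of mass. A minimal sketch of that definition (the positions below are hypothetical, not the paper's initial conditions):

    ```python
    import cmath

    def eccentricity(xs, ys, n):
        """n-th order participant eccentricity of transverse nucleon
        positions: eps_n = |<r^n e^{i n phi}>| / <r^n>, computed about
        the center of mass of the configuration."""
        cx = sum(xs) / len(xs)
        cy = sum(ys) / len(ys)
        num = 0 + 0j
        den = 0.0
        for x, y in zip(xs, ys):
            z = complex(x - cx, y - cy)
            r, phi = abs(z), cmath.phase(z)
            num += r**n * cmath.exp(1j * n * phi)
            den += r**n
        return abs(num) / den

    # A configuration elongated along x has a sizable second-order
    # eccentricity, while a symmetric cross has essentially none:
    xs = [-2.0, -1.0, 1.0, 2.0]
    ys = [0.2, -0.2, 0.2, -0.2]
    assert eccentricity(xs, ys, 2) > 0.5
    assert eccentricity([1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0], 2) < 1e-9
    ```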

  3. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    SciTech Connect

    Anderson, S R; Bihari, B L; Salari, K; Woodward, C S

    2006-12-29

    As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.

  4. An efficient compression scheme for bitmap indices

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of these is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices, and B-tree indices. In addition, we also verified that the average query response time
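    The word-aligned idea can be sketched as follows. This is an illustrative toy encoder, not the production WAH implementation: each 32-bit word either stores 31 literal bitmap bits (most significant bit clear) or, with its most significant bit set, records a run of identical all-zero or all-one 31-bit groups.

    ```python
    # Toy WAH-style bitmap codec (illustrative sketch):
    # literal word: MSB = 0, lower 31 bits hold bitmap bits.
    # fill word:    MSB = 1, next bit is the fill value, remaining
    #               30 bits count consecutive identical 31-bit groups.

    WORD = 32
    GROUP = WORD - 1  # 31 bitmap bits per word

    def wah_encode(bits):
        padded = bits + [0] * (-len(bits) % GROUP)  # pad to 31-bit groups
        words = []
        for i in range(0, len(padded), GROUP):
            val = int("".join(map(str, padded[i:i + GROUP])), 2)
            if val in (0, (1 << GROUP) - 1):        # all-zero or all-one group
                fill = 1 if val else 0
                prev = words[-1] if words else 0
                if prev >> (WORD - 1) and ((prev >> (WORD - 2)) & 1) == fill:
                    words[-1] += 1                  # extend previous fill run
                else:
                    words.append((1 << (WORD - 1)) | (fill << (WORD - 2)) | 1)
            else:
                words.append(val)                   # mixed group: literal word
        return words

    def wah_decode(words, nbits):
        bits = []
        for w in words:
            if w >> (WORD - 1):                     # fill word
                fill = (w >> (WORD - 2)) & 1
                count = w & ((1 << (WORD - 2)) - 1)
                bits.extend([fill] * (GROUP * count))
            else:                                   # literal word
                bits.extend(int(b) for b in format(w, "031b"))
        return bits[:nbits]

    bitmap = [0] * 200 + [1, 0, 1, 1] + [1] * 100
    assert wah_decode(wah_encode(bitmap), len(bitmap)) == bitmap
    ```

    Because fills and literals are both whole machine words, logical operations can process a run of many bits per CPU instruction, which is the source of WAH's speed advantage over byte-aligned schemes.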

  5. Computation of Thermally Perfect Compressible Flow Properties

    NASA Technical Reports Server (NTRS)

    Witte, David W.; Tatum, Kenneth E.; Williams, S. Blake

    1996-01-01

    A set of compressible flow relations for a thermally perfect, calorically imperfect gas is derived for a value of c_p (specific heat at constant pressure) expressed as a polynomial function of temperature, and developed into a computer program referred to as the Thermally Perfect Gas (TPG) code. The code is available free from the NASA Langley Software Server at URL http://www.larc.nasa.gov/LSS. The code produces tables of compressible flow properties similar to those found in NACA Report 1135. Unlike the NACA Report 1135 tables, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, giving the TPG code a considerably larger range of temperature application. The accuracy of the TPG code in both the calorically perfect and the thermally perfect, calorically imperfect temperature regimes is verified by comparisons with the methods of NACA Report 1135. The advantages of the TPG code over the thermally perfect, calorically imperfect method of NACA Report 1135 are its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture of gases, its ease of use, and its tabulated results.
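    The approach described (c_p as a polynomial in T, with flow properties following from thermally perfect relations) can be sketched as follows. This is an illustrative reconstruction, not the TPG code itself; the gas constant and c_p coefficients below are assumed values for air.

    ```python
    # Thermally perfect gas sketch: with c_p(T) a polynomial, enthalpy is
    # its integral, and the total temperature follows from the adiabatic
    # energy equation h(T0) = h(T) + V^2/2, solved here by bisection.

    R = 287.05  # J/(kg K), specific gas constant of air (assumed)

    def cp(T, coeffs):
        """c_p(T) as a polynomial: coeffs[i] multiplies T**i."""
        return sum(c * T**i for i, c in enumerate(coeffs))

    def enthalpy(T, coeffs):
        """h(T) = integral of c_p dT from 0 to T (term-by-term)."""
        return sum(c * T**(i + 1) / (i + 1) for i, c in enumerate(coeffs))

    def total_temperature(T, mach, coeffs, tol=1e-10):
        """Solve h(T0) = h(T) + V^2/2 for T0."""
        gamma = cp(T, coeffs) / (cp(T, coeffs) - R)
        a = (gamma * R * T) ** 0.5              # local speed of sound
        h0 = enthalpy(T, coeffs) + 0.5 * (mach * a) ** 2
        lo, hi = T, 10.0 * T
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if enthalpy(mid, coeffs) < h0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # With a constant c_p (degree-0 polynomial) the thermally perfect
    # result must reduce to the calorically perfect relation
    # T0/T = 1 + (gamma - 1)/2 * M^2:
    coeffs = [1004.5]                           # constant c_p of air (assumed)
    gamma = 1004.5 / (1004.5 - R)
    T0 = total_temperature(300.0, 2.0, coeffs)
    assert abs(T0 / 300.0 - (1 + (gamma - 1) / 2 * 4)) < 1e-6
    ```

    With higher-degree coefficients (a calorically imperfect gas), the same bisection applies unchanged, which is the sense in which the polynomial c_p extends the NACA Report 1135 tables.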

  6. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  7. Supernova-relevant hydrodynamic instability experiments on the Nova laser

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Wallace, R.; Managan, R.; Rubenchik, A.; Fryxell, B.A.

    1997-04-01

    Observations of Supernova 1987A suggest that hydrodynamic instabilities play a critical role in the evolution of supernovae. To test the modeling of these instabilities, and to study instability issues which are difficult to model, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. We use the Nova laser to generate a 10-15 Mbar shock at the interface between an 85 μm thick layer of Cu and a 500 μm layer of CH2; our first target is planar. We impose a single mode sinusoidal material perturbation at the interface with λ=200 μm, η0=20 μm, causing perturbation growth by the RM instability as the shock accelerates the interface, and by RT instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10^3 s. We use the supernova code PROMETHEUS and the hydrodynamics codes HYADES and CALE to model the experiment. We are designing further experiments to compare results for 2D vs. 3D single mode perturbations; high resolution 3D modeling requires prohibitive time and computing resources, but we can perform and study 3D experiments as easily as 2D experiments. Low resolution simulations suggest that the perturbations grow 50% faster in 3D than in 2D; such a difference may help explain the high observed velocities of radioactive core material in SN1987A. We present the results of the experiments and simulations. © 1997 American Institute of Physics.

  8. Supernova-relevant hydrodynamic instability experiments on the Nova laser

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Wallace, R.; Managan, R.; Rubenchik, A.; Fryxell, B. A.

    1997-04-15

    Observations of Supernova 1987A suggest that hydrodynamic instabilities play a critical role in the evolution of supernovae. To test the modeling of these instabilities, and to study instability issues which are difficult to model, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. We use the Nova laser to generate a 10-15 Mbar shock at the interface between an 85 μm thick layer of Cu and a 500 μm layer of CH2; our first target is planar. We impose a single mode sinusoidal material perturbation at the interface with λ=200 μm, η0=20 μm, causing perturbation growth by the RM instability as the shock accelerates the interface, and by RT instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10^3 s. We use the supernova code PROMETHEUS and the hydrodynamics codes HYADES and CALE to model the experiment. We are designing further experiments to compare results for 2D vs. 3D single mode perturbations; high resolution 3D modeling requires prohibitive time and computing resources, but we can perform and study 3D experiments as easily as 2D experiments. Low resolution simulations suggest that the perturbations grow 50% faster in 3D than in 2D; such a difference may help explain the high observed velocities of radioactive core material in SN1987A. We present the results of the experiments and simulations.

  9. Three-Dimensional Hydrodynamic Simulations of OMEGA Implosions

    NASA Astrophysics Data System (ADS)

    Igumenshchev, I. V.

    2016-10-01

    The effects of large-scale (Legendre modes less than 30) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming) and target offset, mount, and layer nonuniformities were investigated using three-dimensional (3-D) hydrodynamic simulations. Simulations indicate that the performance degradation in cryogenic implosions is caused mainly by target offsets (~10 to 20 μm), beam-power imbalance (σrms ~10%), and initial target asymmetry (~5% ρR variation), which distort implosion cores, resulting in reduced hot-spot confinement and increased residual kinetic energy of the stagnated target. The ion temperature inferred from the width of simulated neutron spectra is influenced by bulk fuel motion in the distorted hot spot and can show an apparent temperature increase of up to 2 keV. Similar temperature variations along different lines of sight are observed. Simulated x-ray images of implosion cores in the 4- to 8-keV energy range show good agreement with experiments. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires reducing large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing high-efficiency mid-adiabat (α = 4) implosion designs that mitigate cross-beam energy transfer (CBET) and suppress short-wavelength Rayleigh-Taylor growth. These simulations use a new low-noise 3-D Eulerian hydrodynamic code, ASTER. Existing 3-D hydrodynamic codes for direct-drive implosions currently lack CBET and noise-free ray-trace laser deposition algorithms. ASTER overcomes these limitations using a simplified 3-D laser-deposition model, which includes CBET and is capable of simulating the effects of beam-power imbalance, beam mispointing, mistiming, and target offset. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  10. Anomalous hydrodynamics of fractional quantum Hall states

    SciTech Connect

    Wiegmann, P.

    2013-09-15

    We propose a comprehensive framework for quantum hydrodynamics of the fractional quantum Hall (FQH) states. We suggest that the electronic fluid in the FQH regime can be phenomenologically described by the quantized hydrodynamics of vortices in an incompressible rotating liquid. We demonstrate that such hydrodynamics captures all major features of FQH states, including the subtle effect of the Lorentz shear stress. We present a consistent quantization of the hydrodynamics of an incompressible fluid, providing a powerful framework to study the FQH effect and superfluids. We obtain the quantum hydrodynamics of the vortex flow by quantizing the Kirchhoff equations for vortex dynamics.

  11. Multi-dimensional hydrodynamics of core-collapse supernovae

    NASA Astrophysics Data System (ADS)

    Murphy, Jeremiah W.

    Core-collapse supernovae are among the most energetic events in the Universe: they herald the birth of neutron stars and black holes, are a major site for nucleosynthesis, influence galactic hydrodynamics, and trigger further star formation. As such, it is important to understand the mechanism of explosion. Moreover, observations imply that asymmetries are, at the least, a feature of the mechanism, and theory suggests that multi-dimensional hydrodynamics may be crucial for successful explosions. In this dissertation, we present theoretical investigations into the multi-dimensional nature of the supernova mechanism. It had been suggested that nuclear reactions might excite non-radial g-modes (the ε-mechanism) in the cores of progenitors, leading to asymmetric explosions. We calculate the eigenmodes for a large suite of progenitors, including excitation by nuclear reactions and damping by neutrino and acoustic losses. Without exception, we find unstable g-modes for each progenitor. However, the timescales for growth are at least an order of magnitude longer than the time until collapse. Thus, the ε-mechanism does not provide appreciable amplification of non-radial modes before the core undergoes collapse. Regardless, neutrino-driven convection, the standing accretion shock instability, and other instabilities during the explosion provide ample asymmetry. To adequately simulate these, we have developed a new hydrodynamics code, BETHE-hydro, that uses the Arbitrary Lagrangian-Eulerian (ALE) approach, includes rotational terms, solves Poisson's equation for gravity on arbitrary grids, and conserves energy and momentum in its basic implementation. By using time-dependent arbitrary grids that can adapt to the numerical challenges of the problem, this code offers unique flexibility in simulating astrophysical phenomena. Finally, we use BETHE-hydro to investigate the conditions and criteria for supernova explosions by the neutrino

  12. A hydrodynamics-reaction kinetics coupled model for evaluating bioreactors derived from CFD simulation.

    PubMed

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi

    2010-12-01

    Investigating how a bioreactor functions is a necessary precursor to successful reactor design and operation. Traditional methods used to investigate the flow field cannot meet this challenge accurately and economically. A hydrodynamics model can address this, but on its own it is often insufficient for understanding a bioreactor in depth. In this paper, a coupled hydrodynamics-reaction kinetics model was formulated from computational fluid dynamics (CFD) code to simulate a gas-liquid-solid three-phase biotreatment system for the first time. The hydrodynamics model predicts the flow field, and the reaction kinetics model then portrays the reaction conversion process. The coupled model is verified and used to simulate the behavior of an expanded granular sludge bed (EGSB) reactor for biohydrogen production. The flow patterns were visualized and analyzed. The coupled model also demonstrates a qualitative relationship between hydrodynamics and biohydrogen production. The advantages and limitations of applying this coupled model are discussed.

  13. Sharing code

    PubMed Central

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing. PMID:25165519

  14. DualSPHysics: Open-source parallel CFD solver based on Smoothed Particle Hydrodynamics (SPH)

    NASA Astrophysics Data System (ADS)

    Crespo, A. J. C.; Domínguez, J. M.; Rogers, B. D.; Gómez-Gesteira, M.; Longshaw, S.; Canelas, R.; Vacondio, R.; Barreiro, A.; García-Feal, O.

    2015-02-01

    DualSPHysics is a hardware-accelerated Smoothed Particle Hydrodynamics code developed to solve free-surface flow problems. DualSPHysics is an open-source code developed and released under the terms of the GNU General Public License (GPLv3). Along with the source code, complete documentation that makes it easy to compile and execute the source files is also distributed. The code has been shown to be efficient and reliable. The parallel computing power of Graphics Processing Units (GPUs) is used to accelerate DualSPHysics by up to two orders of magnitude compared to the performance of the serial version.
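    The core of any SPH code is a kernel-weighted summation over neighboring particles. A minimal 1D density summation, as a generic sketch of the method rather than DualSPHysics code, using the standard cubic spline kernel:

    ```python
    # SPH density estimate in 1D: rho(x_i) = sum_j m_j W(x_i - x_j, h),
    # with W the cubic spline kernel of smoothing length h.

    def cubic_spline_1d(r, h):
        """1D cubic spline kernel W(r, h); normalization factor 2/(3h)
        makes the kernel integrate to 1 over its support |r| < 2h."""
        q = abs(r) / h
        sigma = 2.0 / (3.0 * h)
        if q < 1.0:
            return sigma * (1 - 1.5 * q**2 + 0.75 * q**3)
        if q < 2.0:
            return sigma * 0.25 * (2 - q) ** 3
        return 0.0

    def density(x_i, positions, masses, h):
        """Kernel-weighted sum of neighbor masses at position x_i."""
        return sum(m * cubic_spline_1d(x_i - x, h)
                   for x, m in zip(positions, masses))

    # Unit-mass particles spaced dx = 1 apart should give a density close
    # to 1 in the interior of the domain:
    xs = [float(i) for i in range(40)]
    ms = [1.0] * len(xs)
    rho = density(20.0, xs, ms, h=1.2)
    assert abs(rho - 1.0) < 0.05
    ```

    A production code replaces the brute-force neighbor loop with a cell-linked list or similar search structure, which is where GPU acceleration pays off.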

  15. Simulating Rayleigh-Taylor (RT) instability using PPM hydrodynamics @scale on Roadrunner (u)

    SciTech Connect

    Woodward, Paul R; Dimonte, Guy; Rockefeller, Gabriel M; Fryer, Christopher L; Dimonte, Guy; Dai, W; Kares, R. J.

    2011-01-05

    The effect of initial conditions on the self-similar growth of the RT instability is investigated using a hydrodynamics code based on the piecewise parabolic method (PPM). The PPM code was converted to the hybrid architecture of Roadrunner in order to perform the simulations at extremely high speed and spatial resolution. This paper describes the code conversion to the Cell processor, the scaling studies to 12 CUs on Roadrunner, and results on the dependence of the RT growth rate on initial conditions. The relevance of the Roadrunner implementation of this PPM code to other existing and anticipated computer architectures is also discussed.

  16. Microscopic derivation of discrete hydrodynamics.

    PubMed

    Español, Pep; Anero, Jesús G; Zúñiga, Ignacio

    2009-12-28

    By using the standard theory of coarse graining based on Zwanzig's projection operator, we derive the dynamic equations for discrete hydrodynamic variables. These hydrodynamic variables are defined in terms of the Delaunay triangulation. The resulting microscopically derived equations can be understood, a posteriori, as a discretization on an arbitrary irregular grid of the Navier-Stokes equations. The microscopic derivation provides a set of discrete equations that exactly conserves mass, momentum, and energy and the dissipative part of the dynamics produces strict entropy increase. In addition, the microscopic derivation provides a practical implementation of thermal fluctuations in a way that the fluctuation-dissipation theorem is satisfied exactly. This paper points toward a close connection between coarse-graining procedures from microscopic dynamics and discretization schemes for partial differential equations.

  17. Hydrodynamic Viscosity in Accretion Disks

    NASA Astrophysics Data System (ADS)

    Duschl, Wolfgang J.; Strittmatter, Peter A.; Biermann, Peter L.

    We propose a generalized accretion disk viscosity prescription based on hydrodynamically driven turbulence at the critical effective Reynolds number. This approach is consistent with the recent re-analysis by Richard & Zahn (1999) of experimental results on turbulent Couette-Taylor flows. This new β-viscosity formulation applies to both self-gravitating and non-self-gravitating disks and is shown to yield the standard α-disk prescription in the case of shock-dissipation-limited, non-self-gravitating disks.

  18. Spectra and statistics in compressible isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Jianchun; Gotoh, Toshiyuki; Watanabe, Takeshi

    2017-01-01

    Spectra and one-point statistics of velocity and thermodynamic variables in isotropic turbulence of compressible fluid are examined by using numerical simulations with solenoidal forcing at turbulent Mach numbers Mt from 0.05 to 1.0 and Taylor Reynolds numbers Reλ from 40 to 350. The velocity field is decomposed into a solenoidal component and a compressible component in terms of the Helmholtz decomposition, and the compressible velocity component is further decomposed into a pseudosound component, namely, the hydrodynamic component associated with the incompressible field, and an acoustic component associated with sound waves. It is found that the acoustic mode dominates over the pseudosound mode at turbulent Mach numbers Mt≥0.4 in our numerical simulations. At turbulent Mach numbers Mt≤0.4, there exists a critical wave number kc beyond which the pseudosound mode dominates, while the acoustic mode dominates at small wave numbers k < kc. At low turbulent Mach numbers the compressible velocity is fully enslaved to the solenoidal velocity, and its spectrum scales as Mt^4 k^(-3) in the inertial range. It is also found that in the inertial range, the spectra of pressure, density, and temperature exhibit a k^(-7/3) scaling for Mt≤0.3 and a k^(-5/3) scaling for Mt≥0.5.
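    The Helmholtz decomposition used here can be sketched spectrally (a generic numpy illustration, not the authors' solver): the compressible, curl-free part of the velocity is the projection of its Fourier transform onto the wave vector k, and the solenoidal, divergence-free part is the remainder.

    ```python
    import numpy as np

    def helmholtz_2d(u, v):
        """Split a periodic 2D velocity field into solenoidal and
        compressible parts: u_hat_c = k (k . u_hat) / |k|^2."""
        n = u.shape[0]
        k = np.fft.fftfreq(n) * n
        KX, KY = np.meshgrid(k, k, indexing="ij")
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                       # mean mode has no direction
        uh, vh = np.fft.fft2(u), np.fft.fft2(v)
        div = KX * uh + KY * vh              # k . u_hat
        uc = np.real(np.fft.ifft2(KX * div / k2))
        vc = np.real(np.fft.ifft2(KY * div / k2))
        return (u - uc, v - vc), (uc, vc)

    rng = np.random.default_rng(0)
    u = rng.standard_normal((32, 32))
    v = rng.standard_normal((32, 32))
    (us, vs), (uc, vc) = helmholtz_2d(u, v)

    # Check the defining properties in spectral form:
    k = np.fft.fftfreq(32) * 32
    KX, KY = np.meshgrid(k, k, indexing="ij")
    assert np.max(np.abs(KX * np.fft.fft2(us) + KY * np.fft.fft2(vs))) < 1e-8
    assert np.max(np.abs(KX * np.fft.fft2(vc) - KY * np.fft.fft2(uc))) < 1e-8
    ```

    The same projection in 3D, followed by a further split of the compressible part into pseudosound and acoustic contributions, underlies the spectra reported in the abstract.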

  19. Models of Jupiter's Growth Incorporating Thermal and Hydrodynamics Constraints

    NASA Astrophysics Data System (ADS)

    D'Angelo, G.; Lissauer, J. J.; Hubickyj, O.; Bodenheimer, P.

    2008-12-01

    We have modeled the growth of Jupiter incorporating both thermal and hydrodynamical constraints on its accretion of gas from the circumsolar disk. We have used a planetary formation code, based on a Henyey-type stellar evolution code, to compute the planet's internal structure and a three-dimensional hydrodynamics code to calculate the planet's interactions with the protoplanetary disk. Our principal results are: (1) Three dimensional hydrodynamics calculations show that the flow of gas in the circumsolar disk limits the region occupied by the planet's tenuous gaseous envelope to within about 0.25 Rh (Hill sphere radii) of the planet's center, which is much smaller than the value of ~ 1 Rh that was assumed in previous studies. (2) This smaller size of the planet's envelope increases the planet's accretion time, but only by 5-10%. In general, in agreement with previous results of Hubickyj et al. [Hubickyj, O., Bodenheimer, P., Lissauer, J.J., 2005. Icarus, 179, 415-431], Jupiter formation times are in the range 2.5-3 Myr, assuming a protoplanetary disk with solid surface density of 10 g/cm² and dust opacity in the protoplanet's envelope equal to 2% that of interstellar material. Thermal pressure limits the rate at which a planet less than a few dozen times as massive as Earth can accumulate gas from the protoplanetary disk, whereas hydrodynamics regulates the growth rate for more massive planets. (3) In a protoplanetary disk whose alpha-viscosity parameter is ~ 0.004, giant planets will grow to several times the mass of Jupiter unless the disk has a small local surface density when the planet begins to accrete gas hydrodynamically, or the disk is dispersed very soon thereafter. The large number of planets known with masses near Jupiter's compared with the smaller number of substantially more massive planets is more naturally explained by planetary growth within circumstellar disks whose alpha-viscosity parameter is ~ 0.0004. (4) Capture of Jupiter's irregular

  20. A microfluidic-based hydrodynamic trap for single particles.

    PubMed

    Johnson-Chavarria, Eric M; Tanyeri, Melikhan; Schroeder, Charles M

    2011-01-21

    The ability to confine and manipulate single particles in free solution is a key enabling technology for fundamental and applied science. Methods for particle trapping based on optical, magnetic, electrokinetic, and acoustic techniques have led to major advancements in physics and biology ranging from the molecular to cellular level. In this article, we introduce a new microfluidic-based technique for particle trapping and manipulation based solely on hydrodynamic fluid flow. Using this method, we demonstrate trapping of micro- and nano-scale particles in aqueous solutions for long time scales. The hydrodynamic trap consists of an integrated microfluidic device with a cross-slot channel geometry where two opposing laminar streams converge, thereby generating a planar extensional flow with a fluid stagnation point (zero-velocity point). In this device, particles are confined at the trap center by active control of the flow field to maintain particle position at the fluid stagnation point. In this manner, particles are effectively trapped in free solution using a feedback control algorithm implemented with a custom-built LabVIEW code. The control algorithm consists of image acquisition for a particle in the microfluidic device, followed by particle tracking, determination of particle centroid position, and active adjustment of fluid flow by regulating the pressure applied to an on-chip pneumatic valve using a pressure regulator. In this way, the on-chip dynamic metering valve functions to regulate the relative flow rates in the outlet channels, thereby enabling fine-scale control of stagnation point position and particle trapping. The microfluidic-based hydrodynamic trap exhibits several advantages as a method for particle trapping. Hydrodynamic trapping is possible for any arbitrary particle without specific requirements on the physical or chemical properties of the trapped object. 
In addition, hydrodynamic trapping enables confinement of a "single" target object in
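    The feedback loop described in this record (image acquisition, centroid tracking, valve adjustment) amounts to proportional feedback on the stagnation-point position. Below is a minimal sketch of that idea, not the authors' LabVIEW implementation; the flow model, gain, and time step are purely illustrative.

```python
# Illustrative sketch (not the authors' LabVIEW code) of feedback trapping
# at a stagnation point of a planar extensional flow. The outflow axis is
# unstable, so the controller shifts the stagnation point toward the
# particle each cycle. All parameter values are hypothetical.

def trap_step(x, y, x_stag, strain_rate, gain, dt):
    """One control cycle: advect the particle, then adjust the valve
    (modelled here as moving the stagnation point toward the particle)."""
    x += strain_rate * (x - x_stag) * dt   # unstable outflow axis
    y -= strain_rate * y * dt              # stable inflow axis
    x_stag += gain * (x - x_stag) * dt     # proportional feedback
    return x, y, x_stag

def simulate(x0=1.0, y0=1.0, steps=2000, strain_rate=1.0, gain=5.0, dt=0.01):
    x, y, x_stag = x0, y0, 0.0
    for _ in range(steps):
        x, y, x_stag = trap_step(x, y, x_stag, strain_rate, gain, dt)
    return x - x_stag, y   # residual offsets from the trap centre

dx, dy = simulate()
```

    With the gain below the strain rate, the error e = x - x_stag obeys de/dt = (strain_rate - gain)e and grows; the feedback must overcome exactly this instability, which is why active control is needed at all.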

  1. Recent progress in anisotropic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Strickland, Michael

    2017-03-01

    The quark-gluon plasma created in relativistic heavy-ion collisions possesses a sizable pressure anisotropy in the local rest frame at very early times after the initial nuclear impact, and this anisotropy only slowly relaxes as the system evolves. In a kinetic theory picture, this translates into the existence of sizable momentum-space anisotropies in the underlying partonic distribution functions, ⟨p_L²⟩ ≪ ⟨p_T²⟩. In such cases, it is better to reorganize the hydrodynamical expansion by taking into account momentum-space anisotropies at leading order in the expansion instead of as a perturbative correction to an isotropic distribution. The resulting anisotropic hydrodynamics framework has been shown to more accurately describe the dynamics of rapidly expanding systems such as the quark-gluon plasma. In this proceedings contribution, I review the basic ideas of anisotropic hydrodynamics and recent progress, and present a few preliminary phenomenological predictions for identified particle spectra and elliptic flow.

  2. Particle hydrodynamics with tessellation techniques

    NASA Astrophysics Data System (ADS)

    Heß, Steffen; Springel, Volker

    2010-08-01

    Lagrangian smoothed particle hydrodynamics (SPH) is a well-established approach to model fluids in astrophysical problems, thanks to its geometric flexibility and ability to automatically adjust the spatial resolution to the clumping of matter. However, a number of recent studies have emphasized inaccuracies of SPH in the treatment of fluid instabilities. The origin of these numerical problems can be traced back to spurious surface effects across contact discontinuities, and to SPH's inherent prevention of mixing at the particle level. We here investigate a new fluid particle model where the density estimate is carried out with the help of an auxiliary mesh constructed as the Voronoi tessellation of the simulation particles instead of an adaptive smoothing kernel. This Voronoi-based approach improves the ability of the scheme to represent sharp contact discontinuities. We show that this eliminates the spurious surface tension effects present in SPH that play a role in suppressing certain fluid instabilities. We find that the new `Voronoi Particle Hydrodynamics' (VPH) described here produces results comparable to SPH in shocks, and better ones in turbulent regimes of pure hydrodynamical simulations. We also discuss formulations of the artificial viscosity needed in this scheme and how judiciously chosen correction forces can be derived in order to maintain a high degree of particle order and hence a regular Voronoi mesh. This is especially helpful in simulating self-gravitating fluids with existing gravity solvers used for N-body simulations.

  3. Economical filters for range sidelobe reduction with combined codes

    NASA Astrophysics Data System (ADS)

    Ackroyd, M. H.

    1982-06-01

    Approximate inverse filters for combined Barker codes are implemented using a significantly smaller number of coefficients than required for direct implementation. The approach can be applied to longer combined codes to give greater sidelobe suppression. A filter to compress a combined code formed by combining two 13-element Barker sequences is presented as an example.
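    The numbers behind such filters are easy to reproduce: the plain matched filter (autocorrelation) of the 13-element Barker code leaves unit-magnitude sidelobes, only about 22 dB below the peak, which is what an approximate inverse filter tries to improve on. A quick illustrative check, including the 169-element combined code mentioned in the abstract:

```python
# Matched-filter sidelobes for the 13-element Barker code and for the
# 169-element combined (Barker-of-Barker) code. This only reproduces the
# baseline; the paper's contribution is the economical inverse filter.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode="full")  # matched-filter output
peak = acf.max()                                     # main peak = 13
sidelobes = np.delete(acf, acf.argmax())
psl_db = 20 * np.log10(peak / np.abs(sidelobes).max())  # ~22.3 dB

# Combining two 13-element codes via a Kronecker product gives a
# 169-element combined code with a matched-filter peak of 169.
combined = np.kron(barker13, barker13)
```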

  4. Wavelet-based embedded zerotree extension to color coding

    NASA Astrophysics Data System (ADS)

    Franques, Victoria T.

    1998-03-01

    Recently, a new image compression algorithm was developed which employs the wavelet transform and a simple binary linear quantization scheme with an embedded coding technique to perform data compaction. This new family of coders, the Embedded Zerotree Wavelet (EZW), provides better compression performance than the current JPEG coding standard at low bit rates. Since the EZW coding algorithm emerged, all of the published coding results related to this technique have been on monochrome images. In this paper the author has enhanced the original coding algorithm to yield a better compression ratio, and has extended the wavelet-based zerotree coding to color images. Color imagery is often represented by several components, such as RGB, in which each component is generally processed separately. With color coding, each component could be compressed individually in the same manner as a monochrome image, therefore requiring a threefold increase in processing time. Most image coding standards instead employ de-correlated components, such as YIQ or Y, CB, CR, with subsampling of the 'chroma' components; such a coding technique is employed here. Results of the coding, including reconstructed images and coding performance, will be presented.
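    The de-correlated-components pipeline this record describes can be sketched as an RGB-to-YCbCr conversion followed by 2x2 chroma subsampling. The coefficients below are the standard BT.601 values, used here for illustration; the paper's exact component choice is not specified beyond "YIQ or Y, CB, CR".

```python
# Hedged sketch of the colour pipeline: decorrelate RGB into luma/chroma
# (BT.601 YCbCr, an assumed but standard choice), then 4:2:0-subsample the
# chroma planes before coding each plane like a monochrome image.
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array (H, W, 3) in [0, 1]; returns Y, Cb, Cr planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + (b - y) * 0.564   # scaled colour-difference signals
    cr = 0.5 + (r - y) * 0.713
    return y, cb, cr

def subsample(plane):
    """2x2 box-average subsampling of a chroma plane (4:2:0 style)."""
    h, w = plane.shape
    return plane[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

    On a grey image (r = g = b) both chroma planes collapse to the constant 0.5, which is exactly the redundancy the decorrelation removes.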

  5. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on previous work of ours and propose a compression method for both HDR images and video, based on an HVS optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge. Masking is more consistent on the darker side of the edge.

  6. SPHRAY: A Smoothed Particle Hydrodynamics Ray Tracer for Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Altay, Gabriel; Croft, Rupert A. C.; Pelupessy, Inti

    2011-03-01

    SPHRAY, a Smoothed Particle Hydrodynamics (SPH) ray tracer, is designed to solve the 3D, time-dependent, radiative transfer (RT) equations for arbitrary density fields. The SPH nature of SPHRAY makes the incorporation of separate hydrodynamics and gravity solvers very natural. SPHRAY relies on a Monte Carlo (MC) ray tracing scheme that does not interpolate the SPH particles onto a grid but instead integrates directly through the SPH kernels. Given initial conditions and a description of the sources of ionizing radiation, the code will calculate the non-equilibrium ionization state (HI, HII, HeI, HeII, HeIII, e) and temperature (internal energy/entropy) of each SPH particle. The sources of radiation can include point-like objects, diffuse recombination radiation, and a background field from outside the computational volume. The MC ray tracing implementation allows for the quick introduction of new physics and is parallelization friendly. A quick Axis Aligned Bounding Box (AABB) test taken from computer graphics applications allows for the acceleration of the ray tracing component. We present the algorithms used in SPHRAY and verify the code by performing all the test problems detailed in the recent Radiative Transfer Comparison Project of Iliev et al. The Fortran 90 source code for SPHRAY and example SPH density fields are made available online.
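    The AABB acceleration test mentioned in this record is, in computer graphics practice, usually the "slab" test: a ray hits a box iff its per-axis entry/exit parameter intervals overlap. A generic sketch (not SPHRAY's Fortran implementation; it assumes no exactly-zero direction components):

```python
# Standard ray/axis-aligned-bounding-box "slab" intersection test.
import numpy as np

def ray_aabb(origin, direction, box_min, box_max):
    """Return True if the ray origin + t*direction (t >= 0) hits the box.
    Assumes no component of direction is exactly zero."""
    inv = 1.0 / direction
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.minimum(t1, t2).max()   # latest entry across the three slabs
    t_far = np.maximum(t1, t2).min()    # earliest exit
    return bool(t_far >= max(t_near, 0.0))
```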

  7. Use of Fresnelets for phase-shifting digital hologram compression.

    PubMed

    Darakis, Emmanouil; Soraghan, John J

    2006-12-01

    Fresnelets are wavelet-like base functions specially tailored for digital holography applications. We introduce their use in phase-shifting interferometry (PSI) digital holography for the compression of such holographic data. Two compression methods are investigated. One uses uniform quantization of the Fresnelet coefficients followed by lossless coding, and the other uses set partitioning in hierarchical trees (SPIHT) coding. Quantization and lossless coding of the original data is used to compare the performance of the proposed algorithms. The comparison reveals that the Fresnelet transform of phase-shifting holograms in combination with SPIHT or uniform quantization can be used very effectively for the compression of holographic data. The performance of the new compression schemes is demonstrated on real PSI digital holographic data.

  8. Wave journal bearing with compressible lubricant--Part 1: The wave bearing concept and a comparison to the plain circular bearing

    NASA Technical Reports Server (NTRS)

    Dimofte, Florin

    1995-01-01

    To improve hydrodynamic journal bearing steady-state and dynamic performance, a new bearing concept, the wave journal bearing, was developed at the author's lab. This concept features a waved inner bearing diameter. Compared to other alternative bearing geometries used to improve bearing performance, such as spiral or herring-bone grooves, steps, etc., the wave bearing's design is relatively simple and allows the shaft to rotate in either direction. A three-wave bearing operating with a compressible lubricant, i.e., a gas, is analyzed using a numerical code. Its performance is compared to a plain (truly circular) bearing over a broad range of bearing working parameters, e.g., bearing numbers from 0.01 to 100.

  9. Cosmological Hydrodynamics on a Moving Mesh

    NASA Astrophysics Data System (ADS)

    Hernquist, Lars

    We propose to construct a model for the visible Universe using cosmological simulations of structure formation. Our simulations will include both dark matter and baryons, and will employ two entirely different schemes for evolving the gas: smoothed particle hydrodynamics (SPH) and a moving mesh approach as incorporated in the new code, AREPO. By performing simulations that are otherwise nearly identical, except for the hydrodynamics solver, we will isolate and understand differences in the properties of galaxies, galaxy groups and clusters, and the intergalactic medium caused by the computational approach, differences that have plagued efforts to understand galaxy formation for nearly two decades. By performing simulations at different levels of resolution and with increasingly complex treatments of the gas physics, we will identify the results that are converged numerically and that are robust with respect to variations in unresolved physical processes, especially those related to star formation, black hole growth, and related feedback effects. In this manner, we aim to undertake a research program that will redefine the state of the art in cosmological hydrodynamics and galaxy formation. In particular, we will focus our scientific efforts on understanding: 1) the formation of galactic disks in a cosmological context; 2) the physical state of diffuse gas in galaxy clusters and groups so that they can be used as high-precision probes of cosmology; 3) the nature of gas inflows into galaxy halos and the subsequent accretion of gas by forming disks; 4) the co-evolution of galaxies and galaxy clusters with their central supermassive black holes and the implications of related feedback for galaxy evolution and the dichotomy between blue and red galaxies; 5) the physical state of the intergalactic medium (IGM) and the evolution of the metallicity of the IGM; and 6) the reaction of dark matter around galaxies to galaxy formation. 
Our proposed work will be of immediate significance for

  10. INTERACTION OF LASER RADIATION WITH MATTER. LASER PLASMA: Model of mixing of shells of a thermonuclear laser target upon spherical compression

    NASA Astrophysics Data System (ADS)

    Zmitrenko, N. V.; Proncheva, N. G.; Rozanov, Vladislav B.; Yakhin, R. A.

    2007-08-01

    Based on many direct numerical simulations of the development of hydrodynamic instabilities upon compression of laser thermonuclear targets, an efficient model is developed for describing the width of the mixing region taking into account the influence of the initial conditions on the mixing process dynamics. Approaches are proposed which are based on the evolution theory of the development of hydrodynamic instabilities [1], which was specially elaborated to describe the compression of targets for inertial thermonuclear fusion.

  11. Modeling hydrodynamics, water quality, and benthic processes to predict ecological effects in Narragansett Bay

    EPA Science Inventory

    The environmental fluid dynamics code (EFDC) was used to study the three dimensional (3D) circulation, water quality, and ecology in Narragansett Bay, RI. Predictions of the Bay hydrodynamics included the behavior of the water surface elevation, currents, salinity, and temperatur...

  12. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
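    The idea behind an integer cosine transform can be shown with the well-known 4x4 integer DCT approximation later adopted by H.264; the article's own ICT is an 8-point integer basis, but the principle, orthogonal rows of small integers so the transform needs no floating point, is the same. This is an illustration, not the article's transform:

```python
# Separable 2-D integer transform on a 4x4 block. The matrix T is the
# H.264 4x4 integer approximation of the DCT, shown for illustration; its
# rows are mutually orthogonal, with the unequal row norms absorbed into
# the quantization stage.
import numpy as np

T = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def ict_2d(block):
    """Forward 2-D integer transform of a 4x4 integer block."""
    return T @ block @ T.T   # all-integer arithmetic, no floating point
```

    A constant block transforms to a single nonzero "DC" coefficient, mirroring the energy compaction of the floating-point DCT.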

  13. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. 
In the first two cases, a wavelet
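    The second approach above, decorrelating the spectral dimension before spatial coding, can be illustrated with a single Haar level along the band axis of a synthetic cube. For a spectrally smooth cube nearly all energy lands in the low-pass half, which is exactly the spectral redundancy the article exploits. A toy stand-in for the discrete wavelet transform the authors use:

```python
# One Haar level along the spectral axis of a (bands, rows, cols) cube.
# Adjacent bands of hyperspectral data are highly correlated, so the
# difference (high-pass) half carries almost no energy. Synthetic data.
import numpy as np

def haar_spectral(cube):
    """Single-level Haar transform along axis 0: averages and differences."""
    lo = (cube[0::2] + cube[1::2]) / np.sqrt(2)
    hi = (cube[0::2] - cube[1::2]) / np.sqrt(2)
    return lo, hi

# Spectrally smooth synthetic cube: adjacent bands nearly identical.
bands = np.linspace(0.0, 1.0, 16)[:, None, None]
cube = np.ones((16, 8, 8)) * (1.0 + bands)
lo, hi = haar_spectral(cube)
compaction = (hi ** 2).sum() / (cube ** 2).sum()   # tiny fraction in hi
```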

  14. Enhancements to MPEG4 MVC for depth compression

    NASA Astrophysics Data System (ADS)

    Iyer, Kiran Nanjunda; Maiti, Kausik; Navathe, Bilva Bhalchandra; Sharma, Anshul; Bopardikar, Ajit

    2010-07-01

    Depth maps are expected to be an essential component of upcoming 3D video formats. In a multiview scenario, along with color (texture), the amount of depth information will also increase linearly with the number of views. Therefore various techniques are being explored in the research community to efficiently compress the depth data. In this paper, we propose novel methods of depth compression based on the MPEG4 Multiview Video Coding standard (MVC) without any substantial increase in computational complexity. Our aim is to improve depth coding gain with minimal modification to the standard. We present experimental results which indicate a considerable coding gain when compared with MVC.

  15. Wavelet-based image compression using fixed residual value

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2000-12-01

    Wavelet-based compression is gaining popularity due to its promising compaction properties at low bitrate. The zerotree wavelet image coding scheme efficiently exploits multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. Contrary to the four symbols present in the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for a given data set. The subordinate pass of EZW is eliminated and replaced with fixed residual value transmission for easy implementation. This modification simplifies the coding technique, speeds up the process, and retains the property of embeddedness.

  16. Optimal lift force on vesicles near a compressible substrate

    NASA Astrophysics Data System (ADS)

    Beaucourt, J.; Biben, T.; Misbah, C.

    2004-08-01

    The dynamics of vesicles near a compressible substrate mimicking the glycocalyx layer of the internal part of blood vessels reveals the existence of an optimal lift force due to an elasto-hydrodynamic coupling between the counter flow and the deformation of the wall. An estimation of the order of magnitude of the optimal elastic modulus reveals that it lies within the physiological range, which may have important consequences for the dynamics of blood cells (leucocytes or red blood cells).

  17. Detection of the Compressed Primary Stellar Wind in eta Carinae

    NASA Technical Reports Server (NTRS)

    Teodoro, M.; Madura, T. I.; Gull, T. R.; Corcoran, M. F.; Hamaguchi, K.

    2013-01-01

    A series of three Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from η Carinae. We identify these arcs with the shell-like structures, seen in the 3D hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.

  18. DETECTION OF THE COMPRESSED PRIMARY STELLAR WIND IN η CARINAE

    SciTech Connect

    Teodoro, M.; Madura, T. I.; Gull, T. R.; Corcoran, M. F.; Hamaguchi, K.

    2013-08-10

    A series of three Hubble Space Telescope/Space Telescope Imaging Spectrograph spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from η Carinae. We identify these arcs with the shell-like structures, seen in the three-dimensional hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.

  19. The interaction of hydrodynamic shocks with self-gravitating clouds

    NASA Astrophysics Data System (ADS)

    Falle, S. A. E. G.; Vaidya, B.; Hartquist, T. W.

    2017-02-01

    We describe the results of 3D simulations of the interaction of hydrodynamic shocks with Bonnor-Ebert spheres performed with an adaptive mesh refinement code. The calculations are isothermal and the clouds are embedded in a medium in which the sound speed is either 4 or 10 times that in the cloud. The strengths of the shocks are such that they induce gravitational collapse in some cases and not in others, and we derive a simple estimate for the shock strength required for this to occur. These results are relevant to dense cores and Bok globules in star-forming regions subjected to shocks produced by stellar feedback.

  20. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of the upcoming commercial products in the market place which will be the main factors in setting the stage for the future of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. Television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  1. Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids

    SciTech Connect

    Donev, A; Alder, B J; Garcia, A L

    2008-02-26

    A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.

  2. Low torque hydrodynamic lip geometry for bi-directional rotation seals

    DOEpatents

    Dietle, Lannie L [Houston, TX; Schroeder, John E [Richmond, TX

    2011-11-15

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  3. Low torque hydrodynamic lip geometry for bi-directional rotation seals

    DOEpatents

    Dietle, Lannie L.; Schroeder, John E.

    2009-07-21

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  4. Three-dimensional image compression with integer wavelet transforms.

    PubMed

    Bilgin, A; Zweig, G; Marcellin, M W

    2000-04-10

    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.
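    The lossy-and-lossless-from-one-bit-stream capability described above rests on the integer wavelet transform being exactly invertible in integer arithmetic. The simplest member of that family, the S-transform (an integer Haar via lifting), is sketched below for illustration; the article's transforms are higher-order relatives of the same construction.

```python
# S-transform: integer Haar wavelet via lifting. Outputs are integers and
# the transform is exactly invertible, which is the property that enables
# lossless decoding.
import numpy as np

def s_transform(x):
    """x: 1-D integer array of even length -> (low, high) integer subbands."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    high = a - b
    low = b + (high >> 1)          # equals floor((a + b) / 2)
    return low, high

def inverse_s_transform(low, high):
    """Exact integer inverse of s_transform."""
    b = low - (high >> 1)
    a = high + b
    out = np.empty(2 * len(low), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out
```

    Because the rounding in the forward lifting step is undone exactly by the same rounding in the inverse step, no information is lost even though the averages are truncated.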

  5. Three-Dimensional Image Compression With Integer Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Bilgin, Ali; Zweig, George; Marcellin, Michael W.

    2000-04-01

    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

  6. A comparison of SPH schemes for the compressible Euler equations

    NASA Astrophysics Data System (ADS)

    Puri, Kunal; Ramachandran, Prabhu

    2014-01-01

    We review the current state-of-the-art Smoothed Particle Hydrodynamics (SPH) schemes for the compressible Euler equations. We identify three prototypical schemes and apply them to a suite of test problems in one and two dimensions. The schemes are, in order: standard SPH with an adaptive density kernel estimation (ADKE) technique introduced by Sigalotti et al. (2008) [44], the variational SPH formulation of Price (2012) [33] (referred to herein as the MPM scheme), and the Godunov-type SPH (GSPH) scheme of Inutsuka (2002) [12]. The tests investigate the accuracy of the inviscid discretizations, shock capturing ability and the particle settling behavior. The schemes are found to produce nearly identical results for the 1D shock tube problems, with the MPM and GSPH schemes being the most robust. The ADKE scheme requires parameter values which must be tuned to the problem at hand. We propose the addition of an artificial heating term to the GSPH scheme to eliminate unphysical spikes in the thermal energy at the contact discontinuity. The resulting modification is simple and can be readily incorporated in existing codes. In two dimensions, the differences between the schemes are more evident, with the quality of results determined by the particle distribution. In particular, the ADKE scheme shows signs of particle clumping and irregular motion for the 2D strong shock and Sedov point explosion tests. The noise in particle data is linked with the particle distribution, which remains regular for the Hamiltonian formulations (MPM and GSPH) and becomes irregular for the ADKE scheme. In the interest of reproducibility, we make available our implementation of the algorithms and test problems discussed in this work.
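    All three schemes compared above start from the SPH summation density and differ downstream, in smoothing-length adaptation, momentum equations, and dissipation. A minimal 1-D summation-density sketch with the standard M4 cubic spline kernel (particle setup illustrative, not one of the paper's test problems):

```python
# 1-D SPH summation density: rho_i = sum_j m_j W(|x_i - x_j|, h).
import numpy as np

def cubic_spline_1d(q, h):
    """Standard M4 cubic spline kernel; 1-D normalization 2/(3h)."""
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 * (1.0 - q / 2.0),
         np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def summation_density(x, m, h):
    """Densities for all particles (O(N^2) pairwise sum; fine for a demo)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * cubic_spline_1d(q, h)).sum(axis=1)
```

    For uniformly spaced particles of spacing dx and mass rho0*dx, interior particles recover rho0 to better than a percent, while densities dip near the domain edges where the kernel support is truncated.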

  7. An algorithm for compression of bilevel images.

    PubMed

    Reavy, M D; Boncelet, C G

    2001-01-01

    This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC): a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), is 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and is 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
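    The context-modelling mechanism described above can be sketched generically: a table indexed by a k-bit causal context holds adaptive counts from which p(1) is estimated. The Laplace-smoothed update rule below is a stand-in for illustration, not BACIC's actual estimator.

```python
# Generic adaptive context model: one probability estimate per k-bit
# context of previously coded pixels. The estimator here is simple
# Laplace-smoothed counting (an assumed stand-in, not BACIC's own rule).

class ContextModel:
    def __init__(self, context_bits=12):
        n = 1 << context_bits       # one table entry per context
        self.ones = [1] * n         # smoothed count of 1-bits seen
        self.total = [2] * n        # smoothed total bits seen

    def p1(self, ctx):
        """Estimated probability that the next bit is 1 in context ctx."""
        return self.ones[ctx] / self.total[ctx]

    def update(self, ctx, bit):
        """Fold an observed bit back into the table (adaptation)."""
        self.ones[ctx] += bit
        self.total[ctx] += 1

model = ContextModel()
for bit in [1, 1, 1, 0, 1]:        # five bits observed in context 0
    model.update(0, bit)
# p1(0) is now (1 + 4) / (2 + 5) = 5/7; untouched contexts stay at 1/2
```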

  8. Lossless compression of instrumentation data. Final report

    SciTech Connect

    Stearns, S.D.

    1995-11-01

    This is our final report on Sandia National Laboratories Laboratory-Directed Research and Development (LDRD) project 3517.070. Its purpose has been to investigate lossless compression of digital waveform and image data, particularly the types of instrumentation data generated and processed at Sandia Labs. The three-year project period ran from October 1992 through September 1995. This report begins with a descriptive overview of data compression, with and without loss, followed by a summary of the activities on the Sandia project, including research at several universities and the development of waveform compression software. Persons who participated in the project are also listed. The next part of the report contains a general discussion of the principles of lossless compression. Two basic compression stages, decorrelation and entropy coding, are described and discussed. An example of seismic data compression is included. Finally, there is a bibliography of published research. Taken together, the published papers contain the details of most of the work and accomplishments on the project. This final report is primarily an overview, without the technical details and results found in the publications listed in the bibliography.
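    The two-stage structure described above, decorrelation followed by entropy coding, can be demonstrated in a few lines. A first-difference predictor stands in for the decorrelation stage, and the drop in zeroth-order empirical entropy bounds the gain then available to the entropy coder. The waveform is synthetic and illustrative:

```python
# Decorrelation + entropy-coding gain, in miniature: first differences of
# a smooth waveform cluster near zero, so their empirical entropy is far
# below that of the raw samples.
import numpy as np

def empirical_entropy(symbols):
    """Zeroth-order entropy in bits/symbol of an integer sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

t = np.arange(4096)
signal = np.round(1000 * np.sin(2 * np.pi * t / 512)).astype(np.int64)
residual = np.diff(signal)            # decorrelation stage

h_raw = empirical_entropy(signal)     # bits/sample before decorrelation
h_res = empirical_entropy(residual)   # much smaller: residuals near zero
```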

  9. Issues in multiview autostereoscopic image compression

    NASA Astrophysics Data System (ADS)

    Shah, Druti; Dodgson, Neil A.

    2001-06-01

    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
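    The disparity-estimation idea can be sketched as block matching between adjacent views. The paper reports that correlation and least-squares measures performed equally well; the sketch below uses the least-squares (sum-of-squared-differences) form, with block size and search range chosen arbitrarily for illustration:

```python
def estimate_disparity(view_a, view_b, r, c, block=3, max_d=6):
    """Find the horizontal shift d that best aligns a block of view_a
    at (r, c) with view_b, by minimizing the sum of squared differences
    along the scanline (illustrative block-matching sketch)."""
    def ssd(d):
        err = 0
        for i in range(block):
            for j in range(block):
                err += (view_a[r + i][c + j] - view_b[r + i][c + j + d]) ** 2
        return err

    # restrict candidate shifts to those that keep the block in bounds
    candidates = [d for d in range(-max_d, max_d + 1)
                  if 0 <= c + d and c + d + block <= len(view_b[0])]
    return min(candidates, key=ssd)
```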

  10. Hyperspace storage compression for multimedia systems

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.; Lettieri, Alfred; Holtz, Eric S.

    1994-04-01

    Storing multimedia text, speech or images in personal computers now requires very large storage facilities. Data compression eases the problem, but all algorithms based on Shannon's information theory will distort the data with increased compression. Autosophy, an emerging science of `self-assembling structures', provides a new mathematical theory of `learning' and a new `information theory'. `Lossless' data compression is achieved by storing data in mathematically omni dimensional hyperspace. Such algorithms are already used in disc file compression and V.42 bis modems. Speech can be compressed using similar methods. `Lossless' autosophy image compression has been implemented and tested in an IBM PC (486), confirming the algorithms and theoretical predictions of the new `information theory'. Computer graphics frames or television images are disassembled into `known' fragments for storage in an omni dimensional hyperspace library. Each unique fragment is used only once. Each image frame is converted into a single output code which is later used for image retrieval. The hyperspace image library is stored on a disc. Experimental data confirms that hyperspace storage is independent of image size, resolution or frame rate; depending solely on `novelty' or `movement' within the images. The new algorithms promise dramatic improvements in all multimedia data storage.

  11. Real-Time Digital Compression Of Television Image Data

    NASA Technical Reports Server (NTRS)

    Barnes, Scott P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1990-01-01

    Digital encoding/decoding system compresses color television image data in real time for transmission at lower data rates and, consequently, lower bandwidths. Implements predictive coding process, in which each picture element (pixel) is predicted from values of prior neighboring pixels, and coded transmission expresses the difference between actual and predicted current values. Combines differential pulse-code modulation process with nonlinear, nonadaptive predictor, nonuniform quantizer, and multilevel Huffman encoder.
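    The predictive loop can be sketched as follows (Python; a uniform quantizer and previous-sample predictor stand in for the nonlinear predictor, nonuniform quantizer, and Huffman stage of the actual system):

```python
def dpcm_encode(samples, step=4):
    # Predict each sample from the decoder-visible reconstruction of the
    # previous one, quantize the prediction difference, and emit the code.
    pred, codes = 0, []
    for s in samples:
        d = s - pred
        q = round(d / step)         # uniform quantizer (illustrative)
        codes.append(q)
        pred = pred + q * step      # track what the decoder will see
    return codes

def dpcm_decode(codes, step=4):
    # Mirror of the encoder's prediction loop.
    pred, out = 0, []
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out
```

    Because the encoder predicts from the reconstructed (not original) values, quantization error never accumulates: each decoded sample stays within half a quantizer step of the input.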

  12. Early hydrodynamic evolution of a stellar collision

    SciTech Connect

    Kushnir, Doron; Katz, Boaz

    2014-04-20

    The early phase of the hydrodynamic evolution following the collision of two stars is analyzed. Two strong shocks propagate from the contact surface and move toward the center of each star at a velocity that is a small fraction of the velocity of the approaching stars. The shocked region near the contact surface has a planar symmetry and a uniform pressure. The density vanishes at the (Lagrangian) surface of contact, and the speed of sound diverges there. The temperature, however, reaches a finite value, since as the density vanishes, the finite pressure is radiation dominated. For carbon-oxygen white dwarf (CO WD) collisions, this temperature is too low for any appreciable nuclear burning shortly after the collision, which allows for a significant fraction of the mass to be highly compressed to the density required for efficient ⁵⁶Ni production in the detonation wave that follows. This property is crucial for the viability of collisions of typical CO WDs as progenitors of type Ia supernovae, since otherwise only massive (>0.9 M☉) CO WDs would have led to such explosions (as required by all other progenitor models). The divergence of the speed of sound limits numerical studies of stellar collisions, as it makes convergence tests exceedingly expensive unless dedicated schemes are used. We provide a new one-dimensional Lagrangian numerical scheme to achieve this. A self-similar planar solution is derived for zero-impact parameter collisions between two identical stars, under some simplifying assumptions (including a power-law density profile), which is the planar version of previous piston problems that were studied in cylindrical and spherical symmetries.

  13. Annual Report: Hydrodynamics and Radiative Hydrodynamics with Astrophysical Applications

    SciTech Connect

    R. Paul Drake

    2005-12-01

    We report the ongoing work of our group in hydrodynamics and radiative hydrodynamics with astrophysical applications. During the period of the existing grant, we have carried out two types of experiments at the Omega laser. One set of experiments has studied radiatively collapsing shocks, obtaining high-quality scaling data using a backlit pinhole and obtaining the first (ever, anywhere) Thomson-scattering data from a radiative shock. Other experiments have studied the deeply nonlinear development of the Rayleigh-Taylor (RT) instability from complex initial conditions, obtaining the first (ever, anywhere) dual-axis radiographic data using backlit pinholes and ungated detectors. All these experiments have applications to astrophysics, discussed in the corresponding papers either in print or in preparation. We also have obtained preliminary radiographs of experimental targets using our x-ray source. The targets for the experiments have been assembled at Michigan, where we also prepare many of the simple components. The above activities, in addition to a variety of data analysis and design projects, provide good experience for graduate and undergraduate students. In the process of doing this research we have built a research group that uses such work to train junior scientists.

  14. Forced wetting and hydrodynamic assist

    NASA Astrophysics Data System (ADS)

    Blake, Terence D.; Fernandez-Toledano, Juan-Carlos; Doyen, Guillaume; De Coninck, Joël

    2015-11-01

    Wetting is a prerequisite for coating a uniform layer of liquid onto a solid. Wetting failure and air entrainment set the ultimate limit to coating speed. It is well known in the coating art that this limit can be postponed by manipulating the coating flow to generate what has been termed "hydrodynamic assist," but the underlying mechanism is unclear. Experiments have shown that the conditions that postpone air entrainment also reduce the apparent dynamic contact angle, suggesting a direct link, but how the flow might affect the contact angle remains to be established. Here, we use molecular dynamics to compare the outcome of steady forced wetting with previous results for the spontaneous spreading of liquid drops and apply the molecular-kinetic theory of dynamic wetting to rationalize our findings and place them on a quantitative footing. The forced wetting simulations reveal significant slip at the solid-liquid interface and details of the flow immediately adjacent to the moving contact line. Our results confirm that the local, microscopic contact angle is dependent not simply on the velocity of wetting but also on the nature of the flow that drives it. In particular, they support an earlier suggestion that during forced wetting, an intense shear stress in the vicinity of the contact line can assist surface tension forces in promoting dynamic wetting, thus reducing the velocity-dependence of the contact angle. Hydrodynamic assist then appears as a natural consequence of wetting that emerges when the contact line is driven by a strong and highly confined flow. Our theoretical approach also provides a self-consistent model of molecular slip at the solid-liquid interface that enables its magnitude to be estimated from dynamic contact angle measurements. In addition, the model predicts how hydrodynamic assist and slip may be influenced by liquid viscosity and solid-liquid interactions.

  15. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
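    The vector-quantization half of that combination is compact to illustrate (a Python sketch; the codebook here is fixed, whereas the dynamic scheme described above would learn it on line):

```python
def vq_encode(blocks, codebook):
    """Map each input block to the index of its nearest codeword under
    squared error. Transmitting indices instead of blocks is the lossy
    compression step; the decoder looks the codewords back up."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist(b, codebook[i]))
            for b in blocks]
```

    The per-block nearest-codeword searches are independent of one another, which is what makes the algorithm natural to distribute across a massively parallel machine like the MPP.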

  16. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
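    The wavelet building block can be conveyed with a single Haar analysis step (Python; a deliberately simple stand-in for the actual filters ICER-3D uses). Applying such a 1D step separably along rows, columns, and wavelength bands yields a 3D decomposition of the kind described above:

```python
def haar_level(x):
    # One level of Haar analysis: low-pass averages and high-pass
    # differences of adjacent sample pairs (x must have even length).
    lo = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return lo, hi

def haar_level_inverse(lo, hi):
    # Exact synthesis step: the transform is invertible, which is what
    # allows a wavelet coder to offer a lossless mode.
    x = []
    for a, d in zip(lo, hi):
        x.extend([a + d, a - d])
    return x
```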

  17. Hydrodynamic instability experiments and simulations

    SciTech Connect

    Dimonte, G.; Schneider, M.; Frerking, C.E.

    1995-07-01

    Richtmyer-Meshkov experiments are conducted on the Nova laser with strong radiatively driven shocks (Mach > 20) in planar, two-fluid targets with Atwood number A < 0. Single mode interfacial perturbations are used to test linear theory and 3D random perturbations are used to study turbulent mix. Rayleigh-Taylor experiments are conducted on a new facility called the Linear Electric Motor (LEM) in which macroscopic fluids are accelerated electromagnetically with arbitrary acceleration profiles. The initial experiments are described. Hydrodynamic simulations in 2D are in reasonable agreement with the experiments, but these studies show that simulations in 3D with good radiation transport and equation of state are needed.

  18. Hydrodynamic Synchronisation of Model Microswimmers

    NASA Astrophysics Data System (ADS)

    Putz, V. B.; Yeomans, J. M.

    2009-12-01

    We define a model microswimmer with a variable cycle time, thus allowing the possibility of phase locking driven by hydrodynamic interactions between swimmers. We find that, for extensile or contractile swimmers, phase locking does occur, with the relative phase of the two swimmers being, in general, close to 0 or π, depending on their relative position and orientation. We show that, as expected on grounds of symmetry, self T-dual swimmers, which are time-reversal covariant, do not phase-lock. We also discuss the phase behaviour of a line of tethered swimmers, or pumps. These show oscillations in their relative phases reminiscent of the metachronal waves of cilia.

  19. Hydrodynamics of post CHF region

    SciTech Connect

    Ishii, M.; De Jarlais, G.

    1984-04-01

    Among various two-phase flow regimes, the inverted flow in the post-dryout region is relatively less well understood due to its special heat transfer conditions. The review of existing data indicates further research is needed in the areas of basic hydrodynamics related to liquid core disintegration mechanisms, slug and droplet formations, entrainment, and droplet size distributions. In view of this, the inverted flow is studied in detail both analytically and experimentally. Criteria for initial flow regimes in the post-dryout region are given. Preliminary models for subsequent flow regime transition criteria are derived together with correlations for a mean droplet diameter based on the adiabatic simulation data.

  20. Numerical Hydrodynamics in General Relativity.

    PubMed

    Font, José A

    2000-01-01

    The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A representative sample of available numerical schemes is discussed and particular emphasis is paid to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of relevant astrophysical simulations in strong gravitational fields, including gravitational collapse, accretion onto black holes and evolution of neutron stars, is also presented.
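    The flavor of the characteristic-based schemes the review surveys can be conveyed with the simplest possible case, first-order Godunov upwinding for scalar linear advection (Python; a non-relativistic illustration only, not one of the relativistic solvers discussed):

```python
def upwind_step(u, a, dt, dx):
    """One first-order Godunov step for u_t + a u_x = 0 on a periodic
    grid. The Riemann problem at each cell interface is solved exactly
    by taking the upwind state, i.e. following the characteristic."""
    n = len(u)
    # flux[i] is the flux through the left edge of cell i
    flux = [a * (u[i - 1] if a > 0 else u[i]) for i in range(n)]
    # conservative finite-volume update
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
```

    Because the update is in flux (conservation) form, the scheme conserves the total of u exactly, the discrete analogue of the conservative formulations the review emphasizes.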

  1. Compression of a spherically symmetric deuterium-tritium plasma liner onto a magnetized deuterium-tritium target

    SciTech Connect

    Santarius, J. F.

    2012-07-15

    Converging plasma jets may be able to reach the regime of high energy density plasmas (HEDP). The successful application of plasma jets to magneto-inertial fusion (MIF) would heat the plasma by fusion products and should increase the plasma energy density. This paper reports the results of using the University of Wisconsin's 1-D Lagrangian, radiation-hydrodynamics, fusion code BUCKY to investigate two MIF converging plasma jet test cases originally analyzed by Samulyak et al. [Physics of Plasmas 17, 092702 (2010)]. In these cases, 15 cm or 5 cm radially thick deuterium-tritium (DT) plasma jets merge at 60 cm from the origin and converge radially onto a DT target magnetized to 2 T and of radius 5 cm. The BUCKY calculations reported here model these cases, starting from the time of initial contact of the jets and target. Compared to the one-temperature Samulyak et al. calculations, the one-temperature BUCKY results show similar behavior, except that the plasma radius remains about twice as long near maximum compression. One-temperature and two-temperature BUCKY results differ, reflecting the sensitivity of the calculations to timing and plasma parameter details, with the two-temperature case giving a more sustained compression.

  2. Foundation of Hydrodynamics of Strongly Interacting Systems

    SciTech Connect

    Wong, Cheuk-Yin

    2014-01-01

    Hydrodynamics and quantum mechanics have many elements in common, as the density field and velocity fields are common variables that can be constructed in both descriptions. Starting with the Schroedinger equation and the Klein-Gordon equation for a single particle in hydrodynamical form, we examine the basic assumptions under which a quantum system of particles interacting through their mean fields can be described by hydrodynamics.

  3. Microscale hydrodynamics near moving contact lines

    NASA Technical Reports Server (NTRS)

    Garoff, Stephen; Chen, Q.; Rame, Enrique; Willson, K. R.

    1994-01-01

    The hydrodynamics governing the fluid motions on a microscopic scale near moving contact lines are different from those governing motion far from the contact line. We explore these unique hydrodynamics by detailed measurement of the shape of a fluid meniscus very close to a moving contact line. The validity of present models of the hydrodynamics near moving contact lines as well as the dynamic wetting characteristics of a family of polymer liquids are discussed.

  4. Thermal transport in a noncommutative hydrodynamics

    SciTech Connect

    Geracie, M. Son, D. T.

    2015-03-15

    We find the hydrodynamic equations of a system of particles constrained to be in the lowest Landau level. We interpret the hydrodynamic theory as a Hamiltonian system with the Poisson brackets between the hydrodynamic variables determined from the noncommutativity of space. We argue that the most general hydrodynamic theory can be obtained from this Hamiltonian system by allowing the Righi-Leduc coefficient to be an arbitrary function of thermodynamic variables. We compute the Righi-Leduc coefficient at high temperatures and show that it satisfies the requirements of particle-hole symmetry, which we outline.

  5. High-fidelity plasma codes for burn physics

    SciTech Connect

    Cooley, James; Graziani, Frank; Marinak, Marty; Murillo, Michael

    2016-10-19

    Accurate predictions of the equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they may have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  6. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    SciTech Connect

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien; Ruehl, Kelley; Roy, Andre; Costello, Ronan; Laporte Weywada, Pauline; Bailey, Helen

    2017-01-01

    This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of a code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency domain modelling tools were not included in the WEC3 project.

  7. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  8. Hydrodynamic models of a Cepheid atmosphere. I - Deep envelope models

    NASA Technical Reports Server (NTRS)

    Karp, A. H.

    1975-01-01

    The implicit hydrodynamic code of Kutter and Sparks has been modified to include radiative transfer effects. This modified code has been used to compute deep envelope models of a classical Cepheid with a period of 12 days. It is shown that in this particular model the hydrogen ionization region plays only a small role in producing the observed phase lag between the light and velocity curves. The cause of the bumps on the model's light curve is examined, and a mechanism is presented to explain those Cepheids with two secondary features on their light curves. This mechanism is shown to be consistent with the Hertzsprung sequence only if the evolutionary mass-luminosity law is used.

  9. Hydromechanical transmission with hydrodynamic drive

    DOEpatents

    Orshansky, Jr., deceased, Elias; Weseloh, William E.

    1979-01-01

    This transmission has a first planetary gear assembly having first input means connected to an input shaft, first output means, and first reaction means, and a second planetary gear assembly having second input means connected to the first input means, second output means, and second reaction means connected directly to the first reaction means by a reaction shaft. First clutch means, when engaged, connect the first output means to an output shaft in a high driving range. A hydrodynamic drive is used; for example, a torque converter, which may or may not have a stationary case, has a pump connected to the second output means, a stator grounded by an overrunning clutch to the case, and a turbine connected to an output member, and may be used in a starting phase. Alternatively, a fluid coupling or other type of hydrodynamic drive may be used. Second clutch means, when engaged, connect the output member to the output shaft in a low driving range. A variable-displacement hydraulic unit is mechanically connected to the input shaft, and a fixed-displacement hydraulic unit is mechanically connected to the reaction shaft. The hydraulic units are hydraulically connected together so that when one operates as a pump the other acts as a motor, and vice versa. Both clutch means are connected to the output shaft through a forward-reverse shift arrangement. It is possible to lock out the torque converter after the starting phase is over.

  10. Inducer Hydrodynamic Load Measurement Devices

    NASA Technical Reports Server (NTRS)

    Skelley, Stephen E.; Zoladz, Thomas F.

    2002-01-01

    Marshall Space Flight Center (MSFC) has demonstrated two measurement devices for sensing and resolving the hydrodynamic loads on fluid machinery. The first - a derivative of the six component wind tunnel balance - senses the forces and moments on the rotating device through a weakened shaft section instrumented with a series of strain gauges. This "rotating balance" was designed to directly measure the steady and unsteady hydrodynamic loads on an inducer, thereby defining both the amplitude and frequency content associated with operating in various cavitation modes. The second device - a high frequency response pressure transducer surface mounted on a rotating component - was merely an extension of existing technology for application in water. MSFC has recently completed experimental evaluations of both the rotating balance and surface-mount transducers in a water test loop. The measurement bandwidth of the rotating balance was severely limited by the relative flexibility of the device itself, resulting in an unexpectedly low structural bending mode and invalidating the higher frequency response data. Despite these limitations, measurements confirmed that the integrated loads on the four-bladed inducer respond to both cavitation intensity and cavitation phenomena. Likewise, the surface-mount pressure transducers were subjected to a range of temperatures and flow conditions in a non-rotating environment to record bias shifts and transfer functions between the transducers and a reference device. The pressure transducer static performance was within manufacturer's specifications and dynamic response accurately followed that of the reference.

  11. The hydrodynamics of lamprey locomotion

    NASA Astrophysics Data System (ADS)

    Leftwich, Megan C.

    The lamprey, an anguilliform swimmer, propels itself by undulating most of its body. This type of swimming produces flow patterns that are highly three-dimensional in nature and not very well understood. However, substantial previous work has been done to understand two-dimensional unsteady propulsion, the possible wake structures and thrust performance. Limited studies of three-dimensional propulsors with simple geometries have displayed the importance of the third dimension in designing unsteady swimmers. Some of the results of those studies, primarily the ways in which vorticity is organized in the wake region, are seen in lamprey swimming as well. In the current work, the third dimension is not the only important factor, but complex geometry and body undulations also contribute to the hydrodynamics. Through dye flow visualization, particle image velocimetry and pressure measurements, the hydrodynamics of anguilliform swimming are studied using a custom built robotic lamprey. These studies all indicate that the undulations of the body are not producing thrust. Instead, it is the tail which acts to propel the animal. This conclusion led to further investigation of the tail, specifically the role of varying tail flexibility on hydrodynamics. It is found that by making the tail more flexible, one decreases the coherence of the vorticity in the lamprey's wake. Additional flexibility also yields less thrust.

  12. Fluctuating Hydrodynamics of Electrolytes Solutions

    NASA Astrophysics Data System (ADS)

    Peraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-01

    In this work, we develop a numerical method for multicomponent solutions featuring electrolytes, in the context of fluctuating hydrodynamics as modeled by the Landau-Lifshitz Navier Stokes equations. Starting from a previously developed numerical scheme for multicomponent low Mach number fluctuating hydrodynamics, we study the effect of the additional forcing terms induced by charged species. We validate our numerical approach with additional theoretical considerations and with examples involving sodium-chloride solutions, with length scales close to the Debye length. In particular, we show how charged species modify the structure factors of the fluctuations, both in equilibrium and non-equilibrium (giant fluctuations) systems, and show that the former is consistent with Debye-Huckel theory. We also discuss the consistency of this approach with the electroneutral approximation in regimes where characteristic length scales are significantly larger than the Debye length. Finally, we use this method to explore a type of electrokinetic instability. This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.

  13. Inducer Hydrodynamic Load Measurement Devices

    NASA Technical Reports Server (NTRS)

    Skelley, Stephen E.; Zoladz, Thomas F.; Turner, Jim (Technical Monitor)

    2002-01-01

    Marshall Space Flight Center (MSFC) has demonstrated two measurement devices for sensing and resolving the hydrodynamic loads on fluid machinery. The first - a derivative of the six-component wind tunnel balance - senses the forces and moments on the rotating device through a weakened shaft section instrumented with a series of strain gauges. This rotating balance was designed to directly measure the steady and unsteady hydrodynamic loads on an inducer, thereby defining both the amplitude and frequency content associated with operating in various cavitation modes. The second device - a high frequency response pressure transducer surface mounted on a rotating component - was merely an extension of existing technology for application in water. MSFC has recently completed experimental evaluations of both the rotating balance and surface-mount transducers in a water test loop. The measurement bandwidth of the rotating balance was severely limited by the relative flexibility of the device itself, resulting in an unexpectedly low structural bending mode and invalidating the higher-frequency response data. Despite these limitations, measurements confirmed that the integrated loads on the four-bladed inducer respond to both cavitation intensity and cavitation phenomena. Likewise, the surface-mount pressure transducers were subjected to a range of temperatures and flow conditions in a non-rotating environment to record bias shifts and transfer functions between the transducers and a reference device. The pressure transducer static performance was within manufacturer's specifications and dynamic response accurately followed that of the reference.

  14. Hydrodynamic dispersion within porous biofilms

    NASA Astrophysics Data System (ADS)

    Davit, Y.; Byrne, H.; Osborne, J.; Pitt-Francis, J.; Gavaghan, D.; Quintard, M.

    2013-01-01

    Many microorganisms live within surface-associated consortia, termed biofilms, that can form intricate porous structures interspersed with a network of fluid channels. In such systems, transport phenomena, including flow and advection, regulate various aspects of cell behavior by controlling nutrient supply, evacuation of waste products, and permeation of antimicrobial agents. This study presents multiscale analysis of solute transport in these porous biofilms. We start our analysis with a channel-scale description of mass transport and use the method of volume averaging to derive a set of homogenized equations at the biofilm-scale in the case where the width of the channels is significantly smaller than the thickness of the biofilm. We show that solute transport may be described via two coupled partial differential equations or telegrapher's equations for the averaged concentrations. These models are particularly relevant for chemicals, such as some antimicrobial agents, that penetrate cell clusters very slowly. In most cases, especially for nutrients, solute penetration is faster, and transport can be described via an advection-dispersion equation. In this simpler case, the effective diffusion is characterized by a second-order tensor whose components depend on (1) the topology of the channels' network; (2) the solute's diffusion coefficients in the fluid and the cell clusters; (3) hydrodynamic dispersion effects; and (4) an additional dispersion term intrinsic to the two-phase configuration. Although solute transport in biofilms is commonly thought to be diffusion dominated, this analysis shows that hydrodynamic dispersion effects may significantly contribute to transport.
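    In the simpler, advection-dominated regime described above, the averaged concentration ⟨c⟩ satisfies an equation of the standard advection-dispersion form (notation illustrative; the paper's own symbols may differ):

```latex
\frac{\partial \langle c \rangle}{\partial t}
  + \nabla \cdot \left( \langle \mathbf{v} \rangle \, \langle c \rangle \right)
  = \nabla \cdot \left( \mathbf{D}^{*} \cdot \nabla \langle c \rangle \right)
```

    where the second-order effective dispersion tensor D* collects the four contributions enumerated above: channel-network topology, phase diffusion coefficients, hydrodynamic dispersion, and the two-phase configuration term.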

  15. Novel wavelet coder for color image compression

    NASA Astrophysics Data System (ADS)

    Wang, Houng-Jyh M.; Kuo, C.-C. Jay

    1997-10-01

    A new still image compression algorithm based on the multi-threshold wavelet coding (MTWC) technique is proposed in this work. It is an embedded wavelet coder in the sense that its compression ratio can be controlled according to the bandwidth requirement of image transmission. At low bit rates, MTWC avoids the blocking artifacts of JPEG and thus yields better reconstructed image quality. A subband decision scheme based on rate-distortion theory is developed to enhance image fidelity. Moreover, a new quantization sequence order is introduced based on our analysis of error energy reduction in significance and refinement maps. Experimental results demonstrate the superior performance of the proposed algorithm: high reconstruction quality for color and gray-level image compression with low computational complexity. Generally speaking, it gives a better rate-distortion tradeoff and runs faster than most existing state-of-the-art wavelet coders.
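    The core idea of an embedded multi-threshold coder is easy to illustrate: transform the signal, then sweep a halving threshold so that the most significant coefficients are emitted first and the stream can be cut at any point. The sketch below is a minimal toy version (a 1D Haar transform plus significance passes only; the function names, the pass structure, and the omission of refinement bits are simplifications, not the MTWC algorithm itself).

```python
# Toy sketch of embedded multi-threshold coding of wavelet coefficients.
# NOT the MTWC algorithm: 1D only, one Haar level, refinement pass omitted.

def haar_1d(signal):
    """One level of an (unnormalized) Haar transform: averages then details."""
    avg = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avg + det

def multi_threshold_passes(coeffs, n_passes=3):
    """Emit (pass, index, sign) triples as the threshold halves each pass.

    Coefficients become 'significant' in order of magnitude, so truncating
    the returned stream early still yields a coarse reconstruction -- the
    embedded property the abstract refers to.
    """
    t = max(abs(c) for c in coeffs) / 2.0
    significant = set()
    stream = []
    for p in range(n_passes):
        for i, c in enumerate(coeffs):
            if i not in significant and abs(c) >= t:
                significant.add(i)
                stream.append((p, i, 1 if c >= 0 else -1))
        t /= 2.0
    return stream

coeffs = haar_1d([9, 7, 3, 5, 6, 10, 2, 6])
print(multi_threshold_passes(coeffs))
```

    Note how all large (coarse-scale) coefficients appear in pass 0 and the small detail coefficients only in later passes; a real coder would interleave refinement bits for already-significant coefficients, which is exactly where the paper's quantization sequence order matters.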

  16. Bayesian resolution enhancement of compressed video.

    PubMed

    Segall, C Andrew; Katsaggelos, Aggelos K; Molina, Rafael; Mateos, Javier

    2004-07-01

    Super-resolution algorithms recover high-frequency information from a sequence of low-resolution observations. In this paper, we consider the impact of video compression on the super-resolution task. Hybrid motion-compensation and transform coding schemes are the focus, as these methods provide observations of the underlying displacement values as well as a variable noise process. We utilize the Bayesian framework to incorporate this information and fuse the super-resolution and post-processing problems. A tractable solution is defined, and relationships between algorithm parameters and information in the compressed bitstream are established. The association between resolution recovery and compression ratio is also explored. Simulations illustrate the performance of the procedure with both synthetic and nonsynthetic sequences.
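    The Bayesian fusion the abstract describes amounts to a MAP estimate: a data-fidelity term tying the high-resolution estimate to the observations, plus a prior. The following is a minimal 1D sketch under a pair-averaging observation model and a quadratic smoothness prior (both are illustrative assumptions for exposition, not the paper's compressed-video model, which also uses motion vectors and a compression noise model).

```python
# Minimal 1D MAP-style resolution enhancement (illustrative only):
# recover x from y = D x, where D averages adjacent pairs, with a
# quadratic smoothness prior weighted by lam. Solved by gradient descent.

def downsample(x):
    """Observation model D: average adjacent pairs (an assumed toy model)."""
    return [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]

def map_estimate(y, lam=0.1, steps=500, lr=0.2):
    """Minimize ||D x - y||^2 + lam * sum_i (x[i+1]-x[i])^2."""
    n = 2 * len(y)
    x = [v for v in y for _ in range(2)]      # initial guess: replicate
    for _ in range(steps):
        grad = [0.0] * n
        # Data term: residual r_i spreads equally to both parents of y_i.
        for i, r in enumerate(d - t for d, t in zip(downsample(x), y)):
            grad[2 * i] += r
            grad[2 * i + 1] += r
        # Smoothness prior: penalize first differences.
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            grad[i] -= 2 * lam * d
            grad[i + 1] += 2 * lam * d
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

est = map_estimate([1, 3])   # roughly [0.82, 1.41, 2.59, 3.18]
print(est)
```

    Even in this toy, the prior interpolates plausibly between samples while the data term keeps the pair averages near the observations; the paper's contribution is, in effect, replacing both terms with models informed by the compressed bitstream (motion vectors and quantization noise).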

  17. Inelastic response of silicon to shock compression

    SciTech Connect

    Higginbotham, Andrew; Stubley, P. G.; Comley, A. J.; Eggert, J. H.; Foster, J. M.; Kalantar, D. H.; McGonegle, D.; Patel, S.; Peacock, L. J.; Rothman, S. D.; Smith, R. F.; Suggit, M. J.; Wark, J. S.

    2016-04-13

    The elastic and inelastic response of [001]-oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser-compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure-induced phase transition is the cause of the previously reported ‘anomalous’ elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales of the transition. Lastly, this model is also discussed in the wider context of the reported response of silicon to rapid compression in the literature.

  18. Inelastic response of silicon to shock compression

    DOE PAGES

    Higginbotham, Andrew; Stubley, P. G.; Comley, A. J.; ...

    2016-04-13

    The elastic and inelastic response of [001]-oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser-compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure-induced phase transition is the cause of the previously reported ‘anomalous’ elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales of the transition. Lastly, this model is also discussed in the wider context of the reported response of silicon to rapid compression in the literature.

  19. Inelastic response of silicon to shock compression

    PubMed Central

    Higginbotham, A.; Stubley, P. G.; Comley, A. J.; Eggert, J. H.; Foster, J. M.; Kalantar, D. H.; McGonegle, D.; Patel, S.; Peacock, L. J.; Rothman, S. D.; Smith, R. F.; Suggit, M. J.; Wark, J. S.

    2016-01-01

    The elastic and inelastic response of [001]-oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser-compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure-induced phase transition is the cause of the previously reported ‘anomalous’ elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales of the transition. This model is also discussed in the wider context of the reported response of silicon to rapid compression in the literature. PMID:27071341

  20. Low Mach number fluctuating hydrodynamics of multispecies liquid mixtures

    SciTech Connect

    Donev, Aleksandar; Bhattacharjee, Amit Kumar; Nonaka, Andy; Bell, John B.; Garcia, Alejandro L.

    2015-03-15

    We develop a low Mach number formulation of the hydrodynamic equations describing transport of mass and momentum in a multispecies mixture of incompressible miscible liquids at specified temperature and pressure, which generalizes our prior work on ideal mixtures of ideal gases [Balakrishnan et al., “Fluctuating hydrodynamics of multispecies nonreactive mixtures,” Phys. Rev. E 89, 013017 (2014)] and binary liquid mixtures [Donev et al., “Low Mach number fluctuating hydrodynamics of diffusively mixing fluids,” Commun. Appl. Math. Comput. Sci. 9(1), 47-105 (2014)]. In this formulation, we combine and extend a number of existing descriptions of multispecies transport available in the literature. The formulation applies to non-ideal mixtures of an arbitrary number of species, without the need to single out a “solvent” species, and includes contributions to the diffusive mass flux due to gradients of composition, temperature, and pressure. Momentum transport and advective mass transport are handled using a low Mach number approach that eliminates fast sound waves (pressure fluctuations) from the full compressible system of equations and leads to a quasi-incompressible formulation. Thermal fluctuations are included in our fluctuating hydrodynamics description following the principles of nonequilibrium thermodynamics. We extend the semi-implicit staggered-grid finite-volume numerical method developed in our prior work on binary liquid mixtures [Nonaka et al., “Low Mach number fluctuating hydrodynamics of binary liquid mixtures,” http://arxiv.org/abs/1410.2300 (2015)] and use it to study the development of giant nonequilibrium concentration fluctuations in a ternary mixture subjected to a steady concentration gradient. We also numerically study the development of diffusion-driven gravitational instabilities in a ternary mixture and compare our numerical results to recent experimental measurements [Carballido-Landeira et al., “Mixed-mode instability of a