Science.gov

Sample records for compressible hydrodynamics codes

  1. VH-1: Multidimensional ideal compressible hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Hawley, John; Blondin, John; Lindahl, Greg; Lufkin, Eric

    2012-04-01

    VH-1 is a multidimensional ideal compressible hydrodynamics code written in FORTRAN for use on any computing platform, from desktop workstations to supercomputers. It uses a Lagrangian remap version of the Piecewise Parabolic Method developed by Paul Woodward and Phil Colella in their 1984 paper. VH-1 comes in a variety of versions, from a simple one-dimensional serial variant to a multi-dimensional version scalable to thousands of processors.

  2. Pencil: Finite-difference Code for Compressible Hydrodynamic Flows

    NASA Astrophysics Data System (ADS)

    Brandenburg, Axel; Dobler, Wolfgang

    2010-10-01

    The Pencil code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields. It is highly modular and can easily be adapted to different types of problems. The code runs efficiently under MPI on massively parallel shared- or distributed-memory computers, such as large Beowulf clusters. The Pencil code is primarily designed to deal with weakly compressible turbulent flows. To achieve good parallelization, explicit (as opposed to compact) finite differences are used. Typical scientific targets include driven MHD turbulence in a periodic box, convection in a slab with non-periodic upper and lower boundaries, a convective star embedded in a fully nonperiodic box, accretion disc turbulence in the shearing sheet approximation, self-gravity, non-local radiation transfer, dust particle evolution with feedback on the gas, etc. A range of artificial viscosity and diffusion schemes can be invoked to deal with supersonic flows. For direct simulations, regular viscosity and diffusion are used. The code is written in well-commented Fortran90.
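
As a concrete illustration of the kind of explicit high-order stencil the Pencil code's design favors, here is a minimal sketch (not code from Pencil itself) of a sixth-order centered first derivative on a periodic grid; the test function and grid size are arbitrary choices:

```python
import numpy as np

def deriv6(f, dx):
    """Sixth-order explicit centered first derivative on a periodic grid.
    Stencil weights: (-1/60, 3/20, -3/4, 0, 3/4, -3/20, 1/60) / dx."""
    return (45.0 * (np.roll(f, -1) - np.roll(f, 1))
            - 9.0 * (np.roll(f, -2) - np.roll(f, 2))
            +       (np.roll(f, -3) - np.roll(f, 3))) / (60.0 * dx)

# Differentiate sin(x) on a periodic grid; the exact answer is cos(x).
n = 64
x = 2.0 * np.pi * np.arange(n) / n
dx = x[1] - x[0]
err = np.max(np.abs(deriv6(np.sin(x), dx) - np.cos(x)))
```

With only 64 points the error is already far below single precision, which is why explicit high-order stencils are attractive when the halo exchange they require parallelizes well.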

  3. HYDRODYNAMIC COMPRESSIVE FORGING.

    DTIC Science & Technology

    Keywords: hydrodynamics; forging; compressive properties; lubricants; performance (engineering); dies; tensile properties; molybdenum alloys; strain (mechanics); beryllium alloys; nickel alloys; casting alloys; pressure; failure (mechanics).

  4. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  5. Gasoline2: a modern smoothed particle hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Wadsley, James W.; Keller, Benjamin W.; Quinn, Thomas R.

    2017-10-01

    The methods in the Gasoline2 smoothed particle hydrodynamics (SPH) code are described and tested. Gasoline2 is the most recent version of the Gasoline code for parallel hydrodynamics and gravity with identical hydrodynamics to the Changa code. As with other Modern SPH codes, we prevent sharp jumps in time-steps, use upgraded kernels and larger neighbour numbers and employ local viscosity limiters. Unique features in Gasoline2 include its Geometric Density Average Force expression, explicit Turbulent Diffusion terms and Gradient-Based shock detection to limit artificial viscosity. This last feature allows Gasoline2 to completely avoid artificial viscosity in non-shocking compressive flows. We present a suite of tests demonstrating the value of these features with the same code configuration and parameter choices used for production simulations.
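
For readers unfamiliar with SPH, the following sketch shows the basic kernel density estimate at the heart of any SPH code, using the standard 1D cubic spline kernel; it is illustrative only, and Gasoline2's kernels, neighbour numbers, and force expression differ:

```python
import numpy as np

def w_cubic(q):
    """Standard M4 cubic spline kernel in 1D (normalization 2/3 per unit h)."""
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / 3.0) * w

def sph_density(x, m, h):
    """SPH density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * w_cubic(q)).sum(axis=1) / h

# Equal-mass particles on a uniform line with spacing dx: the estimate
# should recover the target density m/dx away from the ends.
n, dx = 100, 0.01
x = dx * np.arange(n)
m = np.full(n, 1.0 * dx)            # chosen so the target density is 1
rho = sph_density(x, m, h=2.0 * dx)
interior = rho[10:-10]
```

Near the ends of the line the kernel support is truncated and the estimate drops, which is the usual SPH boundary deficiency that production codes correct for.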

  6. PHANTOM: Smoothed particle hydrodynamics and magnetohydrodynamics code

    NASA Astrophysics Data System (ADS)

    Price, Daniel J.; Wurster, James; Nixon, Chris; Tricco, Terrence S.; Toupin, Stéven; Pettitt, Alex; Chan, Conrad; Laibe, Guillaume; Glover, Simon; Dobbs, Clare; Nealon, Rebecca; Liptai, David; Worpel, Hauke; Bonnerot, Clément; Dipierro, Giovanni; Ragusa, Enrico; Federrath, Christoph; Iaconi, Roberto; Reichardt, Thomas; Forgan, Duncan; Hutchison, Mark; Constantino, Thomas; Ayliffe, Ben; Mentiplay, Daniel; Hirsh, Kieran; Lodato, Giuseppe

    2017-09-01

    Phantom is a smoothed particle hydrodynamics and magnetohydrodynamics code focused on stellar, galactic, planetary, and high energy astrophysics. It is modular, and handles sink particles, self-gravity, two fluid and one fluid dust, ISM chemistry and cooling, physical viscosity, non-ideal MHD, and more. Its modular structure makes it easy to add new physics to the code.

  7. pyro: A teaching code for computational astrophysical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, M.

    2014-10-01

    We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
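
In the spirit of pyro's teaching goal, here is a minimal sketch of the simplest finite-volume method in the family pyro covers: first-order upwind advection of cell averages (pyro itself implements higher-order variants):

```python
import numpy as np

def upwind_advect(a, u, dx, dt, steps):
    """First-order upwind finite-volume update for a_t + u a_x = 0
    (u > 0, periodic). a holds cell averages; the flux through each
    left face is u * a[i-1], so the update is conservative."""
    c = u * dt / dx                      # CFL number, must be <= 1
    for _ in range(steps):
        a = a - c * (a - np.roll(a, 1))
    return a

n = 200
x = (np.arange(n) + 0.5) / n             # cell centers on [0, 1]
a0 = np.exp(-100.0 * (x - 0.3)**2)       # Gaussian initial profile
a1 = upwind_advect(a0, u=1.0, dx=1.0/n, dt=0.5/n, steps=100)
total0, total1 = a0.sum(), a1.sum()      # the scheme conserves the total
```

After 100 steps the profile has moved a distance u*t = 0.25, to about x = 0.55, with the smearing characteristic of a first-order scheme.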

  8. TORUS: Radiation transport and hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Harries, Tim

    2014-04-01

    TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard Gnu makefile. The code is parallelized using both MPI and OMP, and can use these parallel sections either separately or in a hybrid mode.

  9. Using Pulsed Power for Hydrodynamic Code Validation

    DTIC Science & Technology

    2001-06-01

    Degnan, James; Kiuttu, George (Air Force Research Laboratory, Albuquerque, NM 87117)

    As part of ongoing hydrodynamic code ... bank at the Air Force Research Laboratory (AFRL). A cylindrical aluminum liner that is magnetically imploded onto a central target by self-induced ...


  10. An implicit Smooth Particle Hydrodynamic code

    SciTech Connect

    Knapp, Charles E.

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian and meshless, and use particles to model fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save memory and reduce computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. The results of a number of test cases are then discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas, the implicit code was demonstrated to complete the problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
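
The Newton-Raphson machinery with a numerically differenced Jacobian can be sketched compactly; here the linear solve is direct rather than a Krylov iteration, which is a simplification of what the paper describes:

```python
import numpy as np

def newton_fd(F, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton-Raphson with a finite-difference Jacobian. In a
    Newton-Krylov code the linear solve below would be replaced by an
    iterative (e.g. GMRES) solve needing only Jacobian-vector products."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):              # numerical Jacobian, column by column
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (F(xp) - r) / eps
        x -= np.linalg.solve(J, r)      # Newton correction
    return x

# Small nonlinear system with a known root at (1, 2).
F = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])
root = newton_fd(F, np.array([1.5, 1.5]))
```

The finite-difference Jacobian trades accuracy for generality: no analytic derivatives of the residual are needed, exactly as in the paper's approach.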

  11. Production code control system for hydrodynamics simulations

    SciTech Connect

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl removes the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or about changes to existing codes.

  12. Radiation hydrodynamics integrated in the PLUTO code

    NASA Astrophysics Data System (ADS)

    Kolb, Stefan M.; Stute, Matthias; Kley, Wilhelm; Mignone, Andrea

    2013-11-01

    Aims: The transport of energy through radiation is very important in many astrophysical phenomena. In dynamical problems the time-dependent equations of radiation hydrodynamics have to be solved. We present a newly developed radiation-hydrodynamics module specifically designed for the versatile magnetohydrodynamic (MHD) code PLUTO. Methods: The solver is based on the flux-limited diffusion approximation in the two-temperature approach. All equations are solved in the co-moving frame in the frequency-independent (gray) approximation. The hydrodynamics is solved by the different Godunov schemes implemented in PLUTO, and for the radiation transport we use a fully implicit scheme. The resulting system of linear equations is solved either using the successive over-relaxation (SOR) method (for testing purposes) or using matrix solvers that are available in the PETSc library. We state in detail the methodology and describe several test cases to verify the correctness of our implementation. The solver works in standard coordinate systems, such as Cartesian, cylindrical, and spherical, and also for non-equidistant grids. Results: We present a new radiation-hydrodynamics solver coupled to the MHD-code PLUTO that is a modern, versatile, and efficient new module for treating complex radiation hydrodynamical problems in astrophysics. As test cases, either purely radiative situations, or full radiation-hydrodynamical setups (including radiative shocks and convection in accretion disks) were successfully studied. The new module scales very well on parallel computers using MPI. For problems in star or planet formation, we added the possibility of irradiation by a central source.
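
The SOR method mentioned for the radiation matrix can be sketched in a few lines; the matrix below is the tridiagonal system arising from a 1D backward-Euler diffusion step, a stand-in for the actual FLD system:

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-12, max_iter=5000):
    """Successive over-relaxation for A x = b (A diagonally dominant).
    Each sweep updates x in place, blending the Gauss-Seidel value
    with the old value via the relaxation factor omega."""
    n = b.size
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Tridiagonal matrix of an implicit diffusion step: (1 + 2r) on the
# diagonal, -r off-diagonal, with r = D * dt / dx^2.
n, r = 50, 0.4
A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.random.default_rng(0).random(n)
x = sor_solve(A, b)
residual = np.max(np.abs(A @ x - b))
```

For production-size systems this simple sweep is slow, which is why the paper reserves SOR for testing and uses PETSc matrix solvers otherwise.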

  13. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega(sub b) = 1, and sigma(sub 8) = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L(exp 3) where L = 64 h(exp -1) Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N(exp 3) = (32(exp 3), 64(exp 3), 128(exp 3), and 256(exp 3)) cells, the SPH codes at N(exp 3) = 32(exp 3) and 64(exp 3) particles. Results were then rebinned to a 16(exp 3) grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as (T), (rho(exp 2))(exp 1/2) persist at the 3%-17% level. All five codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho(exp 2)) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes.
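
The rebinning step described above amounts to conservative block averaging onto a coarser grid; a sketch (the 64^3 data and the averaging rule are illustrative assumptions):

```python
import numpy as np

def rebin(field, new_shape):
    """Rebin a 3D field onto a coarser grid by averaging over equal-size
    blocks of cells, preserving the global mean."""
    nz, ny, nx = field.shape
    bz, by, bx = nz // new_shape[0], ny // new_shape[1], nx // new_shape[2]
    return field.reshape(new_shape[0], bz,
                         new_shape[1], by,
                         new_shape[2], bx).mean(axis=(1, 3, 5))

rng = np.random.default_rng(1)
rho = rng.random((64, 64, 64))          # e.g. a 64^3 density field
rho16 = rebin(rho, (16, 16, 16))        # rebinned to the common 16^3 grid
```

Comparing codes on the rebinned grid removes resolution-dependent small-scale detail so only the converged large-scale answer is judged.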

  14. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.

    2011-10-01

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
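
The parabolic half of the split can be illustrated with a 1D toy: one backward-Euler step of a diffusion equation, which remains stable even when the diffusion number far exceeds the explicit limit of 1/2 (this sketch is not CASTRO's actual solver):

```python
import numpy as np

def backward_euler_diffusion(E, r):
    """One implicit backward-Euler step of E_t = D E_xx on a 1D grid
    with zero-flux ends; r = D * dt / dx^2 may far exceed the explicit
    stability limit of 1/2."""
    n = E.size
    A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, 0] = A[-1, -1] = 1 + r        # zero-flux boundary rows
    return np.linalg.solve(A, E)

E = np.zeros(100)
E[50] = 1.0                            # a spike of radiation energy
for _ in range(10):
    E = backward_euler_diffusion(E, r=5.0)   # r = 5 >> 0.5, still stable
```

The implicit step conserves the total energy (the matrix columns sum to one) and keeps the solution non-negative, both of which an explicit step at this r would violently fail.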

  15. Axially symmetric pseudo-Newtonian hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Kim, Jinho; Kim, Hee Il; Choptuik, Matthew William; Lee, Hyung Mok

    2012-08-01

    We develop a numerical hydrodynamics code using a pseudo-Newtonian formulation that uses the weak-field approximation for the geometry, and a generalized source term for the Poisson equation that takes into account relativistic effects. The code was designed to treat moderately relativistic systems such as rapidly rotating neutron stars. The hydrodynamic equations are solved using a finite volume method with high-resolution shock-capturing techniques. We implement several different slope limiters for second-order reconstruction schemes and also investigate higher order reconstructions such as the piecewise parabolic method, essentially non-oscillatory method (ENO) and weighted ENO. We use the method of lines to convert the mixed spatial-time partial differential equations into ordinary differential equations (ODEs) that depend only on time. These ODEs are solved using second- and third-order Runge-Kutta methods. The Poisson equation for the gravitational potential is solved with a multigrid method, and to simplify the boundary condition, we use compactified coordinates which map spatial infinity to a finite computational coordinate using a tangent function. In order to confirm the validity of our code, we carry out four different tests including one- and two-dimensional shock tube tests, stationary star tests of both non-rotating and rotating models, and radial oscillation mode tests for spherical stars. In the shock tube tests, the code shows good agreement with analytic solutions which include shocks, rarefaction waves and contact discontinuities. The code is found to be stable and accurate: for example, when solving a stationary stellar model the fractional changes in the maximum density, total mass, and total angular momentum per dynamical time are found to be 3 × 10^-6, 5 × 10^-7 and 2 × 10^-6, respectively. We also find that the frequencies of the radial modes obtained by the numerical simulation of the steady-state star agree very well with those obtained by
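
The compactified coordinate trick is easy to demonstrate; the sketch below uses xi = arctan(r/L), an assumed form of the tangent mapping (the paper's exact mapping may differ):

```python
import numpy as np

# Tangent compactification: r = L * tan(xi) maps the finite computational
# interval xi in [0, pi/2) onto the physical half-line r in [0, infinity),
# so the boundary condition at spatial infinity lands on a grid edge.
# L is an arbitrary scale parameter chosen for this sketch.
L = 1.0
xi = np.linspace(0.0, np.pi / 2, 101)[:-1]   # uniform grid; xi = pi/2 excluded
r = L * np.tan(xi)                           # physical radius of each point

# The mapping is monotonic, and the physical spacing grows with radius,
# concentrating resolution near the star where it is needed.
dr = np.diff(r)
```

The price is that the outermost cells are enormous in physical units, which is acceptable when the solution varies slowly far from the source, as a gravitational potential does.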

  16. Code Differentiation for Hydrodynamic Model Optimization

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
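
Forward-mode AD can be illustrated with dual numbers, which propagate a value and its sensitivity together; this is the idea behind code-based AD tools, though the actual tools transform Fortran source rather than overload operators:

```python
class Dual:
    """Forward-mode AD: carries a value and its derivative together, so
    evaluating a function of Duals yields the sensitivity as a byproduct."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(p):
    # A stand-in "simulation": a polynomial response to one parameter p.
    return 3 * p * p + 2 * p + 1

# Seeding the derivative slot with 1.0 yields df/dp alongside f(p).
out = f(Dual(2.0, 1.0))
# out.val = f(2) = 17, out.dot = f'(2) = 6*2 + 2 = 14
```

Forward mode costs one such pass per parameter, which is why, with 12 parameters, the adjoint (reverse) mode the paper favors can be cheaper: it obtains all sensitivities in a single backward pass.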

  17. Hydrodynamic stability of compressible plane Couette flow

    SciTech Connect

    Chagelishvili, G.D.; Rogava, A.D.; Segal, I.N. (Department of Plasma Physics, Space Research Institute, str. Profsoyuznaya 84/32, 117810 Moscow)

    1994-12-01

    The evolution of two-dimensional spatial Fourier harmonics in a compressible plane Couette flow is considered. A new mechanism of energy exchange between the mean flow and sound-type perturbations is discovered.

  18. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. III. MULTIGROUP RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.; Dolence, J.

    2013-01-15

    We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses a Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.

  19. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
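
A two-band subband split with the Haar filter pair, the simplest member of the filter family such schemes use, shows how a coarse version plus refinements can reconstruct the waveform exactly (the paper's filters and coder are more sophisticated):

```python
import numpy as np

def analysis(x):
    """Split a signal into low-pass and high-pass subbands (Haar pair)."""
    lo = (x[0::2] + x[1::2]) / 2.0     # pairwise averages: coarse waveform
    hi = (x[0::2] - x[1::2]) / 2.0     # pairwise differences: the detail
    return lo, hi

def synthesis(lo, hi):
    """Perfectly reconstruct the original signal from its two subbands."""
    x = np.empty(2 * lo.size)
    x[0::2] = lo + hi
    x[1::2] = lo - hi
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 3.0, 1.0])
lo, hi = analysis(x)
# Progressive transmission: send `lo` first as the coarse waveform, then
# send `hi` only where the seismologist requests refinement.
x_rec = synthesis(lo, hi)
```

Iterating the split on the low band produces the multi-level subband decomposition the coder actually works with.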

  1. Code Compression Schemes for Embedded Processors

    NASA Astrophysics Data System (ADS)

    Horti, Deepa; Jamge, S. B.

    2010-11-01

    Code density is a major requirement in embedded system design, since it not only reduces the need for scarce memory resources but also implicitly improves further important design parameters such as power consumption and performance. In this paper we introduce a novel and efficient approach that combines statistical and dictionary-based compression schemes.

  2. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. I. HYDRODYNAMICS AND SELF-GRAVITY

    SciTech Connect

    Almgren, A. S.; Beckner, V. E.; Bell, J. B.; Day, M. S.; Lijewski, M. J.; Nonaka, A.; Howell, L. H.; Singer, M.; Joggerst, C. C.; Zingale, M.

    2010-06-01

    We present a new code, CASTRO, that solves the multicomponent compressible hydrodynamic equations for astrophysical flows including self-gravity, nuclear reactions, and radiation. CASTRO uses an Eulerian grid and incorporates adaptive mesh refinement (AMR). Our approach to AMR uses a nested hierarchy of logically rectangular grids with simultaneous refinement in both space and time. The radiation component of CASTRO will be described in detail in the next paper, Part II, of this series.

  3. Compressible Lagrangian hydrodynamics without Lagrangian cells

    NASA Astrophysics Data System (ADS)

    Clark, Robert A.

    The partial differential equations [2.1, 2.2, and 2.3], along with the equation of state 2.4, which describe the time evolution of compressible fluid flow, can be solved without the use of a Lagrangian mesh. The method follows embedded fluid points and uses finite difference approximations to ∇P and ∇ · u to update ρ, u and e. We have demonstrated that the method can accurately calculate highly distorted flows without difficulty. The finite difference approximations are not unique, and improvements may be found in the near future. The neighbor selection is not unique, but the one being used at present appears to do an excellent job. The method could be directly extended to three dimensions. One drawback to the method is the failure to explicitly conserve mass, momentum and energy. In fact, at any given time, the mass is not defined. We must perform an auxiliary calculation by integrating the density field over space to obtain mass, energy and momentum. However, in all cases where we have done this, we have found the drift in these quantities to be no more than a few percent.
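
One way to build finite-difference approximations at mesh-free points is a least-squares fit over neighbor offsets; the sketch below illustrates the idea (it is not the paper's specific stencil):

```python
import numpy as np

def ls_gradient(points, values, i, neighbors):
    """Least-squares gradient estimate at point i from scattered neighbor
    points: solve dX @ grad = dF in the least-squares sense. This is one
    generic way to form finite differences without a Lagrangian mesh."""
    dX = points[neighbors] - points[i]      # offsets to the neighbors
    dF = values[neighbors] - values[i]      # corresponding value differences
    grad, *_ = np.linalg.lstsq(dX, dF, rcond=None)
    return grad

rng = np.random.default_rng(2)
pts = rng.random((30, 2))                   # scattered fluid points in 2D
P = 3.0 * pts[:, 0] - 2.0 * pts[:, 1] + 1.0 # a linear "pressure" field
g = ls_gradient(pts, P, 0, np.arange(1, 30))
# For a linear field the least-squares gradient is exact: (3, -2).
```

Because the fit is exact for linear fields, the estimate is first-order accurate however distorted the point distribution becomes, which is the property the paper exploits.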

  4. TESS: A RELATIVISTIC HYDRODYNAMICS CODE ON A MOVING VORONOI MESH

    SciTech Connect

    Duffell, Paul C.; MacFadyen, Andrew I. E-mail: macfadyen@nyu.edu

    2011-12-01

    We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multidimensional systems of conservation laws in finite-volume form. The mesh-generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical, and cylindrical coordinates and is modular so that auxiliary physics solvers are readily integrated into the TESS framework and so that this can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.

  5. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are popular, but they take up more storage space on our computers and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is described; understanding it is necessary for realizing image compression, given how widely this technology is used. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of image compression based on the DCT and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated using Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm for image compression, and more algorithms yielding high-quality compressed images will surely follow. The technology of image compression will be widely used in networks and communications in the future.
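
The DCT-based compression pipeline the dissertation describes can be sketched without Matlab; here an orthonormal DCT-II matrix transforms an 8x8 block, small coefficients are discarded, and the block is reconstructed (quantization tables and Huffman coding are omitted):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ C.T = I."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

n = 8
C = dct_matrix(n)
block = np.outer(np.arange(n), np.ones(n)) + 5.0   # a smooth 8x8 "image" block
coef = C @ block @ C.T                             # 2D DCT of the block
coef_q = np.where(np.abs(coef) < 1e-8, 0.0, coef)  # drop (numerically) zero terms
rec = C.T @ coef_q @ C                             # inverse 2D DCT
kept = int(np.count_nonzero(coef_q))               # coefficients actually stored
```

For this smooth block the energy concentrates in a handful of low-frequency coefficients, which is exactly why the DCT compresses natural images well: most of the 64 coefficients can be quantized away with little visible loss.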

  6. Image compression with embedded multiwavelet coding

    NASA Astrophysics Data System (ADS)

    Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay

    1996-03-01

    An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.

  7. Pulse compression using binary phase codes

    NASA Technical Reports Server (NTRS)

    Farley, D. T.

    1983-01-01

    In most MST applications pulsed radars are peak power limited and have excess average power capacity. Short pulses are required for good range resolution, but the problem of range ambiguity (signals received simultaneously from more than one altitude) sets a minimum limit on the interpulse period (IPP). Pulse compression is a technique which allows more of the transmitter average power capacity to be used without sacrificing range resolution. As the name implies, a pulse of power P and duration T is in a certain sense converted into one of power nP and duration T/n. In the frequency domain, compression involves manipulating the phases of the different frequency components of the pulse. One way to compress a pulse is via phase coding, especially binary phase coding, a technique which is particularly amenable to digital processing techniques. This method, which is used extensively in radar probing of the atmosphere and ionosphere, is discussed. Barker codes, complementary and quasi-complementary code sets, and cyclic codes are addressed.
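
The length-13 Barker code makes the compression property concrete: matched filtering (correlating the received signal with the code) yields a peak of 13 with sidelobes no larger than 1:

```python
import numpy as np

# Length-13 Barker code: the classic binary phase code whose aperiodic
# autocorrelation has peak 13 and sidelobe magnitudes of at most 1,
# giving 13x pulse compression.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Matched filtering of the coded pulse = correlation with the code itself.
acf = np.correlate(barker13, barker13, mode="full")
peak = int(acf.max())                  # compressed-pulse peak
sidelobes = int(np.abs(np.delete(acf, acf.argmax())).max())
```

The 13:1 peak-to-sidelobe ratio is the best achievable for any binary Barker code, which is why length 13 is so widely used.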

  8. DISH CODE A deeply simplified hydrodynamic code for applications to warm dense matter

    SciTech Connect

    More, Richard

    2007-08-22

    DISH is a 1-dimensional (planar) Lagrangian hydrodynamic code intended for application to experiments on warm dense matter. The code is a simplified version of the DPC code written in the Data and Planning Center of the National Institute for Fusion Science in Toki, Japan. DPC was originally intended as a testbed for exploring equation of state and opacity models, but turned out to have a variety of applications. The Dish code is a "deeply simplified hydrodynamic" code, deliberately made as simple as possible. It is intended to be easy to understand, easy to use and easy to change.

  9. KEPLER: General purpose 1D multizone hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Weaver, T. A.; Zimmerman, G. B.; Woosley, S. E.

    2017-02-01

    KEPLER is a general purpose stellar evolution/explosion code that incorporates implicit hydrodynamics and a detailed treatment of nuclear burning processes. It has been used to study the complete evolution of massive and supermassive stars, all major classes of supernovae, hydrostatic and explosive nucleosynthesis, and x- and gamma-ray bursts on neutron stars and white dwarfs.

  10. Compression of polyphase codes with Doppler shift

    NASA Astrophysics Data System (ADS)

    Wirth, W. D.

    It is shown that pulse compression with sufficient Doppler tolerance may be achieved with polyphase codes derived from linear frequency modulation (LFM) and nonlinear frequency modulation (NLFM). Low sidelobes in range and Doppler are required especially for the radar search function. These may be achieved by an LFM-derived phase code together with Hamming weighting, or by applying a PNL polyphase code derived from NLFM. For a discrete and known Doppler frequency, a sidelobe reduction is possible with an expanded and mismatched reference vector; the compression is then achieved without a loss in resolution. The expanded reference can be set up to give zero sidelobes in an interval around the signal peak, or to minimize the sidelobes in the least-squares sense over all range elements. This version may be useful for target tracking.
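
A polyphase code derived from LFM can be sketched with the P4 code, whose phases sample a linear frequency sweep; this is an illustrative stand-in for the codes the paper analyzes:

```python
import numpy as np

# P4 polyphase code: phases sampled from a linear FM sweep. The paper's
# PNL code is analogous but derived from a nonlinear sweep.
N = 64
n = np.arange(N)
phi = np.pi * n * (n - N) / N          # P4 phase sequence (0-indexed form)
s = np.exp(1j * phi)

# Matched filtering (autocorrelation): the compressed peak reaches N,
# since all N unit-modulus chips add coherently at zero lag.
acf = np.abs(np.correlate(s, s, mode="full"))
peak = float(acf.max())
psl = float(acf[acf.argmax() + 1:].max() / peak)   # peak sidelobe level
```

The sidelobes fall well below the peak without any weighting; Hamming weighting or a mismatched expanded reference, as the paper describes, pushes them lower still at some cost elsewhere.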

  11. Lossless Compression of JPEG Coded Photo Collections.

    PubMed

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.

  12. Multi-shot compressed coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua

    2013-09-01

    The classical methods of compressed coded aperture (CCA) still require an optical sensor with high resolution, even though the sampling rate has already broken the Nyquist rate. A novel architecture for multi-shot compressed coded aperture imaging (MCCAI) using a low-resolution optical sensor is proposed. It is based on a 4-f imaging system combined with two spatial light modulators (SLMs) to achieve the compressive imaging goal. The first SLM, employed for random convolution, is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM, working as a selecting filter, is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a set of observations can easily be obtained by a low-resolution optical sensor; these observations are combined mathematically and used to reconstruct the high-resolution image. That is to say, MCCAI aims at realizing super-resolution imaging from multiple random samplings with a low-resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super-resolution reconstruction model to suppress artifacts, and the alternating direction method of multipliers (ADM) is utilized to solve for the optimal result efficiently. The results show that the MCCAI architecture is suitable for super-resolution computational imaging using a much lower-resolution optical sensor than traditional CCA imaging methods by capturing multiple frame images.
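
    A toy 1-D numerical sketch of the multi-shot idea: random coding followed by low-resolution (block-averaged) sampling, with the shots stacked and solved jointly to recover the high-resolution signal. The signal size, mask model, and plain least-squares solver are illustrative stand-ins for the optical system and the TV/ADM reconstruction described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hi, factor, n_shots = 16, 4, 6   # high-res length, downsampling, number of shots

x = rng.normal(size=n_hi)          # unknown high-resolution signal

rows, meas = [], []
for _ in range(n_shots):
    # Each shot: elementwise random mask, then block-averaging to low resolution.
    mask = rng.uniform(0.0, 1.0, size=n_hi)
    A = np.zeros((n_hi // factor, n_hi))
    for i in range(n_hi // factor):
        A[i, i * factor:(i + 1) * factor] = mask[i * factor:(i + 1) * factor] / factor
    rows.append(A)
    meas.append(A @ x)

A_all = np.vstack(rows)            # 24 x 16 combined system from 6 low-res shots
y_all = np.concatenate(meas)
x_rec, *_ = np.linalg.lstsq(A_all, y_all, rcond=None)
```

    With enough shots the stacked system becomes full rank and the high-resolution signal is recovered exactly; in the paper's setting the system is underdetermined per shot and regularized reconstruction does this job.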

  13. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    NASA Astrophysics Data System (ADS)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  14. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  15. A new hydrodynamics code for Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Leung, S.-C.; Chu, M.-C.; Lin, L.-M.

    2015-12-01

    A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme, and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct the corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines, achieving an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.

  16. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  17. GIZMO: Multi-method magneto-hydrodynamics+gravity code

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2014-10-01

    GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).

  18. RAMSES: A new N-body and hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain

    2010-11-01

    A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 1997), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to compute accurately the thermal history of the fluid component. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock-wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low-density ΛCDM universe are finally reported, with 256^3 particles and 4.1 × 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, gas pressure power spectrum, and individual halo temperature profiles, shows that numerical results are converging down to the actual resolution limit of the code, and are well reproduced by recent analytical predictions in the framework of the halo model.

  19. Adding kinetics and hydrodynamics to the CHEETAH thermochemical code

    SciTech Connect

    Fried, L.E., Howard, W.M., Souers, P.C.

    1997-01-15

    In FY96 we released CHEETAH 1.40, which made extensive improvements on the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes, and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.

  20. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    SciTech Connect

    Doebling, Scott William

    2016-10-22

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  1. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  3. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I. (Institute for Advanced Study, Princeton)

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO, they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two, and three dimensions and in Cartesian, cylindrical, and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
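
    As a sketch of the kind of fifth-order reconstruction RAM uses, here is the classic finite-volume WENO-JS interface reconstruction (the scalar form, not RAM's characteristic-wise relativistic implementation). Three third-order candidate stencils are blended with smoothness-dependent weights:

```python
import numpy as np

def weno5_interface(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """WENO-JS reconstruction of the point value at the right interface
    of cell i from the five cell averages v_{i-2} .. v_{i+2}."""
    # Three third-order candidate reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6
    p2 = (2*v0 + 5*vp1 - vp2) / 6
    # Smoothness indicators (large where the stencil crosses a discontinuity)
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 1/4*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 1/4*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 1/4*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights around the optimal linear weights (1/10, 6/10, 3/10)
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# Cell averages of f(x) = x^2 on a grid of spacing h centered at x = 0:
# the average over cell i is x_i^2 + h^2/12, and each candidate stencil
# reconstructs a quadratic exactly, so the interface value is recovered.
h = 0.1
xc = np.array([-2, -1, 0, 1, 2]) * h
avgs = xc**2 + h**2 / 12
val = weno5_interface(*avgs)
```

    In smooth regions the weights approach the optimal linear values and the scheme is fifth-order accurate; near shocks the weights suppress stencils that cross the discontinuity.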

  4. External-Compression Supersonic Inlet Design Code

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2011-01-01

    A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression, supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.
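
    Low-fidelity analytical methods for external-compression inlets typically rest on the standard oblique-shock relations. A sketch (not SUPIN's actual routines) estimating total pressure recovery across a single compression-ramp shock, solving the theta-beta-M relation for the weak shock angle by bisection:

```python
import math

def beta_from_theta(M, theta, gamma=1.4):
    """Weak-solution shock angle beta for flow deflection theta (radians),
    from the theta-beta-M relation, found by bisection."""
    def f(beta):
        num = M**2 * math.sin(beta)**2 - 1.0
        den = M**2 * (gamma + math.cos(2.0*beta)) + 2.0
        return math.tan(theta) - 2.0 / math.tan(beta) * num / den
    lo, hi = math.asin(1.0/M) + 1e-9, math.radians(65.0)  # bracket weak branch
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

def recovery(Mn, gamma=1.4):
    """Total pressure ratio across a normal shock of Mach number Mn."""
    a = ((gamma+1)*Mn**2 / ((gamma-1)*Mn**2 + 2))**(gamma/(gamma-1))
    b = ((gamma+1) / (2*gamma*Mn**2 - (gamma-1)))**(1/(gamma-1))
    return a * b

M, theta = 2.0, math.radians(10.0)      # freestream Mach 2, 10-degree ramp
beta = beta_from_theta(M, theta)        # shock angle, about 39.3 degrees
p0_ratio = recovery(M * math.sin(beta)) # recovery across the oblique shock
```

    A multi-ramp external-compression inlet chains several such oblique shocks plus a terminal normal shock, multiplying the individual recoveries.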

  5. Parallelization of an unstructured grid, hydrodynamic-diffusion code

    SciTech Connect

    Milovich, J L; Shestakov, A

    1998-05-20

    We describe the parallelization of a three-dimensional, unstructured grid, finite element code which solves hyperbolic conservation laws for mass, momentum, and energy, and diffusion equations modeling heat conduction and radiation transport. Explicit temporal differencing advances the cell-based gasdynamic equations. Diffusion equations use fully implicit differencing of nodal variables, which leads to large, sparse, symmetric, positive definite matrices. Because of the unstructured grid, the off-diagonal non-zero elements appear in unpredictable locations. The linear systems are solved using parallelized conjugate gradients. The code is parallelized by domain decomposition of physical space into disjoint subdomains (SDs). Each processor receives its own SD plus a border of ghost cells. Results are presented on a problem coupling hydrodynamics to non-linear heat conduction.
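
    The implicit diffusion solve described above rests on the conjugate gradient method for sparse symmetric positive definite systems. A minimal serial sketch of the algorithm (a small dense numpy matrix standing in for the distributed sparse matrix):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate next direction to the previous
        rs = rs_new
    return x

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 20))
A = G @ G.T + 20.0 * np.eye(20)   # symmetric positive definite test matrix
b = rng.normal(size=20)
x = conjugate_gradient(A, b)
```

    In the parallel setting, only the matrix-vector product and the two dot products need communication across subdomain boundaries, which is why CG parallelizes well on decomposed grids.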

  6. HERACLES: a three-dimensional radiation hydrodynamics code

    NASA Astrophysics Data System (ADS)

    González, M.; Audit, E.; Huynh, P.

    2007-03-01

    Aims: We present a new three-dimensional radiation hydrodynamics code called HERACLES that uses an original moment method to solve the radiative transfer. Methods: The radiation transfer is modelled using a two-moment model and a closure relation that allows large angular anisotropies in the radiation field to be preserved and reproduced. The radiative equations thus obtained are solved by a second-order Godunov-type method and integrated implicitly by using iterative solvers. HERACLES has been parallelized with the MPI library and implemented in Cartesian, cylindrical, and spherical coordinates. To characterize the accuracy of HERACLES and to compare it with other codes, we performed a series of tests including purely radiative tests and radiation-hydrodynamics ones. Results: The results show that the physical model used in HERACLES for the transfer is fairly accurate in both the diffusion and transport limits, but also for semi-transparent regions. Conclusions: This makes HERACLES very well-suited to studying many astrophysical problems such as radiative shocks, molecular jets of young stars, fragmentation and formation of dense cores in the interstellar medium, and protoplanetary discs. Appendices are only available in electronic form at http://www.aanda.org
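
    A two-moment closure that preserves large angular anisotropies is the M1 closure of Levermore, whose Eddington factor interpolates between the diffusion and free-streaming limits as a function of the reduced flux. A brief sketch (the closure formula is standard; its role here as HERACLES's closure is the natural reading of the abstract):

```python
import numpy as np

def eddington_factor(f):
    """M1 closure: Eddington factor chi as a function of the reduced
    flux f = |F| / (c E), with 0 <= f <= 1."""
    return (3.0 + 4.0*f**2) / (5.0 + 2.0*np.sqrt(4.0 - 3.0*f**2))

f = np.linspace(0.0, 1.0, 101)
chi = eddington_factor(f)
```

    At f = 0 the factor is 1/3 (isotropic diffusion limit); at f = 1 it is 1 (free-streaming beam), and it increases monotonically in between.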

  7. Collisions and separations in 2D hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Asida, Shimon

    1991-06-01

    Hydrodynamic problems involving the collision or separation of zones of different materials include the following types: armor penetration by a jet formed in the explosion of a shaped charge or by a kinetic projectile, and instabilities in cosmic jets. Calculations of hydrodynamic processes are based on numerical simulations which solve the differential equations by means of difference equations. A special grid is defined and the physical system is advanced via finite steps in time; in an Eulerian treatment, the grid is stationary in space, whereas in a Lagrangian treatment it moves together with the fluid. In Lagrangian methods, the grid is defined on the fluid and the boundaries between materials are formed by the edges of computational cells, so that the shape of the grid depends on the shape of the boundary. Where there is a strong flow, the cells distort and the grid must be frequently redefined to enable the calculation to continue. Boundary collisions cause difficulty in defining a grid. In Eulerian methods, where the computational grid is defined over all the space through which the materials flow, it is necessary to use cells with non-homogeneous contents to follow the boundaries; such calculations are more complicated and less accurate. The aim of the present work was to develop a Lagrangian method for treating such collisions. The code, based on an existing 2D Lagrangian code with the addition of a new collision mechanism, uses a mixed computational grid, comprising squares and triangles, with which it is possible to describe such systems.

  8. Telemetry advances in data compression and channel coding

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu

    1990-01-01

    This paper addresses the dependence of telecommunication channel coding, forward error-correcting coding, and source data compression coding on integrated circuit technology. Emphasis is placed on real-time, high-speed Reed-Solomon (RS) decoding using full-custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.

  9. Bit-Wise Arithmetic Coding For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.

  10. A new class of polyphase pulse compression codes

    NASA Astrophysics Data System (ADS)

    Deng, Hai; Lin, Maoyong

    The study presents the synthesis method of a new class of polyphase pulse compression codes, the NLFM code, and investigates its properties. The NLFM code, which is derived from sampling and quantization of a nonlinear FM waveform, features low range sidelobes and insensitivity to Doppler shifts. Simulation results show that the major properties of the NLFM polyphase code are superior to those of the Frank code.

  11. Hydrodynamic simulations of gaseous Argon shock compression experiments

    NASA Astrophysics Data System (ADS)

    Garcia, Daniel B.; Dattelbaum, Dana M.; Goodwin, Peter M.; Sheffield, Stephen A.; Morris, John S.; Gustavsen, Richard L.; Burkett, Michael W.

    2017-01-01

    The lack of published Ar gas shock data motivated an evaluation of the Ar equation of state (EOS) in gas-phase initial density regimes. In particular, these regimes include initial pressures in the range of 13.8 to 34.5 bar (0.025 to 0.056 g/cm³) and initial shock velocities around 0.2 cm/μs. The objective of the numerical evaluation was to develop a physical understanding of the EOS behavior of shocked and subsequently multiply re-shocked Ar gas through Pagosa numerical simulations utilizing the SESAME equation of state. Pagosa is a Los Alamos National Laboratory 2-D and 3-D Eulerian continuum dynamics code capable of modeling high-velocity compressible flow with multiple materials. The approach involved the use of gas gun experiments to evaluate the shock and multiple re-shock behavior of pressurized Ar gas to validate Pagosa simulations and the SESAME EOS. Additionally, the diagnostic capability within the experiments allowed the EOS to be fully constrained with measured shock velocity, particle velocity, and temperature. The simulations demonstrate excellent agreement with the experiments in shock velocity/particle velocity space, and reasonable comparisons for the ionization temperatures.
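
    The measured shock states in such gas-gun experiments obey the Rankine-Hugoniot jump conditions. For a polytropic ideal gas (a rough stand-in for the tabular SESAME EOS used above; the numbers below are illustrative values in the abstract's regime, not experimental data), the shocked state follows from the piston (particle) velocity:

```python
import math

def shock_state(rho0, p0, up, gamma=5.0/3.0):
    """Shock velocity and shocked state for an ideal gas driven by a
    piston at particle velocity up (Rankine-Hugoniot jump conditions)."""
    c0 = math.sqrt(gamma * p0 / rho0)            # initial sound speed
    # For an ideal gas: Us^2 - (gamma+1)/2 * up * Us - c0^2 = 0
    us = (gamma + 1.0)/4.0*up + math.sqrt(((gamma + 1.0)/4.0*up)**2 + c0**2)
    p1 = p0 + rho0 * us * up                     # momentum jump
    rho1 = rho0 * us / (us - up)                 # mass jump
    return us, p1, rho1

# Monatomic Ar (gamma = 5/3) at 40 kg/m^3 (0.04 g/cm^3) and 25 bar,
# driven at up = 1500 m/s: gives Us near 2050 m/s, i.e. ~0.2 cm/us.
rho0, p0, up = 40.0, 2.5e6, 1500.0
us, p1, rho1 = shock_state(rho0, p0, up)
```

    Re-shock states are obtained by applying the same relations again to the once-shocked state, which is how multiple re-shock experiments walk up the Hugoniot.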

  12. On Using Goldbach G0 Codes and Even-Rodeh Codes for Text Compression

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-03-01

    This research aims to study the efficiency of two variants of variable-length codes (i.e., Goldbach G0 codes and Even-Rodeh codes) in compressing texts. The parameters being examined are the compression ratio, the space savings, and the bit rate. As a benchmark, all of the original (uncompressed) texts are assumed to be encoded in the American Standard Code for Information Interchange (ASCII). Several texts, including those derived from some corpora (the Artificial corpus, the Calgary corpus, the Canterbury corpus, the Large corpus, and the Miscellaneous corpus), are tested in the experiment. The overall result shows that the Even-Rodeh codes are consistently more efficient at compressing texts than the unoptimized Goldbach G0 codes.
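
    The three figures of merit can be computed directly from the original and compressed sizes. A small sketch using one common set of definitions (the paper's exact conventions may differ):

```python
def compression_metrics(original_bytes, compressed_bytes, n_symbols):
    """Compression ratio, space savings, and bit rate for a text of
    n_symbols characters (8-bit ASCII assumed for the original)."""
    ratio = original_bytes / compressed_bytes          # e.g. 2.0 means 2:1
    savings = 1.0 - compressed_bytes / original_bytes  # fraction of space saved
    bit_rate = 8.0 * compressed_bytes / n_symbols      # bits per character
    return ratio, savings, bit_rate

# A 1000-character ASCII text (1000 bytes) compressed to 600 bytes:
ratio, savings, bit_rate = compression_metrics(1000, 600, 1000)
```

    Here the example yields a ratio of about 1.67:1, 40% space savings, and 4.8 bits per character against the 8-bit ASCII baseline.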

  13. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, the lower bound for a Huffman code.

  14. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
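
    The core idea of arithmetic coding of independent bits can be sketched with exact rational arithmetic: the interval [0, 1) shrinks by the probability of each bit, and the codeword is the shortest dyadic rational inside the final interval. This is an illustrative toy (Python fractions stand in for the finite-precision renormalizing coder a real implementation would use, and the bit probability is assumed known):

```python
from fractions import Fraction

def encode(bits, p0):
    """Shrink [lo, hi) for each bit (p0 = probability of a 0 bit), then
    return the shortest dyadic rational m / 2^k inside the final interval."""
    lo, hi = Fraction(0), Fraction(1)
    for b in bits:
        split = lo + p0 * (hi - lo)
        lo, hi = (lo, split) if b == 0 else (split, hi)
    k = 0
    while True:
        k += 1
        m = -((-lo.numerator * 2**k) // lo.denominator)  # ceil(lo * 2^k)
        if Fraction(m, 2**k) < hi:
            return m, k  # codeword: k bits encoding the integer m

def decode(m, k, n, p0):
    """Recover n bits by retracing the interval subdivisions."""
    value = Fraction(m, 2**k)
    lo, hi = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        split = lo + p0 * (hi - lo)
        if value < split:
            out.append(0); hi = split
        else:
            out.append(1); lo = split
    return out

p0 = Fraction(3, 4)   # skewed source: zeros three times as likely as ones
bits = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0]
m, k = encode(bits, p0)
```

    Because the source is skewed, the codeword length k falls below the number of input bits, illustrating the sub-1-bit-per-symbol average rate that fixed-codeword schemes like Huffman coding cannot reach.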

  15. Parallelization of ICF3D, a Diffusion and Hydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Shestakov, A. I.; Milovich, J. L.

    1997-11-01

    We describe the parallelization of the unstructured grid ICF3D code. The strategy divides physical space into a collection of disjoint subdomains, one per processing element (PE). The subdomains may be of arbitrary shape but, for efficiency, should have small surface-to-volume ratios. The strategy is ideally suited for distributed memory computers, but also works on shared memory architectures. The hydrodynamic module, which uses a cell-based algorithm using discontinuous finite elements, is parallelized by assigning cells to different PEs. This assignment is done by a separate program and constitutes input data for ICF3D. The diffusion module, a kernel of the heat conduction and radiation diffusion packages, advances continuous fields which are discretized using a nodal finite element method. This module is parallelized by assigning points to individual PEs. The assignment is done within ICF3D. The code is in C++. Special message passing objects (MPO) determine the connectivity of the subdomains and transfer data between them by calling MPI functions. Results are presented on a variety of computers: CRAY T3D and IBM SP2 at Livermore, and Intel's ASCI RED at Sandia, Albuquerque.

  16. A 2-dimensional MHD code & survey of the ``buckling'' phenomenon in cylindrical magnetic flux compression experiments

    NASA Astrophysics Data System (ADS)

    Xiao, Bo; Wang, Ganghua; Gu, Zhuowei; Computational Physics Team

    2015-11-01

    We have developed a 2-dimensional Lagrangian magneto-hydrodynamics code. The code handles two kinds of magnetic configuration: an (x-y) plane with z-direction magnetic field Bz, and an (r-z) plane with θ-direction magnetic field Bθ. The solution of the MHD equations is split into a pure dynamical step (i.e., ideal MHD) and a diffusion step. In the diffusion step, the Joule heat is calculated with a numerical scheme based on a specific form of the Joule heat production equation, ∂e_J/∂t = ∇ · ((η/μ₀) B × (∇ × B)) − ∂/∂t (B²/(2μ₀)), where the term ∂/∂t (B²/(2μ₀)) is the magnetic field energy variation caused solely by diffusion. This scheme ensures the equality of the total Joule heat produced and the total electromagnetic energy lost in the system. Material elastoplasticity is considered in the code. An external circuit is coupled to the magneto-hydrodynamics, and a detonation module is also added to enhance the code's ability to simulate magnetically-driven compression experiments. As a first application, the code was utilized to simulate a cylindrical magnetic flux compression experiment. The origin of the ``buckling'' phenomenon observed in the experiment is explored.

  17. MR image compression using a wavelet transform coding algorithm.

    PubMed

    Angelidis, P A

    1994-01-01

    We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.

  18. Hydrodynamic Liner Experiments Using the Ranchero Flux Compression Generator System

    SciTech Connect

    Goforth, J.H.; Atchison, W.L.; Fowler, C.M.; Lopez, E.A.; Oona, H.; Tasker, D.G.; King, J.C.; Herrera, D.H.; Torres, D.T.; Sena, F.C.; McGuire, J.A.; Reinovsky, R.E.; Stokes, J.L.; Tabaka, L.J.; Garcia, O.F.; Faehl, R.J.; Lindemuth, I.R.; Keinigs, R.K.; Broste, B.

    1998-10-18

    The authors have developed a system for driving hydrodynamic liners at currents approaching 30 MA. Their 43 cm module will deliver currents of interest, and when fully developed, the 1.4 m module will allow similar currents with more total system inductance. With these systems they can perform interesting physics experiments and support the Atlas development effort.

  19. Rank minimization code aperture design for spectrally selective compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2013-03-01

A new code aperture design framework for the multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.

  20. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
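As a concrete illustration of the idea (a textbook construction, not Ancheta's general scheme): with the parity-check matrix of the Hamming(7,4) code, any 7-bit source block containing at most one 1 is recovered exactly from its 3-bit syndrome, giving lossless 7:3 compression for such sparse blocks.

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code; column i is the binary
# representation of i+1, so a weight-1 pattern in position i has
# syndrome equal to the binary representation of i+1.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T  # (3, 7)

def compress(block):
    """Treat a sparse 7-bit source block as an 'error pattern';
    its 3-bit syndrome is the compressed representation."""
    return H.dot(block) % 2

def decompress(syndrome):
    """Recover the unique weight<=1 pattern with this syndrome."""
    pos = int("".join(map(str, syndrome)), 2)   # nonzero => bit position + 1
    block = np.zeros(7, dtype=int)
    if pos:
        block[pos - 1] = 1
    return block
```

Blocks of weight greater than one alias to the same syndromes, which is why the full scheme needs a source model under which heavier patterns are rare.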

  1. Compressive imaging using fast transform coding

    NASA Astrophysics Data System (ADS)

    Thompson, Andrew; Calderbank, Robert

    2016-10-01

    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.
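For reference, the 2D Haar building block that such multi-scale measurements relate to is the orthonormal transform of a 2x2 block into average, horizontal, vertical, and diagonal components; a minimal sketch (standard transform, not the paper's Delsarte-Goethals construction):

```python
import numpy as np

def haar2(block):
    """One level of the orthonormal 2-D Haar transform on a 2x2 block:
    average, horizontal, vertical, and diagonal detail coefficients."""
    a, b = block[0]
    c, d = block[1]
    return np.array([[a + b + c + d, a - b + c - d],
                     [a + b - c - d, a - b - c + d]]) / 2.0

x = np.array([[9.0, 7.0], [5.0, 3.0]])
y = haar2(x)   # orthonormal: the block's energy is preserved
```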

  2. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  3. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity needs of the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  4. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    DOE PAGES

    Doebling, Scott William

    2016-10-22

This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  5. Description of a parallel, 3D, finite element, hydrodynamics-diffusion code

    SciTech Connect

    Milovich, J L; Prasad, M K; Shestakov, A I

    1999-04-11

    We describe a parallel, 3D, unstructured grid finite element, hydrodynamic diffusion code for inertial confinement fusion (ICF) applications and the ancillary software used to run it. The code system is divided into two entities, a controller and a stand-alone physics code. The code system may reside on different computers; the controller on the user's workstation and the physics code on a supercomputer. The physics code is composed of separate hydrodynamic, equation-of-state, laser energy deposition, heat conduction, and radiation transport packages and is parallelized for distributed memory architectures. For parallelization, a SPMD model is adopted; the domain is decomposed into a disjoint collection of subdomains, one per processing element (PE). The PEs communicate using MPI. The code is used to simulate the hydrodynamic implosion of a spherical bubble.

  6. New numerical solutions of three-dimensional compressible hydrodynamic convection. [in stars

    NASA Technical Reports Server (NTRS)

    Hossain, Murshed; Mullan, D. J.

    1990-01-01

Numerical solutions of three-dimensional compressible hydrodynamics (including sound waves) in a stratified medium with open boundaries are presented. Convergent/divergent points play a controlling role in the flows, which are dominated by a single frequency related to the mean sound crossing time. Superposed on these rapid compressive flows, slower eddy-like flows eventually create convective transport. The solutions contain small structures stacked on top of larger ones, with vertical scales equal to the local pressure scale heights, H_p. Although convective transport starts later in the evolution, vertical scales of H_p are apparently selected at much earlier times by nonlinear compressive effects.

  7. Wyner-Ziv video compression using rateless LDPC codes

    NASA Astrophysics Data System (ADS)

    He, Da-ke; Jagmohan, Ashish; Lu, Ligang; Sheinin, Vadim

    2008-01-01

In this paper we consider Wyner-Ziv video compression using rateless LDPC codes. It is shown that the advantages of using rateless LDPC codes in Wyner-Ziv video compression, in comparison to using traditional fixed-rate LDPC codes, are at least threefold: 1) it significantly reduces the storage complexity; 2) it allows seamless integration with mode selection; and 3) it greatly improves the overall system's performance. Experimental results on the standard CIF-sized sequence mobile_and_calendar show that by combining rateless LDPC coding with simple skip mode selection, one can build a Wyner-Ziv video compression system that is, at rate 0.2 bits per pixel, about 2.25 dB away from the standard JM software implementation of the H.264 main profile, more than 8.5 dB better than H.264 Intra where all frames are H.264 coded intrapredicted frames, and about 2.3 dB better than the same Wyner-Ziv system using fixed-rate LDPC coding. In terms of encoding complexity, the Wyner-Ziv video compression system is two orders of magnitude less complex than the JM implementation of the H.264 main profile.

  8. THEHYCO-3DT: Thermal hydrodynamic code for the 3 dimensional transient calculation of advanced LMFBR core

    SciTech Connect

    Vitruk, S.G.; Korsun, A.S.; Ushakov, P.A.

    1995-09-01

The multilevel mathematical model of neutron and thermal hydrodynamic processes in a passive safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new, effective discretization technique for the energy, momentum, and mass conservation equations is applied in hexagonal-z geometry. The model's adequacy and applicability are presented. The results of the calculations show that the model and the computer code could be used in the conceptual design of advanced reactors.

  9. GPUPEGAS: A NEW GPU-ACCELERATED HYDRODYNAMIC CODE FOR NUMERICAL SIMULATIONS OF INTERACTING GALAXIES

    SciTech Connect

    Kulikov, Igor

    2014-09-01

    In this paper, a new scalable hydrodynamic code, GPUPEGAS (GPU-accelerated Performance Gas Astrophysical Simulation), for the simulation of interacting galaxies is proposed. The details of a parallel numerical method co-design are described. A speed-up of 55 times was obtained within a single GPU accelerator. The use of 60 GPU accelerators resulted in 96% parallel efficiency. A collisionless hydrodynamic approach has been used for modeling of stars and dark matter. The scalability of the GPUPEGAS code is shown.

  10. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, Joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
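The correction-factor lookup reduces, in essence, to interpolating a precomputed table at the current liner radius. A minimal sketch (the table values and function names are hypothetical, not from the IDL experiment):

```python
import numpy as np

# Hypothetical correction-factor table: ratio of the 3-D field-solver
# result to the ideal 1-D field, tabulated at a set of static liner radii.
radius_m   = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
correction = np.array([0.78, 0.85, 0.90, 0.94, 0.97])

def corrected_field(B_1d, r):
    """Scale the 1-D field estimate by a factor linearly interpolated
    from the precomputed table at the current liner radius r."""
    return B_1d * np.interp(r, radius_m, correction)
```

A 2-D table over radius and axial position could be handled the same way with scipy.interpolate.RegularGridInterpolator.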

  11. Efficient image compression scheme based on differential coding

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Wang, Guoyou; Liu, Ying

    2007-11-01

Embedded zerotree wavelet (EZW) coding and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J. M. Shapiro and Amir Said, are very effective and widely used. In this study, a brief explanation of the principles of SPIHT is first provided, and then several improvements to the SPIHT algorithm, based on experiments, are introduced. 1) To address redundancy among the coefficients in the wavelet domain, we propose a differential method to reduce it during coding. 2) Meanwhile, based on the characteristic distribution of the coefficients in each subband, we adjust the sorting pass and optimize the differential coding in order to reduce redundant coding in each subband. 3) The image coding results, calculated at a given threshold, show that through differential coding the compression ratio is higher and the quality of the reconstructed image is raised greatly: at 0.5 bpp (bits per pixel), the PSNR (peak signal-to-noise ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.

  12. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  13. Differential direct coding: a compression algorithm for nucleotide sequence data

    PubMed Central

    Vey, Gregory

    2009-01-01

    While modern hardware can provide vast amounts of inexpensive storage for biological databases, the compression of nucleotide sequence data is still of paramount importance in order to facilitate fast search and retrieval operations through a reduction in disk traffic. This issue becomes even more important in light of the recent increase of very large data sets, such as metagenomes. In this article, I propose the Differential Direct Coding algorithm, a general-purpose nucleotide compression protocol that can differentiate between sequence data and auxiliary data by supporting the inclusion of supplementary symbols that are not members of the set of expected nucleotide bases, thereby offering reconciliation between sequence-specific and general-purpose compression strategies. This algorithm permits a sequence to contain a rich lexicon of auxiliary symbols that can represent wildcards, annotation data and special subsequences, such as functional domains or special repeats. In particular, the representation of special subsequences can be incorporated to provide structure-based coding that increases the overall degree of compression. Moreover, supporting a robust set of symbols removes the requirement of wildcard elimination and restoration phases, resulting in a complexity of O(n) for execution time, making this algorithm suitable for very large data sets. Because this algorithm compresses data on the basis of triplets, it is highly amenable to interpretation as a polypeptide at decompression time. Also, an encoded sequence may be further compressed using other existing algorithms, like gzip, thereby maximizing the final degree of compression. Overall, the Differential Direct Coding algorithm can offer a beneficial impact on disk traffic for database queries and other disk-intensive operations. PMID:20157486
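The triplet idea can be illustrated with a toy byte code (hypothetical and deliberately simplified; Vey's actual lexicon and framing are richer): the 4³ = 64 codons occupy byte values 0-63, leaving the rest of the byte range free for leftover single bases and auxiliary symbols such as wildcards.

```python
BASES = "ACGT"
AUX = {"N": 68, "-": 69}            # hypothetical auxiliary symbols
AUX_INV = {v: k for k, v in AUX.items()}

def encode(seq):
    """Pack a nucleotide sequence into bytes, one byte per triplet.
    Codes 0-63 are codons, 64-67 leftover single bases, 68+ auxiliary."""
    out, buf = bytearray(), ""
    def flush():
        nonlocal buf
        for ch in buf:                          # emit leftover singles
            out.append(64 + BASES.index(ch))
        buf = ""
    for ch in seq:
        if ch in AUX:
            flush()
            out.append(AUX[ch])
        else:
            buf += ch
            if len(buf) == 3:                   # a full codon: one byte
                a, b, c = (BASES.index(x) for x in buf)
                out.append(a * 16 + b * 4 + c)
                buf = ""
    flush()
    return bytes(out)

def decode(data):
    out = []
    for byte in data:
        if byte < 64:                           # codon byte -> three bases
            out.append(BASES[byte // 16] + BASES[(byte // 4) % 4] + BASES[byte % 4])
        elif byte < 68:                         # single leftover base
            out.append(BASES[byte - 64])
        else:                                   # auxiliary symbol
            out.append(AUX_INV[byte])
    return "".join(out)
```

Here the 11-character sequence "ACGTGGNAC-T" packs into 7 bytes and round-trips exactly.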

  14. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  15. Closed-form quality measures for compressed medical images: compression noise statistics of transform coding

    NASA Astrophysics Data System (ADS)

    Li, Dunling; Loew, Murray H.

    2004-05-01

This paper provides a theoretical foundation for the closed-form expression of model observers on compressed images. In medical applications, model observers, especially the channelized Hotelling observer, have been successfully used to predict human observer performance and to evaluate image quality for detection tasks in various backgrounds. Using model observers, however, requires knowledge of the noise statistics. This paper first identifies quantization noise as the sole distortion source in transform coding, one of the most commonly used methods for image compression. It then represents transform coding as a 1-D block-based matrix expression and derives the first and second moments and the probability density function (pdf) of the compression noise at the pixel, block, and image levels. The compression noise statistics depend on the transform matrix and the quantization matrix of the transform coding algorithm. Compression noise is jointly normally distributed when the dimension of the transform (the block size) is typical and the contents of the image sets vary randomly. Moreover, this paper uses JPEG as a test example to verify the derived statistics. The simulation results show that the closed-form expression of the JPEG quantization and compression noise statistics correctly predicts the statistics estimated from actual images.
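The elementary fact underlying such closed-form noise statistics is that a uniform quantizer with step q, fed a smoothly distributed input, produces an error that is approximately uniform on [-q/2, q/2], with mean 0 and variance q²/12 (a standard result, not specific to this paper). A quick empirical check:

```python
import random

random.seed(1)
q = 8.0                                       # quantization step
x = [random.gauss(0.0, 50.0) for _ in range(200_000)]
err = [xi - q * round(xi / q) for xi in x]    # quantization error per sample
mean = sum(err) / len(err)
var = sum(e * e for e in err) / len(err)
# theory: mean ~ 0, variance ~ q**2 / 12 = 5.333...
```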

  16. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  17. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and huffman coding

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Permala, R.

    2017-01-01

LAPAN-A3/IPB is the latest Indonesian experimental microsatellite, with remote sensing and earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera, and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either lossy or lossless compression. This paper analyzes the Differential Pulse Code Modulation (DPCM) method and the Huffman coding used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses, the current LAPAN-IPB lossless compression algorithm has moderate performance. Several aspects of the current configuration can be improved: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research shows that at least two neighboring pixels should be used in the DPCM calculation to increase compression performance. Meanwhile, varying the Huffman tables with a sub-image approach could also increase performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
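A minimal sketch of the two stages (a generic DPCM-plus-Huffman pipeline, not the actual LAPAN-A3/IPB flight configuration): residuals against the previous pixel, then a Huffman code built from the residual histogram.

```python
import heapq
from collections import Counter
from itertools import count

def dpcm(pixels):
    """Differential pulse-code modulation: each sample is replaced by
    its difference from the previous one (first sample kept as-is)."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def huffman_code(symbols):
    """Build a prefix-free Huffman code {symbol: bitstring}."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    tie = count()                             # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Small differences dominate after DPCM, so they receive short codewords.
row = [100, 101, 101, 102, 101, 100, 100, 150, 100, 100]
residuals = dpcm(row)
code = huffman_code(residuals)
bits = "".join(code[r] for r in residuals)
```

Because DPCM concentrates the residual histogram around zero, the common small residuals get the shortest codewords, which is where the compression comes from.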

  18. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  19. Achieving H.264-like compression efficiency with distributed video coding

    NASA Astrophysics Data System (ADS)

    Milani, Simone; Wang, Jiajun; Ramchandran, Kannan

    2007-01-01

    Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity encoding. However, to date, these low-complexity DSC-based video encoders have been unable to compress as efficiently as motion-compensated predictive coding based video codecs, such as H.264/AVC, due to insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity motion estimation. This motivates us to deviate from the popular approach of approaching the Wyner-Ziv bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder is an important first step towards building a robust DSC-based video coding framework.

  20. Terminal Ballistic Application of Hydrodynamic Computer Code Calculations.

    DTIC Science & Technology

    1977-04-01

Summarizes results of applying a particular numerical method, the method programmed in the HEMP code, to the simulation of fragmenting shells and Misznay-Schardin devices. The report covers background and summary, the HEMP code formulation, the accuracy of HEMP code solutions, and a comparison of the calculated fragment velocity distribution with arena test data for an explosively loaded cylinder.

  1. Introduction and guide to LLNL's relativistic 3-D nuclear hydrodynamics code

    SciTech Connect

    Zingman, J.A.; McAbee, T.L.; Alonso, C.T.; Wilson, J.R.

    1987-11-01

    We have constructed a relativistic hydrodynamic model to investigate Bevalac and higher energy, heavy-ion collisions. The basis of the model is a finite-difference solution to covariant hydrodynamics, which will be described in the rest of this paper. This paper also contains: a brief review of the equations and numerical methods we have employed in the solution to the hydrodynamic equations, a detailed description of several of the most important subroutines, and a numerical test on the code. 30 refs., 8 figs., 1 tab.

  2. Numerical simulations of hydrodynamic instabilities: Perturbation codes PANSY, PERLE, and 2D code CHIC applied to a realistic LIL target

    NASA Astrophysics Data System (ADS)

    Hallo, L.; Olazabal-Loumé, M.; Maire, P. H.; Breil, J.; Morse, R.-L.; Schurtz, G.

    2006-06-01

This paper deals with simulations of ablation front instabilities in the context of direct-drive ICF. A simplified DT target, representative of a realistic target on LIL, is considered. We describe two numerical approaches: the linear perturbation method, using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method, using our two-dimensional hydrodynamic code Chic. Numerical solutions are shown to converge, in good agreement with analytical models.

  3. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  4. RICH: Numerical simulation of compressible hydrodynamics on a moving Voronoi mesh

    NASA Astrophysics Data System (ADS)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re'em

    2014-10-01

RICH (Racah Institute Computational Hydrodynamics) is a 2D hydrodynamic code based on Godunov's method. The code, largely based on AREPO, acts on an unstructured moving mesh. It differs from AREPO in its interpolation and time-advancement scheme, as well as in a novel parallelization scheme based on Voronoi tessellation. In many cases, though not universally, a moving mesh gives better results than a static mesh; an exception is where matter moves one way and a sound wave travels the other, such that relative to the grid the wave is not moving, in which case a static mesh gives better results than a moving mesh. RICH is designed in an object-oriented, user-friendly way that facilitates the incorporation of new algorithms and physical processes.

  5. TPCI: the PLUTO-CLOUDY Interface . A versatile coupled photoionization hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Salz, M.; Banerjee, R.; Mignone, A.; Schneider, P. C.; Czesla, S.; Schmitt, J. H. M. M.

    2015-04-01

    We present an interface between the (magneto-) hydrodynamics code PLUTO and the plasma simulation and spectral synthesis code CLOUDY. By combining these codes, we constructed a new photoionization hydrodynamics solver: the PLUTO-CLOUDY Interface (TPCI), which is well suited to simulate photoevaporative flows under strong irradiation. The code includes the electromagnetic spectrum from X-rays to the radio range and solves the photoionization and chemical network of the 30 lightest elements. TPCI follows an iterative numerical scheme: first, the equilibrium state of the medium is solved for a given radiation field by CLOUDY, resulting in a net radiative heating or cooling. In the second step, the latter influences the (magneto-) hydrodynamic evolution calculated by PLUTO. Here, we validated the one-dimensional version of the code on the basis of four test problems: photoevaporation of a cool hydrogen cloud, cooling of coronal plasma, formation of a Strömgren sphere, and the evaporating atmosphere of a hot Jupiter. This combination of an equilibrium photoionization solver with a general MHD code provides an advanced simulation tool applicable to a variety of astrophysical problems. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/576/A21

  6. The multigrid method for semi-implicit hydrodynamics codes

    SciTech Connect

    Brandt, A.; Dendy, J.E. Jr.; Ruppel, H.

    1980-03-01

    The multigrid method is applied to the pressure iteration in both Eulerian and Lagrangian codes, and computational examples of its efficiency are presented. In addition a general technique for speeding up the calculation of very low Mach number flows is presented. The latter feature is independent of the multigrid algorithm.
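The idea can be sketched on the simplest model of a pressure equation, a 1-D Poisson problem: damped-Jacobi sweeps remove high-frequency error on the fine grid, and a coarse-grid solve removes the smooth error that Jacobi handles slowly. Below is a minimal two-grid cycle with injection restriction and linear prolongation (an illustrative sketch, not the papers' production scheme):

```python
import numpy as np

def smooth(u, f, h, iters=3, w=2/3):
    """Weighted-Jacobi sweeps for -u'' = f with u = 0 at both ends."""
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    return r

def two_grid(u, f, h):
    """One cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = smooth(u, f, h)
    r = residual(u, f, h)
    rc = r[::2].copy()                       # injection onto the coarse grid
    n2 = rc.size
    A = (np.diag(np.full(n2 - 2, 2.0))       # coarse operator, spacing 2h
         - np.diag(np.ones(n2 - 3), 1)
         - np.diag(np.ones(n2 - 3), -1)) / (2 * h) ** 2
    ec = np.zeros(n2)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # exact coarse-grid solve
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongation
    return smooth(u, f, h)

n = 129                                      # fine-grid points, incl. boundaries
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)                        # right-hand side
u = np.zeros(n)
r0 = np.linalg.norm(residual(u, f, h))
for _ in range(5):
    u = two_grid(u, f, h)
r1 = np.linalg.norm(residual(u, f, h))       # residual falls by orders of magnitude
```

A full multigrid solver applies the same correction recursively instead of solving the coarse grid exactly; the 1-D Poisson exact solution sin(πx)/π² provides a convenient check.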

  8. Coded aperture design in mismatched compressive spectral imaging.

    PubMed

    Galvis, Laura; Arguello, Henry; Arce, Gonzalo R

    2015-11-20

Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest-resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually set by the detector, in the CASSI the use of a low-resolution coded aperture implemented with a digital micromirror device (DMD), which induces the grouping of detector pixels into superpixels, is decisive for the final resolution. The mismatch arises from the difference in pitch between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and detector resolution and therefore reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI which admits the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to resolve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel-grouping technique.

  9. Gaseous laser targets and optical diagnostics for studying compressible hydrodynamic instabilities

    SciTech Connect

    Edwards, J M; Robey, H; Mackinnon, A

    2001-06-29

    The goal is to explore the combination of optical diagnostics and gaseous targets to obtain information about compressible turbulent flows that cannot be derived from traditional laser experiments, for the purposes of verification and validation (V&V) of hydrodynamics models and of understanding scaling. First-year objectives: develop and characterize a blast wave-gas jet test bed; perform single-pulse shadowgraphy of the blast wave interaction with a turbulent gas jet as a function of blast wave Mach number; explore double-pulse shadowgraphy and image correlation for extracting velocity spectra in the shock-turbulent flow interaction; and explore the use and adaptation of advanced diagnostics.

  10. A new relativistic hydrodynamics code for high-energy heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Okamoto, Kazuhisa; Akamatsu, Yukinao; Nonaka, Chiho

    2016-10-01

    We construct a new Godunov-type relativistic hydrodynamics code in Milne coordinates, using a Riemann solver based on the two-shock approximation, which remains stable in the presence of large shock waves. We check the correctness of the numerical algorithm by comparing numerical calculations with analytical solutions in various problems, such as shock tubes, expansion of matter into the vacuum, the Landau-Khalatnikov solution, and propagation of fluctuations around Bjorken flow and Gubser flow. We investigate the energy and momentum conservation properties of our code in a test problem of longitudinal hydrodynamic expansion with an initial condition for high-energy heavy-ion collisions. We also discuss numerical viscosity in the test problems of expansion of matter into the vacuum and conservation properties. Furthermore, we discuss how the numerical stability is affected by the source terms of relativistic numerical hydrodynamics in Milne coordinates.
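
    For readers unfamiliar with Godunov-type schemes, the sketch below solves a classical (non-relativistic) Sod shock tube with a first-order finite-volume update and an HLL approximate Riemann solver; it illustrates only the interface-flux structure and is far simpler than the relativistic two-shock solver of the paper:

```python
import numpy as np

gamma = 1.4

def flux(rho, mom, E):
    # Euler fluxes for conserved state (density, momentum, total energy)
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll(UL, UR):
    # HLL approximate Riemann solver with simple Davis wave-speed estimates
    def speeds(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u**2)
        return u, np.sqrt(gamma * p / rho)
    uL, cL = speeds(UL)
    uR, cR = speeds(UR)
    sL, sR = min(uL - cL, uR - cR), max(uL + cL, uR + cR)
    FL, FR = flux(*UL), flux(*UR)
    if sL >= 0:
        return FL
    if sR <= 0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

# Sod shock tube, first-order Godunov update, fixed (stable) time step
n = 200
dx = 1.0 / n
rho = np.where(np.arange(n) < n // 2, 1.0, 0.125)
p = np.where(np.arange(n) < n // 2, 1.0, 0.1)
U = np.array([rho, np.zeros(n), p / (gamma - 1)])
t, dt = 0.0, 0.001
while t < 0.2:
    F = np.array([hll(U[:, i], U[:, i + 1]) for i in range(n - 1)])
    U[:, 1:-1] -= dt / dx * (F[1:].T - F[:-1].T)
    t += dt
```

    The conservative update guarantees that mass, momentum, and energy change only through interface fluxes, which is the property that lets Godunov schemes capture shocks at the correct speed.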

  11. CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.

    2011-01-01

    We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux-limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).
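
    The free-streaming recovery works through the flux limiter. The sketch below is an illustration, not CRASH source: it uses a rational Levermore-Pomraning-type limiter to build a diffusion coefficient that reduces to c/(3 kappa) in optically thick regions while capping the radiative flux at c*E in thin ones:

```python
import numpy as np

c = 1.0                                   # speed of light in code units

def limited_diffusion(E, kappa, dx):
    # Flux-limited diffusion coefficient: D = c * lam(R) / kappa with
    # R = |grad E| / (kappa * E).  The rational limiter lam(R) tends to
    # 1/3 when the medium is optically thick and to 1/R when the mean
    # free path is long, so |F| = D * |grad E| never exceeds c * E.
    gradE = np.gradient(E, dx)
    R = np.abs(gradE) / np.maximum(kappa * E, 1e-300)
    lam = (2.0 + R) / (6.0 + 3.0 * R + R * R)
    return c * lam / kappa, gradE

x = np.linspace(0.0, 1.0, 101)
E = np.exp(-20.0 * x)                     # steep radiation-energy profile
D_thick, g_thick = limited_diffusion(E, kappa=1e6, dx=x[1] - x[0])   # thick
D_thin, g_thin = limited_diffusion(E, kappa=1e-6, dx=x[1] - x[0])    # thin
F_thin = np.abs(D_thin * g_thin)          # limited flux in the thin regime
```

    Without the limiter, the thin-regime flux would exceed c*E by many orders of magnitude, which is unphysical for radiation.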

  12. SMITE - A Second Order Eulerian Code for Hydrodynamic and Elastic-Plastic Problems

    DTIC Science & Technology

    1975-08-01

    SMITE - A Second Order Eulerian Code for Hydrodynamic and Elastic-Plastic Problems. Prepared by Mathematical Applications Group, Inc., 13 Westchester Plaza, Elmsford, New York 10523, for the Ballistic Research Laboratories, August 1975. Distributed by the National Technical Information Service.

  13. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  14. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  15. Hydrodynamic Instability, Integrated Code, Laboratory Astrophysics, and Astrophysics

    NASA Astrophysics Data System (ADS)

    Takabe, Hideaki

    This article accompanies the Edward Teller Medal memorial lecture presented at the IFSA03 conference on September 12, 2003, in Monterey, CA. The author focuses on his main contributions to fusion science and its extension to astrophysics in theory and computation, through five topics. The first is the anomalous resistivity to hot electrons penetrating the over-dense region through the ion-wave turbulence driven by the return current that compensates the current of the hot electrons; it is concluded that a potential almost equal to the average kinetic energy of the hot electrons is set up, preventing their penetration. The second is the ablative stabilization of the Rayleigh-Taylor instability at the ablation front and its dispersion relation, the so-called Takabe formula, which gave a principal guideline for stable target design. The third is the development of the integrated code ILESTA (1D and 2D) for analyses and design of laser-produced plasmas, including implosion dynamics; it has also been applied to the design of high-gain targets. The fourth is Laboratory Astrophysics with intense lasers, consisting of two parts: a review of its historical background, and a discussion of how laser plasmas relate to wide-ranging astrophysics and of the purposes for promoting such research; in this connection, the author comments on the anomalous transport of relativistic electrons in the Fast Ignition laser fusion scheme. Finally, the author briefly summarizes recent activity applying this experience to the development of an integrated code for studying extreme phenomena in astrophysics.

  17. A compressible high-order unstructured spectral difference code for stratified convection in rotating spherical shells

    NASA Astrophysics Data System (ADS)

    Wang, Junfeng; Liang, Chunlei; Miesch, Mark S.

    2015-06-01

    We present a novel and powerful Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating thermal convection and related fluid dynamics in the interiors of stars and planets. The computational geometries are treated as rotating spherical shells filled with stratified gas. The hydrodynamic equations are discretized by a robust and efficient high-order Spectral Difference Method (SDM) on unstructured meshes. The computational stencil of the spectral difference method is compact and advantageous for parallel processing. CHORUS demonstrates excellent parallel performance for all test cases reported in this paper, scaling up to 12 000 cores on the Yellowstone High-Performance Computing cluster at NCAR. The code is verified by defining two benchmark cases for global convection in Jupiter and the Sun. CHORUS results are compared with results from the ASH code and good agreement is found. The CHORUS code creates new opportunities for simulating such varied phenomena as multi-scale solar convection, core convection, and convection in rapidly-rotating, oblate stars.

  18. Modified-Gravity-GADGET: a new code for cosmological hydrodynamical simulations of modified gravity models

    NASA Astrophysics Data System (ADS)

    Puchwein, Ewald; Baldi, Marco; Springel, Volker

    2013-11-01

    We present a new massively parallel code for N-body and cosmological hydrodynamical simulations of modified gravity models. The code employs a multigrid-accelerated Newton-Gauss-Seidel relaxation solver on an adaptive mesh to efficiently solve for perturbations in the scalar degree of freedom of the modified gravity model. As this new algorithm is implemented as a module for the P-GADGET3 code, it can at the same time follow the baryonic physics included in P-GADGET3, such as hydrodynamics, radiative cooling and star formation. We demonstrate that the code works reliably by applying it to simple test problems that can be solved analytically, as well as by comparing cosmological simulations to results from the literature. Using the new code, we perform the first non-radiative and radiative cosmological hydrodynamical simulations of an f(R)-gravity model. We also discuss the impact of active galactic nucleus feedback on the matter power spectrum, as well as degeneracies between the influence of baryonic processes and modifications of gravity.

  19. High strain Lagrangian hydrodynamics: A three dimensional SPH code for dynamic material response

    NASA Astrophysics Data System (ADS)

    Allahdadi, Firooz A.; Carney, Theodore C.; Hipp, Jim R.; Libersky, Larry D.; Petschek, Albert G.

    1993-03-01

    MAGI, a three-dimensional shock and material response code based on Smoothed Particle Hydrodynamics, is described. Calculations are presented and compared with experimental results. The SPH method is unique in that it employs no spatial mesh. The absence of a grid leads to some attractive features, such as the ability to handle large distortions in a pure Lagrangian frame and a natural treatment of voids. Both of these features are important in the tracking of debris clouds produced by hypervelocity impact, a difficult problem for which Smoothed Particle Hydrodynamics seems ideally suited. This is believed to be the first application of SPH to the dynamics of elastic-plastic solids.
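
    The meshless character of SPH comes from estimating field quantities by kernel sums over neighbouring particles. A minimal density estimate with the standard 3D cubic-spline kernel (an illustration, not MAGI code) is:

```python
import numpy as np

def w_cubic(r, h):
    # standard cubic-spline SPH kernel in 3D, normalised so that it
    # integrates to one over its compact support r < 2h
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

# meshless density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h)
rng = np.random.default_rng(1)
n, h = 2000, 0.15
pos = rng.random((n, 3))                 # unit cube, total mass 1 -> mean density 1
m = np.full(n, 1.0 / n)
i = int(np.argmin(np.sum((pos - 0.5)**2, axis=1)))   # particle nearest the centre
r = np.linalg.norm(pos - pos[i], axis=1)
rho_i = float(np.sum(m * w_cubic(r, h)))
```

    Because the sum runs over particles rather than grid cells, large distortions and empty regions (voids) need no special treatment, which is the property the abstract highlights.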

  20. Magneto-hydrodynamic calculation of magnetic flux compression using imploding cylindrical liners

    NASA Astrophysics Data System (ADS)

    Zhao, Jibo; Sun, Chengwei; Gu, Zhuowei

    2015-06-01

    Based on the one-dimensional elastic-plastic reactive hydrodynamic code SSS, the one-dimensional magneto-hydrodynamic code SSS/MHD was developed and applied to cylindrical magneto-cumulative generators (MC-1 devices). The diffusion of the magnetic field into the liner and the sample tube is analyzed; the results show that the maximum magnetic induction at 0.2 mm into the liner is only about sixteen tesla, while that in the sample tube reaches several hundred tesla, a difference caused by the balance between the electromagnetic and imploding forces at the different velocities of the liner and the sample tube. The calculated histories of the magnetic induction on the cavity axis and of the velocity at the sample-tube wall agree with the experimental results. This work shows that SSS/MHD can be applied to experimental configurations combining detonation, shock, and electromagnetic loads and to parameter improvement: experimental data can be estimated, analyzed, and checked, and the physics of the associated devices can be understood more deeply. This work was supported by the special funds of the National Natural Science Foundation of China under Grant 11176002.

  1. Simulating hypervelocity impact effects on structures using the smoothed particle hydrodynamics code MAGI

    NASA Technical Reports Server (NTRS)

    Libersky, Larry; Allahdadi, Firooz A.; Carney, Theodore C.

    1992-01-01

    Analysis of the interaction occurring between space debris and orbiting structures is of great interest to the planning and survivability of space assets. Computer simulation of impact events using hydrodynamic codes can provide some understanding of the processes, but the problems involved with this fundamental approach are formidable. First, any realistic simulation is necessarily three-dimensional, e.g., the impact and breakup of a satellite. Second, the thicknesses of important components such as satellite skins or bumper shields are small with respect to the dimensions of the structure as a whole, presenting severe zoning problems for codes. Third, the debris cloud produced by the primary impact will yield many secondary impacts which will contribute to the damage and possible breakup of the structure. The problem was approached by choosing a relatively new computational technique that has virtues peculiar to space impacts. The method is called Smoothed Particle Hydrodynamics.

  2. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
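
    A software sketch of the Golomb-Rice stage (pure Python, standing in for the chip's mixed-signal circuit) shows why this code suits prediction residuals: small-magnitude residuals, which dominate after decorrelation, map to short codewords:

```python
def rice_encode(residual, k):
    # Golomb-Rice code: map the signed residual to a non-negative integer
    # (zig-zag), emit the high bits in unary and the low k bits in binary.
    n = 2 * residual if residual >= 0 else -2 * residual - 1
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def rice_decode(bits, k):
    q = bits.index('0')                        # unary-coded quotient
    n = (q << k) | int(bits[q + 1:q + 1 + k] or '0', 2)
    return (n >> 1) if n % 2 == 0 else -((n + 1) >> 1)

codes = [rice_encode(e, 2) for e in (-3, -1, 0, 1, 5)]
decoded = [rice_decode(c, 2) for c in codes]
```

    The parameter k trades off the unary and binary parts; hardware implementations like the one described above share the comparator/counter logic of a single-slope ADC to generate these bit patterns.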

  3. Coded strobing photography: compressive sensing of high speed periodic videos.

    PubMed

    Veeraraghavan, Ashok; Reddy, Dikpal; Raskar, Ramesh

    2011-04-01

    We show that, via temporal modulation, one can observe and capture a high-speed periodic video well beyond the abilities of a low-frame-rate camera. By strobing the exposure with unique sequences within the integration time of each frame, we take coded projections of dynamic events. From a sequence of such frames, we reconstruct a high-speed video of the high-frequency periodic process. Strobing is used in entertainment, medical imaging, and industrial inspection to generate lower beat frequencies. But this is limited to scenes with a detectable single dominant frequency and requires high-intensity lighting. In this paper, we address the problem of sub-Nyquist sampling of periodic signals and show designs to capture and reconstruct such signals. The key result is that for such signals, the Nyquist rate constraint can be imposed on the strobe rate rather than the sensor rate. The technique is based on intentional aliasing of the frequency components of the periodic signal while the reconstruction algorithm exploits recent advances in sparse representations and compressive sensing. We exploit the sparsity of periodic signals in the Fourier domain to develop reconstruction algorithms that are inspired by compressive sensing.
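
    The sub-Nyquist idea can be prototyped in a few lines: integrate a Fourier-sparse periodic signal against pseudorandom strobe patterns, then recover the active frequencies with a greedy sparse solver. This is a toy stand-in (±1 codes, orthogonal matching pursuit) for the authors' system; all sizes and names are chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 256, 96, 3                 # high-rate samples per frame, frames, sparsity
t = np.arange(N)
freqs = [5, 12, 40]                  # the periodic signal is K-sparse in frequency
x = sum(np.cos(2 * np.pi * f * t / N) for f in freqs)

# each frame integrates the scene against a pseudorandom +/-1 strobe pattern
# (physical strobing is on/off; subtracting the mean gives the same model)
Phi = rng.integers(0, 2, (M, N)) * 2.0 - 1.0
y = Phi @ x

def omp(A, y, K):
    # orthogonal matching pursuit: greedily add the atom most correlated
    # with the residual, then refit all chosen atoms by least squares
    idx, r = [], y.copy()
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        s, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ s
    return sorted(idx), r

D = np.cos(2 * np.pi * np.outer(t, np.arange(N // 2 + 1)) / N)  # cosine dictionary
support, resid = omp(Phi @ D, y, K)
```

    The M = 96 coded frames are far fewer than the N = 256 high-rate samples they summarize, yet the sparsity of the signal in the frequency dictionary makes the recovery well posed.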

  5. High-fidelity numerical simulations of compressible turbulence and mixing generated by hydrodynamic instabilities

    NASA Astrophysics Data System (ADS)

    Movahed, Pooya

    High-speed flows are prone to hydrodynamic interfacial instabilities that evolve to turbulence, thereby intensely mixing different fluids and dissipating energy. The lack of knowledge of these phenomena has impeded progress in a variety of disciplines. In science, a full understanding of mixing between heavy and light elements after the collapse of a supernova and between adjacent layers of different density in geophysical (atmospheric and oceanic) flows remains lacking. In engineering, the inability to achieve ignition in inertial fusion and efficient combustion constitutes a further example of this lack of basic understanding of turbulent mixing. In this work, my goal is to develop accurate and efficient numerical schemes and employ them to study compressible turbulence and mixing generated by interactions between shocked (Richtmyer-Meshkov) and accelerated (Rayleigh-Taylor) interfaces, which play important roles in high-energy-density physics environments. To accomplish my goal, a hybrid high-order central/discontinuity-capturing finite difference scheme is first presented. The underlying principle is that, to accurately and efficiently represent both broadband motions and discontinuities, non-dissipative methods are used where the solution is smooth, while the more expensive and dissipative capturing schemes are applied near discontinuous regions. Thus, an accurate numerical sensor is developed to discriminate between smooth regions, shocks and material discontinuities, which all require a different treatment. The interface capturing approach is extended to central differences, such that smooth distributions of varying specific heats ratio can be simulated without generating spurious pressure oscillations. I verified and validated this approach against a stringent suite of problems including shocks, interfaces, turbulence and two-dimensional single-mode Richtmyer-Meshkov instability simulations. The three-dimensional code is shown to scale well up to 4000 cores.

  6. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.

  7. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.
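
    As a hedged sketch of that coupling (toy objective and parameters, nothing from the actual design code), a genetic algorithm layered over a black-box performance evaluation looks like:

```python
import random

random.seed(0)

def fitness(x):
    # toy objective standing in for a blade-element momentum evaluation of
    # rotor efficiency; the true code would penalise cavitation here too
    return -(x - 0.7)**2

def evolve(pop, generations=60, mut=0.1):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:len(pop) // 2]            # truncation selection (elitist)
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                # blend crossover
            child += random.gauss(0.0, mut)      # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve([random.random() for _ in range(30)])
```

    In the real design problem the single scalar x would be replaced by the chord, twist, and hydrofoil distributions along the blade, with the same select/crossover/mutate loop driving the search.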

  8. A 3+1 dimensional viscous hydrodynamic code for relativistic heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Karpenko, Iu.; Huovinen, P.; Bleicher, M.

    2014-11-01

    We describe the details of a 3+1 dimensional relativistic hydrodynamic code for the simulations of quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. The code solves the equations of relativistic viscous hydrodynamics in the Israel-Stewart framework. With the help of ideal-viscous splitting, we keep the ability to solve the equations of ideal hydrodynamics in the limit of zero viscosities using a Godunov-type algorithm. Milne coordinates are used to treat the predominant expansion in the longitudinal (beam) direction effectively. The results are successfully tested against known analytical relativistic inviscid and viscous solutions, as well as against an existing 2+1D relativistic viscous code. Catalogue identifier: AETZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETZ_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 13 825. No. of bytes in distributed program, including test data, etc.: 92 750. Distribution format: tar.gz. Programming language: C++. Computer: any with a C++ compiler and the CERN ROOT libraries. Operating system: tested on GNU/Linux Ubuntu 12.04 x64 (gcc 4.6.3), GNU/Linux Ubuntu 13.10 (gcc 4.8.2), Red Hat Linux 6 (gcc 4.4.7). RAM: scales with the number of cells in the hydrodynamic grid; 1900 Mbytes for a 3D 160×160×100 grid. Classification: 1.5, 4.3, 12. External routines: CERN ROOT (http://root.cern.ch), Gnuplot (http://www.gnuplot.info/) for plotting the results. Nature of problem: relativistic hydrodynamical description of the 3-dimensional quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. Solution method: finite volume Godunov-type method. Running time: scales with the number of hydrodynamic cells; typical running times on Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz, single thread mode, 160

  9. BETHE-Hydro: An Arbitrary Lagrangian-Eulerian Multidimensional Hydrodynamics Code for Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Murphy, Jeremiah W.; Burrows, Adam

    2008-11-01

    In this paper, we describe a new hydrodynamics code for one- and two-dimensional (1D and 2D) astrophysical simulations, BETHE-hydro, that uses time-dependent, arbitrary, unstructured grids. The core of the hydrodynamics algorithm is an arbitrary Lagrangian-Eulerian (ALE) approach, in which the gradient and divergence operators are made compatible using the support-operator method. We present 1D and 2D gravity solvers that are finite differenced using the support-operator technique, and the resulting systems of linear equations are solved using the tridiagonal method for 1D simulations and an iterative multigrid-preconditioned conjugate-gradient method for 2D simulations. Rotational terms are included for 2D calculations using cylindrical coordinates. We document an incompatibility between a subcell pressure algorithm to suppress hourglass motions and the subcell remapping algorithm, and present a modified subcell pressure scheme that avoids this problem. Strengths of this code include a straightforward structure, enabling simple inclusion of additional physics packages; the ability to use a general equation of state; and, most importantly, the ability to solve self-gravitating hydrodynamic flows on time-dependent, arbitrary grids. In what follows, we describe in detail the numerical techniques employed and, with a large suite of tests, demonstrate that BETHE-hydro finds accurate solutions with second-order convergence.
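
    The 1D gravity solve reduces to a tridiagonal linear system, for which the O(n) Thomas algorithm is the standard choice. A self-contained sketch (illustrative, not BETHE-hydro source) solving a 1D Poisson problem:

```python
import numpy as np

def thomas(a, b, c, d):
    # Thomas algorithm: O(n) solve of a tridiagonal system
    #   a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson problem phi'' = s with phi(0) = phi(1) = 0; the discrete
# Laplacian (phi[i-1] - 2*phi[i] + phi[i+1]) / h^2 = s[i] is tridiagonal
n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
s = -np.pi**2 * np.sin(np.pi * x)              # exact solution: sin(pi x)
phi = thomas(np.ones(n), -2.0 * np.ones(n), np.ones(n), h * h * s)
```

    The same direct-solve idea does not extend cheaply to 2D, which is why the paper switches to a multigrid-preconditioned conjugate-gradient iteration there.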

  10. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot be easily migrated to run on GPUs. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity at low cost for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. We do not support the use of the code for military purposes.

  11. Investigating the Magnetorotational Instability with Dedalus, an Open-Source Hydrodynamics Code

    SciTech Connect

    Burns, Keaton J. (UC Berkeley; SLAC)

    2012-08-31

    The magnetorotational instability is a fluid instability that causes the onset of turbulence in discs with poloidal magnetic fields. It is believed to be an important mechanism in the physics of accretion discs, namely in its ability to transport angular momentum outward. A similar instability arising in systems with a helical magnetic field may be easier to produce in laboratory experiments using liquid sodium, but the applicability of this phenomenon to astrophysical discs is unclear. To explore and compare the properties of these standard and helical magnetorotational instabilities (MRI and HMRI, respectively), magnetohydrodynamic (MHD) capabilities were added to Dedalus, an open-source hydrodynamics simulator. Dedalus is a Python-based pseudospectral code that uses external libraries and parallelization with the goal of achieving speeds competitive with codes implemented in lower-level languages. This paper will outline the MHD equations as implemented in Dedalus, the steps taken to improve the performance of the code, and the status of MRI investigations using Dedalus.

  12. Simulation of a ceramic impact experiment using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-08-01

    We are developing statistically based brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has in simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPHINX. We describe a new brittle fracture model that we have implemented into SPHINX, and we discuss how the model differs from others. To illustrate the code's current capability, we simulate an experiment in which a tungsten rod strikes a target of heavily confined ceramic. Simulations in 3D at relatively coarse resolution yield poor results. However, 2D plane-strain approximations to the test produce crack patterns that are strikingly similar to the data, although the fracture model needs further refinement to match some of the finer details. We conclude with an outline of plans for continuing research and development.

  13. Modelling of Be Disks in Binary Systems Using the Hydrodynamic Code PLUTO

    NASA Astrophysics Data System (ADS)

    Cyr, I. H.; Panoglou, D.; Jones, C. E.; Carciofi, A. C.

    2016-11-01

    The study of the gas structure and dynamics of Be star disks is critical to our understanding of the Be star phenomenon. The central star is the major force driving the evolution of these disks; however, other external forces may also affect the formation of the disk, for example the gravitational torque produced in a close binary system. We are interested in understanding the gravitational effects of a low-mass binary companion on the formation and growth of a disk in a close binary system. To study these effects, we used the grid-based hydrodynamic code PLUTO. Because this code has not been used to study such systems before, we compared our simulations against codes used in previous work on binary systems. We were able to simulate the formation of a disk in both an isolated and a binary system. Our current results suggest that PLUTO is indeed a well-suited tool to study the dynamics of Be disks.

  14. An efficient pulse compression method of chirp-coded excitation in medical ultrasound imaging.

    PubMed

    Yoon, Changhan; Lee, Wooyoul; Chang, Jin; Song, Tai-kyong; Yoo, Yangmo

    2013-10-01

    Coded excitation can improve the SNR in medical ultrasound imaging. In coded excitation, pulse compression is applied to compress the elongated coded signals into a short pulse, which typically requires high computational complexity, i.e., a compression filter with a few hundred coefficients. In this paper, we propose an efficient pulse compression method for chirp-coded excitation, in which the pulse compression is conducted with complex baseband data after downsampling, to lower the computational complexity. In the proposed method, although compression is conducted with complex data, L-fold downsampling reduces both the data rate and the number of compression filter coefficients; thus, the total computational complexity is reduced to the order of 1/L(2). The proposed method was evaluated with simulation and phantom experiments. In the simulation and experimental results, the proposed pulse compression method produced axial resolution similar to that of the conventional pulse compression method, with negligible errors, i.e., > 36 dB in signal-to-error ratio (SER). These results indicate that the proposed method can maintain the performance of pulse compression of chirp-coded excitation while substantially reducing the computational complexity.
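
    The idea of compressing at baseband after L-fold downsampling can be sketched as follows (illustrative parameter values, not those of the paper; a real front end would low-pass filter before decimating):

```python
import numpy as np

# Transmit chirp parameters (illustrative): sample rate, center freq, bandwidth, duration.
fs, f0, bw, dur = 40e6, 5e6, 4e6, 10e-6
t = np.arange(int(fs * dur)) / fs
tx = np.exp(1j * (2*np.pi*(f0 - bw/2)*t + np.pi*(bw/dur)*t**2))  # analytic linear chirp

L = 4                                      # downsampling factor
bb = (tx * np.exp(-2j*np.pi*f0*t))[::L]    # complex baseband, L-fold decimated

# Matched-filter compression on the short baseband sequence: both the data
# rate and the filter length drop by L, so the multiply count falls ~1/L^2.
compressed = np.convolve(bb, np.conj(bb[::-1]))
```

    The compressed output peaks at zero lag with amplitude equal to the pulse energy, while using an L-times shorter filter at an L-times lower rate.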

  15. High Strain Lagrangian Hydrodynamics. A Three-Dimensional SPH Code for Dynamic Material Response

    NASA Astrophysics Data System (ADS)

    Libersky, Larry D.; Petschek, Albert G.; Carney, Theodore C.; Hipp, Jim R.; Allahdadi, Firooz A.

    1993-11-01

    MAGI, a three-dimensional shock and material response code which is based on smoothed particle hydrodynamics (SPH) is described. Calculations are presented and compared with experimental results. The SPH method is unique in that it employs no spatial mesh. The absence of a grid leads to some nice features such as the ability to handle large distortions in a pure Lagrangian frame and a natural treatment of voids. Both of these features are important in the tracking of debris clouds produced by hypervelocity impact—a difficult problem for which SPH seems ideally suited. We believe this is the first application of SPH to the dynamics of elastic-plastic solids.

  16. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, especially suited for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built from the lowest-frequency subband data. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best algorithms published in the literature, across diverse classes of medical images.

  17. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  18. Prediction of material strength and fracture of brittle materials using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Stellingwerf, R.F.

    1995-12-31

    The design of many devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics that are used in armor packages; glass that is used in windshields; and rock and concrete that are used in oil wells. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, they did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  19. A new multidimensional, energy-dependent two-moment transport code for neutrino-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Just, O.; Obergaulinger, M.; Janka, H.-T.

    2015-11-01

    We present the new code ALCAR developed to model multidimensional, multienergy-group neutrino transport in the context of supernovae and neutron-star mergers. The algorithm solves the evolution equations of the zeroth- and first-order angular moments of the specific intensity, supplemented by an algebraic relation for the second-moment tensor to close the system. The scheme takes into account frame-dependent effects of the order O(v/c) as well as the most important types of neutrino interactions. The transport scheme is significantly more efficient than a multidimensional solver of the Boltzmann equation, while it is more accurate and consistent than the flux-limited diffusion method. The finite-volume discretization of the essentially hyperbolic system of moment equations employs methods well-known from hydrodynamics. For the time integration of the potentially stiff moment equations we employ a scheme in which only the local source terms are treated implicitly, while the advection terms are kept explicit, thereby allowing for an efficient computational parallelization of the algorithm. We investigate various problem set-ups in one and two dimensions to verify the implementation and to test the quality of the algebraic closure scheme. In our most detailed test, we compare a fully dynamic, one-dimensional core-collapse simulation with two published calculations performed with well-known Boltzmann-type neutrino-hydrodynamics codes and we find very satisfactory agreement.
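
    The operator split described above — explicit advection, implicit local sources — can be illustrated on a toy advection-relaxation equation. Because the stiff source is purely local, the implicit update is algebraic and needs no matrix inversion (a sketch, not the ALCAR scheme):

```python
import numpy as np

def imex_step(u, dt, dx, a, kappa, u_eq):
    """One IMEX step for du/dt = -a du/dx - kappa*(u - u_eq), with a > 0.
    Advection: explicit first-order upwind (periodic grid).  Stiff relaxation
    source: implicit backward Euler -- purely local, so the 'inversion' is a
    scalar division per cell rather than a matrix solve."""
    u_star = u - dt * a * (u - np.roll(u, 1)) / dx            # explicit advection
    return (u_star + dt * kappa * u_eq) / (1.0 + dt * kappa)  # implicit source
```

    The time step is then limited only by the advection CFL condition, not by the (possibly huge) stiffness kappa — the same property that makes the moment equations parallelize efficiently.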

  20. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
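
    The interleaver in such a concatenated channel exists to spread the bursty errors typical of Viterbi decoding across many Reed-Solomon codewords. A minimal block interleaver sketch (illustrative only, not the flight implementation):

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write symbols row-wise into a rows x cols array,
    read them out column-wise.  A burst of up to `rows` consecutive channel
    errors then lands in `rows` different rows (i.e., different Reed-Solomon
    codewords) after deinterleaving."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse of interleave (swap the roles of rows and columns)."""
    return interleave(symbols, cols, rows)
```

    Each RS codeword then sees at most a few symbol errors from any one burst, which is what lets the outer RS code clean up after the inner Viterbi decoder.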

  1. Coupling Hydrodynamic and Wave Propagation Codes for Modeling of Seismic Waves recorded at the SPE Test.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.

    2016-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. For this, the SPE program includes a strong modeling effort based on first-principles calculations, with the challenge of capturing both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near-source region, we employ a fully coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time marching scheme of SPECFEM3D. We will present validation tests with Sharpe's model and comparisons of modeled waveforms with Rg waves (2-8 Hz) that were recorded up to 2 km away for SPE. We especially show effects of the local topography, velocity structure and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to purely elastic models such as Denny & Johnson (1991).

  2. MULTI2D - a computer code for two-dimensional radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.

    2009-06-01

    Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions, with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux in a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles, depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats both the optically thin and optically thick regimes correctly. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.
    Program summary
    Program title: MULTI2D
    Catalogue identifier: AECV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 151 098
    No. of bytes in distributed program, including test data, etc.: 889 622
    Distribution format: tar.gz
    Programming language: C
    Computer: PC (32 bits architecture)
    Operating system: Linux/Unix
    RAM: 2 Mbytes
    Word size: 32 bits
    Classification: 19.7
    External routines: X-window standard library (libX11.so) and corresponding header files (X11/*.h)

  3. Three-dimensional hydrodynamic Bondi-Hoyle accretion. 1: Code validation and stationary accretors

    NASA Technical Reports Server (NTRS)

    Ruffert, Maximilian

    1994-01-01

    We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. Totally absorbing stationary spheres of varying sizes (from 10.0 down to 0.02 Bondi radii) accrete matter from a homogeneous and slightly perturbed medium, which is taken to be an ideal gas (gamma = 5/3 or 1.2). To accommodate the long-range gravitational forces, the extent of the computational volume is typically a factor of 100 larger than the radius of the accretor. We compare the numerical mass accretion rates with the theoretical predictions of Bondi, to assess the validity of the code. The hydrodynamics is modeled by the piecewise parabolic method. No energy sources (nuclear burning) or sinks (radiation, conduction) are included. The resolution in the vicinity of the accretor is increased by multiply nesting several (6-8) grids around the stationary sphere, each finer grid being a factor of 2 smaller spatially than the next coarser grid. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid while at the same time evolving the gas on the coarser grids. The accretion rates derived numerically are in very good agreement (to about 10% over several orders of magnitude) with the values given by Bondi for a stationary accretor within a hydrodynamic medium. However, the equations have to be changed in order to include the finite size of the accretor (in some cases very large compared to the sonic point or even to the Bondi radius).
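
    The theoretical benchmark for such validations is the classical Bondi accretion rate, Mdot = 4πλ(GM)²ρ∞/c∞³. A small sketch of the formula (SI units; the default eigenvalue λ is an assumption to be checked against the adopted equation of state — 1/4 is the value commonly quoted for γ = 5/3):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def bondi_rate(m_acc, rho_inf, c_inf, lam=0.25):
    """Bondi mass accretion rate Mdot = 4*pi*lam*(G*M)^2 * rho_inf / c_inf^3.
    lam is the dimensionless eigenvalue; 0.25 is commonly quoted for
    gamma = 5/3 (assumption -- verify for the chosen equation of state)."""
    return 4.0 * math.pi * lam * (G * m_acc)**2 * rho_inf / c_inf**3
```

    Note the strong scalings, Mdot ∝ M² and ∝ c∞⁻³, which is why the comparison in the abstract spans several orders of magnitude.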

  4. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  5. A high order special relativistic hydrodynamic and magnetohydrodynamic code with space-time adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Zanotti, Olindo; Dumbser, Michael

    2015-03-01

    We present a high order one-step ADER-WENO finite volume scheme with space-time adaptive mesh refinement (AMR) for the solution of the special relativistic hydrodynamic and magnetohydrodynamic equations. By adopting a local discontinuous Galerkin predictor method, a high order one-step time discretization is obtained, with no need for Runge-Kutta sub-steps. This turns out to be particularly advantageous in combination with space-time adaptive mesh refinement, which has been implemented following a "cell-by-cell" approach. Like existing second order AMR methods, the present higher order AMR algorithm also features time-accurate local time stepping (LTS), where grids on different spatial refinement levels are allowed to use different time steps. We also compare two different Riemann solvers for the computation of the numerical fluxes at the cell interfaces. The new scheme has been validated on a sample of numerical test problems in one, two and three spatial dimensions, exploring its ability to resolve the propagation of relativistic hydrodynamical and magnetohydrodynamical waves in different physical regimes. The astrophysical relevance of the new code for the study of the Richtmyer-Meshkov instability is briefly discussed in view of future applications.

  6. Tidal disruptions by rotating black holes: relativistic hydrodynamics with Newtonian codes

    NASA Astrophysics Data System (ADS)

    Tejeda, Emilio; Gafton, Emanuel; Rosswog, Stephan; Miller, John C.

    2017-08-01

    We propose an approximate approach for studying the relativistic regime of stellar tidal disruptions by rotating massive black holes. It combines an exact relativistic description of the hydrodynamical evolution of a test fluid in a fixed curved space-time with a Newtonian treatment of the fluid's self-gravity. Explicit expressions for the equations of motion are derived for Kerr space-time using two different coordinate systems. We implement the new methodology within an existing Newtonian smoothed particle hydrodynamics code and show that including the additional physics involves very little extra computational cost. We carefully explore the validity of the novel approach by first testing its ability to recover geodesic motion, and then by comparing the outcome of tidal disruption simulations against previous relativistic studies. We further compare simulations in Boyer-Lindquist and Kerr-Schild coordinates and conclude that our approach allows accurate simulation even of tidal disruption events where the star penetrates deeply inside the tidal radius of a rotating black hole. Finally, we use the new method to study the effect of the black hole spin on the morphology and fallback rate of the debris streams resulting from tidal disruptions, finding that while the spin has little effect on the fallback rate, it does imprint heavily on the stream morphology, and can even be a determining factor in the survival or disruption of the star itself. Our methodology is discussed in detail as a reference for future astrophysical applications.

  7. Structure of the solar photosphere studied from the radiation hydrodynamics code ANTARES

    NASA Astrophysics Data System (ADS)

    Leitner, P.; Lemmerer, B.; Hanslmeier, A.; Zaqarashvili, T.; Veronig, A.; Grimm-Strele, H.; Muthsam, H. J.

    2017-09-01

    The ANTARES radiation hydrodynamics code is capable of simulating the solar granulation in detail unequaled by direct observation. We introduce a state-of-the-art numerical tool to the solar physics community and demonstrate its applicability to modelling the solar granulation. The code is based on the weighted essentially non-oscillatory finite volume method and, through its implementation of local mesh refinement, is also capable of simulating turbulent fluids. While the ANTARES code already provides promising insights into small-scale dynamical processes occurring in the quiet-Sun photosphere, it will soon be capable of modeling the latter in the scope of radiation magnetohydrodynamics. In this first preliminary study we focus on the vertical photospheric stratification by examining a 3-D model photosphere with an evolution time much larger than the dynamical timescales of the solar granulation and of particularly large horizontal extent, corresponding to 25''×25'' on the solar surface, to smooth out horizontal spatial inhomogeneities separately for up- and downflows. The highly resolved Cartesian grid thereby covers ˜4 Mm of the upper convection zone and the adjacent photosphere. Correlation analysis, both local and two-point, provides a suitable means to probe the photospheric structure and thereby to identify several layers of characteristic dynamics: the thermal convection zone is found to reach some ten kilometers above the solar surface, while convectively overshooting gas penetrates even higher into the low photosphere. An ≈145 km wide transition layer separates the convective from the oscillatory layers in the higher photosphere.

  8. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of the coded apertures must take saturation into account. Saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in image reconstruction of up to 10 dB compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
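
    The adaptive idea — lowering transmittance where the previous snapshot saturated, quantized to a few grayscale levels — might be sketched as follows (the function name, update rule and parameters are illustrative assumptions, not the UAGCA algorithm):

```python
import numpy as np

def update_aperture(T, prev_meas, sat_level, step=0.25, levels=5):
    """Lower the aperture transmittance wherever the previous snapshot
    saturated, then quantize to a small number of grayscale levels.
    Name, update rule and parameters are illustrative assumptions,
    not the UAGCA algorithm itself."""
    T = np.where(prev_meas >= sat_level, np.clip(T - step, 0.0, 1.0), T)
    return np.round(T * (levels - 1)) / (levels - 1)
```

    Iterating this between snapshots dims exactly the aperture entries that drove the sensor past its dynamic range, while the quantization keeps the mask realizable with few grayscale levels.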

  9. A channel differential EZW coding scheme for EEG data compression.

    PubMed

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

    In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.
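
    The clustering-plus-differential-encoding stage can be sketched as follows (a naive k-means and residual coder for illustration; in the actual scheme the representatives and residuals are then fed to the EZW coder):

```python
import numpy as np

def cluster_channels(x, k, iters=20, seed=0):
    """Naive k-means over channels (rows of x). Returns labels and centroids."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(x.shape[0], size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((x[:, None, :] - centers[None, :, :])**2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(0)
    return labels, centers

def encode_differential(x, labels, centers):
    """Per cluster: keep the channel closest to the centroid verbatim as the
    representative; encode every other channel as a residual w.r.t. it."""
    reps = {}
    for j in np.unique(labels):
        idx = np.where(labels == j)[0]
        reps[j] = idx[((x[idx] - centers[j])**2).sum(1).argmin()]
    resid = x - x[[reps[j] for j in labels]]
    for r in reps.values():
        resid[r] = x[r]            # representatives are stored as-is
    return resid, reps
```

    For strongly correlated channels the residuals carry far less energy than the raw signals, which is what the subsequent wavelet/zerotree stage exploits.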

  10. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  11. An Open-source Neutrino Radiation Hydrodynamics Code for Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    O'Connor, Evan

    2015-08-01

    We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino-matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.

  12. AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    O’Connor, Evan

    2015-08-15

    We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.

  13. Compression of digital mammograms with region-of-interest coding evaluated on a CAD system

    NASA Astrophysics Data System (ADS)

    Engan, Kjersti; Lillo, Martin R.; Gulsrud, Thor Ole

    2005-04-01

    Screening programs produce large amounts of mammographic data, and good compression schemes would be beneficial for both storage and transmission purposes. In medical data it is crucial that diagnostically important information is preserved. In this work we have implemented two different region-of-interest (ROI) coding methods together with a Set Partitioning in Hierarchical Trees (SPIHT) scheme to be used for compression of mammograms. Region-of-interest coding allows a region of the image to be compressed with higher fidelity than the rest of the image. This is useful in medical data to be able to compress a region containing a possible cancer area with very high fidelity while still achieving an overall good compression ratio. Both ROI methods, the basic SPIHT method, and the JPEG compression standard, the latter two without the possibility of ROI coding, are evaluated by studying the results from a Computer Aided Detection (CAD) system for microcalcifications tested on the original and the compressed mammograms. In addition, a visual inspection is performed, as well as Peak Signal-to-Noise Ratio (PSNR) calculations. Mammograms from the MIAS database are used. We show that mammograms can be compressed to less than 0.5 (0.3) bpp without any visual degradation and without significant influence on the performance of the CAD system.

  14. A coded aperture compressive imaging array and its visual detection and tracking algorithms for surveillance systems.

    PubMed

    Chen, Jing; Wang, Yongtian; Wu, Hanxiao

    2012-10-29

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, which operate directly on the compressive sampling images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and for foreground detection. For each moving target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l(1) optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l(1) tracker without any optimization.

  15. Property study of integer wavelet transform lossless compression coding based on lifting scheme

    NASA Astrophysics Data System (ADS)

    Xie, Cheng Jun; Yan, Su; Xiang, Yang

    2006-01-01

    In this paper the algorithms, and improvements thereof, for integer wavelet transform combined with SPIHT and arithmetic coding in lossless image compression are studied. The experimental results show that if the order of the low-pass filter's vanishing moments is fixed, the improvement in compression is not evident, provided the integer wavelet transform remains invertible and the energy-compaction property increases monotonically with transform scale. For the same wavelet bases, the order of the low-pass filter's vanishing moments is more important than that of the high-pass filter in improving image compression. Integer wavelet transform lossless compression coding based on the lifting scheme is unrelated to the entropy of the image; the compression effect depends on the energy-compaction property of the image transform.
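
    The lifting scheme underlying such integer wavelet transforms can be illustrated with the integer LeGall 5/3 transform: a predict step followed by an update step, both with floor rounding, which makes the transform exactly invertible in integer arithmetic despite the rounding. A minimal sketch (not the authors' code; edge handling by index clamping is a simplification of the symmetric extension used in practice):

```python
def lift_53_forward(x):
    """One level of the integer LeGall 5/3 lifting transform.

    Returns (approximation, detail) integer lists; exactly invertible.
    """
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd - floor((left_even + right_even) / 2)
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(len(odd))]
    # Update: approx = even + floor((left_d + right_d + 2) / 4)
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
         for i in range(len(even))]
    return s, d

def lift_53_inverse(s, d):
    """Exact inverse: undo the update, then the predict, then interleave."""
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(len(d))]
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x
```

    Because the inverse undoes each lifting step with the same rounded expression, the round trip is bit-exact for any integer input, which is precisely what lossless coding requires.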

  16. Application of P4 Polyphase codes pulse compression method to air-coupled ultrasonic testing systems.

    PubMed

    Li, Honggang; Zhou, Zhenggan

    2017-07-01

    Air-coupled ultrasonic testing systems are usually restricted by low signal-to-noise ratios (SNR). The use of pulse compression techniques based on P4 Polyphase codes can improve the ultrasound SNR. This type of code yields a higher peak-to-sidelobe (PSL) ratio and lower noise in the compressed signal. This paper proposes the use of P4 Polyphase sequences to code ultrasound in an NDT system based on an air-coupled piezoelectric transducer. Furthermore, the principle of selecting the parameters of the P4 Polyphase sequence to obtain the optimal pulse compression effect is also studied. Successful results are presented for a molded composite material. With a hybrid signal processing method, an improvement in SNR of up to 12.11 dB and in time-domain resolution of about 35% is achieved when compared with the conventional pulse compression technique. Copyright © 2017 Elsevier B.V. All rights reserved.
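
    A P4 polyphase code of length N has subpulse phases φ_n = πn²/N − πn, and pulse compression reduces to matched filtering against the code. A minimal sketch of code generation and compression (the code length here is illustrative, not a parameter from the paper):

```python
import numpy as np

def p4_code(N):
    """P4 polyphase code: complex subpulses with phase pi*n^2/N - pi*n."""
    n = np.arange(N)
    return np.exp(1j * (np.pi * n**2 / N - np.pi * n))

N = 64
tx = p4_code(N)
# Matched-filter pulse compression: correlate the echo with the code
# (np.correlate conjugates its second argument).
compressed = np.correlate(tx, tx, mode="full")
peak = np.abs(compressed).max()
sidelobes = np.abs(np.delete(compressed, N - 1))  # drop the main peak
```

    The zero-lag peak equals N (all subpulses add coherently), while the sidelobes stay far below it, which is the compression gain the abstract refers to.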

  17. GLS coding based security solution to JPEG with the structure of aggregated compression and encryption

    NASA Astrophysics Data System (ADS)

    Zhang, Yushu; Xiao, Di; Liu, Hong; Nan, Hai

    2014-05-01

    There exists a close relation among chaos, coding and cryptography. All three can be combined into a whole as aggregated chaos-based coding and cryptography (ATC) to compress and encrypt data simultaneously. Image data in particular exhibit high redundancy and are widely transmitted, so research on ATC for images is well worth pursuing and very helpful to real applications.

  18. Speech coding and compression using wavelets and lateral inhibitory networks

    NASA Astrophysics Data System (ADS)

    Ricart, Richard

    1990-12-01

    The purpose of this thesis is to introduce the concept of lateral inhibition as a generalized technique for compressing time/frequency representations of electromagnetic and acoustical signals, particularly speech. This requires at least a rudimentary treatment of the theory of frames (which generalizes most commonly known time/frequency distributions), the biology of hearing, and digital signal processing. As such, this material, along with the interrelationships of the disparate subjects, is presented in a tutorial style. This may leave the mathematician longing for more rigor, the neurophysiological psychologist longing for more substantive support of the hypotheses presented, and the engineer longing for a reprieve from the theoretical barrage. Despite the problems that arise when trying to appeal to too wide an audience, this thesis should be a cogent analysis of the compression of time/frequency distributions via lateral inhibitory networks.

  19. Joint source-channel coding: secured and progressive transmission of compressed medical images on the Internet.

    PubMed

    Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Coat, Véronique

    2008-06-01

    The joint source-channel coding system proposed in this paper has two aims: lossless compression with a progressive mode and the integrity of medical data, which takes into account the priorities of the image and the properties of a network with no guaranteed quality of service. In this context, the use of scalable coding, locally adapted resolution (LAR) and a discrete and exact Radon transform, known as the Mojette transform, meets this twofold requirement. In this paper, details of this joint coding implementation are provided as well as a performance evaluation with respect to the reference CALIC coding and to unequal error protection using Reed-Solomon codes.

  20. Pulse code modulation data compression for automated test equipment

    SciTech Connect

    Navickas, T.A.; Jones, S.G.

    1991-05-01

    Development of automated test equipment for an advanced telemetry system requires continuous monitoring of PCM data while exercising telemetry inputs. This requirement leads to a large amount of data that needs to be stored and later analyzed. For example, a data stream of 4 Mbits/s and a test time of thirty minutes would yield 900 Mbytes of raw data. Along with this raw data, information needs to be stored to correlate the raw data to the test stimulus, leading to a total of 1.8 Gbytes of data to be stored and analyzed. There is no method to analyze this amount of data in a reasonable time, so a data compression method is needed to reduce the amount of data collected to a reasonable amount. The solution to the problem was data reduction, accomplished by real-time limit checking, time stamping, and smart software. Limit checking was accomplished by an eight-state finite state machine and four compression algorithms. Time stamping was needed to correlate each stimulus to the appropriate output for data reconstruction. The software was written in the C programming language with a DOS extender used to allow it to run in extended mode. A 94-98% compression in the amount of data gathered was accomplished using this method. 1 fig.
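
    The limit-checking reduction described above can be sketched as storing a time-stamped record only when the limit-check state changes; runs of samples in the same state are dropped. The state machine and interface below are illustrative assumptions, not the authors' eight-state design:

```python
def compress_limit_check(samples, low, high):
    """Keep a (timestamp, value, state) record only when the limit-check
    state changes; repeated samples in the same state are dropped."""
    def state(v):
        return "LOW" if v < low else "HIGH" if v > high else "OK"

    records, prev = [], None
    for t, v in enumerate(samples):
        s = state(v)
        if s != prev:                 # state transition: worth storing
            records.append((t, v, s))
            prev = s
    return records
```

    For example, `compress_limit_check([1, 2, 3, 9, 9, 2], low=0, high=5)` stores only three of the six samples: `[(0, 1, 'OK'), (3, 9, 'HIGH'), (5, 2, 'OK')]`, and the time stamps allow the stimulus/response correlation to be reconstructed.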

  1. FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations

    SciTech Connect

    Ding, Jianmin; Lyczkowski, R.W.; Burge, S.W.

    1993-02-01

    A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B&W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient and steady-state simulations. This Cartesian-coordinates computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description of the quality control to assess the accuracy of the new code and the validation using some of the measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC) are given. It is envisioned that one day FORCE2, with additional modules such as radiation heat transfer, combustion kinetics and multi-solids, together with user-friendly pre- and post-processor software, and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.

  3. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable-length code based on run-length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated, with results obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
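
    The significance-map stage described above can be sketched as follows: threshold the coefficients, then run-length encode the resulting binary map while keeping the significant values separately. The function names and the tuple-based run representation are our own simplifications, not the authors' bitstream format:

```python
import numpy as np

def encode_coeffs(coeffs, threshold):
    """Split coefficients into a run-length-encoded binary significance
    map plus the list of significant values."""
    sig = np.abs(coeffs) >= threshold
    significant = coeffs[sig]
    # Run-length encode the 0/1 map as (bit, run_length) pairs.
    runs, count, current = [], 1, bool(sig[0])
    for b in sig[1:]:
        if bool(b) == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = bool(b), 1
    runs.append((current, count))
    return runs, significant

def decode_map(runs):
    """Rebuild the binary significance map from its run-length encoding."""
    bits = []
    for bit, length in runs:
        bits.extend([bit] * length)
    return np.array(bits, dtype=bool)
```

    The decoder expands the runs back into the map and then re-inserts the significant values at the flagged positions; long insignificant stretches collapse to a single (0, run) pair, which is where the compression comes from.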

  4. Image compression with embedded wavelet coding via vector quantization

    NASA Astrophysics Data System (ADS)

    Katsavounidis, Ioannis; Kuo, C.-C. Jay

    1995-09-01

    In this research, we improve Shapiro's EZW algorithm by performing vector quantization (VQ) of the wavelet transform coefficients. The proposed VQ scheme uses different vector dimensions for different wavelet subbands and also different codebook sizes, so that more bits are assigned to those subbands that have more energy. Another feature is that the vector codebooks used are tree-structured to maintain the embedding property. Finally, the energy of these vectors is used as a prediction parameter between different scales to improve the performance. We investigate the performance of the proposed method together with the 7-9 tap bi-orthogonal wavelet basis, and look into ways to incorporate lossless compression techniques.

  5. One-Dimensional Lagrangian Code for Plasma Hydrodynamic Analysis of a Fusion Pellet Driven by Ion Beams.

    SciTech Connect

    1986-12-01

    Version 00 The MEDUSA-IB code performs implosion and thermonuclear burn calculations of an ion-beam-driven ICF target, based on one-dimensional plasma hydrodynamics and transport theory. It can calculate the following quantities in spherical geometry through the progress of implosion and fuel burnup of a multi-layered target: (1) hydrodynamic velocities, density, ion, electron and radiation temperatures, radiation energy density, ρR and burn rate of the target as a function of coordinates and time; (2) fusion gain as a function of time; (3) ionization degree; (4) temperature-dependent ion beam energy deposition; (5) radiation, α-particle and neutron spectra as a function of time.

  6. User manual for INVICE 0.1-beta : a computer code for inverse analysis of isentropic compression experiments.

    SciTech Connect

    Davis, Jean-Paul

    2005-03-01

    INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.

  7. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.

  8. Hydrodynamic Mixing of Ablator Material into the Compressed Fuel and Hot Spot of Direct-Drive DT Cryogenic Implosions

    NASA Astrophysics Data System (ADS)

    Regan, S. P.; Goncharov, V. N.; Epstein, R.; Betti, R.; Bonino, M. J.; Cao, D.; Collins, T. J. B.; Campbell, E. M.; Forrest, C. J.; Glebov, V. Yu.; Harding, D. R.; Marozas, J. A.; Marshall, F. J.; McKenty, P. W.; Sangster, T. C.; Stoeckl, C.; Luo, R. W.; Schoff, M. E.; Farrell, M.

    2016-10-01

    Hydrodynamic mixing of ablator material into the compressed fuel and hot spot of direct-drive DT cryogenic implosions is diagnosed using time-integrated, spatially resolved x-ray spectroscopy. The laser drive ablates most of the 8-μm-thick CH ablator, which is doped with trace amounts of Ge (0.5 at. %) and surrounds the cryogenic DT layer. A small fraction of the ablator material is mixed into the compressed shell and the hot spot by the ablation-front Rayleigh-Taylor hydrodynamic instability seeded by laser imprint, the target mounting stalk, and surface debris. The amount of mix mass inferred from spectroscopic analysis of the Ge K-shell emission will be presented. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Non-US data compression and coding research. FASAC Technical Assessment Report

    SciTech Connect

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  10. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

    X-ray binaries and AGNs are powered by accretion discs around compact objects, where the x-rays are emitted from the inner regions and uv emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the x-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. Although these fluctuations arise in the outer parts of the disc, they propagate inwards to give rise to x-ray variability, and hence provide a natural connection between the x-ray and uv variability. Analytical expressions exist to qualitatively understand the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code incorporating all these effects, which considers gas-pressure-dominated solutions and stochastic fluctuations with the inclusion of the boundary effect of the last stable orbit.

  11. CAESCAP: A computer code for compressed-air energy-storage-plant cycle analysis

    NASA Astrophysics Data System (ADS)

    Fort, J. A.

    1982-10-01

    The analysis code CAESCAP was developed as an aid in comparing and evaluating proposed compressed-air energy storage (CAES) cycles. Input consists of component parameters and working fluid conditions at points along a cycle. The code calculates thermodynamic properties at each point and then calculates overall cycle performance. Working fluid capabilities include steam, air, nitrogen, and parahydrogen. The CAESCAP code was used to analyze a variety of CAES cycles. The combination of straightforward input and flexible design makes the code easy and inexpensive to use.

  12. Barker code pulse compression with a large Doppler tolerance

    NASA Astrophysics Data System (ADS)

    Jiang, Xuefeng; Zhu, Zhaoda

    1991-03-01

    This paper discusses the application of least-squares approximate inverse filtering techniques to radar range sidelobe suppression. The method is illustrated by application to the design of a compensated noncoherent sidelobe suppression filter (SSF). The compensated noncoherent SSF for the 13-element Barker code has been found. A -40 kHz to 40 kHz Doppler tolerance of the filter is obtained under the conditions that the subpulse duration is equal to 0.7 microsec and the peak sidelobe level is less than -30 dB. Theoretical computations and experimental results indicate that the SSF implemented has a much wider Doppler tolerance than the Rihaczek-Golden (1971) SSF.
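
    The 13-element Barker code at the heart of this filter design has the defining property that its autocorrelation (matched-filter) sidelobes never exceed 1 in magnitude against a main-lobe peak of 13, i.e. about -22.3 dB; sidelobe suppression filters aim to push these residual sidelobes lower still. A quick check:

```python
import numpy as np

# The 13-element Barker code: + + + + + - - + + - + - +
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Matched-filter (autocorrelation) pulse compression.
acf = np.correlate(barker13, barker13, mode="full")
peak = acf[len(barker13) - 1]              # main-lobe peak at zero lag
sidelobes = np.abs(np.delete(acf, len(barker13) - 1))
```
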

  13. Numerical Simulation of Supersonic Compression Corners and Hypersonic Inlet Flows Using the RPLUS2D Code

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code at hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and the turbulent boundary layer. The computational results also compared well with the experimental results for a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.

  14. Lossless Compression of Chemical Fingerprints Using Integer Entropy Codes Improves Storage and Retrieval

    PubMed Central

    Baldi, Pierre; Benz, Ryan W.

    2008-01-01

    Many modern chemoinformatics systems for small molecules rely on large fingerprint vector representations, where the components of the vector record the presence or number of occurrences in the molecular graphs of particular combinatorial features, such as labeled paths or labeled trees. These large fingerprint vectors are often compressed to much shorter fingerprint vectors using a lossy compression scheme based on a simple modulo procedure. Here we combine statistical models of fingerprints with integer entropy codes, such as Golomb and Elias codes, to encode the indices or the run-lengths of the fingerprints. After reordering the fingerprint components by decreasing frequency order, the indices are monotone increasing and the run-lengths are quasi-monotone increasing, and both exhibit power-law distribution trends. We take advantage of these statistical properties to derive new efficient, lossless compression algorithms for monotone integer sequences: Monotone Value (MOV) Coding and Monotone Length (MOL) Coding. In contrast with lossy systems that use 1,024 or more bits of storage per molecule, we can achieve lossless compression of long chemical fingerprints based on circular substructures in slightly over 300 bits per molecule, close to the Shannon entropy limit, using a MOL Elias Gamma code for run-lengths. The improvement in storage comes at a modest computational cost. Furthermore, because the compression is lossless, uncompressed similarity (e.g. Tanimoto) between molecules can be computed exactly from their compressed representations, leading to significant improvements in retrieval performance, as shown on six benchmark datasets of drug-like molecules. PMID:17967006
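
    An Elias gamma code, one of the integer entropy codes mentioned above, encodes a positive integer n as its binary representation preceded by one zero bit for each bit after the first, so small integers get short codewords. Applied to the gaps between monotone increasing fingerprint indices, this matches the power-law statistics of the data. A minimal sketch (the gap-encoding wrapper is illustrative; the paper's MOV/MOL codes are more elaborate):

```python
def elias_gamma_encode(n):
    """Elias gamma codeword for a positive integer n, as a bit string."""
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits):
    """Decode a concatenated stream of Elias gamma codewords."""
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":   # count the zero prefix
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return out

# Fingerprint indices are monotone increasing, so encode the gaps
# (first-order differences), which are small and power-law distributed.
indices = [3, 7, 8, 15, 40]
gaps = [indices[0]] + [b - a for a, b in zip(indices, indices[1:])]
stream = "".join(elias_gamma_encode(g) for g in gaps)
```

    The code is prefix-free, so the concatenated stream decodes unambiguously, and the original indices are recovered as the cumulative sums of the decoded gaps.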

  15. Implementation of a simple model for linear and nonlinear mixing at unstable fluid interfaces in hydrodynamics codes

    SciTech Connect

    Ramshaw, J D

    2000-10-01

    A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.

  16. Contribution to the nonlinear theory of sound and hydrodynamic turbulence of a compressible liquid

    NASA Astrophysics Data System (ADS)

    L'vov, Victor S.; Mikhailov, Alexandr V.

    1981-01-01

    The interaction of sound with hydrodynamic turbulence has been studied in detail. The sound absorption decrement, the correlation time and length and the frequency diffusion coefficient for the acoustic wave packet are calculated. The spectral composition of the sound radiated by a unit turbulent volume and the spectral energy density of sound in equilibrium with the turbulence are studied. The region of applicability of the kinetic equation for sound with a linear dispersion law is found.

  17. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on the wavelet transform and region-of-interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are deemed important and kept. For the remaining coefficients, the energy loss in the transform domain is calculated according to the target PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined according to this energy loss. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained.

  18. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission via a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, leading to low/high-frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal code length in terms of the average number of bits per sample. At the receiver end, under the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
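
    The Huffman stage can be sketched with the classic heap-based construction: repeatedly merge the two least-frequent subtrees, prepending a 0 bit to the codes on one side and a 1 bit to the other. This is the textbook algorithm, not the authors' implementation:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bit string} from a sequence."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in
            enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)                   # unique key so dicts never compare
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

samples = [0, 0, 0, 0, 1, 1, 2, 3]         # e.g. quantized coefficients
table = huffman_codes(samples)
encoded = "".join(table[s] for s in samples)
```

    With frequencies 4, 2, 1, 1 the code lengths come out as 1, 2, 3, 3 bits, so the eight samples cost 14 bits instead of 16 with a fixed 2-bit code; the gain grows as the symbol distribution becomes more skewed.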

  19. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    PubMed

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines the JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.

  20. Hierarchical prediction and context adaptive coding for lossless color image compression.

    PubMed

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
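
    One concrete example of a reversible color transform is the integer RCT used in lossless JPEG2000; the abstract does not say which transform the authors chose, but this one illustrates the exact integer invertibility such a decorrelation step requires:

```python
def rct_forward(r, g, b):
    """Reversible color transform (as in lossless JPEG2000):
    integer-to-integer luma/chroma decorrelation, exactly invertible."""
    y = (r + 2 * g + b) >> 2   # floor((R + 2G + B) / 4)
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    """Exact inverse: recover G first, then R and B from the chroma."""
    g = y - ((cb + cr) >> 2)   # floor division also holds for negatives
    r = cr + g
    b = cb + g
    return r, g, b
```

    Because the floor term discarded in the forward transform is recomputed identically in the inverse, the round trip is bit-exact, unlike a floating-point YCbCr conversion.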

  1. Data compression in wireless sensors network using MDCT and embedded harmonic coding.

    PubMed

    Alsalaet, Jaafar K; Ali, Abduladhem A

    2015-05-01

    One of the major applications of wireless sensor networks (WSNs) is vibration measurement for the purpose of structural health monitoring and machinery fault diagnosis. WSNs have many advantages over wired networks, such as low cost and reduced setup time. However, the useful bandwidth is limited compared to wired networks, resulting in relatively low sampling rates. One solution to this problem is data compression, which, in addition to enhancing the sampling rate, saves valuable power in the wireless nodes. In this work, a data compression scheme based on the Modified Discrete Cosine Transform (MDCT) followed by Embedded Harmonic Components Coding (EHCC) is proposed to compress vibration signals. The EHCC is applied to exploit the harmonic redundancy present in most vibration signals, resulting in an improved compression ratio. The scheme is made suitable for the tiny hardware of wireless nodes and is shown to be fast and effective. The efficiency of the proposed scheme is investigated by conducting several experimental tests.
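
    The MDCT maps overlapping blocks of 2N samples to N coefficients; with a window satisfying the Princen-Bradley condition, the time-domain aliasing cancels when the inverse transforms are overlap-added. A minimal matrix-based sketch (block size and the sine window are illustrative choices, not parameters from the paper):

```python
import numpy as np

def mdct_pair(N):
    """Forward/inverse MDCT matrices for 2N-sample blocks with a sine
    window, which satisfies the Princen-Bradley (TDAC) condition."""
    n = np.arange(2 * N)
    k = np.arange(N)
    w = np.sin(np.pi / (2 * N) * (n + 0.5))
    C = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    fwd = C * w[None, :]                   # X = fwd @ x_block
    inv = (2.0 / N) * (C * w[None, :]).T   # y_block = inv @ X
    return fwd, inv

N = 32
fwd, inv = mdct_pair(N)
rng = np.random.default_rng(1)
x = rng.normal(size=6 * N)

# Analyze 50%-overlapped blocks, then overlap-add the inverse transforms.
y = np.zeros_like(x)
for start in range(0, len(x) - 2 * N + 1, N):
    block = x[start:start + 2 * N]
    y[start:start + 2 * N] += inv @ (fwd @ block)
```

    Each block alone reconstructs with aliasing, but every interior sample is covered by two half-overlapped blocks whose aliasing terms are equal and opposite, so the overlap-add recovers the interior of the signal exactly (the first and last N samples lack a partner block).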

  2. Novel lossless FMRI image compression based on motion compensation and customized entropy coding.

    PubMed

    Sanchez, Victor; Nasiopoulos, Panos; Abugharbieh, Rafeef

    2009-07-01

    We recently proposed a method for lossless compression of 4-D medical images based on the advanced video coding standard (H.264/AVC). In this paper, we present two major contributions that enhance our previous work for compression of functional MRI (fMRI) data: 1) a new multiframe motion compensation process that employs 4-D search, variable-size block matching, and bidirectional prediction; and 2) a new context-based adaptive binary arithmetic coder designed for lossless compression of the residual and motion vector data. We validate our method on real fMRI sequences of various resolutions and compare the performance to two state-of-the-art methods: 4D-JPEG2000 and H.264/AVC. Quantitative results demonstrate that our proposed technique significantly outperforms current state of the art with an average compression ratio improvement of 13%.

  3. Group-complementary code sets for implementing pulse compression with desirable range resolution properties

    NASA Astrophysics Data System (ADS)

    Weathers, G.; Holliday, E. M.

    This paper describes the structure and properties of a waveform design technique intended to provide desirable range resolution properties in radar sensor systems. The waveform design, called group-complementary coding, consists of groups of binary sequences which can be used for bi-phase coding of a radar carrier pulsed waveform. When pulse compression processing is extended to include the composite of a number of pulses through coherent integration, then group-complementary coding provides the often desirable property of complete range sidelobe cancellation (for zero Doppler shift).
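    The sidelobe-cancellation property can be demonstrated with a classic Golay complementary pair (a minimal instance of a complementary set; the recursive construction below is standard, and the length is arbitrary): the sum of the two aperiodic autocorrelations is an ideal delta, which is what coherent integration over the group exploits at zero Doppler.

```python
import numpy as np

# Base Golay complementary pair.
a = np.array([1, 1])
b = np.array([1, -1])

# Recursive construction: if (a, b) is complementary, so is (a|b, a|-b).
for _ in range(5):                  # grow to length 64
    a, b = np.concatenate([a, b]), np.concatenate([a, -b])

N = len(a)
# Sum of the aperiodic autocorrelations: 2N at zero lag, zero elsewhere.
acf_sum = np.correlate(a, a, "full") + np.correlate(b, b, "full")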

  4. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure for solving the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two, and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job as the resolution is increased. Second, the GRH equations are tested using two different test problems: geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.
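    The flux/source coupling idea can be sketched on a toy system. Strang splitting advances du/dt = A u + B u as a half step of B, a full step of A, then another half step of B, and is second-order accurate even when the operators do not commute. Here A generates a rotation (a stand-in for the flux part) and B a non-uniform damping (source part); both sub-steps use their exact solutions, and the operators are chosen so they do not commute:

```python
import numpy as np

def rotate(u, w, dt):
    # Exact solution of du/dt = [[0, -w], [w, 0]] u over dt.
    c, s = np.cos(w * dt), np.sin(w * dt)
    return np.array([c * u[0] - s * u[1], s * u[0] + c * u[1]])

def damp(u, k, dt):
    # Exact solution of du/dt = -diag(k) u over dt.
    return u * np.exp(-k * dt)

def strang(u, w, k, dt, steps):
    for _ in range(steps):
        u = damp(u, k, dt / 2)      # half step of the "source"
        u = rotate(u, w, dt)        # full step of the "flux"
        u = damp(u, k, dt / 2)      # half step of the "source"
    return u

u0 = np.array([1.0, 0.0])
w, k = 1.0, np.array([0.3, 0.7])    # k1 != k2, so the operators do not commute
T = 1.0

# Richardson-style estimate of the observed order from three step sizes.
u1 = strang(u0, w, k, T / 10, 10)
u2 = strang(u0, w, k, T / 20, 20)
u3 = strang(u0, w, k, T / 40, 40)
order = np.log2(np.linalg.norm(u1 - u2) / np.linalg.norm(u2 - u3))
```

The estimated order approaches 2, consistent with the second-order accuracy in time claimed for the split scheme.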

  5. Compression performance of HEVC and its format range and screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Li, Bin; Xu, Jizheng; Sullivan, Gary J.

    2015-09-01

    This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.

  6. A lossless compression method based on mix coding and IWT for MODIS image

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Guo, Shuxu; Gu, Lingjia; Wang, Lang; Wang, Xu

    2009-08-01

    In order to effectively store and transmit MODIS multispectral data, a lossless compression method based on mix coding and the integer wavelet transform (IWT) is proposed in this paper. First, the algorithm computes the correlation coefficients between the spectral bands in MODIS data. Using a proper coefficient threshold, the original bands are divided into two groups: one group uses a spectral prediction method and then compresses the residual error, while the other group is directly compressed by a standard compressor. For the spectral prediction group, the band having the greatest correlation with the previous band is found from the correlation coefficients, and the optimal spectral prediction sequence is obtained by band reordering. The predicted band data can be computed from the previous band data and an optimal linear predictor, so the spectral redundancy can be eliminated by spectral prediction. To further reduce the residual differences, a block optimal linear predictor is designed in this paper. Next, except for the first band of the spectral prediction sequence, the residual errors of the other bands are encoded by IWT and SPIHT. The directly compressed bands and the first band of the spectral prediction sequence are compressed by JPEG2000. Finally, the coefficients of the block optimal linear predictor and other side information are encoded by adaptive arithmetic coding. The experimental results show that the proposed method is efficient and practical for MODIS data.
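    The grouping and prediction steps can be sketched as follows. The band data, the correlation threshold, and the single global (rather than block-wise) least-squares predictor are all illustrative stand-ins for the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(0, 50, 4096)

band_prev = 100 + base                                   # reference band
band_corr = 80 + 0.9 * base + rng.normal(0, 3, 4096)     # highly correlated band
band_uncorr = rng.normal(0, 50, 4096)                    # unrelated band

THRESH = 0.8                                             # hypothetical threshold

def choose_group(prev, cur):
    # Spectral-prediction group vs. direct-compression group.
    r = np.corrcoef(prev, cur)[0, 1]
    return "predict" if abs(r) > THRESH else "direct"

def residual(prev, cur):
    # Least-squares linear predictor: cur ~ alpha * prev + beta.
    A = np.vstack([prev, np.ones_like(prev)]).T
    (alpha, beta), *_ = np.linalg.lstsq(A, cur, rcond=None)
    return cur - (alpha * prev + beta)

r = residual(band_prev, band_corr)
```

For the correlated band the prediction residual carries far less energy than the band itself, which is exactly the redundancy the spectral-prediction group removes before IWT/SPIHT coding.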

  7. APSARA: A multi-dimensional unsplit fourth-order explicit Eulerian hydrodynamics code for arbitrary curvilinear grids

    NASA Astrophysics Data System (ADS)

    Wongwathanarat, A.; Grimm-Strele, H.; Müller, E.

    2016-10-01

    We present a new fourth-order, finite-volume hydrodynamics code named Apsara. The code employs a high-order, finite-volume method for mapped coordinates with extensions for nonlinear hyperbolic conservation laws. Apsara can handle arbitrary structured curvilinear meshes in three spatial dimensions. The code has successfully passed several hydrodynamic test problems, including the advection of a Gaussian density profile, the advection of a nonlinear vortex, and the propagation of linear acoustic waves. For these test problems, Apsara produces fourth-order accurate results in the case of smooth grid mappings; the order of accuracy drops to first order when the nonsmooth circular grid mapping is used. When applying the high-order method to simulations of low-Mach-number flows, for example the Gresho vortex and the Taylor-Green vortex, we find that Apsara delivers results superior to those of codes based on the dimensionally split piecewise parabolic method (PPM) widely used in astrophysics. Hence, Apsara is a suitable tool for simulating highly subsonic flows in astrophysics. In a first astrophysical application, we perform implicit large eddy simulations (ILES) of anisotropic turbulence in the context of core-collapse supernovae (CCSN) and obtain results similar to those previously reported.
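    A minimal version of the kind of convergence test used to verify a fourth-order scheme: apply a fourth-order operator to smooth data on two grids and estimate the observed order p = log(e1/e2)/log(h1/h2). The operator here is a generic 4th-order central difference, not Apsara's finite-volume discretization:

```python
import numpy as np

def deriv4(f, x, h):
    # Standard 4th-order central difference approximation of f'(x).
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

x = np.linspace(0.0, 1.0, 17)
e1 = np.max(np.abs(deriv4(np.sin, x, 0.1) - np.cos(x)))    # coarse grid error
e2 = np.max(np.abs(deriv4(np.sin, x, 0.05) - np.cos(x)))   # fine grid error
order = np.log(e1 / e2) / np.log(0.1 / 0.05)
```

For smooth data the observed order is close to 4; on a nonsmooth mapping the same measurement would reveal the drop to first order reported above.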

  8. Combining node-centered parallel radiation transport and higher-order multi-material cell-centered hydrodynamics methods in three-temperature radiation hydrodynamics code TRHD

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2016-06-01

    Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015), developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in the popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme, in principle, can handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI) for parallel computation. Presently, for multi-material HD, we use a simple and robust closure model in which common strain rates for all materials in a mixed cell are assumed. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition, the electron and radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies must be computed self-consistently. This has been achieved by using a node-centered symmetric-semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite volume scheme (NLFV) suitable for unstructured meshes. In this paper, we describe the details of the 2D, 3T, non-equilibrium, multi-material RHD code, with special attention to the coupling of the various cell-centered and node-centered formulations, along with a suite of validation test problems to demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, in order to demonstrate the full capability of the code implementation, we present the simulation of laser-driven shock propagation in a layered thin foil.
The simulation results are found to be in good
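    The electron-ion thermal relaxation that a semi-implicit integrator must handle can be sketched on a two-temperature toy model (heat capacities, coupling constant, and time step are made up; this is not TRHD's SSI scheme itself). A backward-Euler step is stable even for time steps far larger than the relaxation time and conserves the total energy Ce*Te + Ci*Ti exactly:

```python
import numpy as np

# Two-temperature relaxation:
#   Ce dTe/dt = kappa (Ti - Te),   Ci dTi/dt = kappa (Te - Ti).
Ce, Ci, kappa = 1.0, 2.0, 5.0
Te, Ti = 10.0, 1.0
dt = 0.5                                 # much larger than 1/kappa

E0 = Ce * Te + Ci * Ti                   # conserved total energy
for _ in range(40):
    # Backward-Euler (semi-implicit) step: solve the 2x2 linear system.
    A = np.array([[Ce + kappa * dt, -kappa * dt],
                  [-kappa * dt, Ci + kappa * dt]])
    rhs = np.array([Ce * Te, Ci * Ti])
    Te, Ti = np.linalg.solve(A, rhs)

T_eq = E0 / (Ce + Ci)                    # common equilibrium temperature
```

Adding the two implicit update equations shows that Ce*Te + Ci*Ti is unchanged every step, so the scheme equilibrates both temperatures without creating or destroying energy.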

  9. Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.

    PubMed

    Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre

    2008-12-01

    Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.

  10. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    PubMed

    Kim, Dong-Sun; Kwon, Jin-San

    2014-09-18

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 Audio Lossless Coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of the residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel based biosignal lossless data compressor.
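    The joint-coding decision can be illustrated with a toy estimate: when two channels share content, coding one channel plus the inter-channel residual is cheaper than coding both independently. The signals, the fixed mixing factor, and the 0th-order entropy stand-in for a real entropy coder are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

shared = rng.normal(0, 200, 8192)             # content common to both channels
ch1 = shared + rng.normal(0, 5, 8192)
ch2 = 0.95 * shared + rng.normal(0, 5, 8192)

def entropy_bits(x):
    # Empirical 0th-order entropy (bits/sample) of an integer-quantized signal.
    _, counts = np.unique(np.round(x).astype(int), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

independent = entropy_bits(ch1) + entropy_bits(ch2)
joint = entropy_bits(ch1) + entropy_bits(ch2 - 0.95 * ch1)  # reference + residual

use_joint = joint < independent               # the coder's decision
```

The residual channel has a far narrower distribution than the raw channel, so the joint mode wins whenever the cross-correlation is high, which is the relationship the paper's decision method exploits.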

  11. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    NASA Astrophysics Data System (ADS)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. 
JPEG2000 appears vulnerable to bit errors in a noisy channel of

  12. Performance evaluation of the intra compression in the video coding standards

    NASA Astrophysics Data System (ADS)

    Abramowski, Andrzej

    2015-09-01

    The article presents a comparison of the Intra prediction algorithms in current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of the techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using the BD-PSNR and BD-RATE metrics, with the H.265/HEVC results as an anchor. Tests are performed on a set of video sequences composed of sequences gathered by the Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by the Ultra Video Group. According to the results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between efficiency and required encoding time.
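    The BD-RATE metric used here (and in the HEVC comparison above) can be sketched as follows: fit log-rate as a cubic in PSNR for both codecs, integrate the gap over the common PSNR range, and convert back to an average bit-rate difference in percent. The rate-distortion points below are invented to exercise the computation:

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    # Cubic fits of log-rate vs. PSNR for both codecs.
    la = np.polyfit(psnr_anchor, np.log(rates_anchor), 3)
    lt = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(la), np.polyint(lt)
    # Average log-rate gap over the overlapping PSNR interval.
    avg_diff = ((np.polyval(it, hi) - np.polyval(it, lo))
                - (np.polyval(ia, hi) - np.polyval(ia, lo))) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0    # percent rate change

psnr = np.array([32.0, 35.0, 38.0, 41.0])
anchor_rates = np.array([1000.0, 2000.0, 4200.0, 9000.0])  # kbit/s, made up
test_rates = 0.9 * anchor_rates   # hypothetical codec: 10% cheaper everywhere

bd = bd_rate(anchor_rates, psnr, test_rates, psnr)
```

A codec that needs 10% less rate at every quality level yields a BD-RATE of -10%, which is the sanity check the sketch reproduces.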

  13. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    PubMed

    Boulgouris, N V; Tzovaras, D; Strintzis, M G

    2001-01-01

    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images, using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
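    The lossless property that lifting schemes rely on can be shown with a minimal 1-D integer lifting step in the LeGall 5/3 style (a generic textbook example with clamped boundaries, not the paper's optimized predictors): a predict step on the odd samples and an update step on the even samples, both with floor rounding, are sequentially invertible, so the reverse steps reconstruct the input bit-exactly.

```python
import numpy as np

def forward_53(x):
    even, odd = x[0::2].copy(), x[1::2].copy()
    even_r = np.append(even[1:], even[-1])      # clamped right neighbour
    d = odd - (even + even_r) // 2              # predict step (detail)
    d_l = np.concatenate(([d[0]], d[:-1]))      # clamped left neighbour
    s = even + (d_l + d + 2) // 4               # update step (smooth)
    return s, d

def inverse_53(s, d):
    d_l = np.concatenate(([d[0]], d[:-1]))
    even = s - (d_l + d + 2) // 4               # undo update
    even_r = np.append(even[1:], even[-1])
    odd = d + (even + even_r) // 2              # undo predict
    x = np.empty(2 * len(s), dtype=s.dtype)
    x[0::2], x[1::2] = even, odd
    return x

rng = np.random.default_rng(3)
x = rng.integers(0, 256, 128)
s, d = forward_53(x)
x_rec = inverse_53(s, d)
```

Because the predict step reads only even samples and the update step reads only details, each step can be undone exactly in integer arithmetic regardless of the rounding, which is what makes the transform lossless.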

  14. Inferential multi-spectral image compression based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang

    2008-08-01

    Based on an analysis of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences, so the relative shift between two images is detected by a block-matching algorithm at the encoder. The algorithm estimates the rate of each bitplane from the estimated side-information frame, and then adopts an ROI coding algorithm in which a rate-distortion lifting procedure is carried out in the rate-allocation stage. Using this algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in the paper obtains up to a 3 dB gain compared with JPEG2000 and significantly reduces complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.

  15. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with holographic techniques, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.
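    The core DRPE operation can be sketched numerically: multiply the image by a random phase mask, Fourier transform, multiply by a second mask, and inverse transform; decryption applies the conjugate keys in reverse order. The compression/sparse-phase and QR verification stages of the paper are omitted, and the image is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)

img = rng.random((64, 64))                        # stand-in input image
phi1 = np.exp(2j * np.pi * rng.random((64, 64)))  # input-plane mask (key 1)
phi2 = np.exp(2j * np.pi * rng.random((64, 64)))  # Fourier-plane mask (key 2)

# Encryption: mask, transform, mask again, inverse transform.
encrypted = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

# Decryption with the conjugate keys, in reverse order.
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2)) * np.conj(phi1)
decoded = np.abs(decrypted)
```

The encrypted field is noise-like (stationary white statistics), yet the conjugate keys recover the input essentially exactly, which is the property the authentication stage builds on.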

  16. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.

  17. On multigrid solution of the implicit equations of hydrodynamics. Experiments for the compressible Euler equations in general coordinates

    NASA Astrophysics Data System (ADS)

    Kifonidis, K.; Müller, E.

    2012-08-01

    Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations are presented to evaluate the convergence behavior and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes). Beyond a

  18. Recent Hydrodynamics Improvements to the RELAP5-3D Code

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2009-07-01

    The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer model, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.

  19. Comparison of Particle Flow Code and Smoothed Particle Hydrodynamics Modelling of Landslide Run outs

    NASA Astrophysics Data System (ADS)

    Preh, A.; Poisel, R.; Hungr, O.

    2009-04-01

    In most continuum mechanics methods modelling the run out of landslides, the moving mass is divided into a number of elements, the velocities of which can be established by numerical integration of Newton's second law (Lagrangian solution). The methods are based on fluid mechanics, modelling the movements of an equivalent fluid. In 2004, McDougall and Hungr presented a three-dimensional numerical model for rapid landslides, e.g. debris flows and rock avalanches, called DAN3D. The method is based on the previous work of Hungr (1995) and uses an integrated two-dimensional Lagrangian solution and the meshless Smoothed Particle Hydrodynamics (SPH) principle to maintain continuity. DAN3D has an open rheological kernel, allowing the use of frictional (with constant pore-pressure ratio) and Voellmy rheologies, and gives the possibility to change the material rheology along the path. Discontinuum (granular) mechanics methods model the run out mass as an assembly of particles moving down a surface. Each particle is followed exactly as it moves and interacts with the surface and with its neighbours. Every particle is checked for contacts with every other particle in every time step, using a special cell logic for contact detection in order to reduce the computational effort. The Discrete Element code PFC3D was adapted in order to make discontinuum mechanics models of run outs possible. The Punta Thurwieser rock avalanche and the Frank Slide were modelled by DAN as well as by PFC3D. The simulations showed that the parameters necessary to obtain results coinciding with observations in nature are completely different for the two methods. The maximum velocity distributions from DAN3D reveal that areas of different maximum flow velocity lie next to each other in the Punta Thurwieser run out, whereas the maximum flow velocity is almost constant over the width of the run out for the Frank Slide. Some 30 percent of total kinetic energy is rotational kinetic energy in
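    The equivalent-fluid idea can be illustrated with a point mass running out over horizontal terrain under a Voellmy rheology: a Coulomb friction term (coefficient mu) plus a velocity-squared "turbulence" term (coefficient xi). All parameter values are invented for the demo and are not calibrated to either case study:

```python
# Point-mass runout under a Voellmy rheology on flat terrain:
#   dv/dt = -(mu * g + g * v^2 / xi)
g = 9.81
mu, xi = 0.1, 500.0          # friction coefficient, turbulence coefficient
v, s, dt = 40.0, 0.0, 0.01   # initial speed (m/s), distance (m), time step (s)

while v > 0.0:
    a = -(mu * g + g * v * v / xi)   # total deceleration
    v = max(v + a * dt, 0.0)         # explicit Euler step, clamped at rest
    s += v * dt

runout = s
```

The analytic runout for this rheology, s = (xi / 2g) * ln(1 + v0^2 / (mu * xi)), is about 89 m for these values, and the time-stepped estimate lands close to it; in a full model such parameters are what must be back-calculated from observed deposits.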

  20. The KORSAR Computer Code Modeling of Stratified Two-Phase Flow Hydrodynamics in Horizontal Pipes

    SciTech Connect

    Yudov, Yu. V.

    2002-07-01

    The KORSAR best-estimate computer code has been developed at NITI since 1996. It is designed to numerically simulate transient and accident conditions in VVER-type reactors /1/. Since 1999, the code development activity has been coordinated by the Center for Computer Code Development under Russia's Minatom. (authors)

  1. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; and 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit rates.
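    The sparse-recovery principle behind such soft decoders can be sketched in one dimension: a signal that is sparse in some basis can be recovered from a small number of random linear measurements. Orthogonal matching pursuit (OMP) is used below as a simple, generic stand-in for the paper's soft-decoding machinery; dimensions and amplitudes are arbitrary:

```python
import numpy as np

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then re-fit by least squares over the support.
    res, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ res)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        res = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(5)
n, m, k = 64, 48, 3
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix

x = np.zeros(n)
x[[7, 20, 41]] = [1.5, -2.0, 0.8]            # 3-sparse signal
y = Phi @ x                                   # compressive measurements

x_hat = omp(Phi, y, k)
```

With 48 random measurements of a 3-sparse, 64-sample signal, the greedy decoder recovers the signal essentially exactly; the paper's decoder applies the same sparsity prior to image patches.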

  2. Thermodynamic analysis of five compressed-air energy-storage cycles. [Using CAESCAP computer code

    SciTech Connect

    Fort, J. A.

    1983-03-01

    One important aspect of the Compressed-Air Energy-Storage (CAES) Program is the evaluation of alternative CAES plant designs. The thermodynamic performance of the various configurations is particularly critical to the successful demonstration of CAES as an economically feasible energy-storage option. A computer code, the Compressed-Air Energy-Storage Cycle-Analysis Program (CAESCAP), was developed in 1982 at the Pacific Northwest Laboratory. This code was designed specifically to calculate overall thermodynamic performance of proposed CAES-system configurations. The results of applying this code to the analysis of five CAES plant designs are presented in this report. The designs analyzed were: conventional CAES; adiabatic CAES; hybrid CAES; pressurized fluidized-bed CAES; and direct coupled steam-CAES. Inputs to the code were based on published reports describing each plant cycle. For each cycle analyzed, CAESCAP calculated the thermodynamic station conditions and individual-component efficiencies, as well as overall cycle-performance-parameter values. These data were then used to diagram the availability and energy flow for each of the five cycles. The resulting diagrams graphically illustrate the overall thermodynamic performance inherent in each plant configuration, and enable a more accurate and complete understanding of each design.
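    A back-of-the-envelope version of the kind of station calculation such a cycle-analysis code performs is the specific work to compress air, treated as an ideal gas, to a given pressure ratio with an isentropic efficiency. The numbers below are illustrative, not taken from any of the five plant designs:

```python
# Ideal-gas compressor station calculation.
cp = 1005.0          # J/(kg K), specific heat of air
gamma = 1.4          # ratio of specific heats
T1 = 288.15          # K, inlet temperature
p_ratio = 8.0        # outlet/inlet pressure ratio
eta_s = 0.85         # isentropic efficiency

T2s = T1 * p_ratio ** ((gamma - 1.0) / gamma)   # ideal outlet temperature
w_ideal = cp * (T2s - T1)                       # J/kg, isentropic work
w_actual = w_ideal / eta_s                      # J/kg, including losses
T2 = T1 + (T2s - T1) / eta_s                    # actual outlet temperature
```

Chaining such component relations through compressors, stores, recuperators, and turbines, and comparing the resulting availability flows, is essentially what the cycle diagrams in the report summarize.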

  3. Mechanization of Library Procedures in the Medium-sized Medical Library: X. Uniqueness of Compression Codes for Bibliographic Retrieval *

    PubMed Central

    Coe, Mary Jordan

    1970-01-01

    Two-word compression techniques, the University of Chicago experimental search code and a phonetic code similar to the SOUNDEX coding system, were tested as search codes on a data base of 7,464 bibliographic records. These codes were automatically generated and tested for uniqueness. A modified version of the University of Chicago search code produced the best results, with a uniqueness factor of 98.83 percent. The algorithms for generating these codes are explained, and the implications of the findings for medium-sized libraries are discussed. PMID:4924789
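    For reference, classic American Soundex, the scheme the tested phonetic code was modelled on, can be written in a few lines (the University of Chicago search-code variant is not reproduced here):

```python
# Classic American Soundex: keep the first letter, then map consonants to
# digits, merging runs of the same digit; H and W do not break a run,
# vowels do; pad/truncate to one letter plus three digits.
CODES = {}
for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                       ("L", "4"), ("MN", "5"), ("R", "6")]:
    for ch in letters:
        CODES[ch] = digit

def soundex(name):
    name = "".join(ch for ch in name.upper() if ch.isalpha())
    first = name[0]
    last = CODES.get(first)          # digit of the previous coded letter
    out = []
    for ch in name[1:]:
        code = CODES.get(ch)
        if code is None:
            if ch not in "HW":       # vowels break a run; H and W do not
                last = None
            continue
        if code != last:
            out.append(code)
            last = code
        if len(out) == 3:
            break
    return first + "".join(out).ljust(3, "0")
```

The collision behaviour is the whole point of such codes: "Robert" and "Rupert" compress to the same key, which is why the article measures uniqueness over a real bibliographic file.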

  4. Analysis of Doppler Effect on the Pulse Compression of Different Codes Emitted by an Ultrasonic LPS

    PubMed Central

    Paredes, José A.; Aguilera, Teodoro; Álvarez, Fernando J.; Lozano, Jesús; Morera, Jorge

    2011-01-01

    This work analyses the effect of the receiver movement on the detection by pulse compression of different families of codes characterizing the emissions of an Ultrasonic Local Positioning System. Three families of codes have been compared: Kasami, Complementary Sets of Sequences and Loosely Synchronous, considering in all cases three different lengths close to 64, 256 and 1,024 bits. This comparison is first carried out by using a system model in order to obtain a set of results that are then experimentally validated with the help of an electric slider that provides radial speeds up to 2 m/s. The performance of the codes under analysis has been characterized by means of the auto-correlation and cross-correlation bounds. The results derived from this study should be of interest to anyone performing matched filtering of ultrasonic signals with a moving emitter/receiver. PMID:22346670
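    Why receiver movement degrades pulse compression can be sketched directly: a radial Doppler shift rotates the phase across the received code, so the matched filter no longer integrates coherently and the correlation peak drops. The code, its length, and the Doppler value below are illustrative, not the Kasami/CSS/LS sequences of the study:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 256
code = rng.choice([-1.0, 1.0], N)          # generic binary (BPSK) code

# Doppler modelled as a linear phase ramp accumulating pi radians over
# the whole code, i.e. half a cycle of Doppler shift per code period.
theta = np.pi / N
received = code * np.exp(1j * theta * np.arange(N))

peak_static = np.abs(np.vdot(code, code))       # matched filter, no Doppler
peak_doppler = np.abs(np.vdot(code, received))  # matched filter, with Doppler
loss = peak_doppler / peak_static
```

For this phase ramp the peak falls to |sinc|-like 2/pi of its static value regardless of the particular code, which is why code families differ mainly in how their correlation bounds, rather than the main peak, behave under Doppler.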

  5. Cholla: 3D GPU-based hydrodynamics code for astrophysical simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2016-07-01

    Cholla (Computational Hydrodynamics On ParaLLel Architectures) models the Euler equations on a static mesh and evolves the fluid properties of thousands of cells simultaneously using GPUs. It can update over ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction, allowing computation of astrophysical simulations with physically interesting grid resolutions (>256^3) on a single device; calculations can be extended onto multiple devices with nearly ideal scaling beyond 64 GPUs.

  6. Gaseous Laser Targets and Optical Diagnostics for Studying Compressible Turbulent Hydrodynamic Instabilities

    SciTech Connect

    Edwards, M J; Hansen, J; Miles, A R; Froula, D; Gregori, G; Glenzer, S; Edens, A; Dittmire, T

    2005-02-08

    The possibility of studying compressible turbulent flows using gas targets driven by high-power lasers and diagnosed with optical techniques is investigated. The potential advantage over typical laser experiments that use solid targets and x-ray diagnostics is more detailed information over a larger range of spatial scales. An experimental system is described to study shock-jet interactions at high Mach number. It consists of a mini-chamber filled with nitrogen at a pressure of ~1 atm, situated inside a much larger vacuum chamber. An intense laser pulse (~100 J in ~5 ns) is focused onto a thin, ~0.3 μm thick silicon nitride window at one end of the mini-chamber. The window acts both as a vacuum barrier and as the laser entrance hole. The "explosion" caused by the deposition of the laser energy just inside the window drives a strong blast wave out into the nitrogen atmosphere. The spherical shock expands and interacts with a jet of xenon introduced through the top of the mini-chamber. The Mach number of the interaction is controlled by the separation of the jet from the explosion. The resulting flow is visualized with an optical schlieren system using a pulsed laser source at a wavelength of 0.53 μm. The technical path leading up to the design of this experiment is presented, and future prospects are briefly considered. Lack of laser time in the final year of the project severely limited the experimental results obtained with the new apparatus.

  7. Improvement Text Compression Performance Using Combination of Burrows Wheeler Transform, Move to Front, and Huffman Coding Methods

    NASA Astrophysics Data System (ADS)

    Aprilianto, Mohammada; Abdurohman, Maman

    2014-04-01

    Text is a medium that is often used to convey information in both wired and wireless networks. One limitation of wireless systems is network bandwidth. In this study we implemented a text compression application using a lossless technique that combines the Burrows-Wheeler transform, move-to-front, and Huffman coding methods. The added compression is expected to save network resources. The application reports the compression ratio. Testing shows that compression with Huffman coding alone is efficient when the number of text characters is above 400, whereas compression with the Burrows-Wheeler transform, move-to-front, and Huffman coding is efficient above 531 characters. The combination of these methods is more efficient than Huffman coding alone above 979 characters. The more characters that are compressed, and the more repeated symbol patterns the text contains, the better the compression ratio.
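
    The first two stages of this pipeline can be sketched as follows. This is a naive illustration (a quadratic-time BWT using a sentinel character), not the authors' implementation; Huffman coding of the move-to-front indices would complete the chain:

```python
def bwt(s: str) -> str:
    """Naive Burrows-Wheeler transform: append a sentinel, sort all
    rotations, return the last column (groups identical contexts)."""
    s += "\0"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def mtf(s: str) -> list:
    """Move-to-front: recently seen symbols map to small integers,
    so BWT's clustered runs become highly skewed (Huffman-friendly) data."""
    alphabet = sorted(set(s))
    out = []
    for c in s:
        i = alphabet.index(c)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out
```

    For example, `bwt("banana")` groups the repeated characters into runs ("annb\0aa"), which move-to-front then turns into mostly small integers for the entropy coder.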

  8. GENESIS: A High-Resolution Code for Three-dimensional Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Aloy, M. A.; Ibáñez, J. M.; Martí, J. M.; Müller, E.

    1999-05-01

    The main features of a three-dimensional, high-resolution special relativistic hydro code based on relativistic Riemann solvers are described. The capabilities and performance of the code are discussed. In particular, we present the results of extensive test calculations that demonstrate that the code can accurately and efficiently handle strong shocks in three spatial dimensions. Results of the performance of the code on single and multiprocessor machines are given. Simulations (in double precision) with ≤7×10^6 computational cells require less than 1 Gbyte of RAM memory and ~7×10^-5 CPU s per zone and time step (on an SGI/Cray Origin 2000 with an R10000 processor). Currently, a version of the numerical code is under development which is suited for massively parallel computers with distributed-memory architecture (such as, e.g., the Cray T3E).

  9. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    PubMed

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock's dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force stimulating both the aggregation of large trading volumes and that of transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics, and this data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors.

  10. Single Stock Dynamics on High-Frequency Data: From a Compressed Coding Perspective

    PubMed Central

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock's dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force stimulating both the aggregation of large trading volumes and that of transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics, and this data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors. PMID:24586235

  11. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. This technique has been concentrated on the spatial uncompressed domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
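
    Plain 3-bit LSB substitution on an 8-bit BTC mean value can be sketched as below. Note that the paper's contribution is the dynamic-programming search for an optimal bijective remapping of the substituted values, which this sketch deliberately omits:

```python
def embed_lsb3(mean: int, bits: int) -> int:
    """Replace the three least-significant bits of an 8-bit mean value
    with three secret bits (plain substitution, no optimal remapping)."""
    return (mean & ~0b111) | (bits & 0b111)

def extract_lsb3(mean: int) -> int:
    """Recover the three embedded secret bits from a stego mean value."""
    return mean & 0b111
```

    Each BTC block carries two mean values (high and low), so this plain scheme already embeds six bits per block; the bijective remapping then minimizes the distortion those substitutions introduce.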

  12. The Influence of the Rock Properties on the Pulse Compression Performance of Coded Signals

    NASA Astrophysics Data System (ADS)

    Wu, H.; Zhu, W.

    2016-12-01

    Coded excitation technology (CET) is an effective measurement method to enhance the penetrability and resolution of ultrasound. However, the high complexity of rock structure can significantly affect the pulse compression performance of coded signals. To analyze these influences, a numerical simulation and an ultrasonic testing experiment were performed on different samples: Plexiglas, sandstone, granite, and three marbles of different thickness. In these two experiments, a tapered linear frequency modulated (TLFM) signal, Barker with sine carrier (BS), and Barker with TLFM carrier (BTL) were propagated in rock, and the changes in their pulse compression performance were investigated and discussed. Because random noise, colored noise, and the ultrasound transducer also influence the coded signals, a simulation and a transducer docking experiment were performed. The results show that random and colored noise have little effect on the gain in signal-to-noise ratio (GSNR), and their influence can be alleviated by filtering; the transducer has the greatest impact on the coded signals, with the smallest impact on the BTL signal. Furthermore, the propagation of coded-excitation ultrasound in homogeneous rock is also simulated with a finite difference method and compared with the simulation of a single pulse. The results show that the change of waveform has minimal influence on pulse compression performance. On this basis, six samples are probed with ultrasonic signals excited by a single pulse, TLFM, BS, and BTL. By analyzing the variations in signal-to-noise ratio (SNR) and main lobe width (MLW) of the received signals, several meaningful findings are obtained: first, porosity, heterogeneity, and attenuation decrease the GSNR and broaden the main lobe width; second, TLFM has the maximum GSNR loss and lowest resolution loss. On the contrary, BS has the least GSNR loss and maximum resolution loss, and overall, BTL has a higher

  13. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, form a secret key shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. The measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. Experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  14. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMIC CODE FOR CORE-COLLAPSE SUPERNOVAE. I. METHOD AND CODE TESTS IN SPHERICAL SYMMETRY

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald E-mail: thj@mpa-garching.mpg.d

    2010-07-15

    We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the 'ray-by-ray plus' approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in

  15. A New Multi-dimensional General Relativistic Neutrino Hydrodynamic Code for Core-collapse Supernovae. I. Method and Code Tests in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald

    2010-07-01

    We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the "ray-by-ray plus" approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in

  16. Finite element modeling of magnetic compression using coupled electromagnetic-structural codes

    SciTech Connect

    Hainsworth, G.; Leonard, P.J.; Rodger, D.; Leyden, C.

    1996-05-01

    A link between the electromagnetic code, MEGA, and the structural code, DYNA3D has been developed. Although the primary use of this is for modelling of Railgun components, it has recently been applied to a small experimental Coilgun at Bath. The performance of Coilguns is very dependent on projectile material conductivity, and so high purity aluminium was investigated. However, due to its low strength, it is crushed significantly by magnetic compression in the gun. Although impractical as a real projectile material, this provides useful benchmark experimental data on high strain rate plastic deformation caused by magnetic forces. This setup is equivalent to a large scale version of the classic jumping ring experiment, where the ring jumps with an acceleration of 40 kG.

  17. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention in recent years, and the need for high-quality multichannel medical signal compression in personal medical products is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression of multichannel signals and is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimal joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding stage of joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for a multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel bio-signal lossless data compressor. PMID:25237900

  18. Cross-Beam Energy Transfer (CBET) Effect with Additional Ion Heating Integrated into the 2-D Hydrodynamics Code DRACO

    NASA Astrophysics Data System (ADS)

    Marozas, J. A.; Collins, T. J. B.

    2012-10-01

    The cross-beam energy transfer (CBET) effect causes pump and probe beams to exchange energy via stimulated Brillouin scattering [W. L. Kruer, The Physics of Laser-Plasma Interactions, Frontiers in Physics, Vol. 73, edited by D. Pines (Addison-Wesley, Redwood City, CA, 1988), p. 45]. The total energy gained does not, in general, equal the total energy lost; the ion-acoustic wave carries the residual energy balance, which can decay, resulting in ion heating [E. A. Williams et al., Phys. Plasmas 11, 231 (2004)]. The additional ion heating can retune the conditions for CBET, affecting the overall energy transfer as a function of time. CBET and the additional ion heating are incorporated into the 2-D hydrodynamics code DRACO [P. B. Radha et al., Phys. Plasmas 12, 056307 (2005)] as an integral part of the 3-D ray trace, where CBET is treated self-consistently within the hydrodynamic evolution. DRACO simulation results employing CBET will be discussed. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.

  19. ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests.

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.

  20. Sub-Nyquist sampling and detection in Costas coded pulse compression radars

    NASA Astrophysics Data System (ADS)

    Hanif, Adnan; Mansoor, Atif Bin; Imran, Ali Shariq

    2016-12-01

    Modern pulse compression radar involves digital signal processing of high-bandwidth pulses modulated with different coding schemes. One of the limiting factors in a radar design seeking the desired target range and resolution is the need for high-rate analog-to-digital (A/D) conversion fulfilling the Nyquist sampling criterion. High sampling rates necessitate large storage capacity, more power consumption, and extra processing. We introduce a new approach that samples a wideband radar waveform modulated with a Costas sequence at a sub-Nyquist rate, based upon the concept of compressive sensing (CS). Sub-Nyquist measurements of the Costas sequence waveform are performed in an analog-to-information (A/I) converter based upon random demodulation, replacing the traditional A/D converter. The work presents an order-8 Costas coded waveform with sub-Nyquist sampling and its reconstruction. The reconstructed waveform is compared with the conventionally sampled signal and demonstrates high-quality signal recovery from the sub-Nyquist sampled signal. Furthermore, the performance of CS-based detection after reconstruction is evaluated in terms of receiver operating characteristic (ROC) curves and compared with the conventional Nyquist-rate matched filtering scheme.
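
    Recovery of a sparse signal from sub-Nyquist measurements can be illustrated with any standard sparse-recovery routine. The sketch below uses Orthogonal Matching Pursuit on a random Gaussian measurement matrix as a generic stand-in; the paper does not specify this particular solver, and the matrix, dimensions, and sparsity here are illustrative assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x
    by picking the column most correlated with the residual, then re-projecting."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)) / np.sqrt(40)   # 40 "sub-Nyquist" measurements
x_true = np.zeros(128)
x_true[[5, 60, 100]] = [1.0, -2.0, 0.5]            # 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)
```

    With far fewer measurements than signal samples (40 versus 128 here), the sparse structure is what makes exact reconstruction possible, which is the principle the radar scheme exploits.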

  1. Scalability of the CTH Hydrodynamics Code on the Sun HPC 10000 Architecture

    DTIC Science & Technology

    2000-02-01

    the Sun HPC 10000 computer system. The scalability of the message-passing code on this symmetric multiprocessor architecture is presented and compared to ideal linear multiprocessor performance. The computed results are also compared to experimental data for the purpose of validating the shock physics application on the Sun HPC 10000 system.

  2. Modelling of the magnetic field effects in hydrodynamic codes using a second order tensorial diffusion scheme

    NASA Astrophysics Data System (ADS)

    Breil, J.; Maire, P.-H.; Nicolaï, P.; Schurtz, G.

    2008-05-01

    In laser-produced plasmas, large self-generated magnetic fields have been measured. The classical formulas of Braginskii predict that magnetic fields induce a reduction in the magnitude of the heat flux and a rotation of the flux through the Righi-Leduc effect. In this paper, a second-order tensorial diffusion method used to correctly solve the Righi-Leduc effect in multidimensional codes is presented.

  3. Real-time postprocessing technique for compression artifact reduction in low-bit-rate video coding

    NASA Astrophysics Data System (ADS)

    Shen, Mei-Yin; Kuo, C.-C. Jay

    1998-10-01

    A computationally efficient postprocessing technique to reduce compression artifacts in low-bit-rate video coding is proposed in this research. We first formulate artifact reduction as a robust estimation problem. Under this framework, the artifact-free image is obtained by minimizing a cost function that accounts for smoothness constraints as well as image fidelity. Instead of the traditional approach of applying a gradient descent search for optimization, a set of nonlinear filters is proposed to approximate the global minimum, reducing the computational complexity so that real-time postprocessing is possible. Experiments with the H.263 codec show that the proposed method is effective in reducing severe blocking and ringing artifacts while maintaining low complexity and low memory bandwidth.

  4. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no eigenvalue guess is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N-factors) for transition correlation on swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.

  5. A new relativistic viscous hydrodynamics code and its application to the Kelvin-Helmholtz instability in high-energy heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-01

    We construct a new relativistic viscous hydrodynamics code optimized in Milne coordinates. We split the conservation equations into an ideal part and a viscous part using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part, and the Piecewise Exact Solution (PES) method is applied for the viscous part. We check the validity of our numerical calculations by comparison with analytical solutions: viscous Bjorken flow and the Israel-Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin-Helmholtz instability in high-energy heavy-ion collisions.

  6. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    SciTech Connect

    Chertkov, Michael; Chilappagari, Shashi K; Vasic, Bane

    2010-01-01

    We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. The BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that the BasP fails on the instanton, while its action on any modification of the CS-instanton that decreases a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which yields a shortest instanton (error-vector) pattern of length 11.
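
    The Basis Pursuit step itself, minimizing ||x||_1 subject to Ax = y, can be posed as a linear program by splitting x into nonnegative parts. The sketch below assumes SciPy's `linprog` as the LP solver; the measurement matrix and sparse vector are illustrative, not the 512 × 120 instance from the abstract:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Basis Pursuit: min ||x||_1 s.t. A @ x = y, as a linear program.
    Split x = u - v with u, v >= 0 and minimise sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # A @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))      # random measurement matrix
x_true = np.zeros(60)
x_true[[7, 33]] = [1.5, -1.0]          # sufficiently sparse error vector
x_hat = basis_pursuit(A, A @ x_true)
```

    An instanton search in the paper's sense amounts to hunting for sparse vectors on which a solver like this returns something other than `x_true`.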

  7. Design of indirectly driven, high-compression Inertial Confinement Fusion implosions with improved hydrodynamic stability using a 4-shock adiabat-shaped drive

    SciTech Connect

    Milovich, J. L. Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R.

    2015-12-15

    Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm^2, but with significantly lower total neutron yields (between 1.5 × 10^14 and 5.5 × 10^14) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the “high-foot” experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3–10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm^2. Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.

  8. Design of indirectly driven, high-compression Inertial Confinement Fusion implosions with improved hydrodynamic stability using a 4-shock adiabat-shaped drive

    NASA Astrophysics Data System (ADS)

    Milovich, J. L.; Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R.

    2015-12-01

    Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm2, but with significantly lower total neutron yields (between 1.5 × 1014 and 5.5 × 1014) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the "high-foot" experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3-10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm2. Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.

  9. High-performance lossless and progressive image compression based on an improved integer lifting scheme and Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and a little better than (or equal to) the famous SPIHT. The lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its efficiency in time can be improved by 162%. The decoder is about 12.3 times faster than that of SPIHT, and its efficiency in time can be raised by about 148%. Instead of using the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
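    For orientation, the base Rice (Golomb power-of-two) code that this record modifies can be sketched as follows. This is a generic textbook version, not the authors' improved codec: each non-negative integer is split into a unary-coded quotient and a k-bit binary remainder.

    ```python
    def rice_encode(values, k):
        """Rice-encode non-negative integers with parameter k (divisor 2**k)."""
        bits = []
        for v in values:
            q, r = v >> k, v & ((1 << k) - 1)
            bits.extend([1] * q)                                   # unary quotient
            bits.append(0)                                         # terminator
            bits.extend((r >> i) & 1 for i in reversed(range(k)))  # k-bit remainder
        return bits

    def rice_decode(bits, k, count):
        """Decode `count` values from a Rice-coded bit list."""
        out, i = [], 0
        for _ in range(count):
            q = 0
            while bits[i] == 1:        # read unary quotient
                q += 1
                i += 1
            i += 1                     # skip terminator 0
            r = 0
            for _ in range(k):         # read k-bit remainder
                r = (r << 1) | bits[i]
                i += 1
            out.append((q << k) | r)
        return out
    ```

    Rice coding is effective precisely for the near-Laplacian (geometric) residual distributions the abstract mentions.
    
    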

  10. Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates

    NASA Technical Reports Server (NTRS)

    Deane, Anil E.

    1996-01-01

    Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux Corrected Transport, and the code builds on the existing code of Zalesak and Spicer. The flow considered is a shear flow with an incoming flow that perturbs this base flow. Several test cases corresponding to pressure-balanced magnetic structures with velocity shear flow and various inflows, including Alfven waves, are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry; future versions will consider a spherical geometry. Some discussion of this issue is presented.

  11. A Validation Study of the Compressible Rayleigh–Taylor Instability Comparing the Ares and Miranda Codes

    DOE PAGES

    Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.

    2017-04-20

    In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. We also study the transition from direct numerical simulation (DNS) to LES; the highest resolution simulations become LES at around t/τ ≃ 1.5. Finally, to have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).

  12. Single exposure optically compressed imaging and visualization using random aperture coding

    NASA Astrophysics Data System (ADS)

    Stern, A.; Rivenson, Yair; Javidi, Bahram

    2008-11-01

    The common approach in digital imaging follows the sample-then-compress framework. According to this approach, in the first step as many pixels as possible are captured, and in the second step the captured image is compressed by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to combine these two steps into a single one, that is, to compress the information optically before it is recorded. In this paper we overview and extend an optical implementation of compressed sensing theory that we have recently proposed. With this new imaging approach the compression is accomplished inherently in the optical acquisition step. The primary feature of this imaging approach is a randomly encoded aperture realized by means of a random phase screen. The randomly encoded aperture implements a random projection of the object field in the image plane. Using a single exposure, a randomly encoded image is captured, which can be decoded by a proper decoding algorithm.
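    The random-projection idea behind compressed sensing can be illustrated with a minimal sketch (an assumption-laden toy, not the paper's optical system): a random ±1 matrix takes m < n measurements of a scene, and a 1-sparse scene can be decoded by a single correlation (matching-pursuit) step.

    ```python
    import random

    def sensing_matrix(m, n, seed=0):
        """Random ±1 (Bernoulli) sensing matrix: one simple stand-in for the
        random projections implemented optically by a random phase screen."""
        rng = random.Random(seed)
        return [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

    def measure(A, x):
        """Compressive measurement y = A x (m measurements of an n-sample scene)."""
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    def recover_one_spike(A, y):
        """Decode a 1-sparse scene by correlating y against each column of A
        (a single matching-pursuit step; real decoders iterate or solve an L1 program)."""
        m, n = len(A), len(A[0])
        scores = [abs(sum(A[i][j] * y[i] for i in range(m))) for j in range(n)]
        return max(range(n), key=scores.__getitem__)
    ```

    With 20 measurements of a 50-sample scene, the spike location is recovered because the true column correlates perfectly with y while random columns correlate only weakly.
    
    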

  13. A Multigroup diffusion solver using pseudo transient continuation for a radiation-hydrodynamic code with patch-based AMR

    SciTech Connect

    Shestakov, A I; Offner, S R

    2006-09-21

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface, and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates
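    The pseudo transient continuation (Ψtc) idea can be sketched in scalar form (an illustrative reduction, not the paper's multigroup solver; the switched-evolution-relaxation update for Δτ is one common choice): each iterate solves a damped Newton system, and Δτ grows as the residual falls, so the iteration transitions from pseudo-time stepping to full Newton.

    ```python
    def psi_tc(f, df, u0, dt0=0.1, tol=1e-12, max_iter=500):
        """Scalar pseudo transient continuation (Ψtc) for f(u) = 0.

        Each iterate solves (1/Δτ + f'(u)) δ = −f(u); Δτ is grown by
        switched evolution relaxation (Δτ ∝ 1/‖F‖), so the damping fades
        and the scheme approaches Newton's method near the solution.
        """
        u = u0
        r0 = abs(f(u0))
        for _ in range(max_iter):
            r = abs(f(u))
            if r < tol:
                break
            dt = dt0 * r0 / r              # SER timestep growth
            u -= f(u) / (1.0 / dt + df(u))
        return u
    ```

    The 1/Δτ term keeps early iterates small and stable, which mirrors the positivity/diagonal-dominance role the abstract ascribes to the Ψtc parameter.
    
    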

  14. A Multigroup diffusion Solver Using Pseudo Transient Continuation for a Radiation-Hydrodynamic Code with Patch-Based AMR

    SciTech Connect

    Shestakov, A I; Offner, S R

    2007-03-02

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface, and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates

  15. 3D Hydrodynamic Simulations with Yguazú-A Code to Model a Jet in a Galaxy Cluster

    NASA Astrophysics Data System (ADS)

    Haro-Corzo, S. A. R.; Velazquez, P.; Diaz, A.

    2009-05-01

    We present preliminary results for a galaxy jet expanding into an intra-cluster medium (ICM). We model the jet-gas interaction and the evolution of an extragalactic collimated jet placed at the center of the computational grid, represented as a cylinder ejecting gas in the z-axis direction with fixed velocity. The jet has a precession motion around the z-axis (period of 10^5 s) and an orbital motion in the XY-plane (period of 500 yr). It is embedded in the ICM, which is modeled as a surrounding wind in the XZ-plane. We carried out 3D hydrodynamical simulations using the Yguazú-A code; the simulations do not include radiative losses. In order to compare the numerical results with observations, we generated synthetic X-ray emission images. High-resolution X-ray observations of rich clusters of galaxies show diffuse emission with filamentary structure (sometimes called a cooling flow or X-ray filament), while radio observations show jet-like emission from the central region of the cluster. Combining these observations, we explore the possibility that the jet-ambient gas interaction leads to a filamentary morphology in the X-ray domain. We have found that the simulation including orbital motion offers the possibility to explain the diffuse emission observed in the X-ray domain. The circular orbital motion, in addition to the precession, helps disperse the shocked gas, and the X-ray appearance of the 3D simulation reproduces some important details of the Abell 1795 X-ray emission (Rodriguez-Martinez et al. 2006, A&A, 448, 15): a bright bow shock (spot) at the north, where the jet and the ICM interact directly and which is observed in the X-ray image. Meanwhile, on the south side there is no bow-shock X-ray emission, but the wake appears as an X-ray source. This wake is part of the diffuse shocked ambient gas region.

  16. A new relativistic viscous hydrodynamics code and its application to the Kelvin–Helmholtz instability in high-energy heavy-ion collisions

    DOE PAGES

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-09

    Here, we construct a new relativistic viscous hydrodynamics code optimized in the Milne coordinates. We split the conservation equations into an ideal part and a viscous part using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part, and the Piecewise Exact Solution (PES) method is applied for the viscous part. Furthermore, we check the validity of our numerical calculations against analytical solutions: the viscous Bjorken flow and the Israel–Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin–Helmholtz instability in high-energy heavy-ion collisions.
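    Strang splitting, used above to separate the ideal and viscous parts, can be demonstrated on a scalar model equation (a generic sketch under assumed coefficients, not the authors' hydrodynamics): du/dt = a·u + b·u², where each part has an exact substep solution, and the half-full-half composition is second-order accurate.

    ```python
    import math

    def exact(a, b, u0, t):
        """Closed-form solution of du/dt = a*u + b*u**2 (Bernoulli, via v = 1/u)."""
        return 1.0 / ((1.0 / u0 + b / a) * math.exp(-a * t) - b / a)

    def strang_step(a, b, u, dt):
        """One Strang step: half linear substep, full quadratic substep, half linear."""
        u *= math.exp(a * dt / 2)      # exact solution of du/dt = a*u over dt/2
        u = u / (1.0 - b * u * dt)     # exact solution of du/dt = b*u**2 over dt
        return u * math.exp(a * dt / 2)

    def integrate(a, b, u0, t, n):
        """March n Strang steps to time t."""
        u, dt = u0, t / n
        for _ in range(n):
            u = strang_step(a, b, u, dt)
        return u
    ```

    Halving the step size should cut the splitting error by roughly a factor of four, the signature of second-order accuracy.
    
    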

  17. An excellent reduction in sidelobe level for P4 code by using of a new pulse compression scheme

    NASA Astrophysics Data System (ADS)

    Alighale, S.; Zakeri, B.

    2014-10-01

    The P4 polyphase code is well known in the pulse compression technique. For a P4 code of length 1000, the peak sidelobe level (PSL) and integrated sidelobe level (ISL) are -36 dB and -16 dB, respectively. To increase performance, different techniques exist to reduce the sidelobes of the P4 code. This paper presents a novel sidelobe reduction technique that reduces the PSL and ISL to -127 dB and -104 dB, respectively. Other sidelobe reduction techniques, such as the Woo filter, are also investigated and compared with the proposed technique. Simulations and results show that the proposed technique produces a better PSL and ISL than the other techniques.
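    The baseline being improved here, P4 generation plus matched-filter compression, can be sketched generically (a standard textbook construction, not the paper's reduction technique): the P4 phase is quadratic in the sample index, and compression is the aperiodic autocorrelation.

    ```python
    import cmath
    import math

    def p4_code(N):
        """Unit-amplitude P4 polyphase sequence: phase = pi*n*(n-N)/N, n = 0..N-1."""
        return [cmath.exp(1j * math.pi * n * (n - N) / N) for n in range(N)]

    def pulse_compress(code):
        """Matched-filter output magnitudes (aperiodic autocorrelation) over all lags."""
        N = len(code)
        out = []
        for lag in range(-(N - 1), N):
            s = sum(code[n] * code[n - lag].conjugate()
                    for n in range(max(0, lag), min(N, N + lag)))
            out.append(abs(s))
        return out
    ```

    The zero-lag peak equals the code length N, and the uncompressed sidelobes sit well below it; sidelobe-reduction filters like the one proposed then suppress them further.
    
    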

  18. Development of a Three-Dimensional PSE Code for Compressible Flows: Stability of Three-Dimensional Compressible Boundary Layers

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Jeyasingham, Samarasingham

    1999-01-01

    A program is developed to investigate the linear stability of three-dimensional compressible boundary layer flows over bodies of revolution. The problem is formulated as a two-dimensional (2D) eigenvalue problem incorporating the meanflow variations in the normal and azimuthal directions. Normal mode solutions are sought in the whole plane rather than along a line normal to the wall as is done in classical one-dimensional (1D) stability theory. The stability characteristics of a supersonic boundary layer over a sharp cone with a 5° half-angle at 2 degrees angle of attack are investigated. The 1D eigenvalue computations showed that the most amplified disturbances occur around x₂ = 90 degrees, and the azimuthal mode number of the most amplified disturbances ranges between m = -30 and -40. The frequencies of the most amplified waves are smaller in the middle region, where the crossflow dominates the instability, than the most amplified frequencies near the windward and leeward planes. The 2D eigenvalue computations showed that, due to the variations in the azimuthal direction, the eigenmodes are clustered into isolated confined regions; for some eigenvalues, the eigenfunctions are clustered in two regions. Due to the nonparallel effect in the azimuthal direction, the most amplified disturbances are shifted to 120 degrees compared to 90 degrees for the parallel theory. It is also observed that the nonparallel amplification rates are smaller than those obtained from the parallel theory.

  19. Joint compression/watermarking scheme using majority-parity guidance and halftoning-based block truncation coding.

    PubMed

    Guo, Jing-Ming; Liu, Yun-Fu

    2010-08-01

    In this paper, a watermarking scheme, called majority-parity-guided error-diffused block truncation coding (MPG-EDBTC), is proposed to achieve high image quality and embedding capacity. EDBTC exploits error diffusion to effectively reduce the blocking effect and false contour that are inherent in traditional BTC. In addition, the coding efficiency is significantly improved by replacing high- and low-mean evaluation with extreme-value substitution. The proposed MPG-EDBTC embeds a watermark simultaneously during compression by evaluating the parity value in a predefined parity-check region (PCR). As documented in the experimental results, the proposed scheme provides good robustness, image quality, and processing efficiency. Finally, the proposed MPG-EDBTC is extended to embed multiple watermarks and achieves excellent image quality, robustness, and capacity. Nowadays, most multimedia is compressed before it is stored, so it is more appropriate to embed information such as watermarks during compression. The proposed method has been shown to solve effectively the inherent problems in traditional BTC and to provide excellent performance in watermark embedding.
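    The parity-guided embedding idea can be shown in a deliberately tiny toy (an assumption-laden simplification, not MPG-EDBTC itself): one watermark bit is stored as the parity of 1-pixels in a binary halftone block, flipping at most one pixel to enforce it.

    ```python
    def embed_bit(block, bit):
        """Embed one watermark bit by forcing the parity of 1-pixels in a
        binary block; flips at most one pixel (toy stand-in for PCR parity)."""
        b = [row[:] for row in block]
        if sum(map(sum, b)) % 2 != bit:
            b[0][0] ^= 1               # minimal change to fix the parity
        return b

    def extract_bit(block):
        """Recover the bit by reading the block's parity."""
        return sum(map(sum, block)) % 2
    ```

    Because extraction needs only a parity count, the watermark survives any processing that preserves the block's 1-pixel count modulo 2.
    
    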

  20. Compression and smart coding of offset and gain maps for intraoral digital x-ray sensors

    SciTech Connect

    Frosio, I.; Borghese, N. A.

    2009-02-15

    The response of indirect x-ray digital imaging sensors is often not homogenous on the entire surface area. In this case, calibration is needed to build offset and gain maps, which are used to correct the sensor output. The sensors of new generation are equipped with an on-board memory, which serves to store these maps. However, because of its limited dimension, the maps have to be compressed before saving them. This step is critical because of the extremely high compression rate required. The authors propose here a novel method to achieve such a high compression rate, without degrading the quality of the sensor output. It is based on quad tree decomposition, which performs an adaptive sampling of the offset and gain maps, matched with a RBF-based interpolation strategy. The method was tested on a typical intraoral radiographic sensor and compared with traditional compression techniques. Qualitative and quantitative results show that the method achieves a higher compression rate and produces images of superior quality. The method can be adopted also in different fields where a high compression rate is required.
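    The quad-tree adaptive sampling at the heart of this method can be sketched as follows (a minimal piecewise-constant version; the paper pairs the sampling with RBF interpolation instead of block means): blocks are split until their value range falls under a tolerance, and only one sample per leaf is stored.

    ```python
    def quadtree(img, x, y, size, tol):
        """Recursively split a size×size block until max−min ≤ tol; return
        leaves as (x, y, size, mean) tuples — an adaptive sampling of the map."""
        vals = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
        if size == 1 or max(vals) - min(vals) <= tol:
            return [(x, y, size, sum(vals) / len(vals))]
        h = size // 2
        leaves = []
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            leaves += quadtree(img, x + dx, y + dy, h, tol)
        return leaves

    def reconstruct(leaves, n):
        """Rebuild an n×n map from quadtree leaves (piecewise-constant)."""
        out = [[0.0] * n for _ in range(n)]
        for x, y, size, mean in leaves:
            for j in range(y, y + size):
                for i in range(x, x + size):
                    out[j][i] = mean
        return out
    ```

    Smooth regions of an offset or gain map collapse into a few large leaves, which is where the high compression rate comes from.
    
    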

  1. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    PubMed

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/ universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices activating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.

  2. JPEG2000 compressed domain image retrieval using context labels of significance coding and wavelet autocorrelogram

    NASA Astrophysics Data System (ADS)

    Angkura, Navin; Aramvith, Supavadee; Siddhichai, Supakorn

    2007-09-01

    JPEG has been a widely recognized image compression standard for many years. Nevertheless, it faces limitations, as compressed image quality degrades significantly at lower bit rates. This limitation has been addressed in JPEG2000, which has a tendency to replace JPEG, especially in storage and retrieval applications. To efficiently and practically index and retrieve compressed-domain images from a database, several image features can be extracted directly in the compressed domain without fully decompressing the JPEG2000 images. JPEG2000 utilizes the wavelet transform, which is widely used to analyze and describe texture patterns of an image. Another advantage of the wavelet transform is that one can analyze textures at multiple resolutions and classify directional texture pattern information into directional subbands: the HL subband carries horizontal frequency information, the LH subband vertical frequency information, and the HH subband diagonal frequency information. Nevertheless, many wavelet-based image retrieval approaches do not make good use of the directional subband information obtained by wavelet transforms for efficient directional texture pattern classification of retrieved images. This paper proposes a novel image retrieval technique in the JPEG2000 compressed domain using the image significance map to compute an image context in order to construct an image index. Experimental results indicate that the proposed method can effectively differentiate and categorize images with different directional texture information. In addition, an integration of the proposed features with the wavelet autocorrelogram also showed improvement in retrieval performance using ANMRR (Average Normalized Modified Retrieval Rank) compared to other known methods.
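    The directional-subband behavior described above can be verified with a one-level Haar transform (an illustrative sketch; JPEG2000 itself uses the 5/3 or 9/7 filters, not Haar): vertical stripes contain horizontal frequency, so their energy lands in HL.

    ```python
    def haar_rows(img):
        """One Haar analysis step along each row: first half lowpass
        (pairwise averages), second half highpass (pairwise differences)."""
        out = []
        for row in img:
            lo = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
            hi = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
            out.append(lo + hi)
        return out

    def transpose(img):
        return [list(c) for c in zip(*img)]

    def haar2d(img):
        """One-level 2D Haar transform; returns the LL, HL, LH, HH subbands."""
        t = transpose(haar_rows(transpose(haar_rows(img))))
        n, m = len(t) // 2, len(t[0]) // 2
        ll = [r[:m] for r in t[:n]]
        hl = [r[m:] for r in t[:n]]   # high x-frequency, low y: vertical edges
        lh = [r[:m] for r in t[n:]]   # low x, high y-frequency: horizontal edges
        hh = [r[m:] for r in t[n:]]
        return ll, hl, lh, hh
    ```

    A purely vertical-stripe image yields a nonzero HL subband and empty LH/HH subbands, matching the directional classification the abstract relies on.
    
    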

  3. ECG signal compression by multi-iteration EZW coding for different wavelets and thresholds.

    PubMed

    Tohumoglu, Gülay; Sezgin, K Erbil

    2007-02-01

    The modified embedded zero-tree wavelet (MEZW) compression algorithm for one-dimensional signals was derived from Shapiro's EZW algorithm, originally developed for image compression. The proposed codec proves significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit rate control and generates a bit stream progressive in quality or rate. The EZW and MEZW algorithms apply chosen threshold values or expressions in order to decide which transformed coefficients are significant. Thus, two different threshold definitions, namely percentage and dyadic thresholds, are used, and they are applied to different wavelet types in the biorthogonal and orthogonal classes. In detail, the MEZW and EZW algorithm results are quantitatively compared in terms of the compression ratio (CR) and percentage root mean square difference (PRD). Experiments are carried out on selected records from the MIT-BIH arrhythmia database and an original ECG signal. It is observed that the MEZW algorithm shows a clear advantage in the CR achieved for a given PRD over the traditional EZW, and it gives better results for the biorthogonal wavelets than the orthogonal wavelets.
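    The dyadic threshold schedule used by EZW-style coders can be sketched directly (a generic sketch of the standard EZW rule; the percentage threshold would instead take a fraction of the largest coefficient magnitude): start at the largest power of two not exceeding max|c| and halve it each significance pass.

    ```python
    import math

    def dyadic_thresholds(coeffs, passes):
        """Dyadic EZW thresholds: T0 = 2**floor(log2(max|c|)), halved each pass."""
        t0 = 2 ** int(math.floor(math.log2(max(abs(c) for c in coeffs))))
        return [t0 // 2 ** p for p in range(passes)]

    def significant(coeffs, threshold):
        """Indices of coefficients deemed significant at this threshold."""
        return [i for i, c in enumerate(coeffs) if abs(c) >= threshold]
    ```

    Each pass admits the next band of coefficient magnitudes, which is what makes the resulting bit stream progressive in quality.
    
    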

  4. A New Dynamic Complex Baseband Pulse Compression Method for Chirp-Coded Excitation in Medical Ultrasound Imaging.

    PubMed

    Kang, Jinbum; Kim, Yeajin; Lee, Wooyoul; Yoo, Yangmo

    2017-09-01

    Chirp-coded excitation can increase the signal-to-noise ratio (SNR) without degrading the axial resolution. Effective pulse compression (PC) is important to maintain the axial resolution and can be achieved with radio frequency (RF) and complex baseband (CBB) data (i.e., PCRF and PCCBB, respectively). PCCBB can further reduce the computational complexity compared to PCRF; however, PCCBB suffers from a degraded SNR due to tissue attenuation. In this study, we propose a new dynamic complex baseband pulse compression method (PCCBB-Dynamic) that can improve the SNR while compensating for tissue attenuation. The compression filter coefficients in the PCCBB-Dynamic method are generated by dynamically changing the demodulation frequencies along with the depth. For pulse compression, the obtained PCCBB-Dynamic coefficients are independently applied to the in-phase and quadrature components of the complex baseband data. To evaluate the performance of the proposed method, simulation, phantom and in vivo studies were conducted, and all three studies showed improved SNR, i.e., maximally 3.87, 7.41 and 5.75 dB, respectively. In addition, the measured peak range sidelobe level (PRSL) of the proposed method yielded lower values than the PCRF and PCCBB, and it also derived a suitable target location, i.e., a <0.07-mm target location error (TLE), while maintaining the axial resolution. In an in vivo abdominal experiment, the PCCBB-Dynamic method depicted brighter and clearer features in the hyperechoic region because highly correlated signals were produced by compensating for tissue attenuation. These results demonstrated that the proposed method can improve the SNR of chirp-coded excitation while preserving the axial resolution and the target location and reducing the computational complexity.

  5. Wideband audio compression using subband coding and entropy-constrained scalar quantization

    NASA Astrophysics Data System (ADS)

    Trinkaus, Trevor R.

    1995-04-01

    Source coding of wideband audio signals for storage applications and/or transmission over band limited channels is currently a research topic receiving considerable attention. A goal common to all systems designed for wideband audio coding is to achieve an efficient reduction in code rate, while maintaining imperceptible differences between the original and coded audio signals. In this thesis, an effective source coding scheme aimed at reducing the code rate to the entropy of the quantized audio source, while providing good subjective audio quality, is discussed. This scheme employs the technique of subband coding, where a 32-band single sideband modulated filter bank is used to perform subband analysis and synthesis operations. Encoding and decoding of the subbands is accomplished using entropy constrained uniform scalar quantization and subsequent arithmetic coding. A computationally efficient subband rate allocation procedure is used which relies on analytic models to describe the rate distortion characteristics of the subband quantizers. Signal quality is maintained by incorporating masking properties of the human ear into this rate allocation procedure. Results of simulations performed on compact disc quality audio segments are provided.
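    The entropy-constrained quantization step can be illustrated minimally (a generic sketch, not the thesis's 32-band system): uniform scalar quantization of a subband followed by a first-order entropy estimate, which approximates the rate an ideal arithmetic coder would achieve.

    ```python
    import math
    from collections import Counter

    def quantize(samples, step):
        """Uniform scalar quantization (midtread): index = round(x / step)."""
        return [round(x / step) for x in samples]

    def empirical_entropy(indices):
        """First-order entropy in bits/sample — the rate an ideal entropy
        coder (e.g. an arithmetic coder) would approach for these indices."""
        counts = Counter(indices)
        n = len(indices)
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    ```

    A coarser step concentrates the index distribution and lowers the entropy, which is the rate/distortion trade the subband rate allocation procedure navigates per band.
    
    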

  6. Smoothed Particle Hydrodynamic Simulator

    SciTech Connect

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.
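    The core SPH operation such a framework modularizes can be sketched in 1D (a textbook sketch with an assumed cubic-spline kernel, not this simulator's code): density at a point is a kernel-weighted sum over neighboring particle masses.

    ```python
    def w_cubic(r, h):
        """1D cubic-spline SPH kernel with smoothing length h
        (normalization 2/(3h), compact support |r| < 2h)."""
        q = abs(r) / h
        sigma = 2.0 / (3.0 * h)
        if q < 1.0:
            return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
        if q < 2.0:
            return sigma * 0.25 * (2.0 - q) ** 3
        return 0.0

    def density(x, positions, m, h):
        """SPH density estimate: rho(x) = sum_j m_j * W(x - x_j, h)."""
        return sum(m * w_cubic(x - xj, h) for xj in positions)
    ```

    On a uniform lattice with unit mass spacing, the summation recovers unit density in the interior to well under a percent.
    
    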

  7. Image and video compression/decompression based on human visual perception system and transform coding

    SciTech Connect

    Fu, Chi Yung., Petrich, L.I., Lee, M.

    1997-02-01

    The quantity of information has been growing exponentially, and the form and mix of information have been shifting into the image and video areas. However, neither the storage media nor the available bandwidth can accommodate the vastly expanding requirements for image information. A vital, enabling technology here is compression/decompression. Our compression work is based on a combination of feature-based algorithms inspired by the human visual-perception system (HVS), and some transform-based algorithms (such as our enhanced discrete cosine transform and wavelet transforms), vector quantization, and neural networks. All our work was done on desktop workstations using the C++ programming language and commercially available software. During FY 1996, we explored and implemented enhanced feature-based algorithms, vector quantization, and neural-network-based compression technologies. For example, we improved the feature compression for our feature-based algorithms by a factor of two to ten, a substantial improvement. We also found some promising results when using neural networks and applying them to some video sequences. In addition, we investigated objective measures to characterize compression results, because traditional means such as the peak signal-to-noise ratio (PSNR) are not adequate to fully characterize the results, since such measures do not take into account the details of human visual perception. We have successfully used our one-year LDRD funding as seed money to explore new research ideas and concepts, and the results of this work have led us to obtain external funding from the DoD. At this point, we are seeking matching funds from DOE to match the DoD funding so that we can bring such technologies to fruition. 9 figs., 2 tabs.

  8. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the giant mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) of the received signal. Signal-processing techniques are therefore highly valuable in non-destructive testing. This paper presents a wavelet filtering and phase-coded pulse compression hybrid method to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and a pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For the purpose of reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to acquire a higher main-to-side lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength. The proposed method appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
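    The Barker-coded pulse compression step can be sketched generically (the 13-bit Barker sequence is standard; this is an illustrative matched filter, not the paper's implementation): cross-correlating the received signal with the transmitted code concentrates the code energy into a single peak with sidelobes of magnitude at most 1.

    ```python
    # The 13-bit Barker code: its aperiodic autocorrelation sidelobes are all ≤ 1.
    BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

    def cross_correlate(sig, code):
        """Pulse compression: cross-correlate the received signal with the
        transmitted code (a matched filter), over all alignments."""
        n, m = len(sig), len(code)
        return [sum(sig[k + i] * code[i] for i in range(m) if 0 <= k + i < n)
                for k in range(-(m - 1), n)]
    ```

    This main-to-side lobe ratio of 13:1 is what the abstract's code-length comparison (5-13 bits) is optimizing.
    
    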

  9. Scaling and performance of a 3-D radiation hydrodynamics code on message-passing parallel computers: final report

    SciTech Connect

    Hayes, J C; Norman, M

    1999-10-28

    This report details an investigation into the efficacy of two approaches to solving the radiation diffusion equation within a radiation hydrodynamic simulation. Because leading-edge scientific computing platforms have evolved from large single-node vector processors to parallel aggregates containing tens to thousands of individual CPU's, the ability of an algorithm to maintain high compute efficiency when distributed over a large array of nodes is critically important. The viability of an algorithm thus hinges upon the tripartite question of numerical accuracy, total time to solution, and parallel efficiency.
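
The "tripartite question" of accuracy, time to solution, and parallel efficiency can be quantified with the standard scaling metrics; a small sketch (illustrative only, not taken from the report):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Observed speedup and parallel efficiency for a fixed problem size."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_procs

def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: the non-parallelizable fraction bounds achievable speedup."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)
```

For example, a diffusion solver spending 5% of its time in non-parallelizable work can never exceed a 20x speedup, no matter how many nodes the cluster provides.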

  10. Hydrodynamic effects in the atmosphere of variable stars

    NASA Technical Reports Server (NTRS)

    Davis, C. G., Jr.; Bunker, S. S.

    1975-01-01

    Numerical models of variable stars are established, using a nonlinear radiative transfer coupled hydrodynamics code. The variable Eddington method of radiative transfer is used. Comparisons are for models of W Virginis, beta Doradus, and eta Aquilae. From these models it appears that shocks are formed in the atmospheres of classical Cepheids as well as W Virginis stars. In classical Cepheids, with periods from 7 to 10 days, the bumps occurring in the light and velocity curves appear as the result of a compression wave that reflects from the star's center. At the head of the outward going compression wave, shocks form in the atmosphere. Comparisons between the hydrodynamic motions in W Virginis and classical Cepheids are made. The strong shocks in W Virginis do not penetrate into the interior as do the compression waves formed in classical Cepheids. The shocks formed in W Virginis stars cause emission lines, while in classical Cepheids the shocks are weaker.

  12. Numerical investigation of nanosecond laser induced plasma and shock wave dynamics from air using 2D hydrodynamic code

    NASA Astrophysics Data System (ADS)

    Sai Shiva, S.; Leela, Ch.; Prem Kiran, P.; Sijoy, C. D.; Ikkurthi, V. R.; Chaturvedi, S.

    2017-08-01

    A two-dimensional axisymmetric hydrodynamic model was developed to investigate nanosecond laser-induced plasma and shock wave dynamics in ambient air over input laser energies of 50-150 mJ and time scales from 25 ns to 8 μs. The formation of localized hot spots during laser energy deposition and the asymmetric spatio-temporal evolution, rolling, and splitting of the plasma observed in the simulations were in good agreement with the experimental results. The formed plasma has two regions: a hot plasma core and an outer plasma region. The asymmetric expansion was due to the variation of the thermodynamic variables along the laser propagation and radial directions. The rolling of the plasma takes place in the core region, where very high temperatures exist. Similarly, the splitting of the plasma takes place in the core region between the localized hot spots, which drives hydrodynamic instabilities. The rolling and splitting times vary with the input laser energy deposited. The plasma expansion remained asymmetric over all simulated time scales, whereas the shock wave evolution was observed to transition from asymmetric to symmetric expansion. Finally, the simulated temporal evolution of the electron number density, the temperature of the hot core plasma, and the temperature across the shock front after its detachment from the plasma are presented over the time scales 25 ns-8 μs for different input laser pulse energies.

  13. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    NASA Technical Reports Server (NTRS)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
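
A toy rendition of the patent's idea, assuming a one-level integer Haar split as the "subband" stage and zlib as the Lempel-Ziv-based back end (both stand-ins are our choices for illustration):

```python
import zlib
import numpy as np

def haar_split(x):
    """One-level integer Haar (S-transform) analysis: low and high subbands."""
    even, odd = x[0::2], x[1::2]
    high = even - odd
    low = odd + high // 2          # floor division keeps the transform integer-to-integer
    return low, high

def haar_merge(low, high):
    """Exact inverse of haar_split."""
    odd = low - high // 2
    even = odd + high
    out = np.empty(low.size * 2, dtype=low.dtype)
    out[0::2], out[1::2] = even, odd
    return out

def encode_subbands(x):
    """Entropy-code each subband separately with a Lempel-Ziv-based coder."""
    return [zlib.compress(band.astype(np.int16).tobytes())
            for band in haar_split(x)]

sig = np.arange(256, dtype=np.int64).repeat(4)   # a smooth, highly redundant test signal
blobs = encode_subbands(sig)
```

On smooth data the high band is mostly near zero, so the statistical coder compresses it well, which is the point of separating the subbands before coding.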

  14. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
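
The bit-allocation step can be illustrated with the classic greedy rule (our sketch, not the dissertation's algorithm): repeatedly give the next bit to the coefficient whose quantization distortion is currently largest, using the rule of thumb that each extra bit roughly quarters the distortion (~6 dB):

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy bit allocation over transform coefficients: each bit goes to
    the coefficient with the largest remaining distortion."""
    bits = np.zeros(len(variances), dtype=int)
    distortion = np.asarray(variances, dtype=float).copy()
    for _ in range(total_bits):
        k = int(np.argmax(distortion))
        bits[k] += 1
        distortion[k] /= 4.0   # one extra bit roughly quarters the MSE (6.02 dB)
    return bits
```

High-variance coefficients end up with more bits, matching the intuition that the "more difficult" regions of an image deserve the finer quantizers.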

  15. New Binary Complementary Codes Compressing a Pulse to a Width of Several Sub-pulses

    DTIC Science & Technology

    2005-04-14

    Department of Computer and Information Engineering , Nippon Institute of Technology 4-1 Gakuendai, Miyashiro, Saitama-ken, 345-8501 Japan 8. PERFORMING...codes pressed to several sub-pulses,” Trans. IEICE of Japan (in Japanese), . J85 -B, no.8, pp.1434-1444, Aug. 2002. akasugi and S.Fukao, “Sidelobe

  16. Development of a Fast Breeder Reactor Fuel Bundle-Duct Interaction Analysis Code - BAMBOO: Analysis Model and Validation by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Tanaka, Kosuke

    2001-10-15

    To analyze wire-wrapped fast breeder reactor (FBR) fuel pin bundle deformation under bundle-duct interaction (BDI) conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. A three-dimensional beam element model is used in this code to calculate fuel pin bowing and cladding oval distortion, which are the dominant deformation mechanisms in a fuel pin bundle. In this work, the cladding oval distortion, taking the wire pitch into account, was evaluated experimentally and introduced into the code analysis. The BAMBOO code was validated in this study by comparing results from an out-of-pile bundle compression testing apparatus with the code results. It is concluded that BAMBOO reasonably predicts the pin-to-duct clearances in the compression tests by treating the cladding oval distortion as the suppression mechanism for BDI.

  17. Performance of compressed analogue (CA) and continuous interleaved sampling (CIS) coding strategies for cochlear implants in quiet and noise.

    PubMed

    Kompis, M; Vischer, M W; Häusler, R

    1999-01-01

    Speech understanding with compressed analogue (CA) and continuous interleaved sampling (CIS) coding strategies for cochlear implants was compared in quiet and in noise at signal-to-noise ratios (SNRs) of 15, 10 and 5 dB. The speech recognition of three experienced users of the Ineraid cochlear implant (CA coding strategy) was assessed using a set of sentence, vowel and consonant tests. Three weeks after the fitting of a CIS processor, the tests were repeated with the new device. Speech recognition scores for the sentence and consonant tests tended to be higher with the CIS processor in no or little noise, but lower in the test situations with less favourable SNRs, when compared to the CA processor (average score differences for the consonant test: +7.8% correct at 15 dB SNR; -6.8% correct at 5 dB SNR; p = 0.05). Results for the vowel test were slightly lower on average for the CIS processing strategy at all SNRs. A possible explanation for the differences in performance between CIS and CA in the consonant and sentence tests at different SNRs is the generally higher free-field threshold associated with the CA coding strategy, which may act as a single-channel noise suppression.

  18. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  19. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
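
A minimal sketch of the index manipulation the patent describes, assuming the simplest possible mapping (the parity of each quantization index carries one auxiliary bit, and a change of at most one unit exploits the stated one-unit uncertainty; the mapping is ours, not the patent's):

```python
import numpy as np

def embed_bits(indices, bits):
    """Nudge each quantization index by at most one unit so its parity
    equals the auxiliary bit to be hidden."""
    out = np.array(indices, dtype=int)
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            out[i] += 1          # adjacent index value, parity now matches
    return out

def extract_bits(indices, n):
    """Recover the auxiliary bits from the index parities."""
    return [int(v % 2) for v in indices[:n]]

stego = embed_bits([4, 7, 2, 9], [1, 1, 0, 0])
```

The host data decodes almost unchanged (each index moved by at most one quantization step), while an authorized reader recovers the hidden bits by the reverse parity check.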

  20. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  1. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch begins; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean-temperature sensing and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.
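
The sparse recovery performed at the sink can be sketched with orthogonal matching pursuit (our choice of solver for illustration; the paper does not prescribe this particular algorithm). A small deterministic dictionary keeps the example verifiable:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit all chosen columns by least squares."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(Phi.shape[1])
    coef = np.zeros(0)
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# Toy dictionary: 4 identity columns plus two correlated extra columns.
Phi = np.hstack([np.eye(4),
                 np.array([[0.5, 0.5], [0.5, -0.5], [0.5, 0.5], [0.5, -0.5]])])
x_true = np.array([1.0, 0, 0, 0, 0, 2.0])
x_hat = omp(Phi, Phi @ x_true, sparsity=2)
```

With 2-sparse data the sink needs far fewer measurements than unknowns, which is the transmission saving CDG schemes build on.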

  2. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch begins; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean-temperature sensing and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  3. SIMULATING THE COMMON ENVELOPE PHASE OF A RED GIANT USING SMOOTHED-PARTICLE HYDRODYNAMICS AND UNIFORM-GRID CODES

    SciTech Connect

    Passy, Jean-Claude; Mac Low, Mordecai-Mark; De Marco, Orsola; Fryer, Chris L.; Diehl, Steven; Rockefeller, Gabriel; Herwig, Falk; Oishi, Jeffrey S.; Bryan, Greg L.

    2012-01-01

    We use three-dimensional hydrodynamical simulations to study the rapid infall phase of the common envelope (CE) interaction of a red giant branch star of mass equal to 0.88 M{sub Sun} and a companion star of mass ranging from 0.9 down to 0.1 M{sub Sun }. We first compare the results obtained using two different numerical techniques with different resolutions, and find very good agreement overall. We then compare the outcomes of those simulations with observed systems thought to have gone through a CE. The simulations fail to reproduce those systems in the sense that most of the envelope of the donor remains bound at the end of the simulations and the final orbital separations between the donor's remnant and the companion, ranging from 26.8 down to 5.9 R{sub Sun }, are larger than the ones observed. We suggest that this discrepancy vouches for recombination playing an essential role in the ejection of the envelope and/or significant shrinkage of the orbit happening in the subsequent phase.

  4. Recent Advances in the Modeling of the Transport of Two-Plasmon-Decay Electrons in the 1-D Hydrodynamic Code LILAC

    NASA Astrophysics Data System (ADS)

    Delettrez, J. A.; Myatt, J. F.; Yaakobi, B.

    2015-11-01

    The modeling of the fast-electron transport in the 1-D hydrodynamic code LILAC was modified because of the addition of cross-beam energy transfer (CBET) in implosion simulations. Using the old fast-electron source model with CBET results in a shift of the peak of the hard x-ray (HXR) production from the end of the laser pulse, as observed in experiments, to earlier in the pulse. This is caused by a drop in the laser intensity at the quarter-critical surface from CBET interaction at lower densities. Data from simulations with the laser plasma simulation environment (LPSE) code will be used to modify the source algorithm in LILAC. In addition, the transport model in LILAC has been modified to include deviations from the straight-line algorithm and non-specular reflection at the sheath to take into account the scattering from collisions and magnetic fields in the corona. Simulation results will be compared with HXR emissions from both room-temperature plastic and cryogenic target experiments. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  5. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. IV. The Neutrino Signal

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas

    2014-06-01

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M ⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄e and heavy-lepton neutrinos and even their crossing during the accretion phase.

  6. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE. II. RELATIVISTIC EXPLOSION MODELS OF CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M{sub Sun} progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  7. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. II. Relativistic Explosion Models of Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Marek, Andreas

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M ⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  8. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
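
Judging lossy compression by a physics metric rather than a signal-processing norm can be sketched as follows (a generic quantize-then-deflate scheme of our own; the paper's actual compressors differ):

```python
import zlib
import numpy as np

def lossy_compress(field, keep_bits=12):
    """Uniformly quantize a float field to 2**keep_bits levels, then deflate."""
    lo, hi = float(field.min()), float(field.max())
    scale = (2 ** keep_bits - 1) / (hi - lo)
    q = np.round((field - lo) * scale).astype(np.uint16)
    return zlib.compress(q.tobytes()), lo, scale

def lossy_decompress(blob, lo, scale, shape):
    q = np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(shape)
    return q / scale + lo

rng = np.random.default_rng(1)
density = 1.0 + 0.01 * rng.standard_normal((64, 64))   # mock density field
blob, lo, scale = lossy_compress(density)
restored = lossy_decompress(blob, lo, scale, density.shape)
# Physics-motivated check: error in total mass, not a pixel-wise signal metric.
mass_error = abs(restored.sum() - density.sum()) / density.sum()
```

A code-specific conserved quantity like total mass can stay essentially exact even when individual cell values carry visible quantization error, which is the distinction the paper's evaluation is built on.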

  9. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques permit capturing a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions; exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis; for this purpose, an optimization problem that seeks to minimize a joint l2 - l1 norm is solved to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, so only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that seeks to minimize the l2-norm, penalized by the l1-norm to force the solution to be sparse, and penalized by the nuclear norm to force the solution to be low rank. Theoretical analysis along with a set of simulations over different data sets shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of the peak signal-to-noise ratio (PSNR).
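
The two penalties in the optimization above correspond to two closed-form proximal steps, sketched here (the generic operators any such solver iterates, not the authors' full algorithm):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: shrinks entries toward zero (sparsity)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Proximal operator of the nuclear norm: shrinks singular values (low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

An iterative solver alternates gradient steps on the l2 data term with these two shrinkage steps, so the recovered image is simultaneously driven toward sparse and low-rank structure.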

  10. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels, was 195:1 (0.41 bpp), and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
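
A toy rendition of the study's two-stage pipeline: a k-means VQ codebook over multispectral pixel vectors, followed by difference mapping of the index stream before a loss-less coder (zlib stands in here for the shift-extended Huffman coder; all data and parameters are invented for illustration):

```python
import zlib
import numpy as np

def train_codebook(vectors, k, iters=10):
    """Plain k-means: each codebook entry is a representative pixel vector."""
    book = vectors[np.linspace(0, len(vectors) - 1, k).astype(int)].copy()
    for _ in range(iters):
        idx = np.argmin(((vectors[:, None, :] - book[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                book[j] = vectors[idx == j].mean(axis=0)
    return book

def vq_encode(vectors, book):
    """Replace each pixel vector by the index of its nearest codeword."""
    return np.argmin(((vectors[:, None, :] - book[None]) ** 2).sum(-1), axis=1)

def difference_map(indices):
    """Successive differences of the index stream; spatially smooth imagery
    yields many small values for the loss-less coder to exploit."""
    return np.diff(indices, prepend=indices[:1])

# Two artificial spectral classes, 3 channels each (the "vector" of the abstract).
pix = np.vstack([np.tile([10.0, 20.0, 30.0], (50, 1)),
                 np.tile([200.0, 180.0, 160.0], (50, 1))])
book = train_codebook(pix, k=2)
idx = vq_encode(pix, book)
packed = zlib.compress(difference_map(idx).astype(np.int16).tobytes())
```

The VQ stage is the lossy step (codebook size controls the rate/error trade-off), and the difference-mapped loss-less stage squeezes the remaining redundancy out of the index stream.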

  11. A new multi-dimensional general relativistic neutrino hydrodynamics code for core-collapse supernovae. IV. The neutrino signal

    SciTech Connect

    Müller, Bernhard; Janka, Hans-Thomas E-mail: bjmuellr@mpa-garching.mpg.de

    2014-06-10

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M {sub ☉}, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, (E), of ν-bar {sub e} and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M {sub ☉} as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of 〈E{sub ν-bar{sub e}}〉 with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.

  12. How to Build a Time Machine: Interfacing Hydrodynamics, Ionization Calculations and X-ray Spectral Codes for Supernova Remnants

    NASA Astrophysics Data System (ADS)

    Badenes, Carlos

    2006-02-01

    Thanks to Chandra and XMM-Newton, spatially resolved spectroscopy of SNRs in the X-ray band has become a reality. Several impressive data sets for ejecta-dominated SNRs can now be found in the archives, the Cas A VLP just being one (albeit probably the most spectacular) example. However, it is often hard to establish quantitative, unambiguous connections between the X-ray observations of SNRs and the dramatic events involved in a core collapse or thermonuclear SN explosion. The reason for this is that the very high quality of the data sets generated by Chandra and XMM for the likes of Cas A, SNR 292.0+1.8, Tycho, and SN 1006 has surpassed our ability to analyze them. The core of the problem is in the transient nature of the plasmas in SNRs, which results in an intimate relationship between the structure of the ejecta and AM, the SNR dynamics arising from their interaction, and the ensuing X-ray emission. Thus, the ONLY way to understand the X-ray observations of ejecta-dominated SNRs at all levels, from the spatially integrated spectra to the subarcsecond scales that can be resolved by Chandra, is to couple hydrodynamic simulations to nonequilibrium ionization (NEI) calculations and X-ray spectral codes. I will review the basic ingredients that enter this kind of calculation, and the prospects for using them to understand the X-ray emission from the shocked ejecta in young SNRs. This understanding (when it is possible) can turn SNRs into veritable time machines, revealing the secrets of the titanic explosions that generated them hundreds of years ago.

  13. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. The proposed algorithm achieves the best compression ratio among the compared DNA compression algorithms, including for large genomes. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
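The abstract does not spell out DNABIT's actual bit-code tables for repeat fragments, so as a baseline sketch, here is the naive fixed 2-bit-per-base packing that any repeat-aware scheme such as DNABIT Compress (reported at 1.58 bits/base) must improve on; all names are illustrative:

```python
# Baseline sketch only: fixed 2-bit codes per base. The real DNABIT
# Compress assigns variable bit codes to repeat fragments, which is
# how it gets below the 2 bits/base achieved here.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base (length stored separately)."""
    out = bytearray()
    acc, nbits = 0, 0
    for b in seq:
        acc = (acc << 2) | CODE[b]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:
        out.append(acc << (8 - nbits))  # left-justify the final partial byte
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from the packed bytes."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n])
```

Forty bases pack into exactly 10 bytes, i.e. 2 bits/base; DNABIT's repeat coding is what closes the gap from 2 to 1.58.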

  14. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of this lossless stage, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  15. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of this lossless stage, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
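As a toy illustration of the idea both patent records describe (auxiliary data carried by integer indices whose values are uncertain by one unit), the sketch below nudges each index by at most one unit so that its parity encodes one hidden bit. This parity scheme is a deliberate simplification of my own; the patented key-pair-table method is more elaborate:

```python
def embed_bits(indices, bits):
    """Embed one auxiliary bit per index by moving each index at most
    one unit so its parity equals the bit. A toy stand-in for the
    'adjacent index values' manipulation in the patents; it exploits
    the same one-unit uncertainty of quantized indices."""
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:
            idx += 1 if bit else -1  # nudge toward the wanted parity
        out.append(idx)
    return out

def extract_bits(indices):
    """Recover the embedded bits: each bit is simply the index parity."""
    return [idx % 2 for idx in indices]
```

Since every index moves by at most one quantization step, the distortion added to the host data stays within the uncertainty the lossy coder already introduced.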

  16. The HULL Hydrodynamics Computer Code

    DTIC Science & Technology

    1976-09-01

    Mark A. Fry, Capt, USAF; Richard E. Durrett, Major, USAF; Gary P. Ganong, Major, USAF; Daniel A. Matuska, Major, USAF; Mitchell D. Stucker, Capt, USAF. Referenced works include: Ganong, G.P., and Roberts, W.A., The Effect of the Nuclear Environment on Crater Ejecta Trajectories for Surface Bursts, AFWL-TR-68-125, Air Force...; Ganong, G.P., et al., private communication; Needham, C.E., AFWL-TR-69-19.

  17. TRHD: Three-temperature radiation-hydrodynamics code with an implicit non-equilibrium radiation transport using a cell-centered monotonic finite volume scheme on unstructured-grids

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2015-05-01

    A three-temperature (3T), unstructured-mesh, non-equilibrium radiation hydrodynamics (RHD) code has been developed for the simulation of intense thermal radiation or high-power laser driven radiative shock hydrodynamics in two-dimensional (2D) axisymmetric geometries. The governing hydrodynamics equations are solved using a compatible unstructured Lagrangian method based on a control volume differencing (CVD) scheme. A second-order predictor-corrector (PC) integration scheme is used for the temporal discretization of the hydrodynamics equations. For the radiation energy transport, a frequency-averaged gray model is used in which the flux-limited diffusion (FLD) approximation recovers the free-streaming limit of radiation propagation in optically thin regions. The RHD model allows the electrons and ions to have different temperatures; in addition, the electron and thermal radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between electrons and ions and the coupling between the radiation and matter energies must be computed self-consistently. For this, the coupled flux-limited electron heat conduction and non-equilibrium radiation diffusion equations are solved simultaneously using an implicit, axisymmetric, cell-centered, monotonic, nonlinear finite volume (NLFV) scheme. In this paper, we describe the details of the 2D, 3T, non-equilibrium RHD code along with a suite of validation test problems that demonstrate the accuracy and performance of the algorithms. We also present a performance analysis of the different linearity-preserving interpolation schemes used for the evaluation of nodal values in the NLFV scheme. Finally, to demonstrate the full capability of the code, we present a simulation of laser-driven thin aluminum (Al) foil acceleration. The simulation results are found to be in good agreement.
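The abstract does not state which flux limiter TRHD uses; the Levermore-Pomraning limiter below is one standard choice that illustrates how FLD interpolates between the diffusion and free-streaming limits:

```python
import math

def levermore_pomraning(R: float) -> float:
    """Flux limiter lambda(R) = (coth R - 1/R) / R, with
    R = |grad E| / (kappa * E).  lambda -> 1/3 as R -> 0 (classical
    diffusion) and lambda -> 1/R as R -> inf, so the radiative flux
    F = -(c * lambda / kappa) * grad E never exceeds c*E.  This is one
    standard limiter, not necessarily the one implemented in TRHD."""
    if R < 1e-4:
        # Series coth R = 1/R + R/3 - ... avoids cancellation at small R.
        return 1.0 / 3.0
    return (1.0 / math.tanh(R) - 1.0 / R) / R
```

The limiter is a smooth, monotonically decreasing function of R, which is what keeps the diffusion solve well behaved as a region transitions from optically thick to optically thin.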

  18. Progress in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.

    1998-07-01

    Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations in which the calculational elements are fuzzy particles that move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure, and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, so large-deformation calculations can be done easily with no connectivity complications. Interface positions are known, and there are none of the problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural, and in fact much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, some problems inherent in the technique have so far limited its usefulness. The most serious is the well-known instability in tension, leading to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced half a particle apart, leading to incorrect strain rates, accelerations, and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper demonstrates solutions for these problems that the authors and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant introduced by Lancaster and Salkauskas in 1981. SPH and MLS are closely related, MLS being essentially SPH with corrected particle volumes. When formulated correctly, MLS is conservative, stable in both compression and tension, does not have the SPH boundary problems, and is not sensitive to particle placement.
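The interpolation issue described above is easy to reproduce. The sketch below implements the standard 1-D cubic-spline kernel and the basic SPH density sum, which is exact for unit-spaced particles with smoothing length equal to the spacing but drifts as soon as a particle is displaced; the function names are illustrative:

```python
import numpy as np

def w_cubic(r, h):
    """Standard 1-D cubic-spline (M4) SPH kernel, normalization 2/(3h)."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w * (2.0 / (3.0 * h))

def sph_density(x, m, h):
    """rho_i = sum_j m_j W(x_i - x_j, h): the basic SPH density estimate.
    For unit-spaced, unit-mass particles with h equal to the spacing the
    interior estimate is exactly 1; displacing a particle breaks this,
    which is the interpolation error the abstract refers to."""
    dx = x[:, None] - x[None, :]
    return (m[None, :] * w_cubic(dx, h)).sum(axis=1)
```

Perturbing one particle by 0.3 spacings shifts its density estimate by several percent, even though the underlying mass distribution is unchanged, illustrating why MLS-style corrected volumes are attractive.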

  19. The research of hyperspectral image EBCOT lossless compression coding technology based on inter-spectrum adjustable parameter matrix reversible transform and intra-frame IDWT

    NASA Astrophysics Data System (ADS)

    Xie, Cheng-jun; Wei, Ying; Bi, Xin-wen; Li, Hui-zhu

    2009-10-01

    This paper presents a new reversible inter-spectrum transform with an adjustable parameter matrix, which achieves better redundancy elimination by tuning the transform matrix through a magnitude parameter λ and a shift parameter δ. Intra-frame redundancy is eliminated by an integer discrete wavelet transform (IDWT). Both transforms are computed entirely with additions and shifts, which makes them fast and eases hardware implementation. After the inter-spectrum and intra-frame transforms, the hyperspectral image is coded by an improved EBCOT algorithm. Using the hyperspectral image Canal, acquired by the AVIRIS instrument of the JPL laboratory, as the test image, the experimental results show that for lossless image compression the method proposed in this paper clearly outperforms MST, NIMST, the results of a research team of the Chinese Academy of Sciences, DPCMARJ, WinZip, and JPEG-LS. With λ=7 and δ=3, the compression ratio obtained with this algorithm increases on average by 11%, 15%, 18%, 31%, 36%, 38%, and 43%, respectively, compared to the above algorithms. It follows that the algorithm presented in this paper is a very good lossless compression coding algorithm for hyperspectral images.
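The paper's actual transform matrix is not reproduced in the abstract, so the following is only a hypothetical sketch of why an add-and-shift predictor with a magnitude parameter lam and a shift parameter delta is exactly invertible in integer arithmetic, which is the property a reversible inter-band transform needs:

```python
def forward(bands, lam=7, delta=3):
    """Hypothetical reversible inter-band decorrelation: each band is
    replaced by its residual against the prediction (lam * prev) >> delta
    from the previous band. NOT the paper's actual matrix transform;
    it only illustrates exact integer invertibility of shift/add
    predictors."""
    out = [list(bands[0])]  # first band stored raw
    for k in range(1, len(bands)):
        prev = bands[k - 1]
        out.append([x - ((lam * p) >> delta) for x, p in zip(bands[k], prev)])
    return out

def inverse(res, lam=7, delta=3):
    """Exact inverse: rebuild each band from its residual and the
    already-reconstructed previous band."""
    out = [list(res[0])]
    for k in range(1, len(res)):
        prev = out[k - 1]
        out.append([r + ((lam * p) >> delta) for r, p in zip(res[k], prev)])
    return out
```

Because both directions compute the identical predictor from the identical previous band, the round trip is bit-exact, and for spectrally correlated bands the residuals are much smaller than the raw samples.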

  20. Lossless data compression studies for NOAA hyperspectral environmental suite using 3D integer wavelet transforms with 3D embedded zerotree coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.

    2003-09-01

    Hyperspectral sounder data is a particular class of data that requires high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Compression of these data sets should therefore be lossless or near lossless. The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, the use of robust data compression techniques will be beneficial to data transfer and archiving. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via lifting schemes. The wavelet coefficients are then processed with the 3D embedded zerotree wavelet (EZW) algorithm followed by context-based arithmetic coding. We extend the 3D EZW scheme to take any size of 3D satellite data, each of whose dimensions need not be divisible by 2^N, where N is the number of levels of the wavelet decomposition. The compression ratios of various kinds of wavelet transforms are presented, along with a comparison with the JPEG2000 codec.
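A minimal example of a reversible integer wavelet via lifting is the Haar (S) transform; the sketch below also carries a trailing odd sample into the low-pass band unchanged, one simple way to handle lengths not divisible by 2^N (the paper's own boundary handling may differ):

```python
def haar_forward(x):
    """One level of the reversible integer Haar (S) transform via
    lifting: detail d = odd - even, smooth s = even + floor(d/2).
    Odd-length inputs are allowed; the last sample passes through."""
    pairs = len(x) // 2
    d = [x[2 * i + 1] - x[2 * i] for i in range(pairs)]
    s = [x[2 * i] + (d[i] >> 1) for i in range(pairs)]
    if len(x) % 2:
        s.append(x[-1])
    return s, d

def haar_inverse(s, d):
    """Exact inverse lifting: even = s - floor(d/2), odd = even + d."""
    x = []
    for i in range(len(d)):
        even = s[i] - (d[i] >> 1)
        x += [even, even + d[i]]
    if len(s) > len(d):
        x.append(s[-1])
    return x
```

Python's `>>` is a floor shift for negative numbers too, and since both directions apply the identical shift, the round trip is lossless for any integer input, which is exactly what lossless sounder-data compression requires.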

  1. Black Widow Pulsar radiation hydrodynamics simulation using Castro: Methodology

    NASA Astrophysics Data System (ADS)

    Barrios Sazo, Maria; Zingale, Michael; Zhang, Weiqun

    2017-01-01

    A black widow pulsar (BWP) is a millisecond pulsar in a tight binary system with a low-mass star. The fast-rotating pulsar emits intense radiation, which injects energy into and ablates the companion star. Evidence of the ablation is seen in pulsar eclipses caused by an object larger than the companion star's Roche lobe; this phenomenon is attributed to a cloud surrounding the evaporating star. We will present the methodology for modeling the interaction between the radiation coming from the pulsar and the companion star using the radiation hydrodynamics code Castro. Castro is an adaptive mesh refinement (AMR) code that solves the compressible hydrodynamic equations for astrophysical flows with simultaneous refinement in space and time. The code also includes self-gravity, nuclear reactions, and radiation. We are employing the gray-radiation solver, which uses a mixed-frame formulation of radiation hydrodynamics under the flux-limited diffusion approximation. In our setup, we model the companion star with the radiation field imposed as a boundary condition on one side of the domain. In addition to a 2D axisymmetric setup, we also have a 3D setup, which is more physical given that the companion faces the pulsar on one side. We discuss the progress of our calculations, first results, and future work. The work at Stony Brook was supported by DOE/Office of Nuclear Physics grant DE-FG02-87ER40317.

  2. Verification of the FBR fuel bundle-duct interaction analysis code BAMBOO by the out-of-pile bundle compression test with large diameter pins

    NASA Astrophysics Data System (ADS)

    Uwaba, Tomoyuki; Ito, Masahiro; Nemoto, Junichi; Ichikawa, Shoichi; Katsuyama, Kozo

    2014-09-01

    The BAMBOO computer code was verified against results of an out-of-pile compression test on large-diameter-pin bundles deformed under the bundle-duct interaction (BDI) condition. The pin diameters of the examined test bundles were 8.5 mm and 10.4 mm, which are targeted as preliminary fuel pin diameters for the upgraded core of the prototype fast breeder reactor (FBR) and for the demonstration and commercial FBRs studied in the FaCT project. In the bundle compression test, bundle cross-sectional views were obtained from X-ray computed tomography (CT) images, and local parameters of bundle deformation such as pin-to-duct and pin-to-pin clearances were measured by CT image analyses. In the verification, calculated bundle deformations obtained from the BAMBOO code analyses were compared with the experimental results from the CT image analyses. The comparison showed that the BAMBOO code reasonably predicts the deformation of large-diameter pin bundles under the BDI condition by assuming that pin bowing and cladding oval distortion are the major deformation mechanisms, as in the case of small-diameter pin bundles. In addition, the BAMBOO analysis results confirmed that cladding oval distortion effectively suppresses BDI in large-diameter pin bundles as well as in small-diameter pin bundles.

  3. Development of a Fast Breeder Reactor Fuel Bundle Deformation Analysis Code - BAMBOO: Development of a Pin Dispersion Model and Verification by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Ito, Masahiro; Ukai, Shigeharu

    2004-02-15

    To analyze wire-wrapped fast breeder reactor fuel pin bundle deformation under bundle/duct interaction conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. This code uses three-dimensional beam elements to calculate fuel pin bowing and cladding oval distortion as the primary deformation mechanisms in a fuel pin bundle. Pin dispersion, i.e., the disarrangement of pins in a bundle that can occur during irradiation, was modeled in this code to evaluate its effect on bundle deformation. By applying the contact analysis method commonly used in the finite element method, this model considers the contact conditions at various axial positions as well as at the nodal points, and can analyze the irregular arrangement of fuel pins arising from deviation of the wire configuration. The dispersion model was introduced into the BAMBOO code and verified using the results of an out-of-pile compression test of a bundle in which dispersion was caused by deviation of the wire position. The effect of dispersion on bundle deformation was then evaluated based on the analysis results of the code.

  4. Skew resisting hydrodynamic seal

    DOEpatents

    Conroy, William T.; Dietle, Lannie L.; Gobeli, Jeffrey D.; Kalsi, Manmohan S.

    2001-01-01

    A novel hydrodynamically lubricated compression-type rotary seal that is suitable for lubricant retention and environmental exclusion. In particular, the seal geometry ensures constraint of a hydrodynamic seal in a manner preventing skew-induced wear and provides adequate room within the seal gland to accommodate thermal expansion. The seal accommodates large as-manufactured variations in the coefficient of thermal expansion of the sealing material, provides a relatively stiff integral spring effect to minimize pressure-induced shuttling of the seal within the gland, and maintains interfacial contact pressure within the dynamic sealing interface in an optimum range for efficient hydrodynamic lubrication and environmental exclusion. The seal geometry also provides complete support about the circumference of the seal to receive environmental pressure, as compared to the interrupted seal support set forth in U.S. Pat. Nos. 5,873,576 and 6,036,192, and provides a hydrodynamic seal which is suitable for use with non-Newtonian lubricants.

  5. Hydrodynamic Design Optimization Tool

    DTIC Science & Technology

    2011-08-01

    appreciated. The authors would also like to thank David Walden and Francis Noblesse of Code 50 for being instrumental in defining this project, Wesley...and efficiently during the early stage of the design process. The Computational Fluid Dynamics (CFD) group at George Mason University has an...specific design constraints. In order to apply a CFD-based tool to the hydrodynamic design optimization of ship hull forms, an initial hull form is

  6. Hydrodynamic Hunters.

    PubMed

    Jashnsaz, Hossein; Al Juboori, Mohammed; Weistuch, Corey; Miller, Nicholas; Nguyen, Tyler; Meyerhoff, Viktoria; McCoy, Bryan; Perkins, Stephanie; Wallgren, Ross; Ray, Bruce D; Tsekouras, Konstantinos; Anderson, Gregory G; Pressé, Steve

    2017-03-28

    The Gram-negative Bdellovibrio bacteriovorus (BV) is a model bacterial predator that hunts other bacteria and may serve as a living antibiotic. Despite more than 50 years since its discovery, it has been suggested that BV probably collides with its prey at random. It remains unclear to what degree, if any, BV uses chemical cues to target its prey. The targeted search for prey in three dimensions is a difficult problem: it requires the predator to sensitively detect prey and forecast its mobile prey's future position on the basis of previously detected signals. Here, instead, we find that rather than chemically detecting prey, BV is forced by hydrodynamics into regions high in prey density, thereby improving its odds of a chance collision with prey and ultimately reducing its search space. We do so by showing that BV's dynamics are strongly influenced by self-generated hydrodynamic flow fields forcing BV onto surfaces and, for large enough defects on surfaces, forcing BV into orbital motion around these defects. Key experimental controls and calculations recapitulate the hydrodynamic origin of these behaviors. While BV's prey (Escherichia coli) are too small to trap BV in hydrodynamic orbit, the prey are also susceptible to their own hydrodynamic fields, substantially confining them to surfaces and defects where mobile predator and prey density is dramatically enhanced. Colocalization, driven by hydrodynamics, ultimately reduces BV's search space for prey from three to two dimensions (on surfaces), even down to a single dimension (around defects). We conclude that BV's search for individual prey remains random, as suggested in the literature, but is confined by generic hydrodynamic forces to reduced dimensionality.

  7. Softened Lagrangian hydrodynamics for cosmology

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.

    1995-04-01

    A new approach to cosmological hydrodynamics is discussed that is based on a moving, quasi-Lagrangian mesh. The softened Lagrangian hydrodynamics (SLH) method combines a high-resolution Lagrangian hydrodynamic code with a low-resolution Eulerian solver to deal with severe mesh distortions. Most of the volume of a simulation is treated with the Lagrangian code, and only in sites where the Lagrangian approach fails due to mesh distortions does the Eulerian part of the code step in. This approach provides a high-resolution gravity solver without use of TREE or P3M methods; Poisson's equation is solved on the moving baryonic mesh using a simple relaxation technique. The dark matter is included by means of the cloud-in-cell method on the Lagrangian mesh. All three components of the cosmological code (gravity, dark matter, and baryons) are thus treated self-consistently with exactly the same resolution. The computer code based on the SLH approach is described in detail, and a comparison with existing Eulerian and smooth particle hydrodynamics (SPH) codes is presented. For most purposes the SLH approach turns out to be intermediate between Eulerian and SPH codes, but it outperforms both in resolving caustics. Thus, it may turn out to be a valuable tool to study galaxy formation.
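The cloud-in-cell step mentioned above can be sketched in one dimension: each particle's mass is shared linearly between its two bracketing grid points. This toy version works on a fixed periodic mesh, whereas SLH performs the deposit on its moving Lagrangian mesh:

```python
import numpy as np

def cic_deposit(pos, mass, n_cells):
    """1-D cloud-in-cell deposit on a unit-spaced periodic mesh.
    A particle at x = i + f (0 <= f < 1) gives weight (1-f) to node i
    and weight f to node i+1, so total mass is conserved exactly."""
    rho = np.zeros(n_cells)
    for x, m in zip(pos, mass):
        i = int(np.floor(x))
        f = x - i
        rho[i % n_cells] += m * (1.0 - f)       # left node
        rho[(i + 1) % n_cells] += m * f         # right node (periodic wrap)
    return rho
```

The linear weights make the deposited density a continuous function of particle position, which is why CIC produces much smoother gravitational source terms than nearest-grid-point assignment.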

  8. X-ray radiographic imaging of hydrodynamic phenomena in radiation driven materials -- shock propagation, material compression and shear flow. Revision 1

    SciTech Connect

    Hammel, B.A.; Kilkenny, J.D.; Munro, D.; Remington, B.A.; Kornblum, H.N.; Perry, T.S.; Phillion, D.W.; Wallace, R.J.

    1994-02-01

    One- and two-dimensional, time resolved x-ray radiographic imaging at high photon energy (5-7 keV) is used to study shock propagation, material motion and compression, and the effects of shear flow in solid density samples which are driven by x-ray ablation with the Nova laser. By backlighting the samples with x-rays and observing the increase in sample areal density due to shock compression, the authors directly measure the trajectory of strong shocks (~40 Mbar) in flight, in solid density plastic samples. Doping a section of the samples with high-Z material (Br) provides radiographic contrast, allowing the measurement of the shock induced particle motion. Instability growth due to shear flow at an interface is investigated by imbedding a metal wire in a cylindrical plastic sample and launching a shock in the axial direction. Time resolved radiographic measurements are made with either a slit-imager coupled to an x-ray streak camera or a pinhole camera coupled to a gated microchannel plate detector, providing ~10 μm spatial and ~100 ps temporal resolution.
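Extracting areal density from such backlit radiographs rests on Beer-Lambert attenuation, T = exp(-mu * rho_x); a minimal sketch of the inversion (ignoring instrument response and backlighter spectrum, which a real analysis must unfold) is:

```python
import math

def areal_density(transmission, mu):
    """Invert Beer-Lambert attenuation T = exp(-mu * rho_x) for the
    areal density rho_x (g/cm^2), given the mass attenuation
    coefficient mu (cm^2/g) at the backlighter photon energy.
    Illustrative only: detector response, scattered light, and the
    backlighter spectrum are all neglected."""
    return -math.log(transmission) / mu
```

Shock compression increases the areal density along the line of sight, so the shock front appears as a moving step in transmission, and tracking that step gives the in-flight shock trajectory described in the abstract.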

  9. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    NASA Technical Reports Server (NTRS)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
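The prediction idea can be made concrete with the simple "triangular" predictor p = W + N - NW, which I assume here as an illustration (the paper's 8-point Lagrange-multiplier predictor generalizes it to more neighbors): residuals of a smooth surface have far lower entropy than the raw elevations, which is exactly what a Huffman coder then exploits. The helper names are mine:

```python
import math
from collections import Counter

def residuals(z):
    """Replace each elevation by its error against the triangular
    predictor p = West + North - NorthWest; the first row and column
    are stored raw since they have no full neighbor set."""
    r = []
    for i, row in enumerate(z):
        for j, v in enumerate(row):
            if i == 0 or j == 0:
                r.append(v)
            else:
                r.append(v - (row[j - 1] + z[i - 1][j] - z[i - 1][j - 1]))
    return r

def entropy_bits(vals):
    """Shannon entropy in bits/symbol: a lower bound on the average
    code length achievable by Huffman coding of these symbols."""
    n = len(vals)
    return -sum(c / n * math.log2(c / n) for c in Counter(vals).values())
```

On a planar surface the triangular predictor is exact, so almost all residuals are zero and the residual entropy collapses, which is the "more advantageous probability distribution" the abstract refers to.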

  10. Numerical Simulation of Carbon Simple Cubic by Dynamic Compression

    NASA Astrophysics Data System (ADS)

    Kato, Kaori; Aoki, Takayuki; Sekine, Toshimori

    2001-02-01

    An impact scheme of a slab target and a flyer with a layered structure is proposed to achieve low-entropy dynamic compression of diamond. The thermodynamic state of diamond during compression is examined using a one-dimensional Lagrangian hydrodynamic code and the tabulated equation-of-state library SESAME. The use of a material with a small shock impedance at the impact interfaces markedly decreases the strength of the primary shock wave. It is found that a gradient of shock impedance across the thickness of the flyer generates small multiple shock waves in the diamond and is effective for low-entropy compression. The thermodynamic conditions required for simple cubic carbon are achieved by this low-entropy dynamic compression.

  11. Hydrodynamic simulations of clumps

    NASA Astrophysics Data System (ADS)

    Feldmeier, Achim; Hamann, Wolf-Rainer; Rätzel, D.; Oskinova, Lidia M.

    2008-04-01

    Clumps in hot star winds can originate from shock compression due to the line-driven instability. One-dimensional hydrodynamic simulations reveal a radial wind structure consisting of highly compressed shells separated by voids and colliding with fast clouds. Two-dimensional simulations are still largely missing, despite first attempts. Clumpiness dramatically affects the radiative transfer and thus all wind diagnostics in the UV, optical, and X-rays. The microturbulence approximation applied hitherto is currently being superseded by a more sophisticated radiative transfer in stochastic media. Besides clumps, i.e. jumps in the density stratification, so-called kinks in the velocity law, i.e. jumps in dv/dr, play a prominent role in hot star winds. Kinks are a new type of radiative-acoustic shock and propagate at super-Abbottic speed.

  12. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed, each with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to match results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
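For reference, the 2-D Taylor-Green vortex used as one of the benchmarks has the standard divergence-free initial velocity field u = sin(x) cos(y), v = -cos(x) sin(y) on a periodic box; a sketch (MUSIC's actual setup, which is 3-D and compressible, will differ):

```python
import numpy as np

def taylor_green(n):
    """Standard 2-D Taylor-Green initial velocity field on [0, 2*pi)^2:
    u = sin(x) cos(y), v = -cos(x) sin(y).  It is analytically
    divergence-free, with mean kinetic energy density 1/4 per
    component, making its decay a clean code-comparison diagnostic."""
    x, y = np.meshgrid(np.linspace(0.0, 2.0 * np.pi, n, endpoint=False),
                       np.linspace(0.0, 2.0 * np.pi, n, endpoint=False),
                       indexing="ij")
    u = np.sin(x) * np.cos(y)
    v = -np.cos(x) * np.sin(y)
    return u, v
```

On a uniform grid the central-difference divergence of this field cancels term by term in the interior, so even a quick finite-difference check confirms the field is discretely divergence-free to round-off.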

  13. Ship Hydrodynamics

    ERIC Educational Resources Information Center

    Lafrance, Pierre

    1978-01-01

    Explores in a non-mathematical treatment some of the hydrodynamical phenomena and forces that affect the operation of ships, especially at high speeds. Discusses the major components of ship resistance such as the different types of drags and ways to reduce them and how to apply those principles for the hovercraft. (GA)

  15. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems: modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics, together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advance Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  16. Radiation Hydrodynamics

    SciTech Connect

    Castor, J I

    2003-10-16

    The discipline of radiation hydrodynamics is the branch of hydrodynamics in which the moving fluid absorbs and emits electromagnetic radiation, and in so doing modifies its dynamical behavior. That is, the net gain or loss of energy by parcels of the fluid material through absorption or emission of radiation is sufficient to change the pressure of the material, and therefore change its motion; alternatively, the net momentum exchange between radiation and matter may alter the motion of the matter directly. Ignoring the radiation contributions to energy and momentum will give a wrong prediction of the hydrodynamic motion when the correct description is radiation hydrodynamics. Of course, there are circumstances when a large quantity of radiation is present, yet can be ignored without causing the model to be in error. This happens when radiation from an exterior source streams through the problem, but the latter is so transparent that the energy and momentum coupling is negligible. Everything we say about radiation hydrodynamics applies equally well to neutrinos and photons (apart from the Einstein relations, specific to bosons), but in almost every area of astrophysics neutrino hydrodynamics is ignored, simply because the systems are exceedingly transparent to neutrinos, even though the energy flux in neutrinos may be substantial. Another place where we can do "radiation hydrodynamics" without using any sophisticated theory is deep within stars or other bodies, where the material is so opaque to the radiation that the mean free path of photons is entirely negligible compared with the size of the system, the distance over which any fluid quantity varies, and so on. In this case we can suppose that the radiation is in equilibrium with the matter locally, and its energy, pressure and momentum can be lumped in with those of the rest of the fluid. That is, it is no more necessary to distinguish photons from atoms, nuclei and electrons, than it is to distinguish
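The optically thick regime described above can be quantified by comparing gas and radiation pressure; a sketch with CGS constants (the mean molecular weight mu = 0.61 is an assumed, typical value for ionized solar-composition material):

```python
# Where photon mean free paths are negligible, radiation is lumped with
# the fluid, and its importance is set by P_rad / P_gas with
# P_gas = rho * k_B * T / (mu * m_H) and P_rad = a * T^4 / 3.
K_B = 1.380649e-16       # Boltzmann constant, erg/K
M_H = 1.6726e-24         # hydrogen mass, g
A_RAD = 7.5657e-15       # radiation constant, erg cm^-3 K^-4

def pressure_ratio(rho, T, mu=0.61):
    """P_rad / P_gas for mass density rho (g/cm^3) and temperature T (K).
    mu = 0.61 is an assumed ionized-solar mean molecular weight."""
    p_gas = rho * K_B * T / (mu * M_H)
    p_rad = A_RAD * T**4 / 3.0
    return p_rad / p_gas
```

Since the ratio scales as T^3 / rho, radiation pressure is a minor correction at solar-center conditions but dominates in hot, tenuous plasma, which is why the "lump it with the fluid" shortcut is safe in some regimes and badly wrong in others.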

  17. Superluminous Supernovae hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Orellana, M.

    2017-07-01

    We use our radiation hydrodynamic code to simulate magnetar-powered Superluminous Supernovae (SLSNe). It is assumed that a central, rapidly rotating magnetar deposits all of its rotational energy into the ejecta, where it adds to the usual power source. The magnetar luminosity and spin-down timescale are adopted as the free parameters of the model. For the case of ASASSN-15lh, which has been claimed to be the most luminous supernova ever discovered, we have found that physically plausible magnetar parameters reproduce the overall shape of the bolometric light curve (LC) provided the progenitor mass is ≈ 8 M⊙. The ejecta dynamics of this event shows signs of the magnetar energy input, which drives the expansion away from the usually assumed homologous behaviour. Our numerical experiments lead us to conclude that hydrodynamical modeling is necessary to derive the properties of the powerful magnetars driving SLSNe.
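    The magnetar power source referred to above is commonly modeled with the magnetic-dipole spin-down formula; a minimal sketch, assuming the standard braking-index-3 form (the function name is illustrative, not from the paper's code):

```python
def magnetar_luminosity(t, L0, t_p):
    """Dipole spin-down luminosity, L(t) = L0 / (1 + t/t_p)**2.

    L0 (initial luminosity) and t_p (spin-down timescale) correspond to
    the two free parameters the abstract mentions. Integrating L(t)
    over all time gives the total injected rotational energy,
    E_rot = L0 * t_p.
    """
    return L0 / (1.0 + t / t_p) ** 2
```

    This mapping is how a fitted (L0, t_p) pair translates into a rotational energy budget for the magnetar.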

  18. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
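    The HLL Riemann solver benchmarked above admits a compact sketch for the 1D ideal-gas Euler equations. This is a generic textbook formulation with Davis wave-speed estimates, not GenASiS code; HLLC restores the contact wave that this two-wave model smears out:

```python
import numpy as np

GAMMA = 1.4  # ideal-gas adiabatic index (assumed for illustration)

def euler_flux(U):
    """Physical flux of the 1D Euler equations; U = (rho, rho*v, E)."""
    rho, mom, E = U
    v = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * v * v)
    return np.array([mom, mom * v + p, (E + p) * v])

def wave_speeds(U):
    """Velocity and sound speed of a conserved state."""
    rho, mom, E = U
    v = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * v * v)
    return v, np.sqrt(GAMMA * p / rho)

def hll_flux(UL, UR):
    """HLL approximate Riemann flux: a single intermediate state
    between the fastest left- and right-going signal speeds."""
    vL, cL = wave_speeds(UL)
    vR, cR = wave_speeds(UR)
    sL = min(vL - cL, vR - cR)   # Davis estimates of the wave fan
    sR = max(vL + cL, vR + cR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if sL >= 0.0:                # entire fan moves right
        return FL
    if sR <= 0.0:                # entire fan moves left
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
```

    For identical left and right states the formula collapses to the exact physical flux, a quick sanity check on any Riemann-solver implementation.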

  19. Bacterial Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lauga, Eric

    2016-01-01

    Bacteria predate plants and animals by billions of years. Today, they are the world's smallest cells, yet they represent the bulk of the world's biomass and the main reservoir of nutrients for higher organisms. Most bacteria can move on their own, and the majority of motile bacteria are able to swim in viscous fluids using slender helical appendages called flagella. Low-Reynolds number hydrodynamics is at the heart of the ability of flagella to generate propulsion at the micrometer scale. In fact, fluid dynamic forces impact many aspects of bacteriology, ranging from the ability of cells to reorient and search their surroundings to their interactions within mechanically and chemically complex environments. Using hydrodynamics as an organizing framework, I review the biomechanics of bacterial motility and look ahead to future challenges.

  20. Quantum hydrodynamics

    NASA Astrophysics Data System (ADS)

    Tsubota, Makoto; Kobayashi, Michikazu; Takeuchi, Hiromitsu

    2013-01-01

    Quantum hydrodynamics in superfluid helium and atomic Bose-Einstein condensates (BECs) has been recently one of the most important topics in low temperature physics. In these systems, a macroscopic wave function (order parameter) appears because of Bose-Einstein condensation, which creates quantized vortices. Turbulence consisting of quantized vortices is called quantum turbulence (QT). The study of quantized vortices and QT has increased in intensity for two reasons. The first is that recent studies of QT are considerably advanced over older studies, which were chiefly limited to thermal counterflow in 4He, which has no analog with classical traditional turbulence, whereas new studies on QT are focused on a comparison between QT and classical turbulence. The second reason is the realization of atomic BECs in 1995, for which modern optical techniques enable the direct control and visualization of the condensate and can even change the interaction; such direct control is impossible in other quantum condensates like superfluid helium and superconductors. Our group has made many important theoretical and numerical contributions to the field of quantum hydrodynamics of both superfluid helium and atomic BECs. In this article, we review some of the important topics in detail. The topics of quantum hydrodynamics are diverse, so we have not attempted to cover all these topics in this article. We also ensure that the scope of this article does not overlap with our recent review article (arXiv:1004.5458), “Quantized vortices in superfluid helium and atomic Bose-Einstein condensates”, and other review articles.

  1. Hydrodynamic models of a Cepheid atmosphere

    NASA Technical Reports Server (NTRS)

    Karp, A. H.

    1975-01-01

    Instead of computing a large number of coarsely zoned hydrodynamic models covering the entire atmospheric instability strip, the author computed a single model as well as computer limitations allow. The implicit hydrodynamic code of Kutter and Sparks was modified to include radiative transfer effects in optically thin zones.

  2. Image data compression investigation

    NASA Technical Reports Server (NTRS)

    Myrie, Carlos

    1989-01-01

    NASA's continuing communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.

  3. A Study of Shaped-Charge Collapse and Jet Formation Using the HEMP (hydrodynamic, Elastic, Magneto, and Plastic) Code and a Comparison with Experimental Observations

    DTIC Science & Technology

    1984-12-01

    at BRL was used for the copper liners. A plotting package developed by Mr. John Harrison of BRL was included in the version of HEMP used in this study... (Memorandum Report BRL-MR-3417, accession number AD-A149 472).

  4. Hydrodynamic test problems

    SciTech Connect

    Moran, B

    2005-06-02

    We present test problems that can be used to check the hydrodynamic implementation in computer codes designed to model the implosion of a National Ignition Facility (NIF) capsule. The problems are simplified, yet one of them is three-dimensional. It consists of a nearly-spherical incompressible imploding shell subjected to an exponentially decaying pressure on its outer surface. We present a semi-analytic solution for the time-evolution of that shell with arbitrary small three-dimensional perturbations on its inner and outer surfaces. The perturbations on the shell surfaces are intended to model the imperfections that are created during capsule manufacturing.

  5. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomena pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented into the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air.
Other fluids can be handled by the code provided that the

  6. Progressive transmission and compression images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.

  7. Predictive Encoding in Text Compression.

    ERIC Educational Resources Information Center

    Raita, Timo; Teuhola, Jukka

    1989-01-01

    Presents three text compression methods of increasing power and evaluates each based on the trade-off between compression gain and processing time. The advantages of using hash coding for speed and of applying optimal arithmetic coding to successor information for compression gain are discussed. (26 references) (Author/CLB)

  8. FLY: a Tree Code for Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.

    FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.

  9. Hydrodynamic supercontinuum.

    PubMed

    Chabchoub, A; Hoffmann, N; Onorato, M; Genty, G; Dudley, J M; Akhmediev, N

    2013-08-02

    We report the experimental observation of multi-bound-soliton solutions of the nonlinear Schrödinger equation (NLS) in the context of hydrodynamic surface gravity waves. Higher-order N-soliton solutions with N=2, 3 are studied in detail and shown to be associated with self-focusing in the wave group dynamics and the generation of a steep localized carrier wave underneath the group envelope. We also show that for larger input soliton numbers, the wave group experiences irreversible spectral broadening, which we refer to as a hydrodynamic supercontinuum by analogy with optics. This process is shown to be associated with the fission of the initial multisoliton into individual fundamental solitons due to higher-order nonlinear perturbations to the NLS. Numerical simulations using an extended NLS model described by the modified nonlinear Schrödinger equation show excellent agreement with experiment and highlight the universal role that higher-order nonlinear perturbations to the NLS play in supercontinuum generation.

  10. Hydrodynamic effects on coalescence.

    SciTech Connect

    Dimiduk, Thomas G.; Bourdon, Christopher Jay; Grillet, Anne Mary; Baer, Thomas A.; de Boer, Maarten Pieter; Loewenberg, Michael; Gorby, Allen D.; Brooks, Carlton, F.

    2006-10-01

    The goal of this project was to design, build and test novel diagnostics to probe the effect of hydrodynamic forces on coalescence dynamics. Our investigation focused on how a drop coalesces onto a flat surface which is analogous to two drops coalescing, but more amenable to precise experimental measurements. We designed and built a flow cell to create an axisymmetric compression flow which brings a drop onto a flat surface. A computer-controlled system manipulates the flow to steer the drop and maintain a symmetric flow. Particle image velocimetry was performed to confirm that the control system was delivering a well conditioned flow. To examine the dynamics of the coalescence, we implemented an interferometry capability to measure the drainage of the thin film between the drop and the surface during the coalescence process. A semi-automated analysis routine was developed which converts the dynamic interferogram series into drop shape evolution data.

  11. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE OF CORE-COLLAPSE SUPERNOVAE. III. GRAVITATIONAL WAVE SIGNALS FROM SUPERNOVA EXPLOSION MODELS

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de

    2013-03-20

    We present a detailed theoretical analysis of the gravitational wave (GW) signal of the post-bounce evolution of core-collapse supernovae (SNe), employing for the first time relativistic, two-dimensional explosion models with multi-group, three-flavor neutrino transport based on the ray-by-ray-plus approximation. The waveforms reflect the accelerated mass motions associated with the characteristic evolutionary stages that were also identified in previous works: a quasi-periodic modulation by prompt post-shock convection is followed by a phase of relative quiescence before growing amplitudes signal violent hydrodynamical activity due to convection and the standing accretion shock instability during the accretion period of the stalled shock. Finally, a high-frequency, low-amplitude variation from proto-neutron star (PNS) convection below the neutrinosphere appears superimposed on the low-frequency trend associated with the aspherical expansion of the SN shock after the onset of the explosion. Relativistic effects in combination with detailed neutrino transport are shown to be essential for quantitative predictions of the GW frequency evolution and energy spectrum, because they determine the structure of the PNS surface layer and its characteristic g-mode frequency. Burst-like high-frequency activity phases, correlated with sudden luminosity increase and spectral hardening of electron (anti-)neutrino emission for some 10 ms, are discovered as new features after the onset of the explosion. They correspond to intermittent episodes of anisotropic accretion by the PNS in the case of fallback SNe. We find stronger signals for more massive progenitors with large accretion rates. The typical frequencies are higher for massive PNSs, though the time-integrated spectrum also strongly depends on the model dynamics.

  12. Show Code.

    PubMed

    Shalev, Daniel

    2017-01-01

    "Let's get one thing straight: there is no such thing as a show code," my attending asserted, pausing for effect. "You either try to resuscitate, or you don't. None of this halfway junk." He spoke so loudly that the two off-service consultants huddled at computers at the end of the unit looked up… We did four rounds of compressions and pushed epinephrine twice. It was not a long code. We did good, strong compressions and coded this man in earnest until the end. Toward the final round, though, as I stepped up to do compressions, my attending looked at me in a deep way. It was a look in between willing me as some object under his command and revealing to me everything that lay within his brash, confident surface but could not be spoken. © 2017 The Hastings Center.

  13. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
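    As a concrete illustration of the convolutional-coding fundamentals the report covers, here is a rate-1/2, constraint-length-3 encoder using the classic (7, 5) octal generator pair. This is a generic textbook example, not a scheme taken from the report itself:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder with constraint length 3.

    Each input bit is shifted into a 3-bit register; for every input
    bit, two output bits are emitted: the parities of the register
    taps selected by generators g1 and g2 (7 and 5 in octal).
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)  # first parity stream
        out.append(bin(state & g2).count("1") % 2)  # second parity stream
    return out
```

    For example, the input 1011 encodes to the eight output bits 11 10 00 01, doubling the stream length in exchange for error-correction capability at the decoder.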

  14. Flash Kα radiography of laser-driven solid sphere compression for fast ignition

    SciTech Connect

    Sawada, H.; Lee, S.; Nagatomo, H.; Arikawa, Y.; Nishimura, H.; Ueda, T.; Shigemori, K.; Fujioka, S.; Shiroto, T.; Ohnishi, N.; Sunahara, A.; Beg, F. N.; Theobald, W.; Pérez, F.; Patel, P. K.

    2016-06-20

    Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm². The temporal evolution of the experimental and simulated areal densities with a 2-D radiation-hydrodynamics code is in good agreement.

  15. Effect of non-local electron conduction in compression of solid ball target for fast ignition

    NASA Astrophysics Data System (ADS)

    Nagatomo, Hideo; Asahina, Takashi; Nicolai, Philippe; Sunahara, Atsushi; Johzaki, Tomoyuki

    2016-10-01

    In the first phase of the fast ignition scheme, the fuel target is compressed by the implosion laser; only the achievement of a high-density fuel is required, because the temperature increment needed to ignite the fuel is supplied by the heating lasers. The ideal compression method for a solid target is isentropic compression with a tailored pulse shape. However, it requires laser intensities >10¹⁵ W/cm², which cause hot electrons. Numerical simulation of these conditions requires a non-local electron transport model. Recently, we have installed the SNB model in a 2-D radiation hydrodynamic simulation code. In this presentation, the effect of hot electrons on isentropic compression and the optimum compression method are discussed, which may also be significant for the shock ignition scheme. The effect of an external magnetic field on the hot electrons will also be considered. This study was supported by JSPS KAKENHI Grant No. 26400532.

  16. Flash Kα radiography of laser-driven solid sphere compression for fast ignition

    NASA Astrophysics Data System (ADS)

    Sawada, H.; Lee, S.; Shiroto, T.; Nagatomo, H.; Arikawa, Y.; Nishimura, H.; Ueda, T.; Shigemori, K.; Sunahara, A.; Ohnishi, N.; Beg, F. N.; Theobald, W.; Pérez, F.; Patel, P. K.; Fujioka, S.

    2016-06-01

    Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm². The temporal evolution of the experimental and simulated areal densities with a 2-D radiation-hydrodynamics code is in good agreement.

  17. The OMV Data Compression System Science Data Compression Workshop

    NASA Technical Reports Server (NTRS)

    Lewis, Garton H., Jr.

    1989-01-01

    The Video Compression Unit (VCU), Video Reconstruction Unit (VRU), theory and algorithms for implementation of Orbital Maneuvering Vehicle (OMV) source coding, docking mode, channel coding, error containment, and video tape preprocessed space imagery are presented in viewgraph format.

  18. A hydrodynamic approach to cosmology - Methodology

    NASA Technical Reports Server (NTRS)

    Cen, Renyue

    1992-01-01

    The present study describes an accurate and efficient hydrodynamic code for evolving self-gravitating cosmological systems. The hydrodynamic code is a flux-based mesh code originally designed for engineering hydrodynamical applications. A variety of checks were performed which indicate that the resolution of the code is a few cells, providing accuracy for integral energy quantities in the present simulations of 1-3 percent over the whole runs. Six species (H I, H II, He I, He II, He III, and electrons) are tracked separately, and relevant ionization and recombination processes, as well as line and continuum heating and cooling, are computed. The background radiation field is simultaneously determined in the range 1 eV to 100 keV, allowing for absorption, emission, and cosmological effects. It is shown how the inevitable numerical inaccuracies can be estimated and to some extent overcome.

  19. An explicit-implicit solution of the hydrodynamic and radiation equations

    NASA Astrophysics Data System (ADS)

    Sahota, Manjit S.

    A solution of the coupled radiation-hydrodynamic equations on a median mesh is presented for a transient, three-dimensional, compressible, multimaterial, free-Lagrangian code. The code uses fixed-mass particles surrounded by median Lagrangian cells. These cells are free to change connectivity, which ensures accuracy in the differencing of equations and allows the code to handle extreme distortions. All calculations are done on a median Lagrangian mesh that is constructed from the Delaunay tetrahedral mesh using the Voronoi connection algorithm. Because each tetrahedron volume is shared equally by the four mass points (computational cells) located at the tetrahedron vertices, calculations are done at a tetrahedron level for enhanced computational efficiency, and the rate-of-change data are subsequently accumulated at mass points from these tetrahedral contributions. The hydrodynamic part of the calculations is done using an explicit time-advancement technique, and the radiation calculations are done using a hybrid explicit-implicit time-advancement scheme in the equilibrium-diffusion limit. An explicit solution of the radiation-diffusion equation is obtained for cells that meet the current time-step criterion imposed by the hydrodynamic solution, and a fully implicit point-relaxation solution is obtained elsewhere without defining an inversion matrix. The approach has a distinct advantage over the conventional matrix-inversion approaches, because defining such a matrix for an unstructured grid is both cumbersome and computationally intensive. The new algorithm runs >20 times faster than a matrix-solver approach using the conjugate-gradient technique, and is easily parallelizable on the Cray family of supercomputers. With the new algorithm, the radiation-diffusion part of the calculation runs about twice as fast as the hydrodynamic part of the calculation. The code conserves mass, momentum, and energy exactly, except in some pathological situations.
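    The hybrid idea above — advance the diffusion explicitly in cells that satisfy the time-step criterion, and fall back to matrix-free point relaxation elsewhere — can be sketched in 1D. This is a toy illustration of the strategy on a uniform grid, not the code's median-mesh algorithm:

```python
import numpy as np

def diffuse(u, r, iters=200):
    """One step of du/dt = alpha * d2u/dx2 with r = alpha*dt/dx**2.

    If r satisfies the explicit stability limit (r <= 0.5), do a cheap
    forward-Euler update; otherwise solve backward Euler by Jacobi
    point relaxation, so no inversion matrix is ever formed (the
    matrix-free approach the abstract advocates for unstructured grids).
    Boundary values are held fixed.
    """
    un = u.copy()
    if r <= 0.5:                      # explicit branch
        un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    else:                             # implicit, matrix-free relaxation
        for _ in range(iters):
            un[1:-1] = (u[1:-1] + r * (un[2:] + un[:-2])) / (1.0 + 2.0 * r)
    return un
```

    The implicit branch converges because the backward-Euler system is strictly diagonally dominant, and it respects the maximum principle regardless of how large r is, which is exactly why the hybrid scheme can skip the explicit stability limit in stiff cells.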

  20. Chromatin hydrodynamics.

    PubMed

    Bruinsma, Robijn; Grosberg, Alexander Y; Rabin, Yitzhak; Zidovska, Alexandra

    2014-05-06

    Following recent observations of large scale correlated motion of chromatin inside the nuclei of live differentiated cells, we present a hydrodynamic theory-the two-fluid model-in which the content of a nucleus is described as a chromatin solution with the nucleoplasm playing the role of the solvent and the chromatin fiber that of a solute. This system is subject to both passive thermal fluctuations and active scalar and vector events that are associated with free energy consumption, such as ATP hydrolysis. Scalar events drive the longitudinal viscoelastic modes (where the chromatin fiber moves relative to the solvent) while vector events generate the transverse modes (where the chromatin fiber moves together with the solvent). Using linear response methods, we derive explicit expressions for the response functions that connect the chromatin density and velocity correlation functions to the corresponding correlation functions of the active sources and the complex viscoelastic moduli of the chromatin solution. We then derive general expressions for the flow spectral density of the chromatin velocity field. We use the theory to analyze experimental results recently obtained by one of the present authors and her co-workers. We find that the time dependence of the experimental data for both native and ATP-depleted chromatin can be well-fitted using a simple model-the Maxwell fluid-for the complex modulus, although there is some discrepancy in terms of the wavevector dependence. Thermal fluctuations of ATP-depleted cells are predominantly longitudinal. ATP-active cells exhibit intense transverse long wavelength velocity fluctuations driven by force dipoles. Fluctuations with wavenumbers larger than a few inverse microns are dominated by concentration fluctuations with the same spectrum as thermal fluctuations but with increased intensity.
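    The Maxwell-fluid fit mentioned above corresponds to the standard single-relaxation-time complex modulus; the symbols here (plateau modulus and relaxation time) are the usual textbook notation, assumed rather than taken from the paper:

```latex
% Complex shear modulus of a Maxwell fluid
% (G_0: plateau modulus, tau: relaxation time; notation assumed)
G^{*}(\omega) = \frac{i\omega\tau\,G_0}{1 + i\omega\tau},
\qquad
G'(\omega) = G_0\,\frac{\omega^{2}\tau^{2}}{1+\omega^{2}\tau^{2}},
\qquad
G''(\omega) = G_0\,\frac{\omega\tau}{1+\omega^{2}\tau^{2}}.
```

    The model is solid-like (storage modulus G' dominates) at ωτ ≫ 1 and liquid-like (loss modulus G'' dominates) at ωτ ≪ 1, which is the qualitative behavior such a fit captures in the chromatin velocity correlation data.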

  1. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  2. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
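    Delta and double delta coding exploit the correlation between adjacent picture elements by storing differences rather than raw samples; a minimal sketch of the idea (illustrative, not the paper's exact scheme):

```python
def delta_encode(xs):
    """Replace each sample by its difference from the previous sample."""
    out, prev = [], 0
    for x in xs:
        out.append(x - prev)
        prev = x
    return out

def delta_decode(ds):
    """Invert delta_encode by cumulative summation."""
    out, acc = [], 0
    for d in ds:
        acc += d
        out.append(acc)
    return out

def double_delta_encode(xs):
    """Double delta coding: delta-code the delta-coded stream."""
    return delta_encode(delta_encode(xs))

def double_delta_decode(ds):
    return delta_decode(delta_decode(ds))
```

    On smoothly varying scan lines the differences (and differences of differences) cluster near zero, so a subsequent entropy code spends few bits per sample; both transforms are exactly invertible, making the scheme lossless.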

  3. Compressive holographic video.

    PubMed

    Wang, Zihao; Spinoulas, Leonidas; He, Kuan; Tian, Lei; Cossairt, Oliver; Katsaggelos, Aggelos K; Chen, Huaijin

    2017-01-09

    Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.

  4. Compressive holographic video

    NASA Astrophysics Data System (ADS)

    Wang, Zihao; Spinoulas, Leonidas; He, Kuan; Tian, Lei; Cossairt, Oliver; Katsaggelos, Aggelos K.; Chen, Huaijin

    2017-01-01

    Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.

  5. Compression ratio effect on methane HCCI combustion

    SciTech Connect

    Aceves, S. M.; Pitz, W.; Smith, J. R.; Westbrook, C.

    1998-09-29

    We have used the HCT (Hydrodynamics, Chemistry and Transport) chemical kinetics code to simulate HCCI (homogeneous charge compression ignition) combustion of methane-air mixtures. HCT is applied to explore the ignition timing, burn duration, NOx production, gross indicated efficiency and gross IMEP of a supercharged engine (3 atm intake pressure) with 14:1, 16:1 and 18:1 compression ratios at 1200 rpm. HCT has been modified to incorporate the effect of heat transfer and to calculate the temperature that results from mixing the recycled exhaust with the fresh mixture. This study uses a single control volume reaction zone that varies as a function of crank angle. The ignition process is controlled by adjusting the intake equivalence ratio and the residual gas trapping (RGT). RGT is internal exhaust gas recirculation which recycles both thermal energy and combustion product species. Adjustment of equivalence ratio and RGT is accomplished by varying the timing of the exhaust valve closure in either 2-stroke or 4-stroke engines. Inlet manifold temperature is held constant at 300 K. Results show that, for each compression ratio, there is a range of operational conditions that show promise of achieving the control necessary to vary power output while keeping indicated efficiency above 50% and NOx levels below 100 ppm. HCT results are also compared with a set of recent experimental data for natural gas.

  6. Predictions for the drive capabilities of the RancheroS Flux Compression Generator into various load inductances using the Eulerian AMR Code Roxane

    SciTech Connect

    Watt, Robert Gregory

    2016-06-06

    The Ranchero Magnetic Flux Compression Generator (FCG) has been used to create current pulses in the 10-100 MA range for driving both “static” low inductance (0.5 nH) loads [1] for generator demonstration purposes and high inductance (10-20 nH) imploding liner loads [2] for ultimate use in physics experiments at very high energy density. Simulations of the standard Ranchero generator have recently shown that it had a design issue that could lead to flux trapping in the generator, and a non-robust predictability in its use in high energy density experiments. A re-examination of the design concept for the standard Ranchero generator, prompted by the possible appearance of an aneurism at the output glide plane, has led to a new generation of Ranchero generators designated the RancheroS (for swooped). This generator has removed the problematic output glide plane and replaced it with a region of constantly increasing diameter in the output end of the FCG cavity, in which the armature is driven outward under the influence of an additional HE load not present in the original Ranchero. The resultant RancheroS generator, to be tested in LA43S-L13, probably in early FY17, has a significantly increased initial inductance and may be able to drive a somewhat higher load inductance than the standard Ranchero. This report will use the Eulerian AMR code Roxane to study the ability of the new design to drive static loads, with a goal of providing a database corresponding to the load inductances for which the generator might be used and the anticipated peak currents such loads might produce in physics experiments. Such a database, combined with a simple analytic model of an ideal generator, where d(LI)/dt = 0, and supplemented by earlier estimates of losses in actual use of the standard Ranchero, scaled to estimate the increase in losses due to the longer current carrying perimeter in the RancheroS, can then be used to bound the expectations for the current drive one may
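
    The ideal-generator relation d(LI)/dt = 0 quoted in this abstract means the flux L*I is conserved as the generator's inductance is consumed, so the peak current scales as the ratio of initial to final inductance. A minimal sketch with made-up numbers (not Ranchero design values):

    ```python
    def ideal_fcg_current(L_initial_nH, I_seed_MA, L_load_nH, L_residual_nH=0.0):
        """Peak current (MA) for an ideal lossless generator, d(LI)/dt = 0,
        i.e. the flux L*I is conserved while the generator inductance is
        consumed down to the load plus any residual inductance.
        All inductance values here are hypothetical illustration numbers."""
        return L_initial_nH * I_seed_MA / (L_load_nH + L_residual_nH)

    # E.g. 60 nH initial inductance and a 3 MA seed current into a 2 nH load:
    print(ideal_fcg_current(60.0, 3.0, 2.0))   # -> 90.0 (MA)
    ```

    Real generators fall short of this bound, which is why the report supplements the ideal model with loss estimates from earlier Ranchero shots.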

  7. Three-Dimensional Hydrodynamics Experiments on the National Ignition Facility

    SciTech Connect

    Blue, B E; Weber, S V; Glendinning, S; Lanier, N; Woods, D; Bono, M; Dixit, S; Haynam, C; Holder, J; Kalantar, D; MacGowan, B; Moses, E; Nikitin, A; Rekow, V; Wallace, R; Van Wonterghem, B; Rosen, P; Foster, J; Stry, P; Wilde, B; Hsing, W; Robey, H

    2004-11-12

    The production of supersonic jets of material via the interaction of a strong shock wave with a spatially localized density perturbation is a common feature of inertial confinement fusion and astrophysics. The behavior of two-dimensional (2D) supersonic jets has previously been investigated in detail [J. M. Foster et al., Phys. Plasmas 9, 2251 (2002)]. In three dimensions (3D), however, there are new aspects to the behavior of supersonic jets in compressible media. In this paper, the commissioning activities on the National Ignition Facility (NIF) [J. A. Paisner et al., Laser Focus World 30, 75 (1994)] to enable hydrodynamic experiments will be presented as well as the results from the first series of hydrodynamic experiments. In these experiments, two of the first four beams of NIF are used to drive a 40 Mbar shock wave into millimeter scale aluminum targets backed by 100 mg/cc carbon aerogel foam. The remaining beams are delayed in time and are used to provide a point-projection x-ray backlighter source for diagnosing the three-dimensional structure of the jet evolution resulting from a variety of 2D and 3D features. Comparisons between data and simulations using several codes will be presented.

  8. Three-Dimensional Hydrodynamic Experiments on the National Ignition Facility

    SciTech Connect

    Blue, B E; Robey, H F; Glendinning, S G; Bono, M J; Dixit, S N; Foster, J M; Haynam, C A; Holder, J P; Hsing, W W; Kalantar, D H; Lanier, N E; MacGowan, B J; Moses, E I; Nikitin, A J; Perry, T S; Rekow, V V; Rosen, P A; Stry, P E; Van Wonterghem, B M; Wallace, R; Weber, S V; Wilde, B H; Woods, D T

    2005-02-09

    The production of supersonic jets of material via the interaction of a strong shock wave with a spatially localized density perturbation is a common feature of inertial confinement fusion and astrophysics. The behavior of two-dimensional (2D) supersonic jets has previously been investigated in detail [J. M. Foster et al., Phys. Plasmas 9, 2251 (2002)]. In three dimensions (3D), however, there are new aspects to the behavior of supersonic jets in compressible media. In this paper, the commissioning activities on the National Ignition Facility (NIF) [J. A. Paisner et al., Laser Focus World 30, 75 (1994)] to enable hydrodynamic experiments will be presented as well as the results from the first series of hydrodynamic experiments. In these experiments, two of the first four beams of NIF are used to drive a 40 Mbar shock wave into millimeter scale aluminum targets backed by 100 mg/cc carbon aerogel foam. The remaining beams are delayed in time and are used to provide a point-projection x-ray backlighter source for diagnosing the three-dimensional structure of the jet evolution resulting from a variety of 2D and 3D features. Comparisons between data and simulations using several codes will be presented.

  9. Three-dimensional hydrodynamic experiments on the National Ignition Facilitya)

    NASA Astrophysics Data System (ADS)

    Blue, B. E.; Robey, H. F.; Glendinning, S. G.; Bono, M. J.; Burkhart, S. C.; Celeste, J. R.; Coker, R. F.; Costa, R. L.; Dixit, S. N.; Foster, J. M.; Hansen, J. F.; Haynam, C. A.; Hermann, M. R.; Holder, J. P.; Hsing, W. W.; Kalantar, D. H.; Lanier, N. E.; Latray, D. A.; Louis, H.; MacGowan, B. J.; Maggelssen, G. R.; Marshall, C. D.; Moses, E. I.; Nikitin, A. J.; O'Brien, D. W.; Perry, T. S.; Poole, M. W.; Rekow, V. V.; Rosen, P. A.; Schneider, M. B.; Stry, P. E.; Van Wonterghem, B. M.; Wallace, R.; Weber, S. V.; Wilde, B. H.; Woods, D. T.; Young, B. K.

    2005-05-01

    The production of supersonic jets of material via the interaction of a strong shock wave with a spatially localized density perturbation is a common feature of inertial confinement fusion and astrophysics. The behavior of two-dimensional (2D) supersonic jets has previously been investigated in detail [J. M. Foster, B. H. Wilde, P. A. Rosen, T. S. Perry, M. Fell, M. J. Edwards, B. F. Lasinski, R. E. Turner, and M. L. Gittings, Phys. Plasmas 9, 2251 (2002)]. In three dimensions (3D), however, there are new aspects to the behavior of supersonic jets in compressible media. In this paper, the commissioning activities on the National Ignition Facility (NIF) [J. A. Paisner, J. D. Boyes, S. A. Kumpan, W. H. Lowdermilk, and M. Sorem, Laser Focus World 30, 75 (1994)] to enable hydrodynamic experiments will be presented as well as the results from the first series of hydrodynamic experiments. In these experiments, two of the first four beams of NIF are used to drive a 40 Mbar shock wave into millimeter scale aluminum targets backed by 100 mg/cc carbon aerogel foam. The remaining beams are delayed in time and are used to provide a point-projection x-ray backlighter source for diagnosing the three-dimensional structure of the jet evolution resulting from a variety of 2D and 3D features. Comparisons between data and simulations using several codes will be presented.

  10. Noiseless Coding Of Magnetometer Signals

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Lee, Jun-Ji

    1989-01-01

    Report discusses application of noiseless data-compression coding to digitized readings of spaceborne magnetometers for transmission back to Earth. Objective of such coding is to increase efficiency by decreasing rate of transmission without sacrificing integrity of data. Adaptive coding compresses data by factors ranging from 2 to 6.
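
    A sketch of the kind of adaptive noiseless code involved: a textbook Rice (Golomb power-of-two) code, in which small residuals cost few bits and decoding is exact. This is an illustration with invented sample values, not the flight implementation.

    ```python
    def rice_encode(values, k):
        """Rice code: for each nonnegative integer n, emit n >> k in unary,
        a '0' terminator, then the k low-order bits of n."""
        bits = []
        for n in values:
            bits.append('1' * (n >> k) + '0')
            bits.append(format(n & ((1 << k) - 1), f'0{k}b') if k else '')
        return ''.join(bits)

    def rice_decode(bitstring, k, count):
        """Exact (noiseless) inverse of rice_encode."""
        out, i = [], 0
        for _ in range(count):
            q = 0
            while bitstring[i] == '1':
                q, i = q + 1, i + 1
            i += 1                                    # skip the '0' terminator
            r = int(bitstring[i:i + k], 2) if k else 0
            i += k
            out.append((q << k) | r)
        return out

    samples = [3, 1, 4, 1, 5, 9, 2, 6]   # stand-ins for small magnetometer residuals
    coded = rice_encode(samples, k=2)
    assert rice_decode(coded, 2, len(samples)) == samples
    print(len(coded), 'bits vs', 8 * len(samples), 'bits uncoded')
    ```

    An adaptive coder in this spirit would pick the parameter k per block to minimize the coded length.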

  11. Hydrodynamic Simulations of Planetary Rings

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Stewart, G. R.; Esposito, L. W.

    2013-10-01

    Simulations of rings have traditionally been done using N-body methods, granting insight into the interactions of individual ring particles on varying scales. However, due to the scale of a typical ring system and the sheer number of particles involved, a global N-body simulation is too computationally expensive, unless particle collisions are replaced by stochastic forces (Bromley & Kenyon, 2013). Rings are extraordinarily flat systems and therefore are well-suited to existing geophysical shallow-water hydrodynamics models with well-established non-linear advection methods. By adopting a general relationship between pressure and surface density such as a polytropic equation of state, we can modify the shallow-water formula to treat a thin, compressible, self-gravitating, shearing fluid. Previous hydrodynamic simulations of planetary rings have been restricted to axisymmetric flows and therefore have not treated the response to nonaxisymmetric perturbations by moons (Schmidt & Tscharnuter 1999, Latter & Ogilvie 2010). We seek to expand on existing hydrodynamic methods and, by comparing our work with complementary N-body simulations and Cassini observations, confirm the veracity of our results at small scales before eventually moving to a global domain size. We will use non-Newtonian, dynamically variable viscosity to model the viscous transport caused by unresolved self-gravity wakes. Self-gravity will be added to model the dynamics of large-scale structures, such as density waves and edge waves. Support from NASA Outer Planets and Planetary Geology and Geophysics programs is gratefully acknowledged.
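
    The polytropic closure mentioned in this abstract relates the vertically integrated pressure to surface density, which also fixes the effective sound speed. A minimal sketch (K and gamma below are illustrative choices, not values from the study):

    ```python
    import numpy as np

    def pressure(sigma, K=1.0, gamma=2.0):
        """Polytropic closure P(Sigma) = K * Sigma**gamma relating vertically
        integrated pressure to surface density (K, gamma illustrative)."""
        return K * sigma**gamma

    def sound_speed(sigma, K=1.0, gamma=2.0):
        """Effective sound speed from c^2 = dP/dSigma for the polytrope above."""
        return np.sqrt(gamma * K * sigma**(gamma - 1.0))

    sigma = np.array([0.5, 1.0, 2.0])
    print(pressure(sigma))
    print(sound_speed(sigma))
    ```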

  12. Supernova hydrodynamics experiments using the Nova laser

    SciTech Connect

    Remington, B.A.; Glendinning, S.G.; Estabrook, K.; Wallace, R.J.; Rubenchik, A.; Kane, J.; Arnett, D.; Drake, R.P.; McCray, R.

    1997-04-01

    We are developing experiments using the Nova laser to investigate two areas of physics relevant to core-collapse supernovae (SN): (1) compressible nonlinear hydrodynamic mixing and (2) radiative shock hydrodynamics. In the former, we are examining the differences between the 2D and 3D evolution of the Rayleigh-Taylor instability, an issue critical to the observables emerging from SN in the first year after exploding. In the latter, we are investigating the evolution of a colliding plasma system relevant to the ejecta-stellar wind interactions of the early stages of SN remnant formation. The experiments and astrophysical implications are discussed.

  13. Progressive Transmission and Compression of Images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.
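
    The Laplacian density mentioned above has a convenient maximum-likelihood scale estimate, the mean absolute coefficient, which is one hedged way an ordering strategy could rank subbands by energy. The sample data here are synthetic, not from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    b_true = 2.0
    # Subband coefficients are well modeled as Laplacian: p(x) = exp(-|x|/b) / (2b).
    coeffs = rng.laplace(loc=0.0, scale=b_true, size=100_000)

    # The ML estimate of the scale b is the mean absolute value; a larger b
    # means more subband energy, suggesting that subband be transmitted earlier.
    b_hat = np.abs(coeffs).mean()
    print(b_hat)
    ```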

  14. Hydrodynamics from Landau initial conditions

    SciTech Connect

    Sen, Abhisek; Gerhard, Jochen; Torrieri, Giorgio; Read jr, Kenneth F.; Wong, Cheuk-Yin

    2015-01-01

    We investigate ideal hydrodynamic evolution, with Landau initial conditions, both in a semi-analytical 1+1D approach and in a numerical code incorporating event-by-event variation with many events and transverse density inhomogeneities. The object of the calculation is to test how quickly a Landau initial condition transitions to a commonly used boost-invariant expansion. We show that the transition to boost-invariant flow occurs too late for realistic setups, with corrections of O(20-30%) expected at freezeout for most scenarios. Moreover, the deviation from boost-invariance is correlated with both transverse flow and elliptic flow, with the more highly transversely flowing regions also showing the most violation of boost invariance. Therefore, if longitudinal flow is not fully developed at the early stages of heavy ion collisions, 2+1 dimensional hydrodynamics is inadequate to extract transport coefficients of the quark-gluon plasma. Based on [1, 2

  15. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    SciTech Connect

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  16. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  17. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve more similar results to the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.
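
    The Jeans-length refinement criterion discussed in this abstract can be sketched directly. The threshold of four cells per Jeans length and the cell values below are assumptions in the spirit of the Truelove criterion, not the paper's exact settings:

    ```python
    import math

    G = 6.674e-8   # gravitational constant, cgs units

    def jeans_length(c_s, rho):
        """Jeans length lambda_J = c_s * sqrt(pi / (G * rho)) in cgs units."""
        return c_s * math.sqrt(math.pi / (G * rho))

    def needs_refinement(c_s, rho, dx, cells_per_jeans=4):
        """Truelove-style criterion: refine a cell when the local Jeans length
        is resolved by fewer than `cells_per_jeans` grid cells."""
        return jeans_length(c_s, rho) < cells_per_jeans * dx

    # Hypothetical cold, dense cell in a spiral arm on a coarse grid:
    print(needs_refinement(c_s=6e4, rho=1e-21, dx=1.0e20))
    ```

    Resolving the Jeans length with more cells is the alteration the authors found most effective at bringing the mesh code into agreement with the Lagrangian codes.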

  18. Data Compression.

    ERIC Educational Resources Information Center

    Bookstein, Abraham; Storer, James A.

    1992-01-01

    Introduces this issue, which contains papers from the 1991 Data Compression Conference, and defines data compression. The two primary functions of data compression are described, i.e., storage and communications; types of data using compression technology are discussed; compression methods are explained; and current areas of research are…

  19. Argon X-ray line imaging - A compression diagnostic for inertial confinement fusion targets

    SciTech Connect

    Koppel, L.N.

    1980-01-01

    The paper describes argon X-ray line imaging, which measures the compressed fuel volume directly by forming one-dimensional images of X-rays from argon gas seeded into the D-T fuel. The photon energies of the X-rays are recorded on the film of a diffraction-crystal spectrograph. Neutron activation, which detects activated nuclei produced by the interaction of 14-MeV neutrons with the selected materials of the target, allows the final compressed fuel density to be calculated using a hydrodynamics simulation code together with the total number of activated nuclei and the neutron yield. Argon X-ray line imaging appears to be a valid fuel-compression diagnostic for final fuel densities in the range of 10 to 50 times liquid D-T density.

  20. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. Purpose of this type of coding is to achieve data compression in the sense that coded data represents original data perfectly (noiselessly) while taking fewer bits to do so. Routines are universal because they apply to virtually any "real-world" data source.

  2. Hydrodynamic efficiencies in implosion experiments

    NASA Astrophysics Data System (ADS)

    Koenig, Michel; Fabre, Edouard; Boudenne, Jean-Michel; Michard, Alain; Fews, P.

    1990-04-01

    Experiments on the implosion of high aspect ratio glass microballoons, filled with an equimolar mixture of 10 atmosphere D-T gas and aimed at determining hydrodynamic efficiencies and the characteristics (density and temperature) of the wall, are described. Experimental results for kinetic and thermal variations, obtained for 350 and 450 micrometer targets at an absorbed laser energy of about 120 J, are compared with values given by simulations with the FILM code. The comparison is made at the time of shock reflection on the internal wall of the shell. The use of X-ray spectroscopy in such experiments is discussed.

  3. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  4. Hydrodynamical comparison test of solar models

    NASA Astrophysics Data System (ADS)

    Bach, K.; Kim, Y.-C.

    2012-12-01

    We present three-dimensional radiation-hydrodynamical (RHD) simulations of solar surface convection based on the three most recent solar mixtures: Grevesse & Sauval (1998); Asplund, Grevesse & Sauval (2005); and Asplund, Grevesse, Sauval & Scott (2009). The outer convection zone of the Sun is an extremely turbulent region composed of partly ionized compressible gases at high temperature. The super-adiabatic layer (SAL) is the transition region where the transport of energy changes drastically from convection to radiation. In order to describe physical processes accurately, a realistic treatment of radiation should be considered as well as convection. However, the newly updated solar mixtures, established from radiation-hydrodynamics, do not properly reproduce the internal structure inferred from helioseismology. In order to address this fundamental problem, solar models are constructed consistently based on each mixture and used as initial configurations for radiation-hydrodynamical simulations. From our simulations, we find that the turbulent flows in each model are statistically similar in the SAL.

  5. Hydrodynamics of micropipette aspiration.

    PubMed Central

    Drury, J L; Dembo, M

    1999-01-01

    The dynamics of human neutrophils during micropipette aspiration are frequently analyzed by approximating these cells as simple slippery droplets of viscous fluid. Here, we present computations that reveal the detailed predictions of the simplest and most idealized case of such a scheme; namely, the case where the fluid of the droplet is homogeneous and Newtonian, and the surface tension of the droplet is constant. We have investigated the behavior of this model as a function of surface tension, droplet radius, viscosity, aspiration pressure, and pipette radius. In addition, we have tabulated a dimensionless factor, M, which can be utilized to calculate the apparent viscosity of the slippery droplet. Computations were carried out using a low Reynolds number hydrodynamics transport code based on the finite-element method. Although idealized and simplistic, we find that the slippery droplet model predicts many observed features of neutrophil aspiration. However, there are certain features that are not observed in neutrophils. In particular, the model predicts dilation of the membrane past the point of being continuous, as well as a reentrant jet at high aspiration pressures. PMID:9876128

  6. Astrophysical smooth particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Rosswog, Stephan

    2009-04-01

    The paper presents a detailed review of the smooth particle hydrodynamics (SPH) method with particular focus on its astrophysical applications. We start by introducing the basic ideas and concepts and thereby outline all ingredients that are necessary for a practical implementation of the method in a working SPH code. Much of SPH's success relies on its excellent conservation properties and therefore the numerical conservation of physical invariants receives much attention throughout this review. The self-consistent derivation of the SPH equations from the Lagrangian of an ideal fluid is the common theme of the remainder of the text. We derive a modern, Newtonian SPH formulation from the Lagrangian of an ideal fluid. It accounts for changes of the local resolution lengths which result in corrective, so-called "grad-h-terms". We extend this strategy to special relativity for which we derive the corresponding grad-h equation set. The variational approach is further applied to a general-relativistic fluid evolving in a fixed, curved background space-time. Particular care is taken to explicitly derive all relevant equations in a coherent way.
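
    The SPH density summation at the heart of the method reviewed above can be sketched with the standard M4 cubic spline kernel. A fixed smoothing length is used here for simplicity; the "grad-h" correction terms the review derives arise precisely when h varies per particle:

    ```python
    import numpy as np

    def w_cubic(r, h):
        """Standard M4 cubic spline SPH kernel in 3D, with support radius 2h."""
        q = r / h
        sigma = 1.0 / (np.pi * h**3)          # 3D normalization
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return sigma * w

    def sph_density(pos, masses, h):
        """rho_i = sum_j m_j W(|r_i - r_j|, h): the SPH density summation
        (brute-force pairwise distances; real codes use neighbor search)."""
        diff = pos[:, None, :] - pos[None, :, :]
        r = np.sqrt((diff**2).sum(axis=-1))
        return (masses[None, :] * w_cubic(r, h)).sum(axis=1)

    rng = np.random.default_rng(1)
    pos = rng.random((50, 3))                 # 50 particles in a unit box
    rho = sph_density(pos, np.full(50, 1.0 / 50), h=0.3)
    print(rho.mean())
    ```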

  7. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  8. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice shall be used interchangeably.

  9. Scaling supernova hydrodynamics to the laboratory

    SciTech Connect

    Kane, J O; Remington, B A; Arnett, D; Fryxell, B A; Drake, R P

    1998-11-10

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, they are attempting to rigorously scale the physics of the supernova to the laboratory. The scaling of hydrodynamics on microscopic laser scales to hydrodynamics on SN-size scales is presented and requirements are established. Initial results were reported in [1]. Next, the appropriate conditions are generated on the NOVA laser: a 10-15 Mbar shock is driven into the interface of a two-layer planar target, which triggers perturbation growth due to the Richtmyer-Meshkov instability, and due to the Rayleigh-Taylor instability as the interface decelerates. This scales the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10^3 s. The experiment is modeled using the hydrodynamics codes HYADES and CALE, and the supernova code PROMETHEUS. Results of the experiments and simulations are presented. An analysis of the spike and bubble velocities using potential flow theory and Ott thin shell theory is presented, as well as a study of the 2D vs. 3D difference in growth at the He-H interface of SN 1987A.

  10. Inertial-Fusion-Related Hydrodynamic Instabilities in a Spherical Gas Bubble Accelerated by a Planar Shock Wave

    SciTech Connect

    Niederhaus, John; Ranjan, Devesh; Anderson, Mark; Oakley, Jason; Bonazza, Riccardo; Greenough, Jeff

    2005-05-15

    Experiments studying the compression and unstable growth of a dense spherical bubble in a gaseous medium subjected to a strong planar shock wave (2.8 < M < 3.4) are performed in a vertical shock tube. The test gas is initially contained in a free-falling spherical soap-film bubble, and the shocked bubble is imaged using planar laser diagnostics. Concurrently, simulations are carried out using a compressible hydrodynamics code in r-z axisymmetric geometry. Experiments and computations indicate the formation of characteristic vortical structures in the post-shock flow, due to Richtmyer-Meshkov and Kelvin-Helmholtz instabilities, and smaller-scale vortices due to secondary effects. Inconsistencies between experimental and computational results are examined, and the usefulness of the current axisymmetric approach is evaluated.

  11. File Compression and Expansion of the Genetic Code by the use of the Yin/Yang Directions to find its Sphered Cube.

    PubMed

    Castro-Chavez, Fernando

    2014-07-01

    The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient "Book of Changes" or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64-hexagrams of the ancient I Ching, as if they were the 64 codons or words of the genetic code. In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids to the directionality of their resulting blocks of codons, grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from humans versus Neanderthal, as well as to Negadi's work on the importance of the number 384 within the genetic code. Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid displayed the two long (binary number one) and older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger and broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the genetic code and its plasticity was helpful to compare compatible sequences (human compatible to human versus Neanderthal compatible to Neanderthal), while further exploring the wondrous biodiversity of nature for educational purposes.

  12. Design of Fiber Optic Sensors for Measuring Hydrodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lyons, Donald R.; Quiett, Carramah; Griffin, DeVon (Technical Monitor)

    2001-01-01

    The science of optical hydrodynamics involves relating the optical properties to the fluid dynamic properties of a hydrodynamic system. Fiber-optic sensors are being designed for measuring the hydrodynamic parameters of various systems. As a flowing fluid encounters a flat surface, it forms a boundary layer near this surface. The region between the boundary layer and the flat plate contains information about parameters such as viscosity, compressibility, pressure, density, and velocity. An analytical model has been developed for examining the hydrodynamic parameters near the surface of a fiber-optic sensor. An analysis of the conservation of momentum, the continuity equation and the Navier-Stokes equation for compressible flow were used to develop expressions for the velocity and the density as a function of the distance along the flow and above the surface. When examining the flow near the surface, these expressions are used to estimate the sensitivity required to perform direct optical measurements and to derive the shear force for indirect optical measurements. The derivation of this result permits the incorporation of better design parameters for other fiber-based sensors. Future work includes analyzing the optical parametric designs of fiber-optic sensors, modeling sensors to utilize the parameters for hydrodynamics and applying different mixtures of hydrodynamic flow. Finally, the fabrication of fiber-optic sensors for hydrodynamic flow applications of the type described in this presentation could enhance aerospace, submarine, and medical technology.

  13. Revealing the Physics of Galactic Winds Through Massively-Parallel Hydrodynamics Simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Evan Elizabeth

    This thesis documents the hydrodynamics code Cholla and a numerical study of multiphase galactic winds. Cholla is a massively-parallel, GPU-based code designed for astrophysical simulations that is freely available to the astrophysics community. A static-mesh Eulerian code, Cholla is ideally suited to carrying out massive simulations (> 2048³ cells) that require very high resolution. The code incorporates state-of-the-art hydrodynamics algorithms including third-order spatial reconstruction, exact and linearized Riemann solvers, and unsplit integration algorithms that account for transverse fluxes on multidimensional grids. Operator-split radiative cooling and a dual-energy formalism for high Mach number flows are also included. An extensive test suite demonstrates Cholla's superior ability to model shocks and discontinuities, while the GPU-native design makes the code extremely computationally efficient - speeds of 5-10 million cell updates per GPU-second are typical on current hardware for 3D simulations with all of the aforementioned physics. The latter half of this work comprises a comprehensive study of the mixing between a hot, supernova-driven wind and cooler clouds representative of those observed in multiphase galactic winds. Both adiabatic and radiatively-cooling clouds are investigated. The analytic theory of cloud-crushing is applied to the problem, and adiabatic turbulent clouds are found to be mixed with the hot wind on similar timescales as the classic spherical case (4-5 t_cc) with an appropriate rescaling of the cloud-crushing time. Radiatively cooling clouds survive considerably longer, and the differences in evolution between turbulent and spherical clouds cannot be reconciled with a simple rescaling. The rapid incorporation of low-density material into the hot wind implies efficient mass-loading of hot phases of galactic winds. At the same time, the extreme compression of high-density cloud material leads to long-lived but slow-moving clumps

  14. Supernova-relevant hydrodynamic instability experiment on the Nova laser

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Castor, J.; Rubenchik, A.; Berning, M.

    1996-02-12

    Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. On quite a separate front, the detrimental effect of hydrodynamic instabilities in inertial confinement fusion (ICF) has long been known. Tools from both areas are being tested on a common project. At Lawrence Livermore National Laboratory (LLNL), the Nova Laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using hydrodynamics codes at the Laboratory, and astrophysical codes successfully used to model the hydrodynamics of supernovae. A two-layer package composed of Cu and CH2 with a single mode sinusoidal 1D perturbation at the interface, shocked by indirect laser drive from the Cu side of the package, produced significant Rayleigh-Taylor (RT) growth in the nonlinear regime. The scale and gross structure of the growth was successfully modeled, by mapping an early-time simulation done with 1D HYADES, a radiation transport code, into 2D CALE, a LLNL hydrodynamics code. The HYADES result was also mapped in 2D into the supernova code PROMETHEUS, which was also able to reproduce the scale and gross structure of the growth.

  15. Supernova-relevant hydrodynamic instability experiment on the Nova laser

    NASA Astrophysics Data System (ADS)

    Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Castor, J.; Rubenchik, A.

    1996-02-01

    Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. On quite a separate front, the detrimental effect of hydrodynamic instabilities in Inertial Confinement Fusion (ICF) has long been known. Tools from both areas are being tested on a common project. At Lawrence Livermore National Laboratory (LLNL), the Nova Laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using hydrodynamics codes at the Laboratory, and astrophysical codes successfully used to model the hydrodynamics of supernovae. A two-layer package composed of Cu and CH2 with a single mode sinusoidal 1D perturbation at the interface, shocked by indirect laser drive from the Cu side of the package, produced significant Rayleigh-Taylor (RT) growth in the nonlinear regime. The scale and gross structure of the growth was successfully modeled, by mapping an early-time simulation done with 1D HYADES, a radiation transport code, into 2D CALE, a LLNL hydrodynamics code. The HYADES result was also mapped in 2D into the supernova code PROMETHEUS, which was also able to reproduce the scale and gross structure of the growth.

  16. Shock Propagation and Instability Structures in Compressed Silica Aerogels

    SciTech Connect

    Howard, W M; Molitoris, J D; DeHaven, M R; Gash, A E; Satcher, J H

    2002-05-30

    We have performed a series of experiments examining shock propagation in low density aerogels. High-pressure (~100 kbar) shock waves are produced by detonating high explosives. Radiography is used to obtain a time sequence of images of the shocks as they enter and traverse the aerogel. We compress the aerogel by impinging shock waves on either one or both sides of an aerogel slab. The shock wave initially transmitted to the aerogel is very narrow and flat, but disperses and curves as it propagates. Optical images of the shock front reveal the initial formation of a hot dense region that cools and evolves into a well-defined microstructure. Structures observed in the shock front are examined in the framework of hydrodynamic instabilities generated as the shock traverses the low-density aerogel. The primary features of shock propagation are compared to simulations, which also include modeling the detonation of the high explosive, with a 2-D Arbitrary Lagrange Eulerian hydrodynamics code. The code includes a detailed thermochemical equation of state and rate law kinetics. We will present an analysis of the data from the time resolved imaging diagnostics and form a consistent picture of the shock transmission, propagation and instability structure.

  17. PLUTO code for computational Astrophysics: News and Developments

    NASA Astrophysics Data System (ADS)

    Tzeferacos, P.; Mignone, A.

    2012-01-01

    We present an overview on recent developments and functionalities available with the PLUTO code for astrophysical fluid dynamics. The recent extension of the code to a conservative finite difference formulation and high order spatial discretization of the compressible equations of magneto-hydrodynamics (MHD), complementary to its finite volume approach, allows for a highly accurate treatment of smooth flows, while avoiding loss of accuracy near smooth extrema and providing sharp non-oscillatory transitions at discontinuities. Among the novel features, we present alternative, fully explicit treatments to include non-ideal dissipative processes (namely viscosity, resistivity and anisotropic thermal conduction), that do not suffer from the usual timestep limitation of explicit time stepping. These methods, offspring of the multistep Runge-Kutta family that use a Chebyshev polynomial recursion, are competitive substitutes for computationally expensive implicit schemes that involve sparse matrix inversion. Several multi-dimensional benchmarks and applications assess the potential of PLUTO to efficiently handle many astrophysical problems.

  18. File Compression and Expansion of the Genetic Code by the use of the Yin/Yang Directions to find its Sphered Cube

    PubMed Central

    Castro-Chavez, Fernando

    2014-01-01

    Objective The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient “Book of Changes” or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. Methods The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64-hexagrams of the ancient I Ching, as if they were the 64-codons or words of the genetic code. Results In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids, to the directionality of their resulting blocks of codons grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from Humans versus Neanderthal, as well as to Negadi’s work on the importance of the number 384 within the genetic code. Conclusions Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid displayed the two long (binary number one) and older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger and broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the genetic code and its plasticity was helpful to compare compatible sequences (human compatible to human versus neanderthal compatible to neanderthal), while further exploring the wondrous biodiversity of nature for educational purposes.

  19. Entropy-limited hydrodynamics: a novel approach to relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Guercilena, Federico; Radice, David; Rezzolla, Luciano

    2017-07-01

    We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi: 10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to a black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi: 10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to ~50% speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers.
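
    The flux hybridisation described above can be sketched in a few lines. The following is a minimal 1D illustration, not the authors' implementation: a simple centered flux stands in for the high-order scheme, and the per-interface weight theta stands in for the entropy-driven indicator.

```python
import numpy as np

def lax_friedrichs_flux(u_l, u_r, f_l, f_r, alpha):
    """First-order, dissipative Lax-Friedrichs numerical flux."""
    return 0.5 * (f_l + f_r) - 0.5 * alpha * (u_r - u_l)

def blended_flux(u, f, theta, alpha):
    """Blend a high-order flux (here: a simple centered flux) with the
    Lax-Friedrichs flux at each cell interface.

    theta[i] in [0, 1] is the per-interface limiter; in ELH it is
    driven by a measure of locally generated entropy (placeholder here).
    alpha is an estimate of the maximum wave speed.
    """
    u_l, u_r = u[:-1], u[1:]
    f_l, f_r = f[:-1], f[1:]
    f_high = 0.5 * (f_l + f_r)                       # unlimited flux
    f_low = lax_friedrichs_flux(u_l, u_r, f_l, f_r, alpha)
    return (1.0 - theta) * f_high + theta * f_low    # convex blend
```

    With theta = 0 everywhere the scheme reduces to the unfiltered high-order flux; with theta = 1 it falls back to pure Lax-Friedrichs near troubled cells.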

  20. Hydrodynamic effects in proteins.

    PubMed

    Szymczak, Piotr; Cieplak, Marek

    2011-01-26

    Experimental and numerical results pertaining to flow-induced effects in proteins are reviewed. Special emphasis is placed on shear-induced unfolding and on the role of solvent mediated hydrodynamic interactions in the conformational transitions in proteins.

  1. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.

    PubMed

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed mechanism improves coding performance under various application conditions.

  2. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed mechanism improves coding performance under various application conditions. PMID:26999741

  3. Circumstellar Hydrodynamics and Spectral Radiation in ALGOLS

    NASA Astrophysics Data System (ADS)

    Terrell, Dirk Curtis

    1994-01-01

    Algols are the remnants of binary systems that have undergone large scale mass transfer. This dissertation presents the results of the coupling of a hydrodynamical model and a radiative model of the flow of gas from the inner Lagrangian point. The hydrodynamical model is a fully Lagrangian, three-dimensional scheme with a novel treatment of viscosity and an implementation of the smoothed particle hydrodynamics method to compute pressure gradients. Viscosity is implemented by allowing particles within a specified interaction length to share momentum. The hydrodynamical model includes a provision for computing the self-gravity of the disk material, although it is not used in the present application to Algols. Hydrogen line profiles and equivalent widths computed with a code by Drake and Ulrich are compared with observations of both short and long period Algols. More sophisticated radiative transfer computations are done with the escape probability code of Ko and Kallman, which includes the spectral lines of thirteen elements. The locations and velocities of the gas particles, and the viscous heating from the hydro program are supplied to the radiative transfer program, which computes the equilibrium temperature of the gas and generates its emission spectrum. Intrinsic line profiles are assumed to be delta functions and are properly Doppler shifted and summed for gas particles that are not eclipsed by either star. Polarization curves are computed by combining the hydro program with the Wilson-Liou polarization program. Although the results are preliminary, they indicate that polarization observations hold great promise for studying circumstellar matter.

  4. RECENT RESULTS OF RADIATION HYDRODYNAMICS AND TURBULENCE EXPERIMENTS IN CYLINDRICAL GEOMETRY.

    SciTech Connect

    Magelssen G. R.; Scott, J. M.; Batha, S. H.; Holmes, R. L.; Lanier, N. E.; Tubbs, D. L.; Elliott, N. E.; Dunne, A. M.; Rothman, S.; Parker, K. W.; Youngs, D.

    2001-01-01

    Cylindrical implosion experiments at the University of Rochester laser facility, OMEGA, were performed to study radiation hydrodynamics and compressible turbulence in convergent geometry. Laser beams were used to directly drive a cylinder with either a gold (Au) or dichloropolystyrene (C6H8Cl2) marker layer placed between a solid CH ablator and a foam cushion. When the cylinder is imploded, the Richtmyer-Meshkov instability and convergence cause the marker layer to increase in thickness. Marker thickness measurements were made by x-ray backlighting along the cylinder axis. Experimental results of the effect of surface roughness will be presented. Computational results with an AMR code are in good agreement with the experimental results from targets with the roughest surface. Computational results suggest that marker layer 'end effects' and bowing increase the effective thickness of the marker layer at lower levels of roughness.

  5. Effect of compressibility on the hypervelocity penetration

    NASA Astrophysics Data System (ADS)

    Song, W. J.; Chen, X. W.; Chen, P.

    2017-06-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  6. Large scale water entry simulation with smoothed particle hydrodynamics on single- and multi-GPU systems

    NASA Astrophysics Data System (ADS)

    Ji, Zhe; Xu, Fei; Takahashi, Akiyuki; Sun, Yu

    2016-12-01

    In this paper, a Weakly Compressible Smoothed Particle Hydrodynamics (WCSPH) framework is presented utilizing the parallel architecture of single- and multi-GPU (Graphic Processing Unit) platforms. The program is developed for water entry simulations where an efficient potential based contact force is introduced to tackle the interaction between fluid and solid particles. The single-GPU SPH scheme is implemented with a series of optimizations to achieve high performance. To go beyond the memory limitation of a single GPU, the scheme is further extended to multi-GPU platforms based on an improved 3D domain decomposition and inter-node data communication strategy. A typical benchmark test of wedge entry is investigated in varied dimensions and scales to validate the accuracy and efficiency of the program. The results of 2D and 3D benchmark tests show great consistency with experiments and better accuracy than other numerical models. The performance of the single-GPU code is assessed by comparing with serial and parallel CPU codes. The improvement of the domain decomposition strategy is verified, and a study on the scalability and efficiency of the multi-GPU code is carried out as well by simulating tests of varied scales on different numbers of GPUs. Lastly, the single- and multi-GPU codes are further compared with existing state-of-the-art SPH parallel frameworks for a comprehensive assessment.
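
    The core operation any WCSPH scheme parallelizes, on GPU or CPU, is a kernel-weighted summation over neighbouring particles. A minimal 1D sketch with the standard cubic-spline kernel (illustrative only; the paper's 3D GPU implementation with neighbour lists and contact forces is far more involved):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 1D cubic-spline SPH kernel with support radius 2h
    (normalization constant 2/(3h) so the kernel integrates to 1)."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def summation_density(x, m, h):
    """WCSPH density estimate: rho_i = sum_j m_j W(x_i - x_j, h).

    O(N^2) pairwise version for clarity; real codes restrict the sum
    to neighbours inside the kernel support via cell or Verlet lists.
    """
    dx = x[:, None] - x[None, :]
    return (m[None, :] * cubic_spline_w(dx, h)).sum(axis=1)
```

    For uniformly spaced particles of equal mass, the interior density estimate recovers m/dx to within a fraction of a percent, which is the usual sanity check for the kernel normalization.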

  7. VH1 Hydrodynamics for Introductory Astronomy

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang; Blondin, John

    1997-05-01

    Improvements in personal computer operating systems and hardware now make it possible to run research grade Fortran simulations on student computers. Unfortunately, many legacy applications do not have a graphical user interface and are sometimes hard-coded to a specific problem, making them unsuitable for beginning students. A good way to re-purpose such legacy code for undergraduate teaching is to build a graphical front end using a Rapid Application Development (RAD) tool that starts the simulation as a separate thread. This technique is being used with Virginia Hydrodynamics One (VH1) to provide an introduction to computational hydrodynamics. Standard test problems including gravitational collapse of an interstellar cloud, radiation cooling, and formation of shocks are demonstrated using this approach on Microsoft Windows 95/NT.

  8. Smoothed Particle Hydrodynamics: Applications Within DSTO

    DTIC Science & Technology

    2006-10-01


  9. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of total cost in data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and data volumes may go beyond the limit of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms.
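
    For context, specialized DNA compressors such as the one described above are measured against the trivial 2-bit-per-base packing baseline. A minimal sketch of that baseline (not the SeqCompress algorithm, which adds a statistical model and arithmetic coding on top):

```python
# 2 bits per base: the information-theoretic floor for uniform,
# independent ACGT symbols, and the baseline DNA compressors must beat.
_CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_2bit(seq):
    """Pack an ACGT string into bytes, 4 bases per byte.

    Returns (packed bytes, original length); the length is needed to
    discard padding bits on decode.
    """
    out = bytearray()
    byte = 0
    for i, base in enumerate(seq):
        byte = (byte << 2) | _CODE[base]
        if i % 4 == 3:
            out.append(byte)
            byte = 0
    rem = len(seq) % 4
    if rem:
        out.append(byte << (2 * (4 - rem)))   # left-align the tail
    return bytes(out), len(seq)

def unpack_2bit(data, n):
    """Inverse of pack_2bit: recover the first n bases."""
    bases = "ACGT"
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(bases[(byte >> shift) & 0b11])
    return "".join(seq[:n])
```

    This yields a fixed 4:1 ratio over ASCII; statistical coders gain further by exploiting repeats and skewed base frequencies.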

  10. Supernova hydrodynamics experiments using the Nova laser*

    NASA Astrophysics Data System (ADS)

    Remington, B. A.; Glendinning, S. G.; Estabrook, K. G.; London, R. A.; Wallace, R. J.; Kane, J.; Arnett, D.; Drake, R. P.; Liang, E.; McCray, R.; Rubenchik, A.

    1997-04-01

    We are developing experiments using the Nova laser [1,2] to investigate two areas of physics relevant to core-collapse supernovae (SN): (1) compressible nonlinear hydrodynamic mixing and (2) radiative shock hydrodynamics. In the former, we are examining the differences between the 2D and 3D evolution of the Rayleigh-Taylor instability, an issue critical to the observables emerging from SN in the first year after exploding. In the latter, we are investigating the evolution of a colliding plasma system relevant to the ejecta-stellar wind interactions of the early stages of SN remnant formation. The experiments and astrophysical implications will be discussed. *Work performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract number W-7405-ENG-48. [1] J. Kane et al., in press, Astrophys. J. Lett. (March-April, 1997). [2] B.A. Remington et al., in press, Phys. Plasmas (May, 1997).

  11. Scaling Laws for Hydrodynamically Equivalent Implosions

    NASA Astrophysics Data System (ADS)

    Murakami, Masakatsu

    2001-10-01

    The EPOC (equivalent physics of confinement) scenario for the proof of principle of high gain inertial confinement fusion is presented, where the key concept "hydrodynamically equivalent implosions" plays a crucial role. Scaling laws on the target and confinement parameters are derived by applying the Lie group analysis to the PDE (partial differential equation) chain of the hydrodynamic system. It turns out that the conventional scaling law based on adiabatic approximation significantly differs from one that takes energy transport effects such as electron heat conduction into account. Confinement plasma parameters of the hot spot, such as the central temperature and the areal mass density at peak compression, are obtained with a self-similar solution for spherical implosions.

  12. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
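
    The fill-and-difference decomposition described above can be sketched with a plain Jacobi relaxation standing in for the patent's multi-grid Laplace solver (an illustrative simplification; np.roll gives periodic neighbours for brevity, and the edge mask is assumed given):

```python
import numpy as np

def fill_by_laplace(image, edge_mask, iters=500):
    """Fill non-edge pixels by relaxing toward Laplace's equation while
    holding edge pixels fixed at their image values.

    Plain Jacobi iteration for clarity; the patented method uses a
    multi-grid solver, which converges far faster on large images.
    """
    filled = np.where(edge_mask, image, image.mean()).astype(float)
    for _ in range(iters):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0)
                      + np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edge_mask, filled, avg)   # keep edges pinned
    return filled

def split_for_coding(image, edge_mask):
    """Return (filled edge array, difference array).

    The two parts are compressed separately; adding them back together
    reconstructs the original image exactly.
    """
    filled = fill_by_laplace(image, edge_mask)
    return filled, image - filled
```

    The difference array is smooth and small in magnitude away from edges, which is what makes compressing the two parts separately pay off.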

  13. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  14. Optimization of radar pulse compression processing

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kim, Woonkyung M.; Lee, Myung-Su

    1997-06-01

    We propose an optimal radar pulse compression technique and evaluate its performance in the presence of Doppler shift. Traditional pulse compression increases the signal strength by transmitting a Barker-coded long pulse. The received signal is then processed by an appropriate correlation processing. This Barker-code radar pulse compression enhances the detection sensitivity while maintaining the range resolution of a single chip of the Barker-coded long pulse. Unfortunately, the technique suffers from range sidelobes, which can mask weak targets in the vicinity of larger targets. Our proposed optimal algorithm completely eliminates the sidelobes at the cost of additional processing.
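
    The sidelobe behavior discussed above follows from the autocorrelation of the Barker code: for the length-13 code, the compressed mainlobe has amplitude 13 while every range sidelobe has magnitude at most 1. A minimal sketch of the correlation processing (the traditional matched filter, not the proposed sidelobe-free algorithm):

```python
import numpy as np

# Length-13 Barker code: the longest known binary sequence whose
# aperiodic autocorrelation sidelobes all have magnitude <= 1.
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1],
                    dtype=float)

def pulse_compress(rx, code=BARKER13):
    """Matched-filter pulse compression: correlate the received
    samples against the transmitted code."""
    return np.correlate(rx, code, mode="full")
```

    Compressing the code against itself shows the 13:1 mainlobe-to-sidelobe ratio directly, which is the detection-sensitivity gain the abstract refers to.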

  15. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for recording 3D scenes of moving macroscopic objects, as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content, and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all previous studies, numerical metrics were used to measure the compression error and, through it, the coding quality. Digital hologram reconstructions are highly speckled, and the speckle pattern is very sensitive to data changes; numerical quality metrics can therefore be misleading. For example, at low compression ratios a numerically significant coding error can have visually negligible effects. Yet in several cases it is of high interest to know how much lossy compression can be achieved while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with the Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless, although physically it is lossy. It was found that 4- to 7.5-fold compression can be obtained with these methods without any perceptible change in the appearance of the video sequences.

  16. Supernova hydrodynamics experiments on the Nova laser

    NASA Astrophysics Data System (ADS)

    Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Rubenchik, A.; Drake, R. P.; Fryxell, B. A.; Muller, E.

    1997-12-01

    The critical roles of hydrodynamic instabilities in SN 1987A and in ICF are well known; 2D-3D differences are important in both areas. In a continuing project at Lawrence Livermore National Laboratory (LLNL), the Nova Laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using LLNL hydro codes, and astrophysics codes used to model supernovae. Initial investigations with two-layer planar packages having 2D sinusoidal interface perturbations are described in Ap.J. 478, L75 (1997). Early-time simulations done with the LLNL 1D radiation transport code HYADES are mapped into the 2D LLNL code CALE and into the multi-D supernova code PROMETHEUS. Work is underway on experiments comparing interface instability growth produced by 2D sinusoidal versus 3D cross-hatch and axisymmetric cylindrical perturbations. Results of the simulations will be presented and compared with experiment. Implications for interpreting supernova observations and for supernova modelling will be discussed. * Work performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract number W-7405-ENG-48.

  17. Wavelet and wavelet packet compression of electrocardiograms.

    PubMed

    Hilton, M L

    1997-05-01

    Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECG's by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECG's are clinically useful.
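
    As a much-simplified illustration of the idea (thresholding of wavelet coefficients, not the EZW coder used in the paper), the following sketch applies one level of the orthonormal Haar transform and retains only the largest detail coefficients; all function names are hypothetical:

```python
import numpy as np

def haar_1d(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = signal.reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def inv_haar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def compress(signal, keep=0.25):
    """Zero all but the largest `keep` fraction of detail coefficients,
    then reconstruct -- a crude stand-in for coefficient coding."""
    approx, detail = haar_1d(signal)
    thresh = np.quantile(np.abs(detail), 1 - keep)
    detail = np.where(np.abs(detail) >= thresh, detail, 0.0)
    return inv_haar_1d(approx, detail)
```

    For a smooth signal most detail coefficients are tiny, so discarding the majority of them changes the reconstruction very little, which is the property the ECG compression ratios above exploit.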

  18. Scaling supernova hydrodynamics to the laboratory

    SciTech Connect

    Kane, J. O.

    1999-06-01

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al., Astrophys. J. 478, L75 (1997). The Nova laser is used to shock two-layer targets, producing Richtmyer-Meshkov (RM) and Rayleigh-Taylor (RT) instabilities at the interfaces between the layers, analogous to instabilities seen at the interfaces of SN 1987A. Because the hydrodynamics in the laser experiments at intermediate times (3-40 ns) and in SN 1987A at intermediate times (5 s-10^4 s) are well described by the Euler equations, the hydrodynamics scale between the two regimes. The experiments are modeled using the hydrodynamics codes HYADES and CALE, and the supernova code PROMETHEUS, thus serving as a benchmark for PROMETHEUS. Results of the experiments and simulations are presented. Analysis of the spike and bubble velocities in the experiment using potential flow theory and a modified Ott thin shell theory is presented. A numerical study of 2D vs. 3D differences in instability growth at the O-He and He-H interface of SN 1987A, and the design for analogous laser experiments, are presented. We discuss further work to incorporate more features of the SN in the experiments, including spherical geometry, multiple layers and density gradients. Past and ongoing work in laboratory and laser astrophysics is reviewed, including experimental work on supernova remnants (SNRs). A numerical study of RM instability in SNRs is presented.

  19. Scaling supernova hydrodynamics to the laboratory

    NASA Astrophysics Data System (ADS)

    Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Bazan, G.; Drake, R. P.; Fryxell, B. A.; Teyssier, R.; Moore, K.

    1999-05-01

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al. [Astrophys. J. 478, L75 (1997) and B. A. Remington et al., Phys. Plasmas 4, 1994 (1997)]. The Nova laser is used to generate a 10-15 Mbar shock at the interface of a two-layer planar target, which triggers perturbation growth due to the Richtmyer-Meshkov instability, and to the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10^3 s. The scaling of hydrodynamics on microscopic laser scales to the SN-size scales is presented. The experiment is modeled using the hydrodynamics codes HYADES [J. T. Larson and S. M. Lane, J. Quant. Spect. Rad. Trans. 51, 179 (1994)] and CALE [R. T. Barton, Numerical Astrophysics (Jones and Bartlett, Boston, 1985), pp. 482-497], and the supernova code PROMETHEUS [P. R. Woodward and P. Colella, J. Comp. Phys. 54, 115 (1984)]. Results of the experiments and simulations are presented. Analysis of the spike-and-bubble velocities using potential flow theory and Ott thin-shell theory is presented, as well as a study of 2D versus 3D differences in perturbation growth at the He-H interface of SN 1987A.

  20. Scaling supernova hydrodynamics to the laboratory

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Bazan, G.; Drake, R.P.; Fryxell, B.A.; Teyssier, R.

    1999-05-01

    Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al. [Astrophys. J. 478, L75 (1997) and B. A. Remington et al., Phys. Plasmas 4, 1994 (1997)]. The Nova laser is used to generate a 10-15 Mbar shock at the interface of a two-layer planar target, which triggers perturbation growth due to the Richtmyer-Meshkov instability, and to the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10^3 s. The scaling of hydrodynamics on microscopic laser scales to the SN-size scales is presented. The experiment is modeled using the hydrodynamics codes HYADES [J. T. Larson and S. M. Lane, J. Quant. Spect. Rad. Trans. 51, 179 (1994)] and CALE [R. T. Barton, Numerical Astrophysics (Jones and Bartlett, Boston, 1985), pp. 482-497], and the supernova code PROMETHEUS [P. R. Woodward and P. Colella, J. Comp. Phys. 54, 115 (1984)]. Results of the experiments and simulations are presented. Analysis of the spike-and-bubble velocities using potential flow theory and Ott thin-shell theory is presented, as well as a study of 2D versus 3D differences in perturbation growth at the He-H interface of SN 1987A.

  1. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
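
    The connection between non-normality and sensitivity can be seen numerically: the epsilon-pseudospectrum of A is the set of points z where the resolvent norm ||(zI - A)^(-1)|| exceeds 1/epsilon, and for a non-normal matrix it can extend far from the eigenvalues. A minimal sketch with illustrative matrices (not the stability operators of the paper):

```python
import numpy as np

def resolvent_norm(A, z):
    """2-norm of (z I - A)^(-1); z lies in the epsilon-pseudospectrum
    exactly when this exceeds 1/epsilon."""
    n = A.shape[0]
    return 1.0 / np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]

n = 30
shift = np.diag(np.ones(n - 1), k=1)      # non-normal: all eigenvalues 0
r_nonnormal = resolvent_norm(shift, 0.5)  # enormous, though z = 0.5 is
                                          # far from the spectrum
r_normal = resolvent_norm(np.zeros((n, n)), 0.5)  # normal case: 1/|z| = 2
```

    A huge resolvent norm at z means a perturbation of size epsilon ~ 1/||(zI - A)^(-1)|| can move an eigenvalue to z, which is precisely the sensitivity mechanism the abstract attributes to non-normality.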

  2. Resurgence in extended hydrodynamics

    NASA Astrophysics Data System (ADS)

    Aniceto, Inês; Spaliński, Michał

    2016-04-01

    It has recently been understood that the hydrodynamic series generated by the Müller-Israel-Stewart theory is divergent and that this large-order behavior is consistent with the theory of resurgence. Furthermore, it was observed that the physical origin of this is the presence of a purely damped nonhydrodynamic mode. It is very interesting to ask whether this picture persists in cases where the spectrum of nonhydrodynamic modes is richer. We take the first step in this direction by considering the simplest hydrodynamic theory which, instead of the purely damped mode, contains a pair of nonhydrodynamic modes of complex conjugate frequencies. This mimics the pattern of black brane quasinormal modes which appear on the gravity side of the AdS/CFT description of N =4 supersymmetric Yang-Mills plasma. We find that the resulting hydrodynamic series is divergent in a way consistent with resurgence and precisely encodes information about the nonhydrodynamic modes of the theory.

  3. Hydrodynamic Vortex on Surfaces

    NASA Astrophysics Data System (ADS)

    Ragazzo, Clodoaldo Grotta; de Barros Viglioni, Humberto Henrique

    2017-04-01

    The equations of motion for a system of point vortices on an oriented Riemannian surface of finite topological type are presented. The equations are obtained from a Green's function on the surface. The uniqueness of the Green's function is established under hydrodynamic conditions at the surface's boundaries and ends. The hydrodynamic force on a point vortex is computed using a new weak formulation of Euler's equation adapted to the point vortex context. An analogy between the hydrodynamic force on a massive point vortex and the electromagnetic force on a massive electric charge is presented as well as the equations of motion for massive vortices. Any noncompact Riemann surface admits a unique Riemannian metric such that a single vortex in the surface does not move ("Steady Vortex Metric"). Some examples of surfaces with steady vortex metric isometrically embedded in R^3 are presented.

  4. Combined effects of laser and non-thermal electron beams on hydrodynamics and shock formation in the Shock Ignition scheme

    NASA Astrophysics Data System (ADS)

    Nicolai, Ph.; Feugeas, J. L.; Touati, M.; Breil, J.; Dubroca, B.; Nguyen-Buy, T.; Ribeyre, X.; Tikhonchuk, V.; Gus'kov, S.

    2014-10-01

    An issue to be addressed in Inertial Confinement Fusion (ICF) is the detailed description of the kinetic transport of relativistic or non-thermal electrons generated by laser within the time and space scales of the imploded target hydrodynamics. We have developed at CELIA the model M1, a fast and reduced kinetic model for relativistic electron transport. The latter has been implemented into the 2D radiation hydrodynamic code CHIC. In the framework of the Shock Ignition (SI) scheme, it has been shown in simplified conditions that the energy transferred by the non-thermal electrons from the corona to the compressed shell of an ICF target could be an important mechanism for the creation of ablation pressure. Nevertheless, in realistic configurations, taking the density profile and the electron energy spectrum into account, the target has to be carefully designed to avoid deleterious effects on compression efficiency. In addition, the electron energy deposition may modify the laser-driven shock formation and its propagation through the target. The non-thermal electron effects on the shock propagation will be analyzed in a realistic configuration.

  5. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array; compressive sensing enables accurate reconstruction from such data using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data, and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector; in particular, single-shot holographic tomography has a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector; scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors, and a hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field
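
    The sparsity-constrained inversion underlying compressive sensing can be illustrated with a toy example. The sketch below uses iterative soft-thresholding (ISTA), a standard l1 reconstruction scheme, on a synthetic underdetermined system; all sizes and names are illustrative, not taken from the dissertation:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=1000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)) / np.sqrt(40)  # 40 measurements, 128 unknowns
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -1.2, 0.8]            # 3-sparse ground truth
y = A @ x_true                                    # underdetermined data
x_hat = ista(A, y)
```

    Even though the system has more unknowns than measurements, the sparsity prior selects (approximately) the correct solution, which is the same mechanism that resolves the axial ambiguity in single-shot holographic tomography.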

  6. Hydrodynamics of bacterial suspensions

    NASA Astrophysics Data System (ADS)

    Arlt, Jochen; Duncan, William J.; Poon, Wilson C. K.

    2005-08-01

    Suspensions of motile E. coli bacteria serve as a model system for experimentally studying the hydrodynamics of active particle suspensions. Colloidal probe particles are localised within a suspension of motile bacteria by use of optical tweezers and their fluctuations are monitored. The activity of the bacteria affects the fluctuations of the probe particles and their correlation, revealing information about the hydrodynamics of the suspension. We highlight experimental problems that make the interpretation of 'single probe' experiments (as reported before in the literature) difficult and present some preliminary results for 'dual probe' cross-correlation experiments.

  7. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    SciTech Connect

    Lomov, I; Pember, R; Greenough, J; Liu, B

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses a hierarchical, structured grid approach first developed by (Berger and Oliger 1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by (Colella et al. 1993), refined by (Greenough et al. 1995), and extended to the solution of solid mechanics problems with significant strength by (Lomov and Rubin 2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations of flows with and without strength will be presented.
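
    The recursive integration scheme described above (one coarse step, multiple fine subcycles to the same time, then synchronization) can be sketched abstractly. The toy hierarchy class below merely records the work done, standing in for a real solver; all names are hypothetical:

```python
class ToyHierarchy:
    """Minimal stand-in for a grid hierarchy that just records the work."""
    def __init__(self, num_levels):
        self.num_levels = num_levels
        self.steps = {lev: [] for lev in range(num_levels)}
        self.syncs = []

    def step(self, level, t, dt):
        self.steps[level].append((t, dt))   # advance one level by one step

    def synchronize(self, coarse, fine):
        self.syncs.append((coarse, fine))   # remove conservation errors

def advance(level, t, dt, hierarchy, refine_ratio=2):
    """Recursive AMR advance: one step on this level, refine_ratio
    subcycles on the next finer level, then a coarse-fine sync."""
    hierarchy.step(level, t, dt)
    if level + 1 < hierarchy.num_levels:
        fine_dt = dt / refine_ratio
        for k in range(refine_ratio):
            advance(level + 1, t + k * fine_dt, fine_dt, hierarchy)
        hierarchy.synchronize(level, level + 1)
```

    With a refinement ratio of 2 and three levels, one call advances level 0 once, level 1 twice, and level 2 four times, with each coarse-fine pair synchronized once per coarse step.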

  8. Smoothed particle hydrodynamics and magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Price, Daniel J.

    2012-02-01

    This paper presents an overview and introduction to smoothed particle hydrodynamics and magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several 'urban myths' regarding SPH, in particular the idea that one can simply increase the 'neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.
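
    The density estimate from which the SPH equations of motion are derived is simply a kernel-weighted sum over particles. A minimal 1D sketch (cubic-spline kernel; function names are illustrative, not from the NDSPMHD code):

```python
import numpy as np

def w_cubic(q, h):
    """Standard 1D cubic-spline (M4) kernel with support radius 2h."""
    sigma = 2.0 / (3.0 * h)                # 1D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    """SPH density estimate: rho_a = sum_b m_b W(|x_a - x_b|, h)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * w_cubic(q, h)).sum(axis=1)
```

    For uniformly spaced particles with h = 1.2 dx, the interior estimate reproduces the true density to a fraction of a percent, which is why the density sum can serve as the starting point for a self-consistent derivation.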

  9. Hydrodynamic simulations of the core helium flash

    NASA Astrophysics Data System (ADS)

    Mocák, Miroslav; Müller, Ewald; Weiss, Achim; Kifonidis, Konstantinos

    2008-10-01

    We describe and discuss hydrodynamic simulations of the core helium flash, using an initial model of a 1.25 M⊙ star with a metallicity of 0.02 near the peak of the flash. Past research concerned with the dynamics of the core helium flash is inconclusive. Its results range from a confirmation of the standard picture, where the star remains in hydrostatic equilibrium during the flash (Deupree 1996), to a disruption or a significant mass loss of the star (Edwards 1969; Cole & Deupree 1980). However, the most recent multidimensional hydrodynamic study (Dearborn et al. 2006) suggests a quiescent behavior of the core helium flash and seems to rule out an explosive scenario. Here we present partial results of a new comprehensive study of the core helium flash, which seem to confirm this qualitative behavior and give better insight into the operation of the convection zone powered by helium burning during the flash. The hydrodynamic evolution is followed on a computational grid in spherical coordinates using our new version of the multi-dimensional hydrodynamic code HERAKLES, which is based on a direct Eulerian implementation of the piecewise parabolic method.

  10. Simple Waves in Ideal Radiation Hydrodynamics

    SciTech Connect

    Johnson, B M

    2008-09-03

    In the dynamic diffusion limit of radiation hydrodynamics, advection dominates diffusion; the latter primarily affects small scales and has negligible impact on the large scale flow. The radiation can thus be accurately regarded as an ideal fluid, i.e., radiative diffusion can be neglected along with other forms of dissipation. This viewpoint is applied here to an analysis of simple waves in an ideal radiating fluid. It is shown that much of the hydrodynamic analysis carries over by simply replacing the material sound speed, pressure and index with the values appropriate for a radiating fluid. A complete analysis is performed for a centered rarefaction wave, and expressions are provided for the Riemann invariants and characteristic curves of the one-dimensional system of equations. The analytical solution is checked for consistency against a finite difference numerical integration, and the validity of neglecting the diffusion operator is demonstrated. An interesting physical result is that for a material component with a large number of internal degrees of freedom and an internal energy greater than that of the radiation, the sound speed increases as the fluid is rarefied. These solutions are an excellent test for radiation hydrodynamic codes operating in the dynamic diffusion regime. The general approach may be useful in the development of Godunov numerical schemes for radiation hydrodynamics.
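
    For the ideal-gas analogue of the system above, the centered rarefaction structure can be written down directly: the self-similar coordinate xi = x/t coincides with the C+ characteristic speed u + c, while the C- Riemann invariant J- = u - 2c/(gamma - 1) is uniform across the fan. A small numerical check (gamma = 5/3 here; per the abstract, a radiation-dominated fluid would simply use the appropriate effective index, e.g. 4/3):

```python
import numpy as np

gamma = 5.0 / 3.0   # adiabatic index (4/3 would mimic a radiation fluid)
c0 = 1.0            # sound speed of the undisturbed gas, which is at rest

# Inside a centered rarefaction fan, xi = x/t = u + c along the C+
# characteristics, while the C- Riemann invariant
#   J- = u - 2c/(gamma - 1)
# keeps its undisturbed value.  Solving these two relations for c and u:
xi = np.linspace(0.0, c0, 200)          # fan coordinates, head at xi = c0
c = (gamma - 1.0) / (gamma + 1.0) * (xi + 2.0 * c0 / (gamma - 1.0))
u = xi - c
J_minus = u - 2.0 * c / (gamma - 1.0)
```

    The invariant is constant across the fan and the flow matches the undisturbed state (u = 0, c = c0) at the head, the kind of analytic structure the abstract proposes as a code verification test.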

  11. Two algorithms for compressing noise like signals

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Akopian, David

    2005-05-01

    Compression is a technique used to encode data so that it requires less storage or memory. Compressing random data matters when we must preserve data that has low redundancy and whose power spectrum is close to that of noise. Noisy signals used in various data-hiding schemes have low redundancy and a low energy spectrum, so lossy compression risks destroying that low-energy content. Since LSB-plane data has low redundancy, lossless compression algorithms such as run-length, Huffman, and arithmetic coding are ineffective at providing a good compression ratio. These problems motivated the development of a new class of compression algorithms for noisy signals. In this paper, we introduce two new compression techniques that compress noise-like random data with reference to a known pseudo-noise sequence generated using a key. In addition, we develop a representation model for digital media using pseudo-noise signals. In simulation, we compare our methods with existing techniques such as run-length coding; the comparison shows that run-length coding cannot compress random data, whereas the proposed algorithms can. Furthermore, the proposed algorithms can be extended to the many kinds of random data used in various applications.
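
    The core idea, compressing noise-like data relative to a known pseudo-noise reference, can be sketched simply: XOR the data with the key-generated sequence and run-length encode the (mostly zero) residual. This is a minimal illustration of the principle, not the authors' actual algorithms:

```python
import itertools
import random

def pn_sequence(key, n):
    """Pseudo-noise bit sequence reproducible from a shared key."""
    rng = random.Random(key)
    return [rng.randint(0, 1) for _ in range(n)]

def rle(bits):
    """Run-length encode a bit sequence as (bit, run_length) pairs."""
    return [(b, len(list(g))) for b, g in itertools.groupby(bits)]

def compress_with_reference(data_bits, key):
    """XOR the data with the key-generated PN sequence; if the data is
    close to that sequence, the residual is mostly zeros and run-length
    coding of the residual becomes effective."""
    ref = pn_sequence(key, len(data_bits))
    residual = [d ^ r for d, r in zip(data_bits, ref)]
    return rle(residual)
```

    Decompression reverses the steps: expand the run-length pairs and XOR with the same key-generated sequence. Direct run-length coding of the random bits yields hundreds of runs, while the residual against the reference collapses to a handful.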

  12. Hydrodynamic response of solid target heated by heavy ion beams from future facility HIAF

    NASA Astrophysics Data System (ADS)

    Ren, Jieru; Zhao, Yongtao; Cheng, Rui; Xu, Zhongfeng; Xiao, Guoqing

    2017-09-01

    The hydrodynamic response of a solid target heated by heavy ion beams at the High Intensity Accelerator Facility (HIAF) project was simulated with a 1-D computer code, with the energy deposition benchmarked against a 2-D program. The work serves to show the prospects of the HIAF project for High Energy Density Physics (HEDP) studies and to provide helpful information for future experiments. Various target materials and schemes are used in the calculations. The results show that in the first phase of the HIAF project the available ion beam is already a powerful tool for generating HED matter with a specially designed target, and that the second phase of the project will extend the accessible states of matter significantly further. Moreover, the hydrodynamic behavior of the target under direct heating indicates that the beam parameter design for HEDP research involves a compromise: for example, with higher intensity or a smaller focal spot, the beam pulse must be compressed to a length short enough to avoid target dispersal before the end of the pulse.

  13. Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics

    SciTech Connect

    Wijesinghe, S; Hornung, R; Garcia, A; Hadjiconstantinou, N

    2004-04-15

    We present an adaptive mesh and algorithmic refinement (AMAR) scheme for modeling multi-scale hydrodynamics. The AMAR approach extends standard conservative adaptive mesh refinement (AMR) algorithms by providing a robust flux-based method for coupling an atomistic fluid representation to a continuum model. The atomistic model is applied locally in regions where the continuum description is invalid or inaccurate, such as near strong flow gradients and at fluid interfaces, or when the continuum grid is refined to the molecular scale. The need for such ''hybrid'' methods arises from the fact that hydrodynamics modeled by continuum representations are often under-resolved or inaccurate while solutions generated using molecular resolution globally are not feasible. In the implementation described herein, Direct Simulation Monte Carlo (DSMC) provides an atomistic description of the flow and the compressible two-fluid Euler equations serve as our continuum-scale model. The AMR methodology provides local grid refinement while the algorithm refinement feature allows the transition to DSMC where needed. The continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. Our AMAR application code is implemented in C++ and is built upon the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) framework developed at Lawrence Livermore National Laboratory. SAMRAI provides the parallel adaptive gridding algorithm and enables the coupling between the continuum and atomistic methods.

  14. Observation of Compressible Plasma Mix in Cylindrically Convergent Implosions

    NASA Astrophysics Data System (ADS)

    Barnes, Cris W.; Batha, Steven H.; Lanier, Nicholas E.; Magelssen, Glenn R.; Tubbs, David L.; Dunne, A. M.; Rothman, Steven R.; Youngs, David L.

    2000-10-01

    An understanding of hydrodynamic mix in convergent geometry will be of key importance in the development of a robust ignition/burn capability on NIF, LMJ and future pulsed power machines. We have made use of the OMEGA laser facility at the University of Rochester to investigate directly the mix evolution in a convergent geometry, compressible plasma regime. The experiments comprise a plastic cylindrical shell imploded by direct laser irradiation. The cylindrical shell surrounds a lower density plastic foam which provides sufficient back pressure to allow the implosion to stagnate at a sufficiently high radius to permit quantitative radiographic diagnosis of the interface evolution near turnaround. The susceptibility to mix of the shell-foam interface is varied by choosing different density material for the inner shell surface (thus varying the Atwood number). This allows the study of shock-induced Richtmyer-Meshkov growth during the coasting phase, and Rayleigh-Taylor growth during the stagnation phase. The experimental results will be described along with calculational predictions using various radiation hydrodynamics codes and turbulent mix models.

  15. Hydrodynamics of the Dirac spectrum

    DOE PAGES

    Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail

    2015-12-15

    We discuss a hydrodynamical description of the eigenvalues of the Dirac spectrum in even dimensions in the vacuum and in the large N (volume) limit. The linearized hydrodynamics supports sound waves. The hydrodynamical relaxation of the eigenvalues is captured by a hydrodynamical (tunneling) minimum configuration which follows from a pertinent form of Euler equation. As a result, the relaxation from a phase of unbroken chiral symmetry to a phase of broken chiral symmetry occurs over a time set by the speed of sound.

  16. Modeling multiphase flow using fluctuating hydrodynamics.

    PubMed

    Chaudhri, Anuj; Bell, John B; Garcia, Alejandro L; Donev, Aleksandar

    2014-09-01

    Fluctuating hydrodynamics provides a model for fluids at mesoscopic scales where thermal fluctuations can have a significant impact on the behavior of the system. Here we investigate a model for fluctuating hydrodynamics of a single-component, multiphase flow in the neighborhood of the critical point. The system is modeled using a compressible flow formulation with a van der Waals equation of state, incorporating a Korteweg stress term to treat interfacial tension. We present a numerical algorithm for modeling this system based on an extension of algorithms developed for fluctuating hydrodynamics for ideal fluids. The scheme is validated by comparison of measured structure factors and capillary wave spectra with equilibrium theory. We also present several nonequilibrium examples to illustrate the capability of the algorithm to model multiphase fluid phenomena in a neighborhood of the critical point. These examples include a study of the impact of fluctuations on the spinodal decomposition following a rapid quench, as well as the piston effect in a cavity with supercooled walls. The conclusion in both cases is that thermal fluctuations affect the size and growth of the domains in off-critical quenches.

  17. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
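
    The scheme is easy to illustrate with the (7,4) Hamming code: a 7-bit source block of weight at most 1 (a sparse "error pattern") is compressed losslessly to its 3-bit syndrome. A minimal sketch of this idea, not the paper's general construction:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j + 1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(x):
    """Compress a 7-bit source block to its 3-bit syndrome H x mod 2."""
    return H @ x % 2

def decompress(s):
    """Recover the minimum-weight (coset leader) source pattern: for the
    Hamming code, a nonzero syndrome is the binary index of the lone 1."""
    x = np.zeros(7, dtype=int)
    idx = s[0] * 4 + s[1] * 2 + s[2]
    if idx:
        x[idx - 1] = 1
    return x
```

    Every weight-0 or weight-1 block round-trips exactly through its syndrome, compressing 7 bits to 3; sparser-than-average sources are precisely where syndrome source coding approaches the source entropy.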

  18. Particle Mesh Hydrodynamics for Astrophysics Simulations

    NASA Astrophysics Data System (ADS)

    Chatelain, Philippe; Cottet, Georges-Henri; Koumoutsakos, Petros

    We present a particle method for the simulation of three-dimensional compressible hydrodynamics based on a hybrid Particle-Mesh (PMH) discretization of the governing equations. The method is rooted in the regularization of particle locations as in remeshed Smoothed Particle Hydrodynamics (rSPH). The rSPH method was recently introduced to remedy problems associated with the distortion of computational elements in SPH, by periodically re-initializing the particle positions and by using high-order interpolation kernels. In the PMH formulation, the particles solely handle the convective part of the compressible Euler equations. The particle quantities are then interpolated onto a mesh, where the pressure terms are computed. PMH, like SPH, is free of the convective CFL condition, while at the same time it is more efficient because derivatives are computed on a mesh rather than through particle-particle interactions. PMH does not detract from the adaptive character of SPH and allows for control of its accuracy. We present simulations of a benchmark astrophysics problem demonstrating the capabilities of this approach.

  19. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since then, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and to reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
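
The coded acquisition model described above can be sketched in a few lines; the array shapes, the random binary masks, and the names are illustrative assumptions, and the compressive-sensing inversion itself (which needs a sparsity prior and a solver) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                      # sub-frames per camera frame, frame size
video = rng.random((T, H, W))          # the dynamic scene, one array per sub-frame
masks = rng.integers(0, 2, (T, H, W))  # a distinct binary aperture code per sub-frame

# Coded acquisition: each sub-frame is modulated by its mask, and all T
# modulated sub-frames integrate into a single recorded camera frame.
frame = (masks * video).sum(axis=0)

# The detector records H*W values for T*H*W unknowns: a T-fold compression
# of the measurement, to be undone by a CS solver at readout.
assert frame.shape == (H, W)
```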

  20. Hydrodynamically Lubricated Rotary Shaft Having Twist Resistant Geometry

    DOEpatents

    Dietle, Lannie; Gobeli, Jeffrey D.

    1993-07-27

    A hydrodynamically lubricated squeeze packing type rotary shaft seal with a cross-sectional geometry suitable for pressurized lubricant retention is provided which, in the preferred embodiment, incorporates a protuberant static sealing interface that, compared to prior art, dramatically improves the exclusionary action of the dynamic sealing interface in low-pressure and unpressurized applications by achieving symmetrical deformation of the seal at the static and dynamic sealing interfaces. In abrasive environments, the improved exclusionary action results in a dramatic reduction of seal and shaft wear, compared to prior art, and provides a significant increase in seal life. The invention also increases seal life by making higher levels of initial compression possible, compared to prior art, without compromising hydrodynamic lubrication; this added compression makes the seal more tolerant of compression set, abrasive wear, mechanical misalignment, dynamic runout, and manufacturing tolerances, and also makes hydrodynamic seals with smaller cross-sections more practical. In alternate embodiments, the benefits enumerated above are achieved by cooperative configurations of the seal and the gland which achieve symmetrical deformation of the seal at the static and dynamic sealing interfaces. The seal may also be configured such that predetermined radial compression deforms it to a desired operative configuration, even though symmetrical deformation is lacking.

  1. RADONE: a computer code for simulating fast-transient, one-dimensional hydrodynamic conditions and two-layer radionuclide concentrations including the effect of bed-deposition in controlled rivers and tidal estuaries

    SciTech Connect

    Eraslan, A.H.; Abdel-Razek, M.M.

    1985-05-01

    RADONE is a computer code for predicting the transient, one-dimensional transport of radionuclides in receiving water bodies. The model formulation considers the one-dimensional (cross-sectionally averaged) conservation of mass and momentum equations and the two coupled, depth-averaged radionuclide transport equations for the water layer and the bottom sediment layer. The coupling conditions incorporate bottom deposition and resuspension effects. The computer code uses a discrete-element method that offers variable river cross-section spacing, accurate representation of cross-sectional geometry, and numerical accuracy. A sample application is provided for the problem of hypothetical accidental releases and actual routine releases of radionuclides to the Hudson River.

  2. SPHGR: Smoothed-Particle Hydrodynamics Galaxy Reduction

    NASA Astrophysics Data System (ADS)

    Thompson, Robert

    2015-02-01

    SPHGR (Smoothed-Particle Hydrodynamics Galaxy Reduction) is a Python-based, open-source framework for analyzing smoothed-particle hydrodynamics simulations. In its basic form it can run a baryonic group finder to identify galaxies and a halo finder to identify dark matter halos; it can also assign said galaxies to their respective halos, calculate halo and galaxy global properties, and iterate through previous time steps to identify the most-massive progenitors of each halo and galaxy. Data about each individual halo and galaxy are collated and easy to access. SPHGR supports a wide range of simulation types, including N-body, full cosmological volumes, and zoom-in runs. Support for multiple SPH code outputs is provided by pyGadgetReader (ascl:1411.001), mainly Gadget (ascl:0003.001) and TIPSY (ascl:1111.015).

  3. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904
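
The warped-sampling idea (stretch information-rich portions so that uniform sampling in the warped coordinate visits them more densely) can be sketched numerically; `warped_downsample` and the gradient-based information measure are assumptions of this sketch, not the authors' photonic implementation:

```python
import numpy as np

def warped_downsample(x, n_out):
    """Pick n_out sample indices, spending more of the sampling budget
    where the signal is information-rich (changes quickly)."""
    grad = np.abs(np.diff(x, prepend=x[0]))
    density = grad + grad.mean() + 1e-12   # never starve flat regions entirely
    warp = np.cumsum(density)              # warped (stretched) coordinate
    warp *= (len(x) - 1) / warp[-1]
    # Uniform sampling in the warped coordinate = nonuniform in the original.
    targets = np.linspace(0, len(x) - 1, n_out)
    return np.clip(np.searchsorted(warp, targets), 0, len(x) - 1)

# A ramp between two flat plateaus: most samples land on the ramp,
# even though it covers only a sixth of the record.
x = np.concatenate([np.zeros(50), np.linspace(0, 1, 20), np.ones(50)])
keep = warped_downsample(x, 12)
```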

  4. Computational brittle fracture using smooth particle hydrodynamics

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-10-01

    We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has in simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPBM. We describe a new brittle fracture model that we have implemented into SPBM. To illustrate the code's current capability, we have simulated a number of experiments. We discuss three of these simulations in this paper. The first experiment consists of a brittle steel sphere impacting a plate. The experimental sphere fragment patterns are compared to the calculations. The second experiment is a steel flyer plate test in which the recovered steel target's crack patterns are compared to the calculated crack patterns. We also briefly describe a simulation of a tungsten rod impacting a heavily confined alumina target, which has recently been reported on in detail.

  5. Zombie Vortex Instability. I. A Purely Hydrodynamic Instability to Resurrect the Dead Zones of Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Marcus, Philip S.; Pei, Suyang; Jiang, Chung-Hsiang; Barranco, Joseph A.; Hassanzadeh, Pedram; Lecoanet, Daniel

    2015-07-01

    There is considerable interest in hydrodynamic instabilities in dead zones of protoplanetary disks as a mechanism for driving angular momentum transport and as a source of particle-trapping vortices to mix chondrules and incubate planetesimal formation. We present simulations with a pseudo-spectral anelastic code and with the compressible code Athena, showing that stably stratified flows in a shearing, rotating box are violently unstable and produce space-filling, sustained turbulence dominated by large vortices with Rossby numbers of order ˜0.2-0.3. This Zombie Vortex Instability (ZVI) is observed in both codes and is triggered by Kolmogorov turbulence with Mach numbers less than ˜0.01. It is a common view that if a given constant density flow is stable, then stable vertical stratification should make the flow even more stable. Yet, we show that sufficient vertical stratification can be unstable to ZVI. ZVI is robust and requires no special tuning of boundary conditions, or initial radial entropy or vortensity gradients (though we have studied ZVI only in the limit of infinite cooling time). The resolution of this paradox is that stable stratification allows for a new avenue to instability: baroclinic critical layers. ZVI has not been seen in previous studies of flows in rotating, shearing boxes because those calculations frequently lacked vertical density stratification and/or sufficient numerical resolution. Although we do not expect appreciable angular momentum transport from ZVI in the small domains in this study, we hypothesize that ZVI in larger domains with compressible equations may lead to angular momentum transport via spiral density waves.

  6. Hydrodynamics of Turning Flocks.

    PubMed

    Yang, Xingbo; Marchetti, M Cristina

    2015-12-18

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well-polarized flocks. The continuum equations controlled by only two dimensionless parameters, orientational inertia and alignment strength, are derived by coarse-graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields anisotropic spin waves that mediate the propagation of turning information throughout the flock. The coupling of the spin-current density to the local vorticity field through a nonlinear friction gives rise to a hydrodynamic mode with angular-dependent propagation speed at long wavelengths. This mode becomes unstable as a result of the growth of bend and splay deformations augmented by the spin wave, signaling the transition to complex spatiotemporal patterns of continuously turning and swirling flocks.

  7. Nonlinear hydrodynamics. Lecture 9

    SciTech Connect

    Cox, A.N.

    1983-03-14

    Although hydrodynamic calculations are a very sophisticated method for calculating the stability and pulsations of stars, one which makes contact with actual observations of stellar behavior, they are very simple in principle. Conservation of mass can be accounted for by having mass shells that keep their mass fixed for all time. Motions of these shells can be calculated by taking the difference between the external force of gravity and that from the local pressure gradient. The conservation of energy can be coupled to this momentum conservation equation to give the current temperatures, densities, pressures, and opacities at the shell centers, as well as the positions, velocities, and accelerations of the mass shell interfaces. Energy flow across these interfaces can be calculated from the current conditions, and this energy is partitioned between internal energy and the work done on or by the mass shell. We discuss here only the purely radial case of hydrodynamics because it is very useful for stellar pulsation studies.

  8. Hydrodynamics of Turning Flocks

    NASA Astrophysics Data System (ADS)

    Yang, Xingbo; Marchetti, M. Cristina

    2015-12-01

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well-polarized flocks. The continuum equations controlled by only two dimensionless parameters, orientational inertia and alignment strength, are derived by coarse-graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields anisotropic spin waves that mediate the propagation of turning information throughout the flock. The coupling of the spin-current density to the local vorticity field through a nonlinear friction gives rise to a hydrodynamic mode with angular-dependent propagation speed at long wavelengths. This mode becomes unstable as a result of the growth of bend and splay deformations augmented by the spin wave, signaling the transition to complex spatiotemporal patterns of continuously turning and swirling flocks.

  9. Combining Hydrodynamic and Evolution Calculations of Rotating Stars

    NASA Astrophysics Data System (ADS)

    Deupree, R. G.

    1996-12-01

    Rotation has two primary effects on stellar evolutionary models: the direct influence on the model structure produced by the rotational terms, and the indirect influence produced by rotational instabilities which redistribute angular momentum and composition inside the model. Using a two dimensional, fully implicit finite difference code, I can follow events on both evolutionary and hydrodynamic timescales, thus allowing the simulation of both effects. However, there are several issues concerning how to integrate the results from hydrodynamic runs into evolutionary runs that must be examined. The schemes I have devised for the integration of the hydrodynamic simulations into evolutionary calculations are outlined, and the positive and negative features summarized. The practical differences among the various schemes are small, and a successful marriage between hydrodynamic and evolution calculations is possible.

  10. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding, which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
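
The combinatorial-coding component can be illustrated with classic enumerative coding (ranking a fixed-weight binary sequence among all sequences of that length and weight); this is an expository stand-in, not the C4 implementation:

```python
from math import comb

def rank(bits):
    """Lexicographic index of a binary sequence among all sequences
    of the same length and weight (combinatorial number system)."""
    n, k, r = len(bits), sum(bits), 0
    for i, b in enumerate(bits):
        if b:
            # Count the sequences with a 0 here, which sort first.
            r += comb(n - i - 1, k)
            k -= 1
    return r

def unrank(n, k, r):
    """Inverse of rank: rebuild the length-n, weight-k sequence with index r."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)       # sequences starting with 0 at this position
        if r >= c:
            bits.append(1)
            r -= c
            k -= 1
        else:
            bits.append(0)
    return bits

seq = [0, 1, 1, 0, 1, 0, 0, 1]
assert unrank(len(seq), sum(seq), rank(seq)) == seq
```

Transmitting (n, k, rank) costs about log2 C(n, k) bits for the rank, the entropy-optimal cost for a block of known weight, which is how an enumerative scheme can match arithmetic-coding efficiency without per-symbol interval arithmetic.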

  11. Hydrodynamics of fossil fishes.

    PubMed

    Fletcher, Thomas; Altringham, John; Peakall, Jeffrey; Wignall, Paul; Dorrell, Robert

    2014-08-07

    From their earliest origins, fishes have developed a suite of adaptations for locomotion in water, which determine performance and ultimately fitness. Even without data from behaviour, soft tissue and extant relatives, it is possible to infer a wealth of palaeobiological and palaeoecological information. As in extant species, aspects of gross morphology such as streamlining, fin position and tail type are optimized even in the earliest fishes, indicating similar life strategies have been present throughout their evolutionary history. As hydrodynamical studies become more sophisticated, increasingly complex fluid movement can be modelled, including vortex formation and boundary layer control. Drag-reducing riblets ornamenting the scales of fast-moving sharks have been subjected to particularly intense research, but this has not been extended to extinct forms. Riblets are a convergent adaptation seen in many Palaeozoic fishes, and probably served a similar hydrodynamic purpose. Conversely, structures which appear to increase skin friction may act as turbulisors, reducing overall drag while serving a protective function. Here, we examine the diverse adaptations that contribute to drag reduction in modern fishes and review the few attempts to elucidate the hydrodynamics of extinct forms.

  12. Hydrodynamic blade guide

    DOEpatents

    Blaedel, Kenneth L.; Davis, Pete J.; Landram, Charles S.

    2000-01-01

    A saw having a self-pumped hydrodynamic blade guide or bearing for retaining the saw blade in a centered position in the saw kerf (width of cut made by the saw). The hydrodynamic blade guide or bearing utilizes pockets or grooves incorporated into the sides of the blade. The saw kerf in the workpiece provides the guide or bearing stator surface. Both sides of the blade entrain cutting fluid as the blade enters the kerf in the workpiece, and the trapped fluid provides pressure between the blade and the workpiece as an inverse function of the gap between the blade surface and the workpiece surface. If the blade wanders from the center of the kerf, then one gap will increase and one gap will decrease and the consequent pressure difference between the two sides of the blade will cause the blade to re-center itself in the kerf. Saws using the hydrodynamic blade guide or bearing have particular application in slicing slabs from boules of single crystal materials, for example, as well as for cutting other difficult to saw materials such as ceramics, glass, and brittle composite materials.
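
The re-centering action can be caricatured with a toy model in which film pressure varies inversely with the local gap (a schematic assumption for illustration; a real analysis would solve the Reynolds lubrication equation, and `restoring_force` and its parameters are invented here):

```python
def restoring_force(offset, gap0=0.5, k=1.0):
    """Net lateral force on the blade displaced by `offset` (positive
    toward the right) from the kerf centerline, with film pressure on
    each side scaling inversely with the local blade-workpiece gap."""
    p_right = k / (gap0 - offset)  # gap narrows on the side the blade moves toward
    p_left = k / (gap0 + offset)   # gap widens on the opposite side
    return p_left - p_right       # net force opposes the displacement

# Displacement in either direction produces a force back toward center.
assert restoring_force(0.2) < 0 < restoring_force(-0.2)
```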

  13. Hydrodynamics of insect spermatozoa

    NASA Astrophysics Data System (ADS)

    Pak, On Shun; Lauga, Eric

    2010-11-01

    Microorganism motility plays important roles in many biological processes including reproduction. Many microorganisms propel themselves by propagating traveling waves along their flagella. Depending on the species, propagation of planar waves (e.g. Ceratium) and helical waves (e.g. Trichomonas) were observed in eukaryotic flagellar motion, and hydrodynamic models for both were proposed in the past. However, the motility of insect spermatozoa remains largely unexplored. An interesting morphological feature of such cells, first observed in Tenebrio molitor and Bacillus rossius, is the double helical deformation pattern along the flagella, which is characterized by the presence of two superimposed helical flagellar waves (one with a large amplitude and low frequency, and the other with a small amplitude and high frequency). Here we present the first hydrodynamic investigation of the locomotion of insect spermatozoa. The swimming kinematics, trajectories and hydrodynamic efficiency of the swimmer are computed based on the prescribed double helical deformation pattern. We then compare our theoretical predictions with experimental measurements, and explore the dependence of the swimming performance on the geometric and dynamical parameters.

  14. Hydrodynamics of fossil fishes

    PubMed Central

    Fletcher, Thomas; Altringham, John; Peakall, Jeffrey; Wignall, Paul; Dorrell, Robert

    2014-01-01

    From their earliest origins, fishes have developed a suite of adaptations for locomotion in water, which determine performance and ultimately fitness. Even without data from behaviour, soft tissue and extant relatives, it is possible to infer a wealth of palaeobiological and palaeoecological information. As in extant species, aspects of gross morphology such as streamlining, fin position and tail type are optimized even in the earliest fishes, indicating similar life strategies have been present throughout their evolutionary history. As hydrodynamical studies become more sophisticated, increasingly complex fluid movement can be modelled, including vortex formation and boundary layer control. Drag-reducing riblets ornamenting the scales of fast-moving sharks have been subjected to particularly intense research, but this has not been extended to extinct forms. Riblets are a convergent adaptation seen in many Palaeozoic fishes, and probably served a similar hydrodynamic purpose. Conversely, structures which appear to increase skin friction may act as turbulisors, reducing overall drag while serving a protective function. Here, we examine the diverse adaptations that contribute to drag reduction in modern fishes and review the few attempts to elucidate the hydrodynamics of extinct forms. PMID:24943377

  15. Speaker Recognition on Lossy Compressed Speech Using the Speex Codec

    DTIC Science & Technology

    2009-09-01

    Results show that Speex is effective for compression of data used in speaker recognition (SR) and that Speex coding can improve performance on data compressed by the GSM codec. Studies of SR on compressed speech are sparse and have mainly examined the effect of GSM coding [1] [2]; little or no work has dealt with newer free coders such as Speex [3] [4]. Index Terms: speaker identification.

  16. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  17. The hydrodynamics of galaxy formation on Kiloparsec scales

    NASA Technical Reports Server (NTRS)

    Norman, Michael L.; Anninos, Wenbo Yuan; Centrella, Joan

    1993-01-01

    Two-dimensional numerical simulations of Zeldovich pancake fragmentation in a dark-matter-dominated universe were carried out to study the hydrodynamical and gravitational effects on the formation of structures such as protogalaxies. Preliminary results were given in Yuan, Centrella, and Norman (1991). Here we report a more exhaustive study to determine the sensitivity of protogalaxies to input parameters. The numerical code we used for the simulations combines the hydrodynamical code ZEUS-2D (Stone and Norman, 1992), which was modified to include the expansion of the universe and radiative cooling of the gas, with a particle-mesh code which follows the motion of dark matter particles. The resulting hybrid code is able to handle highly nonuniform grids, which we utilized to obtain high resolution (much less than 1 kpc) in the dense region of the pancake.

  18. Compressing subbanded image data with Lempel-Ziv-based coders

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
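
The run-length plus Lempel-Ziv stage can be sketched as follows, with zlib's DEFLATE standing in for the LZ-based coder; the (0x00, count) byte layout is an assumption of this sketch:

```python
import zlib

def rle_zeros(data: bytes) -> bytes:
    """Collapse each run of zero bytes into a (0x00, count) pair (count <= 255)."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            run = 1
            while i + run < len(data) and data[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0, run])
            i += run
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def unrle_zeros(data: bytes) -> bytes:
    """Invert rle_zeros: expand each (0x00, count) pair back into zeros."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            out += bytes(data[i + 1])  # bytes(n) is n zero bytes
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

# Quantized subband coefficients are mostly zero, so run-length coding
# followed by an LZ-based general-purpose coder compresses them well.
subband = bytes([0] * 40 + [7, 3] + [0] * 20 + [1])
packed = zlib.compress(rle_zeros(subband))
assert unrle_zeros(zlib.decompress(packed)) == subband
```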

  19. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude lower than that of any previous high-compression technique. The visual pattern image sequence coding considered here extends the advantages of static VPIC to the reduction of information along an additional, temporal dimension, achieving unprecedented image sequence coding performance.

  20. The moving mesh code SHADOWFAX

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; De Rijcke, S.

    2016-07-01

    We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  1. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    The compression technique calculates an activity estimator for each segment of an image line. The estimator is used in conjunction with the allowable bits per line, N, to determine the number of bits necessary to code each segment and which segments can tolerate truncation. The preprocessed line data are then passed to an adaptive variable-length coder, which selects the optimum transmission code. The method increases the capacity of broadcast and cable television transmissions and helps reduce the size of the storage medium for video and digital audio recordings.
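
The bit-allocation step can be sketched as follows (an illustrative reconstruction; `allocate_bits` and the proportional rule are assumptions of this sketch, not the actual flight coder):

```python
def allocate_bits(activities, total_bits):
    """Split a line's bit budget across segments in proportion to each
    segment's activity estimate: busier segments get more bits."""
    total_activity = sum(activities)
    bits = [int(total_bits * a / total_activity) for a in activities]
    # Hand leftover bits (from rounding down) to the busiest segments,
    # which are the ones least able to tolerate truncation.
    leftover = total_bits - sum(bits)
    for i in sorted(range(len(bits)), key=lambda i: -activities[i])[:leftover]:
        bits[i] += 1
    return bits

# Four segments of one image line; the full budget is always spent.
alloc = allocate_bits([10, 40, 30, 20], total_bits=64)
assert sum(alloc) == 64
```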

  3. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other methods require seeding or other mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related

  4. Nonlinear hydrodynamics of cosmological sheets. 1: Numerical techniques and tests

    NASA Astrophysics Data System (ADS)

    Anninos, Wenbo Y.; Norman, Michael J.

    1994-07-01

    We present the numerical techniques and tests used to construct and validate a computer code designed to study the multidimensional nonlinear hydrodynamics of large-scale sheet structures in the universe, especially the fragmentation of such structures under various instabilities. This code is composed of two codes, the hydrodynamical code ZEUS-2D and a particle-mesh code. The ZEUS-2D code solves the hydrodynamical equations in two dimensions using explicit Eulerian finite-difference techniques, with modifications made to incorporate the expansion of the universe and the gas cooling due to Compton scattering, bremsstrahlung, and hydrogen and helium cooling. The particle-mesh code solves the equation of motion for the collisionless dark matter. The code uses two-dimensional Cartesian coordinates with a nonuniform grid in one direction to provide high resolution for the sheet structures. A series of one-dimensional and two-dimensional linear perturbation tests are presented which are designed to test the hydro solver and the Poisson solver with and without the expansion of the universe. We also present a radiative shock wave test which is designed to ensure the code's capability to handle radiative cooling properly. And finally a series of one-dimensional Zel'dovich pancake tests used to test the dark matter code and the hydro solver in the nonlinear regime are discussed and compared with the results of Bond et al. (1984) and Shapiro & Struck-Marcell (1985). Overall, the code is shown to produce accurate and stable results, which provide us a powerful tool to further our studies.

  5. Nonlinear hydrodynamics of cosmological sheets. 1: Numerical techniques and tests

    NASA Technical Reports Server (NTRS)

    Anninos, Wenbo Y.; Norman, Michael J.

    1994-01-01

    We present the numerical techniques and tests used to construct and validate a computer code designed to study the multidimensional nonlinear hydrodynamics of large-scale sheet structures in the universe, especially the fragmentation of such structures under various instabilities. This code is composed of two codes, the hydrodynamical code ZEUS-2D and a particle-mesh code. The ZEUS-2D code solves the hydrodynamical equations in two dimensions using explicit Eulerian finite-difference techniques, with modifications made to incorporate the expansion of the universe and the gas cooling due to Compton scattering, bremsstrahlung, and hydrogen and helium cooling. The particle-mesh code solves the equation of motion for the collisionless dark matter. The code uses two-dimensional Cartesian coordinates with a nonuniform grid in one direction to provide high resolution for the sheet structures. A series of one-dimensional and two-dimensional linear perturbation tests are presented which are designed to test the hydro solver and the Poisson solver with and without the expansion of the universe. We also present a radiative shock wave test which is designed to ensure the code's capability to handle radiative cooling properly. Finally, a series of one-dimensional Zel'dovich pancake tests, used to test the dark matter code and the hydro solver in the nonlinear regime, are discussed and compared with the results of Bond et al. (1984) and Shapiro & Struck-Marcell (1985). Overall, the code is shown to produce accurate and stable results, providing a powerful tool for our further studies.

  7. Extended x-ray absorption fine structure measurements of quasi-isentropically compressed vanadium targets on the OMEGA laser

    SciTech Connect

    Yaakobi, B.; Boehly, T. R.; Sangster, T. C.; Meyerhofer, D. D.; Remington, B. A.; Allen, P. G.; Pollaine, S. M.; Lorenzana, H. E.; Lorenz, K. T.; Hawreliak, J. A.

    2008-06-15

    The use of in situ extended x-ray absorption fine structure (EXAFS) for characterizing nanosecond laser-shocked vanadium, titanium, and iron has recently been demonstrated. These measurements are extended to laser-driven, quasi-isentropic compression experiments (ICE). The radiation source (backlighter) for EXAFS in all of these experiments is obtained by imploding a spherical target on the OMEGA laser [T. R. Boehly et al., Rev. Sci. Instrum. 66, 508 (1995)]. Isentropic compression (where the entropy is kept constant) makes it possible to reach high compressions at relatively low temperatures. The absorption spectra are used to determine the temperature and compression in a vanadium sample quasi-isentropically compressed to pressures of up to ~0.75 Mbar. The ability to measure the temperature and compression directly is unique to EXAFS. The drive pressure is calibrated by substituting aluminum for the vanadium and interferometrically measuring the velocity of the back target surface with the velocity interferometer system for any reflector (VISAR). The experimental results obtained by EXAFS and VISAR agree with each other and with the simulations of a hydrodynamic code. The role of a shield to protect the sample from impact heating is studied. It is shown that the shield produces an initial weak shock that is followed by a quasi-isentropic compression at a relatively low temperature. The role of radiation heating from the imploding target as well as from the laser-absorption region is studied. The results show that in laser-driven ICE, as compared with laser-driven shocks, comparable compressions can be achieved at lower temperatures. The EXAFS results show important details not seen in the VISAR results.

  8. Filtering, Coding, and Compression with Malvar Wavelets

    DTIC Science & Technology

    1993-12-01

    The vocal tract is made up of the lips, mouth, and tongue. These cannot change nearly as quickly as the vocal cords can; therefore the vocal tract ...fluctuates slowly in the frequency domain and has a spike in the low quefrency region. These spikes are called formant peaks and have a number of uses in ...the formant corresponding to the pitch (2). The cepstrum is used to find the formants of the pitch so that this information can be removed from the

  9. Constructing stable 3D hydrodynamical models of giant stars

    NASA Astrophysics Data System (ADS)

    Ohlmann, Sebastian T.; Röpke, Friedrich K.; Pakmor, Rüdiger; Springel, Volker

    2017-02-01

    Hydrodynamical simulations of stellar interactions require stable models of stars as initial conditions. Such initial models, however, are difficult to construct for giant stars because of the wide range in spatial scales of the hydrostatic equilibrium and in dynamical timescales between the core and the envelope of the giant. They are needed for, e.g., modeling the common envelope phase where a giant envelope encompasses both the giant core and a companion star. Here, we present a new method of approximating and reconstructing giant profiles from a stellar evolution code to produce stable models for multi-dimensional hydrodynamical simulations. We determine typical stellar stratification profiles with the one-dimensional stellar evolution code mesa. After an appropriate mapping, hydrodynamical simulations are conducted using the moving-mesh code arepo. The giant profiles are approximated by replacing the core of the giant with a point mass and by constructing a suitable continuation of the profile to the center. Different reconstruction methods are tested that can specifically control the convective behaviour of the model. After mapping to a grid, a relaxation procedure that includes damping of spurious velocities yields stable models in three-dimensional hydrodynamical simulations. Initially convectively stable configurations lead to stable hydrodynamical models while for stratifications that are convectively unstable in the stellar evolution code, simulations recover the convective behaviour of the initial model and show large convective plumes with Mach numbers up to 0.8. Examples are shown for a 2 M⊙ red giant and a 0.67 M⊙ asymptotic giant branch star. A detailed analysis shows that the improved method reliably provides stable models of giant envelopes that can be used as initial conditions for subsequent hydrodynamical simulations of stellar interactions involving giant stars.

  10. Postexplosion hydrodynamics of supernovae in red supergiants

    NASA Technical Reports Server (NTRS)

    Herant, Marc; Woosley, S. E.

    1994-01-01

    Shock propagation, mixing, and clumping are studied in the explosion of red supergiants as Type II supernovae using a two-dimensional smoothed particle hydrodynamics (SPH) code. We show that extensive Rayleigh-Taylor instabilities develop in the ejecta in the wake of the reverse shock wave. In all cases, the shell structure of the progenitor is obliterated to leave a clumpy, well-mixed supernova remnant. However, the occurrence of mass loss during the lifetime of the progenitor can significantly reduce the amount of mixing. These results are independent of the Type II supernova explosion mechanism.

  11. Impact modeling with Smooth Particle Hydrodynamics

    SciTech Connect

    Stellingwerf, R.F.; Wingate, C.A.

    1993-07-01

    Smooth Particle Hydrodynamics (SPH) can be used to model hypervelocity impact phenomena via the addition of a strength of materials treatment. SPH is the only technique that can model such problems efficiently due to the combination of 3-dimensional geometry, large translations of material, large deformations, and large void fractions for most problems of interest. This makes SPH an ideal candidate for modeling of asteroid impact, spacecraft shield modeling, and planetary accretion. In this paper we describe the derivation of the strength equations in SPH, show several basic code tests, and present several impact test cases with experimental comparisons.
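
    The kernel-interpolation core that such SPH codes build on can be sketched compactly. The example below computes the SPH density summation in 1D with the standard cubic-spline kernel; the uniform slab, particle spacing, and smoothing length are illustrative and unrelated to the impact calculations described above.

```python
import numpy as np

# SPH density summation in 1D with the standard cubic-spline kernel,
# the interpolation core underlying SPH impact codes. Setup is synthetic.

def cubic_spline_w(r, h):
    """1D cubic-spline kernel W(r, h), normalization 2/(3h)."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w

def sph_density(x, m, h):
    """rho_i = sum_j m_j W(x_i - x_j, h), brute-force O(N^2) pairs."""
    r = x[:, None] - x[None, :]
    return (m[None, :] * cubic_spline_w(r, h)).sum(axis=1)

dx = 0.01
x = np.arange(0.0, 1.0, dx)      # unit-density slab sampled by particles
m = np.full_like(x, dx)          # mass per particle = rho * dx with rho = 1
rho = sph_density(x, m, h=1.3 * dx)
# Interior particles recover rho ~ 1; the free ends show the usual
# kernel-deficit drop that production codes correct for.
```

    Production codes replace the O(N^2) pair loop with neighbor lists, but the summation itself is unchanged.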

  12. Low torque hydrodynamic lip geometry for rotary seals

    DOEpatents

    Dietle, Lannie L.; Schroeder, John E.

    2015-07-21

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  13. Stochastic hard-sphere dynamics for hydrodynamics of nonideal fluids.

    PubMed

    Donev, Aleksandar; Alder, Berni J; Garcia, Alejandro L

    2008-08-15

    A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.

  14. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  15. Saliency-aware video compression.

    PubMed

    Hadizadeh, Hadi; Bajić, Ivan V

    2014-01-01

    In region-of-interest (ROI)-based video coding, ROI parts of the frame are encoded with higher quality than non-ROI parts. At low bit rates, such encoding may produce attention-grabbing coding artifacts, which may draw the viewer's attention away from the ROI, thereby degrading visual quality. In this paper, we present a saliency-aware video compression method for ROI-based video coding. The proposed method aims at reducing salient coding artifacts in non-ROI parts of the frame in order to keep the user's attention on the ROI. Further, the method allows saliency to increase in high-quality parts of the frame, and allows saliency to decrease in non-ROI parts. Experimental results indicate that the proposed method is able to improve the visual quality of encoded video relative to conventional rate-distortion-optimized video coding, as well as to two state-of-the-art perceptual video coding methods.

  16. Adaptive Encoding for Numerical Data Compression.

    ERIC Educational Resources Information Center

    Yokoo, Hidetoshi

    1994-01-01

    Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective to any word length. Its application to the lossless compression of gray-scale images…

  18. Nonlinear Generalized Hydrodynamic Wave Equations in Strongly Coupled Dusty Plasmas

    SciTech Connect

    Veeresha, B. M.; Sen, A.; Kaw, P. K.

    2008-09-07

    A set of nonlinear equations for the study of low frequency waves in a strongly coupled dusty plasma medium is derived using the phenomenological generalized hydrodynamic (GH) model and is used to study the modulational stability of dust acoustic waves to parallel perturbations. Dust compressibility contributions arising from strong Coulomb coupling effects are found to introduce significant modifications in the threshold and range of the instability domain.

  19. Hydrodynamics of Ship Propellers

    NASA Astrophysics Data System (ADS)

    Breslin, John P.; Andersen, Poul

    1996-11-01

    This book deals with flows over propellers operating behind ships, and the hydrodynamic forces and moments that the propeller generates on the shaft and on the ship hull. The first part of the book is devoted to fundamentals of the flow about hydrofoil sections and wings, and to propellers in uniform flow, with guidance for design and pragmatic analysis of performance. The second part covers the development of unsteady forces arising from operation in nonuniform hull wakes. A final chapter discusses the optimization of efficiency of compound propulsors. Researchers in ocean technology and naval architecture will find this book appealing.

  20. Incompressible smoothed particle hydrodynamics

    SciTech Connect

    Ellero, Marco; Serrano, Mar; Espanol, Pep

    2007-10-01

    We present a smoothed particle hydrodynamic model for incompressible fluids. As opposed to solving a pressure Poisson equation in order to get a divergence-free velocity field, here incompressibility is achieved by requiring as a kinematic constraint that the volume of the fluid particles is constant. We use Lagrange multipliers to enforce this restriction. These Lagrange multipliers play the role of non-thermodynamic pressures whose actual values are fixed through the kinematic restriction. We use the SHAKE methodology familiar in constrained molecular dynamics as an efficient method for finding the non-thermodynamic pressure satisfying the constraints. The model is tested for several flow configurations.

  1. The hydrodynamics of astrophysical jets: scaled experiments and numerical simulations

    NASA Astrophysics Data System (ADS)

    Belan, M.; Massaglia, S.; Tordella, D.; Mirzaei, M.; de Ponte, S.

    2013-06-01

    Context. In this paper we study the propagation of hypersonic hydrodynamic jets (Mach number >5) in a laboratory vessel and make comparisons with numerical simulations of axially symmetric flows with the same initial and boundary conditions. The astrophysical context is that of the jets originating around young stellar objects (YSOs). Aims: In order to gain a deeper insight into the phenomenology of YSO jets, we performed a set of experiments and numerical simulations of hypersonic jets in the range of Mach numbers from 10 to 20 and for jet-to-ambient density ratios from 0.85 to 5.4, using different gas species and observing jet lengths of the order of 150 initial radii or more. Exploiting the scalability of the hydrodynamic equations, we intend to reproduce the YSO jet behaviour with respect to jet velocity and elapsed times. In addition, we can make comparisons between the simulated, the experimental, and the observed morphologies. Methods: In the experiments the gas pressure and temperature are increased by a fast, quasi-isentropic compression by means of a piston system operating on a time scale of tens of milliseconds, while the gas density is visualized and measured by means of an electron beam system. We used the PLUTO software for the numerical solution of mixed hyperbolic/parabolic conservation laws targeting high Mach number flows in astrophysical fluid dynamics. We considered axisymmetric initial conditions and carried out numerical simulations in cylindrical geometry. The code has a modular flexible structure whereby different numerical algorithms can be separately combined to solve systems of conservation laws using the finite volume or finite difference approach based on Godunov-type schemes. Results: The agreement between experiments and numerical simulations is fairly good in most of the comparisons. The resulting scaled flow velocities and elapsed times are close to the ones shown by observations. The morphologies of the density distributions agree

  2. MUFASA: galaxy formation simulations with meshless hydrodynamics

    NASA Astrophysics Data System (ADS)

    Davé, Romeel; Thompson, Robert; Hopkins, Philip F.

    2016-11-01

    We present the MUFASA suite of cosmological hydrodynamic simulations, which employs the GIZMO meshless finite mass (MFM) code including H2-based star formation, nine-element chemical evolution, two-phase kinetic outflows following scalings from the Feedback in Realistic Environments zoom simulations, and evolving halo mass-based quenching. Our fiducial (50 h-1 Mpc)3 volume is evolved to z = 0 with a quarter billion elements. The predicted galaxy stellar mass functions (GSMFs) reproduce observations from z = 4 → 0 to ≲ 1.2σ in cosmic variance, providing an unprecedented match to this key diagnostic. The cosmic star formation history and stellar mass growth show general agreement with data, with a strong archaeological downsizing trend such that dwarf galaxies form the majority of their stars after z ˜ 1. We run 25 and 12.5 h-1 Mpc volumes to z = 2 with identical feedback prescriptions, the latter resolving all hydrogen-cooling haloes, and the three runs display fair resolution convergence. The specific star formation rates broadly agree with data at z = 0, but are underpredicted at z ˜ 2 by a factor of 3, re-emphasizing a longstanding puzzle in galaxy evolution models. We compare runs using MFM and two flavours of smoothed particle hydrodynamics, and show that the GSMF is sensitive to hydrodynamics methodology at the ˜×2 level, which is sub-dominant to choices for parametrizing feedback.

  3. How to fake hydrodynamic signals

    NASA Astrophysics Data System (ADS)

    Romatschke, Paul

    2016-12-01

    Flow signatures in experimental data from relativistic ion collisions are usually interpreted as a fingerprint of the presence of a hydrodynamic phase during the evolution of these systems. I review some theoretical ideas to 'fake' this hydrodynamic behavior in p+A and A+A collisions. I find that transverse flow and femtoscopic measurements can easily be forged through non-hydrodynamic evolution, while large elliptic flow requires some non-vanishing interactions in the hot phase.

  4. Hydrodynamic synchronization of flagellar oscillators

    NASA Astrophysics Data System (ADS)

    Friedrich, Benjamin

    2016-11-01

    In this review, we highlight the physics of synchronization in collections of beating cilia and flagella. We survey the nonlinear dynamics of synchronization in collections of noisy oscillators. This framework is applied to flagellar synchronization by hydrodynamic interactions. The time-reversibility of hydrodynamics at low Reynolds numbers requires swimming strokes that break time-reversal symmetry to facilitate hydrodynamic synchronization. We discuss different physical mechanisms for flagellar synchronization, which break this symmetry in different ways.

  5. Compression stockings

    MedlinePlus

    ... knee bend. Compression stockings can be hard to put on. If it's hard for you to put on the stockings, try these tips: apply lotion ... your legs, but let it dry before you put on the stockings. Use a little baby powder ...

  6. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^(-3/2). We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  7. MAESTRO: An Adaptive Low Mach Number Hydrodynamics Algorithm for Stellar Flows

    NASA Astrophysics Data System (ADS)

    Nonaka, Andrew; Almgren, A. S.; Bell, J. B.; Malone, C. M.; Zingale, M.

    2010-01-01

    Many astrophysical phenomena are highly subsonic, requiring specialized numerical methods suitable for long-time integration. We present MAESTRO, a low Mach number stellar hydrodynamics code that can be used to simulate long-time, low-speed flows that would be prohibitively expensive to model using traditional compressible codes. MAESTRO is based on an equation set that we have derived using low Mach number asymptotics; this equation set does not explicitly track acoustic waves and thus allows a significant increase in the time step. MAESTRO is suitable for two- and three-dimensional local atmospheric flows as well as three-dimensional full-star flows, and uses adaptive mesh refinement (AMR) to locally refine grids in regions of interest. Our initial scientific applications include the convective phase of Type Ia supernovae and Type I X-ray Bursts on neutron stars. The work at LBNL was supported by the SciDAC Program of the DOE Office of Advanced Scientific Computing Research under DOE contract No. DE-AC02-05CH11231. The work at Stony Brook was supported by the DOE/Office of Nuclear Physics, grant No. DE-FG02-06ER41448. We made use of Jaguar via a DOE INCITE allocation at the OLCF at ORNL, and of Franklin at NERSC at LBNL.

  8. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  9. Hydrodynamics of pronuclear migration

    NASA Astrophysics Data System (ADS)

    Nazockdast, Ehssan; Needleman, Daniel; Shelley, Michael

    2014-11-01

    Microtubule (MT) filaments play a key role in many processes involved in cell division including spindle formation, chromosome segregation, and pronuclear positioning. We present a direct numerical technique to simulate MT dynamics in such processes. Our method includes hydrodynamically mediated interactions between MTs and other cytoskeletal objects, using singularity methods for Stokes flow. Long-ranged many-body hydrodynamic interactions are computed using a highly efficient and scalable fast multipole method, enabling the simulation of thousands of MTs. Our simulation method also takes into account the flexibility of MTs using Euler-Bernoulli beam theory as well as their dynamic instability. Using this technique, we simulate pronuclear migration in single-celled Caenorhabditis elegans embryos. Two different positioning mechanisms, based on the interactions of MTs with the motor proteins and the cell cortex, are explored: cytoplasmic pulling and cortical pushing. We find that although the pronuclear complex migrates towards the center of the cell in both models, the generated cytoplasmic flows are fundamentally different. This suggests that cytoplasmic flow visualization during pronuclear migration can be utilized to differentiate between the two mechanisms.

  10. Hydrodynamics of Bacterial Cooperation

    NASA Astrophysics Data System (ADS)

    Petroff, A.; Libchaber, A.

    2012-12-01

    Over the course of the last several decades, the study of microbial communities has identified countless examples of cooperation between microorganisms. Generally, as in the case of quorum sensing, cooperation is coordinated by a chemical signal that diffuses through the community. Less well understood is a second class of cooperation that is mediated through physical interactions between individuals. To better understand how bacteria use hydrodynamics to manipulate their environment and coordinate their actions, we study the sulfur-oxidizing bacterium Thiovulum majus. These bacteria live in the diffusive boundary layer just above the muddy bottoms of ponds. As buried organic material decays, sulfide diffuses out of the mud. Oxygen from the pond diffuses into the boundary layer from above. These bacteria form communities, called veils, which are able to transport nutrients through the boundary layer faster than diffusion, thereby increasing their metabolic rate. In these communities, bacteria attach to surfaces and swim in place. As millions of bacteria beat their flagella, the community induces a macroscopic fluid flow, which mixes the boundary layer. Here we present experimental observations and mathematical models that elucidate the hydrodynamics linking the behavior of an individual bacterium to the collective dynamics of the community. We begin by characterizing the flow of water around an individual bacterium swimming in place. We then discuss the flow of water and nutrients around a small number of individuals. Finally, we present observations and models detailing the macroscopic dynamics of a Thiovulum veil.

  11. Load responsive hydrodynamic bearing

    DOEpatents

    Kalsi, Manmohan S.; Somogyi, Dezso; Dietle, Lannie L.

    2002-01-01

    A load responsive hydrodynamic bearing is provided in the form of a thrust bearing or journal bearing for supporting, guiding and lubricating a relatively rotatable member to minimize wear thereof responsive to relative rotation under severe load. In the space between spaced relatively rotatable members and in the presence of a liquid or grease lubricant, one or more continuous ring shaped integral generally circular bearing bodies each define at least one dynamic surface and a plurality of support regions. Each of the support regions defines a static surface which is oriented in generally opposed relation with the dynamic surface for contact with one of the relatively rotatable members. A plurality of flexing regions are defined by the generally circular body of the bearing and are integral with and located between adjacent support regions. Each of the flexing regions has a first beam-like element being connected by an integral flexible hinge with one of the support regions and a second beam-like element having an integral flexible hinge connection with an adjacent support region. At least one local weakening geometry of the flexing region is located intermediate the first and second beam-like elements. In response to application of load from one of the relatively rotatable elements to the bearing, the beam-like elements and the local weakening geometry become flexed, causing the dynamic surface to deform and establish a hydrodynamic geometry for wedging lubricant into the dynamic interface.

  12. Pilot-Wave Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bush, John W. M.

    2015-01-01

    Yves Couder, Emmanuel Fort, and coworkers recently discovered that a millimetric droplet sustained on the surface of a vibrating fluid bath may self-propel through a resonant interaction with its own wave field. This article reviews experimental evidence indicating that the walking droplets exhibit certain features previously thought to be exclusive to the microscopic, quantum realm. It then reviews theoretical descriptions of this hydrodynamic pilot-wave system that yield insight into the origins of its quantum-like behavior. Quantization arises from the dynamic constraint imposed on the droplet by its pilot-wave field, and multimodal statistics appear to be a feature of chaotic pilot-wave dynamics. I attempt to assess the potential and limitations of this hydrodynamic system as a quantum analog. This fluid system is compared to quantum pilot-wave theories, shown to be markedly different from Bohmian mechanics and more closely related to de Broglie's original conception of quantum dynamics, his double-solution theory, and its relatively recent extensions through researchers in stochastic electrodynamics.

  13. Preprocessing of compressed digital video

    NASA Astrophysics Data System (ADS)

    Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.

    2000-12-01

    Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends on both the content of an image and the target bit-rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.
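
    The coupling between rate control and pre-filtering can be sketched as follows. This is an illustrative toy, not the filter studied in the paper: the function name, the linear blend toward zero, and the quantizer-parameter range are all assumptions.

```python
import numpy as np

def prefilter_dfd(dfd, qp, max_qp=51):
    """Illustrative pre-filter: attenuate the displaced frame difference
    (DFD) in proportion to the quantizer step chosen by rate control, so
    detail the quantizer would discard anyway is removed before coding.
    The linear blend toward zero is an assumption, not the paper's filter."""
    strength = min(max(qp / max_qp, 0.0), 1.0)  # 0 = no filtering, 1 = strongest
    return dfd * (1.0 - 0.5 * strength)

residual = np.array([[4.0, -2.0], [0.5, 8.0]])
filtered = prefilter_dfd(residual, qp=26)
```

    At a low quantizer parameter the residual passes through nearly unchanged; at a high one, more of it is suppressed before encoding.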

  14. Comparison Of Data Compression Schemes For Medical Images

    NASA Astrophysics Data System (ADS)

    Noh, Ki H.; Jenkins, Janice M.

    1986-06-01

    Medical images acquired and stored digitally continue to pose a major problem in the area of picture archiving and transmission. The need for accurate reproduction of such images, which constitute patient medical records, and the medico-legal problems of possible loss of information have led us to examine the suitability of data compression schemes for several different medical image modalities. We have examined both reversible coding and irreversible coding as methods of image formatting and reproduction. In reversible coding, we have tested run-length coding and arithmetic coding on image bit planes. In irreversible coding, we have studied transform coding, linear predictive coding, and block truncation coding and their effects on image quality versus compression ratio in several image modalities. In transform coding, we have applied the discrete Fourier, discrete cosine, discrete sine, and Walsh-Hadamard transforms to images, in which a subset of the transformed coefficients was retained and quantized. In linear predictive coding, we used a fixed-level quantizer. In the case of block truncation coding, the first and second moments were retained. Results of all types of irreversible coding for data compression were unsatisfactory in terms of reproduction of the original image. Run-length coding was useful on several bit planes of an image but not on others. Arithmetic coding was found to be completely reversible and resulted in up to a 2 to 1 compression ratio.
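
    Run-length coding of a bit plane, one of the reversible schemes tested above, can be sketched in a few lines (the helper names are illustrative, not the authors' implementation):

```python
def runlength_encode(bits):
    """Run-length encode a sequence of 0/1 bits as (value, count) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(v, n) for v, n in runs]

def runlength_decode(runs):
    """Invert runlength_encode exactly (the scheme is lossless)."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

plane = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
assert runlength_decode(runlength_encode(plane)) == plane  # reversible
```

    As the abstract notes, the payoff depends on the bit plane: long runs in the most significant planes compress well, while noisy low-order planes may even expand.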

  15. Effect of Second-Order Hydrodynamics on a Floating Offshore Wind Turbine

    SciTech Connect

    Roald, L.; Jonkman, J.; Robertson, A.

    2014-05-01

    The design of offshore floating wind turbines uses design codes that can simulate the entire coupled system behavior. At present, most codes include only first-order hydrodynamics, which induce forces and motions varying with the same frequency as the incident waves. Effects due to second- and higher-order hydrodynamics are often ignored in the offshore industry because the induced forces are typically smaller than the first-order forces. In this report, the first- and second-order hydrodynamic analysis used in the offshore oil and gas industry is applied to two different wind turbine concepts--a spar and a tension leg platform.

  16. Hydrodynamic Efficiency of Ablation Propulsion with Pulsed Ion Beam

    SciTech Connect

    Buttapeng, Chainarong; Yazawa, Masaru; Harada, Nobuhiro; Suematsu, Hisayuki; Jiang Weihua; Yatsui, Kiyoshi

    2006-05-02

    This paper presents the hydrodynamic efficiency of ablation plasma produced by a pulsed ion beam, on the basis of the ion beam-target interaction. We used a one-dimensional compressible hydrodynamic fluid model to study the physics involved, namely the ablation acceleration behavior, and analyzed it as a rocket-like model in order to investigate its hydrodynamic variables for propulsion applications. These variables were estimated using the concept of ablation-driven implosion in terms of ablated mass fraction, implosion efficiency, and hydrodynamic energy conversion. An energy conversion efficiency of 17.5% was achieved. In addition, the results show a maximum energy efficiency of the ablation process (ablation efficiency) of 67%, i.e., the efficiency with which pulsed ion beam energy is converted into ablation plasma. The effects of ion beam energy deposition depth on hydrodynamic efficiency were briefly discussed. Further, an evaluation of the propulsive force, with a high specific impulse of 4000 s, a total impulse of 34 mN, and a momentum-to-energy ratio in the range of {mu}N/W, was also performed.
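
    The rocket-like bookkeeping can be illustrated with an idealized sketch. This is an assumption-laden toy (constant exhaust speed, lumped kinetic energies), not the paper's one-dimensional fluid model or its quoted efficiencies:

```python
import math

def payload_efficiency(f, u=1.0, m0=1.0):
    """Ideal rocket-model sketch: fraction of the total kinetic energy
    carried by the unablated payload after a fraction f of the initial
    mass m0 is ablated at constant exhaust speed u (arbitrary units)."""
    v = u * math.log(1.0 / (1.0 - f))        # Tsiolkovsky velocity gain
    ke_payload = 0.5 * (1.0 - f) * m0 * v**2
    ke_exhaust = 0.5 * f * m0 * u**2         # crude single-speed exhaust estimate
    return ke_payload / (ke_payload + ke_exhaust)

eta = payload_efficiency(0.2)  # e.g. 20% of the target mass ablated
```

    Larger ablated mass fractions transfer more kinetic energy to the payload in this toy, illustrating why the ablated mass fraction is one of the key hydrodynamic variables in the abstract.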

  17. Hydrodynamics of shear coaxial liquid rocket injectors

    NASA Astrophysics Data System (ADS)

    Tsohas, John

    Hydrodynamic instabilities within injector passages can couple to chamber acoustic modes and lead to unacceptable levels of combustion instabilities inside liquid rocket engines. The instability of vena-contracta regions and mixing between fuel and oxidizer can serve as a fundamental source of unsteadiness produced by the injector, even in the absence of upstream or downstream pressure perturbations. This natural or "unforced" response can provide valuable information regarding frequencies at which the element could conceivably couple to chamber modes. In particular, during throttled conditions the changes in the injector response may lead to an alignment of the injector and chamber modes. For these reasons, the basic unforced response of the injector element is of particular interest when developing a new engine. The Loci/Chem code was used to perform single-element, 2-D unsteady CFD computations on the Hydrogen/Oxygen Multi-Element Experiment (HOMEE) injector, which was hot-fire tested at Purdue University. The code was also used to evaluate the effects of O/F ratio, LOX post thickness, recess length, and LOX tube length on the hydrodynamics of shear coaxial rocket injectors.

  18. MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS

    SciTech Connect

    Roth, Nathaniel; Kasen, Daniel

    2015-03-15

    We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas–radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.

  19. Prototype Mixed Finite Element Hydrodynamics Capability in ARES

    SciTech Connect

    Rieben, R N

    2008-07-10

    This document describes work on a prototype Mixed Finite Element Method (MFEM) hydrodynamics algorithm in the ARES code, and its application to a set of standard test problems. This work is motivated by the need for improvements to the algorithms used in the Lagrange hydrodynamics step to make them more robust. We begin by identifying the outstanding issues with traditional numerical hydrodynamics algorithms followed by a description of the proposed method and how it may address several of these longstanding issues. We give a theoretical overview of the proposed MFEM algorithm as well as a summary of the coding additions and modifications that were made to add this capability to the ARES code. We present results obtained with the new method on a set of canonical hydrodynamics test problems and demonstrate significant improvement in comparison to results obtained with traditional methods. We conclude with a summary of the issues still at hand and motivate the need for continued research to develop the proposed method into maturity.

  20. A two-phase code for protoplanetary disks

    NASA Astrophysics Data System (ADS)

    Inaba, S.; Barge, P.; Daniel, E.; Guillard, H.

    2005-02-01

    A high accuracy 2D hydrodynamical code has been developed to simulate the flow of gas and solid particles in protoplanetary disks. Gas is considered as a compressible fluid, while solid particles, fully coupled to the gas by aerodynamical forces, are treated as a pressure-free diluted second phase. The solid particles lose energy and angular momentum, which are transferred to the gas. As a result, particles migrate inward toward the star and gas moves outward. High accuracy is necessary to account for the coupling. Boundary conditions must account for the inward/outward motions of the two phases. The code has been tested on one- and two-dimensional situations. The numerical results were compared with analytical solutions in three different cases: i) the disk is composed of a single gas component; ii) solid particles migrate in a steady flow of gas; iii) gas and solid particles evolve simultaneously. The code can easily reproduce known analytical solutions and is a powerful tool to study planetary formation at the decoupling stage. For example, the evolution of an over-density in the radial distribution of solids is found to differ significantly from the case where no back reaction of the particles onto the gas is assumed. Inside the bump, solid particles have a drift velocity approximately 16 times smaller than outside, which significantly increases the residence time of the particles in the nebula. This opens some interesting perspectives to solve the timescale problem for the formation of planetesimals.
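
    The inward migration described here is commonly summarized by the standard steady-drift estimate for a single particle in a smooth gas disk; a minimal sketch follows (the function name and the unit normalization are illustrative, and this single-particle formula ignores the back reaction that the paper shows to be important):

```python
def radial_drift_speed(stokes, eta_vk=1.0):
    """Standard steady-state inward drift speed of a solid particle in a
    sub-Keplerian gas disk, in units of the pressure-support velocity
    eta * v_Kepler.  `stokes` is the dimensionless stopping time; the
    drift is fastest at stokes == 1 (marginal coupling to the gas)."""
    return -2.0 * eta_vk * stokes / (1.0 + stokes**2)
```

    Both tightly coupled grains (stokes << 1) and large boulders (stokes >> 1) drift slowly; marginally coupled bodies drift fastest, which is the origin of the radial-drift timescale problem mentioned at the end of the abstract.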

  1. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    SciTech Connect

    Johnson, Jeffrey

    2006-04-01

    Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.
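
    Since the Spasmos API is not reproduced here, the following self-contained sketch shows the kind of 1D test such a Python-driven code exercises: a first-order finite-volume (Rusanov flux) solution of the Sod shock tube. The scheme and all names are illustrative, not Spasmos code:

```python
import numpy as np

def sod_rusanov(nx=200, t_end=0.2, gamma=1.4, cfl=0.5):
    """First-order finite-volume (Rusanov flux) Sod shock tube on [0, 1]."""
    x = (np.arange(nx) + 0.5) / nx
    rho = np.where(x < 0.5, 1.0, 0.125)          # Sod initial left/right states
    u = np.zeros(nx)
    p = np.where(x < 0.5, 1.0, 0.1)
    U = np.array([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])

    def flux(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u**2)
        return np.array([mom, mom * u + p, (E + p) * u])

    dx, t = 1.0 / nx, 0.0
    while t < t_end:
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u**2)
        smax = np.max(np.abs(u) + np.sqrt(gamma * p / rho))  # fastest wave
        dt = min(cfl * dx / smax, t_end - t)
        F = flux(U)
        # Rusanov (local Lax-Friedrichs) flux at the nx-1 interior interfaces
        Fh = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * smax * (U[:, 1:] - U[:, :-1])
        U[:, 1:-1] -= dt / dx * (Fh[:, 1:] - Fh[:, :-1])     # boundary cells frozen
        t += dt
    return x, U

x, U = sod_rusanov()
```

    By t = 0.2 the waves have not yet reached the domain edges, so freezing the boundary cells is adequate for this test.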

  2. Aspects of causal viscous hydrodynamics

    SciTech Connect

    Bhalerao, R. S.; Gupta, Sourendu

    2008-01-15

    We investigate the phenomenology of freely expanding fluids, with different material properties, evolving through the Israel-Stewart (IS) causal viscous hydrodynamics, and compare our results with those obtained in the relativistic Eckart-Landau-Navier-Stokes (ELNS) acausal viscous hydrodynamics. Through the analysis of scaling invariants we give a definition of thermalization time that can be self-consistently determined in viscous hydrodynamics. Next we construct the solutions for one-dimensional boost-invariant flows. Expansion of viscous fluids is slower than that of one-dimensional ideal fluids, resulting in entropy production. At late times, these flows are reasonably well approximated by solutions obtained in ELNS hydrodynamics. Estimates of initial energy densities from observed final values are strongly dependent on the dynamics one chooses. For the same material, and the same final state, IS hydrodynamics gives the smallest initial energy density. We also study fluctuations about these one-dimensional boost-invariant backgrounds; they are damped in ELNS hydrodynamics but can become sound waves in IS hydrodynamics. The difference is obvious in power spectra due to clear signals of wave-interference in IS hydrodynamics, which is completely absent in ELNS dynamics.
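
    The one-dimensional boost-invariant comparison can be made concrete with a minimal first-order (ELNS-type) integration for a conformal fluid; treating the shear viscosity as a constant is a simplifying assumption for illustration only:

```python
def bjorken_energy_density(eta, eps0=1.0, tau0=0.5, tau1=5.0, n=50000):
    """Forward-Euler integration of first-order (Navier-Stokes) Bjorken flow
    for a conformal fluid (p = eps / 3, constant shear viscosity eta):

        d(eps)/d(tau) = -(4/3) * eps / tau + (4/3) * eta / tau**2

    eta = 0 recovers the ideal scaling eps ~ tau**(-4/3); eta > 0 slows the
    cooling, reflecting the entropy production discussed in the abstract."""
    dtau = (tau1 - tau0) / n
    eps, tau = eps0, tau0
    for _ in range(n):
        eps += dtau * (-(4.0 / 3.0) * eps / tau + (4.0 / 3.0) * eta / tau**2)
        tau += dtau
    return eps
```

    This also illustrates the abstract's point about initial-state inference: for a fixed final energy density, a more viscous (slower-cooling) evolution requires a smaller initial energy density.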

  3. General formulation of transverse hydrodynamics

    SciTech Connect

    Ryblewski, Radoslaw; Florkowski, Wojciech

    2008-06-15

    General formulation of hydrodynamics describing transversally thermalized matter created at the early stages of ultrarelativistic heavy-ion collisions is presented. Similarities and differences with the standard three-dimensionally thermalized relativistic hydrodynamics are discussed. The role of the conservation laws as well as the thermodynamic consistency of two-dimensional thermodynamic variables characterizing transversally thermalized matter is emphasized.

  4. Scalable motion vector coding

    NASA Astrophysics Data System (ADS)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
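
    The median-based motion-vector prediction at the heart of the proposed MVC can be sketched as follows (helper names are illustrative; the actual codec embeds the residuals in a scalable bitstream):

```python
def predict_mv(left, above, above_right):
    """Component-wise median of three causal neighbour motion vectors,
    the classic prediction used in block-based codecs (H.264-style)."""
    median = lambda a, b, c: sorted((a, b, c))[1]
    return (median(left[0], above[0], above_right[0]),
            median(left[1], above[1], above_right[1]))

def mv_residual(mv, left, above, above_right):
    """The encoder transmits only the (usually small) prediction residual."""
    px, py = predict_mv(left, above, above_right)
    return (mv[0] - px, mv[1] - py)
```

    Because neighbouring blocks tend to move together, the residuals cluster around zero and entropy-code compactly, which is what gives the predictive scheme its edge over transform-based alternatives.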

  5. Reaching the hydrodynamic regime in a Bose-Einstein condensate by suppression of avalanches

    SciTech Connect

    Stam, K. M. R. van der; Meppelink, R.; Vogels, J. M.; Straten, P. van der

    2007-03-15

    We report the realization of a Bose-Einstein condensate (BEC) in the hydrodynamic regime. The hydrodynamic regime is reached by evaporative cooling at a relatively low density, suppressing the effect of avalanches. With the suppression of avalanches, a BEC containing more than 10{sup 8} atoms is produced. The collisional opacity can be tuned from the collisionless regime to a collisional opacity of more than 2 by compressing the trap after condensation. In the collisionally opaque regime, significant heating of the cloud is measured at time scales shorter than half of the radial trap period, which is direct proof that the BEC is hydrodynamic.

  6. Hydrodynamics of Peristaltic Propulsion

    NASA Astrophysics Data System (ADS)

    Athanassiadis, Athanasios; Hart, Douglas

    2014-11-01

    A curious class of animals called salps live in marine environments and self-propel by ejecting vortex rings much like jellyfish and squid. However, unlike other jetting creatures that siphon and eject water from one side of their body, salps produce vortex rings by pumping water through siphons on opposite ends of their hollow cylindrical bodies. In the simplest cases, it seems like some species of salp can successfully move by contracting just two siphons connected by an elastic body. When thought of as a chain of timed contractions, salp propulsion is reminiscent of peristaltic pumping applied to marine locomotion. Inspired by salps, we investigate the hydrodynamics of peristaltic propulsion, focusing on the scaling relationships that determine flow rate, thrust production, and energy usage in a model system. We discuss possible actuation methods for a model peristaltic vehicle, considering both the material and geometrical requirements for such a system.

  7. Hydrodynamics, resurgence, and transasymptotics

    NASA Astrophysics Data System (ADS)

    Başar, Gökçe; Dunne, Gerald V.

    2015-12-01

    The second order hydrodynamical description of a homogeneous conformal plasma that undergoes a boost-invariant expansion is given by a single nonlinear ordinary differential equation, whose resurgent asymptotic properties we study, developing further the recent work of Heller and Spalinski [Phys. Rev. Lett. 115, 072501 (2015)]. Resurgence clearly identifies the nonhydrodynamic modes that are exponentially suppressed at late times, analogous to the quasinormal modes in gravitational language, organizing these modes in terms of a trans-series expansion. These modes are analogs of instantons in semiclassical expansions, where the damping rate plays the role of the instanton action. We show that this system displays the generic features of resurgence, with explicit quantitative relations between the fluctuations about different orders of these nonhydrodynamic modes. The imaginary part of the trans-series parameter is identified with the Stokes constant, and the real part with the freedom associated with initial conditions.

  8. Hydrodynamics of Turning Flocks

    NASA Astrophysics Data System (ADS)

    Yang, Xingbo; Marchetti, M. Cristina

    2015-03-01

    We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well polarized flocks. The continuum equations are derived by coarse graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields spin waves that mediate the propagation of turning information throughout the flock. When the inertia is large, we find a novel instability that signals the transition to complex spatio-temporal patterns of continuously turning and swirling flocks. This work was supported by the NSF Awards DMR-1305184 and DGE-1068780 at Syracuse University and NSF Award PHY11-25915 and the Gordon and Betty Moore Foundation Grant No. 2919 at the KITP at the University of California, Santa Barbara.

  9. Hydrocyclone separation hydrodynamics

    SciTech Connect

    Ivanov, A.A.; Ruzanov, S.R.; Lunyushkina, I.A.

    1987-10-20

    The lack of an adequate hydrodynamic model for the hydrocyclone has so far been the main obstacle to devising a general method for designing such apparatus. The authors present a method of calculating the liquid flow in the working zone. The results have been used to calculate the separating power in application to dilute suspensions. The Navier-Stokes equations and the equation of continuity are used in examining the behavior, together with assumptions based on experiment: stationary axisymmetric flow, constant turbulent viscosity, and a constant radial profile for the tangential flow speed at all heights. The boundary conditions are liquid slip at the side walls and absence of vortex drainage at the axis. The results enable one to choose the dimensions for particular separations.

  10. Synchronization and hydrodynamic interactions

    NASA Astrophysics Data System (ADS)

    Powers, Thomas; Qian, Bian; Breuer, Kenneth

    2008-03-01

    Cilia and flagella commonly beat in a coordinated manner. Examples include the flagella that Volvox colonies use to move, the cilia that sweep foreign particles up out of the human airway, and the nodal cilia that set up the flow that determines the left-right axis in developing vertebrate embryos. In this talk we present an experimental study of how hydrodynamic interactions can lead to coordination in a simple idealized system: two nearby paddles driven with fixed torques in a highly viscous fluid. The paddles attain a synchronized state in which they rotate together with a phase difference of 90 degrees. We discuss how synchronization depends on system parameters and present numerical calculations using the method of regularized stokeslets.

  11. Hydrodynamics of foams

    NASA Astrophysics Data System (ADS)

    Karakashev, Stoyan I.

    2017-08-01

    This brief review article is devoted to the hydrodynamics of foams. For this reason, we focused at first on the methods for studying the basic structural units of foams—the foam films (FF) and the Plateau borders (PB)—thus reviewing the literature on their drainage. After this, we scrutinized in detail Derjaguin's works on the electrostatic disjoining pressure along with Langmuir's interpretation of it, the microscopic and macroscopic approaches in the theory of the van der Waals disjoining pressure, the DLVO theory, the steric disjoining pressure of de Gennes, and the more recent works on non-DLVO forces. The basic methods for studying foam drainage are presented, and engineering and other applications of foams are reviewed. All these aspects are presented from retrospective and prospective viewpoints.

  12. Mix and hydrodynamic instabilities on NIF

    NASA Astrophysics Data System (ADS)

    Smalyuk, V. A.; Robey, H. F.; Casey, D. T.; Clark, D. S.; Döppner, T.; Haan, S. W.; Hammel, B. A.; MacPhee, A. G.; Martinez, D.; Milovich, J. L.; Peterson, J. L.; Pickworth, L.; Pino, J. E.; Raman, K.; Tipton, R.; Weber, C. R.; Baker, K. L.; Bachmann, B.; Berzak Hopkins, L. F.; Bond, E.; Caggiano, J. A.; Callahan, D. A.; Celliers, P. M.; Cerjan, C.; Dixit, S. N.; Edwards, M. J.; Felker, S.; Field, J. E.; Fittinghoff, D. N.; Gharibyan, N.; Grim, G. P.; Hamza, A. V.; Hatarik, R.; Hohenberger, M.; Hsing, W. W.; Hurricane, O. A.; Jancaitis, K. S.; Jones, O. S.; Khan, S.; Kroll, J. J.; Lafortune, K. N.; Landen, O. L.; Ma, T.; MacGowan, B. J.; Masse, L.; Moore, A. S.; Nagel, S. R.; Nikroo, A.; Pak, A.; Patel, P. K.; Remington, B. A.; Sayre, D. B.; Spears, B. K.; Stadermann, M.; Tommasini, R.; Widmayer, C. C.; Yeamans, C. B.; Crippen, J.; Farrell, M.; Giraldez, E.; Rice, N.; Wilde, C. H.; Volegov, P. L.; Gatu Johnson, M.

    2017-06-01

    Several new platforms have been developed to experimentally measure hydrodynamic instabilities in all phases of indirect-drive, inertial confinement fusion implosions on National Ignition Facility. At the ablation front, instability growth of pre-imposed modulations was measured with a face-on, x-ray radiography platform in the linear regime using the Hydrodynamic Growth Radiography (HGR) platform. Modulation growth of "native roughness" modulations and engineering features (fill tubes and capsule support membranes) were measured in conditions relevant to layered DT implosions. A new experimental platform was developed to measure instability growth at the ablator-ice interface. In the deceleration phase of implosions, several experimental platforms were developed to measure both low-mode asymmetries and high-mode perturbations near peak compression with x-ray and nuclear techniques. In one innovative technique, the self-emission from the hot spot was enhanced with argon dopant to "self-backlight" the shell in-flight. To stabilize instability growth, new "adiabat-shaping" techniques were developed using the HGR platform and applied in layered DT implosions.

  13. Mix and hydrodynamic instabilities on NIF

    DOE PAGES

    Smalyuk, V. A.; Robey, H. F.; Casey, D. T.; ...

    2017-06-01

    Several new platforms have been developed to experimentally measure hydrodynamic instabilities in all phases of indirect-drive, inertial confinement fusion implosions on National Ignition Facility. At the ablation front, instability growth of pre-imposed modulations was measured with a face-on, x-ray radiography platform in the linear regime using the Hydrodynamic Growth Radiography (HGR) platform. Modulation growth of "native roughness" modulations and engineering features (fill tubes and capsule support membranes) were measured in conditions relevant to layered DT implosions. A new experimental platform was developed to measure instability growth at the ablator-ice interface. In the deceleration phase of implosions, several experimental platforms were developed to measure both low-mode asymmetries and high-mode perturbations near peak compression with x-ray and nuclear techniques. In one innovative technique, the self-emission from the hot spot was enhanced with argon dopant to "self-backlight" the shell in-flight. To stabilize instability growth, new "adiabat-shaping" techniques were developed using the HGR platform and applied in layered DT implosions.

  14. Low Mach number fluctuating hydrodynamics for electrolytes

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-01

    We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  15. Low Mach number fluctuating hydrodynamics for electrolytes

    SciTech Connect

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-18

    Here, we formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are also interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids (A. Donev, et al., Physics of Fluids, 27, 3, 2015), we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. Furthermore, we demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second-order in the deterministic setting, and for length scales much greater than the Debye length gives results consistent with an electroneutral/ambipolar approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  16. Low Mach number fluctuating hydrodynamics for electrolytes

    DOE PAGES

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; ...

    2016-11-18

    Here, we formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are also interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids (A. Donev, et al., Physics of Fluids, 27, 3, 2015), we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. Furthermore, we demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second-order in the deterministic setting, and for length scales much greater than the Debye length gives results consistent with an electroneutral/ambipolar approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  17. Hydrodynamics of sediment threshold

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2016-07-01

    A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting on a closely packed bed formed of identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micro-mechanics. The sediment threshold is envisioned as the rolling mode, which is the plausible mode to initiate particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) is in excellent agreement with experimental data for uniform sediments. However, most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with experimental data for nonuniform sediments.
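
    The rolling moment balance can be caricatured in a few lines. The grouping of forces and the lever arms here are illustrative assumptions; the paper derives the actual points of action from micro-mechanics:

```python
def rolling_threshold_exceeded(drag, lift, submerged_weight,
                               arm_drag, arm_weight):
    """Illustrative moment balance about the pivot of rolling: the particle
    begins to roll when the driving moment of the drag force, aided by lift
    reducing the effective weight, exceeds the resisting moment of the
    submerged weight.  Lever arms are hypothetical inputs here."""
    driving = drag * arm_drag
    resisting = (submerged_weight - lift) * arm_weight
    return driving >= resisting
```

    Solving the equality for the bed shear stress at incipient motion is what yields a threshold Shields parameter as a function of the shear Reynolds number.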

  18. Fast compression implementation for hyperspectral sensor

    NASA Astrophysics Data System (ADS)

    Hihara, Hiroki; Yoshida, Jun; Ishida, Juro; Takada, Jun; Senda, Yuzo; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Ohgi, Nagamitsu

    2010-11-01

    Fast, small-footprint lossless image compressors for hyperspectral sensors on Earth observation satellites have been developed. Since more than one hundred channels are required for hyperspectral sensors on optical observation satellites, a fast compression algorithm with a small-footprint implementation is essential for reducing encoder size and weight, and hence for realizing a light-weight, small-size sensor system. The image compression method should have low complexity in order to reduce the size, weight, power consumption, and fabrication cost of the sensor signal processing unit. Coding efficiency and compression speed enlarge the capacity of the signal compression channels, allowing the sensor signal channels to be multiplexed into a reduced number of compression channels onboard. The employed method is based on FELICS, a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we applied two-dimensional interpolation prediction and adaptive Golomb-Rice coding, which enables a small footprint. The method supports progressive decompression using resolution scaling, whilst still delivering superior performance as measured by speed and complexity. The small-footprint circuitry is embedded into the hyperspectral sensor data formatter; in consequence, the lossless compression function has been added without additional size and weight.
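    The adaptive Golomb-Rice stage mentioned above can be sketched in a few lines. This is a generic illustration, not the flight implementation, and the per-block parameter search below is one common way to make the coding adaptive:

    ```python
    def rice_encode(v, k):
        """Golomb-Rice code for non-negative v: unary quotient, then k remainder bits."""
        q, r = v >> k, v & ((1 << k) - 1)
        return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

    def map_signed(e):
        """Interleave signed prediction errors onto non-negative integers: 0,-1,1,-2,..."""
        return 2 * e if e >= 0 else -2 * e - 1

    def best_k(errors, k_max=8):
        """Adaptive parameter choice: pick the k that minimizes the coded block length."""
        return min(range(k_max + 1),
                   key=lambda k: sum(len(rice_encode(map_signed(e), k)) for e in errors))

    def encode_block(errors):
        """Encode one block of prediction errors with its best Rice parameter."""
        k = best_k(errors)
        return k, "".join(rice_encode(map_signed(e), k) for e in errors)
    ```

    Small prediction errors cost only a few bits each, which is why a good two-dimensional predictor directly improves the entropy-coding stage.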

  19. GAMER: GPU-accelerated Adaptive MEsh Refinement code

    NASA Astrophysics Data System (ADS)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2016-12-01

    GAMER (GPU-accelerated Adaptive MEsh Refinement) is a general-purpose adaptive mesh refinement (AMR) + GPU framework that solves hydrodynamics with self-gravity. The code supports a variety of GPU-accelerated hydrodynamic and Poisson solvers, hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balancing. Although the code is designed for simulating galaxy formation, it can easily be modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.

  20. Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron; Rodgers, Stephen L. (Technical Monitor)

    2000-01-01

    A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics (SPH) method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods are presented together with the numerical results.

  1. Shear and Compression Bioreactor for Cartilage Synthesis.

    PubMed

    Shahin, Kifah; Doran, Pauline M

    2015-01-01

    Mechanical forces, including hydrodynamic shear, hydrostatic pressure, compression, tension, and friction, can have stimulatory effects on cartilage synthesis in tissue engineering systems. Bioreactors capable of exerting forces on cells and tissue constructs within a controlled culture environment are needed to provide appropriate mechanical stimuli. In this chapter, we describe the construction, assembly, and operation of a mechanobioreactor providing simultaneous dynamic shear and compressive loading on developing cartilage tissues to mimic the rolling and squeezing action of articular joints. The device is suitable for studying the effects of mechanical treatment on stem cells and chondrocytes seeded into three-dimensional scaffolds.

  2. Shock compression of nitrobenzene

    NASA Astrophysics Data System (ADS)

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi

    1999-06-01

    The Hugoniot (4 - 30 GPa) and the isotherm (1 - 7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitro aromatic compounds, which are widely used as energetic materials, yet it has been considered non-explosive even though its calculated heat of detonation, about 1 kcal/g, is similar to that of TNT. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of detonation products calculated by the KHT code, so nitrobenzene is expected to detonate in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by the X-ray diffraction technique. Comparison of the Hugoniot and the isotherm shows that nitrobenzene is in the liquid phase under the shock conditions examined. From the expected phase diagram, shocked nitrobenzene appears to remain a metastable liquid within the solid-phase region of that diagram.

  3. Simulated performance results of the OMV video compression telemetry system

    NASA Astrophysics Data System (ADS)

    Ingels, Frank; Parker, Glenn; Thomas, Lee Ann

    The control system of NASA's Orbital Maneuvering Vehicle (OMV) will employ range/range-rate radar, a forward command link, and a compressed video return link. The video data is compressed by sampling every sixth frame of data; a rate of 5 frames/sec is adequate for the OMV docking speeds. Further axial compression is obtained, albeit at the expense of spatial resolution, by averaging adjacent pixels. The remaining compression is achieved on the basis of differential pulse-code modulation and Huffman run-length encoding. A concatenated error-correction coding system is used to protect the compressed video data stream from channel errors.
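    The compression chain described above can be illustrated in miniature: spatial averaging of adjacent pixels, differential pulse-code modulation, and run-length collapsing of zero differences (the Huffman coding of the resulting pairs and the temporal frame subsampling are omitted). The function names are illustrative, not the OMV implementation:

    ```python
    def average_pairs(line):
        """Spatial compression: average adjacent pixel pairs (halves resolution)."""
        return [(line[i] + line[i + 1]) // 2 for i in range(0, len(line) - 1, 2)]

    def dpcm(line):
        """Differential pulse-code modulation: transmit differences from the previous pixel."""
        out, prev = [], 0
        for x in line:
            out.append(x - prev)
            prev = x
        return out

    def run_length(diffs):
        """Collapse runs of zero differences into (run, value) pairs for entropy coding."""
        pairs, run = [], 0
        for v in diffs:
            if v == 0:
                run += 1
            else:
                pairs.append((run, v))
                run = 0
        if run:
            pairs.append((run, 0))
        return pairs
    ```

    Slowly varying scan lines produce mostly zero differences, which is what makes the run-length and Huffman stages effective.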

  4. Simulated performance results of the OMV video compression telemetry system

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Parker, Glenn; Thomas, Lee Ann

    1989-01-01

    The control system of NASA's Orbital Maneuvering Vehicle (OMV) will employ range/range-rate radar, a forward command link, and a compressed video return link. The video data is compressed by sampling every sixth frame of data; a rate of 5 frames/sec is adequate for the OMV docking speeds. Further axial compression is obtained, albeit at the expense of spatial resolution, by averaging adjacent pixels. The remaining compression is achieved on the basis of differential pulse-code modulation and Huffman run-length encoding. A concatenated error-correction coding system is used to protect the compressed video data stream from channel errors.

  5. A high-speed distortionless predictive image-compression scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Smyth, P.; Wang, H.

    1990-01-01

    A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.

  6. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
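    The rate-control criterion above, exact reconstruction when the one-dimensional difference entropy is below the selected rate, can be checked directly. A hedged sketch of the idea (not the BARC coder itself; the doubling search for the quantization step is an assumption for illustration):

    ```python
    import math
    from collections import Counter

    def diff_entropy(line):
        """Entropy (bits/sample) of the first differences along an image line."""
        diffs = [line[i] - line[i - 1] for i in range(1, len(line))]
        n = len(diffs)
        return -sum(c / n * math.log2(c / n) for c in Counter(diffs).values())

    def choose_step(line, rate):
        """Coarsen linear quantization until the difference entropy fits the target rate."""
        step = 1
        while diff_entropy([x // step for x in line]) > rate:
            step *= 2
        return step  # step == 1 means lossless coding is possible at this rate
    ```

    When `choose_step` returns 1, the coder can operate as a floating-rate noiseless coder; otherwise the quantization is adjusted block by block to meet the selected rate.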

  7. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.

  8. Lossy Compression of Haptic Data by Using DCT

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiroyuki; Ohnishi, Kouhei

    In this paper, lossy compression of haptic data is presented, and the results of its application to a motion copying system are described. Lossy data compression has been studied and practically applied in audio and image coding, but it has not been studied extensively for haptic data. Haptic data compression using the discrete cosine transform (DCT) and modified DCT (MDCT) for haptic data storage is described in this paper. In the lossy compression, the calculated DCT/MDCT coefficients are quantized by a quantization vector. The quantized coefficients are further compressed by lossless coding based on Huffman coding. The compressed haptic data is applied to the motion copying system, and the results are provided.
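    The transform-quantize-code chain can be sketched generically. This illustration uses an orthonormal DCT-II with a uniform scalar quantizer; it is not the authors' coder, the MDCT and Huffman stages are omitted, and the `step` value is an arbitrary choice:

    ```python
    import math

    def dct(x):
        """Orthonormal DCT-II of a real sequence."""
        N = len(x)
        return [(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)) *
                sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
                for k in range(N)]

    def idct(X):
        """Inverse of the orthonormal DCT-II (its transpose)."""
        N = len(X)
        return [sum((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)) *
                    X[k] * math.cos(math.pi / N * (n + 0.5) * k) for k in range(N))
                for n in range(N)]

    def compress(x, step=0.5):
        """Lossy stage: quantize DCT coefficients; small coefficients collapse to zero."""
        return [round(c / step) for c in dct(x)]

    def decompress(q, step=0.5):
        """Dequantize and invert the transform."""
        return idct([v * step for v in q])
    ```

    The quantized integer coefficients are what a Huffman stage would then code losslessly; the reconstruction error is bounded by the quantization step.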

  9. Distributed sensor data compression algorithm

    NASA Astrophysics Data System (ADS)

    Ambrose, Barry; Lin, Freddie

    2006-04-01

    Theoretically it is possible for two sensors to reliably send data at rates smaller than the sum of the necessary data rates for sending the data independently, essentially taking advantage of the correlation of sensor readings to reduce the data rate. In 2001, Caltech researchers Michelle Effros and Qian Zhao developed new techniques for data compression code design for correlated sensor data, which were published in a paper at the 2001 Data Compression Conference (DCC 2001). These techniques take advantage of correlations between two or more closely positioned sensors in a distributed sensor network. Given two signals, X and Y, the X signal is sent using standard data compression. The goal is to design a partition tree for the Y signal. The Y signal is sent using a code based on the partition tree. At the receiving end, if ambiguity arises when using the partition tree to decode the Y signal, the X signal is used to resolve the ambiguity. We have extended this work to increase the efficiency of the code search algorithms. Our results have shown that development of a highly integrated sensor network protocol that takes advantage of a correlation in sensor readings can result in 20-30% sensor data transport cost savings. In contrast, the best possible compression using state-of-the-art compression techniques that did not take into account the correlation of the incoming data signals achieved only 9-10% compression at most. This work was sponsored by MDA, but has very widespread applicability to ad hoc sensor networks, hyperspectral imaging sensors and vehicle health monitoring sensors for space applications.
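    The ambiguity-resolution idea above — send Y coarsely and let the correlated X reading disambiguate at the receiver — can be shown with the classic coset trick on integers. This is a stand-in for the partition-tree construction, with a hypothetical modulus:

    ```python
    def encode_y(y, m=4):
        """Send only y's coset index mod m: log2(m) bits instead of a full reading."""
        return y % m

    def decode_y(coset, x, m=4):
        """Resolve the ambiguity with the correlated reading x: among all values
        congruent to coset mod m, pick the one nearest to x. This recovers y
        exactly whenever |y - x| < m / 2."""
        base = x - ((x - coset) % m)          # largest candidate <= x
        return min((base, base + m), key=lambda c: abs(c - x))
    ```

    The rate saving comes precisely from the correlation: the stronger the correlation, the larger the modulus (and the fewer the bits) that still decodes without error.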

  10. Warm dense matter: another application for pulsed power hydrodynamics

    SciTech Connect

    Reinovsky, Robert Emil

    2009-01-01

    Pulsed Power Hydrodynamics (PPH) is the application of low-impedance pulsed power and high-magnetic-field technology to the study of advanced hydrodynamic problems, instabilities, turbulence, and material properties. PPH can potentially be applied to the study of the properties of warm dense matter (WDM) as well. Exploration of WDM properties such as equation of state, viscosity, and conductivity is an emerging area of study focused on the behavior of matter at densities near solid density (from 10% of solid density to slightly above solid density) and modest temperatures (~1-10 eV). Conditions characteristic of WDM are difficult to obtain, and even more difficult to diagnose. One approach to producing WDM uses laser or particle-beam heating of very small quantities of matter on timescales short compared to the subsequent hydrodynamic expansion timescales (isochoric heating), and a vigorous community of researchers is applying these techniques. Pulsed power hydrodynamic techniques, such as large-convergence liner compression of a large volume of modest-density, low-temperature plasma to densities approaching solid density, or multiple shock compression and heating of normal-density material between a massive, high-density, energetic liner and a high-density central 'anvil', are possible ways to reach the relevant conditions. Another avenue to WDM conditions is through the explosion and subsequent expansion of a conductor (wire) against a high-pressure (density) gas background (isobaric expansion). However, both techniques demand substantial energy, proper power conditioning and delivery, and an understanding of the hydrodynamic and instability processes that limit each technique. In this paper we examine the challenges to pulsed power technology and pulsed power systems presented by the opportunity to explore this interesting region of parameter space.

  11. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and thus is suitable for the fluorescence readout mode. A 2-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  12. Fast, efficient lossless data compression

    NASA Technical Reports Server (NTRS)

    Ross, Douglas

    1991-01-01

    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  13. Fast, efficient lossless data compression

    NASA Technical Reports Server (NTRS)

    Ross, Douglas

    1991-01-01

    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  14. Improved Analytical Shaped Charge Code: BASC

    DTIC Science & Technology

    1981-03-01

    Comparison between BASC Code and Experimental Results for Scaled, Heavily-Confined Shaped Charge (Reference 10). Jet and Collapse Velocities vs % of...hydrodynamic computer codes that have been applied to shaped-charge problems. Although these codes are adaptable to various geometrical...calculation of jet tip or lead pellet behavior and confined charges. Extensive semi-empirical functions regarding liner acceleration and confinement

  15. Hydrodynamic Elastic Magneto Plastic

    SciTech Connect

    Wilkins, M. L.; Levatin, J. A.

    1985-02-01

    The HEMP code solves the conservation equations of two-dimensional elastic-plastic flow, in plane x-y coordinates or in cylindrical symmetry around the x-axis. Provisions for calculation of fixed boundaries, free surfaces, pistons, and boundary slide planes have been included, along with other special conditions.

  16. Hydrodynamical noise and Gubser flow

    NASA Astrophysics Data System (ADS)

    Yan, Li; Grönqvist, Hanna

    2016-03-01

    Hydrodynamical noise is introduced on top of Gubser's analytical solution to viscous hydrodynamics. For ultra-central Pb-Pb, p-Pb, and p-p collision events at LHC energies, we solve the evolution of the noisy fluid systems and calculate the radial flow velocity correlations. We show that the absolute amplitude of the hydrodynamical noise is determined by the multiplicity of the collision event. The evolution of azimuthal anisotropies, which is related to the generation of harmonic flow, receives finite enhancements from hydrodynamical noise. Although it is strongest in the p-p systems, the effect of hydrodynamical noise on flow harmonics is found to be negligible, especially in the ultra-central Pb-Pb collisions. For the short-range correlations, hydrodynamical noise contributes to the formation of a near-side peak on top of the correlation structure originating from initial-state fluctuations. The shape of the peak is affected by the strength of the hydrodynamical noise, whose height and width grow from the Pb-Pb system to the p-Pb and p-p systems.

  17. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794

  18. Hydrodynamic body shape analysis and their impact on swimming performance.

    PubMed

    Li, Tian-Zeng; Zhan, Jie-Min

    2015-01-01

    This study presents the hydrodynamic characteristics of different adult male swimmers' body shapes using a computational fluid dynamics method. The simulation is carried out with the Fluent CFD code, solving the 3D incompressible Navier-Stokes equations with the RNG k-ε turbulence closure. The water free surface is captured by the volume of fluid (VOF) method. A set of full-body models, based on the anthropometric characteristics of the most common male swimmers, is created with the Computer Aided Industrial Design (CAID) software Rhinoceros. Analysis of the CFD results revealed that a swimmer's body shape has a noticeable effect on hydrodynamic performance, which explains why a male swimmer with an inverted-triangle body shape has good hydrodynamic characteristics for competitive swimming.

  19. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    SciTech Connect

    Motoyama, Kazutaka; Morata, Oscar; Hasegawa, Tatsuhiko; Shang, Hsien; Krasnopolsky, Ruben

    2015-07-20

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H{sub 2} are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.

  20. Recent development of hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Hirano, Tetsufumi

    2014-09-01

    In this talk, I give an overview of recent developments in hydrodynamic modeling of high-energy nuclear collisions. First, I briefly discuss the current situation of hydrodynamic modeling by showing results from the integrated dynamical approach, in which Monte Carlo calculation of initial conditions, quark-gluon fluid dynamics, and hadronic cascading are combined. In particular, I focus on rescattering effects of strange hadrons on final observables. Next, I highlight three topics of recent development in hydrodynamic modeling: (1) medium response to jet propagation in di-jet asymmetric events, (2) causal hydrodynamic fluctuation and its application to Bjorken expansion, and (3) the chiral magnetic wave from anomalous hydrodynamic simulations. (1) Recent CMS data suggest the existence of a QGP response to the propagation of jets. To investigate this phenomenon, we solve hydrodynamic equations with a source term that describes the deposition of energy and momentum from jets. We find that a large number of low-momentum particles are emitted at large angles from the jet axis. This gives a novel interpretation of the CMS data. (2) It has been claimed that the matter created even in p-p/p-A collisions may behave like a fluid. However, fluctuation effects would be important in such a small system. We formulate relativistic fluctuating hydrodynamics and apply it to Bjorken expansion. We find that the final multiplicity fluctuates around the mean value even if the initial condition is fixed. This effect is relatively important in peripheral A-A collisions and in p-p/p-A collisions. (3) Anomalous transport in the quark-gluon fluid is predicted when an extremely strong magnetic field is applied. We investigate this possibility by solving anomalous hydrodynamic equations. We find that a difference in the elliptic flow parameter between positive and negative particles appears due to the chiral magnetic wave. Finally, I provide a personal perspective on hydrodynamic modeling of high-energy nuclear collisions.

  1. Modified JPEG Huffman coding.

    PubMed

    Lakhani, Gopal

    2003-01-01

    It is a well observed characteristic that when a DCT block is traversed in the zigzag order, the AC coefficients generally decrease in size and the run-length of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, which all move the end-of-block marker up in the middle of DCT block and use it to indicate the band boundaries. Experimental results are presented to compare reduction in the code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. The average code reduction to the total image code size of one of our methods is 4%. Our methods can also be used for progressive image transmission and hence, experimental results are also given to compare them with two-, three-, and four-band implementations of the JPEG spectral selection method.
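    The band idea can be made concrete: generate the zigzag scan order of a DCT block and partition it at chosen positions, each band then getting its own code table. The band boundaries below are hypothetical, not the article's tuned values:

    ```python
    def zigzag_order(n=8):
        """Zigzag scan order of an n x n DCT block: diagonals of increasing i+j,
        with alternating direction as in the JPEG baseline scan."""
        cells = [(i, j) for i in range(n) for j in range(n)]
        return sorted(cells, key=lambda c: (c[0] + c[1],
                                            c[0] if (c[0] + c[1]) % 2 else -c[0]))

    def split_bands(order, bounds=(1, 6, 15)):
        """Partition the scan into bands; each band would use a separate code table,
        with the end-of-block marker doubling as the band boundary."""
        cuts = list(bounds) + [len(order)]
        bands, start = [], 0
        for b in cuts:
            bands.append(order[start:b])
            start = b
        return bands
    ```

    Because coefficient magnitudes and zero-run statistics differ systematically between early and late zigzag positions, per-band tables can shave a few percent off the total code size, consistent with the 4% average reduction reported.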

  2. Constraining relativistic viscous hydrodynamical evolution

    SciTech Connect

    Martinez, Mauricio; Strickland, Michael

    2009-04-15

    We show that by requiring positivity of the longitudinal pressure it is possible to constrain the initial conditions one can use in second-order viscous hydrodynamical simulations of ultrarelativistic heavy-ion collisions. We demonstrate this explicitly for (0+1)-dimensional viscous hydrodynamics and discuss how the constraint extends to higher dimensions. Additionally, we present an analytic approximation to the solution of (0+1)-dimensional second-order viscous hydrodynamical evolution equations appropriate to describe the evolution of matter in an ultrarelativistic heavy-ion collision.
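    The (0+1)-dimensional setting can be explored with a schematic Euler integration of Israel-Stewart-type equations for the Bjorken energy density ε and shear Φ, checking the longitudinal pressure P_L = p - Φ along the way. The constant transport coefficients `eta` and `tau_pi` are simplifying assumptions for illustration, not the paper's parametrization:

    ```python
    def evolve(eps, phi, tau=0.5, tau_end=3.0, dt=1e-4, eta=0.01, tau_pi=0.5):
        """Integrate schematic (0+1)-d second-order viscous hydro with a conformal
        equation of state p = eps/3. Returns (final proper time, True) if the
        longitudinal pressure p - phi stayed positive, or (time of the first
        violation, False) otherwise."""
        while tau < tau_end:
            p = eps / 3.0
            if p - phi < 0.0:
                return tau, False                   # initial condition disallowed
            d_eps = -(eps + p - phi) / tau          # Bjorken expansion with shear
            d_phi = (-phi / tau_pi                  # relaxation toward Navier-Stokes
                     + 4.0 * eta / (3.0 * tau_pi * tau)
                     - 4.0 * phi / (3.0 * tau))
            eps += d_eps * dt
            phi += d_phi * dt
            tau += dt
        return tau, True
    ```

    An initial shear comparable to or larger than the pressure violates the positivity requirement immediately, which is the sense in which the constraint restricts admissible initial conditions.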

  3. Special Relativistic Hydrodynamics with Gravitation

    NASA Astrophysics Data System (ADS)

    Hwang, Jai-chan; Noh, Hyerim

    2016-12-01

    Special relativistic hydrodynamics with weak gravity has hitherto been unknown in the literature, and whether such an asymmetric combination is possible has been unclear. Here, the hydrodynamic equations with Poisson-type gravity, considering fully relativistic velocity and pressure under the weak-gravity and action-at-a-distance limits, are consistently derived from Einstein's theory of general relativity. The analysis is made in the maximal slicing, where the Poisson equation becomes much simpler than in our previous study in the zero-shear gauge. Also presented are the hydrodynamic equations in the first post-Newtonian approximation, now under the general hypersurface condition. Our formulation includes the anisotropic stress.

  4. Real-Time Data Filtering and Compression in Wide Area Simulation Networks

    DTIC Science & Technology

    1992-10-02

    applicable to any tree-based codes (e.g., Shannon-Fano codes [SHAN49, FANO49], Fibonacci codes [LELE87], Huffman codes [HUFF52], etc.). Descendent...International Conf. on Acoustics, Speech, and Signal Processing, May 1991. [BASS85] Bassiouni, M. "Data compression in scientific and statistical...codes [HUFF52], Shannon-Fano codes [FANO49, SHAN49], Universal codes of Elias [ELIA75], the Fibonacci codes [FRAE85], etc. The code set is

  5. Bounce-free spherical hydrodynamic implosion

    NASA Astrophysics Data System (ADS)

    Kagan, Grigory; Tang, Xian-Zhu; Hsu, Scott C.; Awe, Thomas J.

    2011-10-01

    In a bounce-free spherical hydrodynamic implosion, the post-stagnation hot core plasma does not expand against the imploding flow. A solution family realizing such a regime has been explicitly found. This regime is most naturally applied to, and would most benefit, plasma-liner-driven magneto-inertial fusion (MIF). That is, this version of inertial confinement relies on maintaining the compressed hot spot within the thermonuclear burning condition for as long as possible, rather than on initiating a burn wave. Consequently, in MIF the best-case scenario is that the fuel target persists in the state of maximum compression after reaching stagnation. Plasma-liner-driven MIF also provides substantial freedom in shaping the profiles of the imploding flow (i.e., liner) pressure, density, and fluid velocity. By comparing the fuel disassembly time against that of a stationary imploding-flow case, we find that shaping this flow appropriately is likely to increase the dwell time and fusion gain by a factor of four or more. Moreover, in this newly found regime the shocked region of the liner is at rest; that is, the kinetic energy of the original liner is entirely converted into internal energy. Hence, our result supports the idea of using deuterium-tritium in the inner parts of the liner (the so-called "after-burner"), which upon becoming shocked will also burn, further increasing the gain. The work is supported by LANL LDRD.

  6. Compression of digital hologram sequences using MPEG-4

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Naughton, Thomas J.

    2009-05-01

    Recording and real-time reconstruction of digital hologram sequences have recently become feasible. The amount of information that such hologram sequences contain results in voluminous data files, rendering their storage and transmission impractical. As a result, compression of digital hologram sequences is of utmost importance for practical applications. In the absence of a specific hologram sequence compression technique, a first concern is how a high-performance conventional video compression technique would perform. Such a technique would not be optimized for hologram sequences but would provide a threshold that all hologram sequence compression techniques should reach. In this paper, the use of the MPEG-4 part 2 video coding algorithm for the compression of hologram sequences is investigated. Although the algorithm was originally developed for the compression of ordinary video, we apply it to digital hologram sequences and investigate its performance. For this, appropriate digital hologram sequences are used to assess how the coding algorithm affects their information content. In addition, we investigate whether MPEG-4 interframe coding, which aims to achieve compression by exploiting similarities across adjacent frames of the sequence, offers any advantage compared to intraframe coding, where each frame is coded independently. Results show that the MPEG-4 coding algorithm can successfully compress hologram sequences to compression rates of ~20:1 while retaining the reconstruction quality of the hologram sequences.

  7. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
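The core idea above, linear compression of a kernel family into a few "eigenkernels," can be sketched with an SVD. The Gaussian kernel family, sizes, and the m = 4 truncation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

n_kernels, klen, dlen = 50, 16, 256
# A family of similar, non-orthogonal kernels (Gaussians of varying width),
# standing in for e.g. the similar detector beams mentioned in the abstract.
x = np.arange(klen) - klen / 2
kernels = np.array([np.exp(-x**2 / (2 * (2 + 0.02 * i)**2))
                    for i in range(n_kernels)])

data = rng.standard_normal(dlen)

# Compress the kernel collection: keep the leading m singular vectors as
# eigenkernels; each original kernel becomes a short coefficient vector.
U, s, Vt = np.linalg.svd(kernels, full_matrices=False)
m = 4                               # retained eigenkernels (speed/accuracy trade-off)
coeffs = U[:, :m] * s[:m]           # (n_kernels, m)
eigenkernels = Vt[:m]               # (m, klen)

# Convolve the data once per eigenkernel instead of once per kernel...
eig_conv = np.array([np.convolve(data, ek, mode="same") for ek in eigenkernels])

# ...then "decompress": each kernel's convolution is a linear combination,
# which convolution's linearity makes exact up to the truncation error.
approx = coeffs @ eig_conv          # (n_kernels, dlen)

exact = np.array([np.convolve(data, k, mode="same") for k in kernels])
err = np.max(np.abs(approx - exact)) / np.max(np.abs(exact))
print(err)  # small truncation error with only m convolutions of work
```

Because the smooth kernel family has rapidly decaying singular values, 4 convolutions plus a cheap matrix product replace 50 convolutions, mirroring the orders-of-magnitude savings reported for CMB beam convolution.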

  8. Interframe Adaptive Data Compression Techniques for Images.

    DTIC Science & Technology

    1979-08-01

    Table-of-contents excerpt: 1.3.1 Predictive Coding Techniques; 1.3.2 Transform Coding Techniques; 1.3.3 Hybrid Coding Techniques; 1.4 Research Objectives; 1.5 Description ... 4.2.2 Chemical Plant Images; 4.2.3 X-ray Projection Images; V Interframe Hybrid Coding Schemes; 5.1 Adaptive Interframe Hybrid Coding Scheme; 5.2 Hybrid ... Images; 5.4.2 Chemical Plant Images; 5.4.3 Angiocardiogram Images ... VI Data Compression for Noisy Channels; 6.1 Channel ...

  9. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    SciTech Connect

    Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the limited coverage provided by these four problems and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
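The convergence-rate estimation step can be illustrated generically: run the same problem on two mesh resolutions, compare each numerical solution against the exact one, and take the log-ratio of the errors. The first-order upwind advection scheme below is a stand-in for this procedure, not HIGRAD itself:

```python
import numpy as np

# Advect a smooth profile with first-order upwind on a periodic mesh; the
# exact solution is known, so the error at each resolution is measurable.
def advect_error(nx, c=1.0, t_final=0.25):
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = 1.0 / nx
    dt = 0.5 * dx / c                    # CFL number 0.5
    nsteps = int(round(t_final / dt))
    u = np.sin(2 * np.pi * x)
    for _ in range(nsteps):
        u = u - c * dt / dx * (u - np.roll(u, 1))   # periodic upwind update
    exact = np.sin(2 * np.pi * (x - c * nsteps * dt))
    return np.max(np.abs(u - exact))

e_coarse, e_fine = advect_error(100), advect_error(200)
rate = np.log2(e_coarse / e_fine)        # observed order of accuracy
print(rate)  # should approach the scheme's theoretical order of 1
```

When the observed rate matches the scheme's theoretical order, the discretization is implemented as designed; a mismatch flags a coding mistake or a regime (such as the 'mild' discontinuity above) where the formal order does not hold.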

  10. Engineering Hydrodynamic AUV Hulls

    NASA Astrophysics Data System (ADS)

    Allen, J.

    2016-12-01

    AUV stands for autonomous underwater vehicle. AUVs are used in oceanography and are similar to gliders. MBARI's AUVs, as well as other AUVs, map the ocean floor, which is very important. They also measure physical characteristics of the water, such as temperature and salinity. My science fair project for 4th grade was a STEM activity in which I built and tested 3 different AUV bodies. I wanted to find out which design was the most hydrodynamic. I tested three different lengths of AUV hulls to see which AUV would glide the farthest. The first was 6 inches. The second was 12 inches and the third was 18 inches. I used clay for the nosecone and cut a ruler in two to make the fin. Each AUV used the same nosecone and fin. I tested all three designs in a pool. I used biomimicry to create my hypothesis. When I was researching I found that long slim animals swim fastest. So, my hypothesis was that the longer AUV would glide farthest. In the end I was right. The longer AUV did glide the farthest.

  11. Spin hydrodynamic generation

    NASA Astrophysics Data System (ADS)

    Takahashi, R.; Matsuo, M.; Ono, M.; Harii, K.; Chudo, H.; Okayasu, S.; Ieda, J.; Takahashi, S.; Maekawa, S.; Saitoh, E.

    2016-01-01

    Magnetohydrodynamic generation is the conversion of fluid kinetic energy into electricity. Such conversion, which has been applied to various types of electric power generation, is driven by the Lorentz force acting on charged particles, and thus a magnetic field is necessary. On the other hand, recent studies of spintronics have revealed the similarity between the function of a magnetic field and that of spin-orbit interactions in condensed matter. This suggests the existence of an undiscovered route to realize the conversion of fluid dynamics into electricity without using magnetic fields. Here we show electric voltage generation from fluid dynamics free from magnetic fields; we excited liquid-metal flows in a narrow channel and observed longitudinal voltage generation in the liquid. This voltage has nothing to do with electrification or thermoelectric effects, but turns out to follow a universal scaling rule based on a spin-mediated scenario. The result shows that the observed voltage is caused by spin-current generation from a fluid motion: spin hydrodynamic generation. The observed phenomenon allows us to make mechanical spin-current and electric generators, opening a door to fluid spintronics.

  12. Lotic Water Hydrodynamic Model

    SciTech Connect

    Judi, David Ryan; Tasseff, Byron Alexander

    2015-01-23

    Water-related natural disasters, for example, floods and droughts, are among the most frequent and costly natural hazards, both socially and economically. Many of these floods are a result of excess rainfall collecting in streams and rivers, and subsequently overtopping banks and flowing overland into urban environments. Floods can cause physical damage to critical infrastructure and present health risks through the spread of waterborne diseases. Los Alamos National Laboratory (LANL) has developed Lotic, a state-of-the-art surface water hydrodynamic model, to simulate propagation of flood waves originating from a variety of events. Lotic is a two-dimensional (2D) flood model that has been used primarily for simulations in which overland water flows are characterized by movement in two dimensions, such as flood waves expected from rainfall-runoff events, storm surge, and tsunamis. In 2013, LANL developers enhanced Lotic through several development efforts. These developments included enhancements to the 2D simulation engine, including numerical formulation, computational efficiency developments, and visualization. Stakeholders can use simulation results to estimate infrastructure damage and cascading consequences within other sets of infrastructure, as well as to inform the development of flood mitigation strategies.
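The kind of flood-wave engine described above solves the shallow-water equations on a mesh. Since Lotic itself is not reproduced here, the sketch below shows the idea in 1D with a Lax-Friedrichs step on a dam-break profile; the grid, initial condition, and scheme are all illustrative choices:

```python
import numpy as np

g = 9.81                                  # gravitational acceleration, m/s^2
nx, dx = 200, 1.0
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)   # dam-break depth profile, m
hu = np.zeros(nx)                                  # momentum h*u

def flux(h, hu):
    # Shallow-water fluxes for mass and momentum.
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def step(h, hu, dt):
    q = np.array([h, hu])
    f = flux(h, hu)
    # Lax-Friedrichs update in the interior, copy (outflow) boundaries.
    qn = q.copy()
    qn[:, 1:-1] = (0.5 * (q[:, 2:] + q[:, :-2])
                   - dt / (2 * dx) * (f[:, 2:] - f[:, :-2]))
    qn[:, 0], qn[:, -1] = qn[:, 1], qn[:, -2]
    return qn[0], qn[1]

dt = 0.2 * dx / np.sqrt(g * 2.0)          # conservative CFL estimate
for _ in range(100):
    h, hu = step(h, hu, dt)
print(h.min(), h.max())  # flood wave spreads from the dam front
```

A production model like Lotic does the same conservative update in 2D with better numerics, real terrain, and wetting/drying logic; the sketch only shows the mechanism by which a depth discontinuity propagates as a flood wave.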

  13. Lossless wavelet compression on medical image

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    An increasing amount of medical imagery is created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods both for lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. Recent advances in lossy compression techniques include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit the perfect reconstruction of the original image, but the achievable compression ratios are only of the order 2:1, up to 4:1. In our paper, we use a kind of lifting scheme to generate truly lossless non-linear integer-to-integer wavelet transforms. At the same time, we employ a coding algorithm that produces an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. Therefore, a compression scheme generating an embedded code can meet any target bit rate exactly.
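A minimal sketch shows why lifting-based integer-to-integer transforms are exactly reversible. The one-level Haar lifting (S-transform) below, with made-up pixel values, uses only integer subtraction, addition, and shifts, so the inverse undoes each step exactly; the abstract's actual transform and coder are more elaborate:

```python
import numpy as np

def haar_lifting_forward(x):
    # Split into even/odd samples, then predict and update with integers.
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even                  # predict step: detail coefficients
    s = even + (d >> 1)             # update step: integer approximation
    return s, d

def haar_lifting_inverse(s, d):
    # Undo the lifting steps in reverse order -- exact integer inversion.
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(2 * len(s), dtype=s.dtype)
    x[0::2], x[1::2] = even, odd
    return x

pixels = np.array([12, 14, 200, 202, 50, 48, 7, 9], dtype=np.int64)
s, d = haar_lifting_forward(pixels)
restored = haar_lifting_inverse(s, d)
print(restored)  # identical to the input: perfectly reversible
```

Because every rounding (the `>> 1`) is replayed identically in the inverse, no information is lost, which is the property that makes truly lossless wavelet coding possible.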

  14. Extreme hydrodynamic load calculations for fixed steel structures

    SciTech Connect

    Jong, P.R. de; Vugts, J.; Gudmestad, O.T.

    1996-12-31

    This paper discusses the expected differences between the planned ISO code for design of offshore structures and the present Standard Norwegian Practice (SNP), concerning the extreme hydrodynamic design load calculation for fixed steel space frame structures. Since the ISO code is expected to be similar to the API RP2A LRFD code, the provisions of API RP2A LRFD are used to represent the ISO standard. It should be noted that the new ISO code may include NewWave theory, in addition to the wave theories recommended by the API. Design loads and associated failure probabilities resulting from the application of the code provisions are compared for a typical North Sea structure, the Europipe riser platform 16/11-E.

  15. A New Parallel Code Based on PVM

    NASA Astrophysics Data System (ADS)

    Xu, Guohong

    1994-05-01

    We have developed a new parallel code for solving purely gravitational problems by combining PM methods and TREE methods to achieve both high spatial resolution and high mass resolution. Very preliminary results will be shown to demonstrate the potential accuracy which the new code can reach. As a first application of the code, we tried to calculate the density profile and velocity dispersion of clusters of galaxies. Further work will be done to include hydrodynamics in the code. Very high computational efficiency is achieved by applying PVM (Parallel Virtual Machine) techniques to configure many workstations into a virtual machine.
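The PM half of such a hybrid solver begins by depositing particle masses onto a mesh. As a hedged sketch (1D, periodic, with illustrative names and sizes rather than the paper's implementation), the standard cloud-in-cell (CIC) assignment shares each particle's mass linearly between its two nearest grid points:

```python
import numpy as np

def cic_density(positions, masses, n_cells, box=1.0):
    # Cloud-in-cell deposit on a periodic 1D mesh of cell-centered points.
    rho = np.zeros(n_cells)
    dx = box / n_cells
    u = positions / dx - 0.5            # position in cell units, cell-centered
    left = np.floor(u).astype(int)
    frac = u - left                     # fraction given to the right neighbor
    np.add.at(rho, left % n_cells, masses * (1.0 - frac))
    np.add.at(rho, (left + 1) % n_cells, masses * frac)
    return rho / dx                     # mass per cell -> density

pos = np.array([0.10, 0.52, 0.53])
m = np.ones(3)
rho = cic_density(pos, m, n_cells=10)
print(rho.sum() * 0.1)  # total mass is conserved (about 3.0)
```

The mesh density then feeds an FFT-based Poisson solve for the long-range force, while the TREE part supplies the short-range correction below the mesh scale.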

  16. Reciprocal relations in dissipationless hydrodynamics

    SciTech Connect

    Melnikovsky, L. A.

    2014-12-15

    Hidden symmetry in dissipationless terms of arbitrary hydrodynamics equations is recognized. We demonstrate that all fluxes are generated by a single function and derive conventional Euler equations using the proposed formalism.

  17. Hydrodynamic model for picosecond propagation of laser-created nanoplasmas

    NASA Astrophysics Data System (ADS)

    Saxena, Vikrant; Jurek, Zoltan; Ziaja, Beata; Santra, Robin

    2015-06-01

    The interaction of a free-electron-laser pulse with a moderate- or large-size cluster is known to create a quasi-neutral nanoplasma, which then expands on a hydrodynamic timescale, i.e., > 1 ps. To better understand ion and electron data from experiments on laser-irradiated clusters, one needs to simulate cluster dynamics on such long timescales, for which the molecular dynamics approach becomes inefficient. We therefore propose a two-step Molecular Dynamics-Hydrodynamic scheme. In the first step we use a molecular dynamics code to follow the dynamics of an irradiated cluster until all the photo-excitation and corresponding relaxation processes are finished and a nanoplasma, consisting of ground-state ions and thermalized electrons, is formed. In the second step we perform long-timescale propagation of this nanoplasma with a computationally efficient hydrodynamic approach. In the present paper we examine the feasibility of a hydrodynamic two-fluid approach to follow the expansion of a spherically symmetric nanoplasma, without accounting for impact ionization and three-body recombination processes at this stage. We compare our results with the corresponding molecular dynamics simulations. We show that all relevant information about the nanoplasma propagation can be extracted from hydrodynamic simulations at a significantly lower computational cost compared to a molecular dynamics approach. Finally, we comment on the accuracy and limitations of our present model and discuss possible future developments of the two-step strategy.

  18. Supernova-relevant hydrodynamic instability experiments on the Nova Laser

    SciTech Connect

    Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Wallace, R.; Mangan, R.; Rubenchik, A.; Fryxell, B.A.

    1997-04-18

    Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. The target consists of a two-layer planar package composed of 85 micron Cu backed by 500 micron CH2, having a single mode sinusoidal perturbation at the interface, with wavelength lambda = 200 microns and initial amplitude eta0 = 20 microns. The Nova laser is used to generate a 10-15 Mbar (10-15x10{sup 12} dynes/cm2) shock at the interface, which triggers perturbation growth due to the Richtmyer-Meshkov instability, followed by the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few x10{sup 3} s. The experiment is modeled using the hydrodynamic codes HYADES and CALE, and the supernova code PROMETHEUS. We are designing experiments to test the differences in the growth of 2D vs 3D single mode perturbations; such differences may help explain the high observed velocities of radioactive core material in SN1987A. Results of the experiments and simulations are presented.

  19. Two-dimensional radiation-hydrodynamic calculations for a nominal 1-Mt nuclear explosion near the ground

    SciTech Connect

    Horak, H.G.; Jones, E.M.; Sandford, M.T. II; Whitaker, R.W.; Anderson, R.C.; Kodis, J.W.

    1982-03-01

    The two-dimensional radiation-hydrodynamic code SN-YAQUI was used to calculate the evolution of a hypothetical nuclear fireball of 1-Mt yield at a burst altitude of 500 m. The ground-reflected shock wave interacts strongly with the fireball and induces the early formation of a rapidly rotating ring-shaped vortex. The hydrodynamic and radiation phenomena are discussed.

  20. Compact torus compression of microwaves

    SciTech Connect

    Hewett, D.W.; Langdon, A.B.

    1985-05-17

    The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes, on which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code.